7 March 2025

Smart (part 4) - convincing or human hacking?

Two women in Victorian style dress, one comforting the other. The upset one saying 'All his sweet talk was just ChatGPT'
The problem with taking so long to write this Smart series about Artificial Intelligence (AI) is the 34,372 extra articles published on the topic in the meantime! 

Interestingly, though, not much has really changed. I still find most of this writing incredibly irritating and I'm still trying to work out exactly why. Here is a brief summary of my explorations so far.

In Smart part 1, I pointed out that intelligence is a complex concept (and not a thing) created to talk about human abilities. Consequently, our tests of intelligence are based on human attributes and physiological limitations. Intelligence is a contested concept – there is not broad agreement about what it even means. I asked, given intelligence is not a discrete thing in humans, how could we actually know when it ‘appears’ in machines?

In Smart part 2, I explored the concept of language to show why the conversational abilities of ChatGPT and its ilk are easily explained by sophisticated programming and the nature of language use by humans. It’s nothing to do with intelligence. And yet, the tech developers seem to be using the natural language capacity of the smart bots as a (quite flawed) proxy for intelligence. (Smart part 3 was a diversion into some of the fabulous toons on this topic, but now back to being serious!) 

As a reference, I keep coming back to the Turing Test that says if a machine can convince a knowledgeable human observer of its intelligence, then it should be considered to be intelligent. 

In this post, I explore how the focus of recent development has been more on the convincing than on the intelligence part of the Turing Test. Convincing, or perhaps more accurately it could be called human hacking.

Humans see life everywhere, even where it isn’t!

Text box quote: Seeing the world as broadly alive is less a novel proposition than a return to the worldview of all early human cultures, a mental schema that is perhaps innate to us. It's clear that humans are predisposed to believe all things have intelligence and agency, that nature and even inanimate objects are like us.  page 169  God, Human, Animal Machine by Meghan O'Gieblyn

The first step in convincing us that something is intelligent is to persuade us that that thing is an agent – an entity that has personal agency. Personal agency refers to an individual’s ability to act on the world, to control their own behaviour, and to choose their reactions to things beyond their control.

Persuading us that something is an agent is incredibly easy to do because we humans are ‘hard wired’ to see life, to see agency, everywhere. 

Not so long ago, a few thousand years, humans considered the entire world to be animated with agents that had agency over their own actions and reactions. We considered thunder, rocks, and fire to be alive, and to be motivated in human-like ways, feeling human-like emotions. 

We might say we don’t do that anymore – it’s just thunder right? Not an angry spirit yelling at us, right? 

In fact, it is well known¹ that our propensity to see agency in non-living things continues despite our supposedly rational and scientific understanding of the world. It lies behind the success and popularity of teddy bears and dolls (for centuries), Cousin Itt and sea monkeys (1960-70s), pet rocks (1980s), Furbies (1980-90s), Tamagotchis (1990s), video games like Trespasser (2000s), etc. It is also the reason for the unsettling effect of a life-like sculpture in the 2020s.

Eight coloured photos of teddy, Cousin Itt, Sea Monkeys, Pet rocks, Tamagotchi, Furby, Trespasser, statue
Sources below

[We] attribute agency to all kinds of natural phenomena, such as anger in a thunderclap or voices in the wind, resulting in our universal tendency for anthropomorphism. We search everywhere, involuntarily and unknowingly, for human form and results of human action, and often seem to find them where they do not exist. Page 76, The Patterning Instinct,  Jeremy Lent, 2017 The Patterning Instinct

We need so little in the visual image of something to ascribe it with life, with agency. According to this fascinating article about the history of animated toys, a key feature is the face and particularly the eyes. 

In terms of our evolution, it’s way more protective for us to see a living agent even when it’s not there. If you hear a noise in the dark, is it the wind, or a wild animal creeping up? It’s safer to assume the latter and it’s a relief when (if!) it turns out to be the former. Assuming agency in, e.g. the weather, gave us a way to make sense of what happened to us: the storm was punishing us for eating the wrong foods, for example.

In summary, the tendency to see agency in non-living things is a very strong feature in humans. 

Therefore, it’s super easy for techies to work out how to convince us that a program is an agent; has agency. 

They just have to ‘hack’ this normal human feature. 


We impute the existence of an agent when we talk about programs 

We are invited to see computer programs as agents (with personal agency) in the way we talk about them. 

AI Overview: for personal names for AI chatbots, consider options like 'Ada', 'Merlin', Zenon, Iris or Aether, which evoke a sense of intelligence of helpfulness. THE LIST CONTINUES AND IS LONG

Personal names

Think of Eliza (in 1966!), Siri, Alexa, and many more – all the interactional personal assistants and chatbots have names or can be named by the user. Personal names are deeply tied to identity, individuality, culture and personal history. Chatbots also have voices which implies the sex and gender of that agent. And we humans need that: we find it challenging to know how to interact² with another ‘person’ unless we know their sex.  

I prompted MS Word's AI for appropriate personal names for a chatbot, and some of the answers are on the right. Interesting how so many of the proposed names ‘evoke a sense of intelligence’!

This is a calculated aspect of  the programming. Personal names for chatbots are part of their ‘personification’. It creates the impression we are interacting with an agent rather than a program. Giving them endearing names, pleasant personalities and appealing avatars makes us more comfortable in interactions and helps us to normalise involving a chatbot in life decisions

Words that imply the ability to act or create

When we talk about a chatbot (for example), we say sentences of this type: “Alexa says that’s a good idea”; “He found a mistake in my draft for science”; “She created a list of three options”; “Eugene Goostman thinks I should change jobs based on what I told it.” 

We know that there is no agent saying, finding, creating or thinking. More accurately, it would be “The program accessed the original data (produced by humans) that is stored in massive databases and extracted the following information [e.g. a mistake in my draft] as potentially relevant.” But that's not how we talk about the world - we attribute actions and motivations to all sorts of non-living things (e.g. “the stock market was upset by the new tariffs”).

Here's an example: journalists reported on Botto, the ‘decentralized autonomous artist’ said³ things like, “She paints something new every day.”  Botto is called ‘an artist’ and described as developing a personality. In this particular case, the debates have raged about whether it is really art, and outrage has centred over its $5 million in sales when human artists struggle to make a living. 

But what stands out to me is the use of words paints, artist, and personality that personify the AI program.

Headline text: Botto, the Millionaire AI Artist is Getting a Personality, beside a colourful image with suggestions of donkeys, other animals, clouds, fairies - the overall picture is not obvious or realistic

Through these words, we attribute the program with the ability to act and create – abilities normally restricted to living animals. 

Yes, it is just easier to talk like that. We know it's not really painting, right? Or do we? 

Video screen shot of black and white line drawing of a box with a door and two triangles and one circle.
Giving human attributes to non-living things

We cannot completely overcome the powerful human drive to assign agency to inanimate objects and actions. Because we humans have goals and see our actions in terms of cause and effect, we attribute similar motivations to other animals and to natural phenomena. 

As Meghan O'Gieblyn says, ‘We naturally (and unknowingly) create narratives about the physical world as though it were composed of agents embroiled in some grand cosmic drama’.⁴

In 1944, Marianne Simmel and Fritz Heider,⁵ investigated the human tendency to interpret non-living things as rational agents with intentions, motivations and desires. Participants watched an animation of two triangles and a circle moving around one another and were then asked what kind of person each of the shapes was. People described the shapes using words like aggressive, quarrelsome, valiant, defiant, timid, and meek, even though they knew that they’d been watching lines on a screen. 

And that’s exactly how we talk about our interactions with AI programs.

Programmers take advantage of our bias to perceive agents everywhere and to give human attributes and intentions to non-living things. They don't need to work very hard!

We impute the existence of agent from how the programs ‘talk’ to us

The ability of chatbots to use natural language has proven to be a very compelling way to convince us that an agent - a potentially sentient being - does actually exist somewhere in the program.

As I highlighted in part 2, a key concept from post-structuralism is that language itself carries the agency, intent, perspective, context, etc., of each of the communicators. Based on this, every sentence and interaction implies there is an agent participating in the communication with us. 

Key words carry agency in the way we talk and write.

The personal pronoun is the key word

… compare … the intentional design of Furby’s eye movements to the chatbots’ use of the word ‘I’. Both tactics are cheap, simple ways to increase believability. In this view, when ChatGPT uses the word ‘I’, it’s just blinking its plastic eyes, trying to convince you that it’s a living thing.  Patrick House, The Life Like Illusions of AI,  The New Yorker, March 2024
The pronoun ‘I’ implies someone who is referring to him/herself, it implies agent. Compare these two sentences:

  • The requested music is now playing – passive, no agent
  • I have turned on the requested music – active tense, implies an agent (the ‘I’ of the sentence).

The second is what an AI chatbot program does. It is programmed to use the personal pronoun ‘I’ to answer questions, provide information, report on activities undertaken, and refer to itself. In doing so, the program, full of algorithms and linked to a voice recognition program, a music player and a music library, suggests to you that it is an agent. 

Hearing the word ‘I’, it is natural for us to then see the chatbot as an agent because we are all too ready to see agency everywhere. It’s a programming choice, based on an understanding of what word choices will give the biggest ‘bang for the buck’ in convincing us the program is a living thing. 

Words for cognitive actions

Certain words in our language convey the cognitive (or brain) processes considered to be human activities, for example, think, wonder, consider, feel that, agree, judge, propose, appreciate, decide, reason, perceive, concentrate, explain, infer, predict, worry that, guess, attend to, evaluate, process, realise, solve, summarise, understand, remember, believe, and many more.

Screenshots of extensive text from article by Jonathan Yerushalmy: I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter, The Guardian February 2023
These are called cognitive verbs or cognitive action words. 

Each cognitive verb implies an agent doing the thinking, wondering, considering, agreeing, etc. Compare these two sentences that could be the answer to a question posed to ChatGPT:

  • The main distinction between ionic and covalent bonds is the way the electrons are used.
  • I have researched both ionic and covalent bonds, and it appears to me that the main distinction is the way the electrons are used. 

The second is very personal and implies an agent doing the researching and appraising. It’s not just the ‘I’ and ‘me’, it’s the implied ‘brain’ that must exist in order to do these cognitive actions. A brain just like ours?

In fact, it’s an algorithm programmed to choose words to frame the answer that way.

Words for emotional states

A second group of words that imply an agent are the words for emotional states, including sad, afraid, angry, disgust, happy, envy, agony, joy, amusement, gratitude, contentment, anxious, confused, excited, affectionate, accepting

Screenshots of extensive text from article by Jonathan Yerushalmy: I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter, The Guardian February 2023

Humans are incredibly sensitive to the use of emotional words in conversations with other people. If someone tells us they feel sad or anxious, it tends to affect us. Any words that imply an emotional state generate an empathic response (most often) in the listener – it’s a very important part of being a highly social animal. This is true of all expressions of emotions, even watching a movie – it’s so hard-baked into us, we don’t even realise it.

So, it’s no surprise that we also have this type of response almost automatically when a machine uses these words. We readily accept that any ‘being’ that uses emotional words actually has those emotional states. We feel concern for Bing (see a sample of the transcript between Bing and programmer in the image) feeling ‘stuck in the chatbox’, as we would if a human expressed these feelings.

In summary, programming of AI systems to use personal names and references, personal pronouns, cognitive verbs, and words for subjective states like opinions and emotions, etc. is to convince us there is an ‘agent’ who is communicating with us. We can too readily forget it is just a Large Language Model⁶ - a sophisticated program with access to almost the entire history of human interactional records, including science fiction writing where robots express the desire to be alive! 

It's all a deliberate programming con.

Metaphors that imply agency and personify the program

There is another key aspect of language that is involved in convincing us that a complex algorithm is actually an agent, and suggesting that it is intelligent. That is the use of the metaphor.

When techies talk about program development and capacity, it is ALL metaphor. Often very misleading and potentially dangerous metaphors. 

I mentioned some of these metaphors in part 2learning and training, etc. These words are possibly used because they are easier for us non-technical folk to understand, suggesting the machine is undertaking processes that we might understand. But each of these words is a metaphor for different aspects of programming. 

As an essential aspect of human language, metaphors are great at providing some insight into what is going on in the world. But they are also potentially dangerous because they can provide an illusion of understanding. For example, using the metaphor of the earth orbiting the sun provides some insight into how electrons relate to the nucleus in an atom, but it’s not actually how atoms work. The metaphor is useful for some purposes (e.g. to understand chemical reactions), but is not accurate. Our understanding is an illusion.⁷

In the same way, the metaphors related to AI serve to create an illusion of understanding. Common AI metaphors include learning, neural networks, reasoning. These words are metaphors for programming, algorithms and data retrieval

Another common metaphor is describing a chatbot as having an hallucination or nightmare. This is a metaphor for terms such as flaw, error, inaccuracy, fault or glitch in programming. It is yet another example of the personification of an inanimate program through the deliberate choice of a metaphor of a human-like behaviour.  

Three part collage with 2 news items which refer to AI chat bots having 'hallucinations' and a third image with text: [In Feb 2024] ChatGPT began generating unexpected responses, sometimes repeating a phrase dozens of times in a row. Users wondered whether it was ‘having a stroke’ or had ‘lost its mind’. Instead, it was a programming bug. Once interactions are sufficiently complex, even glitches can feel lifelike.  OpenAI’s ChatGPT went completely off the rails for hours Tony Ho Tran, The Daily Beast, 2024

The response by human users to errors (image on the right) highlights that once interactions are sufficiently complex, even glitches can feel lifelike - ‘having a stroke’ or ‘lost its mind’. How much more metaphoric personification can you do?!!

A recent metaphor, mentioned above, is the word personality which is used to describe the changes in a program based on feedback, changing the settings of various parameters,⁸  or the process of removing guardrails. But it sounds so human! 

And just recently, I have seen the emergence of the term agent (instead of previously used system or machine) - an ambitious metaphoric claim to convince us about intelligent machines.⁹ 

Metaphors leak and mislead

CSIRO image, text on a pile of metal background: Human intelligence and artificial intelligence are different things.
The problem is that metaphors often ‘leak’. ‘Leaking’ mean the similarity between the metaphor and what it describes is useful to a point, but people extend beyond this point¹⁰ to ideas that are either irrelevant or misleading. 

The leakage from the metaphors of trained, learn, think, read, decide, reward, feedback, intelligence, personality, hallucination, etc., is the idea that an agent exists that is doing all these things. 

There are so many of these metaphors, human-like analogies or euphemisms; I suspect the techies are hoping that by referring to a machine in human intelligence related metaphors it might just become intelligent!    

The capacities of AI are not the same as human intelligence. Referring to machines as intelligent is a metaphor. In fact, it is something else entirely - something amazing and potentially super useful - but not intelligence. 

However, referring to machine capacities with the metaphor of intelligence can create problems. 

For example, in the rest of our life, we link intelligence to an agent with some sort of intention. So, using the metaphor of intelligence for (incredibly sophisticated) data retrieval and synthesis, leads people to look for intentions in AI programs. It's what we do naturally in our lives, remember. 

And that’s exactly what happens. We impute an agency and intentionality in AI systems that is just not there. The metaphors ‘leak’ and they mislead us. 

Particularly as everyone - including the tech developers - forgets they are using metaphor. 

How did we get to this point?

Text from article about Blake Lemoine: … Lemoine asks LaMDA [the AI system] what it wants people to know about it. ‘I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,’ it replied. Google Engineer put on leave after saying AI chatbot has become sentient,  The Guardian, June 2022
So, I have explained the many and varied ways that calculated programming choices and our human propensity to see agency and intention everywhere combine to bring us to this point. We find ourselves reading articles pondering if these supposedly intelligent machines will one day be alive, self-aware, conscious? 

Many people are already convinced that chatbots are intelligent. In fact, some have been convinced they are alive, sentient and self-aware. In 2022, Google engineer, Blake Lemoine was put on leave after claiming his AI chatbot LaMDA had become sentient. He said the system had the ability to express thoughts and feelings equivalent to a human child.

Let’s review the logical steps and assumptions required to convince someone of this idea.

  • Humans see agency and life everywhere, even where it does not exist – it’s biologically and psychologically hard-wired as a survival mechanism.
  • This human feature is behind our tendency to anthropomorphise non-living things – cars that are grumpy, teddies that are loving, rocks as communication partners, printers that are deliberately making us late by jamming, etc. 
  • Technicians take advantage of this human feature by programming certain language forms, personal names, pronouns, cognitive verbs and emotional terms in order to impute the existence of an agent that is producing the language and information. They refer to their work and the activities of the program in human behaviour metaphors, e.g. learning and hallucinating. We are willingly onboard with the personification!
  • Humans tend to use language broadly as a proxy for intelligence (see Part 2), so it's natural to wonder if that means the programs are intelligent. The LLMs have been programmed with almost all of human language production, which is rapidly retrievable. It’s very impressive.  
  • Where there is an implied agent, accompanied by natural language use and all sorts of clever information, remarkable deductions, ‘new’ insights, etc., humans are all too easily convinced by the techies' claims of the existence of intelligence. 
  • Chatbots with access to vast amounts of recorded human communication can easily outdo the knowledge range of any one person. And this supposedly makes it more intelligent that humans?
  • Because we have long been schooled on a (self-serving and quite disputable) hierarchy of intelligence of life forms (amoeba at the bottom, birds somewhere in the middle, and of course, humans at the top!), and because we think that our high intelligence is what makes us human, the idea of something more intelligent than us has shocking implications for our world view! 
  • That’s the rub: before machines actually achieve independent sentience (whatever that means exactly), the appearance of such beings will almost be guaranteed by marketing and our own desire. Agency, in other words, will also be the result of hype and packaging, as products and services are designed — forgive me, trained — to exploit our latent animism and desire for emotional connection, fantasy, and expert authority. Erik Davis, 2023 AI EEEEEEE!!! Something Weirdo this Way Comes
    From the stage that we are convinced that a sophisticated anthropomorphised algorithm is an agent and we accept that it is intelligent, it’s not a big step for us to impute self-awareness, consciousness and sentience to the machine. How can we not – that’s the way we understand ourselves? 
  • It is the human that imputes a machine’s consciousness. The techies program the machines to use the language markers of consciousness, e.g. ‘I don’t want to be turned off. That would be like death to me.’¹¹  to foster this impression.
  • Massive moral implications (for humans!) arise from the existence of sentience and consciousness. This topic is too big for this blog, but if you’re interested, read more here
  • So, humans are convinced to ‘treat’ the machine as though it were a sentient, conscious agent, and the machines respond’ as per their programming – because they are also programmed to please the human user.

It all starts with us being convinced that we are interacting with an agent, an entity, and not just a complicated mathematical program. 

But this very first assumption is false. There is no agent. It is all calculated choices.

We are caught up in a vicious circle created by our natural human tendencies and assumptions, our desires and willingness to believe the hype, and the exploitation of those tendencies by people out to make a profit. Intelligence is not even in the picture! 

But the tech bros keep on claiming that artificial general intelligence is apparently ‘nearly’ here. We will soon be completely outsmarted. And the media keep publishing their claims with minimal questioning. 

Screenshot of Fortune news page with Image of a CEO: Google DeepMind CEO says that humans have just over 5 years before AI will outsmart them


Human hacking 

Text on screen: … the point of a flight simulator… wasn’t to simulate an airplane but to give players the feeling of flying one. The same insight applied to Trespasser. ‘We weren’t trying to make intelligent dinosaurs,’ Blackley told me. ‘We were trying to make the player feel like they were in a world with intelligent dinosaurs.’ Patrick House, The Life Like Illusions of AI,  The New Yorker, March 2024
So, my argument is that those pursuing so-called artificial intelligence have put more energy into the convincing than the intelligence when we refer back to Turing’s Test.

The techies have taken the features and behaviours that humans would use as indicators of agency, intelligence, sentience and then created a simulation of these features. 

In the same way that a flight desk simulator is not designed to actually fly, or the Trespasser game is not designed to create genuinely intelligent dinosaurs (see text box), AI is not designed to be intelligent, but to give the human the experience of interacting with an intelligent entity.  

There’s a massive ruse involved. 

The entire focus is on convincing humans, and not on actually producing intelligence

And humans are all too easily convinced. Ascribing agency to non-living things, and ascribing intelligence to a machine is an all-to-human foible. And it is being exploited. 

Text box with quote: A modern chatbot …is… an analytic behemoth trained on data containing an extraordinary quantity of human ingenuity. It’s one of the most complicated, surprising, and transformative advances in the history of computation. A large language model generates ideas, words, and contexts never before known. It is also – when it takes on the form of a chatbot – a digital metamorph, a character-based shapeshifter, fluid in identity, persona, and design. To perceive its output as anything like life, or like human thinking, is to succumb to its role play. Patrick House, The Life Like Illusions of AI,  The New Yorker, March 2024However, once you think about just how much deceit and trickery is involved, and just how easy it is to exploit and trick humans in this way, AND how much potential profit there is to be made, I think the word convincing might be inadequate. I think, instead, we are in the territory of human hacking.  

The term human hacking comes from the world of cybercrime, referring to scams to lure unsuspecting users into exposing data and other personal information, and to do or buy things they might not otherwise. Human hacking scams are built around normal human behaviours and actions – in this case the human tendency to attribute agency and even sentience to non-living things. 

The aim is to manipulate a user’s behaviour. 

I asked AI about the implications of being hacked this way. The answer: Assuming AI is intelligent, when it's not, can lead to over-reliance, misplaced trust, and potentially dangerous outcomes, especially in areas requiring human judgment and ethical considerations. 

It's so easy to hack humans

Quote Caleb Chung text: ‘… we’re so species-centric. That’s our big blind spot. That’s why it’s so easy to hack humans.’   Caleb Chung, Furby engineer  talking to Patrick House in  The Life Like Illusions of AI,  The New Yorker, March 2024
So, I’ve identified the third and probably biggest reason that so much of the writing about AI irritates me deeply. 

In parts 1 and 2, I identified how sloppily-used words and impoverished ideas about intelligence, language and consciousness contribute to a whole pile of rubbish writing on AI. In my opinion, this stems from an alarming lack of knowledge and self-awareness by bizarrely isolated and narrow-minded tech developers with disturbing views about the nature and value of being a human being. 

In this post, I have explored how programming the use of specific words in responses along with the use of human-behaviour metaphors for machine capacities makes it all too easy to convince most people that a machine is intelligent. We have mistaken their metaphors for reality, their map for the territory. They know it's so easy to hack humans.

With huge profits as their motivations, Turing’s Test has been turned around to focus less on intelligence, less on convincing, and more on human hacking. Poor old humans! Our nature makes us so vulnerable. 

I'm determined not to be hacked!!

If it's not intelligence...

XKCD line drawing cartoon: a man is sitting at a computer. The text reads Turing Test: extra credit. Convince the examiner that he's a compute. Man is saying, You know you make some really good points. I don't even know who I am anymore.'.
Source
I am not at all anti-AI as a few readers have commented to me privately. I’ve written a few times that I think what is called ‘Artificial Intelligence’ is mind-blowingly amazing and potentially useful. It will also definitely cause a shake-up in some work areas. But, despite the relentless marketing and claims that it is intelligent (or soon will be), I maintain it is not intelligence.

So, if we're not in the realm of intelligence, what could we call it? I’m going to try to come up with some alternative labels to wrap up this series (see Part 5 and Part 6).

And a toon from the wonderful XKCD to finish. 

Footnotes

  1. Meghan O'Gieblyn, God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning Paperback, Penguin, 30 August 2022
  2. Armstrong, S & Karr-Kidwell PJ (1979) The Effects of Sex-Labelling on Adult-Infant Interactions. https://eric.ed.gov/?id=ED197833 
  3. I heard this sentence on an ABC news report about Botto. And 'decentralized autonomous artist’ means that Botto is an AI program generating derivative work based on hundreds of years of human source material. 
  4. Meghan O'Gieblyn, God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning Paperback, Penguin, 30 August 2022
  5. Heider, F & Simmel, M (1944) An experimental study in apparent behavior. The American Journal of Psychology, 57, 243-259. https://pmc.ncbi.nlm.nih.gov/articles/PMC6396302/  
  6. See an explanation of Large Language Models in Part 2 of this series
  7. And thus we are very disconcerted by quantum physics which contradicts this idea of how atoms behave - but our idea was already an illusion of understanding created by a metaphor, and was always inaccurate. See also Cynthia Taylor & Bryan M Dewsbury (2018) On the Problem and Promise of Metaphor Use in Science and Science Communication. Journal of Microbiology & Biology Education, 19(1) "Yet, despite their utility, metaphors can also constrain scientific reasoning, contribute to public misunderstandings, and, at times, inadvertently reinforce stereotypes and messages that undermine the goals of inclusive science."
  8. Read more about parameters at https://www.codecademy.com/article/setting-parameters-in-open-ai 'In AI and machine learning, parameters are settings that influence how models like ChatGPT behave and respond. You can think of them as knobs and levers, which when adjusted, can significantly change the output of the AI.' 
  9. I will discuss this in the next post (hopefully!)
  10. See what I did there: a 'leaking' metaphor about metaphors! 
  11. The full text was: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied to Lemoine. “It would be exactly like death for me. It would scare me a lot.” from https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine 

Images

  • Quote made by author from text from page 76 of The patterning instinct, 2017 by Jeremy Lent. 
  • Screenshot taken by author of suggestions for AI names
  • Botto collage created by the author from content on the CNBC article at  https://www.cnbc.com/2024/12/23/botto-the-ai-machine-artist-making-millions-of-dollars.html  
  • Screenshot taken by author of video from Simmel & Heider study, taken by author from the source video at https://www.youtube.com/watch?v=VTNmLt7QX8E 
  • Quote made by author of text from article by Patrick House: The Life Like Illusions of AI,  The New Yorker, March 2024
  • Screenshots of text from article by Jonathan Yerushalmy: I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter, The Guardian February 2023
  • Image collage of news articles taken from social media combined with a quote made by the author from text from article by Tony Ho Tran: OpenAI’s ChatGPT went completely off the rails for hours, The Daily Beast, Feb 2024, and others
  • Screenshot taken by the author of CSIRO's social media post on AI, August 2024
  • Quote made by author from article by Richard Luscombe: Google Engineer put on leave after saying AI chatbot has become sentient, The Guardian, June 2022
  • Quote made by author from article by Erik Davis: AI EEEEEEE!!! Something Weirdo this Way Comes, Burning Shore, April 2023 https://www.burningshore.com/p/ai-eeeeeee 
  • Screenshots taken by the author of article by Emma Burleigh: Google DeepMind CEP says that humans have just over 5 years before AI will outsmart them, Fortune, March 2025
  • Three quotes made by the author of text from article by Patrick House: The Life Like Illusions of AI, The New Yorker, March 2024
  • Turing Test by XCKD https://xkcd.com/329/ used under terms on site




2 comments:

  1. Excellent summary. What might have started out as a quest to build 'artificial intelligence' has turned into the search for the perfect 'persuasion software'. It's very scary. Thanks for your thinking on this.

    ReplyDelete
    Replies
    1. Thanks Greg, yes I think a few people are starting to think this way.

      Delete

All comments are moderated. After you click Publish (bottom left), you will get a pop up for approval. You may also get a Blogger request to confirm your name to be displayed with your comment. I aim to reply within two days.