2 May 2025

Smart (part 6) - a world view we don't share

Photo of a cup with text The Future is, then the final word covered by a discount sticker that reads $2
This is the sixth and final post in the Smart series where I have been exploring why so much of the writing about ‘artificial intelligence’ causes me such irritation. I’ve identified so many wordly issues with key terms like intelligence, language, agency, learning, personality, decision-making, etc. You can read previous posts here: 

This post will draw on all this previous content. 

Okay, it’s definitely time to stop adding to the plethora of writing about so-called ‘AI’.

But before I do, I must spend some time exploring some better names, more accurate names, less misleading names. Alternative names that can help us be smart about this amazing technology. Alternative names that might help us avoid selling off our future cheaply.

Let’s narrow it down 

To begin, a reminder from Part 1: very different types of programs are collectively called ‘artificial intelligence’ (AI). The most important distinction to be aware of is between ‘narrow’ and ‘general’ artificial intelligence.

Text box with quote: Meta’s chief AI scientist, Yann LeCun, responding to a claim from the xAI founder, Elon Musk. “It’s just not happening. We have systems that manipulate language, and fool us into thinking that they are smart, but cannot understand the world.”
Artificial Narrow Intelligence (ANI) focuses on a specific and narrow capacity. It includes pattern matching software, coding software, pattern extrapolation software, heuristic decision-making software, and prediction software. Many of these are incredibly exciting and potentially helpful. Many of them are potentially disruptive and destructive of things that humans care about.

What is called ANI is nothing like human-like intelligence¹. Despite all sorts of risks, likely disruptions, and things to protect ourselves from, I think that ANI has enormous potential for good for humanity (in fact, we already use it every day). 

The creation of Artificial General Intelligence (AGI), on the other hand, is a fantasy project of those who want to ‘play god’; a fantasy of complete power. The enduring quest to ‘create life’ represents a human flaw, a dissatisfaction or discomfort with being human as we are. As I mentioned in part 5, we can trace the quest for AGI to a certain type of person (most often a man) who read their youthful science fiction novels at only the surface level.²

What has been hyped as AGI is also not intelligence.

All the AI that you may have experienced and read about is ‘Artificial Narrow Intelligence’ – even Large Language Models (LLMs, see part 2) fit into this category.

I think that better names – more accurate names than AI – could stem from the nature of that narrowness.

Throwing shade at ‘artificial intelligence’ is not renaming it 

Image of Sam Altman with tears drawn onto his image and headlines about the theft of intellectual property by the Chinese company DeepSeek
Noam Chomsky calls LLMs plagiarism software. Every day, recorded communication, including this writing, is being sucked up into their training systems. It is taken without consent. It is theft.

Laughably, the big US tech companies complained recently about their thieved material being stolen in turn by the Chinese creators of DeepSeek. All of our interactions with AI continue to be taken without our consent and used to ‘train’ AI further.

In a similar vein, I have seen art generation software (e.g. CanvaAI, Artbreeder, Dall-E with ChatGPT, Runway) described as derivative software. This makes sense to me in the common meaning of derivative, especially when I see its derivative ‘art work’ (not in the specific mathematical sense of derivative used in computer science).

I understand the impulse to insult and throw shade at the companies and technology visiting massive injustices on anyone with content in the public sphere. I understand the sense of powerlessness and dismay when your hard work is taken and changed by a machine – for what?! – as was Jingna Zhang's creation in the image below. But I'm not looking for critical labels.

Image of Facebook post by Jingna Zhang expressing her dismay and disgust at the use of her image of a young woman with head down and eyes closed and covered in petals, being run through AI to alter it in several ways, especially in making the woman face camera and open her eyes. The artist asks what is the whole point of this??

So, names like plagiarism software or derivative software label HOW the software achieves its outputs/products – plagiarising or deriving – but not what the software's function IS.

I’m looking for some alternative names that pick out what these programs ARE (given they are not intelligent). 

Alternative names for ‘artificial intelligence’  

Here’s what a few people have suggested. 

In his podcast The Ezra Klein Show on 19 Mar 2023, Ezra Klein suggested that rather than saying ‘AI is smarter than humans’, we could just say that AI (i.e. ANI) is ‘better than humans at doing x’.

We’re used to thinking like this: mechanical hoists are better than humans at lifting heavy weights; analytical software is better than humans at finding patterns in data; flight simulators are better than humans at highlighting the gaps in flight safety preparation. 

In fact, programming ‘ANI’ for just one narrow and specific function is the reason it has been so successful, e.g. AlphaFold.  

So why not call them by that specific function: Pattern Discernment software, Pattern Matching software, or Enhanced Coding software? Very useful for all sorts of things! Many of the successful and exciting programs could be called Accentuated Pattern Identification and Extrapolation Software (APIES!). The capacities of some software in this field are amazing!!

My favourite project using ANI pattern matching software has to be Happywhale.³ The ANI matches the unique pattern on each whale's fluke to track their travels across the ocean, without the need for tagging. There are heaps of these types of projects. (For the curious, a toy sketch of this kind of matching follows the image below.)

Image collation of four photos of whale flukes mainly in Antarctic waters
Images taken from Happywhale Facebook page
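
For readers who like the nuts and bolts, here is a minimal, hypothetical sketch in Python of this style of pattern matching. It assumes each fluke photo has already been reduced to a numeric feature vector (in real systems an image model does that step), and every name and number here is invented; Happywhale's actual pipeline is far more sophisticated.

```python
import math

def cosine_similarity(a, b):
    # How alike two feature vectors are: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_fluke(new_fluke, catalogue, threshold=0.95):
    # Return the ID of the best-matching catalogued whale, or None if no
    # known fluke is similar enough (i.e. possibly a whale never seen before).
    best_id, best_score = None, threshold
    for whale_id, features in catalogue.items():
        score = cosine_similarity(new_fluke, features)
        if score > best_score:
            best_id, best_score = whale_id, score
    return best_id

# Invented catalogue: in practice the vectors would come from an image model.
catalogue = {"whale_A": [0.90, 0.10, 0.40], "whale_B": [0.20, 0.80, 0.50]}
print(match_fluke([0.88, 0.12, 0.41], catalogue))  # -> whale_A
```

No intelligence required: the program is doing one narrow thing – comparing numbers – extremely well and at scale.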

A reader mentioned hearing the term heuristic programs. Heuristic programs use ‘recipes’ for solving data-processing and decision-making problems. In the past, these ‘recipes’ would have been thought out and encoded by human programmers. The code runs along the lines of: if [this] = x, then complete action y. These days, the heuristics themselves are ‘extracted’ by the program from the many⁴ decision-making examples in the LLM and the decision-making databases it uses.
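
To make the ‘recipe’ idea concrete, here is a toy hand-coded heuristic in Python, in that old if-this-then-that style. The rule and the numbers are invented purely for illustration – and notice how the shortcut bakes the programmer's assumptions (and biases) straight into the decision, which is exactly the problem discussed next.

```python
def loan_decision(income, debt):
    # A hand-coded heuristic: if [this] = x, then complete action y.
    # The rule IS the programmer's shortcut - biases and all.
    if debt == 0:
        return "approve"            # no debt at all, then approve
    if income / debt >= 3:          # 'comfortable' income-to-debt ratio
        return "approve"
    return "refer to a human"       # the recipe gives up

print(loan_decision(income=90_000, debt=20_000))  # -> approve (ratio 4.5)
```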

I’ve used this word before: heuristics are something that humans use all the time in their thinking – they are shortcuts and recipes for decision making.⁵ ‘Training’ programs with existing human heuristics for decision-making does make functioning more efficient, but these decisions will suffer from the inaccuracies and biases inherent in the shortcut, and they have even more chance of making errors than humans do! (Quite a lot, actually!) So, they haven’t been as successful as the pattern matching programs.

Text box with quote from Jaron Lanier: The most pragmatic position is to think of A.I. as a tool, not a creature, an innovative form of social collaboration.
A name I quite like is dialogue simulators for LLMs. This name was suggested about a month after the Smallville experiment in 2023 (which I covered in part 4). A group of researchers were very concerned about the dangers inherent in the human tendency to anthropomorphise the AI chatbot agents. They suggested the term dialogue simulators because the agents were ‘capable of role-playing an infinity of characters’ and could simulate a multitude of humanlike personas at the same time with multiple human users.

Possibly my favourite alternative name for LLM-based AI comes from Jaron Lanier: social collaboration tool. Lanier is an insider in the tech world who writes that what is being developed is nothing like intelligence (and, like me, he questions what the techies even mean by that term!). He writes:

“The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating - but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.”

A tool for “illuminating previously hidden concordances between human creations” – I love that. I guess a concordance could be thought of as a pattern that has meaning – something meaningful to people. How about Concordance Illumination software? 

So people have been saying for some time now that intelligence is the wrong word; as an insider, Jaron Lanier even said he thinks the use of the word intelligence is dangerous. But, for some reason, this term is persisting.

I wonder why. These (wonderful and useful) projects are not dealing in any realm of what we would call intelligence. Perhaps it’s easier to get funding for research projects if the word AI is included. Maybe the word is persisting because of marketing departments (as has happened all along as things got ‘smart’), or maybe it is being driven by ego, misguidedness, pigheadedness. 

Or could it be something else? 

An intelligence fetish

Text box with quote by Sam Altman: We are already well on our way to an inevitable and literal merging with machines. ‘We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.’
I think the answer lies with the ideas and values of the people⁶ driving the AI race.

Erik Davis (a tech reviewer) says that the culture of those developing AI is weirder than most folk can imagine. This includes their rampant drug use, a whole pile of nutty ideas about the Singularity, their desire for immortality, and their willingness to embrace ‘human hacking’ to achieve their aims (see part 4). But beyond this, Davis says the culture and the philosophy of Silicon Valley sits somewhere between merciless utilitarian rationalism and hedonistic libertarianism, both deeply flawed and de-humanising belief systems⁷.

They think they alone know what the ‘truth’ is. They consider many normal human attributes to be weaknesses or failings. As Elon Musk recently said, "The fundamental weakness of Western culture is empathy”. They cannot abide illness or disability.

They think they know what humans want and how humans think. 

Text box with quote from Erik Davis: ...the computational bias of computer engineering has bloomed into a totalizing psychology. In other words, our brains are just running algorithms, making statistical guesses, and generating predictive processes that compose our reality almost entirely from the inside...
It is their view of humanity that is the most concerning. ‘AI’ development is intricately bound up with the question of what it means to be human (and thus, for them, what it means to be transhuman or ‘post-human’). They base their software development on a simplified, dumbed-down and narrow view of humanity, what Erik Davis refers to as a single totalizing⁸ psychology. One might even call it dehumanising.

But behind all this, I think there is something more.

They have strong views on how human society SHOULD function. 

Those leading the ‘AI’ race are open about their belief that they – the ‘smartest’ people, the cognitive elite – should be making all decisions about society. They have a clear (and to me quite disturbing) view of humanity and the ideal society.

And in their view, the ‘smartest’ people have the highest intelligence scores. And more intelligence, with a bigger number, is even better. And super-human intelligence is as good as it gets.

They have what I call an intelligence fetish. 

Look at the way these ‘high IQ, high value individuals’ and Donald Trump, a major facilitator in their quest, use intelligence as a compliment and an insult.⁹

Collation of 4 images: 1. FB post of Elon Musk saying Instagram users have an IQ of less than 100; 2. X post by Donald J Trump with text: Sorry losers and haters, but my I.Q. is one of the highest - and you all know it! Please don't feel so stupid or insecure, it's not your fault; 3. FB post with politics headline: Trump offers mixed praise on Elon Musk - 'Seriously high IQ individual' who has his faults; 4. Instagram post of Donald Trump and Jerome Powell in two separate scenes, with text over, as words from DT: 'Mickey Mouse is smarter than him...'

Image of Stephen Hawking in his wheelchair with text over the image, People who boast about their IQ are losers.
We find ourselves at the logical conclusion of a mistaken idea of what intelligence is, based on the flawed assumption that intelligence is a ‘thing’, when in fact it is an idea constructed by humans for specific purposes (see part 1). Way back in 1981, Stephen Jay Gould predicted outcomes of this nature in his book The Mismeasure of Man. Combine this flawed assumption with the anti-human impacts of an extractivist capitalism – which is making those developing AI very rich – and we have something dangerous and potentially disastrous in the making.  

Without being open about it, they are playing the eugenics game, based on the idea that some people are better and of more value than other lesser humans. And no surprise: THEY are better, the rest of us are lesser. (Of course, not all of those with high IQs think this!)

It’s not just about the known and likely risks of AI itself; it’s about leaving a bunch of egotistical, narrow-minded, out-of-touch intelligence-fetishists to control the development and application of something that will affect all of us.

We need to take the wind out of their sales! (Yes, I know it's sails, lol.) 

How would we know if a machine was super-intelligent? 

Text box with quote from Hern: Superhuman artificial intelligence that is smarter than anyone on Earth could exist next year, Elon Musk has said, unless the sector’s power and computing demands become unsustainable before then.
As a thought experiment, imagine, just for a moment, that all this programming and engineering were on the verge of creating genuine artificial general intelligence – at least of human capacity, and then greater than human intelligence: superintelligence, as they claim.

How would we actually know it is intelligent? 

Way back in part 1, I explored the complex and contested concept that we call human intelligence, and the problems with using formal intelligence tests to measure and ‘determine’ people's intelligence. As I pointed out, these tests have many problems, but the most relevant aspect for this discussion is that their design and use are derived from known and normal human physiological limitations – for example, how many digits (numbers) people can recall.

The Intelligence Quotient (IQ – the number generated by the test) is a very narrow and contrived measure. Various studies have also shown a correlation between IQ and socioeconomic status. This suggests that most IQ tests really measure one's privilege – and, the way the tech-bros see it, also give them a way to justify that privilege.

Text box with quote from Terrence J. Sejnowski: What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews, we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs.
So, because of their design, we can’t use IQ tests to test machines. All we do is set tasks that are hard for humans and say that if a machine can beat a human, it MUST be more intelligent. For example, we assume that only the most intelligent people can win international chess tournaments – so if a machine can beat a human at chess, it MUST be intelligent. Or we assume that language requires intelligence – so if a machine can use language, it MUST be intelligent.

I showed the fundamental flaws in these assumptions and this logic in parts 1 and 2. In fact, it's a ludicrous bit of circular logic. 

Our current ‘AI’ is remarkable and potentially useful, but it isn't intelligence! And we have no way to measure or test the claims from the tech companies.

In reality, there is no external and objective reference for measuring what we call intelligence if it were to ‘appear’ in a machine. So, we have to rely on the claims of the tech companies themselves. We have to rely on the claims from those ‘high IQ individuals’, ‘high value people’ who hold a world view where we are lesser humans.

Image of Sam Altman with text over as his words, GPT-5 will have an IQ of ...
Sound like a good idea to you? Should we trust the people who already think they are better than everyone else and think only people with high IQs are valuable? Should we trust the people who think humans only really want to consume, consume, consume? Should we trust the claims of people who stand to make humungous piles of money from convincing us that machines are intelligent?

I don't think so. 

We need to completely drop the intelligence word. Instead, I keep coming back to Ezra Klein’s suggestion that we should focus on and name the specific capacity of each program or system.  

Risks associated with 'AI' 

There’s a lot written about the possible dangers of ‘AI’ and they are real risks for sure. These include: 

  • ‘AI’ systems making mistakes that have terrible consequences for humans (e.g. errors in AI-generated real estate ads), but with no person as an accountable source for that error!
Collation of three images: 1. Instagram images with text about A man in Missouri who spent 17 months in jail after an AI facial recognition program misidentified him as a suspect. 2. Facebook image from Wired with image of close up of a thumbprint and text This AI Tool helped Convict people of murder. Then someone took a closer look. 3. Facebook post by Jacobin with image of four robots working on computers with text The Fight Against the AI Systems Wrecking Lives
  • ‘AI’ replacing humans in many jobs and rendering large numbers of people without work – with all the economic, social and philosophical implications that would result. 
Collation of three images of Facebook posts: 1. Unilad post: Duolingo CEO slammed after announcing AI will replace contract workers in shocking email to employees, with image of the CEO. 2. McSweeney's (satire) with image of a robot sitting in front of a computer, a woman with coffee cup in the background, while in the foreground a man is carrying out the contents of his desk including a desk plant. Text is Sorry Dan, But It's No Longer Necessary for a Human to Serve as CEO of this Company. 3. The Economist with a luminescent escalator with one person on it, and two at the base, with text Artificial intelligence looks likely to widen social divides.
  • ‘AI’ systems being programmed to persuade people to act against their own best interests; for example, see this incredibly scary article from Wired where personnel from OpenAI openly state: “...the persuasive powers of language models runs deep... As humans we have this 'weakness' that if something communicates with us in natural language [we think of it as if] it is a human.” How is this research into persuasion not illegal?? It's everything bad I said about human hacking in Part 4.
  • ‘AI’ generated content swamping human content, leaving us ‘drowning in a sea of slop’, so that our current social media and other platforms become useless.
Collation of three images of Facebook posts: 1. image of Sam Altman with images of people's lips beside him and text OpenAI Is Testing Its Powers Of Persuasion. 2. Washington Post post with an illustration of a robot with a cable to a human user out of shot, with text AI friendships claim to cure loneliness. Some are ending in suicide. 3. Science: a long bar graph, too small to read, but showing great increase to the right side of image. Text says A new investigation finds AI-generated commentary articles flood the literature with poor-quality publications and cast doubt on the metrics of scholarly output and impact.
  • ‘AI’-based remote warfare and increasing surveillance applications, which have been happening for ages already.
  • Massive use of finite resources for ‘AI’, e.g. water for cooling the machines, and production of massive amounts of carbon¹⁰ due to the high levels of energy required to run the ‘AI’ machines.
Collation of three images of Facebook posts: 1. image of a robot with Thai military uniform and face covered, with text: Thailand Unveils AI Robot Cop with Facial Recognition, 360 Degree View Patrol. 2. Reuters: image of Elon Musk looking down on Donald Trump who is mostly obscured, with text EXCLUSIVE Musk's DOGE using AI to snoop on US federal workers, sources say. 3. Rebecca Solnit post with an image of a nuclear reactor beside a large power line, with text Google will help build seven nuclear reactors to power its AI systems.
  • ‘AI’ being bloody annoying when you try to do an internet search, turning up slop and inaccurate answers, and generating WAY more carbon in the process. (Read this footnote¹¹ about how to avoid that.)

I’m not going to write more on these matters; you can read about them anywhere, as you can tell by the many articles I've snipped from various social media.

They are quite scary possibilities, but I see bigger risks elsewhere. 

Risks from assuming 'AI' programs are intelligent

SMBC toon with 8 panels and extensive text about how AI will annihilate humans through humans getting less and less skilled at making decisions.
Source: SMBC
In part 4, I reported this statement from Word’s AI: Assuming AI is intelligent, when it's not, can lead to over-reliance, misplaced trust, and potentially dangerous outcomes, especially in areas requiring human judgment and ethical considerations.

I don’t disagree, but I actually think it’s much more complicated and nuanced than that.

A major risk I see is focusing on the wrong things in development. If we assume a software program is intelligent, then we focus on the program and its status. If, instead, we do not start with that assumption, we can focus on being clearer about the intent, purpose and collaborative mechanisms of these programs. We focus on the needs and goals of the people developing the machines and the people using them.

Additionally, if we consider these software programs as almost magic objects or as potentially sentient beings, we cannot do good research with them. We stop asking the right questions. We cannot accurately interpret what we learn from our interactions with them.

A second major risk I see is people losing important capacities. Writing teacher John Warner suggests that the adoption of ‘AI’ “is not really solving a problem but is instead being used to paper over a problem in a way that will cause significantly worse problems over time”. For example, he says that while students can now use ‘AI’ to turn in grammatically and formally correct writing (lacking originality or an interesting or personal point of view), they are not going through the act of discovering what they think; they are not learning to communicate what they think through writing. This is a critical human social capacity, normally years in development, that is being circumvented.

Two quote boxes side by side 1. The employment of these [AI] models is deeply antisocial not only because they cut off communication between two humans, but also and more importantly because they cut off communication with and within the self... foreclosing the possibility that we might use language as a means of discovery.  2. When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt human learning, skill development, and productivity.  [This is in effect] ‘falling asleep at the wheel’.


If we accept the claims that ‘AI’ is intelligent, I worry that we may eventually leave it to do all our thinking, and get so out of practice that we need to fully rely on it to communicate with others and to make decisions. 

And that means we risk giving away something core to being fully human.

Risks from leaving 'AI' development to the tech-bros

Text box with quote: AI is going to blast away whole industries, concentrate even greater power in an even smaller group of men, deplete the planet's resources even further, and it's in the hands of reckless, careless people who seem to have no understanding of society. To them, it's just a race; a winner-takes-all competition. This entire AI gold rush is theft. It's not innovation, it's theft.
The ‘AI’ race has been characterised as a “winner-take-all capitalist game already inimical in so many ways to human flourishing, sense-making, and even sanity.” It started with the theft of personal data, then the theft of personal creativity and work, and now we are seeing the theft of human purpose and activity, with no regard for human flourishing.

When I think about the ideology, fantasies and biases of those¹² pushing the quest for ‘artificial general intelligence’ and think about the negative influence of the profit motive, I do not want what they are selling. It is now clear that they have turned it into a quest to create the illusion of what they think intelligence is.  

If we leave the development and implementation to those currently driving this game, the risk is we will have no choice but to accept THEIR impoverished view of humanity. To them, the process of personal creativity is not important, genuine emotion is unnecessarily messy, struggle and overcoming obstacles are a waste of time, imperfection and failure have no value, the process of learning a skill is merely tiresome work, frustration is something to be avoided, efficiency is more important than flexibility, and the gradual growth and mastery of a skill through repeated effort is just so passé.

Collation of three text boxes: 1. Quote box with text AI ‘is fast-tracking the commodification of the human spirit by mechanising the imagination. It renders our participation in the act of creation as valueless and unnecessary.’ 2. Instagram post with long text saying best to avoid AI because otherwise you will never learn to bullshit. And if you can't bullshit, you will not understand when you are being fed bullshit by others. 3. Quote box with text Elon Musk: Sooner or later, yes. Machines will create art, compose music and tell stories - better than us. Keanu Reeves: “But will a machine ever know what it feels like to miss something? Or what it is like to create something beautiful out of a moment of sorrow? Creativity does not arise from calculation but from experience, pain, love, and hope.”

So much of the writing encouraging the uptake of ‘AI’  focuses on the oodles of free time we will have (where have we heard that before?), and that we can finally be liberated from the frustrating limits of the human mind! We can be set free!

Liberated? Free? From… living? Liberated from reasoning, thinking, problem solving, making mistakes, feeling both competent and incompetent, exploring ideas for size and discarding some, trying out bullsh!t on our friends, and more, more and so much more. All the challenging, sometimes frustrating, but also satisfying interactions and experiences that we need and love about being human.

Collation of two images and a quote text box: 1. Facebook post from ABC with a woman talking into a smart phone; text says Why you should consider using AI if you've been avoiding it. 2. Quote text box: ‘Thinking well, and the necessary self-reflection that accompanies and facilitates it, increases our capacity for things like empathy, generosity, and solidarity. It deepens our ability to hold two or more contradictory ideas at once. It connects us with ourselves so that we can connect with others. It makes us more thoroughly part of humanity.’ 3. Poem entitled For a Student who Used AI to Write a Paper by Joseph Fasano.

In summary, underlying the development and hype of supposedly intelligent ‘AI’ is this impoverished idea of what being a human means. Accepting the world view of the ‘AI’ developers reduces us to caricatures and consumers of products.

It’s not a view of humanity that I am willing to accept. 

Let’s just play with the initials… 

Because the term ‘AI’ has run away from any concept of what it really is, it’s not going to be easy to promote alternative and more accurate titles like Social Collaboration software, but we can try. 

Maybe something that also uses the initials A and I might catch on more easily? Here are a few of my ideas that fit with the initials A and I, while naming the function of the software more accurately:

  • Accentuated Information software  
  • Agent Imitation software
  • Artificial Interaction software 
  • Agent Investigation software
  • Alignment Illumination software

Quote text box: Seeing A.I. as a way of working together, rather than as a technology for creating independent, intelligent beings… may make it less mysterious. But that’s good, because mystery only makes mismanagement more likely.

Avoiding the word intelligence is not ‘just semantics’ (which is never an insult for me anyway!). It’s the difference between treating these software programs as mystical, possibly sentient beings and being able to accurately identify the uses, risks and restrictions that the programs require. 

If we label LLM-based ‘AI’ as a social collaboration tool (per Lanier) or agent imitation software, we can counteract, or at least be aware of, our tendency to anthropomorphise.

Whatever we rename ‘AI’, the criteria for alternative names must include describing the technology without ascribing human-like attributes and consciousness to it.  

In this way, the right name will help us resist having our humanity ‘hacked’. The right name stops the sequence of assumptions and logical leaps we take in projecting agency and even consciousness onto a software program (see part 4), particularly one with an anthropomorphised body.

Dropping the word intelligence will help us keep the focus on those who are creating the software. That focus gives us actions to take.

This Wordly Exploration draws to a satisfying but disturbing end

Over 6 posts, I have explored the impoverished ideas about the concepts of intelligence and language and also about humanity that sit behind claims of machine intelligence. 

I’ve also explored how we can be tricked very easily, although we do not like to admit it. Advertisers know this, politicians know this, conmen know this. And now the tech-bros, who know this, are happy to exploit this human vulnerability. For potentially massive profit, they are setting themselves up as the gate-keepers to important technology.

Collation of two quotes and image from FB: 1. Quote text box: But if we don’t attend to it, the people creating the technology will be single-handedly in charge of how it changes our lives. It’s critical that those of us outside the tech industry have a voice in its advances. 2. Trust the Science post with text The underlying purpose of AI is to allow wealth to access skills while removing from the skilled the ability to access wealth. 3. The quest for intelligence has turned into a marketing exercise… and I can’t quite tell if the tech people have worked this out!

This Wordly Exploration has clarified for me what irritates me in the way we talk about ‘AI’. For a start, it’s the cynical hacking of human vulnerabilities, needs and tendencies (e.g. the tendency to see agency in non-living things, see part 4) – with the aim of manipulating us, making a lot of money from our increasing dependency, and even perhaps in the future controlling us. 

But being a wordly explorer, the misuse of the words intelligence, agent, sentience, freedom, etc. bothers me just as much as the end game of manipulation. The abuse of these words reveals the tech developers' impoverished ideas about human flourishing – and that irritates and alarms me. 

Knowing this makes the topic much less irritating. But not a lot less disturbing. 

Image of Ray Bradbury with text “I don't think the robots are taking over. I think the men who play with toys have taken over. And if we don't take the toys out of their hands, we're fools.”

Finally, to be super clear, I don’t think I’m being a luddite and mindlessly pushing back on the potential of what is currently misnamed ‘artificial intelligence’ just because it is new. I am in awe of its capacities – it can translate, visualise, code, debug, analyse, provide feedback and more, faster and way better than I will ever be able to. But it is a human-made tool, not an intelligent being. As with all human tools, this one could be used as a weapon against us. We need to be alert to that.

I’m sure the hype and marketing will continue to claim that ‘AI’ is intelligent despite all evidence. The tech moguls with their intelligence fetishes also have enormous power, and a belief that as ‘the cognitive elite’ they should rule the world. They won't be giving that up easily. 

We need to be really smart about this, and remember at all times that the machines are not smart. In fact, those in the tech world are out of touch with most of humanity, and not that smart about the real world. Those of us outside the tech industry need to contribute to development and applications of ‘AI’. 

I think that my view about the most serious risk of ‘AI’ was expressed way back in 1976 by the science fiction writer, Ray Bradbury. I suspect he would be screaming the alarm from the grave if he could see how we’ve left the toys in the hands of the men who have taken over and who think their profits will come by making us all somehow less human.

Footnotes

  1. While higher abilities at pattern matching, pattern extrapolation, etc., in humans are associated with higher scores on IQ tests, they are NOT equal to intelligence itself.
  2. Being a big sci-fi reader, I know these books are REALLY about human psychology and sociology; they raise philosophical questions. They are not actually meant to be templates for creating life!! For a fascinating analysis of this, try Jill Lepore's podcast The Evening Rocket, which explores the science fiction basis of the ideas of Elon Musk.
  3. I think it’s the humans that are 'happy' about the whales. I know their site filled with whale flukes makes me happy. 
  4. It's squillions!
  5. Of course these ‘recipes’ are simplifications, leaving a lot out, involving generalisations. For example, we might see a person with a certain appearance, and we make immediate decisions based on that little information – take a big shortcut – about that person's attributes or intentions. It has been an evolutionary benefit to allow us to make decisions more quickly to avoid danger, but quite often our shortcuts will be wrong. But seriously, heuristic is not a very easy word to use and explain. And something like enhanced decision-making software gives the software too much credit!
  6. And speaking of generalising, obviously that's what I'm doing here. But you only need to look at their social presentations, their political machinations, their despicable statements about people in private, their inability to answer charges that their software is based on theft of other people's content, their willingness to toy with the general public through their previous digital technologies such as Facebook and Cambridge Analytica etc., and what they do to people who push up against them (see Cadwalladr for examples) to determine that they have a lot of ideas and values that most other people don't find acceptable.
  7. Read my previous post on Freedom if you want to know more about what I think of libertarianism.
  8. A definition of totalizing from Long, C. P. (2003). Totalizing identities. Philosophy & Social Criticism, 29(2), 209–240. "Totalizing, at its simplest, is taking different categories, identities and possibilities and bringing them under one framework so as to make it seem like there is only one thing, the totality, from which there can be no deviance, to which nothing can be different."
  9. I know Trump is not one of the tech-bros, but he is one of their greatest enablers and fans, and he is also one of their greatest puppets. So he is another example of someone who considers he is in the 'cognitive elite' who SHOULD be ruling the world. Here's a telling quote from an article in The Guardian from way back in 2017 when he was open about his intellectual fetish: We know that Trump has a high IQ, possibly even higher than mine if I’m being modest, because he never shuts up about it. In 2013, for example, he tweeted: “I’m a very compassionate person (with a very high IQ) with strong common sense.” He followed these pearls of wisdom with another tweet, a month later, saying: “Sorry losers and haters, but my IQ is one of the highest – and you all know it! Please don’t feel so stupid or insecure, it’s not your fault.” In fact, he has tweeted about his IQ at least 22 times. In October, he also responded to reports that the US secretary of state, Rex Tillerson, had called him a “moron” by telling Forbes that he would beat Tillerson in an IQ test.
  10. From Rebecca Solnit in The Guardian: former Google CEO Eric Schmidt suggested the company should just plunge ahead with AI – which is so huge an energy hog it's prompted a number of tech companies to abandon their climate goals – because maybe AI will somehow, eventually, know how to ‘solve’ climate, saying: ‘I'd rather bet on AI solving the problem than constraining it.’ They want to plunge over ‘the brink’ because they are excited about AI.
  11. By integrating AI into its search function, Google has significantly increased the carbon footprint of a single search, as AI processes require considerably more energy than traditional search queries and thus greater carbon emissions from Google's data centres. To stop AI-based searching, simply type '-ai' at the beginning of your Google search. Or use DuckDuckGo! 
  12. Perhaps some in the tech world continue with noble aims to create something novel and wonderful, but overall, it’s just a race for massive profit, with very little concern for impacts on humans. As an illustration, in a poll of programmers, 10-15% agreed that AI technology could wipe out humans in the foreseeable future. A sizeable proportion think they are building the end of humanity?!!   

