- Smart (part 1) - how did things get to be smart
- Smart (part 2) - a way with words
- Smart (part 3, an interlude) - toons on AI perils
- Smart (part 4) - convincing or human hacking
- Smart (part 5) - the not-so-secret AI agents
This post will draw on all this previous content.
Okay, it’s definitely time to stop adding to the plethora of writing about so-called ‘AI’.
But before I do, I must spend some time exploring some better names, more accurate names, less misleading names. Alternative names that can help us be smart about this amazing technology. Alternative names that might help us avoid selling off our future cheaply.
Let’s narrow it down
To begin, a reminder from Part 1: very different types of programs are collectively called ‘artificial intelligence’ (AI). The most important distinction to be aware of is the distinction between ‘narrow’ and ‘general’ artificial intelligence.
Artificial Narrow Intelligence (ANI) focuses on a specific and narrow capacity. It includes pattern matching software, coding software, pattern extrapolation software, heuristic decision-making software, and prediction software. Many of these are incredibly exciting and potentially helpful. Many of them are potentially disruptive and destructive of things that humans care about.
What is called ANI is nothing like human-like intelligence¹. Despite all sorts of risks, likely disruptions, and things to protect ourselves from, I think that ANI has enormous potential for good for humanity (in fact, we already use it every day).
The creation of Artificial General Intelligence (AGI), on the other hand, is a fantasy project of those who want to ‘play god’; a fantasy of complete power. The enduring quest to ‘create life’ represents a human flaw, a dissatisfaction or discomfort with being human as we are. As I mentioned in part 5, we can source the quest for AGI in a certain type of person (most often a man) who read their youthful science fiction novels at only the surface level.²
What's been hyped to us as AGI is also not intelligence.
All the AI that you may have experienced and read about is ‘Artificial Narrow Intelligence’ – even Large Language Models (LLMs, see part 2) fit into this category.
I think that better names - more accurate names than AI - could stem from the nature of that narrowness.
Throwing shade at ‘artificial intelligence’ is not renaming it
Noam Chomsky calls LLMs plagiarism software. Every day, recorded communication, including this writing, is being sucked up into their programming systems. It is taken without consent. It is theft.
In a similar vein, I have seen art generation software (e.g. Canva AI, Artbreeder, DALL-E with ChatGPT, Runway) described as derivative software. This makes sense to me in the everyday meaning of derivative, especially when I see its derivative ‘art work’ (not the specific mathematical meaning of derivative used within computer science).
I understand the impulse to insult and throw shade at the companies and technology visiting massive injustices on anyone with content in the public sphere. I understand the sense of powerlessness and dismay when your hard work is taken and changed by a machine – for what?! – as happened to Jingna Zhang's creation in the image below. But I'm not looking for critical labels.
So, names like plagiarism software and derivative software label HOW the software achieves its outputs/products – plagiarising or deriving – but not what the software's function IS.
I’m looking for some alternative names that pick out what these programs ARE (given they are not intelligent).
Alternative names for ‘artificial intelligence’
Here’s what a few people have suggested.
In his podcast The Ezra Klein Show on 19 March 2023, Ezra Klein suggested that rather than saying ‘AI is smarter than humans’, we could just say that AI (i.e. ANI) is ‘better than humans at doing x’.
We’re used to thinking like this: mechanical hoists are better than humans at lifting heavy weights; analytical software is better than humans at finding patterns in data; flight simulators are better than humans at highlighting the gaps in flight safety preparation.
In fact, programming ‘ANI’ for just one narrow and specific function is the reason it has been so successful, e.g. AlphaFold.
So why not call them by that specific function: Pattern Discernment or Pattern Matching software, or Enhanced Coding software? Very useful for all sorts of things! Many of the successful and exciting programs could be called Accentuated Pattern Identification and Extrapolation Software (APIES!). The capacities of some software in this field are amazing!!
My favourite project using ANI pattern matching software has to be Happywhale.³ The ANI matches the unique pattern on each whale's fluke to track their travels across the ocean, without the need for tagging. There are heaps of these types of projects.
[Images taken from the Happywhale Facebook page]
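The post doesn't describe Happywhale's actual pipeline, but the general idea of this kind of pattern matching can be sketched as nearest-neighbour search: turn each fluke photo into a numeric ‘fingerprint’, then find the catalogued whale whose fingerprint is most similar to the new sighting. Here is a toy Python illustration – the whale names and feature numbers are entirely made up:

```python
import numpy as np

# Toy sketch of fluke matching as nearest-neighbour search.
# In a real system the feature vectors would come from an image model
# trained on fluke edges and markings; these numbers are made-up
# stand-ins for each whale's 'fingerprint'.
catalogue = {
    "whale_A": np.array([0.9, 0.1, 0.3]),
    "whale_B": np.array([0.2, 0.8, 0.5]),
    "whale_C": np.array([0.4, 0.4, 0.9]),
}

def best_match(new_fluke: np.ndarray) -> str:
    """Return the catalogued whale whose fluke pattern is most similar."""
    def cosine(a, b):
        # Cosine similarity: 1.0 means identical direction, 0.0 unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(catalogue, key=lambda whale: cosine(catalogue[whale], new_fluke))

sighting = np.array([0.85, 0.15, 0.35])   # features from a new photo
print(best_match(sighting))               # -> whale_A
```

The matching step really is this simple; all the hard work goes into producing fingerprints that are stable across lighting, angle and distance.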
A reader mentioned hearing the term heuristic programs. Heuristic programs use ‘recipes’ for solving data-processing and decision-making problems. In the past, these ‘recipes’ would have been thought out and encoded by human programmers. The code runs along the lines of: if [this] = x, then complete action y. These days, the heuristics themselves are ‘extracted’ by the program from the many⁴ decision-making examples in the LLM and the decision-making databases it uses.
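To make that ‘if [this] = x, then complete action y’ style concrete, here is a minimal sketch in Python of the old-school, hand-coded kind of heuristic program. The rules are invented for illustration; a real system would encode the actual ‘recipes’ of domain experts:

```python
# A minimal sketch of a classic hand-coded heuristic (rule-based) program.
# The loan-screening rules below are hypothetical, for illustration only.

def screen_application(income: float, debts: float, years_employed: int) -> str:
    """Apply hand-coded 'recipes' of the form: if [this] = x, then do y."""
    if income <= 0:
        return "reject"                    # rule 1: no income, no loan
    if debts / income > 0.5:
        return "refer to human reviewer"   # rule 2: high debt ratio is risky
    if years_employed < 2:
        return "refer to human reviewer"   # rule 3: short job history is risky
    return "approve"                       # default: passed every shortcut

print(screen_application(income=60000, debts=12000, years_employed=5))  # approve
```

Note that all the ‘thinking’ lives in the human-written rules – which is exactly why such programs inherit the biases and blind spots of the people who wrote them.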
I’ve used this word before: heuristics are something that humans use all the time in their thinking – they are shortcuts and recipes for decision making.⁵ ‘Training’ programs with existing human heuristics for decision-making does make functioning more efficient, but these decisions will suffer from the inaccuracies and biases inherent in the shortcuts, and the programs have even more chance of making errors than humans do! (Quite a lot, actually!) So, they haven’t been as successful as the pattern matching programs.
A name I quite like is dialogue simulators for LLMs. This name was suggested about a month after the Smallville experiment in 2023 (which I covered in part 4). A group of researchers were very concerned about the dangers inherent in the human tendency to anthropomorphise AI chatbot agents. They suggested the term dialogue simulators because the agents were ‘capable of role-playing an infinity of characters’, simulating a multitude of humanlike personas at the same time with multiple human users.
Possibly my favourite alternative name for LLM-based AI comes from Jaron Lanier – social collaboration tool. Lanier is an insider in the tech world who writes that what is being developed is nothing like intelligence (and, like me, he questions what the techies even mean by that term!). He writes:
“The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating - but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.”
A tool for “illuminating previously hidden concordances between human creations” – I love that. I guess a concordance could be thought of as a pattern that has meaning – something meaningful to people. How about Concordance Illumination software?
So people have been saying for some time now that intelligence is the wrong word; as an insider, Jaron Lanier even said he thinks the use of the word intelligence is dangerous. But, for some reason, the term is persisting.
I wonder why. These (wonderful and useful) projects are not dealing in any realm of what we would call intelligence. Perhaps it’s easier to get funding for research projects if the word AI is included. Maybe the word is persisting because of marketing departments (as has happened all along as things got ‘smart’), or maybe it is being driven by ego, misguidedness, pigheadedness.
Or could it be something else?
An intelligence fetish
I think the answer lies with the ideas and values of the people⁶ driving the AI race.
Erik Davis (a tech reviewer) says that the culture of those developing AI is weirder than most folk can imagine. This includes their rampant drug use, a whole pile of nutty ideas about the Singularity, their desire for immortality, and their willingness to embrace ‘human hacking’ to achieve their aims (see part 4). But beyond this, Davis says the culture and philosophy of Silicon Valley sits somewhere between merciless utilitarian rationalism and hedonistic libertarianism, both deeply flawed and de-humanising belief systems⁷.
They think they alone know what the ‘truth’ is. They consider many normal human attributes to be weaknesses or failings. As Elon Musk recently said, "The fundamental weakness of Western culture is empathy”. They cannot abide illness or disability.
They think they know what humans want and how humans think.
It is their view of humanity that is the most concerning. ‘AI’ development is intricately bound up with the question of what it means to be human (and thus, for them, what it means to be transhuman or ‘post-human’). They base their software development on a simplified, dumbed-down and narrow view of humanity, what Erik Davis refers to as a single totalizing⁸ psychology. One might even call it dehumanising.
But behind all this, I think there is something more.
They have strong views on how human society SHOULD function.
Those leading the ‘AI’ race are open about their belief that they – the ‘smartest’ people, the cognitive elite – should be making all decisions about society. They have a clear (and to me quite disturbing) view of humanity and the ideal society.
And in their view, the ‘smartest’ people have the highest intelligence scores. And more intelligence, with a bigger number, is even better. And super-human intelligence is as good as it gets.
They have what I call an intelligence fetish.
Look at the way these ‘high IQ, high value individuals’ and Donald Trump, a major facilitator in their quest, use intelligence as a compliment and an insult.⁹
How would we know if a machine was super-intelligent?
As a thought experiment, imagine, just for a moment, that all this programming and engineering was on the verge of creating genuine artificial general intelligence – at least of human capacity, and then greater than human intelligence – superintelligence, as they claim.
Way back in part 1, I explored the complex and contested concept that we call human intelligence, and the problems with using formal intelligence tests to measure and ‘determine’ people's intelligence. As I pointed out, these tests have many problems, but the most relevant aspect for this discussion is that their design and their use are derived from known and normal human physiological limitations – for example, how many digits (numbers) people can recall.
The Intelligence Quotient (IQ – the number generated by the test) is a very narrow and contrived measure. Various studies have also shown a correlation between IQ and socioeconomic status. This suggests that most IQ tests really measure one's privilege – and, the way the tech-bros see it, also give them a way to justify that privilege.
So, because of their design, we can’t use IQ tests to test machines. All we do is set tasks that are hard for humans and say that if a machine can beat a human, it MUST be more intelligent. For example, we assume that only the most intelligent people can win international chess tournaments – so if a machine can beat a human at chess, it MUST be intelligent. Or we assume that language requires intelligence – so if a machine can use language, it MUST be intelligent.
I showed the fundamental flaws in these assumptions and this logic in parts 1 and 2. In fact, it's a ludicrous bit of circular logic.
Our current ‘AI’ is remarkable and potentially useful, but it isn't intelligence! And we have no way to measure or test the claims from the tech companies.
In reality, there is no external and objective reference for measuring what we call intelligence if it were to ‘appear’ in a machine. So, we have to rely on the claims of the tech companies themselves. We have to rely on the claims from those ‘high IQ individuals’, ‘high value people’ who hold a world view where we are lesser humans.
Sound like a good idea to you? Should we trust the people who already think they are better than everyone else and think only people with high IQs are valuable? Should we trust the people who think humans only really want to consume, consume, consume? Should we trust the claims of people who stand to make humungous piles of money from convincing us that machines are intelligent?
We need to completely drop the intelligence word. Instead, I keep coming back to Ezra Klein’s suggestion that we should focus on and name the specific capacity of each program or system.
Risks associated with 'AI'
There’s a lot written about the possible dangers of ‘AI’ and they are real risks for sure. These include:
- ‘AI’ systems making mistakes that have terrible consequences for humans (e.g. errors in AI-generated real estate ads), but with no person as an accountable source for that error!
- ‘AI’ replacing humans in many jobs and rendering large numbers of people without work – with all the economic, social and philosophical implications that would result.
- ‘AI’ systems being programmed to persuade people to act against their own best interests. For example, see this incredibly scary article from Wired where personnel from OpenAI openly state: “...the persuasive powers of language models runs deep... As humans we have this 'weakness' that if something communicates with us in natural language [we think of it as if] it is a human.” How is this research into persuasion not illegal?? It's everything bad I said about human hacking in Part 4.
- ‘AI’ generated content swamping human content, leaving us ‘drowning in a sea of slop’, so that our current social media and other platforms become useless.
- ‘AI’-based remote war and increasing surveillance applications, which have been happening for ages already.
- Massive use of finite resources for ‘AI’, e.g. water for cooling the machines, and production of massive amounts of carbon¹⁰ due to the high levels of energy required to run the ‘AI’ machines.
- ‘AI’ being bloody annoying when you try to do an internet search, turning up slop and inaccurate answers, and generating WAY more carbon in the process. (Read this footnote¹¹ about how to avoid that.)
I’m not going to write more on these matters; you can read about them anywhere, as you can tell by the many articles I've snipped from various social media.
They are quite scary possibilities, but I see bigger risks elsewhere.
Risks from assuming 'AI' programs are intelligent
[Comic – source: SMBC]
A major risk I see is focusing on the wrong things in development. If we assume a software program is intelligent, then we focus on the program itself and on what its status is. If we instead do not start with that assumption, we can focus on being clearer about the intent, purpose and collaborative mechanisms for these programs. We focus on the needs and goals of the people developing the machines and the people using the machines.
Additionally, if we consider these software programs as almost magic objects or as potentially sentient beings, we cannot do good research with them. We stop asking the right questions. We cannot accurately interpret what we learn from our interactions with them.
A second major risk I see is people losing important capacities. Writing teacher John Warner suggests that the adoption of ‘AI’ “is not really solving a problem but is instead being used to paper over a problem in a way that will cause significantly worse problems over time”. For example, he says that while students can now use ‘AI’ to turn in grammatically and formally correct writing (lacking originality or an interesting or personal point of view), they are not going through the act of discovering what they think; they are not learning to communicate what they think through writing. This is a critical human social capacity, normally years in development, that is being circumvented.
If we accept the claims that ‘AI’ is intelligent, I worry that we may eventually leave it to do all our thinking, and get so out of practice that we need to fully rely on it to communicate with others and to make decisions.
And that means we risk giving away something core to being fully human.
Risks from leaving 'AI' development to the tech-bros
The ‘AI’ race has been characterised as a “winner-take-all capitalist game already inimical in so many ways to human flourishing, sense-making, and even sanity.” It started with the theft of personal data, then the theft of personal creativity and work, and now we are seeing the theft of human purpose and activity, with no regard for human flourishing.
When I think about the ideology, fantasies and biases of those¹² pushing the quest for ‘artificial general intelligence’, and think about the negative influence of the profit motive, I do not want what they are selling. It is now clear that they have turned it into a quest to create the illusion of what they think intelligence is.
If we leave the development and implementation to those currently driving this game, the risk is we will have no choice but to accept THEIR impoverished view of humanity. To them, the process of personal creativity is not important, genuine emotion is unnecessarily messy, struggle and overcoming obstacles are a waste of time, imperfection and failure have no value, the process of learning a skill is merely tiresome work, frustration is something to be avoided, efficiency is more important than flexibility, and the gradual growth and mastery of a skill through repeated effort is just so passé.
So much of the writing encouraging the uptake of ‘AI’ focuses on the oodles of free time we will have (where have we heard that before?), and that we can finally be liberated from the frustrating limits of the human mind! We can be set free!
Liberated? Free? From… living? Liberated from reasoning, thinking, problem solving, making mistakes, feeling both competent and incompetent, exploring ideas for size and discarding some, trying out bullsh!t on our friends, and more, more and so much more. All the challenging, sometimes frustrating, but also satisfying interactions and experiences that we need and love about being human.
In summary, underlying the development and hype of supposedly intelligent ‘AI’ is this impoverished idea of what being a human means. Accepting the world view of the ‘AI’ developers reduces us to caricatures and consumers of products.
It’s not a view of humanity that I am willing to accept.
Let’s just play with the initials…
Because the term ‘AI’ has run away from any concept of what it really is, it’s not going to be easy to promote alternative and more accurate titles like Social Collaboration software, but we can try.
Maybe something that also uses the initials A and I might catch on more easily? Here are a few of my ideas that fit with the initials A and I, while naming the function of the software more accurately:
- Accentuated Information software
- Agent Imitation software
- Artificial Interaction software
- Agent Investigation software
- Alignment Illumination software
Avoiding the word intelligence is not ‘just semantics’ (which is never an insult for me anyway!). It’s the difference between treating these software programs as mystical, possibly sentient beings and being able to accurately identify the uses, risks and restrictions that the programs require.
If we label LLM-based ‘AI’ as a social collaboration tool (per Lanier) or agent imitation software, we can counteract, or at least be aware of, our tendency to anthropomorphise.
Whatever we rename ‘AI’, the criteria for alternative names must include describing the technology without ascribing human-like attributes and consciousness to it.
In this way, the right name will help us resist having our humanity ‘hacked’. The right name stops the sequence of assumptions and logical leaps that we take in projecting agency and even consciousness onto a software program (see part 4), particularly one with an anthropomorphised body.
Dropping the word intelligence will help us keep the focus on those who are creating the software. That focus gives us actions to take.
This Wordly Exploration draws to a satisfying but disturbing end
Over 6 posts, I have explored the impoverished ideas about the concepts of intelligence and language and also about humanity that sit behind claims of machine intelligence.
I’ve also explored how we can be tricked very easily, although we do not like to admit it. Advertisers know this, politicians know this, conmen know this. And now the tech-bros, who know this, are happy to exploit this human vulnerability. For potentially massive profit, they are setting themselves up as the gate-keepers to important technology.
This Wordly Exploration has clarified for me what irritates me in the way we talk about ‘AI’. For a start, it’s the cynical hacking of human vulnerabilities, needs and tendencies (e.g. the tendency to see agency in non-living things, see part 4) – with the aim of manipulating us, making a lot of money from our increasing dependency, and even perhaps in the future controlling us.
But being a wordly explorer, the misuse of the words intelligence, agent, sentience, freedom, etc. bothers me just as much as the end game of manipulation. The abuse of these words reveals the tech developers' impoverished ideas about human flourishing – and that irritates and alarms me.
Knowing this makes the topic much less irritating. But not a lot less disturbing.
Finally, to be super clear, I don’t think I’m being a luddite and mindlessly pushing back on the potential of what is currently misnamed ‘artificial intelligence’ just because it is new. I am in awe of its capacities – it can translate, visualise, code, debug, analyse, provide feedback and more, faster and way better than I will ever be able to. But it is a human-made tool, not an intelligent being. As with all human tools, this one could be used as a weapon against us. We need to be alert to that.
We need to be really smart about this, and remember at all times that the machines are not smart. In fact, those in the tech world are out of touch with most of humanity, and not that smart about the real world. Those of us outside the tech industry need to contribute to the development and applications of ‘AI’.
I think that my view about the most serious risk of ‘AI’ was expressed way back in 1976 by the science fiction writer, Ray Bradbury. I suspect he would be screaming the alarm from the grave if he could see how we’ve left the toys in the hands of the men who have taken over and who think their profits will come by making us all somehow less human.
Footnotes
- While higher abilities at pattern matching, pattern extrapolation, etc., in humans are associated with higher scores on IQ tests, they are NOT equal to intelligence itself.
- Being a big sci-fi reader, I know these books are REALLY about human psychology and sociology; they raise philosophical questions. They are not actually meant to be templates for creating life!! For a fascinating analysis of this, try Jill Lepore's podcast The Evening Rocket, which explores the science fiction basis of the ideas of Elon Musk.
- I think it’s the humans that are 'happy' about the whales. I know their site filled with whale flukes makes me happy.
- It's squillions!
- Of course these ‘recipes’ are simplifications, leaving a lot out and involving generalisations. For example, we might see a person with a certain appearance, and we make immediate decisions based on that little information – take a big shortcut – about that person’s attributes or intentions. It has been an evolutionary benefit to allow us to make decisions more quickly to avoid danger, but quite often our shortcuts will be wrong. But seriously, heuristic is not a very easy word to use and explain. And something like enhanced decision-making software gives the software too much credit!
- And speaking of generalising, obviously that's what I'm doing here. But you only need to look at their social presentations, their political machinations, their despicable statements about people in private, their inability to answer charges that their software is based on theft of other people's content, their willingness to toy with the general public through their previous digital technologies such as Facebook and Cambridge Analytica etc., and what they do to people who push up against them (see Cadwalladr for examples) to determine that they have a lot of ideas and values that most other people don't find acceptable.
- Read my previous post on Freedom if you want to know more about what I think of libertarianism.
- A definition of totalizing from Long, C. P. (2003). Totalizing identities. Philosophy & Social Criticism, 29(2), 209–240. "Totalizing, at its simplest, is taking different categories, identities and possibilities and bringing them under one framework so as to make it seem like there is only one thing, the totality, from which there can be no deviance, to which nothing can be different."
- I know Trump is not one of the tech-bros, but he is one of their greatest enablers and fans, and he is also one of their greatest puppets. So he is another example of someone who considers he is in the 'cognitive elite' who SHOULD be ruling the world. Here's a telling quote from an article in The Guardian from way back in 2017, when he was open about his intellectual fetish: “We know that Trump has a high IQ, possibly even higher than mine if I’m being modest, because he never shuts up about it. In 2013, for example, he tweeted: ‘I’m a very compassionate person (with a very high IQ) with strong common sense.’ He followed these pearls of wisdom with another tweet, a month later, saying: ‘Sorry losers and haters, but my IQ is one of the highest – and you all know it! Please don’t feel so stupid or insecure, it’s not your fault.’ In fact, he has tweeted about his IQ at least 22 times. In October, he also responded to reports that the US secretary of state, Rex Tillerson, had called him a ‘moron’ by telling Forbes that he would beat Tillerson in an IQ test.”
- From Rebecca Solnit in The Guardian: former Google CEO Eric Schmidt suggested we should just plunge ahead with AI – which is so huge an energy hog it’s prompted a number of tech companies to abandon their climate goals – because maybe AI will somehow, eventually, know how to ‘solve’ climate, saying: ‘I’d rather bet on AI solving the problem than constraining it.’ They want to plunge over ‘the brink’ because they are excited about AI.
- By integrating AI into its search function, Google has significantly increased the carbon footprint of a single search, as AI processes require considerably more energy than traditional search queries and thus greater carbon emissions from Google's data centres. To stop AI-based searching, simply type '-ai' at the beginning of your Google search. Or use DuckDuckGo!
- Perhaps some in the tech world continue with noble aims to create something novel and wonderful, but overall, it’s just a race for massive profit, with very little concern for impacts on humans. As an illustration, in a poll of programmers, 10-15% agreed that AI technology could wipe out humans in the foreseeable future. A sizeable proportion think they are building the end of humanity?!!
Images
- Cup the Future is $2 snipped from social media, no source
- Quote made by the author using text from Toby Walsh's article Friday essay: Some tech leaders think AI could outsmart us and wipe out humanity. I’m a professor of AI – and I’m not worried, The Conversation, 14 February 2025
- Altman fake image of crying snipped from social media, no source
- Jingna Zhang social media post, snipped from Facebook, fair dealing
- Four whale flukes from HappyWhale social media posts, snipped from Facebook, fair dealing
- Quote made by the author using text from Jaron Lanier's article There is No AI, The New Yorker, 20 April 2023
- Quote made by the author using text from Sam Altman's The Merge blogpost, December 8, 2017
- Quote made by the author using text from Erik Davis' article AIEEEEEEE! Something Weirdo This Way Comes, 2023.
- Collation of social media images of Elon Musk and Donald Trump referring to their IQ or insulting others as having low IQ or being not smart. Including:
- https://fortune.com/2025/02/19/trump-offers-mixed-praise-on-elon-musk-high-iq-faults/
- https://www.tiktok.com/@joysparkleshine/video/7489589909649411374
- https://www.politico.com/story/2019/05/30/donald-trump-iq-intelligence-1347149
- Stephen Hawking quote snipped from Facebook, fair dealing
- Quote made by the author with text from Alex Hern's article Elon Musk predicts superhuman AI in The Guardian, 9 April 2023
- Quote made by the author with text from Terrence Sejnowski's article Large Language Models and the Reverse Turing Test, Neural Computation, Volume 35, Issue 3, March 2023
- ‘GPT-5 will have an IQ of…’ image snipped from social media, no source, fair dealing
- Collation of images of stories of mistakes made by AI snipped from social media, fair dealing
- Collation of images of stories about AI replacing humans at work or widening social divides snipped from social media, fair dealing
- Collation of images stories of AI capacities to induce humans to act against their own best interest, and to produce 'slop' snipped from social media, fair dealing
- Collation of images of stories related to war, surveillance applications and the high carbon footprint of AI machines snipped from social media, fair dealing
- Hey Robot, are you gonna apocalypse our asses? SMBC. No terms for reuse with attribution on site, but he is just always on point!
- Quote made by the author with text from Marianela D’Aprile article The Robots Are Coming for Our Souls, Jacobin, 8 September 2024
- Quote made by the author with text from Fabrizio Dell’Acqua's article Falling asleep at the wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters, Laboratory for Innovation Science, Harvard Business School, 23 January 2022
- Two quotes made by the author with text by Emanuel Maiberg in the article Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”, 404 Media, 10 February 2025
- Quote made by the author with text from Carole Cadwalladr's article It’s not too late to stop Trump and the tech broligarchy from controlling our lives, but we must act now, The Guardian, 20 April 2025
- Collation of quotes and images
- Quote from Nick Cave, The Red Hand Files, Issue #248, August 2023
- Screen shot of post on Instagram by @goddess.of.sass.2 on bullsh!tting
- Keanu Reeves quote from Daniel Gugger 888 @daniel-gugger on X, 8 April 2025
- Collation of quotes and images
- Screen shot of ABC post on Facebook ‘AI makes life easier’, fair dealing
- Quote made by the author with text from Marianela D’Aprile article The Robots Are Coming for Our Souls, Jacobin, 8 September 2024
- Screen shot of poem by Joseph Fasano on Facebook from the original at: AI removes the miraculous task of living. Fair dealing.
- Quote made by the author using text from Jaron Lanier's article There is No AI, The New Yorker, 20 April 2023
- Quote made by the author with text from Joshua Rothman article Are We Taking A.I. Seriously Enough? The New Yorker, April 2025
- Trust the Science Facebook post snipped from social media, fair dealing
- Quote made by the author with text by Ezra Klein podcast The Ezra Klein Show: My Views on AI 19 March 2023
- Ray Bradbury image snipped from social media, no source, fair dealing. Original quote from an article in Writers Digest