The problem with taking so long to write this Smart series about Artificial Intelligence (AI) is the 34,372 extra articles published on the topic in the meantime!
Interestingly, though, not much has really changed. I still find most of this writing incredibly irritating and I'm still trying to work out exactly why. Here is a brief summary of my explorations so far.
In Smart part 1, I pointed out that intelligence is a complex concept (and not a thing) created to talk about human abilities. Consequently, our tests of intelligence are based on human attributes and physiological limitations. Intelligence is a contested concept: there is no broad agreement about what it even means. I asked, given that intelligence is not a discrete thing in humans, how could we actually know when it ‘appears’ in machines?
In Smart part 2, I explored the concept of language to show why the conversational abilities of ChatGPT and its ilk are easily explained by sophisticated programming and the nature of language use by humans. It’s nothing to do with intelligence. And yet, the tech developers seem to be using the natural language capacity of the smart bots as a (quite flawed) proxy for intelligence. (Smart part 3 was a diversion into some of the fabulous toons on this topic, but now back to being serious!)
As a reference, I keep coming back to the Turing Test, which holds that if a machine can convince a knowledgeable human observer of its intelligence, then it should be considered intelligent.
In this post, I explore how the focus of recent development has been more on the convincing than on the intelligence part of the Turing Test. Convincing, or, as it could perhaps more accurately be called, human hacking.