Never Mind Artificial Intelligence, Let’s Define ‘Real’ Intelligence
‘Artificial intelligence’ is without doubt the latest breakfast-table topic of conversation. Everyone seems to have formed some sort of opinion about it, and some of those opinions appear to be derogatory. I find this interesting, for a number of reasons.
Some people are questioning why artificial intelligence is called artificial intelligence. Specifically, the ‘intelligence’ part.
They (whoever ‘they’ are) are saying that if a machine can do something, then how can it be defined as ‘intelligence’? Surely machines aren’t intelligent, even artificially?
Which led me to consider, somewhat philosophically, what is ‘real’ intelligence?
And then I started realising that there are potentially numerous parallels between what we as humans define as real intelligence and what a machine can do.
Anything you can do, (A)I can do better?
I was wondering if the things the world’s historically famous geniuses did could be done by a machine.
Isaac Newton, amongst other things, built the first practical reflecting telescope. He set down the principles of modern physics. He worked out that colour is an intrinsic property of light and developed a theory of colour. And, of course, what he’s most recognised for… he came up with the concept of universal gravitation and the laws of motion.
Galileo played a key role in the Scientific Revolution. He dramatically improved the early telescope and turned it on the heavens, and he developed a geometric and military compass.
Thomas Edison developed the first commercially practical electric light bulb and an early movie camera (along with W.K.L. Dickson, and not forgetting Auguste and Louis Lumière), as well as the nickel-iron rechargeable battery. Now there’s some stuff we can’t do without.
And there are others who have set down theories and laws, like Einstein, Langmuir, Kepler, Hawking and Langan. And Terence Tao, the man who worked out how to remove red-eye in photos.
It’s all pretty mind-blowing stuff. So, the question is, can AI do scientific discovery as well as humans?
Well, as it happens, just a few months ago, it was reported that Kepler’s third law of planetary motion had been rediscovered centuries after it was first described. Only this time, AI took all the plaudits.
Dubbed AI-Descartes, this ‘AI scientist’ was developed by a team of researchers from IBM Research, Samsung AI, and the University of Maryland, Baltimore County (UMBC).
AI-Descartes has the ability to work with large volumes of data and generate equations that fit that data. It’s also been programmed with mathematical reasoning, allowing it to check how the reams of generated equations square with existing background theory, and then establish which are useful and valid.
However, AI-Descartes needs to acquire background knowledge first, which it does with the help of a human research team. BUT… the hope for the AI scientist is that, eventually, it will surpass its human teachers and discover its own new theories, rather than applying old theories in new contexts.
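To make that equation-generating idea concrete, here’s a minimal sketch of my own – an illustration of the general technique, not the actual AI-Descartes code – showing how a power-law fit over each planet’s orbital period and distance from the Sun ‘rediscovers’ Kepler’s third law, T² ∝ a³:

```python
# A toy 'rediscovery' of Kepler's third law via curve fitting.
# Illustrative only - the real AI-Descartes system is far more elaborate.
import numpy as np

# Semi-major axis (AU) and orbital period (years) for six planets.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Hypothesise a power law T = k * a^n and fit it in log space,
# where it becomes a straight line: log T = n * log a + log k.
n, log_k = np.polyfit(np.log(a), np.log(T), deg=1)

print(f"fitted exponent n = {n:.3f}")  # ~1.500, i.e. T^2 is proportional to a^3
```

AI-Descartes goes much further, of course, searching whole families of candidate equations and checking them against background theory, but the kernel of ‘find the formula that fits the data’ looks a lot like this.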
AI-Descartes was named after 17th-century mathematician and philosopher René Descartes who argued, much like this article hopes to, that logical deduction played a key role in scientific discovery. That proposition, to me at least, means it is based on experience and ‘frame of reference’, rather than some magical human-only thing known as ‘intelligence’.
What is intelligence?
Is intelligence the ability to…?
- Learn new skills
- Store information and retrieve it when required
- Deduce critically
- Use imagination and intuition
- Reason and interpret
- Apply educated insight
- Apply learned knowledge
- Match patterns
Are these intrinsic traits, or are they learned qualities? If they are learned qualities, does that mean that they can be learnt by machines? Or are they unique to humans?
And does intelligence come with age? Is a child as intelligent as a centenarian, for example?
Personally, I think if we can learn something over time, AI will eventually be able to do it better. And faster.
But is everything we deem ‘intelligence’ a learned skill?
What about our ability to naturally understand the human experience, to empathise? To forge meaningful relationships? To use gut feeling? To see the bigger picture? To make decisions based on emotion?
Machines can’t do that. Can they?
I asked ChatGPT if AI can empathise. It said this:
AI, including the one you're interacting with, can simulate empathy to some extent, but it's important to understand that this empathy is not genuine. AI can analyse and respond to text input based on patterns and data it has been trained on, but it doesn't have emotions, feelings, or consciousness. It can generate responses that seem empathetic by recognising emotional cues in text, but it's a programmed response rather than a true understanding of emotions. So, while AI can mimic empathy, it's not experiencing it in the way humans do.
Hmmm, interesting Mr Bond. Now that made me think of psychopaths. So, I asked ChatGPT again, can psychopaths empathise? It said this:
In essence, a psychopath might understand that someone is in distress but not feel any emotional response to that distress. This allows them to manipulate emotions effectively, understanding them on a "logical" level without experiencing them.
Sounds more like a succinct version of the first answer to me.
Anyway, to me it’s a moot point. Are empathy, gut feeling and emotion really intelligence anyway?
How many times have we, and by that I mean me, made a decision based on gut feeling or emotion, only for it to come back and bite us (me) on the posterior? Just ask my long-suffering wife. I have a loft with over 80 sure-fire ideas in it that I just knew in my gut were game changers. Turned out I was wrong. Not very intelligent at all, I’d argue.
So instead of using gut feeling, would I not have been better off considering my decisions based on previous experience?
Is ‘intelligence’ merely the process of applying previous experience and knowledge?
Here’s a scenario.
Child A was brought up in a plain white room with no toys or educational interactions.
Child B was brought up in an engaging environment with plenty of toys and educational interactions.
Give ‘Child A’ a puzzle. Will they know how to solve it? If you gave them a computer, would they know how to use it? No. Because they’ve never seen anything like it before.
If you gave ‘Child B’ a puzzle, even if they’d never seen one before, they’d use what they’d learned with their other toys and educational interactions to work out a way of solving it. If you gave them a computer, even if they’d never used one before, they’d apply their learnings with interactive toys and games to work out how to operate it, at least to some extent.
I think this scenario shows that everything we do is based on what we’ve picked up during our lifetimes, the things we’ve experienced. And of course the more someone has been exposed to, the more they’ll know and the greater their reasoning and computing power.
And I think that goes for artificial intelligence systems like large language models too. The more sources they are trained on, the better they will be at giving you the information you want. I guess you could say, the more ‘intelligent’ they would be.
AI is trained to act on the data it receives, whereas humans use divergent thinking. In other words, as humans we use multiple parts of the brain at the same time, which allows us to work on complex, open-ended tasks. Most AI systems, by contrast, are built to do one narrow task at a time. At the moment.
We already have AI ‘deep learning’ – a form of machine learning – that is carrying out complex tasks. And ‘reinforcement learning’, which teaches machines to learn from their mistakes and improve their reactions.
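To give a flavour of what ‘learning from mistakes’ looks like in code, here’s a minimal reinforcement-learning sketch – a toy of my own, not any production system – in which an agent repeatedly tries two options, gets rewarded or not, and gradually shifts towards the option that actually works:

```python
# A toy reinforcement-learning loop: an epsilon-greedy two-armed bandit.
# Purely illustrative; the payout probabilities are made up.
import random

ARM_PROBS = [0.3, 0.7]   # hidden payout probability of each arm
values = [0.0, 0.0]      # the agent's running reward estimate per arm
counts = [0, 0]          # how many times each arm has been tried
EPSILON = 0.1            # how often the agent explores at random

for step in range(10_000):
    # Mostly exploit the best-known arm; occasionally explore.
    if random.random() < EPSILON:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda i: values[i])

    reward = 1.0 if random.random() < ARM_PROBS[arm] else 0.0

    # Update the running estimate - the 'learning from mistakes' step:
    # arms that keep disappointing drift down and get chosen less.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(f"learned values: {values}")  # arm 1 should score near 0.7
```

The agent isn’t told which arm is better; it works it out from its own successes and failures – a miniature version of how reinforcement learning improves a machine’s reactions over time.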
So I truly believe that if we give it enough time – and enough data – AI could become super-powered. ChatGPT, for example, has only been around for a little while. It’s a baby.
Imagine what it could know and do when it’s a teenager. Or a 30-year-old, or a 50-year-old?
How do we judge intelligence?
I’m not sure what AI models are being judged on when ‘people’ say they’re not really intelligent, or even when they say they are. Is there some sort of comparison maybe? I guess it could be said that AI has to be more intelligent than a gorilla. But surely not more so than my personal hero, Bill Gates? I suppose it depends how you look at it.
AI is being used to optimise supply chains and logistics. It’s guiding surgical procedures. It’s creating interactive educational platforms that are boosting children’s academic and social skills. The list goes on. It’s pretty clever stuff. But wasn’t it humans who created the algorithms that power AI? So are humans therefore more intelligent?
Many people use ‘IQ’ to judge intelligence. ‘Intelligence quotient’ tests a person’s memory, mathematical skills, reasoning ability, processing speed, language skills, vocabulary and visuospatial processing. But these are not the be-all and end-all of human talents. So really, it’s not the perfect benchmark.
Consider an Amazonian farmer skilled in living off the land, surviving natural disasters, and navigating a perilous environment. He might indeed score low on a standard Mensa IQ test, which measures a specific set of cognitive skills largely developed and valued in industrialised societies.
On the flip side, a person with a high IQ score, while excelling in abstract reasoning or problem-solving, could find themselves woefully unprepared and potentially in mortal danger if suddenly dropped into the Amazon with no survival skills.
IQ tests measure a narrow range of abilities and don’t account for the full spectrum of human ability, including the practical wisdom needed to thrive in diverse environments. Again, that so-called ‘wisdom’ is learnt, not something we’re born with. My point being that, in my opinion, IQ tests don’t measure intelligence so much as the skillsets valued in the society doing the testing.
There are also various tests used to evaluate AI systems. These focus on intelligence, problem-solving and human-like behaviour.
There’s the Chinese Room argument, for example, a thought experiment that challenges the idea that AI can genuinely understand and possess consciousness. It leaves open the prospect that a machine could be built that is more intelligent than a human, albeit one without a mind or intention. But I don’t buy into this idea, as I think it’s a matter of opinion whether all humans ‘understand’ what they are doing all the time.
The Winograd Schema Challenge asks AI systems to answer multiple-choice questions that call for common sense, reasoning and an understanding of the ambiguity of language – for example, ‘The trophy doesn’t fit in the brown suitcase because it’s too big. What is too big?’ If it hasn’t been overcome already, I’d imagine this will be meaningless within months rather than years.
The Turing test measures a machine’s ability to communicate undetectably as a human. No AI has passed it flawlessly yet, but there have been some close calls. However, for some reason, whenever I imagine this test I always picture a scientific-looking old man with a clipboard. Then I imagine the judge being a child and I think, hmmm, I bet it could fool a child completely. So is it as ‘intelligent’ as, say, an eight-year-old?
AI can also be evaluated on its ability to solve problems, like logic puzzles, maths, chess or poker. The tests measure the system’s ability to learn, formulate strategies, and adapt to changing problems.
Unsurprisingly, there are criticisms of all these tests. But then aren’t there criticisms of the IQ test? Like I said, it doesn’t assess every human skill, so it’s not really testing ‘intelligence’ as a whole.
Machines get things wrong. But does that make them “unintelligent”?
It’s well documented that large language models are prone to getting things wrong. So does that mean they’re unintelligent?
But wait… don’t humans get things wrong?
- They don’t mercilessly waste 85 million tonnes of paper worldwide every year?
- They didn’t burn billions of tonnes of fossil fuels or cut down trillions of trees and directly cause climate change?
- They didn’t crash the Titanic into an iceberg? Too soon?
- They didn’t forget to replace a safety valve on the Piper Alpha oil rig after a safety check, leading to a fatal explosion?
- They weren’t responsible for the sinking of the Prestige oil tanker, dumping 63,000 tonnes of oil into the ocean, wiping out 300,000 birds and severely damaging marine life? Or the grounding of the Exxon Valdez, which spilled, by some estimates, up to 760,000 barrels of oil into Alaskan waters?
The thing is, when an AI system gets something wrong, providing you give it constructive feedback, it’s very unlikely to do it again.
When a self-driving car crashes, the incident data is fed back so the whole fleet learns not to repeat the pattern. But when a human crashes a car, there’s a high likelihood that they, or others, will repeat the same action over and over again, ad infinitum.
It’s not about avoiding making mistakes. It’s about what you do when you get things wrong. If you take action to make sure it doesn’t happen again, that could probably be considered intelligent. Or at least common sense. But if you keep repeating the same mistakes, that’s not very clever.
Intelligence is not to make no mistakes, but to see quickly how to make them good.
Some final thoughts… what is intelligence?
If we, as humans, were truly intelligent, would 17% of total global food production be wasted? And would up to 783 million people have faced hunger in 2022?
Would over 14 million – one in five – people have been living in poverty in the UK during 2021-2022?
Would there really be a need for war?
The measure of intelligence is the ability to change.
Or if you prefer...
Intelligence is the ability to adapt to change.
Maybe, one day, artificial intelligence will be so advanced that it will recognise wasted food mountains and automatically arrange to transport them to famine-stricken countries.
Maybe it would unite the insights of educators, business people and lawmakers to identify problems across different areas so that poverty could be resolved, and work out where to send resources for best use, like funding well-building or teacher training rather than just handing money to governments.
And maybe, just maybe, it could curb the concept of enemy and educate world leaders on their respective personal and cultural values, instilling mutual equality and acceptance and avoiding wars.
Now that would be intelligent.
If intelligence is about learned skills and learning from mistakes, then I believe that machines most certainly have it.
We humans learned – eventually – to stop needlessly cutting down trees, and to replace them with new ones. We’re learning to burn fewer fossil fuels. To stop spraying CFCs. To recycle and reuse. There’s a long way to go, but we’re at least pointing in the right direction.
But I think (and personally hope) that AI will ‘get there’ first. Because it’s learning at a massively faster rate than we ever have.
While the discourse around human and artificial intelligence is fraught with disagreement, I firmly believe in the symbiotic potential between the two.
The harmonisation of human intuition with the computing power of AI could revolutionise our world, from automated transport to real-time voting systems.
Yet, when it comes to evaluating intelligence, whether artificial or human, we must first clearly define what intelligence is. I would entrust AI to drive me, fly me, brief me, and even de-brief me (in the sense of a valet robot). But would I ever believe any AI that told me it loved me? Never!
Which leads me to ponder whether, in fact, human intelligence is love. If so, then I don’t believe anything but a human is capable of it. For me, that’s the litmus test for genuine intelligence.
AI will augment our lives in unimaginable ways, but it is the inherently human aspects of soul-to-soul connection, and the shared feeling of meant-to-be destiny, that serve as the ultimate barometers of intelligence. Everything else we (and machines) learn to do is just parlour tricks. I coined the phrase ‘more time to do less’, and by that I mean less number crunching, less spell-checking, less box ticking, less… work! And more time to be me. I wish nothing but the same for you. Here’s to the future – I, for one, cannot wait.