Since the origins of the quest for artificial intelligence (AI), there has been a debate about what is unique to human intelligence and behaviour and what can be meaningfully replicated by technology. In this article we discuss these arguments and the ramifications of 'ignorance' as it is expressed by current AI models.
We approach this by posing philosophical questions about the current limitations of AI and whether those limitations could have significant consequences if we give AI agents too much responsibility.
Two recent podcast series provide useful and complementary insights into both the current progress towards Artificial General Intelligence (AGI) and the important role of ignorance in our own cognitive abilities. The first is Season 3 of 'Google DeepMind: The Podcast', presented by Hannah Fry, which describes the current state of the art in AI. The second is Season 2 of the BBC's 'The Long History of… Ignorance', presented by Rory Stewart, which explores our own philosophical relationship with ignorance.
Rory Stewart's podcast is a fascinating exploration of the value that we gain from ignorance. It is based on the thesis that ignorance is not just the absence of knowledge: it feeds humility and is essential to the most creative endeavours humans have achieved. To ignore ignorance is to put complex human systems, such as government and society, in peril.
The key question we pose is whether current AI appreciates its own ignorance. That is, can it recognise that it doesn't know everything? Can AI embrace, respect and correctly recognise its own ignorance, so that it doesn't just learn through hindsight but becomes wiser, and so that it remains fundamentally aware, when it makes decisions and offers conclusions, that it is doing so from a position of ignorance?
The late Donald Rumsfeld is most popularly remembered for his theory of knowns. He observed that there are things we know we know; things we know we don't know; and things we don't know we don't know.
Stewart makes multiple references to this in his podcast. At the time Rumsfeld made the statement it was widely reported as a blunder, a statement of the blindingly obvious. Since then, the trinity of knowns has entered the discourse of a variety of fields and is widely quoted and used in epistemological systems and enquiries. Let us take each category in turn and consider how AI treats or understands it.
Understanding our 'known knowns' is relatively easy. We would suggest that current AI is better than any of us at knowing what it knows.
We also put forward that 'known unknowns' should be fairly straightforward for AI. If you ask a human a question and they don't know the answer, it is easy to report this as an unknown; indeed, young children deal with this task without issue. AI should also be able to handle this concept. Both human and artificial intelligence will sometimes make things up when the facts to support an answer aren't known, but that should not be an insurmountable problem to solve.
As Rumsfeld was trying to convey, it is the final category of 'unknown unknowns' that tends to pose a threat. These are missing facts that you cannot easily deduce are missing. This includes situations where you have no reason to believe that 'something' (in Rumsfeld's case, a threat) might exist.
It is an area of huge misunderstanding in human logic and reasoning. It is accepting that the world is flat because nobody has yet considered that it might be spherical. It is expecting Isaac Newton to understand particle physics and the existence of the Higgs boson as he theorised about gravity. It is following one course of action because there was no reason to believe that another might be available: all evidence in my known universe points to Plan A, so Plan A must be the only viable option.
In experiments with ChatGPT, there is good reason to believe that it can be humble: it recognises that it doesn't know everything. But the models seem far more focused on coping with 'known unknowns' than on recognising the existence of 'unknown unknowns'. When asked how it handles unknown unknowns, ChatGPT explained that it would ask clarifying questions or acknowledge when something is beyond its knowledge. These appear to be techniques for dealing with known unknowns, not unknown unknowns.
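For readers who want to try this kind of informal probe themselves, the sketch below shows one way to put the question to a model programmatically. It is illustrative only: a minimal sketch assuming the OpenAI Python client, an illustrative model name ('gpt-4o') and our own wording of the prompts. In our experience the replies describe tactics for known unknowns, such as clarifying questions and admissions of uncertainty, rather than any handling of unknown unknowns.

```python
# A minimal sketch of the informal probe described above, assuming the
# OpenAI Python client (v1) and an illustrative model name; the prompt
# wording is ours, not quoted from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

probes = [
    "How do you handle questions you don't know the answer to?",
    "How do you handle 'unknown unknowns': gaps in your knowledge "
    "that you have no way of knowing exist?",
]

for probe in probes:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any chat model will do
        messages=[{"role": "user", "content": probe}],
    )
    # Print each question alongside the model's self-description
    print(f"Q: {probe}")
    print(f"A: {response.choices[0].message.content}")
    print()
```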
Through early life, in our progression from childhood to adulthood, we are taught that the more you know and understand, the more successful you will be. Not knowing a fact or principle is nothing to be proud of; it should be addressed by learning the missing knowledge, and then by learning even more to avoid failure in the future. In education we are encouraged to value knowledge more than anything else.
But as we get older, we learn with hindsight from the mistakes born of ill-informed decisions. In the process, we become more conscious of how little we actually know. If AI in its current form does not appreciate or respect this fundamental concept of ignorance, then we should ask what flaws might exist in its decision-making and reasoning.
To feel that we can understand all aspects of a complex system is hubris. Rory Stewart touches on this from his experience in government. It is a fallacy to believe that we should be able to solve really difficult systemic problems just by understanding more detail and storing more facts about the characteristics of society.
As Stewart notes, this leads to brittle, deterministic solutions based on the known facts, with only a measure of tolerance for the 'known unknowns'. Their vulnerability to the 'law of unintended consequences' is proven repeatedly when a solution is found to be fundamentally flawed because of facts that were never, and probably could never have been, anticipated.
These unknown unknowns might be known elsewhere, but remain out of sight to the person making the decision. Some might be revealed by speaking to the right experts or by pursuing the right lines of enquiry. However, many things are universally unknown at any moment in time. There are laws of physics today that were unknown unknowns to scientists only a few decades ago.
Stewart dedicates an entire episode to ignorance's contribution to creativity, bringing in the views and testimony of great artists of our time, such as Antony Gormley. If creativity is more than the incremental improvement of what has existed before, how can it be possible without being mindful of the expanse of everything you don't know?
This is not a new theory. If you search for "the contribution that ignorance makes to human thinking and creativity" you will find numerous sources that discuss it, with references ranging from Buddhism to Charles Dickens. Stewart describes Gormley's process of trying to empty his mind of everything in order to set the conditions for creativity. Creativity is vital to more than creating works of art; it is an essential part of complex decision-making. We use metaphors like 'brainstorming' and 'blue-sky thinking' to describe the state of opening your mind and not being constrained by bias, preconception or past experience. This is useful not just for coming up with new solutions, but also for 'war gaming' previously unforeseen scenarios that might present hazards to those solutions.
So, if respecting and appreciating our undefined and unbounded ignorance is vital to making good and responsible decisions as humans, where does this leave AI? Is AI currently able to learn from hindsight – not just learn the corrected fact, but learn from the very act of being wrong? In turn, from this learning, can it be more conscious of its shortcomings when considering things with foresight? Or are we creating an arrogant super-genius unscarred by its mistakes of the past and unable to think outside the box? How will this hubris affect the advice it offers and the decisions it takes?
What if we lived in a village where the candidates for leader were a wise, humble elder and a know-it-all? The wise elder had experienced many different situations, including war, famine, joy and happiness; they had improvised solutions to the problems they faced, and had learnt in the process that a closed mind stifles creativity; they knew the mistakes they had made, and therefore knew their own limitations. The village 'genius' was young and highly educated, having been to the finest university in the land. They knew everything ever written in a book, and they were not conscious of ever having made a bad decision.
Who would you vote for to be your leader?
The concepts described here are almost certainly being worked on by teams at Google DeepMind and the other AI companies, and the problems shouldn't be insurmountable. The current models may have a degree of caution built into them to dampen their more extreme enthusiasm. But we'd argue that caution when making decisions based on what you know is not the same as creatively exploring the 'what if' scenarios in the vast expanse of what you don't know.
We should be cautious of the advice we take from these models and what we empower them to do—until we are satisfied that they are wise and creative as well as intelligent. Some tasks don’t require wisdom or creativity, and we can and should exploit the benefits that these technologies bring in this context. But does it take both qualities to decide which ones do? We leave you with that little circular conundrum to ponder.