AI is a constant feature in the news these days, but a couple of recent news items might have struck you as worthy of more thought. First was the announcement by Meta and OpenAI that they will shortly be releasing models that ‘think’ more like people and are able to consider the consequences of their decisions. The second was an article in the FT reporting that the speed of AI development is outstripping the development of methods to assess its risks.
These two developments, and the tension between them, point to a paradoxical feature of human nature: why do we trust computers more than humans?
If there is a reliable basis of trust in a person or in a piece of technology, then the level of risk being taken can be more clearly understood. Without a sound basis of trust, this risk becomes increasingly uncertain.
In this article, Tom Burton, a cyber security expert and technology thought leader, addresses the historical roots of this dilemma, and also answers the following:
Why is it that a human is more likely to implicitly trust what a computer tells them than what another human tells them?
Before you hit back with disagreement, consider this scenario: How would you react if a gentleman dressed in the regalia of an African prince turned up at your door offering untold riches without any conditions?
Many people over the years have been taken in by exactly that offer when it arrived by email. Phishing, fake news on social media, and numerous other socially engineered deceptions rely on this digital bias, which has been the subject of plenty of research.
When Tom Burton was responsible for information systems, information management, and information exploitation in his Army Headquarters, he found it striking how many people assumed that a unit’s location shown on a screen was 100% accurate. They would treat a similar ‘sticky’ marker on a physical map with caution, recognising that there was implicit uncertainty in the accuracy of the ‘reported’ location, and that the unit in question might have moved significantly since making that report. Yet they would happily zoom in to the greatest detail on a screen and ask why A Squadron or B Company was on the east side of the track rather than the west.
This implicit trust has striking implications for many aspects of our digital lives, and will be brought into even sharper focus with the widespread adoption of AI applications.
Tom has a theory. Humans are inherently fallible, deceitful and unpredictable. We make mistakes, sometimes intentionally; sometimes due to tiredness, emotions or bias. And we have spent at least 300,000 years reaffirming this model of each other.
Machines are considered to be predictable and deterministic. No matter how many times two large numbers are entered into a calculator, it’s expected that they will be added up correctly and consistently.
At least subconsciously, a computer is regarded as more like a hammer than a human: a predictable tool that will produce the result it was programmed for.
But even in the case of conventional, non-AI technology, this perspective is a fallacy. Computers are designed and programmed by fallible humans. Mistakes are made, and those mistakes are transferred into the code and, in turn, into the results that the code produces. The more complex the code, the less certainty there can be that its results are accurate and consistent.
A ‘truthful’ response may also depend on sharing the perspective of the person who designed the system. If the designer interprets an ambiguous problem differently from the user, the probability that the results will be misinterpreted increases significantly.
People consider their digital tools to be as predictable as a hammer, but all too frequently those tools behave more like the humans who created them.
This situation is only likely to become more extreme with AI. Technology is actively being designed to operate more like humans: to learn, and to apply insight from that learning in new situations. The question asked of a system today might well produce a different answer if asked again in the future, because the information and ‘experiences’ that answer is based on will have changed. In exactly the same way, if we ask a human the same question ten years apart, we are not surprised to get a different answer, particularly if we are seeking an opinion.
If technological tools are increasingly becoming more similar to humans than hammers, then how does this affect risk? The diversity and unpredictability of humans is something with which society is familiar and has been managing for some time; so let's look at the similarities, because, after all, the aim is to replace people with technologies that operate in a similar way.
It's known that people misunderstand tasks because language is ambiguous, and interpretation is based on an individual’s perspective. Everyone has different value systems, influencing where focus is placed and where corners might be cut. At an extreme, these different values may lead to behaviour that is negligent or even malicious. People can be subverted or coerced to do things. All of these behaviours have parallels with complex technology, and AI in particular.
Ambiguity will always create uncertainty and risk. AI models are built around value systems intended to steer them towards the most desired outcome; but those value systems may be imperfect, especially when they were defined in the past for situations that could not have been foreseen. And it is known that technology can be compromised to produce undesirable outcomes.
But it is important to note that there are some fundamental differences as well. Groups and organisations tend to have inherent dampers that reduce extremes (though geopolitics might provide evidence against this). Recruiting one person to do a task might result in a ‘good egg’ or a bad one. But recruiting a team of ten increases the chance that different perspectives will challenge extreme behaviours, and greater diversity increases this effect. This does not eliminate risk, and a very strong character might still be able to influence the entire team, but it introduces some resistance. However, if the ‘team’ comprises instances of the same AI model, feeding from the same knowledge base, using the same value systems and learning directly from each other, it might operate more like an echo chamber, as seen with runaway trading algorithms that are tipped out of control by the positive feedback of their value systems.
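To make that echo-chamber dynamic concrete, the sketch below is a deliberately over-simplified illustration in Python, with invented numbers and no connection to any real trading system: a group of identical agents that only amplify one another's average view runs away, while a group anchored to diverse, independent perspectives settles.

```python
# A toy illustration of positive feedback in a group of 'agents'.
# Each round, every agent revises its estimate by blending its own fixed
# anchor with an amplified version of the group's average view.
# All weights and numbers here are invented purely for illustration.

def run(anchors, anchor_weight, amplification=1.1, steps=20):
    """Return the group's average estimate after `steps` rounds of mutual updating."""
    estimates = list(anchors)
    for _ in range(steps):
        group_view = sum(estimates) / len(estimates)
        estimates = [
            anchor_weight * anchor + (1 - anchor_weight) * amplification * group_view
            for anchor in anchors
        ]
    return sum(estimates) / len(estimates)

# An 'echo chamber': ten instances of the same model, same starting view,
# no independent anchor. The feedback compounds every round.
print(run(anchors=[1.0] * 10, anchor_weight=0.0))   # grows to roughly 6.7 after 20 rounds

# A diverse team: the same average starting view, but each member keeps half
# an eye on its own independent perspective, which damps the runaway.
print(run(anchors=[0.5, 0.8, 1.0, 1.2, 1.5], anchor_weight=0.5))  # settles near 1.1
```

The point of the sketch is not the numbers but the shape of the behaviour: the only difference between the two runs is whether the members of the ‘team’ bring anything independent to the table.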
Assuming the trajectory of technology continues into the age of AI, intelligent tools will be used wherever possible to do tasks currently done by humans. Over time, every aspect of business will be decided or influenced by digital systems, using digital tools, operating on digital objects, to produce outcomes that will be digital in nature before they transition into the physical world.
Consequently, there will not be many risks that do not have a very significant digital element. It could therefore be argued that managing cyber-, information- or digital-risk (whichever term you prefer) will be inseparable from the majority of business risks. In the future, the current construct of a CISO function managing information risk separately from many of the other corporate risk areas might seem quaint. It is doubtful whether any area of business risk management will be able to claim that it ‘doesn’t do technology’, and it will be more important than ever for technology risk to be managed with an intimate and universal understanding of the business.
We can improve our understanding of risk by considering technology components as people, at least at a conceptual level. Society is already there in many respects and, as AI solutions emerge over the years and decades to come, this convergence is only going to accelerate. An AI model’s decisions are based on an unpredictable array of inputs that will change over time. They are based on a set of values that needs to be maintained in line with business and ethical values. But most importantly, these models will learn: from their own experiences, and from each other. That sounds far more like a human actor than a hammer.
Tom Burton suggests that we can take lessons from managing human risk and apply them to digital risk, through measures that businesses can adopt immediately.
There is a lot to be optimistic about in the future. There will be change, and the need to adapt, but the pace of change and the breadth of its impact demands that we take an objective approach to understanding and managing risk—hope is not a strategy.
If we do not understand something, then our trust in it must decrease as a consequence. This does not mean that we should not employ it; after all, the trust we have in our people and our partners isn't binary. But we put controls and frameworks in place to limit the damage that people can do, in proportion to the trust we place in them.
We need to treat technologies that demonstrate human traits in a similar way.
With over 20 years of experience in business, IT, and security leadership roles, including several C-suite positions, Tom has an acute ability to distil and simplify complex security problems, from high-altitude discussions about business risk with the board, to detailed discussions about architecture, technology good practice, and security remediation with delivery teams. With a tenacious drive to enhance cyber security and efficiency, Tom has spent a significant amount of time in the Defence, Aerospace, Manufacturing, Pharmaceuticals, High Tech, and Government industries, and has developed an approach based on applying engineering principles to deliver sustainable business change.
If you would like to speak to Tom or anyone from the Cyber Security team, please use the form below.
Cambridge Management Consulting (Cambridge MC) is an international consulting firm that helps companies of all sizes have a better impact on the world. Founded in Cambridge, UK, initially to help the start-up community, Cambridge MC has grown to over 160 consultants working on projects in 20 countries.
Our capabilities focus on supporting the private and public sector with their people, process and digital technology challenges.
For more information visit www.cambridgemc.com or get in touch below.