AI is rarely out of the headlines these days, with experts and developers expressing widely differing levels of concern about how much of a threat to human existence it poses.
On the one hand there are those who view it in a wholly positive light and see it improving the lives of millions of people as its applications (particularly, perhaps, medical ones) grow and make life easier and safer. On the other hand are those who see it as a clear and present danger to human existence, with the possibility of an ‘extinction event’ occurring in the not-too-distant future. An article in a recent edition of BMJ Global Health1 helps to clarify the issues in non-technical language.
The authors suggest there are three categories of threat to human health and well-being from the misuse of AI. Firstly, there is the threat to democracy, liberty and privacy. The enhanced ability to process vast amounts of data, develop targeted misinformation and implement enhanced systems of surveillance could lead to increased societal division and the entrenchment of inequalities.
Secondly, there are threats to peace and safety caused by the ability to develop and deploy lethal autonomous weapon systems (LAWS), which combine enhanced lethal capacity with the dehumanisation of the use of lethal force.
Thirdly, there is the threat to human work and livelihoods as a result of the large-scale replacement of work and employment through AI-driven automation. The subsequent health outcomes of widespread unemployment are likely to be increasingly adverse for physical, mental and spiritual health worldwide.2
We also face the existential threat of the emergence of self-improving Artificial General Intelligence (AGI). This could compound all the problems listed above, disrupt the systems we depend on, consume the resources we rely on and ultimately attack or subjugate humans.
Apparently the simplistic ‘couldn’t we just turn them off?’ solution isn’t tenable: by the time they posed an obvious threat, we could be too dependent on the continued functioning of multiple networked AI and AGI systems to survive without them.
Another area for concern is how interaction with intelligent machines may affect the emotional development of children.3 Research by Kate Darling4 indicates that children who grow up interacting and playing with robotic pets are well aware that the robots are not alive, but they understand them as being ‘alive enough’ to be a companion or a friend. It seems many children develop a new category – or new way of thinking – about their robotic toys.
As one group of researchers wrote: “It may well be that a generational shift occurs wherein those children who grow up knowing and interacting with life-like robots will understand them in fundamentally different ways from previous generations.” 5 In other words, how might human relationships become distorted in the future if children increasingly learn about the meaning of love and intimacy from their interactions with machines?
So how do we respond to all this? It is good to remind ourselves that we are all created in God’s image, and that human creativity, imagination, the ability to do science and medicine and develop useful technology like AI all result from our God-given capacity. Unfortunately of course we are not perfect, so the freedom God has given us allows us to do harm as well as good. Our capacity for self-delusion and arrogant pride can also stop us seeing the potentially destructive consequences of what we may create.
We face the age-old dilemma of whether we should do or create something just because we can. History suggests that we almost always choose to do first and only consider the necessary ethical and behavioural constraints later. It seems to me that with AGI there must be international monitoring and agreement about boundaries and precautions to limit and control the development of this technology, which we are only beginning to grapple with. We need to lobby our elected representatives to press for the setting up of an international AI/AGI monitoring body. This is perhaps especially needed from those of us living in the UK, as our current Prime Minister wants to establish the UK as a key hub for AI development and regulation.5
We can, I think, take some encouragement from the nuclear industry, where we have an immensely powerful technology that could be used for the destruction of mankind as well as for the (not risk-free) generation of electricity. Knowing the likely outcome, the nations of the world that have the capacity have managed, by the grace of God, not to use a nuclear bomb in war for the last 78 years.
There are international agencies actively monitoring the production and use of nuclear materials. Surely we urgently need the same for AI, to ensure we can reap the benefits of this technology whilst minimising the risks and harms. Unfortunately AGI may prove much harder to control than nuclear power, but it is a challenge that, as God’s vice-regents on Earth, we cannot afford not to meet.
This post first appeared on the PRIME monthly international email. Reposted with permission.
Images – All images were created by PRIME’s PR & Communications Manager using AI with Vecstock.
- Religion as a social force in health: complexities and contradictions. BMJ 2023; 382 doi: https://doi.org/10.1136/bmj-2023-076817
Dr Huw Morgan is a retired GP Training Programme Director in Bristol, UK and a former PRIME Education Lead and Executive Member. This article is based on a previous personal blog post by Huw Morgan.