AI and its challenges to humanity (Ethics of AI blog #2)

Dave Strugnell | June 2018
Before we get down to what AI might mean for humanity, I guess we need to start by asking what we mean by AI.
Any term that captures the zeitgeist in the way that Artificial Intelligence does must expect to be abused. And indeed, in any conversation two human beings may have about AI, three-quarters of their differences are likely to be down to definitional discrepancies, and a neutral observer may find herself reminded of the famous exchange between Alice and Humpty Dumpty:
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean – neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master – that’s all.”
So we should start by defining some terms to give shape to the conversation from here on out. AI as we know it has its origins in a 1956 conference organised by computer scientist John McCarthy, which brought together for the first time researchers from a variety of cognate disciplines to explore “…the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Six decades later, dictionary definitions of AI remain rooted in a view of the field as a sub-discipline of computer science:
- Oxford English Dictionary: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
- Chambers (almost identically): “The development and use of computer systems that can perform some of the functions normally associated with human intelligence, such as learning, problem-solving, decision-making, and pattern recognition.”
- Merriam-Webster: “A branch of computer science dealing with the simulation of intelligent behaviour in computers.”
All of these are so broad as to be unhelpful in any practical discussion of the field. For a more pragmatic dissection, Kris Hammond’s 2015 Computerworld article is about as on-the-money as it gets. He starts by defining three different flavours of AI that vary in their focus and objectives:
- Strong AI aims to have machines simulate human reasoning as closely as possible, in order to emulate human cognition and decision-making.
- By contrast, Weak AI is focussed only on the outcomes, not on the process; hence, emulation of human neural pathways is irrelevant to the purpose, and what matters is the quality of the output.
- And then a third stream flows between these two pillars: here human reasoning informs the process but is not a requirement to which the research and development must hold.
Another key distinction made by Hammond is between narrow and general AI (the latter commonly referred to as Artificial General Intelligence, or AGI). The narrow variety is directed at a specific task, and at maximising some objective function in relation to that task. Pocket calculators would be an excellent old-school example of this; more recently, we’ve seen the likes of Deep Blue and AlphaGo, in the strategy-game domains of chess and Go respectively. The latter is a prime example of how far narrow AI has come in recent years: as recently as late 2015, the original AlphaGo became the first machine to beat a human champion at a game whose mastery had previously been thought to be so deeply rooted in human creativity that machine supremacy would remain forever a pipe dream.
It is the development of AlphaGo since then, however, that points towards the possibility of generalisation. In October 2017 came its next incarnation, AlphaGo Zero, which taught itself Go strategy (which is to say, it used no past human game data, learning instead by playing games against itself and progressively refining its own play) well enough to beat its predecessor 100-0. Hot on its heels followed AlphaZero, which generalised beyond Go by learning chess and shogi as well, beating world-champion computer programs in each discipline along with a version of AlphaGo Zero. And that shows the beginnings of the inevitable march towards AGI: the ability to generalise across domains and reason intelligently in the way that humans (sometimes) do. While that progress is restricted to games of strategy for now, the goal of AGI is an intelligence that can reason logically and abstractly at the level of human beings.
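To make the idea of self-play a little more concrete, here is a minimal sketch in Python of the same principle at toy scale. It is emphatically not DeepMind’s algorithm (AlphaGo Zero combines deep neural networks with Monte Carlo tree search); it simply shows an agent that starts from zero knowledge, plays both sides of tic-tac-toe against itself, and nudges a table of state values towards the outcomes it actually experiences. All names and parameters are illustrative.

```python
import random
from collections import defaultdict

# The eight winning lines on a 3x3 board, with cells indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

class SelfPlayLearner:
    """Learns a value for every board state it visits, from X's perspective,
    purely by playing against itself -- no human game data involved."""

    def __init__(self, lr=0.2, epsilon=0.1):
        self.values = defaultdict(float)  # state string -> estimated value for X
        self.lr, self.epsilon = lr, epsilon

    @staticmethod
    def _after(board, move, player):
        return "".join(player if i == move else c for i, c in enumerate(board))

    def choose(self, board, player):
        moves = legal_moves(board)
        if random.random() < self.epsilon:          # explore occasionally
            return random.choice(moves)
        sign = 1 if player == "X" else -1           # O prefers states that are bad for X
        return max(moves, key=lambda m: sign * self.values[self._after(board, m, player)])

    def play_one_game(self):
        board, player, history = " " * 9, "X", []
        while True:
            board = self._after(board, self.choose(board, player), player)
            history.append(board)
            w = winner(board)
            if w or not legal_moves(board):
                outcome = 1.0 if w == "X" else (-1.0 if w == "O" else 0.0)
                # Pull every visited state's value towards the actual result:
                # this is the "learning from its own games" step.
                for state in history:
                    self.values[state] += self.lr * (outcome - self.values[state])
                return outcome
            player = "O" if player == "X" else "X"

if __name__ == "__main__":
    learner = SelfPlayLearner()
    results = [learner.play_one_game() for _ in range(20000)]
    last = results[-1000:]
    print("last 1,000 games -- X wins:", last.count(1.0),
          "draws:", last.count(0.0), "O wins:", last.count(-1.0))
```

Both “players” here are the same learner, so as its value table improves, so does its opposition: that is the essence of the self-play loop that AlphaGo Zero scaled up, with vastly more machinery, to Go, chess and shogi.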
But why stop there? If an AGI with the capability of the best of us were to emerge, it would by definition be at least as good as the best of us at most narrow tasks (chess, calculating cube roots, maximising firm revenue, and so on). It seems likely that a Superintelligence, to borrow Nick Bostrom’s phrase, would then very quickly emerge from AGI: one smarter than humans in every conceivable way. That would signal an event horizon, the technological singularity, which is probably beyond our current abilities to imagine, never mind comprehend. I’ll unpack Superintelligence a little more in the next blog post, on AI-induced existential risk.
AI clearly has profound implications for philosophy and how we understand and engage with the world. Our metaphysics, our concept of reality, may have to evolve, as may our epistemology, our understanding of what knowledge is and how sentient beings come to know. Challenges in the political realm are already rearing their heads (in the blog post after the next, I’ll outline how the issue of legal personhood for robots and other forms of AI is already causing lawmakers to scratch their heads), and we may have to rearrange our aesthetics to incorporate new views of beauty, art and creativity in the era of AI: can a computer-generated work of art be as beautiful as an equivalent human work? And of course, ethical landmines are buried all across the AI landscape: both new, previously unthought-of challenges and alternative perspectives on age-old problems.
In attempting to deal with any meaty issue in a forum as light as a series of blog posts, the risk of over-simplification is always present; in the case of cataloguing the ethical implications of AI, it’s a virtual certainty. So without any claims as to the exhaustiveness of this list, let me pick out four main areas in which it seems to me that AI poses important moral questions to us, ranging from those that play out most materially far in the future to those that are pertinent right here and now:
- Existential risk to *Homo sapiens*, the critical role for AI alignment, and how we need to go about developing AI responsibly;
- How AIs should be treated by humans, and the flip side: how we should expect to be treated by the superior intelligences we may, and probably will, create;
- The economics of AI: whether progress will make the world a happier or unhappier, and a more or less equal, place for its citizens;
- The risk of embedding human bias and error into our machine algorithms, with undesirable social implications.
In the next four blog posts, we’ll deal with each of these in turn. Stay tuned!