MV#11 — AI, risk, fairness & responsibility — John Zerilli

AI is already changing the world. It’s tempting to assume that AI will be so transformative that we’ll inevitably fail to harness it correctly, succumbing to its Promethean flames.

While caution is due, it’s instructive to note that in many respects AI does not create entirely new challenges but rather exacerbates or uncovers existing ones. This is one of the key themes that emerge in this discussion with John Zerilli. John is a philosopher specializing in AI, Data, and the Rule of Law at the University of Edinburgh, and he also holds positions at the Oxford Institute for Ethics in AI and the Centre for the Future of Intelligence in Cambridge.

For instance, John points out that some of the demands we make of AI with respect to fairness are simply impossible to fulfill — not due to some technological or moral failing on the part of AI, but because our demands are in mathematical conflict with one another. No procedure, whether executed by a human or a machine, can consistently meet these requirements. We have AI research to thank for illuminating this.
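
To make the "mathematical conflict" a bit more concrete: John doesn't name a specific result in the episode, but the best-known version comes from the debate over criminal risk scores (Chouldechova 2017; Kleinberg, Mullainathan and Raghavan 2016). A minimal sketch, assuming that is the kind of result being alluded to:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

For a group $g$ with base rate $p_g$, any classifier's error rates satisfy
\[
  \mathrm{FPR}_g \;=\; \frac{p_g}{1-p_g}\cdot
  \frac{1-\mathrm{PPV}_g}{\mathrm{PPV}_g}\cdot\bigl(1-\mathrm{FNR}_g\bigr),
\]
where $\mathrm{PPV}$ is the positive predictive value, $\mathrm{FNR}$ the false
negative rate, and $\mathrm{FPR}$ the false positive rate. If two groups $a$ and
$b$ have different base rates ($p_a \neq p_b$) and we demand both
$\mathrm{PPV}_a = \mathrm{PPV}_b$ and $\mathrm{FNR}_a = \mathrm{FNR}_b$, the
right-hand sides differ, so $\mathrm{FPR}_a \neq \mathrm{FPR}_b$. The three
fairness demands cannot all be met at once, no matter who or what makes the
decision.

\end{document}
```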

In contrast, concerns over a ‘responsibility gap’ in AI seem to overlook the legal and social progress made over the last few centuries, which has, for example, allowed us to detach culpability from individuals and assign it to corporations instead.

John also notes that some of the dangers of AI may be more commonplace than we imagine — such as the use of deep fakes to supercharge hacking, or our psychological tendency to become complacent with processes that mostly work, leading us to an unwarranted reliance on AI.

We get pretty speculative toward the end, when John throws in a fascinating thought that I’ve been chewing over: is the intelligence we have as humans linked to our metabolic dependence on the external world? It’s not uncommon to hear that AI may need to be embodied to keep progressing. Intuitively, the ability to manipulate the world and explore its response does seem to be an important feature of learning, one that would move us beyond the somewhat Pavlovian regime currently applied to AI. But could being incarnate also be important?

There are features of our bodies, and of our processing of food, that add extra dimensions to our experience. Small children, much to the horror of their parents, love to put objects in their mouths. That’s got to be a great way of appreciating things and understanding their size, texture, and solidity. Our mouths are rooms stuffed with sensors. Flavour, taste, the feel of one’s own heartbeat, weariness, moodiness — there are physical and physiological aspects of experience that relate to being incarnate.

I’m not convinced that these are necessary features for a general intelligence, certainly not conceptually, but perhaps they contribute significantly to our self-awareness and our awareness of the porous boundaries between ourselves and the world. And it may be hard to imbue a machine with this knowledge without making it incarnate.
