When Professor Vallor, specialist in the ethics of data, robotics and artificial intelligence (AI), heard about plans for a research hub in this area at Edinburgh’s new interdisciplinary Edinburgh Futures Institute (EFI), she felt compelled to put herself forward to be its director. The University welcomed her expertise, and she relocated from her home in California, United States, in February 2020, just before the coronavirus pandemic hit.
Within six months, the Centre for Technomoral Futures, which is sponsored by a generous donation from large-scale investment business Baillie Gifford, was ready to launch. For her it represents the embodiment of her own long-held vision: “I had been looking for some time to start an interdisciplinary research centre in AI and data ethics, one that unites the social, moral, and technical knowledge that traditionally lives only in the kind of closed research silos that define most academic disciplines.”
Having felt for some time that this is an emerging and essential field, she knew it was being held back by the fact that scholars can be confined to a single subject, such as philosophy, computer science, sociology or law. As a result, their chances to connect with other academics exploring different methodologies and theories can be limited.
“Often, we have to seek these connections beyond our own institutions,” explains Professor Vallor. “I saw the chance to lead the Centre for Technomoral Futures in the Edinburgh Futures Institute as an opportunity to create a new way to do this kind of research, one that can be a model for the field’s future.”
Key to the concept behind the Centre is a reframing of how technology and ethics link up: “It’s common to depict technical experts as having no special need for social and moral knowledge, and equally common to treat social and moral knowledge as wholly independent of technical expertise.”
However, the two have become increasingly intertwined, and for Professor Vallor the time has come to fully embrace the impact of this on contemporary life: “In 2022, it is simply impossible to talk about a good society or how to flourish as humans without talking about the built, engineered world and how we design a sustainable future in it.”
How humans flourish
Human flourishing is a theme Professor Vallor references in talks, interviews, podcasts and her writing, citing the early use of the word ‘eudaimonia’ by Greek philosophers, which refers to living well not just as individuals but as part of a community.
“We are interdependent social animals who share our planet with other living things, and just as a garden isn’t flourishing if only one plant is healthy, you can’t flourish in a community that’s collapsing,” she says. “So human flourishing is a way to talk about the kind of good that ethics is ultimately seeking. It’s also linked with the notion of ‘the good life’ where that’s not just material success or power but living well morally, politically, physically and mentally.”
How does this tie in with technology? “Technologies have always been essential for human flourishing; we can’t survive without techniques and tools, much less live well,” Professor Vallor replies.
The examples are many: sanitation, clean water, vaccines, antibiotics and surgery all exist through technological advances and are vital to our living well. On the flip side, some technological advances, in particular industrial technologies such as air and road transportation, are now widely recognised as posing a significant threat to the flourishing of planetary life.
Professor Vallor uses AI as a prime example of the crossroads humanity finds itself at.
“AI, if we use it irresponsibly, can amplify the social and economic inequalities that continue to prevent shared human flourishing. It could also make it harder to meet carbon emission reduction targets,” she says. “Alternatively, we can use technologies like AI, clean energy and sensor networks as new tools for securing human flourishing and strengthening communities and living systems. We are at a critical point where we have to decide which of these directions will define and guide AI’s future development.”
The Centre is here to help steer things in the right direction: “The Centre for Technomoral Futures is about building the special kind of knowledge that we need to make good technologies, and the kind of knowledge we need to make good lives and societies in a fragile world, recognising that these tasks can no longer be seen as separate.”
Professor Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at EFI and also holds an appointment in the School of Philosophy, Psychology and Language Sciences, has acquired this special kind of knowledge through a career in both academia and industry. Her impressive CV includes roles such as Professor of Philosophy at Santa Clara University and AI Ethicist at Google. Straddling both worlds has enabled her to better understand the need for good technologies and how to encourage technologists to consider the societal purpose of their products from the early stages of development rather than as an afterthought.
“If you want to shape how people build, govern or use technologies, you cannot be out of your depth,” she says. “You have to know a fair amount about how these technologies actually work, not just what you read in popular media or experience as a consumer. You also have to understand the motives and incentives and organisational cultures that drive technology development.”
“So, for me,” she continues, “having the chance to see this from the inside at Google, while also helping their internal teams better understand the ethical impact of AI products, was a vital learning experience; one of the most important and rewarding of my life. My research is far better than it would have been without that time in industry.”
Professor Vallor’s current research involves two interdisciplinary collaborations funded by UK Research and Innovation’s (UKRI) Trustworthy Autonomous Systems programme. One is led in the University’s School of Informatics and the other in the School of Philosophy, Psychology and Language Sciences.
“In both, we are looking at the challenge of responsibility for actions or decisions taken by autonomous systems,” she explains. “When an AI system or robot is operating in the world without human direction or immediate control, or when the systems themselves are too complex for any human to fully grasp, or when they are built and maintained by a widely distributed network of people and organisations, who is responsible when the system does something unexpected or unwanted?
“Systems like this are already in use and affecting our lives, and we need humans to be answerable for the power we allow these tools to have. It’s a very practical, immediate research challenge that demands better governance solutions than we have got so far.”
Professor Vallor believes the Centre’s involvement will “amplify and extend” the impact of this research. She plans to share the results within the University community, giving internal stakeholders the knowledge needed to lead in responsible and ethical AI, and support external organisations in improving their own practices.
Sharing knowledge and expertise beyond the University is a key part of Professor Vallor’s role, which is evident in the number of boards, committees and groups she has joined since her arrival at Edinburgh: “I serve as an advisor to several initiatives in Scottish and UK government, and chair Scotland’s Data Delivery Group. The Centre and EFI are also linked with the larger Data-Driven Innovation programme that is part of the Edinburgh and South-East Scotland City Region Deal, so we lend support to their objectives as well. As part of that, I work to help industry, policymakers and regulatory bodies understand emerging ethical questions about AI and data that need to be addressed.”
Another aspect of Professor Vallor’s research is her forthcoming book The AI Mirror: Rebuilding Our Humanity In An Age of Machine Thinking. Aimed at a broad audience, it offers readers a chance to explore some of the more philosophical theories around the subject.
“I’m asking questions about how AI is changing our sense of who we are, our vision of what it is to be human, of what’s good or valuable in humanity, and of what kind of futures we should bring about for the next generations,” she says. “Along the way it tries to educate the reader about what AI today really is, what it isn’t, and what it could be that we haven’t imagined yet. But just as a mirror is really a way to look at ourselves, I’d say the core of the book is more about who we are with AI, who we are becoming, and who we want to be.”
A growing impact
As well as the impact of Professor Vallor’s own research, collaborations and advisory roles, the Centre has been making serious headway toward many of the goals it set out to achieve when it launched in August 2020.
A new website, increased academic and operations staff and a public events series, Technomoral Conversations, have all contributed to its growth over the past 24 months.
However, for Professor Vallor, it is the impressive efforts of the postgraduate researchers affiliated with the Centre and sponsored by Baillie Gifford that have ‘met and exceeded’ the expectations laid down in the first two years of its existence: “It’s really our PhD researchers who I think have the most potential for impact down the line. They’re already having it, here at the University and beyond.”
Take, for example, Joe Noteboom who is based in the Moray House School of Education and Sport at Edinburgh. “He now works with the University’s AI and Data Ethics Advisory Board,” says Professor Vallor, “bringing his expertise to the wider University on the governance of learning analytics and operational data in higher education.” In addition, he has helped evaluate EFI’s new postgraduate curriculum.
Over in the School of Informatics, Savina Kim works on algorithmic fairness in financial services and has recently been working with Smart Data Foundry, as Professor Vallor explains: “She hosted an event with the Foundry on algorithmic bias for International Women’s Day and authored a white paper on managing Fair AI for the Foundry’s website.”
“Another of our cohort,” she continues, “Aditya Singh from the Roslin Institute, whose thesis explores models of data governance in agriculture, worked last year with the Open Data Institute’s Data Institutions programme to help them think more deeply about approaches to data governance and stewardship that meaningfully empower communities.” Aditya has also been prolific in publishing public-facing articles and blog posts on data governance and stewardship for the Fair Data Society, the Platform Cooperative Consortium, BotPopuli and the AI Now Institute.
The University’s Usher Institute is also linked with the Centre through PhD researcher Jamie Webb, who works on deliberative democracy in AI-driven decision making in health. “As an extension of his research he has worked on the UK’s Pandemic Ethics Accelerator, producing rapid ethics reviews and tracking public engagement work related to the pandemic,” says Professor Vallor. “He also gave evidence and spoke at an event in the Houses of Parliament in May.”
Yet another Centre PhD researcher in Informatics, Bhargavi Ganesh, recently won a prestigious best paper award at the WeRobot law conference in the United States for her work co-authored with Professor Vallor and Professor Stuart Anderson, which grew out of her contributions to the Edinburgh-led UKRI Trustworthy Autonomous Systems project on AI governance.
These and other PhD students in the Centre for Technomoral Futures are having an impact locally, nationally and internationally: “They come from computing, law and policy, humanities, education, arts, social and health sciences backgrounds and have formed an incredibly vibrant community. They have already authored research papers together, produced work for policymakers and wider publics, and formed links to other universities, research communities and projects.”
Professor Vallor, who acts as a mentor to all the PhD students in the Centre, is clearly proud of what they have achieved so far and has high hopes for what’s to come: “I can’t wait to see how their work will progress.”
This piece was originally published on the Edinburgh Impact website.
Photography: Chris Close