Making AI work for everyone

Promising Trouble’s Executive Director Rachel Coldicutt OBE has been speaking at CogX Festival and Partners for a New Economy, explaining why ethics still matter in the age of AI.

Illustration: Elly Jahnz

The hype surrounding the planned AI Safety Summit this November has prompted widespread discussion about the risks and opportunities of AI, and its role in shaping our future.

These are legitimate questions, but one that is just as pressing is: how can we ensure that AI is built on solid foundations, enabling it to contribute to a more equitable and prosperous society?

Here, we set out three key issues that must be addressed to ensure that AI works for everyone.

1. Inherent bias and community data

Computer scientist Meredith Broussard explains that some types of AI, the kind powered by large language models (LLMs), are a “mathematical method for prediction”, or “statistics on steroids”. This type of AI can spot patterns and analyse certain kinds of data far more effectively than a human might, but the range of inputs it can respond to is limited by a number of factors, including the availability of relevant data sets.

One of the primary risks with the development of LLMs is their capacity to exacerbate biases and harms. It is now well established in machine learning that many models do not draw upon representative data, which contributes to biased outputs. As David Spiegelhalter says, “[W]hen we want to use data to draw broader conclusions about what is going on around us, then the quality of the data becomes paramount, and we need to be alert to the kind of systematic biases that can jeopardize the reliability of any claims.”

Early evangelists for data and digital technologies upheld the Value-Neutrality Thesis, declaring technology to be a neutral tool. But data is neither neutral nor foolproof: “garbage in, garbage out” is a well-known computing axiom describing what goes wrong when automated processes rely on poor-quality data. And changing social realities and attitudes mean that historic data sets and approaches to classification may be inappropriate inputs for contemporary automated decision-making.

Community data (small data, rather than big data) is part of the answer to this challenge. By drawing on data that is not normally found in ‘mainstream’ data sets – in other words, data created by or with people rather than data about people – it is possible to create better policy, products and services, and to make communities visible.

2. Environmental harms

AI requires significant volumes of water, energy, and materials, such as metalloids and rare metals, as well as land. For instance, graphics processing units (GPUs), in high demand during the current ‘AI boom’, are manufactured using an array of rare metals, such as tantalum and palladium.

LLMs are particularly notable for their excessive water consumption, a result of the liquid cooling used in data centres, with an estimated 700,000 litres of water required to train OpenAI’s GPT-3 in Microsoft’s data centres. Data centres are already competing with people and agriculture for scarce resources: in three boroughs in west London, the grid has run out of power to support new house building because of the number of data centres being built along the M4 corridor, while in Cambridge, water scarcity is one factor slowing down the house-building ambitions that are essential to turning the city into a “science capital”.

As things stand, we run the risk of baking dirty models into the system, harming the path to net zero. Yet solutions do exist. If the Government wishes to see AI models built into the future of the UK economy, we need to see more action to champion greener AI models, and the financial backing to do so.

3. The policy influence of corporate actors

There is significant research knowledge and domain expertise within industry; indeed, some commentators have been raising the risk of corporate capture of AI research activities for several years. Tech firms’ high levels of investment in lobbying activity are well known, and that lobbying has been incredibly effective in the UK.

While AI businesses should be one of the stakeholder groups consulted on policy development, the intent of the recently announced AI Safety Summit appears to be shaped by corporate concerns; the failure to engage civil society and the wider research community, let alone to reflect the broader public interest, is both undemocratic and unrepresentative. In the lead-up to the Summit this November, it is incumbent upon us all to highlight the vital contribution of a multiplicity of actors in creating AI that works for everyone – as well as the risks of failing to do so.

So, where from here?

AI can and should contribute to a more equitable and prosperous society for everyone. But in a just society, harm should not be regarded as a necessary outcome of innovation; in fact, innovation should be calibrated to produce societal benefits.

In many technology policy debates, the “opportunities” created by new technologies are depicted as necessary exponential economic or other improvements that may, at some point, cause sufficient public harm to require regulatory guardrails. This ex-post approach means the harms caused by technologies and their applications are often not regulated until they are already widespread, and so become difficult, if not impossible, to rectify. This, in turn, gives rise to complex legislation and even more complex regulatory environments; such an approach is not well suited to rapidly changing technologies and markets.

We are in the early days of the AI revolution, and do not yet know how the long-term impacts of automated decisions will play out. But now is the time for expansive and meaningful action to create the foundations of our AI future.
