Public Voices in AI is a new project which aims to ensure that public voices are front and centre in artificial intelligence research, development and policy.
UK Research and Innovation (UKRI) is investing £850,000 in Public Voices in AI, led by the Digital Good Network at the University of Sheffield.
Professor Helen Kennedy, Director of the ESRC Digital Good Network, will be the project lead, with support from Dr Ros Williams, Digital Good Network Associate Director.
Dr Susan Oman of The Information School leads the evidence review work package, which aims to understand how public voices are currently included in responsible AI (RAI) research, policy and practice. This work package will:
- draw evidence together (academic, grey, other literatures)
- review evidence production (methods, motivations, money)
- categorise and assess how public voices have been included
- develop an open, accessible database that is findable and reusable in line with FAIR principles, for stakeholders across the RAI community and for individuals and communities who want access to RAI resources
- publish reports for sectors, domains and stakeholder audiences
The project builds on previous work by Dr Oman, Professor Kennedy and others on Living With Data, which explored the role that inequalities play in shaping public perceptions of data and AI. Project collaborators include the Ada Lovelace Institute, the Alan Turing Institute, Elgon Social Research and University College London.
A central aspect of responsible AI research, development and policy is ensuring that it takes account of public hopes, concerns and experiences. With concern about the societal impacts of AI growing and pressure for its effective regulation mounting, understanding and anticipating societal needs and values can inform responsible AI development and deployment. Yet public voice is frequently missing from conversations about AI, an absence that inhibits progress in RAI. Addressing this gap is essential to enable AI research, development and policy to maximise benefits and prevent harms, and to ensure that responsible AI works for everyone.
Structural inequities in society mean that certain groups are more negatively impacted by AI deployments than others – for example, in welfare systems, at borders, in policing. Some groups have more resources and access to power to shape AI technologies than others. There is also a participation gap between those with the social capital to participate in shaping AI and those without. Public Voices in AI will therefore centre those most impacted and underrepresented.
Public Voices in AI will distribute up to £195,000 of its funding to support participatory projects with people from groups which are negatively affected by or underrepresented in AI research, development and policy. The Public Voices in AI Fund will be launched on 31st May 2024, with a closing date for applications of 20th June 2024.
Project lead Helen Kennedy said, “Public voices need to inform AI research, development and policy much more than they currently do. This project represents a commitment from UKRI and RAI UK to ensuring that happens. It brings together some of the best public voice thinkers and practitioners in the UK, and we’re excited to work with them to realise the project’s aims.”

The project will run from April 2024 to March 2025.
An online launch event will take place in early June – more information to follow soon.