
Fairness, accountability and transparency in Machine Learning? Jo Bates reports back from ACM FAT* in Atlanta, USA


A couple of weeks ago I travelled to Atlanta, USA to attend ACM FAT* – an interdisciplinary conference that addresses issues of Fairness, Accountability and Transparency in Machine Learning. Officially, I was there on the hunt for potential papers and authors to invite to submit their work to Online Information Review. However, the FAT* field is also closely related to my research interests around the politics of data and algorithms, and my teaching on the Information School’s MSc Data Science. I was keen to check out what was happening in the FAT* community, and feed my findings back into my teaching and into two new projects I am working on in this field: CYCAT, and the supervision of a new PhD student – Ruth Beresford – whose research will investigate algorithmic bias in collaboration with the Department for Work and Pensions.

I was privileged to hear a number of great papers – the best of which engaged critically with issues of social context and justice. My two favourite papers, which I highly recommend to anyone interested in these topics, were:

Fairness and Abstraction in Sociotechnical Systems (Selbst et al., 2019) was a great paper that spoke to a sense of unease I have felt recently about the increasing amount of technical work attempting to solve the ‘algorithmic bias problem’. The paper – written by an interdisciplinary team of social and computer scientists – not only speaks to such concerns in an engaging and insightful way, but also offers a strong analytical framework that illuminates five “traps” that such technically driven work often falls into:
• Framing: The authors begin by critiquing the ‘algorithmic framing’ common in data science. In this abstraction, the focus of the data/computer scientist is simply on evaluating, for example, whether the model has high accuracy. They point out that such a framing is ineffective for addressing issues of bias and fairness. Expanding this algorithmic framing to a ‘data frame’, which also involves directly interrogating the data inputs and outputs for issues of bias and fairness, can address some of these issues, but it too has limitations. Instead, they advocate that data scientists adopt a socio-technical framing which explicitly recognises that any ML model is part of a socio-technical system – meaning the decisions made by humans and human institutions need to be brought inside the abstraction boundary. I couldn’t agree more!
• Formalism: This is an important ‘trap’ for computer scientists and mathematicians to be aware of. It relates to the failure to account for the complexity of social concepts such as fairness and bias. The meaning of such concepts is contextual and contestable – they cannot be reduced to mathematical formalisms! (A sketch of what such a formalism looks like follows this list.)
• The Ripple Effect: This ‘trap’ points to a lack of awareness that when technical solutions are embedded into existing social systems, they can impact upon the behaviours and values of those within the social system – often in unexpected ways.
• Portability: This ‘trap’ relates to the problems inherent in repurposing algorithmic solutions from one social context to another – and the resulting inaccuracy, misleading results, and potential for harm.
• Solutionism: Finally, the simple observation that technologists often fail to recognise that the best solution to a problem may not involve technology at all!
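To make the formalism trap a little more concrete for the data scientists among my students, here is a minimal Python sketch – my own illustration, not something from Selbst et al.’s paper – of one widely used mathematical formalism of fairness, demographic parity, which reduces ‘fairness’ to a single group-level statistic. The function name and toy data are invented for the example.

    # An illustrative formalism of 'fairness': demographic parity, i.e. the
    # difference in positive-prediction rates between two groups.
    # The data below is made up purely for illustration.

    def demographic_parity_difference(predictions, groups):
        """Return P(prediction=1 | group A) - P(prediction=1 | group B)."""
        rates = {}
        for g in ('A', 'B'):
            outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
            rates[g] = sum(outcomes) / len(outcomes)
        return rates['A'] - rates['B']

    preds = [1, 0, 1, 1, 0, 0, 1, 0]                    # hypothetical model outputs
    groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']   # hypothetical group labels

    print(demographic_parity_difference(preds, groups))  # 0.5

A value of 0 would ‘satisfy’ this particular formalism – but, as the paper argues, whether the underlying decisions are actually fair remains a contextual and contestable question that no such statistic can settle.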

Putting some of these ideas into practice was my second favourite paper – which also won the prize for best technical and interdisciplinary paper – Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments (Green and Chen, 2019).

With a focus on risk assessments used in the US criminal justice system, the authors argue that since risk assessment tools do not actually make decisions, but are used to inform judges’ decisions, it is important to understand how people actually interpret and use the outputs of these tools. Their study is based on an experiment involving Amazon Mechanical Turk workers rather than actual judges; nevertheless, their findings are concerning. Participants under-performed the risk assessment tool even when presented with its prediction; they were unable to effectively evaluate the accuracy of their own decisions or those of the tool; and, most concerning of all, they exhibited biased interaction with the tool’s prediction, whereby using risk assessments in decision making led participants to make higher risk predictions for black defendants and lower risk predictions for white defendants. Clearly, these findings need examining ‘in the wild’, but they are worrying, and they underline the importance of the socio-technical framing called for by Selbst et al.

While there were some excellent papers presented at FAT*, there were also a good few that fell into some of the ‘traps’ of abstraction identified in Selbst et al.’s paper – and this resulted in some interesting commentary about the nature and direction of the field. For example, important questions were raised by Stanford PhD student Pratyusha Ria Kalluri, who drew upon Catherine D'Ignazio and Lauren Klein’s forthcoming book Data Feminism to question the language of fairness, accountability and transparency, and how it relates to notions of justice. Her comments received a lot of support from attendees, both in the room and online.


Pratyusha’s observations reflect many of my own concerns – and those of others in the Critical Data Studies space – about what it means to work across disciplinary boundaries in this field, and the politics of engaging in such work with people who may have very different agendas, assumptions, and understandings of what is at stake. It can sometimes be difficult to know how best to navigate these tensions – but it feels like 2019 could be an important moment for shaping the direction of the field.
