Algorithms

Algorithms are as biased as the human beings who create them. So how do we ensure that algorithms don’t simply amplify the biases already inherent in our societies and further entrench the human tendency to allow the past to shape the future? What does socially sustainable AI look like and will it push us to explore our own humanity in new ways?

Algorithms are part of modern life. Every time a new app appears on the market, someone, somewhere has written a bunch of algorithms to make it happen. Most are commercial products, and questions of fairness have been left almost entirely to the market. In some cases that approach might work; in others it has gone badly wrong: racial bias in predictive policing tools, gender bias in recruitment software. Recall Amazon’s failed attempt to find top-performing software engineers based on an analysis of the CVs of past applicants. It sounds sensible, but no one thought to consider the male-dominated nature of the industry when those algorithms were designed.

‘Bias is part of being human’ – Assistant Professor of the Ethics of Technology, Olya Kudina

Predictive algorithms use the past to shape the future, but human beings have been doing much the same through inductive reasoning for millennia. Olya Kudina, Assistant Professor of the Ethics/Philosophy of Technology at Delft University of Technology in the Netherlands, argues that bias is part of the human condition. From an evolutionary perspective it provides a short cut to meaning-making, a sort of muscle memory that helped our ancestors survive. Nevertheless, the split-second judgements that arise from such biases are not helpful when making long-term decisions. And although this sort of reasoning may be hard-wired, that doesn’t mean we couldn’t, or shouldn’t, be aware of it.

Julia Stoyanovich, Associate Professor at the NYU Tandon School of Engineering, maintains that new algorithms are not what is needed right now. Rather, we need to focus on understanding how to make the ones we already have more ethically aware. ‘We need to rethink the entire stack,’ she admits. This is no small task. It requires educating everyone involved in developing algorithms, as well as those who use them. It also requires us to grapple with tough questions such as: what should and shouldn’t algorithms do?

‘Fairness is deeply contextual – there is no single definition’ – Microsoft Chief Responsible AI Officer, Natasha Crampton

Natasha Crampton, Chief Responsible AI Officer at Microsoft, agrees that operationalizing fairness is difficult, even for Microsoft. So far, teams at Microsoft have approached the problem by identifying and labelling the different types of harm that algorithms can do. These are: quality-of-service harm (e.g. facial recognition technologies that work less well for some groups); allocation harm (e.g. in housing and employment decisions); and representational harm, which reinforces stereotypes by over- or understating the prominence of particular groups. Crampton explains that representational harm is the least understood at present, but that reducing all three requires real-world testing at every stage of the development cycle.

A lack of methodology and norms around the concept of fairness makes the work of engineers more difficult. ‘Fairness is deeply contextual,’ says Crampton, ‘there is no single definition.’ It is clear that different notions of fairness will arise at different times and in different places. But Stoyanovich makes an interesting suggestion: why not use the tried and tested scientific method to ascertain whether the tools we build actually work? Formulating hypotheses that can be tested and falsified would provide concrete evidence that an algorithm does what it says on the tin. Further, there should be greater transparency around how algorithms are created and deployed. As former US Congressman Will Hurd explains, engineers must be able to explain how an algorithm reaches a decision, especially if it is being deployed to consumers. ‘I don’t know’ is not good enough.
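To make Stoyanovich’s suggestion concrete, here is a minimal sketch of what a falsifiable claim about an algorithm might look like in practice. It assumes a hypothetical shortlisting model and uses one illustrative notion of fairness (similar selection rates across groups); the data, group labels and 0.05 tolerance are assumptions for illustration, not details drawn from any of the systems discussed here.

```python
# Hypothetical sketch: a falsifiable fairness check for a shortlisting model.
# Hypothesis under test: "the model's selection rate does not differ between
# demographic groups by more than 0.05 on held-out data."
# All names, data and the 0.05 tolerance are illustrative assumptions.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (shortlisted) predictions per group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def selection_rate_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Toy held-out data: 1 = shortlisted, 0 = rejected.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    TOLERANCE = 0.05  # assumed; in practice chosen per context
    gap = selection_rate_gap(preds, groups)
    print(f"Selection-rate gap: {gap:.2f}")
    if gap <= TOLERANCE:
        print("Hypothesis not rejected: selection rates are within tolerance.")
    else:
        print("Hypothesis rejected: the model treats groups differently.")
```

Different contexts would call for different fairness notions and tolerances, as Crampton’s point about context makes clear; the value of the exercise is simply that the claim is specific enough to be tested, and falsified, on real data.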

Who is responsible?

The question of responsibility looms large over AI. Who is responsible when algorithms misbehave? Stoyanovich points to the importance of distributed accountability structures to ensure that AI is used responsibly all the way from creation to application and consumer use. ‘Whose responsibility is it? Each and every one of us!’ Crampton agrees that the European Union’s approach to digital regulation, including AI, is ‘ambitious’: it places more requirements on engineers at design time, and the testing obligations placed on developers are also more demanding.

On the consumer side, Stoyanovich and Hurd agree that individuals must be able to contest decisions made by algorithms. For that to happen, there has to be far more transparency about how they work, and standards for public disclosure are key. Consumers, too, need to educate themselves rather than remain passive bystanders in this process. Perhaps Kudina’s more philosophical perspective is helpful here. She is keen to avoid what she terms a purely technical, instrumental view of AI and advocates instead for an interactionist one: AI shifts our perceptions and our societies in subtle ways, and we in turn respond to it.

Strengthening our understanding of what it means to be human

‘We’re growing with each other and we’re pushing each other’s boundaries; our ethical frameworks are co-evolving with what technology presents us with, but it doesn’t mean anything goes,’ explains Kudina. Perhaps it comes down to fear: fear of new, advanced technologies that we do not fully comprehend, and a desire to protect what we know. If we approach AI with awareness and a clear sense of agency, Kudina suggests, it may help us strengthen our understanding of what it means to be human. Science fiction books and films have been raising similar questions for decades. To finish, then, a question from Philip K. Dick: Do Androids Dream of Electric Sheep?

