Why is bias in algorithms so difficult to avoid?

Algorithms are as biased as the human beings who create them. So how do we ensure that algorithms don’t simply amplify the biases already inherent in our societies and further entrench the human tendency to allow the past to shape the future? What does socially sustainable AI look like and will it push us to explore our own humanity in new ways?

Algorithms are part of modern life. Every time a new app appears on the market, someone, somewhere has written a set of algorithms to make it happen. Most are commercial products, and issues of fairness have been left almost entirely to the markets. In some cases such an approach might work; in others it has gone badly wrong: racial bias in predictive policing tools, gender bias in recruitment software. Recall Amazon’s failed attempt to identify top-performing software engineers based on analysis of the CVs of past applicants. It sounds sensible, but no one thought to consider the male-dominated nature of the industry when the algorithms were designed.

‘Bias is part of being human’ – Assistant Professor of the Ethics of Technology, Olya Kudina

Predictive algorithms use the past to shape the future, but human beings have been doing the same through inductive reasoning for millennia. Olya Kudina, Assistant Professor of the Ethics/Philosophy of Technology at Delft University of Technology in the Netherlands, argues that bias is part of the human condition. From an evolutionary perspective it provides a shortcut to meaning-making, a sort of muscle memory that helped our ancestors survive. Nevertheless, the split-second decisions that arise from such biases are not helpful when making long-term decisions. Although this sort of reasoning may be hard-wired, that doesn’t mean we shouldn’t or couldn’t be aware of it.

Julia Stoyanovich, Associate Professor at the NYU Tandon School of Engineering, maintains that new algorithms are not what is needed right now. Rather, we need to focus on understanding how to make those we already have more ethically aware. ‘We need to rethink the entire stack,’ she admits. This is no small task. It requires the education of all those involved in the development of algorithms and of those who use them. It also requires us to grapple with tough questions like: what should and shouldn’t algorithms do?

‘Fairness is deeply contextual – there is no single definition’ – Microsoft Chief Responsible AI Officer, Natasha Crampton

Natasha Crampton, Chief Responsible AI Officer at Microsoft, agrees that operationalising fairness is difficult, even for Microsoft. Thus far, teams at Microsoft have approached the problem by identifying and labelling the different types of harm that algorithms might do. These are: quality-of-service harm (e.g. in facial recognition technologies); allocation harm (e.g. in housing and employment decisions); and representational harm, which involves reinforcing stereotypes by over- or under-stating the prominence of particular groups. Crampton explains that the last is the least understood at present, but that in order to reduce all of these causes of harm, real-world testing at all stages of the development cycle is needed.

A lack of methodology and norms around the concept of fairness makes the work of engineers more difficult. ‘Fairness is deeply contextual,’ says Crampton, ‘there is no single definition.’ It is clear that different notions of fairness will arise at different times and in different places. But Stoyanovich makes an interesting suggestion: why not use the tried and tested scientific method to ascertain whether the tools we build actually work? Using hypotheses that can be falsified and tested will help provide concrete evidence that an algorithm does what it says on the tin. Further, there should be greater transparency with regard to the creation and implementation of algorithms. As former US Congressman Will Hurd explains, engineers must be able to explain how an algorithm makes a decision, especially if it is being deployed to consumers. ‘I don’t know’ is not good enough.
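Stoyanovich’s suggestion can be made concrete. A falsifiable claim about, say, a hiring model might be ‘selection rates do not differ between applicant groups by more than some threshold’. A minimal sketch of such a check, with entirely illustrative group names, predictions and threshold (none of this comes from the systems discussed in the article), might look like:

```python
# Sketch: testing a falsifiable fairness claim about a classifier.
# Hypothesis: the selection rates for any two groups differ by no
# more than 0.1. The data and threshold are hypothetical.

def selection_rate(predictions):
    """Fraction of candidates the model selects (prediction == 1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two applicant groups.
preds = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],  # selection rate 0.625
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],  # selection rate 0.25
}

gap = demographic_parity_gap(preds)
print(f"parity gap: {gap:.3f}")
print("hypothesis holds" if gap <= 0.1 else "hypothesis falsified")
```

The point is not this particular metric (demographic parity is only one of several competing definitions of fairness, which is exactly Crampton’s warning) but that the claim is stated in a form that real-world data can refute.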

Who is responsible?

The question of responsibility looms large over AI. Who is responsible when algorithms misbehave? Stoyanovich points to the importance of distributed accountability structures to ensure that AI use is responsible all the way from creation to application and consumer use. ‘Whose responsibility is it? Each and every one of us!’ Crampton, for her part, describes the European Union’s approach to digital regulation, including AI, as ‘ambitious’. It places more requirements on engineers at design time, and the testing obligations placed on developers are also more demanding.

From the consumer side, Stoyanovich and Hurd agree that individuals must be able to contest decisions made by algorithms. For this to happen, there has to be a great deal more transparency about how they work. Standards for public disclosure are key here. Consumers, too, need to educate themselves to avoid being passive bystanders in this process. Perhaps Kudina’s more philosophical perspective is helpful here. She is keen to avoid what she terms a purely technical, instrumental perspective on AI, advocating instead for an interactionist view. From such a perspective, AI shifts our perspectives and societies in subtle ways, and we in turn respond.

Strengthening our understanding of what it means to be human.

‘We’re growing with each other and we’re pushing each other’s boundaries. Our ethical frameworks are co-evolving with what technology presents us with, but it doesn’t mean anything goes,’ explains Kudina. Perhaps it comes down to fear: fear of new, advanced technologies that we do not fully comprehend, and a desire to protect what we know. If we approach AI with awareness and a clear sense of agency, Kudina suggests, it may help us strengthen our understanding of what it means to be human. Science fiction books and films have raised similar questions for decades. To finish, then, a question from Philip K. Dick: Do Androids Dream of Electric Sheep?

Digital authoritarianism vs democracy – what’s at stake in a tech Cold War?

A Cold War between China and the US will be fought in the virtual world, via digital means, not with the nuclear warheads and border controls of the last one. But like the last Cold War, there is a growing sense of ‘them’ vs ‘us’. The stark difference between democracies and authoritarian regimes has been highlighted by the pandemic. The sharp increase in disinformation campaigns and the fight for control of critical digital infrastructure have made it clear that the next Cold War will be digital. As China rapidly expands its tech ambitions, democracies find themselves caught between digital authoritarianism and the surveillance capitalism models offered by tech giants like Facebook and Google. Neither is attractive. But what are the alternatives, and can one be found before liberal democratic values are seriously undermined?

‘What does a democratic technological universe look like?’ asks Laura Rosenberger, Director of the Alliance for Securing Democracy. This is a discussion that has not received enough attention, she argues. She highlights the need to ‘think much more robustly about data’. How can one ensure that data is available for use in critical technologies like AI, where it is essential, without compromising individual privacy? She advocates a much closer alliance between the US and the EU on data governance in order to counter the ‘digital authoritarianism’ promoted by China.

Alice Ekman, of the European Union Institute for Security Studies, agrees that the West must face its fear of data surveillance. ‘If we don’t invest in the dark dimension of tech, it will be left to countries that do,’ she warns. By grappling with the thorny issue of data safety and regulation, democracies in Europe and America will have a greater say in the norms and values that underpin these technologies. A competitive, ethical model may also be exported to countries that do not necessarily share these values, Ekman points out.

China has invested in the creation of conglomerates of tech companies. They work together to develop comprehensive packages of fully compatible infrastructure technologies, Alice Ekman explains. Smart-city packages like Alibaba’s ‘City Brain’ are sold to provincial governments in China and abroad. Rosenberger warns that Chinese companies often send officials along with these products to third countries. They help with installation and training and even, in some cases, with the drafting of legislation related to the introduction of the new systems. Clearly the values that underpin such legislation are in keeping with the digital authoritarianism of the Communist regime in China. It is important, therefore, that both the US and Europe are present in developing countries, maintains Rosenberger.

‘The US is at risk of losing the technology competition against China’- Eric Schmidt, former CEO of Google.

Many Western tech companies have traditionally focused on producing separate components for sale, consistent with a market-based model. The lack of strong state involvement, especially in the United States, means that the West now needs to focus on creating ‘ecosystems of technology’. Such ecosystems should be compatible with one another and with our democratic values. They would then be ready for export, says Ekman.

Former CEO of Google Eric Schmidt puts it more bluntly: ‘The US is at risk of losing the technology competition against China’. He points out that China has a national strategy with regard to technology development and global positioning. ‘What is the American response?’ he asks. Schmidt suggests that the secret of China’s success is an initial stage of ‘enormously brutal competition’ in its private sector. From this, a winner is selected, which is then promoted relentlessly with the full power of the State. ‘Can you imagine if America did that?’ he asks.

Centralised control, when done well, can in certain situations produce better outcomes in a world built on large amounts of data and data control. Digital authoritarianism has its attractions. ‘We need a response,’ says Schmidt. For the former Google CEO, it should begin with a renewed focus on investing in the development of talent and research in the sciences. He expresses concern that America is currently in danger of denying its top companies and universities top talent because of visa restrictions.

Focus on innovation not regulation

Schmidt also points to the fact that national funding for basic scientific research is just 0.7% of GDP and has fallen consistently since the 1950s. He is concerned that these trends will affect ‘the great things about the West’, and admits that he is primarily concerned with making sure that ‘our innovation, our creativity and our democracy are not crushed by a well-funded autocracy.’ ‘It’s important to understand that it’s a competition and we should win it!’ he states.

Growing concerns about the ability of technologies like social media and AI to undermine privacy and individual rights have caused many Western governments to focus on regulation rather than innovation. Schmidt would rather focus on ‘unleashing the creativity of the next generation’ and then regulating ‘as bad things happen’. Before we ‘fixate on regulation’, why not focus on how to make the West’s technology stronger and better? Then, ‘when we compete with the Chinese model, we win as many times as we can.’

In keeping with this philosophy, he would rather not ban companies like Huawei. ‘I would like to compete with them and win.’ This may be easier said than done. Rosenberger points out that China is in the process of developing a new internet protocol that would essentially allow Beijing to control internet traffic. Smart-city products like Huawei’s Safe City have found their way into a number of European cities. Not to mention the 5G network debates currently raging in Europe and America.

Digital authoritarianism spreads to Hong Kong

The core role played by technology in the battle between democracies and autocracies is on full display in Hong Kong, Rosenberger points out. Big tech companies like Google and Facebook have stopped handing over data to the Hong Kong government in the wake of the introduction of the National Security Law. Situations such as these further highlight the problem at hand: how should liberal democracies use technology in a way that advances individual liberties?

Schmidt mentions initiatives like Open RAN (open radio access networks), founded on the principle of decentralised control rather than the integrated systems epitomised by Huawei. Others have suggested the creation of a sort of digital Schengen zone: an internet freedom league in which data flows freely across borders, underpinned by specific values and principles. There are of course many possible solutions. But the challenge is finding the right ones, in a digital world that is compressing time in a manner hitherto unseen.

Losing weight with AI – a quiet revolution?

Human intelligence, that much-vaunted but still imperfectly understood phenomenon, is the starting point for artificial intelligence. AI is the simulation of human intelligence in machines. What does intelligence in a machine look like? Experts now speak of strong and weak AI. But all AI is dependent on data and, like other high-quality raw materials, good data is not guaranteed. I spoke with Dr. Romani, who is using AI to provide a sustainable weight-loss programme. He and others like him are both hopeful and cautious about the huge advantages and the many challenges associated with using AI to improve healthcare.

Modern machines have an advantage over humans in their ability to process vast amounts of data at high speed. AI harnesses this ability and, in so doing, is able to perform what to many may seem like modern miracles. In healthcare, for example, algorithms are now capable of detecting various diseases, in some cases more accurately than doctors. In some studies, AI has diagnosed lung and breast cancer better than humans, with error rates as low as 3%. It also has implications for medical trials involving new drugs or treatments.

Bias in AI is a problem.

With the help of AI, the trialling process can be done much more quickly and efficiently. This means that new drugs could reach the market in a matter of weeks or months as opposed to years or even decades. But AI is based on algorithms: complex mathematical instructions that are trained on large data sets. It is now becoming clear that both the creators of the algorithms and the data itself can be biased. Bobby Bahov, founder of AI Lab One at the Hague Tech, explains that ‘Data is everything when it comes to AI.’

Human biases are inevitably transferred to the artificially intelligent machines that humans programme. It is worth considering, then, that simpler, ‘weak’ forms of AI may well be less susceptible to extreme forms of bias. These single-task-oriented algorithms can be employed to streamline a myriad of small but vital tasks: for example, all the daily tasks necessary for the efficient running of a hospital. They can also be used to assist with global health issues like weight.

Simple algorithms can be used to track daily fluctuations in our weight.

The problem of weight gain is increasingly common. For decades, healthcare professionals and the business sector have come up with a variety of solutions that promise success at losing those extra pounds. I spoke with GP and sports science expert Dr. Renato Romani, who has been working for a number of years on addressing this problem using AI. His initiative has developed a simple algorithm that allows people to measure their weight each day using a specially designed weight monitor, or scale.

The weight monitor displays no numbers; instead it shows the weight watcher a trend of either weight loss or weight gain. This trend is based on numerous readings each day, taken by an app which communicates with the scale. It takes into account the fact that your body weight fluctuates, on the order of kilograms, each day. By providing a more realistic picture of your weight range, Dr. Romani has found that patients are motivated to continue making small adjustments to their eating and exercise patterns. Overall, his team has found a 40% improvement in long-term weight loss using this method.

‘Weight loss is a journey’ – Dr. Romani

As Dr. Romani points out, ‘weight loss is a journey’. After a 5% change in weight, the body takes time to adjust to its new state. Romani admits that further research, with a larger number of patients, is still needed to determine how long such an adaptation period lasts on average. The scale also provides generic advice based on the trends it notices in your weight loss or gain: for example, ‘try eating more salads’ or ‘increase your water intake’. However, Dr. Romani is adamant that the device is designed to be used with a healthcare professional. He still believes firmly in the importance of human contact. He sees his invention providing dietitians, doctors and gyms with the scientific information they need to support their patients or clients in achieving ‘sustainable and efficient weight loss’.

In one of the first pilot schemes, at the ASPR offices in Brazil, 55% of the employees of the company involved needed to lose weight. The doctor tells me how many went from scepticism about the scale to a more positive approach when they saw it had no numbers on it, and then to a growing awareness of weight as they discussed their progress with one another during coffee breaks. Finally, he received thanks from a nearby fitness centre for sending 10 employees to their gym. He explained that neither he nor the company involved had done any such thing; rather, those involved had taken it upon themselves to sign up for fitness classes.

‘We are not talking about replacing doctors any time soon’ – Bobby Bahov

Speaking with a variety of medical practitioners at the Hague Tech last week, it became clear that many who are positive about AI see it as a tool that will free up their time so that they can focus on more complex problems. ‘We are not talking about replacing doctors any time soon,’ says Bobby Bahov. ‘The focus is rather on speeding up simple tasks so that waiting time can be reduced.’ For example, 25% of people don’t have visible veins, which can cause difficulties when taking blood. AI can be trained to solve this problem and save doctors and nurses time to spend on other tasks.

The financial aspect of all of this is also very real. As one of the participants pointed out, ‘waste is not profit’. In healthcare, particularly in hospitals and public health, the best way to improve the system would be to avoid waste, and waste can be reduced by using AI to streamline the system rather than reinvent the wheel. Insurance companies in the Netherlands are interested in AI, particularly when it comes to healthcare. Indeed, much basic healthcare in the Netherlands is now largely a question of protocol, which general practitioners are trained to follow. Such a system lends itself to the introduction of AI.

AI: Less is more

A focus on simplicity and human connection characterises such initiatives. This is what Romani and others in the healthcare industry believe will revolutionise healthcare in the foreseeable future. Although investors, developers and the media may focus on the more advanced, sometimes dystopian aspects of deep learning and strong AI, this is not the stuff of which reality is currently made. It is also clear that a slower, less high-level approach might well provide society with the time needed to consider the many complex ethical questions that AI raises, and with which even human intelligence is still struggling!

Vivienne Ming – theoretical neuroscientist and superwoman.

Listen up!

Theoretical neuroscientist Vivienne Ming has turned her talents to writing algorithms that solve real-world problems like diabetes, autism, even job satisfaction. Ming’s rags-to-riches story might explain why she has the answers to some of the toughest questions: Will I lose my job to AI, and how can I ensure that my child’s future is ‘robot-proof’?

Dr Ming is tall, almost too tall, blonde, and speaks in a low voice that suggests she might once have been a heavy smoker. She has a strong Californian twang and is dressed in a black leather jacket and leggings. In short, she looks more like a Route 66 biker than a neuroscientist. But once she starts talking, it becomes clear that one is in the presence of a formidable intellect. Vivienne Ming speaks fast and low because she has a lot to say, and all of it is relevant. This is a woman who has an answer, a good one, to every question posed. The problem is whether one can absorb all the high-level information that pours forth so effortlessly.

‘Purpose is everything’ – Vivienne Ming

Vivienne Ming is one of a kind in many ways. Owner of five successful AI companies, she has been offered Chief Scientist jobs by Amazon, Uber and Netflix, and turned them all down. But it is the road that led to her remarkable success that is perhaps most impressive. Dr Ming speaks candidly of the ‘fifteen wasted years’ of her life, some of which were spent homeless, living in her car. At one point she bought a gun and seriously contemplated suicide. But on that fateful night, she recalled the words of her father: ‘Live a life of substance’. Vivienne Ming decided that if happiness was out of her reach, she would aim instead to make a difference to the lives of others. ‘Purpose is everything,’ she came to realise. Once she had made this decision, she set about achieving her goal.

Vivienne Ming is happier as a woman.

But it took her ‘five years of hell’, working at a convenience store and then at an abalone factory, to save enough for the college education she had rejected. Dr. Ming went on to ace her studies in computational neuroscience, completing her bachelor’s degree in just one year. She also met and fell in love with her wife-to-be. Yet still the deep-seated sense of self-loathing persisted. Beginning when Vivienne was as young as ten or eleven, she had found it increasingly difficult to care about anything.

Finally, on the eve of her 34th birthday, Vivienne Ming admitted to her fiancée that she would be happier as a woman. The couple went on to marry, Ming in a tux, in 2006. But shortly thereafter she began the long journey of transition. In 2008, at age 37, Vivienne Ming finally underwent the 46 hours of surgery that would change her biological gender. What comes through most clearly in this incredible story is Ming’s perseverance. As she puts it, ‘for 10 years, still profoundly unhappy, I kept going.’

Vivienne Ming enjoys a unique perspective.

Vivienne Ming’s journey gives her a unique perspective. She has made up for lost time by involving herself in a huge variety of initiatives. All of them have one thing in common, however: they are dedicated to making the world a better place. A statement such as this might seem trite coming from anyone else, but Ming’s confidence and sincerity come from hard-won life experience. She lacks all traces of self-pity, and readily agrees that growing up as a white middle-class male in California gave her every advantage that life could offer. Having experienced life as both a man and a woman places her in the rare position of being able to compare the advantages and disadvantages of both. She admits that she prefers being a woman.

Dr. Ming’s book, The Tax on Being Different, explores exactly this issue from an economic perspective. Her research into the gender wage gap involved the AI-assisted analysis of data from 60,000 companies, and revealed that the presence of women in leadership positions was the single biggest predictor of a reduced wage gap. In her book, Vivienne Ming discusses how big data provides numerical evidence of the significant role played by factors like race and gender in hiring decisions, which ultimately affect one’s life chances. These biases are quantifiable in terms of opportunities and salary level, hence her use of the term ‘tax’. However, unlike other taxes, the tax on being different benefits no one. It simply represents a loss to both individual and society.

‘AI tests our ability to articulate a problem well’ – Vivienne Ming

The neuroscientist and tech entrepreneur insists that it is life experience that teaches one resilience, and that this is one of the biggest predictors of success. She designed a set of algorithms that trawled through huge sets of raw data on job candidates. The goal: to identify which qualities were the best predictors of success. The results showed that the college you attended and your years of experience on the job were not so important, but resilience and problem-solving ability came up time and again. Dr Ming is clear: ‘AI is not about data. If you know how to fix the problem, AI can change the economics.’ The big problem, she maintains, is that many of those hired on the basis of traditional recruitment methods don’t know how to solve problems.

Vivienne Ming admits that the first company she started with her wife dedicated hundreds of hours to researching problems for which no ready solution was available. Often they failed. Now they focus on smaller pieces of big problems. They have also found that work on one project can lead to unexpected breakthroughs in other fields. For example, a small emoticon project they worked on years ago resulted in the creation of a facial recognition tool. This was used to help autistic children improve their ability to correctly identify facial expressions, and the process was found to improve empathy in these same children. The same technology would later be used to develop what Dr Ming terms ‘an incredibly sleazy game’ called Sexy Face, which in turn formed the basis for technology used to help identify orphaned refugees and reunite them with their lost relatives worldwide, all within three minutes.

‘I want to make better people.’ – Vivienne Ming

So what of the future? Dr. Ming’s independent think tank, Socos Labs, focuses on a range of areas, including education, inclusive economics and the future of work. The goal that underlies research in all of these is simple: the maximisation of human potential. ‘I want to make better people,’ she says with calm conviction. Project Muse, for example, is technology that allows parents to monitor, and become more proactive in, their child’s everyday development. Using feedback from a child’s daily activities, the app creates a short, tailor-made activity that parents and children can do together.

Her book, How to Robot-Proof Your Kids, focuses on how parents can help prepare their children for a future in which the only job description will be ‘creative, adaptive problem-solver’, and in which changes in public and private policy will produce ‘a society of explorers’. In short, Vivienne Ming predicts a future job market that will be ‘radically de-professionalized by automation and AI’. But there is reason to believe these changes will produce an even richer set of jobs, if we are prepared!

‘Transition should be celebrated’ – Vivienne Ming

Dr. Ming finishes by pointing out, with a hint of her trademark dry humour, that if you’re a strawberry picker or a real-world problem-solver, there is no need to fear AI. The likes of consultants and legal professionals, however, may well find themselves losing out to the ability of algorithms to analyse spreadsheets or find holes in contracts more quickly and cheaply. But this is not really what Vivienne Ming is about. There is something evangelical in her devotion to what she clearly sees as her purpose in life. Again she draws our attention to the power of transition, in whatever shape or form it may come: ‘Transition should be celebrated!’ And again she invites her listeners to focus on adding value in life. Perhaps it is the hard-won sincerity with which she delivers her message, or the ample evidence of her own generosity in this regard, that makes her a difficult woman to ignore.

Up next on Souwie on …

French writer Edouard Louis