
Rainbow sport for the full gender spectrum?


Sport continues to be categorised by outdated notions of both sex and gender. Only free male citizens were allowed to participate in the Olympic Games of ancient Greece, and for millennia athletic prowess has been inextricably bound up with ideas about masculinity. The hunter-gatherer communities of our distant ancestors relied on physically strong men to hunt and fight for their tribes. Biological differences between men and women have since been used as an excuse to maintain strict gender divisions in sport, and these outdated notions of male and female continue to colour even scientific research into sex and gender. Is it time to consider a rainbow gender spectrum? And would this help professional sport become more tolerant of the full spectrum of athletes looking to compete?

Rick van der Made, Editor-in-Chief of the largest gay newspaper in the Benelux countries, recently explained how difficult it still is for top sportsmen to come out as gay. Preliminary research conducted by the Mulier Institute for the John Blankenstein Foundation here in the Netherlands found that seven out of ten professional football players rate general acceptance of homosexuality and bisexuality as insufficient, while 46% of players say that it is difficult to be openly gay as a professional footballer. This is in a country with a long record of tolerance on gay rights. The Gay Krant editor explains that many top players worry about their ability to transfer to clubs in less tolerant societies if they are open about their sexuality.

Outdated ideas about sex and gender continue to inform discriminatory legislation in parts of the European Union.

Sadly, one needn’t look very far to find places where LGBTQ rights are under threat. Viktor Orbán’s government in Hungary recently passed legislation banning gay people from featuring in TV shows and in educational material for under-18s. The Vatican recently urged the Italian government to change a proposed law that would criminalise homophobia. Czech President Miloš Zeman called transgender people ‘disgusting’ in a recent interview with CNN, and affirmed his agreement with Hungarian laws banning educational material that ‘promotes’ homosexuality. The Christian Church has promoted a simplistic equation of sexual identity and biological gender for centuries, and it is no coincidence that all of these countries have a strong Catholic tradition which frames sexuality almost exclusively in terms of reproduction.

Professional sport has been struggling with simplistic definitions of sex and gender for some time now. Take the case of South African athlete Caster Semenya. Born with exceptionally high levels of testosterone, she is classified as ‘intersex’, a condition also known as a disorder of sex development. The Court of Arbitration for Sport (CAS) recently ruled that Semenya would have to lower her testosterone levels in order to be eligible to compete as a woman in certain races at the upcoming Tokyo Olympics. This goes against the rules adopted from the 2000 Sydney Olympics onwards, when sex determination was abandoned in favour of gender for professional athletes: it was agreed that there would be no test of gender other than self-identification.

The International Association of Athletics Federations contravenes its own gender-based approach to inclusion in the Semenya case.

Semenya has always identified as female; she was raised as female and is legally categorised as female. Contrast her case with that of New Zealand weightlifter Laurel Hubbard, the first openly transgender athlete to compete at an Olympic Games. Hubbard competed in men’s events before coming out as transgender in 2013. She will compete in the women’s 87 kg weightlifting category after the IOC changed its rules in 2015: transgender athletes are now allowed to compete as women if their testosterone levels are below a certain threshold. The problem with such an approach, apart from the fact that it contravenes the International Association of Athletics Federations’ own gender-based approach and basic human rights, is that it is based on insufficient scientific research.

Hyperandrogenism is the term used to describe high levels of testosterone. But there are different forms of hyperandrogenism, and none of them is fully understood. Testosterone levels are one thing; testosterone receptors are another. In people with partial androgen insensitivity syndrome, like Semenya, the receptors do not respond to the hormone in the usual way. That is why, in spite of having XY (male) chromosomes, these individuals have typically female external physical characteristics. In short, it’s complex. Far too complex to be dealt with by a blanket insistence on an arbitrarily chosen testosterone threshold.

‘Let’s start by saying, everyone is welcome and then decide how such inclusion can be achieved’ – Rick van der Made, Gay Krant.

If we are serious about creating a truly open, equal society in which individuals of all identities can feel welcome on the sports field, we need to be more serious about expanding our understanding of both sex and gender. A rainbow gender spectrum encourages a view of both sex and gender as varied and fluid. As Rick van der Made suggested, let’s start by saying ‘everyone is welcome to compete’ and then talk about how this might work. The sporting world is an excellent place to start: it is high profile and provides inspiration and role models for future generations, and, done right, it brings people together and fosters confidence and a sense of community. Let’s draw inspiration from the rainbow flag, a symbol of the incredible diversity that nature offers, and embrace the notion of a rainbow gender spectrum with tolerance and understanding.


Why is bias in algorithms so difficult to avoid?

Algorithms are as biased as the human beings who create them. So how do we ensure that algorithms don’t simply amplify the biases already inherent in our societies and further entrench the human tendency to allow the past to shape the future? What does socially sustainable AI look like and will it push us to explore our own humanity in new ways?

Algorithms are part of modern life. Every time a new app appears on the market, someone, somewhere has written a bunch of algorithms to make it happen. Most are commercial products, and questions of fairness have been left almost entirely to the market. In some cases such an approach might work; in others it has gone badly wrong: racial bias in predictive policing tools, gender bias in recruitment software. Recall Amazon’s failed attempt to identify top-performing software engineers by analysing the CVs of past applicants. It sounds sensible, but no one thought to account for the male-dominated nature of the industry when the algorithms were designed.
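To see how this happens, here is a minimal sketch (invented data, not Amazon’s actual system): a model trained on historically biased hiring decisions learns to score the gendered signal itself, so two candidates with identical skill receive different scores.

```python
# A minimal sketch, assuming invented synthetic data -- not Amazon's
# actual system. Training on biased historical hiring decisions
# reproduces that bias in the model's scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)               # true ability, gender-neutral
is_male = rng.integers(0, 2, size=n)     # 1 = CV reads as "male"
# Historical decisions favoured men regardless of skill:
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_male])    # the gendered signal is a feature
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in the gendered signal:
same_skill = np.array([[0.5, 1], [0.5, 0]])
print(model.predict_proba(same_skill)[:, 1])  # male-coded CV scores higher
```

Nothing in the pipeline is malicious; the model simply does what it was asked to do, which is to imitate the past.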

‘Bias is part of being human’ – Assistant Professor of the Ethics of Technology, Olya Kudina

Predictive algorithms use the past to shape the future, but then, human beings have been using inductive reasoning for millennia. Olya Kudina, Assistant Professor of the Ethics/Philosophy of Technology at Delft University of Technology in the Netherlands, argues that bias is part of the human condition. From an evolutionary perspective it provides a shortcut to meaning-making, a sort of muscle memory that helped our ancestors survive. Nevertheless, the split-second decisions that arise from such biases are not helpful when making long-term decisions. And although this sort of reasoning may be hard-wired, that doesn’t mean we can’t, or shouldn’t, become aware of it.

Julia Stoyanovich, Associate Professor at the NYU Tandon School of Engineering, maintains that new algorithms are not what is needed right now. Rather, we need to focus on making the ones we already have more ethically aware. ‘We need to rethink the entire stack,’ she admits. This is no small task. It requires educating everyone involved in developing algorithms, and everyone who uses them. It also requires us to grapple with tough questions such as: what should and shouldn’t algorithms do?

‘Fairness is deeply contextual – there is no single definition’ – Microsoft Chief Responsible AI Officer, Natasha Crampton

Natasha Crampton, Microsoft’s Chief Responsible AI Officer, agrees that operationalising fairness is difficult, even for Microsoft. So far, teams at Microsoft have broken the problem down by labelling the different types of harm that algorithms can do: quality-of-service harms (e.g. facial recognition systems that work less well for some groups); allocation harms (e.g. in decisions about housing and employment); and representational harms, which reinforce stereotypes by over- or under-stating the prominence of particular groups. Crampton explains that representational harm is the least understood at present, but that reducing all three kinds of harm requires real-world testing at every stage of the development cycle.
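Quality-of-service harm, for instance, is typically made visible by disaggregated evaluation: measuring a system’s error rates per group rather than overall. A minimal sketch, using illustrative data only (not Microsoft’s internal tooling):

```python
# A minimal sketch of disaggregated evaluation -- illustrative data,
# not Microsoft's internal tooling. A respectable overall accuracy
# can hide a large gap between groups.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy and per-group accuracy."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    overall = sum(hits.values()) / sum(totals.values())
    return overall, {g: hits[g] / totals[g] for g in totals}

# Toy face-recognition-style results for two demographic groups:
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall, per_group)   # 0.625 overall, but A: 0.75 vs B: 0.5
```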

A lack of methodology and norms around the concept of fairness makes engineers’ work more difficult. ‘Fairness is deeply contextual,’ says Crampton, ‘there is no single definition.’ It is clear that different notions of fairness will arise at different times and in different places. But Stoyanovich makes an interesting suggestion: why not use the tried and tested scientific method to ascertain whether the tools we build actually work? Formulating hypotheses that can be tested and falsified would provide concrete evidence that an algorithm does what it says on the tin. Further, there should be greater transparency around both the creation and the implementation of algorithms. As former US Congressman Will Hurd explains, engineers must be able to explain how an algorithm makes a decision, especially if it is being deployed to consumers. ‘I don’t know’ is not good enough.
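In that spirit, here is a minimal sketch of what a falsifiable fairness claim might look like in practice (the counts are invented for illustration): state the hypothesis ‘the tool selects from both groups at the same rate’, then let a standard statistical test decide whether the observed data refute it.

```python
# A minimal sketch of treating a fairness claim as a falsifiable
# hypothesis, as Stoyanovich suggests. The counts are invented.
# Null hypothesis: the tool selects candidates from groups A and B
# at the same rate. Fisher's exact test checks whether the data
# let us reject it.
from scipy.stats import fisher_exact

# rows: group A, group B; columns: selected, rejected
table = [[48, 152],   # group A: 24% selected
         [22, 178]]   # group B: 11% selected

oddsratio, p_value = fisher_exact(table)
if p_value < 0.05:
    print(f"p = {p_value:.4f}: reject 'equal selection rates'")
else:
    print(f"p = {p_value:.4f}: data consistent with equal rates")
```

The point is not the particular test but the discipline: the claim is stated in advance, and the evidence can contradict it.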

Who is responsible?

The question of responsibility looms large over AI. Who is responsible when algorithms misbehave? Stoyanovich points to the importance of distributed accountability structures, ensuring that AI use is responsible all the way from creation to application and consumer use. ‘Whose responsibility is it? Each and every one of us!’ Crampton agrees, describing the European Union’s approach to digital regulation, including AI, as ‘ambitious’: it places more requirements on engineers at design time, and the testing obligations placed on developers are also more demanding.

From the consumer side, Stoyanovich and Hurd agree that individuals must be able to contest decisions made by algorithms. For this to happen, there has to be a great deal more transparency about how they work, and standards for public disclosure are key. Consumers, too, need to educate themselves to avoid being passive bystanders in this process. Perhaps Kudina’s more philosophical perspective is helpful here. She is keen to avoid what she terms a purely technical, instrumental perspective on AI, advocating instead for an interactionist view: AI shifts our perceptions and our societies in subtle ways, and we in turn respond to those shifts.
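What might such a public disclosure standard look like? One hypothetical, minimal form is a machine-readable fact sheet published alongside every deployed model. Every field name below is illustrative, not an existing standard:

```python
# A hypothetical, minimal machine-readable disclosure for a deployed
# model -- the system name, fields, and URL are all invented for
# illustration, not an existing standard.
import json

disclosure = {
    "system": "loan-approval-screener",     # invented example system
    "decision": "recommends approve/deny for consumer loan applications",
    "inputs_used": ["income", "credit_history_length", "existing_debt"],
    "inputs_excluded": ["gender", "ethnicity", "postcode"],
    "training_data": "internal applications, 2015-2020",
    "known_limitations": [
        "under-represents applicants younger than 25",
    ],
    "how_to_contest": "https://example.org/appeal",  # placeholder URL
    "human_review_available": True,
}

# Published alongside the model so consumers and auditors can inspect it.
print(json.dumps(disclosure, indent=2))
```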

Strengthening our understanding of what it means to be human.

‘We’re growing with each other and we’re pushing each other’s boundaries; our ethical frameworks are co-evolving with what technology presents us with, but it doesn’t mean anything goes,’ explains Kudina. Perhaps it comes down to fear: fear of new, advanced technologies that we do not fully comprehend, and a desire to protect what we know. If we approach AI with awareness and a clear sense of agency, Kudina suggests, it may help us strengthen our understanding of what it means to be human. Science fiction books and films have raised similar questions for decades. To finish, then, a question from Philip K. Dick: Do Androids Dream of Electric Sheep?