Recently, I came across an article in Engineering and Technology magazine that made me realize the extent to which artificial intelligence (AI) mirrors our gender biases, conscious or unconscious. Think about the ubiquitous female voice in our home assistants: Google Home, Microsoft’s Cortana, Amazon’s Alexa, and Apple’s Siri perpetuate the stereotype of female obedience.
What was even more disturbing was to learn that this submissive attitude goes beyond the voice. In February 2017, Leah Fessler shared her experiments testing how the four home assistants responded to verbal sexual harassment. Google Home typically replied with “don’t understand,” whereas Alexa and Siri would flirt or even thank the user for the comment! Cortana mostly preferred to perform a search on Bing…
This year Fessler tested Alexa again, and she “kind of” stood up to the challenge. In spring 2017, “her” programmers gave her a “disengage mode,” and Alexa now replies to verbal harassment with “I’m not going to respond to that” or “I’m not sure what outcome you expected.” So, whilst there has been substantial progress since last year, it’s clear that the top priority is still that “Alexa doesn’t upset customers”.
We may think this is funny or inconsequential; after all, we are talking about bots. But those computers are in our houses, talking to our kids… If we are giving them gender to mimic humans, shouldn’t they stand for what we believe? What’s more, why should they even have a gender?
“What’s more, why should they even have a gender?”
That’s a good question, and one that goes both deep and wide, leading to subquestions ranging from “are humans capable of imagining a non-gendered intelligence?” to “would interacting with a non-gendered identity work as well (or better?) and as pleasantly for people?” to (eventually) “would being non-gendered work as well or better and as pleasantly for the AI?”
I honestly don’t know the answers to any of these, but I do strongly believe we should attempt to answer them.
I completely agree with you, Brenda. I’d like to share a couple of hypotheses about why we keep doing this:
1.- Because in tech we do things because we can, not because we should. If we have the technical capability to use gender, we assume that *we must use it*. Not acting on a capability is perceived as stifling innovation, rather than as an act of responsibility.
2.- We’re enchanted by the idea of infinite economic growth, believing we should do whatever it takes in pursuit of exponential growth. As a consequence, we strive to reduce friction with our users. Stereotyping is an outstanding way to reduce friction and increase adoption. See the case of the German men who refused to use a GPS with a female voice.
“The BMW 5-series released in Germany included a voice-based navigational system, featuring a computer-generated voice with female characteristics. Although these drivers were well-aware that the voice was computer-generated, they reacted with gender stereotyped responses, ultimately rejecting the female voice and demanding a product recall (Nass and Brave, 2005). BMW switched the female voice to a male voice and re-cast the navigational system voice in the role of a co-pilot (Macneil and Cran, 2004).” (From Driver safety and information from afar: An experimental driving simulator study of wireless vs. in-car information services. Leila Takayama, Clifford Nass. Department of Communication, Stanford University, Stanford, CA 94305, USA. 2006. (Link))
My suggestion for starting to tackle this issue: make the rollout of technological capabilities contingent on our ability to defend why they should be implemented, backed by a satisfactory analysis of their negative impact. At the moment, we develop technology under the premise of “why not?”.