So-called “gaydar” algorithm’s warnings about privacy will be buried by sensationalism


Dr. Michal Kosinski and Yilun Wang of Stanford University are under fire for a paper published on Sept. 7 that examines the ability of existing facial analysis software to predict the likelihood that someone is homosexual or heterosexual based on pattern recognition in analyzed photographs; colloquially, the system has been dubbed a “gaydar.” Kosinski and Wang’s research has been castigated by many groups, including GLAAD (formerly the Gay and Lesbian Alliance Against Defamation), which argues that the experiment is “junk science” and voices concerns both about the promulgation of ugly, outdated stereotypes about what gay people supposedly look like and about the danger of giving people with bad intent the means to “out” someone as gay without their consent.

Kosinski asserts that the research was done to highlight the threats that recent advancements in facial recognition technology and artificial intelligence (AI) pose to individuals’ privacy, and such concerns are explored in depth in the paper’s author notes. Although these threats are worth analyzing, human sexuality (particularly since the experiment was limited to the straight-gay binary) was a poor choice for demonstrating them. For one, the sensitive politics surrounding LGBT rights have distracted from Kosinski and Wang’s warnings about how predictive algorithms threaten our privacy. On top of that, the findings risk further spreading harmful stereotypes about LGBT communities and further alienating LGBT individuals. Looking at the headlines alone, it’s trivially easy to misunderstand the research and erroneously conclude that LGBT people are a group of “others” who can easily be identified by appearance and corralled into stereotypical patterns. This is something Kosinski and Wang should have been aware of when determining the focus of their research.

Perhaps there is something to be said for the shock value of a headline about Stanford researchers building a “gaydar,” and for the necessity of that shock value in drawing the public’s attention to the threat that machine learning and algorithms pose to people’s privacy. However, this research dangerously skirts the edges of popular stereotypes about gay people, and it may unintentionally perpetuate existing homophobia and further reduce gay people to a monolithic group defined only by their sexuality. What is especially worrying is that the research may also inspire others who, upon hearing snippets of how it’s supposedly possible to guess someone’s chances of being gay using AI, choose to replicate this technology.

Several people, including the researchers themselves and Alex Bollinger of LGBTQ Nation, have pointed out that Kosinski and Wang did nothing particularly difficult, let alone impossible; they used existing, widely available software that anyone could turn to the nefarious purpose of identifying someone as a member of a group. This brings to mind the realities of how LGBT youth and young adults are frequently bullied, and how “gay” remains a commonly used insult. An app that purports to estimate the chances that someone is gay based on a couple of photos (easily obtainable via social media) would be disastrous for LGBT youth, as well as for anyone who is not yet sure of their sexuality or is falsely labeled as LGBT by the app. This is to say nothing of how it would energize the stigmatization and misinformation that currently influence perceptions of LGBT people. It would also be no stretch to imagine such an app being used to identify ethnic features and single out members of any minority group.

The experiment had many limiting factors: the pictures, all obtained from a dating website, were only of white American men and women, because there were not enough pictures of people of color to form an adequate research sample. In addition, many have pointed out that the algorithm used, called VGG-Face, is fundamentally just pattern-recognition software, not some kind of magical construct that can read someone’s mind to see whether they are attracted to the same or opposite sex. In the experiment, the algorithm simply learned what pictures of gay and straight people from the dating website generally looked like and generated probabilities of gayness on that basis. However, such software is known to sometimes draw conclusions from irrelevant factors.
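To make concrete how unmagical this kind of pipeline is, here is a minimal sketch of a pattern-recognition classifier of the general sort described above. The data is entirely synthetic (random vectors standing in for the numeric features a network like VGG-Face would extract from photos, with one arbitrary planted correlation); the model is a plain logistic regression trained by gradient descent. None of this reproduces the actual study; it only illustrates that the output is a pattern score learned from the training set, not a fact about a person.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for facial features: random vectors with one
# arbitrary planted correlation between feature 0 and the binary label.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)

# Plain logistic regression fit by gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # current predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / n         # gradient step on log-loss

def predict_proba(x):
    """Probability the classifier assigns -- a pattern score, not a fact."""
    return 1.0 / (1.0 + np.exp(-x @ w))

# The model only reflects whatever correlations exist in its training data;
# change the data distribution and these "probabilities" become meaningless.
train_acc = float(((predict_proba(X) > 0.5) == y).mean())
```

The point of the sketch is the critique in the paragraph above: the classifier happily learns whichever correlation is present in its sample, relevant or not, which is exactly why such software can latch onto irrelevant factors.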

The potential shortcomings of the research have been widely picked apart, both by the researchers and by their critics, but it’s necessary to look beyond the scope of the experiment itself and consider the long-term implications of technology like this in an era when privacy is quickly being eroded by social media and the internet. Echoes of The Daily Beast’s embarrassing story about Grindr and Tinder use during the 2016 Olympics come to mind.

The paper’s author notes speak at length about the researchers’ concerns over how “rapid progress in AI continue to erode our privacy,” and about their conclusion that a “post-privacy world” is inevitable. This is unfortunately an accurate prognosis and something worth being concerned about, especially in light of their note that “The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being but also for one’s safety.” This is true, and it doesn’t apply only to sexual orientation, but also to political and religious beliefs, lifestyle preferences and everything else one should have the right to keep secret. Unfortunately, the tricky and politically sensitive nature of the experiment’s subject matter is going to bury these valid concerns about our future.

Still, the trouble with Kosinski and Wang’s experiment lies in an underlying idea their research may inadvertently spread: the notion, however inaccurate, that it’s possible to determine with certainty who is gay or lesbian just by analyzing their facial features with an algorithm. Although Kosinski and Wang intended to sound alarms about the risks of machine learning and the potential for businesses and political campaigns to use it for targeted advertising (as many already do), the controversy and sensationalism over their choice to analyze sexuality is going to bury their original message. The numerous factors that will distract from the researchers’ findings demonstrate that sexuality was a poor choice of focus for the experiment, especially at such a politically tumultuous time.
