
Google’s AI is telling people that Nintendo characters are gay & trans & it’s hilarious


Google’s AI, Gemini, is telling people that Nintendo characters are gay and trans, users report.

In a viral screenshot, the artificial intelligence cited a 2018 humor article from Autostraddle as its source for the claim that various Mario Kart characters are queer.

Yoshi is referred to as “a tender non-binary lesbian,” the Koopa as “a trans man who was dishonorably discharged from the military,” and Wario as “A sassy, messy, polyamorous bottom who some say is a drag impersonator of Mario.”

Another user spotted Google’s AI claiming that Animal Crossing: New Horizons had LGBTQ+ representation in the form of “two gay chickens” and “Medusa’s girlfriend.” The AI did not mention the game’s existing implied gay representation between two side characters.

It is unclear where Gemini was sourcing this information, as LGBTQ Nation was not able to find other works containing all of these elements.

Pokemon was also affected, with one user finding that Google’s AI claimed there was LGBTQ+ representation among the creatures in the original games.

These include claims that Ditto is a “genderqueer, gender nonconforming sibling who has escaped the gender binary,” Butterfree is a “strong trans woman who helps the player’s team early in the game,” and Bulbasaur is a “plant-loving queer who often says ‘Mother Earth.’”

These claims appear to come from an article published by Out Magazine earlier this year discussing characters the author believes to be queer.

AI mishaps are not limited to LGBTQ+ identities. Gemini also appeared to get Pokemon gym leaders wrong, listing characters like Freddy Krueger, Batman, and Spider-Man as belonging to the beloved children’s franchise.

Among the most viral claims made by Google’s AI is that people should put non-toxic glue on their pizza to keep the cheese from sliding off. Snopes confirmed this one as legitimate; it appears to come from a satirical Reddit comment by user “F**ksmith.”

However, Snopes states that many other alleged Google AI mishaps are fake: people have often been unable to replicate the results, and representatives from Google have confirmed the screenshots are illegitimate.

“The most notable example is a screenshot of an alleged AI Overview providing instructions on self-harm – which has been shared widely. This is a fake image, and this AI Overview never appeared,” said Google representative Ned Adriance. “The original poster even admitted to faking it.”

Social media users are advised to exercise caution when encountering claims of AI misinformation.

Google’s AI follows in the footsteps of generative AI projects like OpenAI’s ChatGPT and is built on the same underlying technology: large language models, complex programs trained on vast datasets to predict the most likely words to come next in a sequence.
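To make that idea concrete, here is a minimal sketch of next-word prediction using simple word-pair counts. This toy Python example is purely illustrative; real large language models like Gemini use neural networks with billions of parameters rather than frequency tables, but the underlying goal is the same: given the words so far, guess the most likely next one.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent follower. This is only a
# schematic of the "predict the next word" idea, not how Gemini works.
corpus = "the cheese slides off the pizza so the cheese needs glue".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> "cheese" ("cheese" follows "the" twice)
```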

In the case of Gemini, it retrieves web pages relevant to the search query and attempts to summarize their key points based on which words appear most frequently or seem most relevant.
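As a rough illustration of that kind of frequency-based summarization, the sketch below scores each sentence of a page by how often its words appear overall and keeps the top-scoring one. This is a deliberately naive, hypothetical example; Gemini’s actual pipeline is proprietary and far more sophisticated, but the sketch shows why a system leaning on word frequency can surface a joke as if it were a key fact.

```python
import re
from collections import Counter

# Naive extractive summarizer: rank sentences by the total frequency
# of the words they contain. Purely illustrative; not Gemini's method.
def summarize(text: str, num_sentences: int = 1) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    frequency = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        # A sentence full of common words gets a high score.
        return sum(frequency[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:num_sentences])

page = (
    "Yoshi appears in many Mario games. Yoshi is a dinosaur who helps "
    "Mario. Some articles joke about Mario characters. Jokes can be "
    "mistaken for facts by automated summarizers."
)
print(summarize(page))  # prints the sentence whose words are most frequent
```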

Generative AI has often been criticized for being a “black box,” meaning researchers are unable to fully understand how it produces its results. This raises concerns about whether issues like misinformation can ever be reliably fixed.

This also ties into concerns about the sources these systems draw on, many of which are unreliable. Critics attribute such errors to the lack of a “world model”: the systems have no real understanding of the world with which to judge whether information is credible.

Another common criticism of artificial intelligence is that it harms the environment through the immense resource consumption behind these products. The most notable example is the large amount of water tech companies use to cool AI data centers, some of which sit in drought-prone locations.

The black box concern does not mean that artificial intelligence is sentient, however. As AI researchers Fei-Fei Li and John Etchemendy emphasized in Time magazine, “We have not achieved sentient AI, and larger language models won’t get us there.”

“We need a better understanding of how sentience emerges in embodied, biological systems if we want to recreate this phenomenon in AI systems. We are not going to stumble on sentience with the next iteration of ChatGPT.”
