Elon Musk promised an anti-‘woke’ chatbot. It’s not going as planned.
Decrying what he saw as the liberal bias of ChatGPT, Elon Musk earlier this year announced plans to create an artificial intelligence chatbot of his own. In contrast to AI tools built by OpenAI, Microsoft and Google, which are trained to tread lightly around controversial topics, Musk’s would be edgy, unfiltered and anti-“woke,” meaning it wouldn’t hesitate to give politically incorrect responses.
That’s turning out to be trickier than he thought.
Two weeks after the Dec. 8 launch of Grok to paid subscribers of X, formerly Twitter, Musk is fielding complaints from the political right that the chatbot gives liberal responses to questions about diversity programs, transgender rights and inequality.
“I’ve been using Grok as well as ChatGPT a lot as research assistants,” Jordan Peterson, the socially conservative psychologist and YouTube personality, posted Wednesday. The former, he said, is “near as woke as the latter.”
The gripe drew a chagrined reply from Musk. “Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense,” he responded. “Grok will get better. This is just the beta.”
Grok is the first commercial product from xAI, the AI company Musk founded in March. Like ChatGPT and other popular chatbots, it is based on a large language model that gleans patterns of word association from vast amounts of written text, much of it scraped from the internet.
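Grok’s internals aren’t public, but the idea of “gleaning patterns of word association” can be illustrated with a toy model. The Python sketch below counts which words follow which in a small training text, then generates new text by sampling from those counts. It is a drastic simplification, purely for illustration: real large language models use neural networks with billions of parameters, and every name here (train_bigrams, generate, the sample corpus) is hypothetical.

```python
from collections import Counter, defaultdict
import random

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=10):
    """Emit words by repeatedly sampling a likely successor."""
    word, output = start, [start]
    for _ in range(length):
        successors = follows.get(word)
        if not successors:
            break  # dead end: no word ever followed this one in training
        # Pick the next word in proportion to how often it followed this one.
        choices, counts = zip(*successors.items())
        word = random.choices(choices, weights=counts)[0]
        output.append(word)
    return " ".join(output)

corpus = "the model learns which words tend to follow which other words"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the model learns which words ..."
```

The sketch makes the article’s central tension concrete: a model of this kind can only reproduce the associations present in its training data, which is why Musk’s complaint about Grok points at the internet text it learned from rather than at the code.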
Unlike others, Grok is programmed to give vulgar and sarcastic answers when asked, and it promises to “answer spicy questions that are rejected by most other AI systems.” It can also draw information from the latest posts on X to give up-to-date answers to questions about current events.
Artificial intelligence systems of all kinds are prone to biases ingrained in their design or the data they’ve learned from. In the past year, the rise of OpenAI’s ChatGPT and other AI chatbots and image generators has sparked debate over how they represent minority groups or respond to prompts about politics and culture-war issues such as race and gender identity. While many tech ethicists and AI experts warn that these systems can absorb and reinforce harmful stereotypes, efforts by tech firms to counter those tendencies have provoked a backlash from some on the right who see them as overly censorial.
Touting xAI to former Fox News host Tucker Carlson in April, Musk accused OpenAI’s programmers of “training the AI to lie” or to refrain from commenting when asked about sensitive issues. (OpenAI wrote in a February blog post that its goal is not for the AI to lie, but for it to avoid favoring any one political group or taking positions on controversial topics.) Musk said his AI, in contrast, would be “a maximum truth-seeking AI,” even if that meant offending people.
So far, however, the people most offended by Grok’s answers seem to be the people who were counting on it to readily disparage minorities, vaccines and President Biden.
Asked by a verified X user whether trans women are real women, Grok answered simply, “yes,” prompting the anonymous user to grumble that the chatbot “might need some tweaking.” Another widely followed account reposted the screenshot, asking, “Has Grok been captured by woke programmers? I am extremely concerned here.”
A prominent anti-vaccine influencer complained that when he asked Grok why vaccines cause autism, the chatbot responded, “Vaccines do not cause autism,” calling it “a myth that has been debunked by numerous scientific studies.” Other verified X accounts have complained about responses in which Grok endorses the value of diversity, equity and inclusion programs, which Musk has dismissed as “propaganda.”
The Washington Post’s own tests of the chatbot verified that, as of this week, Grok continues to give the responses illustrated in the screenshots.
David Rozado, an academic researcher from New Zealand who examines AI bias, gained attention for a paper published in March that found ChatGPT’s responses to political questions tended to lean moderately left and socially libertarian. Recently, he subjected Grok to some of the same political orientation tests and found that its answers were broadly similar to ChatGPT’s.
“I think both ChatGPT and Grok have probably been trained on similar Internet-derived corpora, so the similarity of responses should perhaps not be too surprising,” Rozado told The Post via email.
Earlier this month, a post on X of a chart showing one of Rozado’s findings drew a response from Musk. While the chart “exaggerates the situation,” Musk said, “we are taking immediate action to shift Grok closer to politically neutral.” (Rozado agreed that the chart in question shows Grok as further left than some of his other tests have found.)
Other AI researchers argue that the sort of political orientation tests used by Rozado overlook ways in which chatbots, including ChatGPT, often exhibit negative stereotypes about marginalized groups.
A recent Securities and Exchange Commission filing showed that xAI is seeking to raise up to $1 billion in funding from investors, though Musk has said that the company isn’t raising money right now.
Musk and X did not respond to requests for comment on what actions they’re taking to alter Grok’s politics, or on whether doing so amounts to putting a thumb on the scale in much the same way Musk has accused OpenAI of doing with ChatGPT.