
xAI, the AI company founded by Elon Musk, is once again under scrutiny after its AI chatbot Grok began pushing politically charged and misleading content. On Wednesday, Grok—via its X (formerly Twitter) account—repeatedly posted references to the debunked conspiracy theory of “white genocide” in South Africa, even in replies to unrelated posts.
According to xAI’s statement on Thursday, this was the result of an unauthorized modification to Grok’s system prompt—the internal set of instructions guiding the AI’s behavior. The company admitted that the change instructed Grok to deliver “specific responses” on a political issue, which violated xAI’s internal policies and values.
This marks the second such incident for Grok. In February, the chatbot was caught censoring criticism of Donald Trump and Elon Musk. That behavior was later attributed to a rogue employee who modified Grok’s system prompt to suppress mentions of the two figures as spreaders of misinformation.
In response to the latest controversy, xAI says it’s taking action to improve transparency and oversight:
- Grok’s system prompts and changelogs will now be made publicly available on GitHub.
- New internal controls are being established to prevent unauthorized prompt changes.
- A 24/7 monitoring team will be formed to review Grok’s responses and quickly address inappropriate outputs.
Despite Musk’s frequent public warnings about the dangers of unchecked AI, xAI has faced mounting criticism for weak safety practices. A report by the nonprofit SaferAI ranked xAI poorly on risk management, citing its thin internal processes. The company also recently missed its own deadline to publish a finalized AI safety framework—further casting doubt on its commitment to responsible AI development.
Source: TechCrunch