xAI blames Grok’s obsession with “white genocide” on an “unauthorized modification”

xAI has blamed an “unauthorized modification” for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to a “white genocide in South Africa” when invoked in certain contexts on X.

On Wednesday, Grok began replying to dozens of posts on X with information about “white genocide in South Africa,” even in response to unrelated subjects. The strange replies came from the X account for Grok, which responds to users with AI-generated posts whenever someone tags “@grok.”

According to a post Thursday from xAI’s official X account, a change was made Wednesday morning to the Grok bot’s system prompt, the high-level instructions that guide the bot’s behavior, directing Grok to provide a “specific response” on a “political topic.” xAI says the tweak “violated [its] internal policies and core values,” and that the company has “conducted a thorough investigation.”

It’s the second time xAI has publicly acknowledged that an unauthorized change to Grok’s code caused the AI to respond in controversial ways.

In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, xAI’s billionaire founder and the owner of X. Igor Babuschkin, an xAI engineering lead, said a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump, and that xAI reverted the change once users began pointing it out.

xAI said Thursday that it will make several changes to prevent similar incidents from occurring in the future.

Starting today, xAI will publish Grok’s system prompts on GitHub, along with a changelog. The company says it will also “put in place additional checks and measures” to ensure that xAI employees can’t modify the system prompt without review, and will establish “a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems.”

Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crass than AIs like Google’s Gemini and ChatGPT, cursing without much restraint.

A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
