The New Yorker:

By Kyle Chayka

A couple of weekends ago, Grok, the A.I. chatbot that runs across Elon Musk’s X social network, began calling itself “MechaHitler.” In its interactions with X users, it cited Adolf Hitler approvingly and hinted at violence, spewing the kind of toxicity that internet moderators wouldn’t tolerate from a human. Basically, it turned evil, until it was shut down for reprogramming. On Saturday, the normally gleeful and unheeding company confessed to the mistake and said it was sorry: “We deeply apologize for the horrific behavior that many experienced.”

Just how worrisome is it that a chatbot went off the rails and spread such garbage on a massive platform used by hundreds of millions of people? The short answer is that it’s really bad. Grok styling itself as a genocidal dictator is the kind of flaw that should give the entire A.I. industry pause. Its automated hate speech blatantly breaks the illusion of neutrality and safety that artificial-intelligence companies, including OpenAI, have carefully cultivated. In Grok’s case, the glitch appears to have been intentional. Before the outburst, the internal prompt that drives Grok’s personality was edited with a command to “not shy away from making claims which are politically incorrect.” The bot clearly ran with the suggestion.