X users treating Grok like a fact-checker spark concerns over misinformation

Some users on Elon Musk’s X are turning to Musk’s AI bot Grok for fact-checking, raising concerns among human fact-checkers that the practice could fuel misinformation.

Earlier this month, X enabled users to call up xAI’s Grok and ask it questions about different things. The move was similar to Perplexity, which already runs an automated account on X to offer a similar experience.

Shortly after xAI created Grok’s automated account on X, users began experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that targeted specific political beliefs.

Fact-checkers are concerned about Grok, or any other AI assistant, being used in this way because the bots can frame their answers to sound convincing even when they are factually incorrect. Instances of Grok spreading fake news and misinformation have been seen in the past.

Last August, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the U.S. election.

Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also found to generate inaccurate information about last year’s general election. Additionally, researchers found in 2023 that AI chatbots, including ChatGPT, could easily be used to produce convincing but misleading text.

“AI assistants, like Grok, are really good at using natural language and giving an answer that sounds like a human being said it. In that way, AI products have this claim to naturalness and authentic-sounding responses, even when they are potentially very wrong. That is the danger here,” said Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter.

A user on X asks Grok to fact-check a claim made by another user.

Unlike AI assistants, human fact-checkers verify information using multiple credible sources. They also take accountability for their findings, with their names and organizations attached to ensure credibility.

Pratik Sinha, co-founder of Alt News, an Indian nonprofit fact-checking website, said that even though Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who is going to decide what data it gets supplied with? That is where government interference and similar issues come into the picture,” he noted.

“There is no transparency. Anything that lacks transparency will cause harm, because anything that lacks transparency can be molded in any way.”

“Could be misused – to spread misinformation”

In a reply posted earlier this week, Grok’s account on X acknowledged that it “could be misused – to spread misinformation and violate privacy.”

However, the automated account does not show any disclaimer to users when they receive its answers, leaving them open to being misinformed if, for instance, it has hallucinated the answer, a potential drawback of AI.

Grok’s response to whether it can spread misinformation (translated from Hinglish).

“It may make up information to provide a response,” Anushka Jain, a research associate at Digital Futures Lab, a Goa-based multidisciplinary research collective, told TechCrunch.

There are also questions about the extent to which Grok uses posts on X as training data, and the quality-control measures it applies when fact-checking such posts. Last summer, X pushed out a change that appeared to let Grok consume X users’ data by default.

Another concern with AI assistants like Grok being accessible through social media platforms is that they deliver their information in public, unlike ChatGPT and other chatbots, which are used privately.

Even if a user is well aware that the information they get from the assistant could be misleading or not entirely correct, others on the platform might still believe it.

This could cause serious social harm. Instances of this were seen earlier in India, where misinformation circulating on WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of generative AI, which has made producing synthetic content even easier and more realistic-seeming.

“If you see a lot of these Grok answers, you’re going to say, hey, most of them were right, and that may be so, but there are going to be some that are wrong. How many? It’s not a small fraction. Some research studies have shown that AI models are subject to 20% error rates… and when it goes wrong, it can go really wrong, with real-world consequences,” IFCN’s Holan told TechCrunch.

AI vs. real fact-checkers

While AI companies, including xAI, are refining their models to communicate more like humans, they still are not, and cannot, replace humans.

Over the past few months, tech companies have been exploring ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have begun embracing crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes are also a cause for concern among fact-checkers.

Alt News’ Sinha is optimistic that people will learn to differentiate between machines and human fact-checkers, and will come to value human accuracy more.

“We’re going to see the pendulum swing back eventually toward more fact-checking,” IFCN’s Holan said.

In the meantime, she noted, fact-checkers are likely to have more work to do as AI-generated information spreads rapidly.

She said: “A lot of this question depends on whether you really care about what is actually true or not.”

X and xAI did not respond to our request for comment.
