China, Iran-based threat actors have found new ways to use American AI models for covert influence: Report


Threat actors, some of whom may be based in China and Iran, are developing new ways to hijack and exploit United States artificial intelligence (AI) models for malicious purposes, including covert influence operations, according to a new report by OpenAI.

The February report detailed two disruptions involving threat actors that appeared to originate in China. According to the report, these actors used, or at least attempted to use, models built by OpenAI and Meta.

In one example, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. The comments were posted on social media by accounts claiming to be Indian and American, but the posts did not appear to attract much online engagement.

The same actor also used the ChatGPT service to generate long-form Spanish-language news articles that demeaned the United States, which were subsequently published by mainstream news outlets in Latin America. The bylines on these stories were attributed to individuals and, in some cases, to Chinese companies.



Global threat actors, including those in China and Iran, are looking for new ways to use U.S. AI models for malicious purposes. (Bill Hinton/Philip Fong/AFP/Maksim Konstantinov/Sopa Images/Lightrocket via Getty Images)

On a recent press call that included Fox News Digital, Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said at least one of the articles was marked as sponsored content, indicating that someone had paid for its placement.

OpenAI said this is the first instance in which a Chinese actor successfully placed long-form articles in mainstream media to target Latin American audiences with anti-U.S. narratives.

"Without the use of AI, we would not have been able to make the connection between the tweets and the online articles," Nimmo said.

He added that threat actors sometimes give OpenAI a glimpse of what they are doing in other corners of the internet because of the way they use its models.

He continued: "It's a troubling glimpse into the way a non-democratic actor tried to use democratic or U.S.-based AI for non-democratic purposes, according to the material it was generating."



China's flag flies behind a pair of surveillance cameras outside the central government offices in Hong Kong, China, on Tuesday, July 7, 2020. Hong Kong leader Carrie Lam and her administration advocated broad new police powers, including warrantless searches, online surveillance and property seizures. (Roy Liu/Bloomberg via Getty Images)

The company also banned a ChatGPT account that generated tweets and articles that were then posted on third-party assets publicly linked to known Iranian influence operations (IOs).

Both operations have been reported as separate efforts.

"The potential overlap between these operations, albeit small and isolated, raises the question of whether there is a nexus of cooperation among these Iranian IOs, where one operator may have been working on behalf of what appear to be distinct networks," the report states.

In another example, OpenAI banned a set of ChatGPT accounts that were using OpenAI models to translate and generate comments for a romance-baiting scam network, also known as "pig butchering," across platforms such as X, Facebook and Instagram. After OpenAI reported these findings, Meta indicated that the activity appeared to originate from a newly established scam compound in Cambodia.



The OpenAI ChatGPT logo is seen on a mobile phone on May 30, 2023, in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images)

Last year, OpenAI became the first AI research lab to publish reports on its efforts to disrupt abuse of its models by adversaries and other malicious actors, in support of the U.S. and allied governments, industry partners and other stakeholders.

OpenAI said it has greatly expanded its investigative capabilities and its understanding of new types of abuse since its first report, and has disrupted a wide range of malicious uses.

The company believes AI companies can glean substantial insights into threat actors, and that those insights become even more useful when shared with upstream providers, such as hosting and software companies, as well as downstream distribution platforms, including social media companies and open-source researchers.


OpenAI stressed that its investigations have also benefited greatly from work shared by industry peers.

"We know threat actors will keep testing our defenses. We are determined to continue to identify, prevent, disrupt and expose attempts to abuse our models for harmful ends," OpenAI said in the report.

