Microsoft: State-supported Hackers Using Its AI Tools


    18 February 2024

    Microsoft says state-supported online attackers from Russia, North Korea, and Iran have been using OpenAI's tools to trick targets and gather information.

    Microsoft said in a report released Wednesday that it had tracked online attackers, or hacking groups, that work with several states. They include Russia's military intelligence, Iran's Revolutionary Guard, and the North Korean government. The company said the hackers were trying to improve their campaigns using large language models like OpenAI's ChatGPT. Those computer programs use artificial intelligence. They use huge amounts of information from the internet to create human-sounding writing.

    The company said it would ban state-backed hacking groups from using its AI products, regardless of whether any rules had been broken.

    FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, on March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

    "Independent of whether there's any violation of the law or any violation of terms of service, we just don't...want them to have access to this technology," Microsoft Vice President for Customer Security Tom Burt told Reuters.

    Russian, North Korean, and Iranian diplomatic officials did not immediately return requests for comment on the claims.

    The claims that state-backed hackers have been caught using AI tools to support spying activities are likely to increase concerns about the spread of the technology and its possible abuse. Internet security officials in Western countries have been warning since last year that bad actors were abusing AI tools.

    OpenAI and Microsoft described the hackers' use of their AI tools as "early-stage" and "incremental." Burt said neither company had seen online spies have big successes.

    "We really saw them just using this technology like any other user," he said. The report described hacking groups using large language models in different ways.

    Microsoft said hackers suspected of working for the Russian military spy agency, widely known as the GRU, used the models. The company said the hackers researched satellite and military technologies that might relate to military operations in Ukraine.

    Microsoft said North Korean hackers used the models to create content that could be used to trick area experts into giving up information. Iranian hackers also used the models to write better emails, Microsoft said. The company said the Iranian group aimed to trick feminist leaders into going to a dangerous website.

    Neither Burt nor OpenAI security official Bob Rotsted said how much activity had been found or how many users had been banned. Burt defended the ban on hacking groups even though Microsoft's search engine Bing has no such ban. Burt noted that AI was new and a cause for concern.

    "This technology is both new and incredibly powerful," he said.

    I'm Gena Bennett.

    Raphael Satter reported this story for Reuters. Gregory Stachel adapted it for VOA Learning English.

    _______________________________________________

    Words in This Story

    track – v. to follow and find (someone or something) especially by looking at evidence

    hack – v. to secretly get files and information on computers or networks in order to steal, cause damage, or embarrass people, groups, or governments

    access – n. a way of being able to use or get something

    incremental – adj. happening in small amounts or steps by which something is made larger or greater

    feminism – n. beliefs and organized activity in support of women's rights and interests in many different fields

    incredibly – adv. extremely or greatly