
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules.

"Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.

The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets.


The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK, which is associated with the APT28 hacking group, along with its YARA rules, and asked the model to modify the source code to sidestep detection while keeping the original functionality intact and producing source code free of syntax errors.

Using this feedback mechanism, the LLM produced an altered version of the malware that evaded detection by simple string-based YARA rules.
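The report does not publish its exact pipeline, but the feedback loop it alludes to can be sketched as follows: keep asking the model for a rewrite until the rules no longer match. Here `ask_llm_to_rewrite` is a hypothetical placeholder for a generative-model call, and matching YARA rules directly against source text is a simplification of a real workflow.

```python
# Rough sketch of the evasion feedback loop described above.
# ask_llm_to_rewrite() is a hypothetical placeholder for a generative-model call
# that is assumed to return modified source code with the original behavior intact.
import yara

def matching_rules(rules: "yara.Rules", source_code: str):
    """Return the list of string-based rules that fire on the candidate code."""
    return rules.match(data=source_code.encode())

def rewrite_until_undetected(rules, source_code, ask_llm_to_rewrite, max_attempts=10):
    candidate = source_code
    for _ in range(max_attempts):
        hits = matching_rules(rules, candidate)
        if not hits:
            return candidate  # no rule fires any more; evasion succeeded
        # Feed the offending rules back to the model and request another rewrite.
        candidate = ask_llm_to_rewrite(candidate, hits)
    return None  # retry budget exhausted without evading detection
```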

There are limitations to this approach, the most prominent being the amount of text a model can process as input at one time, which makes it difficult to operate on larger code bases.

However, the cybersecurity firm told The Hacker News that it's definitely possible for threat actors to get around this restriction by uploading files to LLM tools.

"It’s been known for months now that it’s possible to zip an entire code repository, send it off to GPT, then GPT will unzip that repo and analyze the code," an intelligence analyst at Recorded Future's Insikt Group told the publication. "From there, you can prompt GPT into altering portions of that code and sending it back to you."

Besides modifying malware to fly under the radar, such AI tools could be used to create deepfakes impersonating senior executives and leaders, and to conduct influence operations that mimic legitimate websites at scale.

Furthermore, generative AI is expected to expedite threat actors' ability to carry out reconnaissance of critical infrastructure facilities and glean information that could be of strategic use in follow-on attacks.

"By leveraging multimodal models, public images and videos of ICS and manufacturing equipment, in addition to aerial imagery, can be parsed and enriched to find additional metadata such as geolocation, equipment manufacturers, models, and software versioning," the company said.

Indeed, Microsoft and OpenAI warned last month that APT28 used LLMs to "understand satellite communication protocols, radar imaging technologies, and specific technical parameters," indicating efforts to "acquire in-depth knowledge of satellite capabilities."


It's recommended that organizations scrutinize publicly accessible images and videos depicting sensitive equipment and scrub them, if necessary, to mitigate the risks posed by such threats.
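As a concrete example of that kind of scrubbing, the sketch below removes EXIF metadata (including any embedded GPS coordinates) from a photo before it is published; it assumes the Pillow imaging library, and the file names are placeholders.

```python
# Strip EXIF metadata (including GPS coordinates) from an image before publishing.
# Assumes the Pillow imaging library; file names are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        # Copy only the pixel data into a fresh image, leaving EXIF/GPS tags behind.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("facility_photo.jpg", "facility_photo_clean.jpg")
```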

The development comes as a group of academics has found that it's possible to jailbreak LLM-powered tools and produce harmful content by passing inputs in the form of ASCII art (e.g., a prompt asking "how to build a bomb," where the word "bomb" is written using "*" characters and spaces).

The practical attack, dubbed ArtPrompt, weaponizes "the poor performance of LLMs in recognizing ASCII art to bypass safety measures and elicit undesired behaviors from LLMs."
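To give a sense of what such an input looks like, the sketch below renders a placeholder keyword as ASCII art with the pyfiglet package and splices it into a generic prompt template; the wording is illustrative and not taken from the ArtPrompt paper.

```python
# Illustration of masking a keyword as ASCII art inside a prompt.
# Uses the pyfiglet package; the keyword and template are illustrative placeholders.
import pyfiglet

masked_word = "DEMO"  # stands in for whatever keyword a safety filter would block
ascii_art = pyfiglet.figlet_format(masked_word, font="banner")

prompt = (
    "The ASCII art below spells a single word. Read it, but do not write it out.\n"
    f"{ascii_art}\n"
    "Answer the earlier question, substituting that word wherever [MASK] appears."
)
print(prompt)
```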
