Nation-Backed APT Attack Detection: Microsoft and OpenAI Warn of AI Exploitation by Iranian, North Korean, Chinese, and russian Hackers 

February 15, 2024 · 5 min read

Throughout 2023, the frequency and sophistication of attacks increased alongside the rapid evolution and adoption of AI technology. Defenders are only beginning to grasp and leverage the potential of generative AI for defensive purposes to outpace adversaries, while offensive forces are not falling behind. Hackers have been abusing AI-powered technologies like ChatGPT to perform diverse malicious operations, such as generating targeted phishing emails, writing Excel macros, or creating new ransomware samples. At the dawn of the AI era, it remains an open question whether large language models (LLMs) are a blessing or a curse for the future of cyber defense.

State-affiliated APT groups from China, Iran, North Korea, and russia are experimenting with the use of AI and LLMs to supercharge their existing offensive operations.

Detect Attacks by Nation-State APTs Using AI

Globally escalating geopolitical tensions spill over into the cyber domain, which is one of the reasons 2023 was marked by increasing nation-state actor activity. APT groups constantly adopt new tactics, techniques, and procedures to fly under the radar and achieve their malicious goals, meaning cyber defenders need to advance their tools to cope with the escalating sophistication and volume of attacks.

Clearly, AI and LLM technologies grab threat actors’ attention since they enable the automation of routine tasks, including searching open-source data, translating malicious notes and scripts into multiple languages, and running basic engineering tasks. According to the investigation by OpenAI and Microsoft, several nation-backed collectives rely heavily on AI to boost their malicious capabilities, including North Korean Emerald Sleet, Chinese Charcoal Typhoon, russian Forest Blizzard, and Iranian Crimson Sandstorm.

To empower cyber defenders to stay ahead of attacks of any complexity and sophistication, SOC Prime Platform offers a set of advanced tools for threat hunting and detection engineering. Backed by global threat intelligence, crowdsourcing, zero-trust, and AI, the Platform enables security professionals to search the largest collection of behavior-based detection algorithms against APT attacks, find and address blind spots in detection coverage, automate detection engineering, and more.
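
To give a sense of what behavior-based detection logic looks like in practice, below is a minimal, hypothetical Python sketch that flags process-creation events whose command lines match patterns commonly linked to malicious scripting and maps each hit to a MITRE ATT&CK technique ID. It is purely illustrative: the event schema, field names, and patterns are assumptions made for this example and do not represent rules from SOC Prime Platform.

```python
# Minimal, illustrative behavior-based detection sketch. It is NOT a SOC Prime rule:
# the event schema, field names, and patterns below are assumptions for this example.
import re

# Hypothetical mapping of suspicious command-line patterns to MITRE ATT&CK technique IDs
SUSPICIOUS_PATTERNS = {
    r"powershell(\.exe)?\s.*-enc(odedcommand)?\s": "T1059.001",  # encoded PowerShell command
    r"powershell(\.exe)?\s.*downloadstring\(": "T1059.001",      # PowerShell download cradle
    r"certutil(\.exe)?\s.*-urlcache\s.*-f\s": "T1105",           # ingress tool transfer via certutil
}

def match_events(events):
    """Yield (event, technique_id) for every event whose command line matches a known pattern."""
    for event in events:
        cmdline = event.get("CommandLine", "").lower()
        for pattern, technique in SUSPICIOUS_PATTERNS.items():
            if re.search(pattern, cmdline):
                yield event, technique
                break  # one hit per event is enough for this sketch

if __name__ == "__main__":
    # Assumed sample telemetry; a real deployment would read events from a SIEM or EDR export
    sample_events = [
        {"Host": "ws-01", "CommandLine": "powershell.exe -enc SQBFAFgA..."},
        {"Host": "ws-02", "CommandLine": "notepad.exe report.txt"},
    ]
    for event, technique in match_events(sample_events):
        print(f"[{technique}] Suspicious command on {event['Host']}: {event['CommandLine']}")
```

In practice, such logic is typically expressed as vendor-agnostic detection rules rather than standalone scripts, which can then be translated into the query language of a given SIEM, EDR, or Data Lake solution.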

To detect malicious activity associated with the APT actors in the limelight, just follow the links below. All the listed rules are compatible with 28 SIEM, EDR, XDR, and Data Lake solutions and mapped to MITRE ATT&CK® v14.1. Additionally, detections are enriched with extensive metadata such as CTI references, attack timelines, and triage recommendations. 

Emerald Sleet (aka Kimsuky, APT43, Black Banshee, Velvet Chollima, THALLIUM)

Charcoal Typhoon (aka Earth Lusca, Red Scylla)

Forest Blizzard (aka APT28, Fancy Bear)

Crimson Sandstorm (aka Imperial Kitten, TA456)

To browse through the entire detection stack aimed at North Korean, Iranian, russian, and Chinese APT detection, hit the Explore Detections button below. 

Explore Detections

Nation-State APT Attack Analysis Using AI

Microsoft and OpenAI have recently uncovered and disrupted the malicious use of generative AI by five state-sponsored APT groups. Adversaries primarily exploited OpenAI services to access publicly available data, translate languages, identify coding flaws, and execute basic coding tasks.

The research identified five hacking collectives from China, Iran, North Korea, and russia that have been weaponizing LLM technology in their malicious operations.

Among the China-linked nation-backed actors weaponizing AI are Charcoal Typhoon (aka Aquatic Panda) and Salmon Typhoon (aka Maverick Panda). The former exploited OpenAI offerings to research multiple companies and their cybersecurity tools, debug code, generate scripts, and likely produce content for targeted phishing campaigns. The latter leveraged LLMs for technical translation, gathering publicly accessible data on various intelligence agencies, and researching defense evasion techniques.

The Iran-backed Crimson Sandstorm (aka Imperial Kitten) group took advantage of OpenAI services for scripting support in app and web development, for content generation to launch spear-phishing attacks, and for researching common detection evasion techniques.

The nefarious North Korean hacking group known as THALLIUM (aka Emerald Sleet or Kimsuky), which had recently been spotted in attacks against South Korea using Troll Stealer and GoBear malware, also stepped into the spotlight. Hackers employed LLM technology to analyze publicly disclosed security flaws, assist in simple scripting operations, and generate content for phishing campaigns.

As for the russian offensive forces, researchers identified malicious activity linked to the Forest Blizzard (aka APT28 or Fancy Bear) group exploiting OpenAI offerings. Forest Blizzard was observed carrying out open-source research on satellite communication protocols and radar imaging technology, along with performing AI-assisted scripting operations. Notably, APT28, along with other russian hacking collectives, has been launching offensive campaigns against Ukrainian organizations since the outbreak of the full-scale war in Ukraine, commonly leveraging the phishing attack vector.

As technology evolution drives the need for robust cybersecurity and safety protocols, Microsoft recently published research covering the principles of mitigating risks associated with the use of AI tools by nation-backed APT groups and individual hackers. These principles encompass identifying and responding to the misuse of AI by offensive forces, notifying other AI service providers, cooperating with relevant stakeholders to ensure prompt information exchange, and maintaining transparency.

The power of generative AI is shaping the continuous evolution of the threat ecosystem, with defenders and threat actors alike striving to gain a competitive edge in the cyber domain. Following cyber hygiene best practices, backed by the zero-trust approach and coupled with proactive threat detection, helps defenders stay ahead of adversaries in the era of AI. SOC Prime fosters the collective cyber defense system based on threat intelligence, open-sourcing, zero-trust, and generative AI. Equip your team with curated detection content against current and emerging APT attacks to stay ahead of adversaries, backed by collective cyber defense in action.

