The cybersecurity landscape is evolving at an unprecedented pace, with artificial intelligence now actively discovering vulnerabilities, cybercriminals exploiting new attack vectors, and major corporations facing data breaches. From Google’s AI agent uncovering a critical flaw in SQLite to hackers leveraging Visual Studio Code tunnels for espionage, the past two months have been packed with significant developments. Volkswagen’s infotainment system vulnerabilities, Deloitte UK’s massive data leak, and a sophisticated typosquatting campaign targeting Solana developers further highlight the increasing complexity of cyber threats. In this post, we’ll break down the most significant cybersecurity incidents from December 2024 to January 2025, shedding light on emerging trends and their implications for the industry.

Techniques

AI-Driven Vulnerability Discovery

Two months ago, the cybersecurity field reached a significant milestone when Google’s AI agent, Big Sleep, uncovered a critical vulnerability in SQLite—a core component of countless software applications. This was widely reported as the first time an AI agent had independently discovered an exploitable vulnerability in widely used real-world software, signalling a paradigm shift in how we approach vulnerability management. Big Sleep’s discovery highlighted the potential for artificial intelligence to accelerate vulnerability research, reduce dependence on human specialists, and identify issues more efficiently.

Building on this momentum, AI-driven tools have continued to evolve. More recently, Unpatched.ai, a platform dedicated to automated vulnerability detection, reported three high-severity vulnerabilities – CVE-2025-21186, CVE-2025-21366, and CVE-2025-21395 – to Microsoft. These remote code execution vulnerabilities, which affected Microsoft Access, each scored 7.8 on the CVSS scale and were remediated in Microsoft’s January Patch Tuesday release. This demonstrates how AI has transitioned from isolated discoveries to systematically identifying and reporting multiple vulnerabilities at once.

The implications of these advancements may be profound. AI tools like Big Sleep and Unpatched.ai are not only enhancing the speed of discovery but also elevating the complexity of vulnerabilities they can identify. By automating what were once time-intensive, manual processes, these tools enable organisations to stay ahead of attackers and address risks proactively.

VW Group Infotainment Vulnerabilities

Cybersecurity researchers at PCAutomotive, a specialist automotive cybersecurity firm, have discovered vulnerabilities in infotainment systems fitted to as many as 1.4 million Volkswagen Group vehicles. The flaws may allow hackers to remotely track users and access sensitive data via a Bluetooth connection to the vehicle’s media unit. The disclosure is particularly unwelcome for VW, as PCAutomotive had previously discovered 21 vulnerabilities in the same infotainment system in 2022.

Twelve flaws were found in the MIB3 infotainment unit, which is widely used across Volkswagen, Audi, and Skoda models. If exploited, these vulnerabilities could enable attackers to obtain real-time GPS coordinates, record in-car conversations, capture screenshots of the infotainment display, access the vehicle owner’s phone contact database, and even execute arbitrary code on the infotainment system by leveraging an overflow vulnerability. One particularly concerning flaw could allow the engine and other electronic components to be shut off while the car is moving, although it does require physical access to the OBD port in the vehicle.

The vulnerabilities were disclosed at Black Hat Europe, one of the largest cybersecurity conferences hosted on the continent, and were reported to VW through its cybersecurity disclosure programme. A Volkswagen Group spokesperson has indicated that some of the vulnerabilities have now been patched and the remainder are being addressed, adding that the group is dedicated to continuous improvement and customer safety.

Happenings

[UK] Deloitte UK – 1 Terabyte Data Leak

The Brain Cipher ransomware group has listed Deloitte UK on its dark web leak site, claiming to have exfiltrated over one terabyte of data from the firm’s internal systems. Brain Cipher first gained notoriety in June 2024 when it targeted the Indonesian Government, demanding an $8 million USD ransom, which the government flatly refused to pay. Once it was clear the demands would not be met, the group eventually provided a decryptor, together with an apology, claiming the hack was a penetration test designed to highlight security flaws.

Brain Cipher have publicly announced their intention to release detailed information about the breach, including evidence of security protocol violations, and their initial access path. They have also invited Deloitte representatives to engage in private discussions, suggesting that they may be angling for a ransom negotiation.

Deloitte have so far downplayed the hack, claiming that a single client’s system was impacted, that no Deloitte systems have been affected, and that the incident is contained. Deloitte have not indicated whether they have been in contact with Brain Cipher, and are yet to release any further information or technical details of the attack.

[US] Healthcare AI Chatbot – Accidentally Exposed to the Internet

In December 2024, Optum, a subsidiary of UnitedHealth Group, faced scrutiny after a significant security lapse involving an AI chatbot. This internal tool, designed to assist employees in handling health insurance claim inquiries, was inadvertently exposed to the internet without any authentication requirements. The exposure was discovered by cybersecurity firm spiderSilk, and the chatbot’s public accessibility raised concerns about unauthorised access and the potential misuse of internal processes.

Optum responded swiftly by restricting access to the chatbot upon notification of the vulnerability. The company clarified that the chatbot was a demonstration project, never intended for production, and assured that it neither contained nor processed sensitive personal or health data. Optum emphasised that its sole purpose was to facilitate access to standard operating procedures, but the incident highlighted gaps in security protocols even for non-sensitive tools. The exposure of internal chatbots carries unique risks, even when they do not handle sensitive data. These tools often reveal insights into internal workflows, processes, and operational structures, which could be exploited by malicious actors to gain deeper access or compromise other systems.

[US] Solana Package Typosquatting – Assisted by Google AI

In January 2025, security researchers at Socket uncovered a sophisticated campaign targeting Solana developers through malicious npm packages. The attackers employed typosquatting, a technique that involves creating packages with names resembling popular and trusted libraries. These malicious packages were designed to exfiltrate Solana private keys by exploiting Gmail’s SMTP servers. Once unsuspecting developers integrated these packages into their projects, scripts embedded within the code harvested Solana private keys from the developers’ environments and transmitted them to Gmail accounts controlled by the attackers. This clever misuse of trusted Gmail infrastructure allowed the malicious activity to bypass traditional security mechanisms.

Typosquatting is a known threat, but this campaign brought renewed attention to the indirect role AI assistants can play in exacerbating such risks. Notably, Google’s AI summary mistakenly identified the malicious @async-mutex/mutex package as legitimate by pulling information from the authentic package, inadvertently bolstering the credibility of the malicious clone. As developers increasingly depend on AI-powered tools for library recommendations, these systems’ reliance on superficial factors like download counts, package names, and keywords makes them vulnerable to manipulation. In this instance, developers querying an AI assistant for “a mutex library” were sometimes directed to the fraudulent package, as its deceptive naming and metadata closely mimicked a trusted resource.

AI assistants have become invaluable in streamlining software development, offering productivity enhancements by quickly recommending libraries and tools. While it is easy and tempting to copy and paste recommendations from AI tools, developers should maintain due diligence, especially when installing packages. Verifying the source, inspecting code, and cross-referencing trusted repositories are essential practices to mitigate risks posed by malicious packages. This ensures that while leveraging the productivity benefits of AI, developers also uphold security standards in their projects.
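One piece of that due diligence can be automated. The sketch below flags package names that sit within a small edit distance of an internal allow-list of trusted libraries, which is the essence of spotting a typosquat like a lookalike scope imitating async-mutex. The `KNOWN_GOOD` list, the distance threshold, and the scope-handling heuristic are illustrative assumptions, not part of Socket’s tooling or the npm ecosystem itself.

```python
from typing import List

# Hypothetical allow-list of packages a team already trusts (illustrative only).
KNOWN_GOOD = ["async-mutex", "express", "lodash"]

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suspicious_lookalikes(candidate: str,
                          trusted: List[str] = KNOWN_GOOD,
                          max_distance: int = 2) -> List[str]:
    """Return trusted names that `candidate` closely imitates without matching."""
    # Separate an npm scope like "@async-mutex/" from the bare package name.
    bare = candidate.rsplit("/", 1)[-1].lstrip("@")
    scope = candidate.lstrip("@").split("/", 1)[0] if "/" in candidate else ""
    hits = []
    for name in trusted:
        if bare == name and "/" not in candidate:
            continue  # exact, unscoped match: this IS the trusted package
        scope_dist = levenshtein(scope, name) if scope else max_distance + 99
        if min(levenshtein(bare, name), scope_dist) <= max_distance:
            hits.append(name)
    return hits

print(suspicious_lookalikes("@async-mutex/mutex"))  # scope imitates "async-mutex"
print(suspicious_lookalikes("async-mutex"))         # exact trusted name: no warning
```

A check like this only catches name-level mimicry; it says nothing about what a package’s install scripts actually do, so it complements, rather than replaces, inspecting the code itself.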

Visual Studio Code Tunnel

In a recent cyber-espionage campaign known as ‘Operation Digital Eye,’ Chinese Advanced Persistent Threat (APT) actors targeted major IT service providers in Southern Europe between late June and mid-July 2024. These attackers exploited a vulnerability known as SQL injection, which allows unauthorised users to manipulate database queries through insecure web inputs. By exploiting these vulnerabilities, the attackers gained access to publicly accessible servers and embedded a custom tool called ‘PHPsert.’ This tool acted as a backdoor, enabling the attackers to send commands to the compromised servers remotely. Once inside, they gathered system details using tools like ‘GetUserInfo’ and ‘ping,’ which revealed critical information about the network and user accounts. They also extracted user credentials by accessing sensitive memory locations in the system’s Local Security Authority Subsystem Service (LSASS) using a credential-dumping tool named ‘CreateDump.’

The standout technique in this campaign was the misuse of Visual Studio Code’s Remote Tunnels feature. Normally used by developers to work on projects remotely, this feature was weaponised to establish a persistent foothold in the compromised systems. The attackers deployed a portable version of Visual Studio Code, configured it to enable the tunnelling feature, and set it up to run as a background service using a utility called ‘winsw.’ This made it appear as a legitimate Windows service under the name ‘Visual Studio Code Service.’ Through these tunnels, the attackers could control the infected systems completely, executing commands and transferring files undetected. They authenticated their access using GitHub accounts and routed their activity through Microsoft’s Azure infrastructure, taking advantage of trusted platforms to evade traditional security monitoring. This campaign also highlights the need to maintain good basic security, as aside from the novel persistence technique, the methods used in this attack – SQL injection, LSASS dumping, and process hollowing – are well known and widely utilised techniques.
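Defenders hunting for this persistence trick typically look for the tell-tale combination the campaign left behind: a portable `code.exe` launched with the `tunnel` subcommand, a winsw wrapper, or a service styled ‘Visual Studio Code Service.’ The sketch below runs that heuristic over a plain process/service listing; the indicator strings are drawn from the campaign details above, while the sample listing itself is hypothetical and a real hunt would feed in live output from the endpoint.

```python
# Indicator substrings for this persistence technique: a VS Code binary
# started with the "tunnel" subcommand, the winsw service wrapper, and the
# legitimate-sounding service name the attackers chose.
INDICATORS = ("code.exe tunnel", "visual studio code service", "winsw")

def flag_suspicious(entries):
    """Return entries whose command line or service name matches an indicator."""
    return [e for e in entries
            if any(ind in e.lower() for ind in INDICATORS)]

# Hypothetical listing, e.g. gathered from process command lines plus service
# names on a Windows host under investigation.
sample = [
    r"C:\Windows\System32\svchost.exe -k netsvcs",
    r"C:\ProgramData\vscode\code.exe tunnel --accept-server-license-terms",
    r"SERVICE_NAME: Visual Studio Code Service",
    r"C:\Program Files\Microsoft VS Code\Code.exe",  # ordinary editor use
]

for hit in flag_suspicious(sample):
    print("suspicious:", hit)
```

String matching like this is deliberately crude and easy to evade by renaming binaries, so in practice it would be one signal among several, for example alongside alerts on outbound connections to tunnel endpoints from servers that have no business running a developer IDE.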