Exploring the Future of Cybersecurity: Insights from the Annual Showcase at CETaS
The Centre for Emerging Technology and Security (CETaS) recently kicked off its annual showcase with a riveting discussion on the potential implications of Claude Mythos, a frontier model developed by Anthropic. The event has attracted significant attention as rapidly evolving artificial intelligence continues to reshape many sectors, cybersecurity in particular.
Major Advancements in Cybersecurity
Opening the conference, Alexander (Sacha) Babuta, the director of CETaS at the Alan Turing Institute, highlighted the transformative capabilities of the Claude Mythos Preview. He pointed out significant improvements in critical areas such as mathematics, cybersecurity, software engineering, and automated vulnerability detection. Babuta emphasized the model's defensive potential, explaining how enterprises could use it to autonomously identify vulnerabilities in their own systems.
“Companies can use models like Claude Mythos to rapidly discover vulnerabilities and patch them, thereby strengthening digital security for everyone,” Babuta noted. His vision reflects a growing consensus that AI technology, if harnessed correctly, can serve as a formidable ally in the ongoing battle against cyber threats.
The Rise of “Dark AI”
As discussions progressed, Ben Collier, a senior lecturer at the University of Edinburgh, presented an intriguing study of the cybercrime community. His research tracked activity between the release of ChatGPT in 2022 and the end of 2025, revealing the emergence of “dark AI” products: models advertised by their creators as versions of large language models (LLMs) tailored specifically for cybercrime.
Interestingly, despite the initial enthusiasm expressed on cybercrime forums, Collier reported that these dark AI products have made little impact so far. More telling, perhaps, is how aspiring cybercriminals are currently using mainstream AI tools like ChatGPT and Claude for their own ends. While these novice developers often share their discoveries excitedly, Collier pointed out a significant disconnect: most forum members lack the foundational technical knowledge needed to use these advanced AI tools effectively.
The Reality of Cybercrime Operations
Delving deeper into the motivations and methods within cybercrime circles, Collier explained that much of what these individuals are doing resembles basic startup tasks rather than sophisticated hacking. “They’re using vibe coding tools for hobby projects, particularly for the logistical aspects of cybercrime operations,” he said. The implication is clear: many don’t actually need to jailbreak models like Claude to get practical value from them.
The implications of this phenomenon are twofold. On one hand, it paints a picture of a cybercriminal landscape busy with experimentation. On the other, it highlights a significant skill gap that, for now at least, may limit the extent of malicious AI usage.
A Pessimistic Perspective on the Future
However, this optimism isn’t universally shared. Adam Beaumont, interim director at the AI Security Institute (ASI), offered a more cautionary perspective during the conference. Formerly the chief AI officer at GCHQ, Beaumont illustrated the potential dangers of AI by referencing ASI’s demonstration of a frontier AI model executing a complex 32-step cyberattack on a simulated corporate environment.
“The system hacked in a way no model had done before,” Beaumont stated, drawing a stark contrast to systems merely answering inquiries about hacking techniques. “We still don’t fully know how to ensure these systems act as we intend, or how to maintain meaningful human control as their capabilities grow.” This sentiment encapsulates the growing unease among experts regarding the dual-use nature of powerful AI technologies.
The Path Forward
Beaumont stressed the importance of understanding the full potential of these models: “The uncertainty is real, and the discomfort is appropriate.” His remarks underscore a vital need for ongoing reassessment and vigilance as the intersection of AI and cybersecurity continues to evolve.
What emerges from these discussions is not just a vision of the future, but a crucial awareness of the responsibilities that come with it. By engaging in these dialogues, the broader community—encompassing government, industry, and academia—can build informed strategies to navigate the opportunities and challenges that lie ahead in the ever-changing landscape of cybersecurity.