AI and (digital) resilience
Artificial intelligence is reshaping the boundaries of what is possible — not only for innovators and the public good, but also for the bad guys.
The Challenge: A Rapidly Evolving Threat Landscape
Until recently, criminals could send only as many phishing emails, develop only as much malware, or pull off only as many social engineering schemes as their teams of technically or socially skilled people could manage. Not anymore: they are using AI to automate and accelerate these tasks at unprecedented scale.
With AI making every step easier and cheaper, the number of brute-force denial-of-service cyber attacks has increased 106% in the last year. And AI is doing sophisticated work too: Google recently announced that its “Big Sleep” AI discovered a previously unreported zero-day vulnerability, a feat of code analysis and insight that, until that October announcement, required unusual human talent. From there, it is only a small leap for an AI to code an attack exploiting such a vulnerability.
AI-generated phishing emails and social engineering tactics are now more targeted and widespread, with AI models producing convincing, personalized scams for individuals based on publicly available data. Audio deepfakes are now used in CEO fraud schemes that empty companies’ bank accounts. A marketplace of malicious service providers has emerged to “democratize” these tools; “deepfake-as-a-service”, for instance, is now available for as little as €145 per video.
The impact is clear. Today, an estimated 56% of cyber-attacks use AI during the access and penetration phases, driving up the average cost of data breaches to over €4 million. And it takes only a quick peek at current developments in generative AI – perfect video, agents that can autonomously control computers and do tasks on the internet, exponentially dropping token prices – to realize things will get much worse, and soon. Imagine these capabilities deployed at European scale (analogously to the use of TikTok for microtargeting in Romanian elections).
And get ready for cyber-physical fusion. In its war against Ukraine, Russian cyber attacks have preceded missile strikes, with the intent of sowing panic and slowing down first responders. In the aftermath of Hurricanes Helene and Milton in the US this year, targeted misinformation (linked to Russia, China, and Cuba) was used to undermine and misdirect first responders’ efforts. Again, falling costs mean that organized crime and even petty criminals will soon be using these tools.
What can we do about it?
The best response to a bad guy with an AI is a good guy with an AI. Only organizations (and societies) that embed AI into every element of their crisis response cycle (prevention, preparedness, response, recovery) stand a chance.
Preparation and prevention. We must attack ourselves with AI before the bad guys do: AI tools that scan our code and constantly test our systems for vulnerabilities; frequent simulations and exercises (made cheaper by AI) that test our teams and systems against complex attacks; bots that probe our own employees with social engineering attacks to help them raise their psychological defenses.
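To make the first of these concrete, here is a minimal sketch of a continuous self-scanning loop. It uses the open-source semgrep scanner as a stand-in for the more capable AI code-analysis tools the text has in mind; the repository path, schedule, and alerting hook are illustrative assumptions, not a production design.

```python
# Minimal sketch: scan our own code for vulnerabilities on a schedule,
# before attackers do. semgrep stands in here for AI-based analysis tools;
# REPO_PATH and alert() are hypothetical placeholders.
import json
import subprocess
import time

REPO_PATH = "/srv/our-codebase"   # hypothetical checkout to scan
SCAN_INTERVAL_S = 24 * 3600       # once a day

def alert(finding: dict) -> None:
    # Hypothetical hook: file a ticket, page the on-call team, etc.
    print(f"[{finding['check_id']}] {finding['path']}:{finding['start']['line']}")

def scan_once() -> None:
    # 'semgrep --config auto' pulls community rules matched to the languages found.
    out = subprocess.run(
        ["semgrep", "--config", "auto", "--json", REPO_PATH],
        capture_output=True, text=True, check=False,
    )
    for result in json.loads(out.stdout).get("results", []):
        alert(result)

if __name__ == "__main__":
    while True:
        scan_once()
        time.sleep(SCAN_INTERVAL_S)
```

Run continuously, the same loop generalizes to fuzzing, dependency audits, and simulated phishing campaigns against our own staff.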
Situational awareness and response. Algorithmic tools, with access to vast pools of data and the ability to detect small patterns and anomalies, can identify dangers earlier and act on them.
Automated systems can reroute network traffic, deploy backup resources, and restore functionalities with minimal human intervention. (This is already how anti-DDoS tools work).
Quaint notions of keeping a human-in-the-loop fail against attacks moving faster than human (or organizational) speed.
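To illustrate (a minimal sketch, not a production design): below, an unsupervised anomaly detector is trained on baseline network-flow features and wired directly to an automated responder. IsolationForest stands in for the far richer models real platforms use, and block_source() is a hypothetical hook that would push a firewall rule or reroute traffic.

```python
# Minimal sketch: unsupervised anomaly detection over network-flow features,
# wired to an automated first response with no human gate in the loop.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline traffic per source: (bytes/s, packets/s, distinct destination ports).
baseline = rng.normal(loc=[500, 50, 3], scale=[100, 10, 1], size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def block_source(src: str) -> None:
    # Hypothetical responder: in production this would push a firewall rule
    # or reroute traffic, as anti-DDoS tools already do automatically.
    print(f"auto-response: throttling {src}")

# Live window: one source suddenly scanning many ports at high volume.
live = {"10.0.0.5": [520, 48, 2], "10.0.0.99": [9000, 900, 400]}
for src, features in live.items():
    if model.predict([features])[0] == -1:  # -1 = flagged as anomalous
        block_source(src)
```

The point of the sketch is the wiring: detection feeds action directly, at machine speed.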
Recovery and resilience. Once attacks are contained, AI-assisted systems can help rapidly restore data and infrastructure, limit the spread of disinformation, and identify lessons for strengthening our defenses. Intelligent forensics tools, operating at machine speed, can spot how attackers gained entry and recommend structural changes to stop them next time.
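As a toy illustration of that last point, the sketch below scans an SSH authentication log for a classic entry pattern: a burst of failed logins from one address followed by a success. The log format and the five-failure threshold are assumptions for the example; real forensics tools work over far richer telemetry.

```python
# Toy forensics sketch: flag likely points of entry in an SSH auth log by
# finding sources whose burst of failed logins is followed by a success.
import re
from collections import defaultdict

FAIL = re.compile(r"Failed password for .* from (?P<ip>\S+)")
OK = re.compile(r"Accepted password for (?P<user>\S+) from (?P<ip>\S+)")

def entry_points(log_lines, min_failures=5):
    failures = defaultdict(int)
    for line in log_lines:
        if m := FAIL.search(line):
            failures[m["ip"]] += 1
        elif (m := OK.search(line)) and failures[m["ip"]] >= min_failures:
            yield m["ip"], m["user"], failures[m["ip"]]

# Illustrative data: six failures, then a success, from the same address.
sample = ["Failed password for root from 203.0.113.7 port 22"] * 6
sample.append("Accepted password for admin from 203.0.113.7 port 22")

for ip, user, n in entry_points(sample):
    print(f"probable entry point: {ip} logged in as {user} after {n} failures")
```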
Continuous improvement and reinvention. Adapting to an AI-driven threat environment demands more than new tools; it requires continuous learning and adaptation at the individual and group levels. Beyond the need for more data and cyber specialists, the hardest challenge is organizational: new teams, new budgets, new reporting lines, new organizational habits. That is complex and uncomfortable change management – with resilience and security ideally part of a broader set of changes to improve AI-readiness.
So is Europe ready?
We certainly have a legal framework in place:
- New cyber regulations (NIS2, the Cyber Resilience and Cyber Solidarity Acts) create a legal framework for product standards and certification, regulation of essential services, and cross-border collaboration on large-scale cyber incidents;
- The Digital Services Act obliges platforms to act on misinformation;
- eIDAS 2 creates a pan-European digital identity infrastructure, essential for distinguishing between bots and real humans.
If Europe fails, it will not be for lack of regulation. But we face a significant execution gap:
The adoption of AI (and of the cloud services underlying most AI) is modest among both governments and enterprises. Adoption of secure eID is in the single digits in many large countries (a German is three times as likely to fall prey to identity theft and fraud as an Estonian; in Estonia, more than 90% use secure eID).
Our collaboration moves at human speed, with many countries contributing almost nothing to cross-border sharing of threat information. And most European firms lack the data and AI skills and literacy to be resistant to these AI-driven threats.
So what can the next Commission do? Some ideas:
With a legal framework in place, the Commission and ENISA should propose an action plan on digital resilience, with plans to measure and improve on quantitative indicators (updated from the current Digital Decade targets), including the following (I offer target values for 2030):
- Percentage of Europeans regularly using secure eID (>80%)
- Percentage of government and enterprise technology expenditure going to security (>10%)
- Proportion of vulnerabilities and incidents reported on threat and information sharing platforms (>95%)
- Average speed of initial automated response to common incidents (<5 minutes)
- Damage from disruptive and destructive cyber attacks (reduce by 30%)
- Percentage of citizens and employees passing cyber and disinformation training (>75%)
- Volume and intensity of exercises and penetration testing
While Europe builds a single market with many multinational companies, governmental cyber capacities remain national. The Commission and ENISA should grow their direct collaboration and communication with industry, especially with essential service providers in inherently cross-border sectors (e.g. energy, transport, digital infrastructure), as is already the case for financial services. Member States and industry should set ambitious capability targets, including 1000 new “Red Teams” across Europe using AI tools to probe systems and find vulnerabilities before the bad guys do.
To preempt concerns about data protection and AI governance, ENISA, the European Data Protection Board (EDPB), and the EU AI Office should issue clear and specific guidance on the permissible use of data and AI for cyber security.
Finally, Europe should become an active student of Ukraine’s experience. Since 2014, the country has been under constant Russian cyber attack and has developed significant public and private capabilities and a security mindset absent in most of Europe. And the best way to learn is through active collaboration and working side-by-side, best achieved by bringing Ukrainian teams and companies into every element of operational and technical cooperation as soon as possible, without waiting for full Ukrainian EU membership.