Cybersecurity experts are raising concerns over the rapid adoption of Artificial Intelligence (AI) and deepfake technologies by non-state actors, who are increasingly using them to amplify disinformation campaigns and launch sophisticated cyberattacks.

Addressing a panel titled ‘Shadow Forces: The Growing Threat of Non-State Actors in Cyber Security and Information Warfare’ at the Global Security Forum 2025 in Doha on Monday, experts explored the increasingly complex and destabilising role of non-state actors – ranging from hacktivist groups and cybercriminal syndicates to ideological extremists and private contractors – who operate outside traditional governmental structures.

“The speed of attacks is dramatically increasing,” stressed engineer Abdulrahman Ali Muhammad al-Farahid al-Malki, president of the National Cyber Security Agency of Qatar.

He explained that attacks are not only quicker to execute but also utilise existing tools within target systems, making detection significantly more challenging. He cited the growing prevalence of ‘cybercrime as a service,’ noting a disturbing trend of malicious actors offering their services for hire.

Al-Malki said that in Qatar, private sector companies are sharing cyber security information with the government, demonstrating their support for efforts to protect infrastructure and information.

He pointed to AI, particularly generative AI tools like ChatGPT, as a key technology being exploited by these actors. “Now we’re seeing a lot of tools similar to ChatGPT used for the attack,” he said, as he also flagged the growing use of deepfakes for creating realistic but entirely fabricated content aimed at manipulating public opinion or generating profit.

Anjana Rajan, former Assistant National Cyber Director at the White House, acknowledged the double-edged nature of AI, saying: “We’re very bullish on the opportunities that come with AI; we are not naive about the risks”.

She added that the US government is committed to leading in AI innovation while remaining extremely aware of its potential for weaponisation.

Dr Marc Owen Jones, Associate Professor of Media Analytics at Northwestern University in Qatar, criticised the inaction of major online platforms, citing their failure to implement readily available technologies to effectively combat deepfakes and disinformation.

He argued that recent cuts in US federal funding for disinformation research signalled a “renunciation of responsibility” in addressing this critical threat.

Adam Hadley, founder and executive director of Tech Against Terrorism, expressed optimism about AI’s defensive potential but warned that non-state actors are currently “ahead of the defenders”. He stressed the urgent need for governments to invest in basic Internet infrastructure and skills to effectively tap AI for countering terrorism and other cyber threats.

“Governments, law enforcement, they often lack the fundamental tools and skills just to look at the Internet,” Hadley said. “My concern is the significant delay in adopting this technology”.

The panel, moderated by Defense One’s technology and science editor Patrick Tucker, underlined the importance of enhanced international collaboration, increased information sharing between governments and the private sector, and a proactive approach to technological innovation to combat the evolving threat landscape posed by AI-empowered non-state actors.