Rising up to Defend Against Cyber Threats

Wednesday 13 November, 2024

by Ben Kwong (California & St Edmund Hall 2023)


Atharv Gupta (Virginia & Exeter 2023) opened with a somber question to the panelists: ‘What cyber threats keep you up at night?’ The breadth of the panelists’ answers was alarming. Ahmed Achechak, CEO and Co-founder of Qevlar AI, explained that, while not all cyber threats are equally concerning, the key challenge lies in the fact that ‘in essence, protection is much more difficult than harming’. For Francesca Bosco, Head of Foresight at the CyberPeace Institute, the biggest threat is the ‘convergence factor’: a world in which cyber and physical risks converge, one that becomes more realistic each day as climate change, supply chain uncertainty, and geopolitical risks grow more prevalent.

What is unfolding across the world today reinforces the credibility of these threats. Major firms are failing to defend their data amidst the vast noise. Nations in conflict are increasingly searching for ways to converge cyber and physical attacks against their enemies. Yet, we should not give up hope. Ahmed pointed out that artificial intelligence can help level the playing field by making protection a more feasible option, and that it has the potential to expand the investigative capabilities of cyber defense.

On the regulatory side, an important debate is unfolding in governments across the world on how to properly address cyber threats, especially in the context of AI. Fola Adeleke, Executive Director at the Global Center for AI Governance, recognized the need for large datasets to train AI to defend against major risks, but he argued that firms shouldn’t be expected to provide all their data. Fola’s organization created the Global Index for AI Responsibility, which measures governments’ commitment to the responsible deployment of AI. The creation of this index is perhaps a reflection of the fact that many nations have not been responsible in their deployment of AI. Michael Sexton, a Senior Policy Advisor for Cyber and AI at Third Way, described a deeper philosophical concern: humans’ ‘highly reactionary habits’ lull governments into poor long-term regulatory decisions.

Although the conversation was meant to center on cyber threats, the pervasiveness of AI was apparent in the audience’s questions, which raised valid concerns over the impact of AI on conflict and privacy. Our panelists offered valuable insights on both. On conflict, Michael argued that there needs to be a human in the loop of every targeting system. Further, Francesca pointed out that a lack of digital literacy is an unfortunate risk factor preventing lay people from understanding the potential impact of an autonomous weapons system. On privacy, Fola put it simply: ‘the best AI is no AI’. To him, we must resist deploying AI in certain areas in order to preserve privacy.

The coming decades are characterized by a high degree of uncertainty, and the cyber dimension is no different. Cyber threats pose a serious risk to global security and, if this expert panel made one thing clear, it is that we need to expand our conversations on the matter.
