Our Research

Below, we proudly showcase a small collection of research produced under Meridian programmes. Our research spans questions ranging from the responsible governance of AI to civilisational recovery and the future of warfare.

  • Civilisational recovery under endogenous fertility and productivity

    Duncan McClements (CERI Futures Fellowship 2024)

    McClements extends Alexandrie and Eden (2023) by applying the Barro-Becker model of endogenous fertility to estimate how populations recover after massive losses. He finds that recovery takes roughly 550 years largely regardless of the magnitude of the loss, although the shape of the recovery path does depend on the initial shock.
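
    For readers new to the framework, Barro-Becker preferences make fertility a choice made by altruistic parents. A minimal statement of the dynastic utility it rests on (our notation; the paper's exact specification may differ) is

        $$U_t = u(c_t) + a(n_t)\,n_t\,U_{t+1}, \qquad a(n_t) = \alpha\, n_t^{-\varepsilon}, \quad 0 < \varepsilon < 1,$$

    where $c_t$ is parental consumption, $n_t$ the number of children, and $a(n_t)$ the per-child altruism weight, which falls as family size rises. Fertility is endogenous because $n_t$ is chosen to maximise $U_t$, so the pace of repopulation emerges from the model rather than being assumed.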

  • Towards Safe Multilingual Frontier AI

    Arturs Kanepajs, Vladimir Ivanov, and Richard Moulange (ERA Fellowship 2024)

    The authors warn that while multilingual LLMs are vital for sharing AI's benefits globally, they are vulnerable to "multilingual jailbreaks", in which prompts are translated into other (often lower-resource) languages to bypass safety measures. They assess this risk across the EU's 24 official languages and recommend EU policies to enhance both AI safety and linguistic inclusivity.

  • Imagining the Aftermaths of a Superintelligent AI: The Case for a "Confined Oracle" Future

    Andrew K. Yang and Oona Lagercrantz (CERI Futures Fellowship 2024)

    The authors critique Max Tegmark's AGI future scenarios of "Libertarian Utopia," "Egalitarian Utopia," and "Protector God" as infeasible or unsafe. In their place they propose a "Confined Oracle" future, in which AGI is securely contained and consulted only for guidance, an approach they argue aligns with current AI safety research.

  • Artificial Intelligence and the Threat of Stable Totalitarianism

    Keir Reid (CERI Futures Fellowship 2024)

    Reid warns that AI tools such as facial recognition, censorship systems, and persuasive algorithms can help authoritarian regimes entrench themselves as enduring totalitarian states. Because such regimes suppress dissent and undermine democratic correction, he argues that AI-enabled totalitarianism poses an existential threat, and he urges global action to regulate AI and protect human rights.

  • Military AI & X-Risk: A Case Study of Autonomous Weapons Systems (AWS)

    Natasha Karner (CERI Futures Fellowship 2024)

    The paper explores how existential risks from AI apply to the military domain, focusing on Autonomous Weapons Systems (AWS). It identifies three key risks: misalignment, malevolence, and misperception. By examining recent uses of AI in armed conflict, it aims to bridge x-risk discussions and real-world military applications of AI.

  • Governing dual-use technologies: Case studies of international security agreements & lessons for AI governance

    Akash Wasil, Michael Gerovitch, Tom Reed, and Jack William Miller (ERA Fellowship 2024)

    The authors study international agreements governing dual-use technologies to draw lessons for AI governance aimed at reducing security risks. They extract lessons on verification, balancing power among parties, adapting to technological change, and enforcement.

  • Axiological Seedbanks: Cultivating Value Resilience in the Age of Transformative AI

    Jai Patel (CERI Futures Fellowship 2024)

    Patel proposes "axiological seedbanks" as a way to preserve the diversity of human values through AI development and to prevent premature value lock-in as AI advances. He suggests using virtual reality as a space for moral innovation and emphasises inclusive governance to keep AI aligned with evolving human values.