Meridian Visiting
Researcher Programme
Transform your AI safety research ideas into funded projects in just one week
Apply by March 19, 2025, anywhere on Earth.
The Cambridge AI Safety Hub invites technical AI safety researchers to join our Visiting Researcher Programme in Cambridge from April 6-12, 2025.
This week-long intensive programme will help you develop compelling research projects while preparing proposals for Open Philanthropy's $40 million AI safety research initiative.
Why attend our one-week intensive programme?
Our structured programme is designed to take you from initial concept to submission-ready proposal in just seven days:
Team formation: Connect with researchers who share complementary skills and interests
Project development: Transform initial ideas into well-specified research proposals
Expert feedback: Receive evaluation and revision suggestions from AI safety specialists
Proposal refinement: Develop competitive applications for Open Philanthropy funding
For those who secure funding:
Researchers who obtain Open Philanthropy grants will be invited back to Cambridge to:
Work from dedicated space at Meridian Cambridge
Access Cambridge's AI safety academic environment and network
Receive research management support
Receive visa and lodging support to establish yourself in Cambridge over the summer
The details
Who should apply:
Researchers with expertise in machine learning, computer science, mathematics, or related fields
Graduates of programmes such as MATS, MARS, ARENA, ML4G, SPAR, and ERA with AI safety knowledge looking to transition to full-time research
Individuals with established research portfolios interested in transitioning to AI safety
PhD candidates, recent graduates, or postdoctoral researchers exploring AI safety directions
PIs interested in incorporating AI safety into their research agenda
People based in Cambridge, or excited to work there, who want to help strengthen Cambridge as a hub for AI safety research
Support included:
Partial or full reimbursement of travel expenses to and from Cambridge, UK (the exact amount will depend on individual circumstances and available funding)
Accommodation during the programme week
Workspace and resources
Meals during programme activities
Apply by March 19, 2025, anywhere on Earth.
Spaces are limited to approximately 30 participants.
Programme Structure
Days 1-2: Ideation and team formation
Explore research directions and meet potential collaborators through introductory sessions, workshops, and our speed co-founder matching process. You'll connect with researchers whose skills complement yours and begin developing project ideas together.
Day 3: Team consolidation
Form your research team and refine your project focus. By the end of day three, you'll have a clear concept and research direction with your newly formed team.
Days 4-5: Proposal development
Learn effective grant-writing strategies from experts and develop your proposal with structured feedback. Sessions will cover research impact assessment and alignment with funder priorities.
Day 6: Red teaming and feedback
Present your proposal to peers and experts who will identify potential weaknesses. Use this critical feedback to strengthen your proposal's technical content and presentation.
Day 7: Proposal finalization
Polish your submission with final technical reviews. By day's end, you'll have a completed proposal ready for the April 15th deadline.
FAQs
What is the purpose of this programme?
The programme is designed to fill a gap between upskilling programmes and full-time research positions for those transitioning into the AI safety field. It provides an institutional home and supportive environment for researchers, and aims to help them develop compelling research proposals for further funding.
How is this different from upskilling programmes?
Unlike typical upskilling programmes, this programme specifically targets researchers who already have AI safety knowledge but need help transitioning to full-time research roles. It focuses on team formation, project development, and grant writing rather than teaching technical skills.
Why the emphasis on teams?
Collaboration is an extremely effective learning mechanism. Researchers who might otherwise work independently benefit significantly from colleagues they can learn from and who can provide rapid feedback and coaching. Research teams typically write much stronger project proposals than individuals working alone.
How are participants selected?
Selection is based on a review of CVs and prior research experience, evaluation of preliminary project proposals, and assessment of candidates' motivations for entering the AI safety field.
When will I hear back?
Selection decisions are made on a rolling basis; we encourage those who will need more time to make travel arrangements to apply earlier. We aim to notify all applicants by March 24th at the latest.
What expenses are covered?
The programme will reimburse some or all travel expenses, provide accommodation for the duration of the programme week, and cover meals and refreshments during programme activities.
Can I apply as an individual?
You can apply either as an individual or with a pre-formed team. For individuals, the programme encourages team formation during the in-person week, with ideal research teams consisting of 2-4 members. If you apply as an individual, you’ll have opportunities to connect with other participants and form a team through our structured matching process. You are also welcome to submit your final proposal individually if you do not find a suitable team.
What happens after the programme week?
After the in-person week, teams will submit their proposals to Open Philanthropy by the April 15th deadline. Funding decisions are expected in May-June 2025, with funded projects commencing from June 2025 onwards.
What if our proposal is not funded?
In the case that grants submitted to Open Philanthropy are unsuccessful, we will be happy to continue working with teams to refine their proposals and submit to alternative funding sources.
Where will funded research take place?
Funded teams are encouraged to return to Cambridge to conduct their funded research at the Meridian office and CAISH facilities. We will assist with visas and relocation for the research period.
What research areas are in scope?
Open Philanthropy has identified 21 research areas of interest across five clusters: Adversarial machine learning, Exploring sophisticated misbehavior in LLMs, Model transparency, Trust from first principles, and Alternative approaches to mitigating AI risks. Their RFP provides detailed descriptions of these areas.
What support do funded teams receive?
Funded teams will have access to dedicated workspace at the Meridian office, integration with Cambridge's AI safety research community, optional research management support, and administrative assistance for grant management.
Who can I contact with questions?
If you have further questions, please contact Gábor Fuisz at gabor@cambridgeaisafety.org