Current jobs related to AI Safety Research Scientist - London, Greater London - Department for Science, Innovation & Technology


  • London, Greater London, United Kingdom AI Safety Institute Full time

    Join Our Team as a Research Scientist in AI Safety. The AI Safety Institute research unit is seeking highly motivated and talented Research Scientists to work on critical areas of AI safety, including risk models, frontier models, and large-scale targeted manipulation and deception. Key Responsibilities: Conduct research on AI safety and risk models, including...


  • London, Greater London, United Kingdom AI Safety Institute Full time

    Join Our Team as a Research Scientist in AI Safety. We're a team of scientists, engineers, and domain experts at the AI Safety Institute, focused on mitigating the risks associated with autonomous AI systems. Our mission is to advance the state of the science in risk modeling, incorporating insights from safety-critical and adversarial domains, while...


  • London, Greater London, United Kingdom AI Safety Institute Full time

    The AI Safety Institute is seeking a highly motivated and talented Research Scientist to join our team in the area of AI safety research. The successful candidate will work on studying, evaluating, and recommending mitigations for extreme risks from autonomous AI systems. Key Responsibilities: Conduct research on AI safety and risk mitigation; develop and...


  • London, Greater London, United Kingdom AI Safety Institute Full time

    Join Our Research Team. The AI Safety Institute is seeking highly motivated and talented Research Scientists to work on critical projects related to AI safety. Our team is dedicated to advancing the field of AI safety research and developing innovative solutions to mitigate risks associated with autonomous AI systems. Key Responsibilities: Conduct research on AI...


  • London, Greater London, United Kingdom AI Safety Institute Full time

    We're pushing the boundaries of AI safety research at the AI Safety Institute. As a research scientist, you'll be part of a dynamic team exploring the risks of autonomous AI systems. Your expertise will help us advance the state of the science in risk modeling, incorporating insights from safety-critical and adversarial domains. You'll work closely with...

  • Research Scientist

    1 day ago


    London, Greater London, United Kingdom AI Safety Institute Full time

    Job Summary: We are seeking a highly skilled Research Scientist to join our Science of Evaluations team at the AI Safety Institute. As a Research Scientist, you will play a key role in conducting applied and foundational research focused on the measurement of frontier AI system capabilities. Key Responsibilities: Develop and apply rigorous scientific techniques...

  • AI Safety Researcher

    1 month ago


    London, Greater London, United Kingdom AI Safety Institute Full time

    Join the AI Safety Institute Team. AISI is launching a new Mechanistic Interpretability team to research a fundamental question: how can we tell if a model is scheming? This is an ambitious bet to bring interpretability as a field into prime time. We believe this is a vital challenge that mechanistic interpretability can help solve, ensuring that...

  • Research Scientist

    2 weeks ago


    London, Greater London, United Kingdom AI Safety Institute Full time

    Unlock the Secrets of AI Safety. AISI is pioneering a groundbreaking Mechanistic Interpretability team to tackle a fundamental question: how can we ensure AI models are safe and transparent? This ambitious project aims to bring interpretability to the forefront, ensuring that AI systems can be reliably evaluated for safety and alignment. We're seeking a...


  • London, Greater London, United Kingdom AI Safety Institute Full time

    Job Title: Machine Learning Research Scientist. Join the AI Safety Institute as a Machine Learning Research Scientist and contribute to the development of safety cases for AI systems. As a key member of our research team, you will conduct foundational research to advance the understanding of AI safety and governance. About the Role: We are seeking a highly...


  • Research Scientist

    4 weeks ago


    London, Greater London, United Kingdom AI Safety Institute Full time

    AI Safety Institute: Launching a New Mechanistic Interpretability Team. AISI is embarking on a groundbreaking project to develop mechanistic interpretability, a crucial challenge in ensuring the safety of AI systems. We seek a team lead, research scientists, and research engineers to join our mission. Key Responsibilities: Conduct hands-on mechanistic...

AI Safety Research Scientist

2 months ago


London, Greater London, United Kingdom Department for Science, Innovation & Technology Full time

Position Overview

About the Department for Science, Innovation & Technology

The Department for Science, Innovation & Technology is dedicated to advancing the responsible development of artificial intelligence for the benefit of society. Our mission is to foster a safe and secure AI landscape through rigorous research and collaboration with experts across various sectors.

We are building a team of exceptional talent to address the challenges posed by advanced AI systems, and we are looking for individuals who can contribute to our vision of AI safety and governance.

Key Responsibilities:

  • Conduct Evaluative Research: You will engage in foundational research aimed at enhancing our understanding of AI safety cases, contributing to the development of frameworks that ensure AI systems operate safely.
  • Innovate AI Governance Tools: Your role will involve creating practical methodologies to assess the societal impacts of AI technologies and develop strategies for effective governance.
  • Facilitate Collaboration: Establish and maintain channels for information exchange with national and international stakeholders, including policymakers and research organizations.

What We Value:

  • Diversity of Thought: We recognize that diverse perspectives are crucial for innovation and invite individuals from all backgrounds to contribute to our mission.
  • Team Collaboration: We prioritize teamwork and open communication, valuing contributions from all team members.
  • Impact-Driven Innovation: We are committed to making a tangible difference in AI safety and encourage creative problem-solving.
  • Inclusive Culture: We strive to create an environment where all employees feel valued and empowered to express their authentic selves.

Role Description:

As an AI Safety Research Scientist, you will be at the forefront of research aimed at developing robust safety cases for AI systems. Your work will involve exploring how safety cases can be structured to mitigate risks associated with AI deployment.

Safety cases serve as essential frameworks in various industries, providing structured arguments that demonstrate the safety of a system in specific contexts. In the rapidly evolving field of AI, these cases are expected to play a pivotal role in ensuring the responsible use of technology.

Your contributions will include both direct research and collaboration with external experts, focusing on critical safety agendas and evaluations. You will have the opportunity to work alongside leading professionals in the field, contributing to the overall strategy and vision of the safety case initiative.

Core Qualifications:

  • Experience in machine learning research within industry, academia, or relevant open-source projects.
  • Strong understanding of technical safety methodologies.
  • Excellent writing and communication skills.
  • Ability to work independently and adapt to a dynamic environment.
  • Familiarity with large language models and their applications.

Additional Considerations:

  • Experience working in interdisciplinary teams and a strong publication record at relevant conferences are advantageous.

Working Conditions:

  • Flexibility to work remotely, combined with a commitment to regular in-office collaboration.
  • Access to professional development opportunities and a supportive work culture.

We are looking for individuals who are passionate about advancing AI safety and governance, and who are eager to contribute to meaningful research that shapes the future of technology.