Research Engineer

FAR.AI

Posted

Dec 29, 2025

Location

Remote

Type

Full-time

Compensation

$100,000 - $190,000

Mission

What you will drive

FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response. Founded in July 2022, we have grown quickly to 30+ staff. We are uniquely positioned to conduct technical research at a scale surpassing academia while leveraging the research freedom of being a non-profit. Our work is published at top conferences (e.g. NeurIPS, ICLR, ICML) and cited by leading media outlets such as the Financial Times, Nature News and MIT Technology Review.

FAR.AI uses three prongs working together to improve AI safety:

  • FAR.Research - we conduct cutting-edge AI safety research in-house and dispense grants to support the wider research community.

  • FAR.Futures - we bring together key policy makers, researchers and companies to drive change, such as the San Diego Alignment Workshop or the Guaranteed Safe AI research roadmap written with Yoshua Bengio.

  • FAR.Labs - we host a co-working space in Berkeley to help incubate other AI safety organizations, currently housing 40 members.

We explore promising research directions in AI safety and scale up only those showing high potential for impact. Once the core research problems are solved, we work to scale the solutions into a minimum viable prototype, demonstrating their validity to AI companies and governments to drive adoption.

We aim to rapidly grow our team, at varying levels of seniority, especially in the following areas:

  • Evals and red-teaming: Conducting pre- and post-release adversarial evaluations of frontier models (e.g. Claude 4 Opus, ChatGPT Agent, GPT-5); developing novel attacks to support this work; and exploring new threat models (e.g. persuasion, tampering risks).

  • Infrastructure: Maintaining GPU compute infrastructure to support experiments with open-weight models and developing new tooling to allow our research teams to scale their fine-tuning and post-training workflows to frontier open-weight models.

We are also seeking more senior candidates in the following research areas:

  • Mitigating AI deception: Studying when lie detectors induce honesty or evasion, and developing model organisms for deception and sandbagging.

  • Adversarial Robustness: Working to rigorously solve security problems by building a science of security and robustness for AI, from demonstrating that superhuman systems can be vulnerable, to scaling laws for robustness and jailbreaking constitutional classifiers.

  • Mechanistic Interpretability: Finding issues with Sparse Autoencoders, probing deception using AmongUs, understanding learned planning in Sokoban, and interpretable data attribution.

FAR.AI is one of the largest independent AI safety research institutes, and is rapidly growing with the goal of diversifying and deepening our research portfolio. If you are a senior researcher with a strong vision, we would welcome the opportunity to add new research directions and invite you to pitch us on them.

We organize our team as Members of Technical Staff, with significant overlap between scientist and engineer roles. As an engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You can contribute to open-source codebases such as PyTorch, Hugging Face Transformers and Accelerate. You will receive engineering mentorship via code review, pair programming and regular 1-to-1s. Alongside the scientists, you will be involved in writing up results and will be credited as an author on papers.

You are encouraged to develop your research taste, proposing novel directions and joining a research pod that suits your interests. You are welcome to take time to study and to attend conferences free of charge. Our technical team is organized into research pods to enable continuity of organizational structure, while each pod can pivot through varied research projects.

Beyond FAR.AI, you can work with national AI safety institutes, frontier model developers and top academics.

It is essential that you:

  • Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.

  • Have experience with at least one object-oriented programming language (preferably Python).

  • Are results-oriented and motivated by impactful research.

It is preferable that you have experience with some of the following:

  • Common ML frameworks like PyTorch or TensorFlow.

  • Natural language processing or reinforcement learning.

  • Operating system internals and distributed systems.

  • Publications or open-source software contributions.

  • Basic linear algebra, calculus, probability, and statistics.

We encourage applications from strong software engineers who are new to ML, and from academics without industrial experience in software engineering.

If based in the USA, you will be an employee of FAR.AI, a 501(c)(3) research non-profit. Outside the USA, you will be an employee of an Employer of Record (EoR) organization on behalf of FAR.AI.

  • Location: Both remote and in-person (Berkeley, CA) are possible. We also expect to open an in-person Singapore office in Q3 2026 as an alternative location. We sponsor visas for in-person employees, and can also hire remotely in most countries.

  • Hours: Full-time (40 hours/week).

  • Compensation: $100,000-$190,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.

  • Application process: A 72-minute programming assessment, two interviews with members of our technical staff, and a paid work trial lasting up to one week. If you are not available for a work trial, we may be able to find alternative ways of assessing your fit.

If you have any questions about the role, please get in touch at [email protected].

Otherwise, the best way to ensure a proper review of your skills and qualifications is to apply directly via the application form. Please don't email us to share your resume (it won't have any impact on our decision). Thank you!
