Policy Outreach Lead (EU and UK)
London, England
Job description
About Anthropic
You may be a good fit if you:
- Have a track record of effective, information-rich advocacy in European (especially EU) and/or UK policy and related communities
- Thrive when engaging with governments, policymakers, and civil society to translate complex technical topics into accessible information
- Know what resources will be useful for policymakers to understand and act on a particular issue area
- Have demonstrated success in identifying and convening relevant stakeholders
- Are motivated to better equip governments to understand the pace of progress in the AI field and develop effective policies
- Are adept at creating compelling narratives and real-world examples to support your advocacy efforts
- Enjoy thinking through the policy implications of technical developments and industry trends, and relating them to the interests and priorities of policymakers
- Have a deep curiosity for frontier technological research and are eager to work closely with technical colleagues
Strong candidates may also:
- Have extensive experience engaging leading policy actors in Brussels and/or London
- Have demonstrated an ability to quickly get up to speed on complex technical areas and policy dynamics
Sample projects:
- Brief a diverse set of policy and research actors who are looking to better understand AI safety and AI policy opportunities
- Convene diverse stakeholders and speak at policy events such as research announcements, discussion panels, or workshops on AI safety topics
- Support senior colleagues in external engagements by preparing meeting briefs and taking action on follow-up requests
- Develop new programming and generate opportunities for Anthropic to regularly engage with policymakers, think tanks, and non-profit organizations
- Collaborate with policy and technical teams to translate Anthropic research into concrete policy proposals, stakeholder education, and thought leadership opportunities
- Develop and amplify public submissions, responses to requests for information (RFIs), and policy memos written by Anthropic to drive positive change in AI policy
- Partner with non-profit organizations and academic researchers to publish discussion papers and policy memos on how to enable responsible AI research & development
Annual Salary (GBP)
- The expected salary range for this position is £190,000 - £215,000, or the local currency equivalent for candidates outside the UK.
Compensation and Benefits
Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.
Benefits - Benefits we offer include:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Private health, dental, and vision insurance for you and your dependents.
- Pension contribution (matching 4% of your salary).
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take 4-6 weeks each year, sometimes more!
- Health cash plan.
- Life insurance and income protection.
- Daily lunches and snacks in our office.
This compensation and benefits information is based on Anthropic's good faith estimate for this position, based in London, England, as of the date of publication, and may be modified in the future. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is determined by factors such as past work experience, relevant education, and performance in our interviews or in a work trial.
How we're different
The easiest way to understand our research directions is to read our recent publications. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!