Job Summary
As AI capabilities rapidly advance, we face a fundamental knowledge gap: we don't yet fully understand the complex dynamics that determine whether AI systems, or even their individual capabilities, predominantly threaten or protect society. In this role, you'll lead research to decode these offense-defense dynamics, examining how specific attributes of AI technologies influence their propensity to either enhance societal safety or amplify risks. You'll apply interdisciplinary methods to develop quantitative and qualitative frameworks that analyze how AI capabilities proliferate through society as either protective or harmful applications, producing actionable insights for developers, evaluators, standards bodies, and policymakers to anticipate and mitigate risks. This position offers a unique opportunity to shape how society evaluates and governs increasingly powerful AI systems, with direct impact on global efforts to maximize AI's benefits while minimizing risks. This role is 100% remote but requires occasional travel.
About CARMA
The Center for AI Risk Management & Alignment (CARMA) works to help society navigate the complex and potentially catastrophic risks arising from increasingly powerful AI systems. Our mission is specifically to lower the risks to humanity and the biosphere from transformative AI.
We focus on grounding AI risk management in rigorous analysis, developing policy frameworks that squarely address AGI, advancing technical safety approaches, and fostering global perspectives on durable safety. Through these complementary approaches, CARMA aims to provide critical support to society for managing the outsized risks from advanced AI before they materialize.
CARMA is a fiscally-sponsored project of Social & Environmental Entrepreneurs, Inc., a 501(c)(3) nonprofit public benefit corporation.
Responsibilities
• Develop quantitative system dynamics models capturing the interrelationships between technological, social, and institutional factors that influence AI risk landscapes
• Design detailed analytical models and simulations to identify critical leverage points where policy interventions could shift offense-defense balances toward safer outcomes
• Expand and operationalize our current offense-defense dynamics taxonomy and nascent framework, developing metrics and models to predict whether specific AI system features favor offensive or defensive applications
• Build empirically-informed analytical frameworks using documented cases of AI misuse and beneficial deployed uses to validate theoretical models
• Research how specific technical characteristics (capabilities breadth/depth, accessibility, adaptability, etc.) interact with sociotechnical contexts to determine offense-defense balances
• Build public understanding of offense-defense dynamics through blog posts, articles, conference talks, and media engagement
• Create tools and methodologies to assess new AI models upon release for their likely offense-defense implications
• Draft evidence-based guidance for AI governance that accounts for complex interdependencies between technological capabilities and deployment contexts
• Translate research findings into actionable guidance for key stakeholders including policymakers, AI developers, security professionals, and standards organizations
Requirements
• An M.Sc. or higher in Computer Science, Cybersecurity, Criminology, Security Studies, AI Policy, Risk Management, or a related field
• Demonstrated experience with complex systems modeling, risk assessment methodologies, or security analysis
• Strong understanding of dual-use technologies and the factors that influence whether capabilities favor offensive or defensive applications
• Deep understanding of modern AI systems, including large language models, multimodal models, and autonomous agents, with ability to analyze their technical architectures and capability profiles
• Experience in any of the following: security mindset, security studies research, cybersecurity, safety engineering, AI governance, operational risk management, system dynamics modeling, network theory, complexity science, adversarial analysis, or technical standards development
• Ability to develop both qualitative frameworks and quantitative models that capture sociotechnical interactions, including comfort building semi-quantitative, semi-empirical models grounded in logic
• Record of relevant publications or research contributions related to technology risk, governance, or security
• Exceptional analytical thinking with ability to identify non-obvious path dependencies and feedback loops in complex systems
Pluses
• PhD in a relevant field
• Experience with system dynamics modeling, hypergraph techniques, or other complex network analysis methods
• Skills in developing interactive tools or dashboards for risk visualization and communication
• Background in interdisciplinary research bridging technical and social science domains
• Demonstrated aptitude for top-down techniques and first-principles thinking
• Experience with the quantification of qualitative risk factors or developing proxy metrics for complex phenomena
• Background in compiling and analyzing incident databases or case studies for pattern recognition
• Familiarity with empirical approaches to technology assessment and impact prediction
• Knowledge of international relations theory as it applies to technology proliferation dynamics
CARMA/SEE is proud to be an Equal Opportunity Employer. We will not discriminate on the basis of race, ethnicity, sex, age, religion, gender reassignment, partnership status, maternity, or sexual orientation. We are, by policy and action, an inclusive organization and actively promote equal opportunities for all candidates with the right mix of talent, knowledge, skills, attitude, and potential; hiring is based solely on individual merit for the job. Our organization operates through a fiscal sponsor whose infrastructure only supports employment of persons authorized to work in the U.S. Candidates outside the U.S. would be engaged as independent contractors with project-focused responsibilities. Note that we are unable to sponsor visas at this time.