🛡️  AI Safety & Alignment Laboratory

News

Breakthrough · 1,000-Qubit Coherence Achieved
Careers · 12 Postdoc Positions Open
Award · Best Paper IEEE S&P 2025
Partnership · CERN Collaboration
Event · AI Safety Summit June 2025
Grant · $18M NSF Award Green Computing
Publication · Nature Photonics All-Optical NN

Mission & Focus

About the Laboratory

Ensuring that powerful AI systems reliably pursue intended goals. Research spans RLHF, constitutional methods, oversight scalability, and formal specification of human values.

Active Projects

Current Research

Constitutional AI at Scale · Active · 2025
Scalable Oversight · Active · 2025
Debate & Amplification · Active · 2025
Value Alignment Benchmarks · Active · 2025

Research Team

Lab Members

Dr. Dominick Rizk · Lab Director
Mr. Sandeep Shiraskar · Research Assistant

Recent Publications

Selected Papers

Constitutional AI at Scale: A Comprehensive Study
Rizk, D. et al. · Top Venue 2025 · ★ Best Paper
Scalable Oversight: Theory & Empirical Evaluation
Research Team · Flagship Journal 2025
Foundations of ASAL: Survey and New Directions
Lab Members · Survey Paper 2024
Debate & Amplification: Experimental Results
Collaborative Work · Conference 2024
All Publications →
Join This Lab →