
👁️  Language, Vision & Multimodal Lab


Breakthrough · 1,000-Qubit Coherence Achieved
Careers · 12 Postdoc Positions Open
Award · Best Paper IEEE S&P 2025
Partnership · CERN Collaboration
Event · AI Safety Summit June 2025
Grant · $18M NSF Award, Green Computing
Publication · Nature Photonics, All-Optical NN

About the Laboratory

Unified representations across language, vision, audio, and structured data. Research on cross-modal grounding, zero-shot transfer, and embodied language understanding.

Current Research

Unified Multimodal Pretraining · Active · 2025
Visual Reasoning · Active · 2025
Audio-Language Grounding · Active · 2025
Embodied Language Agents · Active · 2025

Lab Members

Dr. Dominick Rizk · Lab Director
Mr. Sandeep Shiraskar · Research Assistant

Selected Papers

Unified Multimodal Pretraining: A Comprehensive Study · Rizk, D. et al. · Top Venue 2025 · ★ Best Paper
Visual Reasoning: Theory & Empirical Evaluation · Research Team · Flagship Journal 2025
Foundations of LVM: Survey and New Directions · Lab Members · Survey Paper 2024
Audio-Language Grounding: Experimental Results · Collaborative Work · Conference 2024