Usman Gohar
CS Ph.D. Candidate @ Iowa State University

Hi! I am Usman, a final-year Ph.D. candidate and F. Wendell Scholar in Computer Science at Iowa State University, advised by Dr. Robyn Lutz in the Laboratory for Software Safety. My research interests lie at the intersection of Machine Learning, Software Engineering, AI Safety, and Ethics, with a focus on addressing algorithmic harms and advancing responsible AI practices. I currently serve as the Section Lead for “Harms to Individuals through AI-Generated Fake Content” for the 2026 International AI Safety Report, a global initiative chaired by Turing Award winner Yoshua Bengio. Led by the UK AI Safety Institute (AISI), the initiative unites over 100 leading experts, governments, and organizations to assess the frontier risks posed by advanced AI systems.
I develop robust, practical approaches to operationalize AI and software safety in complex, data-driven systems, including advancing fairness, mitigating harms from machine learning and AI, and improving safety assurance for autonomous systems (e.g., drones). I have published peer-reviewed research in leading venues in software engineering, machine learning, and AI ethics.
My research focuses on three key areas:
- Operationalizing AI and Software Safety: Developing practical frameworks and methodologies to apply system safety principles in complex, data-driven AI systems. My work includes advancing safety assurance approaches for autonomous drones and evaluating risks associated with generative AI systems.
- Algorithmic Fairness and Harm Mitigation: Investigating how harms from machine learning models manifest and how fairness can be systematically measured and improved. I analyze existing fairness metrics and propose improvements to reduce bias and unfairness.
- Evaluation and Deployment of Safe AI: Designing transparent evaluation techniques that bridge theoretical AI safety concepts with real-world deployment challenges, focusing on creating actionable tools and guidelines for reliable, fair, and safe AI systems.
Previously, I worked as a Data Scientist in sectors including agriculture, manufacturing, and power systems, specializing in forecasting, predictive analytics, and model deployment. I am passionate about translating foundational AI research into practical applications.
News + Updates
Date | News |
---|---|
Sep 2024 | Excited to announce that I am co-organizing the NeurIPS 2024 Workshop “Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI” (EvalEval 2024) with the brilliant Irene Solaiman and Zeerak Talat at Hugging Face. The call for papers will be out soon. See you at NeurIPS! |
Sep 2024 | Invited to be on the Program Committee for AAAI 2025! |
Aug 2024 | My work “CoDefeater: Using LLMs To Find Defeaters in Assurance Cases” has been accepted at ASE (NIER) 2024! We evaluate the use of LLMs to assist in red-teaming by identifying defeaters and simulating diverse failure modes. See you in Sacramento! |
Jul 2024 | Invited to be part of ICSE 2025 Shadow Program Committee! |
Jul 2024 | Excited to announce that our paper “Evaluating the Social Impact of Generative AI Systems in Systems and Society”, with Irene Solaiman, Zeerak Talat, and other fantastic researchers, has been accepted to appear as a book chapter in Hacker, Engel, Hammer, and Mittelstadt (eds.), Oxford Handbook on the Foundations and Regulation of Generative AI, Oxford University Press. |
May 2024 | Invited to be on the Program Committee for AIES 2024! |
Apr 2024 | Our work “A Family-Based Approach to Safety Cases for Controlled Airspaces in Small Uncrewed Aerial Systems” has been accepted at AIAA’24! |
Mar 2024 | Invited to be an Ethics Reviewer for ICML 2024! |
Mar 2024 | Invited to be on the Program Committee for TrustNLP: Fourth Workshop on Trustworthy Natural Language Processing at NAACL’24! |