I am a Hartz Family Career Development Assistant Professor in the Department of Computer Science and Engineering at The Pennsylvania State University. Previously, I was a postdoc in the Henzinger Group at the Institute of Science and Technology (IST) Austria. Before joining IST, I completed my PhD at the University of Texas at Austin, advised by Prof. Swarat Chaudhuri. My research lies at the intersection of machine learning and formal methods, with a focus on building intelligent systems that are reliable, transparent, and secure. This work builds connections between the symbolic reasoning and inductive learning paradigms of artificial intelligence.

Research Overview

My research combines ideas from formal methods and machine learning to efficiently build models that are reliable, transparent, and secure. Such systems can be expected to learn desirable behaviors from limited data while provably maintaining essential correctness invariants, and to produce models whose decisions can be understood by humans. I believe we can achieve these goals through neurosymbolic learning.

Machine learning today is dominated by deep neural networks, largely because they can leverage gradient-based algorithms to optimize a given objective. However, neural models are "black boxes" that are often considered untrustworthy due to the following drawbacks:
  1. Hard to interpret: this makes the models difficult to audit and debug.
  2. Hard to formally verify: lacking abstractions, neural models are often too large to verify for desirable behavior using automated reasoning tools.
  3. Unreliable: neural models exhibit notoriously high variability, to the extent that the random initialization of the weights can determine whether the learner finds a useful model.
  4. Lack of domain awareness: neural models offer no natural way to bias the learner with commonsense knowledge about the task or environment.
My research focuses on addressing these four drawbacks simultaneously, providing a promising path toward new algorithmic techniques for Trustworthy Artificial Intelligence.




PhD Thesis

Programmatic Reinforcement Learning


Teaching

  • (Fall 2022) CSE 597: Neurosymbolic Learning
  • (Spring 2023) CMPSC 448: Machine Learning and Algorithmic AI