I am a Hartz Family Career Development Assistant Professor in the Department of Computer Science and Engineering at The Pennsylvania State University. Previously, I was a postdoc at the Institute of Science and Technology Austria (IST Austria) in the Henzinger Group. Before joining IST Austria, I completed my PhD at the University of Texas at Austin, advised by Prof. Swarat Chaudhuri. My research lies at the intersection of machine learning and formal methods, with a focus on building intelligent systems that are reliable, transparent, and secure. This work builds connections between the symbolic reasoning and inductive learning paradigms of artificial intelligence.
Research Overview
My research combines ideas from formal methods and machine learning to efficiently build models that are reliable, transparent, and secure. Such systems can be expected to learn desirable behaviors from limited data, provably maintain essential correctness invariants, and produce models whose decisions can be understood by humans. I believe that we can achieve these goals via neurosymbolic learning.
Machine learning today is dominated by deep neural networks, largely because they can leverage gradient-based algorithms to optimize a given objective. However, neural models are "black boxes" and are often considered untrustworthy due to the following drawbacks:
- Hard to interpret: this makes the models difficult to audit and debug (see the sketch after this list).
- Hard to formally verify: because neural models lack useful abstractions, they are often too large for automated reasoning tools to verify desirable behavior.
- Unreliable: neural models have notoriously high variability, to the extent that the random initialization of the weights can determine whether the learner finds a useful model.
- Lack of domain awareness: neural learners offer no natural way to bias training with commonsense knowledge about the task or environment.
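To make the interpretability contrast concrete, here is a minimal, purely illustrative sketch of the kind of "programmatic" policy studied in several of the papers below. The environment, names, and gains are hypothetical, not taken from any of the publications:

```python
# Illustrative sketch only: a programmatic policy in the spirit of
# neurosymbolic RL. Obs, pid_policy, and the gains are hypothetical.
from dataclasses import dataclass

@dataclass
class Obs:
    position: float  # deviation from lane center
    velocity: float  # lateral velocity

def pid_policy(obs: Obs, kp: float = 0.8, kd: float = 0.3) -> float:
    """A human-readable controller: steer against the deviation.

    Unlike a neural policy, every decision can be audited by reading two
    lines of arithmetic, and a bound such as
    |action| <= kp*|position| + kd*|velocity| follows by inspection.
    """
    return -(kp * obs.position + kd * obs.velocity)

if __name__ == "__main__":
    print(pid_policy(Obs(position=0.5, velocity=-0.1)))  # -> -0.37
```

The point of such a policy class is that the numeric parameters can still be fit with gradient-based or imitation-based methods, while the symbolic structure stays small enough to read, audit, and verify.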
Publications
- Eventual Discounting Temporal Logic Counterfactual Experience Replay
Cameron Voloshin, Abhinav Verma, Yisong Yue
International Conference on Machine Learning (ICML), 2023.
ArXiv
- Neurosymbolic Reinforcement Learning with Formally Verified Exploration
Greg Anderson, Abhinav Verma, Isil Dillig, Swarat Chaudhuri
Conference on Neural Information Processing Systems (NeurIPS), 2020.
ArXiv Code
- Learning Differentiable Programs with Admissible Neural Heuristics
Ameesh Shah, Eric Zhan, Jennifer J Sun, Abhinav Verma, Yisong Yue, Swarat Chaudhuri
Conference on Neural Information Processing Systems (NeurIPS), 2020.
ArXiv Code Video
- Imitation-Projected Programmatic Reinforcement Learning
Abhinav Verma, Hoang M. Le, Yisong Yue, Swarat Chaudhuri
Conference on Neural Information Processing Systems (NeurIPS), 2019.
ArXiv Code Video
- Control Regularization for Reduced Variance Reinforcement Learning
Richard Cheng, Abhinav Verma, Gabor Orosz, Swarat Chaudhuri, Yisong Yue, Joel W. Burdick
International Conference on Machine Learning (ICML), 2019.
ArXiv Code Video
- Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks
Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel
International Conference on Learning Representations (ICLR), 2019.
ArXiv
- Programmatically Interpretable Reinforcement Learning
Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, Swarat Chaudhuri
International Conference on Machine Learning (ICML), 2018.
ArXiv Video
PhD Thesis
Programmatic Reinforcement Learning
Teaching
- (Fall 2022) CSE 597: Neurosymbolic Learning
- (Spring 2023) CMPSC 448: Machine Learning and Algorithmic AI