
Research on making deep learning and deep reinforcement learning more interpretable and explainable is receiving considerable attention. One of the main reasons is the application of deep learning models to high-stakes domains. Explanations can also serve as a proxy for debugging models, helping us improve performance and uncover new insights, and as a proxy for model compression and distillation. In general, interpretability is an essential component of deploying deep learning models. In my doctoral research, I worked on the explainability and robustness of deep learning, primarily in the NLP domain. My general research interests include Explainable AI, Deep Learning, and Foundation Models for Decision Making, such as Transformers and Transfer Learning.