Housam Babiker
PhD in Computing Science University of Alberta
Housam Babiker received his PhD in Computing Science from the University of Alberta, Canada. During his doctoral studies, he developed explainable AI techniques for deep learning, primarily in the natural language processing (NLP) domain, working in the Explainable Artificial Intelligence (XAI) lab under the supervision of Prof. Randy Goebel. Housam's research interests include explainable AI, deep learning, reinforcement learning, and their applications to real-world problems.
Research on making deep learning and deep reinforcement learning more interpretable and explainable is receiving much attention. One of the main reasons is the application of deep learning models to high-stakes domains. Explanations also serve as a proxy for debugging models, helping us improve performance and gain new insights, and they can further guide model compression and distillation. In general, interpretability is an essential component for deploying deep
learning models. In my doctoral research, I worked on the explainability and robustness of deep learning, primarily in the NLP domain. My general research interests include explainable AI, deep learning, and foundation models for decision making, such as Transformers and transfer learning.