Housam Babiker

PhD in Computing Science
University of Alberta
Housam Babiker received his PhD in Computing Science from the University of Alberta, Canada. During his doctoral studies, he developed explainable AI techniques for deep learning, mainly in the natural language processing (NLP) domain, working in the Explainable Artificial Intelligence (XAI) lab under the supervision of Prof. Randy Goebel. Housam's research interests include explainable AI, deep learning, reinforcement learning, and their applications to real-world problems.

News

Research

Research on making deep learning and deep reinforcement learning more interpretable and explainable is receiving much attention, largely because deep learning models are increasingly applied in high-stakes domains. Explanations also serve as a tool for debugging models, helping to improve performance, uncover new insights, and guide compression and distillation. In general, interpretability is an essential component of deploying deep learning models. In my doctoral research, I worked on the explainability and robustness of deep learning, primarily in the NLP domain. My general research interests include Explainable AI, Deep Learning, and Foundation Models for Decision Making, such as Transformers and Transfer Learning.

Research Articles

Selected Awards

  • Highest performance on Task 4 of the COLIEE Competition, Japan, 2018.
  • GSA Travel Award, University of Alberta, Canada, 2017.
  • Full Doctoral Scholarship: Awarded by the Computing Science Department of the University of Alberta, Canada, 2016.