Currently, I’m working on the following projects. Feel free to drop by or send me emails if you’re interested in them.

  • Detecting poisoning attacks on ML models. We mainly focus on the statistical differences between poisoned and benign samples/models.
  • Security and privacy of federated learning (FL). FL interests us because it is distributed and vulnerable to insider attacks, while also being subject to performance and communication constraints. Together, these make it both challenging and important to meet security and system-design goals at the same time.
  • Federated IoT systems. We want to test-drive FL on IoT systems such as smart homes and intrusion detection systems.
  • Adapting AI paradigms such as contrastive learning, transfer learning, and multi-task learning to security applications such as user authentication and malware detection.
  • Practical differential privacy (DP/LDP) systems for real-world data such as mobile sensor data, indoor localization data, and medical data.
  • Designing privacy-aware NLP models for medical electronic health records (EHRs).
  • GNN-based causal inference for data stream applications such as prediction and recommendation.
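To make the first bullet concrete, here is a minimal sketch of the statistical-difference idea behind poisoning detection: samples whose per-example training loss deviates sharply from the cohort are flagged as suspicious. The z-score test, the threshold, and the synthetic loss values below are illustrative assumptions for exposition, not the project's actual detector.

```python
import random
import statistics

def flag_suspicious(losses, z_threshold=3.0):
    """Flag indices whose loss deviates strongly from the cohort mean.

    Poisoned samples often exhibit atypical loss statistics; this simple
    z-score test stands in for more sophisticated detectors.
    """
    mean = statistics.fmean(losses)
    stdev = statistics.stdev(losses)
    return [i for i, loss in enumerate(losses)
            if abs(loss - mean) > z_threshold * stdev]

# Synthetic example: 100 benign losses plus two extreme outliers.
random.seed(0)
benign = [random.gauss(0.5, 0.05) for _ in range(100)]
poisoned = [5.0, 6.2]  # unusually high losses, as poisoned samples often show
suspects = flag_suspicious(benign + poisoned)
```

In this toy setup, only the two appended outliers (indices 100 and 101) exceed the threshold; real detectors replace the raw loss with richer per-sample or per-model statistics.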


I’m very lucky to work with the following talented students. Join us if you are interested in our projects!

  • PhD students
    • Shixiong Li (BS/MS from Southwest Jiaotong Univ.)
    • Xingyu Lyu (BS from Shanghai Univ., MS from Guangzhou Univ.)
  • Master’s students
    • Manita Ngarmpaiboonsombat (Model stealing attacks, UML)
    • Poornika Bonam (Private NLP, UML)
    • Yang Hu (Adversarial example attacks, VT)
  • Undergraduate students
    • Jared Q. Widberg (Reverse engineering, UML Honors Thesis)


I work closely with Dr. Tao Li at IUPUI, Dr. Sashank Narain and Dr. Mohammad Arif Ul Alam at UML, Dr. Yang Xiao, Ning Wang, Yang Hu, and Jianfeng He at Virginia Tech.