I completed my PhD at the Australian National University where I was based at CSIRO Data61, advised by Richard Nock. Before this, I received a BSc (Adv) (Hons Class I and University Medal) from The University of Sydney.
During my PhD, I worked on both theoretical and practical aspects of machine learning, with particular research interests in generative models, privacy, and robustness. Broadly, the goal of my research was to build theoretical foundations for, and explain empirical phenomena in, deep learning by answering questions such as:
- How does the information-theoretic divergence between probability measures interact with the data and structure of the learning problem? (see [2,5])
- How can we train machine learning models to be more robust against adversaries of varying strength? (see [1,2])
- [04/22] New preprint on handling distributional shifts in Bayesian optimization!
- [03/22] New work titled "Adversarial Interpretation of Bayesian Inference" was accepted to ALT 2022.
- [01/21] New work titled "Regularized Policies are Reward Robust" was accepted to AISTATS 2021. We extend the theory of policy regularization beyond entropy and draw connections to regression losses in Q-learning.
- [11/20] New preprint on Risk-Monotonicity, which helps explain instability in training and relates to an open problem posed at COLT 2019.
- [09/20] "Distributional Robustness with IPMs and links to Regularization and GANs" was accepted to NeurIPS 2020.
- [06/20] Our work "Optimal Continual Learning has Perfect Memory and is NP-Hard" was accepted to ICML 2020.
- [01/20] Our work "Local Differential Privacy for Sampling" was accepted to AISTATS 2020.
- [11/19] I gave a talk at the Max Planck Institute for Empirical Inference. (slides)
- [10/19] Our work "A Primal-Dual Link between GANs and Autoencoders" was accepted to NeurIPS 2019.
- Adversarial Interpretation of Bayesian Inference.
Hisham Husain and Jeremias Knoblauch.
- Regularized Policies are Reward Robust.
Hisham Husain, Kamil Ciosek and Ryota Tomioka.
- Distributional Robustness with IPMs and links to Regularization and GANs.
- Optimal Continual Learning has Perfect Memory and is NP-Hard.
Jeremias Knoblauch, Hisham Husain and Tom Diethe.
- Local Differential Privacy for Sampling.
Hisham Husain, Borja Balle, Zac Cranko and Richard Nock.
- A Primal-Dual Link between GANs and Autoencoders.
Hisham Husain, Richard Nock and Robert C. Williamson.
- Data Preprocessing to Mitigate Bias with Fair Boosted Mollifiers.
Alexander Soen, Hisham Husain and Richard Nock.
- Distributionally Robust Bayesian Optimization with ϕ-divergences.
Hisham Husain, Vu Nguyen and Anton van den Hengel.
hisham dot husain at protonmail dot com