I completed my PhD at the Australian National University where I was based at CSIRO Data61, advised by Richard Nock. Before this, I received a BSc (Adv) (Hons Class I and University Medal) from The University of Sydney.

During my PhD, I worked on both theoretical and practical aspects of machine learning, with particular research interests in generative models, privacy, and robustness. Broadly speaking, the goal of my research was to build theoretical foundations for deep learning and to explain empirical phenomena by answering questions such as:

  1. How does the information-theoretic divergence between probability measures interact with the data and the structure of the learning problem? (see [2,5])
  2. How can we train machine learning models to be more robust against adversaries of varying degrees? (see [1,2])

Publications


  1. Adversarial Interpretation of Bayesian Inference.
    Hisham Husain and Jeremias Knoblauch.
    ALT 2022

  2. Regularized Policies are Reward Robust.
    Hisham Husain, Kamil Ciosek and Ryota Tomioka.
    AISTATS 2021

  3. Distributional Robustness with IPMs and links to Regularization and GANs.
    Hisham Husain.
    NeurIPS 2020

  4. Optimal Continual Learning has Perfect Memory and is NP-Hard.
    Jeremias Knoblauch, Hisham Husain and Tom Diethe.
    ICML 2020

  5. Local Differential Privacy for Sampling.
    Hisham Husain, Borja Balle, Zac Cranko and Richard Nock.
    AISTATS 2020

  6. A Primal-Dual Link between GANs and Autoencoders.
    Hisham Husain, Richard Nock and Robert C. Williamson.
    NeurIPS 2019

Preprints


  1. Data Preprocessing to Mitigate Bias with Fair Boosted Mollifiers.
    Alexander Soen, Hisham Husain and Richard Nock.

  2. Distributionally Robust Bayesian Optimization with ϕ-divergences.
    Hisham Husain, Vu Nguyen and Anton van den Hengel.

Contact


hisham dot husain at protonmail dot com