I am a final-year PhD student at the Australian National University and CSIRO Data61, advised by Richard Nock. Before this, I received a BSc (Adv) in Pure Mathematics from the University of Sydney.

I work on both theoretical and practical aspects of Machine Learning, with particular research interests in generative models, privacy and robustness. I am interested in building theoretical foundations and explaining empirical phenomena in deep learning. Research questions I typically work on include:

  1. How does the information-theoretic divergence between probability measures interact with the data and structure of the learning problem?
  2. How can we train machine learning models to be more robust against adversaries of varying strength?


Publications


  1. Regularized Policies are Reward Robust.
    Hisham Husain, Kamil Ciosek and Ryota Tomioka.
    AISTATS 2021

  2. Distributional Robustness with IPMs and links to Regularization and GANs.
    Hisham Husain.
    NeurIPS 2020

  3. Optimal Continual Learning has Perfect Memory and is NP-Hard.
    Jeremias Knoblauch, Hisham Husain and Tom Diethe.
    ICML 2020

  4. Local Differential Privacy for Sampling.
    Hisham Husain, Borja Balle, Zac Cranko and Richard Nock.
    AISTATS 2020

  5. A Primal-Dual Link between GANs and Autoencoders.
    Hisham Husain, Richard Nock and Robert C. Williamson.
    NeurIPS 2019


Preprints


  1. Data Preprocessing to Mitigate Bias with Fair Boosted Mollifiers.
    Alexander Soen, Hisham Husain and Richard Nock.


  2. Risk-Monotonicity in Statistical Learning.
    Zakaria Mhammedi and Hisham Husain.



Contact


hisham dot husain at anu dot edu dot au