Publications

BlockSwap: Fisher-guided Block Substitution for Network Compression
The desire to run neural networks on low-capacity edge devices has led to the development of a wealth of compression techniques. Moonshine is a simple and powerful example of this: one takes a large pre-trained network, substitutes each of its convolutional blocks with a selected cheap alternative block, and then distills the resultant network against the original. However, not all blocks are created equal; for a given parameter budget there may exist a potent combination of many different cheap blocks. In this work, we find these combinations by developing BlockSwap: an algorithm for choosing networks with interleaved block types, which passes a single minibatch of training data through randomly initialised candidate networks and gauges their Fisher potential. We show that block-wise cheapening yields more accurate networks than single block-type networks across a spectrum of parameter budgets.
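The Fisher-style sensitivity signal behind this kind of ranking can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function name, shapes, and the synthetic activations/gradients are all assumptions, but it shows how an activation-gradient product, computed from a single minibatch, scores units by how much they matter to the loss.

```python
import numpy as np

def fisher_potential(activations, act_grads):
    """Per-channel Fisher sensitivity: mean over the batch of the squared
    (activation * gradient-of-loss-wrt-activation) product."""
    contrib = activations * act_grads          # elementwise, shape (batch, channels)
    return (contrib ** 2).mean(axis=0)         # one score per channel

rng = np.random.default_rng(0)
batch, channels = 64, 8
acts = rng.normal(size=(batch, channels))
# Channel 0 barely affects the loss; give it tiny gradients.
grads = rng.normal(size=(batch, channels))
grads[:, 0] *= 1e-3

scores = fisher_potential(acts, grads)
least_important = int(np.argmin(scores))       # channel 0 scores lowest
```

In the paper's setting, aggregating such scores across a randomly initialised candidate network gives a cheap proxy for ranking whole architectures before any training.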

Separable Layers Enable Structured Efficient Linear Substitutions
In response to the development of recent efficient dense layers, this paper shows that something as simple as replacing the linear components in pointwise convolutions with structured linear decompositions also produces substantial gains in the efficiency/accuracy trade-off. Pointwise convolutions are fully connected layers and are thus ready candidates for replacement by structured transforms. Networks using such layers are able to learn the same tasks as those using standard convolutions, and provide Pareto-optimal benefits in efficiency/accuracy, both in terms of computation (mult-adds) and parameter count (and hence memory).

Learning to Learn via Self-Critique
In few-shot learning, a machine learning system learns from a small set of labelled examples relating to a specific task, such that it can generalize to new examples of the same task. Given the limited availability of labelled examples in such tasks, we wish to make use of all the information we can. Usually a model learns task-specific information from a small training set (support set) in order to predict on an unlabelled validation set (target set). The target set contains additional task-specific information which is not utilized by existing few-shot learning methods. Making use of the target set examples via transductive learning requires approaches beyond the current methods: at inference time, the target set contains only unlabelled input data-points, and so discriminative learning cannot be used. In this paper, we propose a framework called Self-Critique and Adapt (SCA), which learns to learn a label-free loss function, parameterized as a neural network. A base model learns on a support set using existing methods (e.g. stochastic gradient descent combined with the cross-entropy loss), and is then updated for the incoming target task using the learnt loss function. This label-free loss function is itself optimized such that the learnt model achieves higher generalization performance. Experiments demonstrate that SCA offers substantially reduced error rates compared to baselines which adapt only on the support set, and results in state-of-the-art benchmark performance on MiniImageNet and Caltech-UCSD Birds 200.

Zero-shot Knowledge Transfer via Adversarial Belief Matching
Performing knowledge transfer from a large teacher network to a smaller student is a popular task in modern deep learning applications. However, due to growing dataset sizes and stricter privacy regulations, it is increasingly common not to have access to the data that was used to train the teacher. We propose a novel method which trains a student to match the predictions of its teacher without using any data or metadata. We achieve this by training an adversarial generator to search for images on which the student poorly matches the teacher, and then using them to train the student. Our resulting student closely approximates its teacher on simple datasets like SVHN, and on CIFAR-10 we improve on the state-of-the-art for few-shot distillation (with 100 images per class), despite using no data. Finally, we also propose a metric to quantify the degree of belief matching between teacher and student in the vicinity of decision boundaries, and observe a significantly higher match between our zero-shot student and the teacher than between a student distilled with real data and the teacher.
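The adversarial loop described above can be sketched in miniature. This is a toy, not the paper's method: teacher and student are linear-softmax models, random search stands in for the generator's gradient steps, and the update rule is an illustrative least-squares nudge on logits rather than the paper's distillation objective. It shows the core idea: find inputs where student and teacher disagree, then train the student on exactly those.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)

rng = np.random.default_rng(1)
d, k = 16, 4
W_teacher = rng.normal(size=(d, k))
W_student = W_teacher + 0.3 * rng.normal(size=(d, k))   # imperfect copy

# "Generator" step, replaced here by random search: propose many inputs
# and keep the one where the student disagrees most with the teacher.
candidates = rng.normal(size=(256, d))
divergences = kl(softmax(candidates @ W_teacher), softmax(candidates @ W_student))
hard_input = candidates[np.argmax(divergences)]

# Student step: nudge the student's logits towards the teacher's on that
# input (one least-squares step stands in for a distillation update).
t_logits = hard_input @ W_teacher
s_logits = hard_input @ W_student
W_student += 0.01 * np.outer(hard_input, t_logits - s_logits)

before = float(divergences.max())
after = float(kl(softmax(hard_input @ W_teacher)[None],
                 softmax(hard_input @ W_student)[None])[0])
```

No training data appears anywhere: the "hard" inputs are synthesised purely from the disagreement between the two models.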

Exploration by Random Network Distillation
Proceedings of the 7th International Conference on Learning Representations (ICLR)
We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular, we establish state-of-the-art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better-than-average human performance on this game without using demonstrations or having access to the underlying state of the game, and it occasionally completes the first level.
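The RND bonus can be sketched with linear networks. This is a minimal illustration under assumed shapes and learning rates, not the paper's architecture: a fixed random "target" network, a predictor trained on visited observations, and a bonus equal to the predictor's error, which shrinks on familiar states and stays high on novel ones.

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_feat = 8, 4
W_target = rng.normal(size=(d_obs, d_feat))     # fixed, never trained
W_pred = np.zeros((d_obs, d_feat))              # trained to imitate the target

def bonus(obs):
    """RND-style intrinsic reward: predictor error on this observation."""
    err = obs @ W_target - obs @ W_pred
    return float(np.sum(err ** 2))

familiar = rng.normal(size=d_obs)
novel = rng.normal(size=d_obs)

# Visit `familiar` repeatedly, regressing the predictor towards the target.
for _ in range(500):
    err = familiar @ W_target - familiar @ W_pred
    W_pred += 0.01 * np.outer(familiar, err)    # SGD on the squared error

low, high = bonus(familiar), bonus(novel)       # low on visited, high on novel
```

The bonus never needs a density model: novelty falls out of which states the predictor has already been fit on.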
How to train your MAML
Proceedings of the 7th International Conference on Learning Representations (ICLR)
The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem. Model-Agnostic Meta-Learning, or MAML, is currently one of the best approaches for few-shot learning via meta-learning. MAML is simple, elegant and very powerful; however, it has a variety of issues: it is very sensitive to neural network architectures, often leads to instability during training, requires arduous hyperparameter searches to stabilize training and achieve high generalization, and is very computationally expensive at both training and inference times. In this paper, we propose various modifications to MAML that not only stabilize the system, but also substantially improve the generalization performance, convergence speed and computational overhead of MAML. We call the resulting approach MAML++.
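The MAML-style inner/outer loop can be sketched on a toy problem. This is a first-order approximation on 1-D linear regression, with illustrative learning rates and an assumed task family (random slopes), not the paper's setup: each task is adapted from a shared initialisation with a few gradient steps, and the initialisation itself is meta-trained so that post-adaptation loss on held-out target data is small.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.2, 0.05          # inner / outer learning rates (illustrative)

def loss_and_grad(w, x, y):
    pred = w * x
    return float(np.mean((pred - y) ** 2)), float(np.mean(2 * (pred - y) * x))

def adapt(w, x, y, steps=3):
    """Inner loop: a few gradient steps on the support set."""
    for _ in range(steps):
        _, g = loss_and_grad(w, x, y)
        w = w - alpha * g
    return w

def sample_task():
    a = rng.uniform(1.0, 3.0)               # a task = the slope of a 1-D line
    x_s, x_t = rng.normal(size=10), rng.normal(size=10)
    return (x_s, a * x_s), (x_t, a * x_t)   # support set, target set

w = -4.0                                    # deliberately poor initialisation
# Outer loop (first-order MAML): use the post-adaptation gradient directly.
for _ in range(1500):
    (xs, ys), (xt, yt) = sample_task()
    w_adapted = adapt(w, xs, ys)
    _, g_outer = loss_and_grad(w_adapted, xt, yt)
    w = w - beta * g_outer

# After meta-training, a few inner steps adapt well to any task in the family.
losses = []
for _ in range(50):
    (xs, ys), (xt, yt) = sample_task()
    losses.append(loss_and_grad(adapt(w, xs, ys), xt, yt)[0])
mean_post_adapt_loss = float(np.mean(losses))
```

Full MAML differentiates through the inner loop (a second-order term dropped here); the sensitivity of that procedure to step sizes and architecture is exactly what MAML++ targets.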
Large-Scale Study of Curiosity-Driven Learning
Proceedings of the 7th International Conference on Learning Representations (ICLR)
Reinforcement learning algorithms rely on carefully engineered environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as a reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of prediction-based rewards in stochastic setups.
On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length
Proceedings of the 7th International Conference on Learning Representations (ICLR)
Recent work has identified that using a high learning rate or a small batch size for Stochastic Gradient Descent (SGD) based training of deep neural networks encourages finding flatter minima of the training loss towards the end of training. Moreover, measures of the flatness of minima have been shown to correlate with good generalization performance. Extending this previous work, we investigate the loss curvature through the Hessian eigenvalue spectrum in the early phase of training and find an analogous bias: even at the beginning of training, a high learning rate or small batch size influences SGD to visit flatter loss regions. In addition, the evolution of the largest eigenvalues appears to always follow a similar pattern, with a fast increase in the early phase, and a decrease or stabilization thereafter, where the peak value is determined by the learning rate and batch size. Finally, we find that by altering the learning rate just in the direction of the eigenvectors associated with the largest eigenvalues, SGD can be steered towards regions which are an order of magnitude sharper but correspond to models with similar generalization, which suggests the curvature of the endpoint found by SGD is not predictive of its generalization properties. 
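The "sharpest directions" analysis above rests on extracting the Hessian's top eigenvalue without ever forming the Hessian. A standard way to do this, sketched here on a toy quadratic with an assumed finite-difference Hessian-vector product, is power iteration on Hessian-vector products computed from gradients alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
H = A @ A.T + np.eye(n)          # a symmetric positive-definite "Hessian"

def grad(theta):
    # Gradient of the toy loss f(theta) = 0.5 * theta^T H theta.
    return H @ theta

def hessian_vector_product(theta, v, eps=1e-5):
    # Finite difference of the gradient: (g(theta + eps*v) - g(theta)) / eps.
    # For a real network this needs only two backward passes, never H itself.
    return (grad(theta + eps * v) - grad(theta)) / eps

def top_eigenpair(theta, iters=200):
    """Power iteration: repeatedly apply H to a random vector."""
    v = rng.normal(size=n)
    for _ in range(iters):
        v = hessian_vector_product(theta, v)
        v /= np.linalg.norm(v)
    sharpness = float(v @ hessian_vector_product(theta, v))  # Rayleigh quotient
    return sharpness, v

theta0 = rng.normal(size=n)
sharpness, direction = top_eigenpair(theta0)
true_top = float(np.linalg.eigvalsh(H).max())
```

Tracking `sharpness` along a training trajectory is what reveals the early-phase rise and later stabilisation of the largest eigenvalues described in the abstract.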
Distilling with Performance Enhanced Students
The task of accelerating large neural networks on general-purpose hardware has, in recent years, prompted the use of channel pruning to reduce network size. However, the efficacy of pruning-based approaches has since been called into question. In this paper, we turn to distillation for model compression (specifically, attention transfer) and develop a simple method for discovering performance-enhanced student networks. We combine channel saliency metrics with empirical observations of runtime performance to design more accurate networks for a given latency budget. We apply our methodology to residual and densely-connected networks, and show that we are able to find resource-efficient student networks on different hardware platforms while maintaining very high accuracy. These performance-enhanced student networks achieve up to 10% boosts in top-1 ImageNet accuracy over their channel-pruned counterparts for the same inference time.

Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation
The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn (AAL), for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then, by applying data augmentation on the support set's images and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied to small real labelled datasets without any changes or fine-tuning. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and MiniImageNet.
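The task-generation recipe above is simple enough to sketch directly. This is an illustrative version under stated assumptions: synthetic arrays stand in for images, and a horizontal flip plus noise stands in for the paper's augmentation pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(unlabeled_images, n_way=3, k_shot=2):
    """Build one few-shot task from unlabeled data: randomly chosen images
    get arbitrary-but-consistent labels (support set); augmented copies
    reuse those labels (target set)."""
    idx = rng.choice(len(unlabeled_images), size=n_way * k_shot, replace=False)
    support_x = unlabeled_images[idx]
    support_y = np.repeat(np.arange(n_way), k_shot)   # the "assumed" labels
    # Augmentation stands in here as a horizontal flip plus small noise.
    target_x = support_x[:, :, ::-1] + 0.05 * rng.normal(size=support_x.shape)
    target_y = support_y.copy()                        # labels are reused as-is
    return (support_x, support_y), (target_x, target_y)

images = rng.normal(size=(100, 8, 8))   # stand-in for an unlabeled dataset
(sx, sy), (tx, ty) = make_task(images)
```

Tasks produced this way can be fed to any episodic meta-learner in place of supervised few-shot tasks.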

Pruning neural networks: is it time to nip it in the bud?
Workshop on Compact Deep Neural Networks with industrial applications, NeurIPS
Pruning is a popular technique for compressing a neural network: a large pre-trained network is fine-tuned while connections are successively removed. However, the value of pruning has largely evaded scrutiny. In this extended abstract, we examine residual networks obtained through Fisher-pruning and make two interesting observations. First, when time-constrained, it is better to train a simple, smaller network from scratch than to prune a large network. Second, it is the architectures obtained through the pruning process, not the learnt weights, that prove valuable. Such architectures are powerful when trained from scratch. Furthermore, these architectures are easy to approximate without any further pruning: we can prune once and obtain a family of new, scalable network architectures for different memory requirements.
Moonshine: Distilling with Cheap Convolutions
Advances in Neural Information Processing Systems
Many engineers wish to deploy modern neural networks in memory-limited settings, but the development of flexible methods for reducing memory use is in its infancy, and there is little knowledge of the resulting cost-benefit. We propose structural model distillation for memory reduction using a strategy that produces a student architecture that is a simple transformation of the teacher architecture: no redesign is needed, and the same hyperparameters can be used. Using attention transfer, we provide Pareto curves/tables for distillation of residual networks with four benchmark datasets, indicating the memory versus accuracy payoff. We show that substantial memory savings are possible with very little loss of accuracy, and confirm that distillation provides student network performance that is better than training that student architecture directly on data.
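The attention-transfer loss used for the distillation above can be sketched as follows. This is a minimal illustration with assumed shapes and synthetic features, not the training code: each feature block is reduced to a normalised spatial attention map (channel-wise sum of squared activations), so that students and teachers of different widths can be compared directly.

```python
import numpy as np

def attention_map(feats):
    """Spatial attention: channel-wise sum of squared activations,
    flattened and L2-normalised, shape (C, H, W) -> (H*W,)."""
    a = (feats ** 2).sum(axis=0).ravel()
    return a / (np.linalg.norm(a) + 1e-12)

def at_loss(student_feats, teacher_feats):
    """Distance between the two normalised attention maps."""
    return float(np.linalg.norm(attention_map(student_feats)
                                - attention_map(teacher_feats)))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(64, 8, 8))            # teacher block, 64 channels
# A student of different width but the same spatial pattern: zero AT loss.
student_same = np.concatenate([teacher, teacher], axis=0) * 0.5
student_diff = rng.normal(size=(16, 8, 8))       # unrelated features

loss_similar = at_loss(student_same, teacher)
loss_different = at_loss(student_diff, teacher)
```

Because the maps are normalised, the loss penalises only *where* a network looks, not the scale or number of its channels, which is what makes a cheapened student architecture directly comparable to its teacher.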
Dilated DenseNets for Relational Reasoning
Despite their impressive performance in many tasks, deep neural networks often struggle at relational reasoning. This has recently been remedied with the introduction of a plug-in relational module that considers relations between pairs of objects. Unfortunately, this is combinatorially expensive. In this extended abstract, we show that a DenseNet incorporating dilated convolutions excels at relational reasoning on the Sort-of-CLEVR dataset, allowing us to forgo this relational module and its associated expense.

CINIC-10 is not ImageNet or CIFAR-10
In this brief technical report we introduce the CINIC-10 dataset as a plug-in extended alternative for CIFAR-10. It was compiled by combining CIFAR-10 with images selected and downsampled from the ImageNet database. We present the approach to compiling the dataset, illustrate example images for different classes, give pixel distributions for each part of the repository, and give some standard benchmarks for well-known models. Details for download, usage, and compilation can be found in the associated GitHub repository.

GINN: Geometric Illustration of Neural Networks
This informal technical report details the geometric illustration of decision boundaries for ReLU units in a three-layer fully connected neural network. The network is designed and trained to predict pixel intensity from an (x, y) input location. The Geometric Illustration of Neural Networks (GINN) tool was built to visualise and track the points at which ReLU units switch from being active to off (or vice versa) as the network undergoes training. Several phenomena were observed and are discussed herein.

Augmenting Image Classifiers using Data Augmentation Generative Adversarial Networks
Proceedings of the 27th International Conference on Artificial Neural Networks (ICANN)
Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data augmentation alleviates this by using existing data more effectively, but standard data augmentation produces only limited plausible alternative data. Given the potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image-conditional Generative Adversarial Networks, uses data from a source domain and learns to take a data item and augment it by generating other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes. We demonstrate that a Data Augmentation Generative Adversarial Network (DAGAN) augments classifiers well on Omniglot, EMNIST and VGGFace.
Three Factors Influencing Minima in SGD
Proceedings of the 27th International Conference on Artificial Neural Networks (ICANN)
We investigate the dynamical and convergent properties of stochastic gradient descent (SGD) applied to Deep Neural Networks (DNNs). Characterizing the relation between learning rate, batch size and the properties of the final minima, such as width or generalization, remains an open question. In order to tackle this problem we investigate the previously proposed approximation of SGD by a stochastic differential equation (SDE). We theoretically argue that three factors (learning rate, batch size and gradient covariance) influence the minima found by SGD. In particular we find that the ratio of learning rate to batch size is a key determinant of SGD dynamics and of the width of the final minima, and that higher values of the ratio lead to wider minima and often better generalization. We confirm these findings experimentally. Further, we include experiments which show that learning rate schedules can be replaced with batch size schedules and that the ratio of learning rate to batch size is an important factor influencing the memorization process.
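The role of the learning-rate-to-batch-size ratio can be seen in a one-dimensional simulation. This is a hedged toy under the abstract's own SDE view, not the paper's experiments: on a quadratic loss with minibatch gradient noise of variance sigma^2/B, the stationary variance of the iterates scales with lr/B, so doubling the learning rate doubles the spread, while doubling learning rate and batch size together leaves it unchanged.

```python
import numpy as np

def sgd_stationary_variance(lr, batch_size, steps=200_000, sigma=1.0, seed=0):
    """Run SGD on f(theta) = theta^2 / 2 with minibatch gradient noise of
    variance sigma^2 / batch_size; return the variance of the iterates."""
    rng = np.random.default_rng(seed)
    theta, samples = 0.0, []
    for t in range(steps):
        noisy_grad = theta + sigma * rng.standard_normal() / np.sqrt(batch_size)
        theta -= lr * noisy_grad
        if t > steps // 2:                  # discard burn-in
            samples.append(theta)
    return float(np.var(samples))

v_base = sgd_stationary_variance(lr=0.01, batch_size=10)
v_double_ratio = sgd_stationary_variance(lr=0.02, batch_size=10)   # ratio x2
v_same_ratio = sgd_stationary_variance(lr=0.02, batch_size=20)     # ratio fixed
```

The same invariance is what allows a learning rate schedule to be traded for a batch size schedule, as the abstract notes.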
Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks
International Symposium on Workload Characterization
Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices. Since such systems are where some of their most useful applications lie (e.g. obstacle detection for mobile robots, vision-based medical assistive technology), significant bodies of work from both the machine learning and systems communities have attempted to provide optimisations that will make CNNs available to edge devices. In this paper we unify the two viewpoints in a Deep Learning Inference Stack and take an across-stack approach by implementing and evaluating the most common neural network compression techniques (weight pruning, channel pruning, and quantisation) and optimising their parallel execution with a range of programming approaches (OpenMP, OpenCL) and hardware architectures (CPU, GPU). We provide comprehensive Pareto curves to instruct trade-offs under constraints of accuracy, execution time, and memory space.
Asymptotically exact inference in differentiable generative models
Electronic Journal of Statistics
Many generative models can be expressed as a differentiable function applied to input variables sampled from a known probability distribution. This framework includes both the generative component of learned parametric models such as variational autoencoders and generative adversarial networks, and also procedurally defined simulator models which involve only differentiable operations. Though the distribution on the input variables to such models is known, often the distribution on the output variables is only implicitly defined. We present a method for performing efficient Markov chain Monte Carlo inference in such models when conditioning on observations of the model output. For some models this offers an asymptotically exact inference method where approximate Bayesian computation might otherwise be employed. We use the intuition that computing conditional expectations is equivalent to integrating over a density defined on the manifold corresponding to the set of inputs consistent with the observed outputs. This motivates the use of a constrained variant of Hamiltonian Monte Carlo which leverages the smooth geometry of the manifold to move between inputs exactly consistent with observations. We validate the method by performing inference experiments in a diverse set of models. 
Data Augmentation Generative Adversarial Networks
Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data augmentation alleviates this by using existing data more effectively. However, standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image-conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and on VGGFace data. In our experiments we see over a 13% increase in accuracy in the low-data regime experiments on Omniglot (from 69% to 82%), EMNIST (from 73.9% to 76%) and VGGFace (from 4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% on EMNIST (from 59.5% to 61.3%).

Continuously tempered Hamiltonian Monte Carlo
Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI)
Hamiltonian Monte Carlo (HMC) is a powerful Markov chain Monte Carlo (MCMC) method for performing approximate inference in complex probabilistic models of continuous variables. In common with many MCMC methods, however, the standard HMC approach performs poorly in distributions with multiple isolated modes. We present a method for augmenting the Hamiltonian system with an extra continuous temperature control variable which allows the dynamics to bridge between sampling a complex target distribution and a simpler unimodal base distribution. This augmentation both helps improve mixing in multimodal targets and allows the normalisation constant of the target distribution to be estimated. The method is simple to implement within existing HMC code, requiring only a standard leapfrog integrator. We demonstrate experimentally that the method is competitive with annealed importance sampling and simulated tempering methods at sampling from challenging multimodal distributions and estimating their normalising constants.
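The bridge between target and base distributions can be sketched as a geometric interpolation of log-densities. This shows only the family of tempered densities the continuous temperature variable moves along (with an assumed bimodal target); the paper's contribution, the augmented Hamiltonian dynamics over that variable, is not reproduced here.

```python
import numpy as np

def log_target(x):
    # A bimodal target: mixture of two well-separated unit Gaussians
    # (unnormalised log-density).
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

def log_base(x):
    # A simple unimodal base distribution the sampler can mix in easily.
    return -0.5 * x ** 2

def log_bridge(x, beta):
    """Geometric bridge (up to normalisation): target^beta * base^(1-beta).
    beta = 0 recovers the base, beta = 1 the target; a continuous
    temperature variable lets the dynamics slide between the two."""
    return beta * log_target(x) + (1.0 - beta) * log_base(x)

x = np.linspace(-8.0, 8.0, 1601)
at_base = log_bridge(x, 0.0)      # unimodal, easy to traverse
at_target = log_bridge(x, 1.0)    # the hard bimodal target
```

Because every intermediate density is known up to a constant, the same bridge also yields an estimate of the target's normalising constant.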
Asymptotically exact inference in differentiable generative models
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS)
Many generative models can be expressed as a differentiable function of random inputs drawn from some simple probability density. This framework includes both deep generative architectures such as Variational Autoencoders and a large class of procedurally defined simulator models. We present a method for performing efficient MCMC inference in such models when conditioning on observations of the model output. For some models this offers an asymptotically exact inference method where Approximate Bayesian Computation might otherwise be employed. We use the intuition that inference corresponds to integrating a density across the manifold corresponding to the set of inputs consistent with the observed outputs. This motivates the use of a constrained variant of Hamiltonian Monte Carlo which leverages the smooth geometry of the manifold to coherently move between inputs exactly consistent with observations. We validate the method by performing inference tasks in a diverse set of models. 
Towards a Neural Statistician
Proceedings of the 6th International Conference on Learning Representations (ICLR)
An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes. We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision. 
Resource-Efficient Feature Gathering at Test Time
Workshop on Reliable Machine Learning in the Wild, NIPS
Data collection is costly. A machine learning model requires input data to produce an output prediction, but that input is often not cost-free to produce accurately. For example, in the social sciences, it may require collecting samples; in signal processing it may involve investing in expensive accurate sensors. The problem of allocating a budget across the collection of different input variables is largely overlooked in machine learning, but is important under real-world constraints. Given that the noise level on each input feature depends on how much resource has been spent gathering it, and given a fixed budget, we ask how to allocate that budget to maximise our expected reward. At the same time, the optimal model parameters will depend on the choice of budget allocation, and so searching the space of possible budgets is costly. Using doubly stochastic gradient methods we propose a solution that allows expressive models and massive datasets, while still providing an interpretable budget allocation for feature gathering at test time.
Censoring Representations with an Adversary
Proceedings of the 5th International Conference on Learning Representations (ICLR)
In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example, it may be a legal requirement that a decision must not favour a particular group. Alternatively, it may be that the representation of data must not contain identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary tries to predict the relevant sensitive variable from the representation, and so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternating min-max optimizer. We demonstrate the ability to provide discrimination-free representations for standard test problems, and compare with previous state-of-the-art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.
Evaluation of a pre-surgical functional MRI workflow: From data acquisition to reporting
International Journal of Medical Informatics, 86, pp. 37-42.
Purpose: To present and assess clinical protocols and an associated automated workflow for pre-surgical functional magnetic resonance imaging in brain tumor patients. Methods: Protocols were validated using a single-subject reliability approach based on 10 healthy control subjects. Results from the automated workflow were evaluated in 9 patients with brain tumors, comparing fMRI results to direct electrical stimulation (DES) of the cortex. Results: Using a new approach to compute single-subject fMRI reliability in controls, we show that not all tasks are suitable in the clinical context, even if they show meaningful results at the group level. Comparison of the fMRI results from patients to DES showed good correspondence between techniques (odds ratio 36). Conclusion: Provided that validated and reliable fMRI protocols are used, fMRI can accurately delineate eloquent areas, thus providing an aid to medical decision-making regarding brain tumor surgery.
Stochastic Parallel Block Coordinate Descent for Large-scale Saddle Point Problems
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI)
We consider convex-concave saddle point problems with a separable structure and non-strongly convex functions. We propose an efficient stochastic block coordinate descent method using adaptive primal-dual updates, which enables flexible parallel optimization for large-scale problems. Our method shares the efficiency and flexibility of block coordinate descent methods with the simplicity of primal-dual methods, and utilizes the structure of the separable convex-concave saddle point problem. It is capable of solving a wide range of machine learning applications, including robust principal component analysis, Lasso, and feature selection by group Lasso. Theoretically and empirically, we demonstrate significantly better performance than state-of-the-art methods in all these applications.
Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling
Advances in Neural Information Processing Systems
Monte Carlo sampling for Bayesian posterior inference is a common approach used in machine learning. The Markov chain Monte Carlo procedures that are used are often discrete-time analogues of associated stochastic differential equations (SDEs). These SDEs are guaranteed to leave invariant the required posterior distribution. An area of current research addresses the computational benefits of stochastic gradient methods in this setting. Existing techniques rely on estimating the variance or covariance of the subsampling error, and typically assume constant variance. In this article, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining a desired target distribution. The proposed method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
Adaptive stochastic primaldual coordinate descent for separable saddle point problems
Joint European Conference on Machine Learning and Knowledge Discovery in Databases
We consider a generic convex-concave saddle point problem with a separable structure, a form that covers a wide range of machine learning applications. Under this problem structure, we follow the framework of primal-dual updates for saddle point problems, and incorporate stochastic block coordinate descent with adaptive step sizes into this framework. We theoretically show that our proposed adaptive step sizes potentially achieve a sharper linear convergence rate compared with the existing methods. Additionally, since we can select a "minibatch" of block coordinates to update, our method is also amenable to parallel processing for large-scale data. We apply the proposed method to regularized empirical risk minimization and show that it performs comparably or, more often, better than state-of-the-art methods on both synthetic and real-world data sets.
Training Deep Convolutional Neural Networks to Play Go
Proceedings of the 32nd International Conference on Machine Learning (ICML)
Mastering the game of Go has remained a long-standing challenge to the field of AI. Modern computer Go systems rely on processing millions of possible future positions to play well, but intuitively a stronger and more 'human-like' way to play the game would be to rely on pattern recognition abilities rather than brute force computation. Following this sentiment, we train deep convolutional neural networks to play Go by training them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to 'hard code' symmetries that are expected to exist in the target function, and demonstrate in an ablation study that they considerably improve performance. Our final networks are able to achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing the previous state-of-the-art on this task by significant margins. Additionally, while previous move prediction programs have not yielded strong Go playing programs, we show that the networks trained in this work acquired high levels of skill. They can consistently defeat the well-known Go program GNU Go, indicating they are state-of-the-art among programs that do not use Monte Carlo Tree Search. They are also able to win some games against the state-of-the-art Go playing program Fuego while using a fraction of the play time. This success at playing Go indicates that high-level principles of the game were learned.
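One way to hard-code board symmetries into weights, sketched here as an assumed illustration rather than the paper's exact tying scheme, is to constrain each convolutional filter to be an average over the eight symmetries of the square (four rotations, each with an optional flip), so the tied filter responds identically to a position and any of its reflections or rotations.

```python
import numpy as np

def dihedral_transforms(f):
    """All 8 symmetries of the square board: 4 rotations x optional flip."""
    out = []
    for k in range(4):
        r = np.rot90(f, k)
        out.extend([r, r[:, ::-1]])
    return out

def symmetrize(f):
    """Tie a filter's weights by averaging over the dihedral group, so the
    filter is exactly invariant under board rotations and reflections."""
    return np.mean(dihedral_transforms(f), axis=0)

rng = np.random.default_rng(0)
raw = rng.normal(size=(5, 5))    # an untied 5x5 filter
tied = symmetrize(raw)           # invariant under all 8 board symmetries
```

Tying weights this way cuts the number of free parameters per filter by roughly a factor of eight while baking in an invariance the target function is expected to have.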
Multi-period Trading Prediction Markets with Connections to Machine Learning
Proceedings of the 31st International Conference on Machine Learning (ICML)
We present a new model for prediction markets, in which we use risk measures to model agents and introduce a market maker to describe the trading process. This specific choice of modelling tools brings mathematical convenience. The analysis shows that the whole market effectively optimises a global objective, even though the market is designed such that each agent cares only about its own goal. Additionally, the market dynamics provide a sensible algorithm for optimising the global objective. An intimate connection between machine learning and our markets is thus established, such that we can 1) analyse a market by applying machine learning methods to the global objective, and 2) solve machine learning problems by setting up and running certain markets. 
The supervised hierarchical Dirichlet process
IEEE Transactions on Pattern Analysis and Machine Intelligence (Special Issue on Bayesian Nonparametrics)
We propose the supervised hierarchical Dirichlet process (sHDP), a nonparametric generative model for the joint distribution of a group of observations and a response variable directly associated with that whole group. We compare the sHDP with another leading method for regression on grouped data, the supervised latent Dirichlet allocation (sLDA) model. We evaluate our method on two real-world classification problems and two real-world regression problems. Bayesian nonparametric regression models based on the Dirichlet process, such as the Dirichlet process generalised linear models (DP-GLM), have previously been explored; these models allow flexibility in modelling nonlinear relationships. However, until now, hierarchical Dirichlet process (HDP) mixtures have not seen significant use in supervised problems with grouped data, since a straightforward application of the HDP to the grouped data results in learnt clusters that are not predictive of the responses. The sHDP solves this problem by allowing clusters to be learnt jointly from the group structure and from the label assigned to each group. 
Series Expansion Approximations of Brownian Motion for Non-Linear Kalman Filtering of Diffusion Processes
IEEE Transactions on Signal Processing
In this paper, we describe a novel application of sigma-point methods to continuous-discrete filtering. In principle, the nonlinear continuous-discrete filtering problem can be solved exactly. In practice, the solution contains terms that are computationally intractable. Assumed density filtering methods attempt to match statistics of the filtering distribution to some set of more tractable probability distributions. We describe a novel method that decomposes the Brownian motion driving the signal into a generalised Fourier series, which is truncated after a number of terms. This approximation to Brownian motion can be described using a relatively small number of Fourier coefficients, and allows us to compute statistics of the filtering distribution with a single application of a sigma-point method. Assumed density filters in the existing literature usually rely on discretisation of the signal dynamics followed by iterated application of a sigma-point transform (or a limiting case thereof). Iterating the transform in this manner can lead to loss of information about the filtering distribution in highly nonlinear settings. We demonstrate that our method is better equipped to cope with such problems. 
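For intuition, a truncated sine-series expansion of Brownian motion on [0, 1] can be sketched as follows. The basis below is the standard Karhunen-Loeve-style choice, used here as an illustrative stand-in for the paper's generalised Fourier series:

```python
import math
import random

def phi(k, t):
    """k-th basis function of a sine-series (Karhunen-Loeve style)
    expansion of Brownian motion on [0, 1]."""
    w = (k - 0.5) * math.pi
    return math.sqrt(2.0) * math.sin(w * t) / w

def brownian_path(coeffs, t):
    """Approximate B(t) from K independent N(0,1) coefficients Z_k:
    B(t) ~ sum_k Z_k * phi(k, t)."""
    return sum(z * phi(k + 1, t) for k, z in enumerate(coeffs))

def variance_approx(t, K):
    """Variance of the truncated expansion at time t; as K grows this
    approaches Var[B(t)] = t."""
    return sum(phi(k, t) ** 2 for k in range(1, K + 1))

# Draw one approximate sample path value at t = 0.5 from 200 coefficients.
Z = [random.gauss(0.0, 1.0) for _ in range(200)]
b_half = brownian_path(Z, 0.5)
```

The point of such a representation is that the (infinite-dimensional) driving noise is summarised by a finite vector of coefficients, to which a sigma-point transform can be applied once.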
Bayesian Inference in Sparse Gaussian Graphical Models
One of the fundamental tasks of science is to find explainable relationships between observed phenomena. One approach to this task that has received attention in recent years is based on probabilistic graphical modelling with sparsity constraints on model structures. In this paper, we describe two new approaches to Bayesian inference of sparse structures of Gaussian graphical models (GGMs). One is based on a simple modification of the cutting-edge block Gibbs sampler for sparse GGMs, which results in significant computational gains in high dimensions. The other method is based on a specific construction of the Hamiltonian Monte Carlo sampler, which results in further significant improvements. We compare our fully Bayesian approaches with the popular regularisation-based graphical LASSO, and demonstrate significant advantages of the Bayesian treatment under the same computing costs. We apply the methods to a broad range of simulated data sets, and a real-life financial data set.

A Topic Model for Melodic Sequences
Proceedings of the 29th International Conference on Machine Learning (ICML)
We examine the problem of learning a probabilistic model for melody directly from musical sequences belonging to the same genre. This is a challenging task, as one needs to capture not only the rich temporal structure evident in music, but also the complex statistical dependencies among different music components. To address this problem we introduce the Variable-gram Topic Model, which couples the latent topic formalism with a systematic model for contextual information. We evaluate the model on next-step prediction. Additionally, we present a novel way of model evaluation, where we directly compare model samples with data sequences using the Maximum Mean Discrepancy of string kernels, to assess how close the model distribution is to the data distribution. We show that the model has the highest performance under both evaluation measures when compared to LDA, the Topic Bigram and related non-topic models. 
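The sample-based evaluation can be sketched as follows, assuming a simple spectrum (shared n-gram count) string kernel and the biased MMD^2 estimator; the paper's kernel and estimator details may differ:

```python
from collections import Counter

def ngram_kernel(s, t, n=2):
    """Spectrum string kernel: inner product of n-gram count vectors.
    A simple illustrative choice of string kernel."""
    cs = Counter(s[i:i + n] for i in range(len(s) - n + 1))
    ct = Counter(t[i:i + n] for i in range(len(t) - n + 1))
    return sum(cs[g] * ct[g] for g in cs)

def mmd2(X, Y, kernel=ngram_kernel):
    """Biased estimate of squared Maximum Mean Discrepancy between two
    samples of sequences: mean k(x,x') - 2 mean k(x,y) + mean k(y,y')."""
    def mean_k(A, B):
        return sum(kernel(a, b) for a in A for b in B) / (len(A) * len(B))
    return mean_k(X, X) - 2.0 * mean_k(X, Y) + mean_k(Y, Y)
```

A small MMD^2 between model samples and held-out data sequences indicates the model distribution is close to the data distribution under the chosen kernel.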
Isoelastic Agents and Wealth Updates in Machine Learning Markets
Proceedings of the 29th International Conference on Machine Learning (ICML)
Recently, prediction markets have shown considerable promise for developing flexible mechanisms for machine learning. In this paper, agents with isoelastic utilities are considered. It is shown that the costs associated with homogeneous markets of agents with isoelastic utilities produce equilibrium prices corresponding to alpha-mixtures, with a particular form of mixing component relating to each agent's wealth. We also demonstrate that wealth accumulation for logarithmic and other isoelastic agents (through payoffs on prediction of training targets) can implement both Bayesian model updates and mixture weight updates by imposing different market payoff structures. An iterative algorithm is given for market equilibrium computation. We demonstrate that inhomogeneous markets of agents with isoelastic utilities outperform state-of-the-art aggregate classifiers such as random forests, as well as single classifiers (neural networks, decision trees), on a number of machine learning benchmarks, and show that isoelastic combination methods are generally better than their logarithmic counterparts. 
Comparing Probabilistic Models for Melodic Sequences
Proceedings of ECML-PKDD
Modelling the real-world complexity of music is a challenge for machine learning. We address the task of modelling melodic sequences from the same music genre. We perform a comparative analysis of two probabilistic models: a Dirichlet Variable Length Markov Model (Dirichlet-VMM) and a Time-Convolutional Restricted Boltzmann Machine (TC-RBM). We show that the TC-RBM learns descriptive music features, such as underlying chords and typical melody transitions and dynamics. We assess the models on next-step prediction and compare their performance to a VMM, which is the current state of the art in melody generation. We show that both models perform significantly better than the VMM, with the Dirichlet-VMM marginally outperforming the TC-RBM. Finally, we evaluate the short-order statistics of the models, using the Kullback-Leibler divergence between test sequences and model samples, and show that our proposed methods match the statistics of the music genre significantly better than the VMM. 
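The short-order-statistics comparison can be sketched as follows, assuming bigram statistics and additive smoothing, both illustrative choices rather than the paper's exact estimator:

```python
import math
from collections import Counter

def ngram_counts(seqs, n=2):
    """Empirical n-gram counts over a corpus of sequences (e.g. note names)."""
    counts = Counter()
    for s in seqs:
        counts.update(tuple(s[i:i + n]) for i in range(len(s) - n + 1))
    return counts

def kl_ngram(p_seqs, q_seqs, n=2, alpha=1e-3):
    """KL(P || Q) between smoothed empirical n-gram distributions of two
    corpora. The additive smoothing (alpha) keeps the divergence finite
    when an n-gram appears in only one corpus."""
    cp = ngram_counts(p_seqs, n)
    cq = ngram_counts(q_seqs, n)
    support = set(cp) | set(cq)
    zp = sum(cp.values()) + alpha * len(support)
    zq = sum(cq.values()) + alpha * len(support)
    kl = 0.0
    for g in support:
        p = (cp[g] + alpha) / zp
        q = (cq[g] + alpha) / zq
        kl += p * math.log(p / q)
    return kl
```

A model whose samples give a lower `kl_ngram` against held-out test sequences matches the genre's short-order statistics more closely.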
Machine Learning Markets
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS)
Prediction markets show considerable promise for developing flexible mechanisms for machine learning. Here, machine learning markets for multivariate systems are defined, and a utility-based framework is established for their analysis. This differs from the usual approach of defining static betting functions. It is shown that such markets can implement model combination methods used in machine learning, such as product-of-experts and mixture-of-experts approaches, as equilibrium pricing models, by varying agent utility functions. They can also implement models composed of local potentials, and message passing methods. Prediction markets also allow for more flexible combinations, by combining multiple different utility functions. Conversely, the market mechanisms implement inference in the relevant probabilistic models. This means that market mechanisms can be used to implement parallelised model building and inference for probabilistic modelling.
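The two classical combination rules that such markets can implement at equilibrium can be sketched directly; the market and utility machinery is omitted, and the helper names below are purely illustrative:

```python
def mixture_of_experts(dists, weights):
    """Mixture combination: weighted arithmetic mean of expert
    distributions over a shared discrete outcome space."""
    keys = dists[0].keys()
    return {k: sum(w * d[k] for w, d in zip(weights, dists)) for k in keys}

def product_of_experts(dists):
    """Product combination: normalised pointwise product of expert
    distributions; an expert assigning low probability vetoes an outcome."""
    keys = dists[0].keys()
    raw = {k: 1.0 for k in keys}
    for d in dists:
        for k in keys:
            raw[k] *= d[k]
    z = sum(raw.values())
    return {k: v / z for k, v in raw.items()}
```

In the market view, which of these rules emerges as the equilibrium price is determined by the agents' utility functions rather than by an explicitly chosen aggregation formula.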