JMLR Editorial Board

Editors-in-Chief

Managing Editors

Editorial Assistant

Production Editor

  • Kevin Bello, Carnegie Mellon University and University of Chicago.

Web Master

JMLR Action Editors

  • Alekh Agarwal, Google Research, USA. Reinforcement Learning, Online Learning, Bandits, Learning Theory.
  • Shipra Agrawal, Columbia University Reinforcement Learning, Multi-armed bandits, Online learning, Online optimization
  • Edo Airoldi, Harvard University, USA Statistics, approximate inference, causal inference, network data analysis, computational biology.
  • Dan Alistarh, IST Austria & Neural Magic distributed optimization, federated learning, model compression, efficient ML
  • Genevera Allen, Rice University, USA Statistical machine learning, high-dimensional statistics, modern multivariate analysis, graphical models, data integration, tensor decompositions.
  • Pierre Alquier, ESSEC Asia-Pacific Statistical Learning theory, PAC-Bayes learning, Approximate Bayesian inference, Variational inference, High-dimensional statistics
  • Animashree Anandkumar, California Institute of Technology, USA Tensor decomposition, non-convex optimization, probabilistic models, reinforcement learning.
  • Krishnakumar Balasubramanian, University of California, Davis Sampling and stochastic optimization, Kernel methods, Geometric and topological data analysis, Statistical learning theory.
  • Elias Bareinboim, Columbia University causal inference, generalizability, fairness, reinforcement learning
  • Samy Bengio, Apple, USA Deep learning, representation learning
  • Yoshua Bengio, University of Montreal, Canada / Mila Deep learning, learning to reason
  • Quentin Berthet, Google DeepMind High-dimensional statistics, convex optimization, differentiable programming
  • Alexandre Bouchard, UBC MCMC, SMC, phylogenetics
  • Joan Bruna, NYU, USA deep learning theory, signal processing, statistics
  • Miguel Carreira-Perpinan, University of California, Merced, USA Decision trees and forests, neural network compression, optimization in deep learning
  • Silvia Chiappa, DeepMind Causal inference, Approximate Bayesian inference, variational inference, ML fairness
  • John Cunningham, Columbia University, USA State space models, deep generative models, approximate inference, gaussian processes, computational neuroscience.
  • Marco Cuturi, Apple Optimal transport, geometric methods
  • Florence d'Alche-Buc, Telecom Paris, Institut Polytechnique de Paris Kernel methods, complex output prediction, robustness, explainability, bioinformatics
  • Luc De Raedt, Katholieke Universiteit Leuven, Belgium (statistical) relational learning, inductive logic programming, symbolic machine learning, probabilistic programming, learning from structured data, pattern mining
  • Vanessa Didelez, Leibniz Institute for Prevention Research and Epidemiology - BIPS, Bremen, Germany causal inference, graphical models, structure learning, applications in epidemiology
  • Gal Elidan, Hebrew University, Israel
  • Barbara Engelhardt, Stanford University, USA Latent factor models, computational biology, statistical inference, hierarchical models
  • Kenji Fukumizu, The Institute of Statistical Mathematics, Japan Kernel methods, dimension reduction
  • Aurelien Garivier, École Normale Supérieure de Lyon, France Bandits, Sequential Analysis, Information Theory and Statistics
  • Christophe Giraud, Université Paris-Saclay Clustering, network analysis, algorithmic fairness, active learning, theory of neural networks, high-dimensional statistics
  • Manuel Gomez-Rodriguez, Max Planck Institute for Software Systems Fairness, interpretability, accountability, strategic behavior, human-ai collaboration, temporal point processes
  • Russ Greiner, University of Alberta, Canada Medical informatics, active/budgeted Learning
  • Quanquan Gu, UCLA optimization, theory of deep learning, reinforcement learning, LLMs, deep generative models, high-dimensional statistics
  • Benjamin Guedj, Inria and University College London, France and UK Learning theory, PAC-Bayes, computational statistics, high-dimensional statistics, theory of deep learning, probabilistic models, Bayesian inference
  • Rajarshi Guhaniyogi, Texas A & M University Spatial and spatio-temporal Bayesian methods for large data, Bayes theory and methods for high dimensional regressions, tensor and network-valued regressions, functional data analysis, approximate Bayesian inference, graphical models, applications in neuroimaging and environmental sciences
  • Maya Gupta, University of Washington fairness, interpretability, societal issues, safety, regression, ensembles, shape constraints, immunology, information theory
  • Isabelle Guyon, Université Paris-Saclay, France, and ChaLearn, USA Feature selection, causality, model selection, automated machine learning, computer vision, kernel methods, privacy, fairness
  • Zaid Harchaoui, University of Washington Stochastic optimization, distributional shift, differentiable programming, federated learning, high-dimensional statistical inference, reproducing kernel Hilbert space.
  • Matthew Hoffman, Google Bayesian inference, Markov chain Monte Carlo, Sequential Monte Carlo, Variational inference
  • Daniel Hsu, Columbia University Learning theory
  • Aapo Hyvarinen, University of Helsinki, Finland Unsupervised learning, natural image statistics, neuroimaging data analysis
  • Tommi Jaakkola, Massachusetts Institute of Technology, USA Approximate inference, structured prediction, deep learning
  • Martin Jaggi, EPFL, Switzerland Distributed training, federated learning, optimization
  • Prateek Jain, Microsoft Research, India Non-convex Optimization, Stochastic Optimization, Large-scale Optimization, Resource-constrained Machine Learning
  • Kevin Jamieson, University of Washington Multi-armed bandits, active learning, experimental design
  • Stefanie Jegelka, Massachusetts Institute of Technology, USA Submodularity, determinantal point processes, negative dependence, Bayesian optimization
  • Nan Jiang, University of Illinois Urbana-Champaign reinforcement learning theory
  • Varun Kanade, University of Oxford learning theory; online learning; computational complexity; optimization
  • Samuel Kaski, Aalto University, Finland Probabilistic modelling, multiple data sources (multi-view, multi-task, multimodal, retrieval); applications in bioinformatics, user interaction, brain signal analysis
  • Sathiya Keerthi, Microsoft Research, USA optimization, large margin methods, structured prediction, large scale learning, distributed training
  • Mohammad Emtiyaz Khan, RIKEN Center for Advanced Intelligence, Japan Variational Inference, Approximate Bayesian inference, Bayesian Deep Learning
  • Mladen Kolar, University of Southern California, USA high-dimensional statistics, graphical models
  • George Konidaris, Duke University, USA Reinforcement Learning, artificial intelligence, robotics
  • Aryeh Kontorovich, Ben-Gurion University metric spaces, nearest neighbors, Markov chains, statistics
  • Wouter Koolen, CWI, Amsterdam Online Learning, Bandits, Pure Exploration, e-values
  • Sanjiv Kumar, Google Research representation learning, optimization, deep learning, hashing, nearest neighbor search
  • Eric Laber, Duke University reinforcement learning, precision medicine, treatment regimes, causal inference
  • Simon Lacoste-Julien, Mila, Université de Montréal & SAIT AI Lab, Montreal optimization, structured prediction, theory of deep learning
  • Christoph Lampert, Institute of Science and Technology, Austria (IST Austria) transfer learning, trustworthy learning, computer vision
  • Tor Lattimore, DeepMind Bandits, reinforcement learning, online learning
  • Nicolas Le Roux, Microsoft Research, Montreal optimization, reinforcement learning
  • Honglak Lee, Google and University of Michigan, Ann Arbor Deep Learning, Deep Generative Models, Representation Learning, Reinforcement Learning, Unsupervised Learning
  • Anthony Lee, University of Bristol Markov chain Monte Carlo, sequential Monte Carlo
  • Qiang Liu, Dartmouth College, USA Probabilistic graphical models, inference and learning, computational models for crowdsourcing
  • Gabor Lugosi, Pompeu Fabra University, Spain statistical learning theory, online prediction, concentration inequalities
  • Shiqian Ma, Rice University first-order methods, stochastic algorithms, bilevel optimization, Riemannian optimization
  • Michael Mahoney, University of California at Berkeley, USA randomized linear algebra; stochastic optimization; neural networks; matrix algorithms; graph algorithms; scientific machine learning
  • Stephan Mandt, variational inference, deep latent variable models, machine learning and physics, neural data compression
  • Vikash Mansinghka, Massachusetts Institute of Technology, USA Probabilistic programming, Bayesian structure learning, large-scale sequential Monte Carlo
  • Benjamin Marlin, University of Massachusetts Amherst Probabilistic models, missing data, time series
  • Rahul Mazumder, Massachusetts Institute of Technology mathematical optimization, high-dimensional statistics, sparsity, Boosting, nonparametric statistics, shape constrained estimation, decision tree ensembles, compressing large neural networks
  • Qiaozhu Mei, University of Michigan, USA Learning from text, network, and behavioral data, representation learning, interactive learning
  • Vahab Mirrokni, Google Research Mechanism Design and Internet Economics, Algorithmic Game Theory, Distributed Optimization, Submodular Optimization, Large-scale Graph Mining
  • Mehryar Mohri, New York University, USA Learning theory (all aspects, including auctioning, ensemble methods, structured prediction, time series, on-line learning, games, adaptation, learning kernels, spectral learning, ranking, low-rank approximation)
  • Joris Mooij, University of Amsterdam, Netherlands Causality
  • Sayan Mukherjee, Duke University, USA; University of Leipzig; Max Planck Institute for Mathematics in the Sciences Bayesian, Time series, Geometry, Topology, Deep learning
  • Gergely Neu, reinforcement learning, learning theory, online learning, bandit theory
  • Lam Nguyen, IBM Research, Thomas J. Watson Research Center Stochastic Gradient Algorithms, Non-convex Optimization, Stochastic Optimization, Convex Optimization
  • Scott Niekum, University of Massachusetts Amherst AI safety, imitation learning, reinforcement learning, robotics, human-ai interaction
  • Chris Oates, Newcastle University Bayesian computation, kernel methods, uncertainty quantification
  • Francesco Orabona, Boston University Online convex optimization, betting algorithms, parameter-free online optimization, stochastic optimization
  • Laurent Orseau, Deepmind Reinforcement Learning, Artificial General Intelligence
  • Debdeep Pati, Texas A&M University Bayes theory and methods in high dimensions; Approximate Bayesian methods; high dimensional network analysis, graphical models, hierarchical modeling of complex shapes, point pattern data modeling, real-time tracking algorithms
  • Jie Peng, University of California, Davis, USA High dimensional statistical inference, graphical models, functional data analysis
  • Vianney Perchet, ENSAE & Criteo AI Lab bandits, online learning, matching
  • Alexandre Proutiere, KTH Royal Institute of Technology Reinforcement learning, statistical learning in control systems, bandits, clustering and community detection
  • Maxim Raginsky, University of Illinois at Urbana-Champaign Theory of deep learning, statistical learning, optimization, applied probability, concentration of measure, dynamical systems and control
  • Peter Richtarik, King Abdullah University of Science and Technology (KAUST) convex and nonconvex optimization, stochastic zero, first and second-order methods, distributed training, federated learning, communication compression, operator splitting, efficient ML
  • Lorenzo Rosasco, University of Genova, Italy and Massachusetts Institute of Technology, USA Statistical learning theory, Optimization, Regularization, Inverse problems
  • Daniel Roy, University of Toronto generalization, learning theory, deep learning, pac-bayes, nonparametric bayes, online learning, nonvacuous bounds
  • Sivan Sabato, Ben Gurion University of the Negev statistical learning theory, active learning, interactive learning
  • Ruslan Salakhutdinov, Carnegie Mellon University Deep Learning, Probabilistic Graphical Models, and Large-scale Optimization.
  • Joseph Salmon, Université de Montpellier High-dimensional statistics, convex optimization, crowdsourcing
  • Marc Schoenauer, INRIA Saclay, France Stochastic Optimization, Derivative-free Optimization, Evolutionary Algorithms, Algorithm configuration/selection
  • Fei Sha, Google probabilistic modeling, dimensionality reduction, scientific machine learning, AI for science, NLP, computer vision
  • Ohad Shamir, Weizmann Institute of Science, Israel Learning theory, optimization, theory of deep learning.
  • Christian Shelton, UC Riverside, USA Time series, temporal and spatial processes, point processes
  • Xiaotong Shen, University of Minnesota, USA Learning, Graphical models, Recommenders
  • Ali Shojaie, University of Washington High-dimensional statistics; Statistical learning; graphical models; network analysis
  • Ilya Shpitser, Johns Hopkins University causal inference, missing data, algorithmic fairness, semi-parametric statistics
  • Mahdi Soltanolkotabi
  • David Sontag, Massachusetts Institute of Technology Graphical models, approximate inference, structured prediction, unsupervised learning, applications to health care
  • Bharath Sriperumbudur, Pennsylvania State University Kernel Methods, Regularization, Theory of Functions and Spaces, Statistical Learning Theory, Nonparametric Estimation and Testing, Functional Data Analysis, Topological Data Analysis
  • Erik Sudderth, University of California, Irvine, USA Bayesian nonparametrics, graphical models, unsupervised learning, variational inference, Monte Carlo methods, computer vision, signal and image processing.
  • Csaba Szepesvari, University of Alberta, Canada reinforcement learning, sequential decision making, learning theory
  • Ambuj Tewari, University of Michigan, USA learning theory, online learning, bandit problems, reinforcement learning, optimization, high-dimensional statistics
  • Jin Tian, Iowa State University causal inference, Bayesian networks, probabilistic graphical models
  • Ivan Titov, University of Edinburgh, UK / University of Amsterdam, NL Natural language processing, deep learning, structured prediction
  • Koji Tsuda, National Institute of Advanced Industrial Science and Technology, Japan.
  • Nicolas Vayatis, ENS Cachan, France Statistical learning theory
  • Jean-Philippe Vert, Google, France kernel methods, computational biology, statistical learning theory
  • Silvia Villa, Genova University, Italy optimization, convex optimization, first order methods, regularization
  • Manfred Warmuth, Google Research
  • Kilian Weinberger, Cornell University, USA Deep Learning, Representation Learning, Ranking, Computer Vision
  • Martha White, University of Alberta reinforcement learning, representation learning
  • Chris Wiggins, Columbia University Computational biology, ethics, bandits
  • Zhihua Zhang, Peking University, China Bayesian Analysis and Computations, Numerical Algebra and Optimization
  • Mingyuan Zhou, The University of Texas at Austin Approximate inference, Bayesian methods, deep generative models, discrete data analysis
  • Ji Zhu, University of Michigan, Ann Arbor Network data analysis, latent variable models, graphical models, high-dimensional data, health analytics.

JMLR-MLOSS Editors

  • Alexandre Gramfort, Meta AI. supervised learning, convex optimization, sparse methods, machine learning software, applications in neuroscience
  • Sebastian Schelter, University of Amsterdam & Apache Software Foundation. Data management for machine learning; data quality; relational data preparation.
  • Joaquin Vanschoren, Eindhoven University of Technology, Netherlands. Automated machine learning, meta-learning, machine learning software.
  • Zeyi Wen, Hong Kong University of Science and Technology (Guangzhou). Machine learning systems, automatic machine learning, kernel methods, decision trees.
  • Albert Bifet, Télécom ParisTech & University of Waikato. Artificial Intelligence, Big Data Science, and Machine Learning for Data Streams.

JMLR Editorial Board of Reviewers

The Editorial Board of Reviewers is a collection of trusted reviewers who commit to reviewing at least two papers per year. Please reach out to us at editor@jmlr.org if you'd like to volunteer to join this list of trusted reviewers:

  • David Abel, reinforcement learning, planning, abstraction
  • Evrim Acar, matrix/tensor factorizations
  • Maximilian Alber, deep learning, semantic segmentation, software, attribution methods/"explainable ai"
  • Mauricio A Alvarez, Gaussian processes, kernel methods, Bayesian inference, physics-inspired machine learning, data-centric engineering
  • David Alvarez-Melis, Optimal Transport, Optimization, Unsupervised Learning
  • Chris Amato, multi-agent reinforcement learning, partially observable reinforcement learning, multi-robot systems
  • Weihua An, Network Analysis, Causal Inference, Bayesian Analysis, Experiments
  • Rika Antonova, Bayesian optimization, reinforcement learning, variational inference, learning for robotics
  • Michael Arbel, kernel methods, deep learning
  • Cedric Archambeau, Uncertainty quantification, approximate inference, variational inference. Bayesian optimisation, hyperparameter optimisation, AutoML. Transfer learning, continual learning. Responsible AI.
  • Ery Arias-Castro, clustering, multidimensional scaling, manifold learning, hypothesis testing, multiple testing, nonparametric methods
  • Yossi Arjevani, optimization, lower bound, ReLU models, symmetry
  • Raman Arora, stochastic optimization, subspace learning, differential privacy, robust adversarial learning, algorithmic regularization
  • Devansh Arpit, deep learning, representation learning, optimization, generalization
  • Alexander Aue, time series, high-dimensional statistics, change-points
  • Valeriy Avanesov, learning theory, distributed learning, kernel methods, nonparametric and high-dimensional statistics, bootstrap
  • Kamyar Azizzadenesheli, Learning theory, Reinforcement Learning, Bandit Algorithms
  • Rohit Babbar, Large-scale multi-label learning, Extreme Classification, Imbalanced classification
  • Krishnakumar Balasubramanian, Sampling and stochastic optimization, Kernel methods, Geometric and topological data analysis, Statistical learning theory.
  • Raef Bassily, differential privacy, statistical learning theory, optimization, stochastic gradient descent, generalization guarantees, adaptive data analysis, information theory
  • Gustavo Batista, time series, data streams, class imbalance, embedded machine learning, quantification
  • Kayhan Batmanghelich, ML for Healthcare, Explainability, Weakly Supervised Learning, Disentanglement, Medical Imaging, Probabilistic Graphical Model
  • Denis Belomestny, MCMC, Variance reduction, deconvolution problems, reinforcement learning
  • Thomas Berrett
  • Srinadh Bhojanapalli, optimization, deep learning, transformers, non-convex optimization
  • Przemyslaw Biecek, explainable ai, interpretable machine learning, evidence based machine learning, human centered artificial intelligence
  • Michael Biehl, learning vector quantization, prototype based systems, statistical physics of learning, biomedical applications
  • Gilles Blanchard, learning theory, kernel methods, high-dimensional inference, multiple testing, statistics
  • Mathieu Blondel, structured prediction, differentiable programming, optimization
  • Omer Bobrowski, geometric and topological inference, probabilistic modeling, gaussian processes
  • Giorgos Borboudakis, feature selection, causal discovery, automated machine learning, model selection
  • Guy Bresler, complexity of statistical inference, probabilistic models, random graphs, applied probability
  • Peter Bubenik, topological data analysis, applied topology, applied algebra, applied category theory
  • Cody Buntain, social media, text mining, network science, multi-modal learning, information retrieval
  • David Burns, time series learning, human activity recognition, novelty detection, out of distribution detection, open set classification
  • Diana Cai, Bayesian nonparametrics, probabilistic modeling, Bayesian modeling
  • Burak Cakmak, approximate Bayesian inference, message passing, variational Inference
  • Francisco Maria Calisto, Human-Computer Interaction, Health Informatics, Breast Cancer, User-Centered Design, Artificial Intelligence, Medical Imaging
  • Timothy Cannings, Classification, statistical learning, high-dimensional data, data perturbation techniques, nonparametric methods
  • Olivier Cappé, Bandit Algorithms, Statistics, Signal Processing
  • Iain Carmichael, Multi-view data, high-dimensional, statistics
  • Luigi Carratino, kernel methods, large-scale, regularization, optimization
  • Antonio Cavalcante Araujo Neto, clustering, unsupervised learning, graphs, density estimation
  • Adam Charles, Signal Processing, Computational Neuroscience, Dictionary learning, deconvolution, Compressed sensing, Inverse problems, Regularizations, Recurrent neural networks
  • Pratik Chaudhari, deep learning, optimization
  • Bo Chen, deep learning, generative model, Bayesian inference,
  • Xi Chen, high-dimensional statistics, stochastic and robust optimization, machine learning for revenue management, crowdsourcing, choice modelling
  • Kun Chen, Integrative statistical learning, dimension reduction, low-rank models, robust estimation, large-scale predictive modeling, healthcare analytics
  • Yuansi Chen, domain adaptation, MCMC sampling, optimization, computational neuroscience
  • Jie Chen, graph deep learning, Gaussian process, kernel method
  • Cheng Chen, matrix factorization, optimization, online learning
  • Lin Chen, optimization, machine learning theory
  • Jianbo Chen, adversarial examples; adversarial robustness; model interpretation; explainability
  • Shizhe Chen, point process, graphical model, neuroscience, experimental design
  • Victor Chernozhukov, causal models, structural equation models, treatment effects, quantile and distributional methods, high-dimensional inference
  • Silvia Chiappa, Causal inference, Approximate Bayesian inference, variational inference, ML fairness
  • David Choi, statistics, network data analysis, stochastic blockmodel
  • Andreas Christmann, kernel methods, robust statistics, support vector machines
  • Delin Chu, scientific computing, data dimensionality reduction, optimization techniques
  • Carlo Ciliberto, kernel methods, statistical learning theory, structured prediction, meta-learning, multi-task learning
  • Nadav Cohen, Machine Learning, Deep Learning, Statistical Learning Theory, Tensor Analysis, Non-Convex Optimization
  • Taco Cohen, deep learning, equivariance, geometry, data compression
  • Lorin Crawford, deep learning, kernel methods, interpretability, Bayesian, computational biology
  • Lehel Csató, probabilistic inference, gaussian processes, non-parametric methods
  • Yifan Cui, causal inference, foundation of statistics, sampling, statistical machine learning, survival analysis
  • James Cussens, graphical models, relational learning
  • Andy Dahl, Genetics, Variance Decomposition, Matrix/Tensor Factorization, Clustering
  • Xiaowu Dai, kernel methods, matching markets, mechanism design, high-dimensional statistics, nonparametric inference, dynamic systems
  • Ben Dai, statistical learning theory, ranking, word embedding
  • Andreas Damianou, Gaussian process, transfer learning
  • Jesse Davis, relational learning, PU learning, sports analytics, anomaly detection
  • Cassio de Campos, Probabilistic Circuits, Probabilistic Graphical Models, Imprecise Probability, Credal Models, Computational Complexity, Robustness
  • Chris De Sa, optimization, MCMC, manifolds, systems, parallelism, distributed, decentralized
  • Ernesto De Vito, kernel methods, mathematical foundations of machine learning, reproducing kernel Hilbert spaces
  • Krzysztof Dembczynski, multi-label classification, extreme classification, large-scale learning, learning theory, learning reductions
  • Carlo D'Eramo, Reinforcement Learning, Deep Learning, Multi-task learning
  • Michal Derezinski, randomized linear algebra, statistical learning theory, determinantal point processes
  • Alexis Derumigny, high-dimensional linear regression, copula models, kernel smoothing
  • Paramveer Dhillon, NLP, Text Mining, Matrix Factorization, Social Network Analysis, Computational Social Science
  • Laxman Dhulipala, parallel graph algorithms, graph embedding, shared-memory graph algorithms, distributed graph algorithms
  • Thomas Dietterich
  • Edgar Dobriban, statistical learning theory, sketching, distributed learning, dimension reduction, mathematics of deep learning
  • Michele Donini, automl, fairness, kernel methods
  • Christian Donner, bayesian inference, Gaussian process, variational inference, density estimation, nonparametric models
  • Dejing Dou, semantic data mining, deep learning, information extraction, health informatics
  • Kumar Avinava Dubey, Bayesian Inference, Question Answering, Bayesian Nonparametrics, deep learning
  • Sebastijan Dumancic, statistical relational learning, neuro-symbolic methods, inductive logic programming, program induction, probabilistic programming
  • Jack Dunn, optimization, decision trees, interpretability
  • Subhajit Dutta
  • David Duvenaud, deep learning, Gaussian processes, approximate inference, differential equations
  • Yonathan Efroni, Reinforcement Learning, Bandits, Online Learning
  • Dumitru Erhan, deep learning, self-supervised learning, unsupervised learning, domain adaptation, object detection, model understanding
  • Shobeir Fakhraei, Graph Mining, Graph Neural Networks
  • Zhou Fan, random matrices, random graphs, free probability, high-dimensional asymptotics
  • Max Farrell, causal inference, nonparametrics, deep learning, semiparametrics,
  • Moran Feldman, submodular maximization, streaming algorithms, online algorithms, combinatorial optimization
  • Yang Feng, machine learning, variable selection, community detection
  • Olivier Fercoq, optimization, convex analysis, coordinate descent, primal-dual methods
  • Tamara Fernandez, kernel methods, survival analysis, Gaussian processes, non-parametric statistics
  • Matthias Feurer, Automated Machine Learning, Hyperparameter Optimization, Bayesian Optimization
  • Aaron Fisher, causal inference, interpretable machine learning, wearable device data, matrix decompositions
  • Madalina Fiterau, ensemble methods, deep learning, multimodal learning, medical imaging, health applications
  • Remi Flamary, optimal transport, domain adaptation, optimization
  • Nicolas Flammarion, optimization
  • Seth Flaxman, Bayesian inference, kernel methods, Gaussian processes
  • Michael Fop, Feature selection, Graphical models, High-dimensional data analysis, Model-based clustering and classification, Statistical network analysis
  • Dylan Foster, Reinforcement learning, control, contextual bandits, online learning, statistical learning, optimization, deep learning
  • Jordan Frecon, Hyperparameter optimization, Structured sparsity, Multitask learning, Optimization, Bilevel programming
  • Roy Frostig, statistical learning theory, stochastic optimization, differentiable programming
  • Piotr Fryzlewicz, time series, change-point and change detection, high-dimensional inference, dimension reduction, wavelets, multiscale methods, statistical learning, networks
  • Chad Fulton, time series, bayesian analysis, econometrics
  • Rahul G. Krishnan, deep generative models, unsupervised learning, applications to health care, state space models
  • Chao Gao, robust statistics, high-dimensional statistics, Bayes theory, network analysis
  • Tingran Gao, kernel methods, manifold learning, topological data analysis
  • Xu Gao, time series, deep learning, spatial temporal model
  • Jochen Garcke, kernel methods, manifold learning, interpretability, high-dimensional approximation, uncertainty quantification, numerical simulations
  • Roman Garnett, Gaussian processes, Bayesian optimization, active learning
  • Damien Garreau, interpretability, kernel methods
  • Saeed Ghadimi, nonconvex optimization, stochastic gradient-based algorithms, sample complexity
  • Asish Ghoshal, statistical learning theory, causal inference, graphical models
  • Gauthier Gidel, optimization, deep learning theory, game theory
  • Pieter Gijsbers, AutoML, meta-learning
  • Olivier Goudet, causality, metaheuristics
  • Robert Gower, Stochastic optimization, sketching, randomized numerical linear algebra, linear algebra, quasi-Newton methods, SGD, stochastic gradient descent
  • Navin Goyal, deep learning, learning theory,
  • Roger Grosse, neural net optimization, Bayesian deep learning, hyperparameter adaptation
  • Steffen Grunewalder, statistical learning theory, kernel methods, multi armed bandits
  • Yuwen Gu, high-dimensional statistics, variable selection, nonparametric statistics, model combination, optimization, causal inference
  • Quanquan Gu, optimization, theory of deep learning, reinforcement learning, LLMs, deep generative models, high-dimensional statistics
  • Benjamin Guedj, Learning theory, PAC-Bayes, computational statistics, high-dimensional statistics, theory of deep learning, probabilistic models, Bayesian inference
  • Ishaan Gulrajani, deep learning, generative modeling, out-of-distribution generalization
  • Tom Gunter, gaussian processes, bayesian nonparametrics, cox processes, bayesian inference
  • Xin Guo, ranking and preference learning, regression, learning theory, supervised learning, semi-supervised learning, online learning, kernel methods, sparsity regularization
  • Xin Guo, deep learning and games, reinforcement learning, GANs
  • Minh Ha Quang, kernel methods, statistical learning theory, matrix and operator theory, differential geometrical methods, information geometry, infinite-dimensional statistics
  • Amaury Habrard, Metric Learning, Transfer Learning, Domain Adaptation, Representation learning, statistical learning
  • Jussi Hakanen, optimization, multiobjective optimization, bayesian optimization, kriging, human in the loop
  • William Hamilton, graph representation learning; natural language processing; network analysis
  • Chulwoo Han, asset pricing, financial application, deep learning
  • Lei Han, reinforcement learning, supervised learning, transfer learning
  • Bo Han, deep learning, weakly supervised learning, label-noise learning, adversarial learning
  • Steve Hanneke, learning theory, active learning, sample complexity, PAC learning, VC theory, compression schemes, machine teaching, non-iid learning
  • Ben Hansen, optimal matching, multivariate distance matching, potential-outcomes based causal inference
  • Botao Hao, bandits, reinforcement learning, exploration, tensor methods
  • Ning Hao, Change-point analysis, High-dimensional data, Multivariate analysis, Statistical machine learning.
  • Ethan Harris, MLOSS, deep learning, augmentation, computational neuroscience
  • Mohamed Hebiri, high dimensional statistics, statistical fairness, distribution-free algorithms, minimax theory
  • Reinhard Heckel, deep learning, optimization, active learning
  • Markus Heinonen, Gaussian processes, dynamical models, differential equations, bayesian neural networks, kernel methods
  • James Hensman, gaussian processes, variational inference, biostatistics
  • Daniel Hernández Lobato, Approximate Inference, Gaussian Processes, Bayesian Optimization
  • Jun-ichiro Hirayama, unsupervised learning, brain imaging, neuroscience, signal processing, independent component analysis
  • Nhat Ho, Statistical learning theory, Optimal transport, Bayesian nonparametrics, Bayesian inference, Mixture and hierarchical models, Optimization, Deep generative models, Variational inference
  • Jean Honorio, learning theory, planted models, graphical models, structured prediction, community detection
  • Giles Hooker, random forests, intelligibility, explanations, confidence intervals, uncertainty quantification, hypothesis tests, variable importance, central limit theorems
  • Thibaut Horel, optimization, convex analysis, game theory, diffusion processes
  • Tamas Horvath, pattern mining, graph mining, relational learning, inductive logic programming, learning from structured data, networks
  • Torsten Hothorn, statistical learning
  • Daniel Hsu, Learning theory
  • Wei Hu, deep learning theory
  • Jianhua Huang, statistical machine learning, dimension reduction, statistical inference, Bayesian optimization, transfer learning
  • Nicolas Hug, open source, gradient boosting, python, software
  • Jonathan Huggins, Bayesian methods, Bayesian computation, kernel methods, robust inference, large-scale learning
  • Eyke Hüllermeier, preference learning and ranking, uncertainty in machine learning, multi-target prediction, weakly supervised learning, learning on data streams
  • Masaaki Imaizumi, statistics, learning theory, tensor, functional data, deep learning theory
  • Rishabh Iyer, submodular optimization, active learning, compute efficient learning, robust learning, data subset selection, data summarization
  • Ameya D. Jagtap, Supervised Learning, Deep Neural Networks, Scientific Machine Learning, Physics-Informed Machine Learning, Transfer Learning, Active Learning, Activation Functions, Distributed Learning, Neural Operator Networks, Graph Neural Networks, Data-Driven Techniques
  • Kevin Jamieson, Multi-armed bandits, active learning, experimental design
  • Lucas Janson, high-dimensional inference, variable importance, reinforcement learning
  • Ghassen Jerfel, bayesian machine learning, statistical inference, uncertainty, unsupervised learning, sampling, MCMC, optimization
  • Sean Jewell, changepoint detection, selective inference, neuroscience
  • Heinrich Jiang, fairness, data labeling, clustering
  • Nan Jiang, reinforcement learning theory
  • Lu Jiang, robust deep learning, curriculum learning, multimodal learning
  • Chi Jin, nonconvex optimization, reinforcement learning theory
  • Julie Josse, missing values, causal inference, matrix completion
  • Varun Kanade, learning theory; online learning; computational complexity; optimization
  • Motonobu Kanagawa, kernel methods, simulation models, uncertainty quantification
  • Shiva Kasiviswanathan, Privacy, Learning Theory, Optimization, Algorithms
  • Emilie Kaufmann, multi-armed bandit, reinforcement learning
  • Kshitij Khare, Graphical models, Bayesian computation, Vector autoregressive models
  • Rahul Kidambi, stochastic optimization, stochastic gradient descent, optimization, offline reinforcement learning, model-based reinforcement learning, batch learning with bandit feedback, offline contextual bandit learning
  • Yoon Kim, natural language processing, deep learning
  • Pieter-Jan Kindermans, interpretability, explainability, understanding neural networks, deep learning, neural architecture search, brain machine interfaces, brain computer interfaces
  • Johannes Kirschner, bandits, Bayesian optimization, partial monitoring
  • Arto Klami, probabilistic models, variational inference, matrix factorization, canonical correlation analysis
  • Aaron Klein, Bayesian optimization, AutoML, neural architecture search
  • Jason Klusowski, Deep learning, neural networks, decision tree learning
  • Murat Kocaoglu, causal inference, information theory, deep generative models
  • Mladen Kolar, high-dimensional statistics, graphical models
  • Dehan Kong, kernel methods, matrix and tensor methods, causal inference, high dimensional inference, manifold learning, robust methods, neuroimaging and genetics
  • Jean Kossaifi, deep learning, tensor methods
  • Sanmi Koyejo, federated learning, distributed machine learning, robust machine learning, statistical learning theory, neuroimaging, machine learning for medical imaging, machine learning for healthcare
  • Akshay Krishnamurthy, statistical learning theory, reinforcement learning, bandits
  • Todd Kuffner, statistics, post-selection inference, resampling, bootstrap, asymptotics, testing
  • Vitaly Kuznetsov, time series, learning theory, quantitative finance
  • Branislav Kveton, bandits, online learning, reinforcement learning
  • Jakub Lacki, graph algorithms, clustering, distributed optimization
  • Vincenzo Lagani, Causal analysis, bioinformatics
  • Silvio Lattanzi, clustering, graph mining, submodular optimization
  • Tor Lattimore, Bandits, reinforcement learning, online learning
  • Rémi Le Priol, deep learning, optimization, duality
  • Nicolas Le Roux, optimization, reinforcement learning
  • Johannes Lederer, deep learning theory, high-dimensional statistics
  • Holden Lee, MCMC, sampling algorithm, control theory, reinforcement learning
  • Sokbae Lee, econometrics, causal inference, quantile regression, mixed integer optimization
  • Yoonkyung Lee, Kernel methods, ranking, loss functions, dimension reduction
  • Guillaume Lemaitre, software engineering, open source software, class imbalance
  • Tianyang Li, optimization, statistics, machine learning, stochastic optimization, statistical inference, high dimensional statistics, robust learning
  • Hao Li, deep learning, vision, generative models, optimization
  • Xiaodong Li, matrix completion, network analysis, optimization
  • Didong Li, Nonparametric Bayes, geometric data analysis, manifold learning, information geometry, spatial statistics
  • Shuai Li, Machine intelligence, online prediction, decision making, bandits, learning theory, optimization
  • Yujia Li, deep learning, graph neural networks, program synthesis, program induction
  • Yi Li, sparse recovery, randomized numerical linear algebra
  • Heng Lian, statistics, learning theory, reproducing kernel Hilbert space, distributed optimization
  • Tengyuan Liang, deep learning theory, kernel methods, interpolation, high-dimensional asymptotics
  • Tianyi Lin, min-max optimization, optimal transport
  • Wei Lin, high-dimensional statistics, statistical machine learning, causal inference
  • Wu Lin, Variational Inference, Stochastic Optimization
  • Hongzhou Lin, optimization
  • Marius Lindauer, automated machine learning, hyperparameter optimization, neural architecture search
  • Zachary Lipton, deep learning, healthcare, natural language processing, robustness, causality, fairness, technology and society
  • Yang Liu, learning with noisy data, weakly supervised learning, crowdsourcing
  • Weidong Liu, statistical optimization, Gaussian graphical model, precision matrix, false discovery rate
  • Song Liu, density ratio estimation, graphical model, Stein identity, change detection, outlier detection
  • Tongliang Liu
  • Chong Liu, Bayesian Optimization, Bandits, Active Learning, AI for Science
  • Liping Liu, generative models, graph neural networks, self-attention models
  • Karen Livescu, representation learning, multi-view learning, speech processing, natural language processing, sign language
  • Gaëlle Loosli, kernel methods, indefinite kernels, adversarial robustness
  • Miles Lopes, bootstrap methods, high-dimensional statistics, sketching algorithms (error analysis of)
  • Qi Lou, graphical models, computational advertising
  • Bryan Kian Hsiang Low, Gaussian process, Bayesian optimization, active learning, automated machine learning, probabilistic machine learning, data valuation, fairness in collaborative/federated learning
  • Daniel Lowd, adversarial machine learning, statistical relational learning, Markov logic, tractable probabilistic models, sum-product networks, probabilistic graphical models, Markov networks, Bayesian networks, Markov random fields
  • Aurelie Lozano, high-dimensional estimation, deep learning, optimization
  • Haihao Lu, optimization
  • Junwei Lu, high dimensional statistics
  • Aurelien Lucchi, optimization, deep learning theory
  • Haipeng Luo, online learning, bandit problems, reinforcement learning
  • Yuetian Luo, tensor data analysis, statistical and computational trade off
  • Luo Luo, optimization, numerical linear algebra
  • Shujie Ma, deep learning, causal inference, network analysis, nonparametric methods, dimensionality reduction, time series data
  • Tengyu Ma, deep learning theory
  • Siyuan Ma, optimization, kernel methods, deep learning
  • Eric Ma, network science, graph theory, applied deep learning, applied bayesian statistics
  • Zongming Ma, statistics, optimality, social network
  • Yi-An Ma, Bayesian inference, time series analysis
  • Qing Mai, High-dimensional data analysis, Tensor data analysis, Machine learning, Semiparametric and nonparametric statistics, Dimension reduction
  • Odalric-Ambrym Maillard, multi-armed bandits, reinforcement learning, Markov decision processes, concentration of measure
  • Man Wai Mak, Speaker recognition, deep learning, domain adaptation, noise robustness, ECG classification
  • Ameesh Makadia, 3D computer vision, geometric deep learning, harmonic analysis
  • Gustavo Malkomes, Bayesian optimization, active learning, Gaussian processes, active model selection
  • Stephan Mandt, variational inference, deep latent variable models, machine learning and physics, neural data compression
  • Horia Mania, Reinforcement Learning, control theory
  • Timothy Mann, reinforcement learning, optimization, robustness, transfer learning, delay
  • Rahul Mazumder, mathematical optimization, high-dimensional statistics, sparsity, Boosting, nonparametric statistics, shape constrained estimation, decision tree ensembles, compressing large neural networks
  • Julian McAuley, personalization, recommender systems, web mining
  • Daniel McDonald, statistical machine learning, high-dimensional statistics, time series, optimization, risk estimation
  • Song Mei, deep learning, kernel methods
  • Gonzalo Mena, optimal transport, statistics, computational biology
  • Lucas Mentch, Random Forests, Ensembles, Explainability, Variable Importance, Hypothesis Testing
  • Bjoern Menze, random forests, deep learning, biomedical imaging
  • Bertrand Michel, Model selection, topological data analysis, unsupervised learning
  • Ezra Miller, geometry, algebra, combinatorics, topology, geometric and topological data analysis, evolutionary biology
  • Andrew Miller, statistical inference, health, Gaussian processes, MCMC, variational inference
  • Bamdev Mishra, Riemannian optimization, manifold optimization, matrix tensor decompositions, stochastic algorithms
  • Ioannis Mitliagkas, optimization, theory of deep learning, large scale learning, minimax optimization
  • Alejandro Moreo Fernández, quantification, text classification, domain adaptation, word embeddings, kernel methods, transfer learning
  • Dmitriy Morozov, topological data analysis
  • Christopher Morris, Learning on graphs
  • Nicole Mücke, kernel methods, stochastic approximation, deep learning, (de-)centralized learning, regularization methods, inverse problems, learning theory
  • Shinichi Nakajima, Bayesian learning, variational inference, generative model
  • Eric Nalisnick, bayesian methods, deep learning, approximate inference, generative models, out-of-distribution detection
  • Preetam Nandy, Causal Inference, Causal Structure Learning, Graphical Models, High-dimensional Data, Reinforcement Learning, Fairness in Machine Learning,
  • Harikrishna Narasimhan, Evaluation Metrics, Constrained Optimization, Fairness, Learning Theory, Convex Optimization
  • Ion Necoara, convex optimization, stochastic optimization, kernel methods, supervised learning
  • Gergely Neu, reinforcement learning, learning theory, online learning, bandit theory
  • Gerhard Neumann, reinforcement learning, policy search, deep learning, robotics,
  • Behnam Neyshabur, deep learning, learning theory, generalization
  • Vlad Niculae, structured prediction, optimization, argmin differentiation, natural language processing
  • Yang Ning, high dimensional statistics, statistical inference, causal inference
  • Jose Nino-Mora, optimization, probabilistic models, bandit problems
  • Atsushi Nitanda, stochastic optimization, statistical learning theory, deep learning, kernel methods
  • David Nott, Bayesian model choice and model criticism, likelihood-free inference, variational inference
  • Alex Nowak-Vila, kernel methods, structured prediction, inverse problems, statistical learning theory, convex optimization
  • Aidan O'Brien, bioinformatics, feature selection, implementation
  • Kevin O'Connor, optimal transport, inference for dynamical systems
  • Ronald Ortner, reinforcement learning
  • Satoshi Oyama, kernel methods, link prediction, crowdsourcing
  • Ana Ozaki, exact learning, pac learning, neural network verification, logic, ontologies, knowledge graphs
  • Randy Paffenroth, deep learning, theory of machine learning, unsupervised learning, applied mathematics
  • Amichai Painsky, Statistics, Information Theory, Statistical Inference, Predictive Modeling, Tree-based Models, Data Compression, Probability Estimation
  • Evangelos Papalexakis, factorization methods, tensor factorization, tensor decomposition, matrix factorization, matrix decomposition, unsupervised methods
  • Laetitia Papaxanthos, Deep learning, data mining, computational biology
  • Biswajit Paria, bayesian optimization
  • Changyi Park, Kernel methods, support vector machines, feature selection
  • Gunwoong Park, directed acyclic graphical models, causal inference
  • Matt Parry, probabilistic modelling, scoring rules, Bayesian statistics
  • Razvan Pascanu, deep learning, optimization, reinforcement learning, continual learning, graph networks
  • Jose M Pena, causality, probabilistic graphical models
  • Richard Peng, graph algorithms, optimization, numerical methods
  • Will Perkins, probability, statistical physics, combinatorics, random graphs
  • Victor Picheny, Bayesian optimization, Gaussian process
  • Brad Price, Statistical Machine Learning, Multivariate and Multi-Task Methods, Graph Constrained Models
  • Yixuan Qiu, statistical computing, optimization, MCMC
  • Yumou Qiu, High-dimensional statistical inference, Gaussian graphical model, kernel smoothing, statistical analysis for brain imaging, causal inference, high-throughput plant phenotyping
  • Qing Qu, nonconvex optimization, representation learning, inverse problems, unsupervised learning
  • Peter Radchenko, high-dimensional statistics, sparse learning and estimation, feature selection.
  • Manish Raghavan, algorithmic fairness, game theory, behavioral economics
  • Maxim Raginsky, Theory of deep learning, statistical learning, optimization, applied probability, concentration of measure, dynamical systems and control
  • Anand Rajagopalan, Clustering, Random Matrix Theory
  • Goutham Rajendran, Machine Learning Theory, Generative Models, Representation Learning, Latent Variable Models, Variational Autoencoders
  • Herilalaina Rakotoarison, automl, algorithm selection
  • Jan Ramon, privacy preserving learning, learning from graphs, learning theory
  • Rajesh Ranganath, Approximate Inference, Deep Generative Models, Causal Inference, Machine Learning
  • Vinayak Rao, Markov Chain Monte Carlo, Monte Carlo, Bayesian methods, Bayesian nonparametrics, Variational Inference, Point Processes, Stochastic Processes
  • Jesse Read, multi-label, multi-output, data streams
  • Zhao Ren, high-dimensional statistics, robust statistics, graphical models
  • Steffen Rendle, recommender systems, large scale learning, matrix factorization
  • Marcello Restelli, reinforcement learning
  • Lev Reyzin, learning theory, graph algorithms, ensemble methods, bandits
  • Bruno Ribeiro, relational learning, invariant representations, embeddings, graph neural networks
  • Bastian Rieck, topological data analysis, computational topology, kernel methods, networks and graphs, applications to healthcare
  • Fabrizio Riguzzi, relational learning, statistical relational learning, inductive logic programming, probabilistic inductive logic programming
  • Omar Rivasplata, Statistical Learning Theory, PAC-Bayes bounds, deep learning, mathematics, probability and statistics
  • Ariel Rokem, Neuroinformatics, regularized regression, open-source software, data science
  • Alessandro Rudi, Kernel methods, statistical machine learning
  • Sivan Sabato, statistical learning theory, active learning, interactive learning
  • Veeranjaneyulu Sadhanala
  • Saverio Salzo, Convex optimization, kernel methods
  • Jérôme Saracco, dimension reduction, nonparametric and semiparametric regression, clustering, nonparametric conditional quantile estimation
  • Hiroaki Sasaki, unsupervised learning, density estimation, kernel methods
  • Kevin Scaman, optimization, distributed optimization
  • Florian Schaefer, optimization, game theory, GANs, Gaussian processes
  • Mikkel Schmidt, Approximate Bayesian inference, Probabilistic modeling, Markov chain Monte Carlo, Variational Inference, Bayesian nonparametrics, Network data analysis, Reinforcement learning, Generative models, Deep learning, Program induction
  • Jacob Schreiber, deep learning, genomics, submodular optimization, tensor factorization, imputation
  • Alex Schwing, deep learning, structured prediction, generative adversarial nets
  • Clayton Scott, statistical learning theory, kernel methods, domain adaptation, weak supervision, kernel density estimation, label noise, domain generalization
  • Dino Sejdinovic, kernel methods
  • Yevgeny Seldin, Bandits, PAC-Bayesian Analysis, Online Learning, Learning Theory
  • Rajat Sen, bandit algorithms, online learning, time series
  • Amina Shabbeer, Optimization, deep learning, bioinformatics, natural language processing, reinforcement learning
  • Uri Shalit, causal inference, machine learning in healthcare
  • Yanyao Shen, optimization, robust learning, large-scale learning
  • Seung Jun Shin, kernel methods, dimension reduction, regularized estimation
  • Ali Shojaie, High-dimensional statistics; Statistical learning; graphical models; network analysis
  • Ilya Shpitser, causal inference, missing data, algorithmic fairness, semi-parametric statistics
  • Si Si, model compression; kernel methods
  • Ricardo Silva, causality, graphical models, Bayesian inference, variational methods
  • Max Simchowitz, control theory, reinforcement learning, bandits
  • Riley Simmons-Edler, Reinforcement Learning, Deep Reinforcement Learning, Exploration
  • Dejan Slepcev, graph based learning, optimal transportation, geometric data analysis, semi-supervised learning, PDE and variational methods
  • Aleksandrs Slivkins, multi-armed bandits, exploration, economics and computation, mechanism design
  • Marek Smieja, deep learning, semi-supervised learning, missing data, anomaly detection, clustering, multi-label classification
  • Arno Solin, Probabilistic modelling, stochastic differential equations, state space models, Gaussian processes, approximative inference
  • Hyebin Song, statistical learning, high dimensional statistics, computational biology, optimization
  • Karthik Sridharan, online learning, learning theory, stochastic optimization
  • Sanvesh Srivastava, Distributed Bayesian inference, Divide-and-Conquer, Gaussian process, latent variable models, Wasserstein barycenter
  • Francesco C. Stingo, biostatistics, Bayesian analysis, model selection, graphical models
  • Karl Stratos, representation learning, information theory, spectral methods, natural language processing
  • Weijie Su, statistics, differential privacy, optimization
  • Mahito Sugiyama, clustering, feature selection, pattern mining, graph mining
  • Yanan Sui, AI Safety, Bandit, Bayesian Optimization, Medical Application
  • Shiliang Sun, Probabilistic Model and Approximate Inference, Optimization, Statistical Learning Theory, Multi-view Learning, Trustworthy Artificial Intelligence, Sequential Data Modeling
  • Ruoyu Sun, optimization, deep learning
  • Taiji Suzuki, kernel methods, deep learning, optimization
  • Zoltan Szabo, information theory, kernel techniques
  • Ronen Talmon, kernel methods, manifold learning, geometric methods, spectral graph theory
  • Kean Ming Tan, graphical models, unsupervised learning, low rank approximation
  • Vincent Tan
  • Cheng Yong Tang, covariance modeling, graphical models, high-dimensional statistical learning, nonparametric methods, statistical inference
  • Wesley Tansey, Bayesian statistics, empirical Bayes, graphical models, computational biology, hypothesis testing
  • Chen Tessler, deep reinforcement learning, reinforcement learning
  • Albert Thomas, machine learning software, python, anomaly detection
  • Jin Tian, causal inference, Bayesian networks, probabilistic graphical models
  • Felipe Tobar, Gaussian processes, Bayesian inference, Bayesian nonparametrics, Time Series
  • Kim-Chuan Toh, convex optimization, sparse Newton methods, semidefinite programming, polynomial optimization
  • Panos Toulis, causal inference, randomization tests, stochastic gradient, stochastic approximations, networks
  • Sofia Triantafillou, causality; probabilistic graphical models; Bayesian networks
  • Ivor Tsang, Transfer Learning, Kernel Methods, Deep Generative Models, Weakly Supervised Learning, Imitation Learning
  • Cesar A. Uribe, optimization, decentralized optimization, optimal transport, distributed optimization, social learning, network science
  • Inigo Urteaga, Bayesian Theory, generative models, approximate inference, Bayesian nonparametrics, multi-armed bandits
  • Ewout van den Berg, convex optimization, quantum computing
  • Laurens van der Maaten, computer vision, privacy
  • Stéfan van der Walt, open source software, image processing, array computing
  • Jan N van Rijn, Machine Learning, AutoML, Automated Design of Algorithms, meta-learning
  • Bart Vandereycken, Riemannian optimization, manifold methods, low-rank approximation, tensor decomposition, numerical linear algebra
  • Gael Varoquaux, dirty data, missing values, brain imaging, healthcare
  • Kush Varshney, fairness, interpretability, safety, applications to social good
  • Aki Vehtari, Bayesian analysis, Bayesian statistics, Gaussian processes
  • Silvia Villa, optimization, convex optimization, first order methods, regularization
  • Max Vladymyrov, non-convex optimization, manifold learning, neural architecture search
  • Chong Wang, approximate inference, deep learning, uncertainty, generative models
  • Yu-Xiang Wang, statistical machine learning, optimization, differential privacy, reinforcement learning
  • Jialei Wang, optimization, high-dimensional statistics, learning theory
  • Chien-Chih Wang, optimization, deep learning, large-scale classification
  • Serena Wang, fairness, constrained optimization, robust optimization, ensemble methods
  • Xiaoqian Wang, explainable AI, fairness in machine learning, generative model
  • Weiran Wang, representation learning, deep learning, speech processing, sequence learning
  • Zi Wang, robot learning, Bayesian optimization, learning and planning, Gaussian process, active learning
  • Mengdi Wang, reinforcement learning, representation learning
  • Zhaoran Wang, reinforcement learning
  • Y. Samuel Wang, Graphical Models, Causal Discovery
  • Shulei Wang, nonparametric, high-dimensional statistics, machine learning, biomedical application
  • Yuhao Wang, causal inference, high-dimensional statistics, semiparametric inference, graphical models
  • Lan Wang, high-dimensional statistics, optimal decision estimation, nonparametric and semiparametric statistics, quantile regression, causal inference
  • Kazuho Watanabe, latent variable models, rate-distortion theory
  • Andrew Gordon Wilson, Bayesian deep learning, Gaussian processes, generalization in deep learning
  • Ole Winther, deep learning, generative models, gaussian processes
  • Guy Wolf, manifold learning, geometric deep learning, data exploration
  • Raymond K. W. Wong, nonparametric regression, functional data analysis, low-rank modeling, tensor estimation
  • Chirayu Wongchokprasitti, recommender systems, user modeling, causal discovery
  • Jiajun Wu, computer vision, deep learning, cognitive science
  • Yao Xie, statistical learning, spatio-temporal data modeling, sequential analysis, change-point detection, dynamic networks.
  • Lingzhou Xue, high-dimensional statistics, graphical models, dimension reduction, optimization
  • Zhirong Yang, dimensionality reduction, cluster analysis, visualization
  • Yuhong Yang, bandit problems, forecasting, model selection and assessment, minimax learning theory
  • Zhuoran Yang, reinforcement learning, statistical machine learning, optimization
  • Felix X.-F. Ye, Model Reduction, Dynamical system, Data-driven modeling
  • Han-Jia Ye, representation learning, meta-learning
  • Junming Yin, statistical machine learning, probabilistic modeling and inference, nonparametric statistics
  • Yiming Ying, statistical learning theory, optimization in machine learning, kernel methods, differential privacy
  • Rose Yu, deep learning, time series, tensor methods
  • Yaoliang Yu, generative models, optimization, robustness
  • Yi Yu, statistical network analysis, change point detection, high-dimensional statistics
  • Guo Yu, sparsity; convex optimization; Gaussian graphical models; multiple testing
  • Xiaotong Yuan, sparse learning, optimization, meta-learning, non-convex optimization, learning theory, distributed optimization
  • Luca Zanetti, Graph clustering, Markov chains, Spectral methods
  • Assaf Zeevi
  • Jingzhao Zhang, optimization
  • Kun Zhang, causality, transfer learning, kernel methods, unsupervised deep learning
  • Chiyuan Zhang, deep learning
  • Xin Zhang, Dimension Reduction, Multivariate Analysis and Regression, Tensor Data Analysis, Discriminant Analysis, Neuroimaging
  • Michael Minyi Zhang, Bayesian non-parametrics, MCMC, Gaussian processes
  • Xinhua Zhang, kernel methods, transfer learning, adversarial learning, representation learning
  • Aonan Zhang, bayesian methods, bayesian nonparametric, deep unsupervised learning, uncertainty estimation
  • Lijun Zhang, Online learning, Bandits, stochastic optimization, Randomized algorithms
  • Tuo Zhao, deep learning, nonconvex optimization, high dimensional statistics, natural language processing, open-source software library
  • Han Zhao, domain adaptation, domain generalization, transfer learning, probabilistic circuits, algorithmic fairness, multitask learning, meta-learning
  • Peng Zhao, online learning
  • Zhigen Zhao, high dimensional statistical inference, empirical Bayesian/Bayesian statistics, Sufficient dimension reduction, multiple comparison
  • Yunpeng Zhao, Network analysis; Community detection
  • Qinqing Zheng, optimization, differential privacy
  • Ping-Shou Zhong, kernel methods, statistical inference, high dimensional data, functional data, nonparametric methods, and genomics and genetics
  • Zhengyuan Zhou, contextual bandits, online learning, game theory
  • Wenda Zhou, statistical machine learning, deep learning, high-dimensional statistics
  • Ding-Xuan Zhou, deep learning, approximation by deep neural networks, kernel methods, wavelets
  • Shuchang Zhou, optimization, neural network, quantization
  • Ruoqing Zhu, random forests, personalized medicine, survival analysis
  • Liping Zhu, massive data analysis, nonlinear dependence, dimension reduction
  • Marinka Zitnik, representation learning, embeddings, graph neural networks, knowledge graphs, latent variable models, biomedical data, computational biology, network science

JMLR Advisory Board

  • Shun-Ichi Amari, RIKEN Brain Science Institute, Japan
  • Andrew Barto, University of Massachusetts at Amherst, USA
  • Thomas Dietterich, Oregon State University, USA
  • Jerome Friedman, Stanford University, USA
  • Stuart Geman, Brown University, USA
  • Geoffrey Hinton, University of Toronto, Canada
  • Michael Jordan, University of California at Berkeley, USA
  • Leslie Pack Kaelbling, Massachusetts Institute of Technology, USA
  • Michael Kearns, University of Pennsylvania, USA
  • Steven Minton, InferLink, USA
  • Tom Mitchell, Carnegie Mellon University, USA
  • Stephen Muggleton, Imperial College London, UK
  • Kevin Murphy, Google, USA
  • Tomaso Poggio, Massachusetts Institute of Technology, USA
  • Ross Quinlan, Rulequest Research Pty Ltd, Australia
  • Stuart Russell, University of California at Berkeley, USA
  • Lawrence Saul, University of California at San Diego, USA
  • Bernhard Schölkopf, Max Planck Institute for Intelligent Systems, Germany
  • Terrence Sejnowski, Salk Institute for Biological Studies, USA
  • Richard Sutton, University of Alberta, Canada
  • Leslie Valiant, Harvard University, USA