MLSP 2020

IEEE International Workshop on
MACHINE LEARNING FOR SIGNAL PROCESSING

September 21–24, 2020, Aalto University, Espoo, Finland (virtual conference)

Keynote Lectures

Michael Unser

École Polytechnique Fédérale de Lausanne (EPFL)

Splines and Machine Learning: From classical RKHS methods to deep neural networks

Monday, September 21, 2020, 08:15–09:15

Abstract: Supervised learning is a fundamentally ill-posed problem. In practice, this indeterminacy is dealt with by imposing constraints on the solution; these are either implicit, as in neural networks, or explicit, via a regularization functional. In this talk, I present a unifying perspective that revolves around a new representer theorem characterizing the solutions of a broad class of functional optimization problems. I then use this theorem to derive the most prominent classical algorithms (e.g., kernel-based techniques and smoothing splines) as well as their “sparse” counterparts. This leads to the identification of sparse adaptive splines, which have some remarkable properties.

I then show how the latter can be integrated into conventional neural architectures to yield high-dimensional adaptive linear splines. Finally, I recover deep neural networks with ReLU activations as a particular case.

  1. M. Unser, “A unifying representer theorem for inverse problems and machine learning,” Foundations of Computational Mathematics, in press, 2020.
  2. M. Unser, “A representer theorem for deep neural networks,” Journal of Machine Learning Research, vol. 20, no. 110, pp. 1–30, 2019.
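
As background for this abstract, here is the classical RKHS representer theorem that the talk generalizes (a standard textbook result, stated in our own notation rather than the speaker’s): the regularized empirical-risk minimization

\[
\min_{f \in \mathcal{H}} \sum_{i=1}^{N} E\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_{\mathcal{H}}^{2}, \qquad \lambda > 0,
\]

over a reproducing-kernel Hilbert space \(\mathcal{H}\) with kernel \(k\) admits a solution that is a finite kernel expansion over the data points,

\[
f^{*}(x) = \sum_{i=1}^{N} a_i \, k(x, x_i), \qquad a_i \in \mathbb{R}.
\]

Roughly speaking, the “sparse” counterparts mentioned in the abstract arise when the Hilbertian norm is replaced by a total-variation-type regularizer, whose extremal solutions are adaptive splines with few knots; see references 1 and 2 above for the precise statements.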

Biography: Michael Unser is professor and director of EPFL's Biomedical Imaging Group, Lausanne, Switzerland. His primary area of investigation is biomedical image processing. He is internationally recognized for his research contributions to sampling theory, wavelets, the use of splines for image processing, stochastic processes, and computational bioimaging. He has published over 350 journal papers on these topics. He is the author, with P. Tafti, of the book “An Introduction to Sparse Stochastic Processes” (Cambridge University Press, 2014).

Dr. Unser has served on the editorial boards of most of the primary journals in his field, including the IEEE Transactions on Medical Imaging (Associate Editor-in-Chief, 2003–2005), the IEEE Transactions on Image Processing, the Proceedings of the IEEE, and the SIAM Journal on Imaging Sciences. He is the founding chair of the technical committee on Bio Imaging and Signal Processing (BISP) of the IEEE Signal Processing Society. From 1985 to 1997, he was with the Biomedical Engineering and Instrumentation Program, National Institutes of Health, Bethesda, USA, conducting research on bioimaging. Prof. Unser is a Fellow of the IEEE (1999), a EURASIP Fellow (2009), and a member of the Swiss Academy of Engineering Sciences. He is the recipient of several international prizes, including five IEEE-SPS Best Paper Awards and two IEEE Technical Achievement Awards (SPS 2008 and EMBS 2010).

Ole Winther

University of Copenhagen
Technical University of Denmark (DTU)

Latent variable models from independent components to VAEs and flows

Tuesday, September 22, 2020, 08:00–09:00

Abstract: The keynote will cover latent variable models, from independent component analysis to variational autoencoders and flows. Properties, relationships, and use cases are discussed, with generative modeling of images and analysis of single-cell RNA data as running motivational applications.
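
For orientation, the two main model families named in the abstract can be summarized in one line each (standard definitions, not taken from the talk). A variational autoencoder maximizes the evidence lower bound (ELBO)

\[
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr] - \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr),
\]

where \(q_\phi(z \mid x)\) is the encoder and \(p_\theta(x \mid z)\) the decoder, while a normalizing flow uses an invertible map \(x = g(z)\) and the change-of-variables formula

\[
\log p(x) = \log p\bigl(g^{-1}(x)\bigr) + \log \bigl|\det J_{g^{-1}}(x)\bigr|,
\]

so its likelihood is exact rather than a lower bound. Independent component analysis can be viewed as the historical special case of an invertible linear map with independent non-Gaussian latent components.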

Biography: Ole Winther is professor of genomic bioinformatics at the University of Copenhagen/Rigshospitalet (KU) and of data science at the Technical University of Denmark (DTU). Professor Winther's research interests are in machine learning, with a focus on methodology for deep (generative) learning and on applications to biology, natural language processing, and materials science. He is also a co-founder of two startups, raffle.ai and findzebra.com.

Mihaela van der Schaar

University of Cambridge
The Alan Turing Institute
University of California, Los Angeles

Machine learning: Changing the future of healthcare

Wednesday, September 23, 2020, 08:00–09:00

Biography: Mihaela van der Schaar is John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Turing Faculty Fellow at The Alan Turing Institute in London, where she leads the effort on data science and machine learning for personalized medicine. She is also a Chancellor's Professor at the University of California, Los Angeles (UCLA). She was elected an IEEE Fellow in 2009.

Dr. van der Schaar has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), an NSF CAREER Award (2004), three IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award, and several best paper awards, including the IEEE Darlington Award. She holds 35 granted US patents. In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the UK-based female researcher with the most publications in the field of AI, and she was also elected a 2019 "Star in Computer Networking and Communications". Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, and machine learning. Her current research focuses on machine learning, AI, and operations research for healthcare and medicine.

Razvan Pascanu

DeepMind

Improving learning efficiency for deep neural networks

Thursday, September 24, 2020, 08:00–09:00

Abstract: In this talk I will examine learning efficiency in deep learning and argue that it is one of the crucial open problems in the field. In particular, I will focus on a series of works that approach the problem from different angles, from compression to the role of inductive biases and of the learning dynamics imposed by gradient-based learning. Together, these works show the extent of the problem, and I hope they will encourage more research on understanding the causes of inefficient learning and on finding solutions to it.

Biography: Razvan Pascanu is a Research Scientist at DeepMind, London. He obtained his Ph.D. from the University of Montreal under the supervision of Yoshua Bengio, working on different aspects of deep learning, particularly optimization, memory in recurrent models, and the efficiency of neural networks. While in Montreal, he was also a core developer of Theano. Razvan is one of the organizers of the Eastern European Machine Learning Summer School and has co-organized two NeurIPS workshops on continual learning and an ICLR workshop on graph nets. He has a wide range of interests and works on topics around deep learning and deep reinforcement learning, including optimization, RNNs, meta-learning, continual learning, and graph nets.