We are thrilled to welcome the following three keynote speakers.


Raia Hadsell

Google DeepMind, London, UK

Challenges for Learning in Complex Environments


Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her early research developed the approach of learning embeddings using Siamese networks, which has been used extensively for representation learning. After completing a PhD with Yann LeCun, which featured a self-supervised deep learning vision system for a mobile robot, her research continued at Carnegie Mellon’s Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to study artificial general intelligence. Her current research focuses on the challenge of continual learning for AI agents and robots. While deep RL algorithms are capable of attaining superhuman performance on single tasks, they often cannot transfer that performance to additional tasks, especially if experienced sequentially. She has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting for agents and robots.


Robert Babuska

TU Delft, The Netherlands

Genetic Programming Methods for Reinforcement Learning


Reinforcement Learning (RL) algorithms can be used to optimally solve dynamic decision-making and control problems. With continuous-valued state and input variables, RL algorithms must rely on function approximators to represent the value function and policy mappings. Commonly used numerical approximators, such as neural networks or basis-function expansions, have two main drawbacks: they are black-box models that offer no insight into the mappings learned, and they require significant trial-and-error tuning of their meta-parameters. In addition, results obtained with deep neural networks suffer from a lack of reproducibility. In this talk, we discuss a family of new approaches to constructing smooth approximators for RL by means of genetic programming, and more specifically symbolic regression. We show how to construct process models and value functions represented by parsimonious analytic expressions using state-of-the-art algorithms such as Single Node Genetic Programming and Multi-Gene Genetic Programming. We will include examples of nonlinear control problems that can be successfully solved by reinforcement learning with symbolic regression and illustrate some of the challenges this exciting field of research is currently facing.
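The core idea of the abstract, searching for a compact analytic expression that fits sampled function values instead of training a black-box approximator, can be illustrated with a toy sketch. The following is not the Single Node or Multi-Gene Genetic Programming algorithms from the talk, just a minimal (1+1)-style symbolic regression loop over random expression trees; all names and the primitive set are illustrative assumptions.

```python
import random

# Toy symbolic regression: evolve an analytic expression tree that
# approximates a target function V(x). Trees are nested tuples
# (op, left, right); terminals are the variable 'x' or a constant.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Evaluate the expression tree at point x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def mse(tree, target, xs):
    """Mean squared error of the tree against the target on samples xs."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs) / len(xs)

def symbolic_regression(target, xs, iters=3000):
    """(1+1) loop: keep the mutant if it fits the samples at least as well."""
    best = random_tree()
    best_err = mse(best, target, xs)
    for _ in range(iters):
        cand = mutate(best)
        err = mse(cand, target, xs)
        if err <= best_err:
            best, best_err = cand, err
    return best, best_err

random.seed(0)
xs = [i / 10 for i in range(-10, 11)]
tree, err = symbolic_regression(lambda x: x * x + 2 * x + 1, xs)
print(tree, err)
```

The result is a parsimonious, human-readable expression rather than a weight matrix, which is the interpretability advantage the abstract points to; the real SNGP/MGGP algorithms use far more sophisticated tree representations and search operators.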


Robert Babuska received the M.Sc. (Hons.) degree in control engineering from the Czech Technical University in Prague in 1990 and the Ph.D. (cum laude) degree from Delft University of Technology, the Netherlands, in 1997. He has held faculty appointments at the Czech Technical University in Prague and at the Electrical Engineering Faculty of TU Delft. Currently, he is a full professor of Intelligent Control and Robotics at TU Delft, Faculty 3mE, Department of Cognitive Robotics. He has made seminal contributions to the field of nonlinear control and identification with the use of fuzzy modeling techniques. His current research interests include reinforcement learning, adaptive and learning robot control, nonlinear system identification, and state estimation. He has been involved in applications of these techniques in fields ranging from process control to robotics and aerospace.


Ingo Rechenberg

TU Berlin, Germany

Evolution, Robotics, and the Somersaulting Spider


Biological evolution can be remarkably fast. Five thousand years ago the Sahara was still green. Since the formation of the desert, an ingenious locomotion technique has evolved in an enclave of the Sahara: like a cyclist, the spider Cebrennus rechenbergi moves over the obstacle-free surface of the isolated Moroccan desert Erg Chebbi.
Turboevolution, as known from Darwin's finches on the Galapagos Islands, raises the question: how fast can evolution be? A short introduction to the theory of the evolution strategy gives the answer. Compared with simple random search, the speed of progress increases enormously with the accuracy with which the rules of biological evolution are imitated.
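The contrast between blind random search and a search that imitates evolution's step-size adaptation can be sketched with the classic (1+1) evolution strategy and Rechenberg's 1/5 success rule on a simple test function. This is a minimal illustration under assumed parameter choices (the adaptation factors and check interval below are conventional, not taken from the talk):

```python
import random

def sphere(x):
    """Sphere test function f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=5, iters=2000, seed=0):
    """(1+1)-ES with the 1/5 success rule: one parent, one Gaussian
    mutant per generation, keep the better of the two. The mutation
    step size sigma widens when more than 1/5 of recent mutations
    succeed and shrinks otherwise."""
    rng = random.Random(seed)
    parent = [1.0] * dim
    f_parent = sphere(parent)
    sigma = 0.5
    successes = 0
    for t in range(1, iters + 1):
        child = [xi + rng.gauss(0.0, sigma) for xi in parent]
        f_child = sphere(child)
        if f_child < f_parent:           # selection: keep the better point
            parent, f_parent = child, f_child
            successes += 1
        if t % 20 == 0:                  # 1/5 rule, checked every 20 steps
            rate = successes / 20
            sigma *= 1.5 if rate > 0.2 else 0.82
            successes = 0
    return f_parent

print(one_plus_one_es())
```

A fixed-step random search has no such feedback from success rate to step size; it is exactly this adaptation, a rule imitated from biological mutation dynamics, that gives the evolution strategy its speed advantage.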
We are bionicists: we have transferred the ingenious leg movements of the cyclist spider to a robot. The result is a machine, perhaps a future Mars rover, that can run and roll in many different ways. Videos from the Moroccan Erg Chebbi desert demonstrate the extraordinary performance of the bionic rover.


Ingo Rechenberg is a German researcher and professor working in the field of bionics. Rechenberg is a pioneer of the fields of evolutionary computation and artificial evolution. In the 1960s and 1970s he invented a highly influential set of optimization methods known as evolution strategies (from the German Evolutionsstrategie). His group successfully applied the new algorithms to challenging problems such as aerodynamic wing design. These were the first serious technical applications of artificial evolution, an important subset of the still-growing field of bionics.

Rechenberg was educated at the Technical University of Berlin and at the University of Cambridge. Since 1972 he has been a full professor at the Technical University of Berlin, where he is heading the Department of Bionics and Evolution Techniques.

His awards include the Lifetime Achievement Award of the Evolutionary Programming Society (US, 1995) and the Evolutionary Computation Pioneer Award of the IEEE Neural Networks Society (US, 2002).

The Moroccan flic-flac spider, Cebrennus rechenbergi, was named in his honor, as he first collected specimens in the Moroccan desert.

Wikipedia contributors. "Ingo Rechenberg." Wikipedia, The Free Encyclopedia, 22 Mar. 2016. Web. 12 Apr. 2019.