• 22 March 2017: The list of accepted papers is now online.
  • 4 March 2017: We are happy to announce this year's invited speakers: Prof. Thore Graepel and Prof. Ana Bazzan.
  • 14 February 2017: Submission is now closed; we are happy to have received 22 papers this year!
  • 21 November 2016: ALA 2017 site launched

ALA 2017 - Workshop at AAMAS 2017

Adaptive Learning Agents (ALA) encompasses diverse fields such as Computer Science, Software Engineering, Biology, as well as Cognitive and Social Sciences. The ALA workshop will focus on agents and multiagent systems which employ learning or adaptation.

This workshop is a continuation of the long-running AAMAS series of workshops on adaptive agents, now in its fifteenth year. Previous editions of this workshop may be found at the following URLs:

The goal of this workshop is to increase awareness and interest in adaptive agent research, encourage collaboration and give a representative overview of current research in the area of adaptive and learning agents and multiagent systems. It aims at bringing together not only scientists from different areas of computer science (e.g., agent architectures, reinforcement learning, and evolutionary algorithms) but also from different fields studying similar concepts (e.g., game theory, bio-inspired control, mechanism design).

The workshop will serve as an inclusive forum for the discussion of ongoing or completed work on both theoretical and practical issues of adaptive and learning agents and multiagent systems.

This workshop will focus on all aspects of adaptive and learning agents and multiagent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. The topics of interest include but are not limited to:

  • Novel combinations of reinforcement and supervised learning approaches
  • Integrated learning approaches that work with other agent reasoning modules like negotiation, trust models, coordination, etc.
  • Supervised multiagent learning
  • Reinforcement learning (single and multiagent)
  • Planning (single and multiagent)
  • Reasoning (single and multiagent)
  • Distributed learning
  • Adaptation and learning in dynamic environments
  • Evolution of agents in complex environments
  • Co-evolution of agents in a multiagent setting
  • Cooperative exploration and learning to cooperate and collaborate
  • Learning trust and reputation
  • Communication restrictions and their impact on multiagent coordination
  • Design of reward structure and fitness measures for coordination
  • Scaling learning techniques to large systems of learning and adaptive agents
  • Emergent behaviour in adaptive multiagent systems
  • Game theoretical analysis of adaptive multiagent systems
  • Neuro-control in multiagent systems
  • Bio-inspired multiagent systems
  • Applications of adaptive and learning agents and multiagent systems to real-world complex systems

Papers accepted at the workshop will be eligible for extension and inclusion in a special issue of a journal.

Important Dates

  • Submission Deadline: February 14, 2017
  • Notification of acceptance: March 17, 2017
  • Camera-ready copies: March 22, 2017
  • Workshop: May 8–9, 2017

Accepted Papers

Full Talks

  • Jayesh K. Gupta, Maxim Egorov and Mykel Kochenderfer:
    Cooperative Multi-Agent Control Using Deep Reinforcement Learning
  • Pieter Libin, Timothy Verstraeten, Kristof Theys, Diederik Roijers, Peter Vrancx and Ann Nowé:
    Efficient Evaluation of Influenza Mitigation Strategies using Preventive Bandits
  • Patrick Mannion, Jim Duggan and Enda Howley:
    Analysing the Effects of Reward Shaping in Multi-Objective Stochastic Games
  • Rakesh R Menon, Manu Srinath Halvagal and Balaraman Ravindran:
    Shared Learning in Ensemble Deep Q-Networks
  • Priyam Parashar, Bradley Sheneman and Ashok Goel:
    Adaptive Agents in Minecraft: A Hybrid Paradigm for Combining Domain Knowledge with Reinforcement Learning
  • Bei Peng, James MacGlashan, Robert Loftin, Michael Littman, David Roberts and Matthew Taylor:
    Curriculum Design for Machine Learners in Sequential Decision Tasks
  • Roxana Radulescu, Peter Vrancx and Ann Nowé:
    Analysing Congestion Problems in Multi-agent Reinforcement Learning
  • Ariel Rosenfeld, Matthew E. Taylor and Sarit Kraus:
    Speeding up Tabular Reinforcement Learning Using State-Action Similarities
  • Osman Yucel and Sandip Sen:
    Language Independent Recommender Agent

Short Talks

  • Sultan Alahmari, Tommy Yuan and Daniel Kudenko:
    Reinforcement Learning for Abstract Argumentation: A Q-learning approach
  • Leonardo Rosa Amado and Felipe Meneguzzi:
    Reinforcement learning applied to RTS games
  • Chad Crawford and Sandip Sen:
    Learning Topic Flows in Social Conversations
  • Jose Guillermo Guarnizo and Fany Del Pilar Gonzalez:
    Object Recognition Using Artificial Immune Systems in Robotic Mobile Application
  • Seyed Sajad Mousavi, Michael Schukat, Patrick Mannion and Enda Howley:
    Applying Q(λ)-learning in Deep Reinforcement Learning to Play Atari Games
  • Gabriel De O. Ramos, Liza Lunardi Lemos and Ana L. C. Bazzan:
    Developing a Python Reinforcement Learning Library for Traffic Simulation
  • Junzhe Zhang and Elias Bareinboim:
    Human-Assisted Agent for Sequential Decision Making

Journal Presentation Track

  • Patrick Mannion, Jim Duggan and Enda Howley:
    Potential-Based Reward Shaping Preserves Pareto Optimal Policies
  • Fernando P. Santos, Jorge M. Pacheco and Francisco C. Santos:
    Indirect Reciprocity in Finite Populations of Explorative Agents


To be confirmed.

Invited Talks

Prof. Ana Bazzan

Affiliation: Universidade Federal do Rio Grande do Sul


Bio: Ana L. C. Bazzan holds a PhD from the University of Karlsruhe in Germany and is a full professor at the Informatics Institute of the Federal University of Rio Grande do Sul (UFRGS) in Brazil. She served as general co-chair of AAMAS 2014, and is currently one of the PC chairs of PRIMA 2017 and an area chair of IJCAI 2017. She has served several times on the program committees of AAMAS and other conferences (as PC member or senior PC member) and as an associate editor of the Journal of Autonomous Agents and Multi-Agent Systems, Advances in Complex Systems, and Multiagent and Grid Systems. She is a member of the IFAAMAS board (2004-2008 and 2014-). She co-organized the Workshop on Synergies between Multiagent Systems, Machine Learning, and Complex Systems (TRI 2015), held together with IJCAI 2015, and the Agents in Traffic and Transportation (ATT) workshop series. Her research interests include multiagent systems, agent-based modeling and simulation, machine learning, multiagent reinforcement learning, evolutionary game theory, swarm intelligence, and complex systems. Her work is mainly applied to domains related to traffic and transportation.

Talk Title: Beyond Reinforcement Learning in Multiagent Systems

Talk Abstract: Learning is an important component of an agent's decision-making process. Despite the diversity of approaches in machine learning, within the multiagent community learning is mostly associated with reinforcement learning. Against this background, this talk has two aims: to revisit the original motivations for multiagent learning, and to describe some of the work at the frontier between multiagent systems and machine learning. The latter is intended to motivate people to address the issues involved in applying techniques from multiagent systems in machine learning and vice versa.

Prof. Thore Graepel

Affiliations: DeepMind and University College London


Bio: Thore Graepel is a research group lead at DeepMind and holds a part-time position as Chair of Machine Learning at University College London. He studied physics at the University of Hamburg, Imperial College London, and Technical University of Berlin, where he also obtained his PhD in machine learning in 2001. He spent time as a postdoctoral researcher at ETH Zurich and Royal Holloway College, University of London, before joining Microsoft Research in Cambridge in 2003, where he co-founded the Online Services and Advertising group. Major applications of Thore's work include Xbox Live's TrueSkill system for ranking and matchmaking, the AdPredictor framework for click-through rate prediction in Bing, and the Matchbox recommender system which inspired the recommendation engine of Xbox Live Marketplace. More recently, Thore's work on the predictability of private attributes from digital records of human behaviour has been the subject of intense discussion among privacy experts and the general public. Thore's research interests are in artificial intelligence and machine learning and include probabilistic graphical models, reinforcement learning, game theory, and multi-agent systems. He has published over one hundred peer-reviewed papers, is a named co-inventor on dozens of patents, serves on the editorial boards of JMLR and MLJ, and is a founding editor of the book series Machine Learning & Pattern Recognition at Chapman & Hall/CRC. At DeepMind, Thore has returned to his original passion of understanding and creating intelligence, and recently contributed to creating AlphaGo, the first computer program to defeat a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.

Talk Title: TBC

Talk Abstract: TBC

Program Committee

  • Nolan Bard, University of Alberta, CA
  • Jen Jen Chung, Oregon State University, USA
  • William Curran, Oregon State University, USA
  • Sam Devlin, University of York, UK
  • Kyriakos Efthymiadis, Vrije Universiteit Brussel, BE
  • Matthew Gombolay, Massachusetts Institute of Technology, USA
  • Marek Grzes, University of Kent, UK
  • Brent Harrison, Georgia Institute of Technology, USA
  • Mark Ho, Brown University, USA
  • Matt Knudson, NASA Ames Research Center, USA
  • Robert Loftin, North Carolina State University, USA
  • Patrick MacAlpine, University of Texas at Austin, USA
  • Kleanthis Malialis, Telegraph Media Group, UK
  • Kory Mathewson, University of Alberta, CA
  • Bei Peng, Washington State University, USA
  • Roxana Radulescu, Vrije Universiteit Brussel, BE
  • Carrie Rebhuhn, Oregon State University, USA
  • Jivko Sinapov, University of Texas at Austin, USA
  • Timothy Verstraeten, Vrije Universiteit Brussel, BE


This year's workshop is organized by:
Senior Steering Committee Members:
  • Enda Howley (National University of Ireland Galway, IE)
  • Daniel Kudenko (University of York, UK)
  • Ann Nowé (Vrije Universiteit Brussel, BE)
  • Sandip Sen (University of Tulsa, USA)
  • Peter Stone (University of Texas at Austin, USA)
  • Matthew Taylor (Washington State University, USA)
  • Kagan Tumer (Oregon State University, USA)
  • Karl Tuyls (University of Liverpool, UK)


If you have any questions about the ALA workshop, please contact the organizers at:
ala.workshop.2017 AT

For more general news, discussion, collaboration, and networking opportunities with others interested in Adaptive Learning Agents, please join our LinkedIn group: