Emergent Language in a Multi-Modal, Multi-Step Referential Game

Inspired by previous work on emergent language in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and where their information exchange is bidirectional and of arbitrary duration. The multi-modal, multi-step setting allows agents to develop an internal language significantly closer to natural language in two respects: the agents share a single set of messages, and the length of the conversation varies with the difficulty of the task. We examine these properties empirically using a dataset of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, in which gradual information exchange informs better predictions and higher communication bandwidth improves generalization.
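To make the game structure concrete, the following is a minimal sketch of the multi-step, bidirectional protocol described above. All names and policies here are hypothetical stand-ins (not the paper's model): the sender and receiver are toy rule-based agents, messages come from a single shared symbol set, and the receiver decides at each step whether to reply or to commit to a prediction, so conversation length varies per episode.

```python
import random

random.seed(0)

# Toy sketch of the multi-modal, multi-step referential game.
# The sender observes one modality of the target (e.g. an image embedding,
# abstracted here as an integer); the receiver observes the other modality
# (e.g. textual descriptions of all candidates) and must pick the target.

VOCAB_SIZE = 8  # both directions draw messages from this single shared set


def sender_policy(target_obs, msg_in):
    """Hypothetical sender: encodes its view of the target into a message."""
    return (target_obs + msg_in) % VOCAB_SIZE


def receiver_policy(candidates, msg_in, step, max_steps):
    """Hypothetical receiver: returns (reply_message, prediction).

    prediction stays None while the receiver chooses to keep talking,
    so the exchange is bidirectional and of variable duration."""
    if step < max_steps - 1 and random.random() < 0.5:
        return msg_in % VOCAB_SIZE, None   # continue the conversation
    return None, msg_in % len(candidates)  # toy decision rule: commit


def play_episode(target_obs, candidates, max_steps=5):
    """Run one episode; returns (prediction, conversation_length)."""
    msg = 0
    for step in range(max_steps):
        msg = sender_policy(target_obs, msg)
        msg, prediction = receiver_policy(candidates, msg, step, max_steps)
        if prediction is not None:
            return prediction, step + 1
    return 0, max_steps  # unreachable given receiver_policy; kept for safety


prediction, n_steps = play_episode(target_obs=3, candidates=["cat", "dog", "elk"])
print(prediction, n_steps)
```

In the paper's setting the hand-coded rules above would be replaced by learned policies trained end-to-end on the identification reward; the sketch only illustrates the turn-taking structure and the variable-length termination decision.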