Systems Neuroscience for AI: Conclusions

This post is part of a series “Systems Neuroscience for AI: An Introductory Guide to the Literature”.

If you have been following along by reading the reviews, we have now come a long way. In spite of the ground covered here, we have left untouched entire brain components, such as the amygdala, the claustrum, and the hypothalamus, to name only three. Nevertheless, you should now have a broad picture of what is happening where in the brain, at least enough to serve as an informed springboard from which you can dive into the literature that you think will be most useful for inspiring progress in AI.

However, systems neuroscience is unlikely to be the only conceptual toolkit needed to build useful general intelligence. For that, other research agendas have much to contribute. To name just a few promising areas: multiagent learning and complex environments (a compelling manifesto for which can be found in Leibo et al. (2019)); computational linguistics and symbolic computation in neural systems; ‘core’ ML topics such as improvements to optimisation algorithms or probabilistic deep learning; hardware; automated architecture search; and, orthogonally (but no less importantly), how we can build a general intelligence that actually does what we want and does it safely. These agendas should be seen as complementary to the systems neuroscience-centred agenda, rather than as competing with it.

All this said, it is only the systems neuroscience-centred agenda that can organise under one roof the many apparently disparate functions we have highlighted in this guide, such as imagination-based planning, hierarchical planning, language grounding, abstract concepts, intuitive physics, intuitive psychology, causal reasoning, continual learning, predictive world models, and one-shot learning. It is at present the only agenda that can provide a comprehensive, unified conceptual framework for our thinking about general intelligence.

Written on April 29, 2019