In previous lessons you learned some basics of neuroanatomy and neurophysiology, which gave you a sense of the electrical and chemical signalling that the brain uses to process information. We then took a tour of the "sensory neurosciences", looking at smell, taste, vision and hearing.
It is probably fair to say that the whole point of having a brain is to enable us to use the sensory information we receive to make "good" decisions about what actions to take. The second half of the course will therefore focus more on how the brain controls and chooses actions. But before we delve into that, we thought it might be a good idea to zoom out a bit, and consider generally how networks of neurons process information.
Much of our understanding of that topic has come from modelling studies, in which aspects of neural information processing are reproduced in computer models. Some of these modelling efforts have inspired the artificial intelligence research that has recently boomed into the enormously successful so-called "deep neural networks", which allow computers to perform tasks they were until recently notoriously bad at, such as recognizing or manipulating photographs, or translating human languages.
After this lesson, you will hopefully appreciate what an enormously important role "synaptic weight matrices" play in shaping what a neural network is capable of, and why we can be pretty confident that the plasticity of synaptic connections between neurons lies at the very heart of turning brains into tremendously powerful learning engines.
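To make the role of the weight matrix concrete, here is a minimal sketch (not taken from the lesson; all values and names are illustrative assumptions) of a single network layer in which each entry of the matrix `W` plays the part of one synaptic strength. Changing a single entry, the software analogue of synaptic plasticity, changes what the network computes:

```python
import numpy as np

def sigmoid(x):
    """Smooth activation function, loosely analogous to a neuron's firing rate."""
    return 1.0 / (1.0 + np.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each output neuron sums its weighted inputs, then 'fires'."""
    return sigmoid(weights @ inputs + biases)

# Three "sensory" inputs feeding two output neurons. The weight matrix W
# holds one synaptic strength per (output neuron, input neuron) pair.
W = np.array([[ 2.0, -1.0,  0.5],
              [-0.5,  1.5,  1.0]])
b = np.array([0.0, -1.0])

x = np.array([1.0, 0.0, 1.0])   # an example input pattern
y = layer(x, W, b)

# Strengthening a single "synapse" (one matrix entry) alters the response
# to the very same input -- this is the knob that plasticity turns.
W_plastic = W.copy()
W_plastic[0, 2] = 3.0
y_after = layer(x, W_plastic, b)

print("before:", y)
print("after: ", y_after)
```

The point of the sketch is simply that the network's input-output behaviour lives entirely in `W` (and `b`): learning does not rewire the code, it only adjusts numbers in the matrix.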
Before the lecture we strongly recommend that you watch the video below about (artificial) neural networks, which we copied from the very, very awesome YouTube channel "3Blue1Brown". (If you are interested in scientific topics with a somewhat mathematical bent but always wished that someone would replace most of the tedious formulae with intuitive and visually pleasing animations, then you should consider subscribing to that great channel.)