Brain-Computer Interfaces


Background

The keyboard and mouse provide us with reliable but unnatural forms of input, being primitive transducers of muscular movement. People who lack muscle control cannot use them. Wouldn't it be nice to one day replace the mouse and keyboard with systems capable of directly interpreting the intentions of computer users from their brain activity?

This is the goal of the field of Brain-Computer Interfaces (BCI). Unfortunately, progress towards it is hampered by a number of problems: brain signals are typically extremely noisy, they vary in location and temporal dynamics from subject to subject, and they depend on the subject's age, tiredness, attention, and food and drug intake, among other factors.

Theoretical part (approx 2 hrs)

In this lecture I will briefly review the different approaches to BCI, with particular attention to non-invasive EEG-based BCIs, highlighting the difficulties and limitations that leave even the best BCIs slow and prone to misinterpreting user intentions. I will then illustrate a number of cases from our own research in the Essex BCI and Neural Engineering laboratory where machine learning and evolutionary algorithms have helped develop systems that are competitive with human-designed ones, thereby accelerating the development of practical BCI technology. I will also mention hybrid and collaborative forms of BCI for able-bodied users, where brain signals are combined with behavioural responses to improve the performance of groups of people.
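
To make the evolutionary-algorithm idea concrete, here is a minimal sketch, in Python, of how an evolutionary loop can tune the channel weights of a simple detector. It is illustrative only, not the pipeline used in our laboratory: the synthetic features, the median-threshold decision rule, and the population settings are all assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for per-trial EEG features: 300 trials x 8 channels.
    # The first three channels are made weakly informative about the class.
    X = rng.normal(size=(300, 8))
    y = rng.integers(0, 2, size=300)          # 1 = target, 0 = non-target
    X[y == 1, :3] += 0.8

    def fitness(w):
        # Accuracy of a simple weighted-sum detector thresholded at its median.
        scores = X @ w
        preds = (scores > np.median(scores)).astype(int)
        return (preds == y).mean()

    # Bare-bones evolutionary loop: keep the fittest weight vectors, mutate them.
    population = rng.normal(size=(20, 8))
    for generation in range(50):
        fits = np.array([fitness(w) for w in population])
        parents = population[np.argsort(fits)[-5:]]              # 5 fittest
        population = (np.repeat(parents, 4, axis=0)
                      + rng.normal(0.0, 0.1, size=(20, 8)))      # mutation

    best = max(population, key=fitness)
    print(f"Evolved detector accuracy: {fitness(best):.2f}")

The same loop structure applies when the "genome" encodes richer design choices, such as which channels, time windows, or filters a detector should use; the fitness function is simply the performance of the resulting detector.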

Practical element (approx 2 hrs)

In this part, you will see a real EEG system in action, including the preparation of the subject and the real-time visualisation of his or her signals. Then, through a web browser, you will connect to our specialised server, which will allow you to familiarise yourself with visualising and analysing real brain signals (recorded via EEG), represented individually, through averages, grand averages, and scalp maps. You will see how muscular activity, such as eye blinks, can create huge artefacts in brain signals, and how one can deal with these. You will then attempt to manually classify signals acquired in two conditions, one in which a target is presented on the screen and one in which a non-target is presented, and discover how hard that is. Finally, you will see machine learning algorithms in action that perform the same classification automatically.
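
As a rough preview of the kind of analysis involved, the sketch below works through the same three steps on synthetic data: rejecting epochs contaminated by blink-like artefacts, averaging epochs to obtain per-condition ERPs, and classifying target versus non-target epochs with a linear discriminant. The arrays, the 100 microvolt rejection threshold, and the choice of shrinkage LDA are illustrative assumptions, not the specific tools used in the practical session.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for epoched EEG: 200 epochs x 8 channels x 128 samples,
    # in microvolts. In the session these would come from the recording system.
    n_epochs, n_channels, n_samples = 200, 8, 128
    epochs = rng.normal(0.0, 10.0, size=(n_epochs, n_channels, n_samples))
    labels = rng.integers(0, 2, size=n_epochs)      # 1 = target, 0 = non-target

    # Give target epochs a small P300-like bump so there is something to find,
    # and give a few epochs large blink-like deflections on the first channel.
    bump = 5.0 * np.exp(-0.5 * ((np.arange(n_samples) - 75) / 10.0) ** 2)
    epochs[labels == 1] += bump
    epochs[:5, 0, 40:80] += 300.0

    # 1) Artefact handling: drop any epoch whose peak-to-peak amplitude exceeds
    #    a threshold, since eye blinks produce very large deflections.
    peak_to_peak = epochs.max(axis=2) - epochs.min(axis=2)
    keep = (peak_to_peak < 100.0).all(axis=1)       # 100 uV threshold (assumed)
    epochs, labels = epochs[keep], labels[keep]

    # 2) Averaging: the ERP of each condition is the mean over its epochs;
    #    a grand average would additionally average over subjects.
    target_erp = epochs[labels == 1].mean(axis=0)
    nontarget_erp = epochs[labels == 0].mean(axis=0)

    # 3) Classification: flatten each epoch into a feature vector and use
    #    shrinkage LDA, a common baseline for ERP-based classification.
    X = epochs.reshape(len(epochs), -1)
    lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    scores = cross_val_score(lda, X, labels, cv=5)
    print(f"Cross-validated accuracy: {scores.mean():.2f}")

In the practical session the single-trial classification is done on real recordings rather than synthetic arrays, which is precisely what makes it hard by hand and motivates the use of machine learning.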