
BrainCom is a collaborative research project that aims to develop a new generation of neuroprosthetic devices for large-scale, high-density recording and stimulation of the human cortex, suitable for exploring and repairing high-level cognitive functions. Since one of the most disabling neuropsychological conditions is arguably the inability to communicate with others, BrainCom primarily focuses on restoring speech and communication in aphasic patients suffering from upper spinal cord, brainstem or brain damage. To target broadly distributed neural systems such as the language network, BrainCom will use novel electronic technologies based on nanomaterials to design ultra-flexible cortical and intracortical implants adapted to large-scale, high-density recording and stimulation.


Existing technologies, such as fMRI and EEG, are relatively effective at identifying the kinds of neural signals relevant to speech. However, they operate too slowly, and generally at too low a resolution, to be of use in a realistic speech situation such as conversation. Progress in electrocorticography (ECoG) recording has provided the means to overcome these limitations.
Via electrodes placed directly onto the cerebral cortex, i.e. the surface of the brain, electrical activity can be read at high spatial and temporal resolution. These electrodes can be placed over regions of the cortex known to be involved in speech, such as the motor areas associated with mouth and throat movements.
Taking advantage of novel materials and device designs, BrainCom is developing new ECoG technology based on this approach: decoding articulatory activity from the premotor and motor cortices and from Broca's area, brain regions associated with movement planning, movement execution, and linguistic expression. From the information recorded by BrainCom ECoG arrays, speech could eventually be decoded with very high accuracy, by inferring from motor-area activity the phonemes that such activity would produce.
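To make the decoding step concrete, here is a minimal sketch in Python of the simplest form such a mapping could take: a nearest-centroid classifier that assigns a vector of per-channel ECoG features to one of a few phoneme labels. Everything here is illustrative, not BrainCom's actual method: the data are simulated, and the channel count, phoneme set and noise level are assumptions; real decoders are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64 ECoG channels, 3 phoneme classes. Each class is
# simulated as a Gaussian cloud around a class-specific spatial activity
# pattern, a stand-in for real motor-cortex recordings.
n_channels, n_train = 64, 200
phonemes = ["aa", "iy", "uw"]
centroids_true = rng.normal(size=(len(phonemes), n_channels))

def simulate(label, n):
    """Draw n noisy feature vectors for one phoneme class."""
    return centroids_true[label] + 0.5 * rng.normal(size=(n, n_channels))

X_train = np.vstack([simulate(k, n_train) for k in range(len(phonemes))])
y_train = np.repeat(np.arange(len(phonemes)), n_train)

# Nearest-centroid decoder: estimate one mean activity pattern per phoneme
# from training trials, then assign new activity to the closest pattern.
centroids_est = np.array([X_train[y_train == k].mean(axis=0)
                          for k in range(len(phonemes))])

def decode(x):
    """Return the phoneme label whose estimated centroid is nearest to x."""
    dists = np.linalg.norm(centroids_est - x, axis=1)
    return phonemes[int(np.argmin(dists))]

# Evaluate on held-out trials of a single phoneme.
X_test = simulate(1, 50)  # 50 fresh trials of the phoneme "iy"
acc = np.mean([decode(x) == "iy" for x in X_test])
print(f"decoding accuracy on held-out 'iy' trials: {acc:.2f}")
```

On this easy simulated problem the decoder is near-perfect; the point is only to show the shape of the inference, from a spatial pattern of motor-area activity to a discrete phoneme label.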
When users of the neuroprosthetic technology vividly imagine saying words, saying them clearly in their head, the brain produces patterns of activity similar to those of overt speech. Through targeted ECoG recording, and sophisticated algorithms that extract maximally informative features from the neural data, intended speech could be predicted from this vivid imagining, realising the possibility of externalising unvocalisable speech for language-compromised users.
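One widely used way of extracting informative features from ECoG signals is to estimate power in the high-gamma band (roughly 70-150 Hz), which correlates with local cortical activity. The sketch below shows the idea on a synthetic signal; the sampling rate, frequencies and amplitudes are illustrative assumptions, not parameters of BrainCom's actual pipeline.

```python
import numpy as np

fs = 1000                      # assumed sampling rate, in Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of samples

# Synthetic channel: a 10 Hz background rhythm, a 100 Hz component
# standing in for speech-related high-gamma activity, plus noise.
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * 10 * t)
          + 0.8 * np.sin(2 * np.pi * 100 * t)
          + 0.2 * rng.normal(size=t.size))

def band_power(x, fs, lo, hi):
    """Mean spectral power of x between lo and hi Hz, via the FFT."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

high_gamma = band_power(signal, fs, 70, 150)   # band of interest
neighbour = band_power(signal, fs, 151, 300)   # comparison band
print(f"high-gamma power: {high_gamma:.2f}, neighbouring band: {neighbour:.2f}")
```

The high-gamma band clearly dominates the neighbouring band here because the synthetic "activity" was placed at 100 Hz; in real recordings, time-resolved features of this kind, computed per channel, would feed a decoder such as the one sketched above.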
Scaling this principle up in size and speed provides the vision that animates BrainCom's neuroprosthetic aims: phonemes, words, sentences and dialogue, produced from accurately decoded neural signals.
