Researchers from the University of Electronic Science and Technology of China have developed a method that converts brainwaves into expressive music. The process uses electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to create the soundtracks.
The fMRI data gives the notes a more natural intensity, mimicking the work of human composers, in which “pitch and intensity are largely independent of one another.” The team explains in the study, published online:
The intensity of EEG music changed quickly and abruptly and this is not the usual case in man-made music. We chose another source of brain information, the fMRI signal, to serve as the intensity information source. As the fMRI intensity evolution is smooth and leisurely, the resultant EEG-fMRI music sounds closer to man-made real music.
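The division of labor the quote describes, with EEG driving pitch and the slower, smoother fMRI signal driving intensity, can be sketched roughly as follows. This is an illustrative assumption, not the authors' actual implementation: the value ranges, MIDI mapping, and sampling-rate alignment are all placeholders.

```python
# Illustrative sketch only (not the published method): map EEG amplitude to
# MIDI pitch and a slowly varying fMRI-like (BOLD) signal to note intensity.

def eeg_to_pitch(amplitude, amp_min=0.0, amp_max=100.0, low_note=48, high_note=84):
    """Linearly map an EEG amplitude (assumed range) to a MIDI note number."""
    frac = (amplitude - amp_min) / (amp_max - amp_min)
    frac = min(max(frac, 0.0), 1.0)  # clamp to the assumed range
    return round(low_note + frac * (high_note - low_note))

def fmri_to_velocity(bold, bold_min=-2.0, bold_max=2.0, v_min=40, v_max=110):
    """Map a smooth fMRI BOLD value (assumed range) to MIDI velocity."""
    frac = (bold - bold_min) / (bold_max - bold_min)
    frac = min(max(frac, 0.0), 1.0)
    return round(v_min + frac * (v_max - v_min))

def compose(eeg_amplitudes, bold_series):
    """Pair each fast EEG sample with the slower fMRI sample covering it.

    EEG is sampled far faster than fMRI, so several consecutive notes
    share one intensity value, which keeps loudness evolving smoothly.
    """
    notes = []
    ratio = max(1, len(eeg_amplitudes) // max(1, len(bold_series)))
    for i, amp in enumerate(eeg_amplitudes):
        bold = bold_series[min(i // ratio, len(bold_series) - 1)]
        notes.append((eeg_to_pitch(amp), fmri_to_velocity(bold)))
    return notes
```

Because each fMRI value spans many EEG samples, intensity changes gradually across runs of notes while pitch still varies note to note, mirroring the independence of pitch and loudness the team observed in human-composed music.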
Below is a score sheet of a composition produced by the method.