Brain2Music is an amazing project.
The process of reconstructing experiences from human brain activity offers a unique lens into how the brain interprets and represents the world. In this paper, we introduce a method for reconstructing music from brain activity, captured using functional magnetic resonance imaging (fMRI). Our approach uses either music retrieval or the MusicLM music generation model conditioned on embeddings derived from fMRI data. The generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation, and mood. We investigate the relationship between different components of MusicLM and brain activity through a voxel-wise encoding modeling analysis. Furthermore, we discuss which brain regions represent information derived from purely textual descriptions of music stimuli.
The project was developed by a research team from Google, Osaka University, NICT, and Araya Inc. From brain responses, it can decode characteristics of the music being listened to, such as its genre, instrumentation, and mood, and generate music with those characteristics. For example, if the subject listened to jazz, the generated music will also have elements of jazz.

This research is expected to deepen our understanding of the relationship between music and the brain, and may one day make it possible to reconstruct music from human imagination. The team used functional magnetic resonance imaging (fMRI) to capture brain activity, and either music retrieval or the MusicLM music generation model to generate music. The generated music resembles the musical stimuli experienced by human subjects in terms of semantic properties such as genre, instrumentation, and mood.
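The retrieval variant of this pipeline can be sketched as: learn a regression from fMRI voxel responses to music embeddings, then pick the candidate clip whose embedding is closest to the prediction. The sketch below is a minimal illustration with synthetic data and plain ridge regression; the dimensions, the closed-form solver, and the cosine-similarity retrieval are assumptions for demonstration, not the paper's actual dataset or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: n_train stimuli, n_voxels fMRI features,
# embed_dim-dimensional music embeddings (all placeholder sizes).
n_train, n_voxels, embed_dim = 300, 100, 64
W_true = rng.normal(size=(n_voxels, embed_dim))   # hidden brain-to-embedding map
X_train = rng.normal(size=(n_train, n_voxels))    # fMRI responses
Y_train = X_train @ W_true                        # embeddings of heard clips

# Closed-form ridge regression: W = (X^T X + lam * I)^(-1) X^T Y
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_voxels),
                    X_train.T @ Y_train)

def predict_embedding(x):
    """Predict a music embedding from one fMRI response vector."""
    return x @ W

def retrieve(query_emb, pool_embs):
    """Index of the candidate clip most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    return int(np.argmax(p @ q))

# Usage: predict from a held-out response, then retrieve from a small pool
# whose first entry is the true clip's embedding.
x_test = rng.normal(size=n_voxels)
pool = np.vstack([x_test @ W_true, rng.normal(size=(9, embed_dim))])
best = retrieve(predict_embedding(x_test), pool)
```

In the generation variant, the predicted embedding would instead condition a model like MusicLM rather than index into a retrieval pool.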
Their method consists of the following steps: