Brain2Music: Reconstructing Music from Human Brain Activity vs HitPaw AI Video Enhancer
| Field | HitPaw AI Video Enhancer | Brain2Music |
|---|---|---|
| Listing Type | AI Tools | AI Tools |
| Price | $29.95 | |
| Category | AI Image, AI Video and Shorts, macOS and Windows, Music & Audio & Voice | Music & Audio & Voice |
AI Tool or Product Features

HitPaw AI Video Enhancer:
- Powered by trained AI models: upscale your video with a single click.
- A solution for low-resolution videos: increase video resolution to SD, HD, 4K, and up to 8K.
- Noise reduction for videos to get rid of blur and restore clarity.
- Exclusively designed AI models for perfecting anime and human-face videos.
Brain2Music:
1. Brain activity data is acquired through functional magnetic resonance imaging (fMRI). This data is then mapped into the embedding space of a model called MuLan, a joint music-text embedding model that represents a piece of music as a 128-dimensional vector (a list of 128 numbers) capturing musical characteristics such as rhythm, melody, and harmony. (A minimal sketch of this decoding step appears after this list.)
2. The music generation model MusicLM is then conditioned on the predicted embedding to produce music reconstructions, with the goal that the generated music resembles, as closely as possible, the original musical stimulus the subject was hearing when the brain activity was recorded. Besides generating new music, the authors also considered a retrieval approach: finding the track in a large existing music database that best matches the brain activity (see the first sketch after this list).
3. They also found that two components of MusicLM (MuLan and w2v-BERT) show a correspondence with human brain activity in the auditory cortex, and that the brain regions involved in extracting information from text and from music overlap (see the second sketch after this list).
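The decoding pipeline in steps 1 and 2 can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' released code: random arrays stand in for real fMRI responses and MuLan embeddings, ridge regression stands in for the linear mapping from voxels to the 128-dimensional embedding space, and the retrieval variant is shown as a cosine-similarity lookup against a hypothetical library of precomputed embeddings. The generative variant would instead pass the predicted embeddings to MusicLM as its conditioning signal; that model is not publicly available, so it is not called here.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical data shapes (placeholders, not the paper's dataset):
#   X_train: (n_clips, n_voxels)  fMRI responses while listening to training clips
#   Y_train: (n_clips, 128)       MuLan embeddings of those same clips
#   X_test:  (n_test, n_voxels)   fMRI responses for held-out clips
rng = np.random.default_rng(0)
n_train, n_test, n_voxels, emb_dim = 400, 60, 6000, 128
X_train = rng.standard_normal((n_train, n_voxels))
Y_train = rng.standard_normal((n_train, emb_dim))
X_test = rng.standard_normal((n_test, n_voxels))

# Step 1: linear decoder from fMRI voxels to the 128-dimensional MuLan embedding space.
decoder = Ridge(alpha=1e3)
decoder.fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)  # (n_test, 128) predicted embeddings

# Step 2 (retrieval variant): pick the library clip whose MuLan embedding is
# most similar, by cosine similarity, to each predicted embedding.
db_embeddings = rng.standard_normal((10000, emb_dim))  # hypothetical music library

def normalize(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

similarity = normalize(Y_pred) @ normalize(db_embeddings).T  # (n_test, n_db)
best_match = similarity.argmax(axis=1)  # index of the retrieved clip per test item
print(best_match[:5])
```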
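For the correspondence reported in step 3, a common way to quantify how well a model component's representation matches brain activity is a voxel-wise encoding analysis. The sketch below is a generic illustration of that idea under stated assumptions (random stand-in data, ridge regression, mean Pearson correlation as the score), not the paper's exact procedure; all names and dimensions here are hypothetical. Comparing the scores obtained from MuLan embeddings and from w2v-BERT features over auditory-cortex voxels is one way to ask which component corresponds more closely to the measured activity.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def encoding_score(features, voxel_responses, alpha=1e3, seed=0):
    """Fit features -> voxel responses on a training split and return the mean
    Pearson correlation between predicted and actual held-out responses."""
    F_tr, F_te, V_tr, V_te = train_test_split(
        features, voxel_responses, test_size=0.2, random_state=seed)
    model = Ridge(alpha=alpha).fit(F_tr, V_tr)
    V_hat = model.predict(F_te)
    # Correlate prediction and measurement per voxel, then average across voxels.
    corrs = [np.corrcoef(V_hat[:, v], V_te[:, v])[0, 1] for v in range(V_te.shape[1])]
    return float(np.nanmean(corrs))

# Hypothetical stand-ins for real data: per-clip embeddings from two model
# components and the corresponding auditory-cortex voxel responses.
rng = np.random.default_rng(1)
n_clips, n_voxels = 500, 300
mulan_emb = rng.standard_normal((n_clips, 128))
w2v_bert_emb = rng.standard_normal((n_clips, 1024))
voxels = rng.standard_normal((n_clips, n_voxels))

print("MuLan encoding score:   ", encoding_score(mulan_emb, voxels))
print("w2v-BERT encoding score:", encoding_score(w2v_bert_emb, voxels))
```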
| Field | HitPaw AI Video Enhancer | Brain2Music |
|---|---|---|
| Paid Plan | Pay one time | |
| Free Plan | Free Try | |
| Open Source or API | Other | Other |
| AI Product Website URL | https://www.hitpaw.com/hitpaw-video-enhancer.html | https://google-research.github.io/seanet/brain2music/ |