
The Science

This work was performed by Dr. Psyche Loui at Wesleyan University under NSF-STTR #1720698. Please direct inquiries to research@brain.fm

Summary

Brain.fm designs music to help you do things. Most music is optimized for how it sounds (aesthetics), but Brain.fm's music is also optimized for what it does (function). Our various music engines (A.I. systems) are each built to help the listener achieve a different desired mental state, such as focus or sleep. Our focus function has been very popular, and we recently won a National Science Foundation (NSF) grant to pursue this kind of music further. Initial results of this research suggest that:

- Brain.fm focus affects brain rhythms in the way we thought it would, promoting large-scale synchrony ("entrainment") much more than other music.
- Brain.fm focus reduces mind-wandering during task performance, compared to other backgrounds such as silence, noise, or other music.
- The people whose brain rhythms are most affected by Brain.fm also show the greatest reduction in mind-wandering. This suggests that, as intended, Brain.fm's unique effect on brain rhythms can influence performance on a task.

Other functions such as sleep are also producing exciting results. That work can be found at brain.fm/science (or in the attached supplementary materials), but is not detailed here.

We use music for all sorts of things: to help us study, to push us in a workout, or to get us to sleep, just to name a few. These functions probably weren't intended by the artist, but they are useful side effects of music that we recognize and use in our everyday lives. At Brain.fm we directly optimize music to target these functions. Guided by neuroscience and perceptual psychology, we develop hypotheses about what might be effective and test those hypotheses at large scale to figure out what works. Some desirable features of useful music are straightforward, such as not having vocals or sudden bursts of loud sound if you are trying to focus. But other, less obvious features seem to make a difference as well.
One approach that appears successful is to deliver rapid amplitude modulations in the music (Fig.1) that are designed to alter neural oscillations (rhythmic activity in the brain). The ability to change brain rhythms in order to boost task performance has recently been demonstrated in the lab by others (e.g. Albouy et al. 2017, Neuron), but these methods require magnetic or electrical stimulation and are impractical for widespread use.
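As a rough illustration of what "concentrating modulation energy at particular rates" means acoustically, a modulation spectrum can be estimated by taking the FFT of a track's amplitude envelope. This sketch is our own reconstruction, not Brain.fm's actual analysis pipeline, and the 16 Hz modulation rate in the example is an arbitrary illustrative value:

```python
import numpy as np
from scipy.signal import hilbert

def modulation_spectrum(audio, sr, max_mod_hz=100.0):
    """Amplitude-modulation spectrum: FFT of the audio's amplitude envelope."""
    # Amplitude envelope via the analytic signal (Hilbert transform).
    envelope = np.abs(hilbert(audio))
    envelope -= envelope.mean()            # remove DC so the 0 Hz bin doesn't dominate
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sr)
    keep = freqs <= max_mod_hz             # modulation rates of interest (0-100 Hz)
    return freqs[keep], spectrum[keep]

# Synthetic check: a 1 kHz carrier amplitude-modulated at 16 Hz (illustrative rate).
sr = 8000
t = np.arange(0, 2.0, 1.0 / sr)
audio = (1.0 + 0.8 * np.sin(2 * np.pi * 16 * t)) * np.sin(2 * np.pi * 1000 * t)
freqs, spec = modulation_spectrum(audio, sr)
peak_hz = freqs[np.argmax(spec)]           # peak sits at the 16 Hz modulation rate
```

A track with modulation energy "concentrated" in the sense of Fig.1 would show a pronounced peak in this spectrum, whereas ordinary music spreads its envelope fluctuations across many rates.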

In contrast, music is easily delivered and enjoyable, and the possibility of similar performance-enhancing effects via auditory stimulation is apparent from the positive feedback of current users and from Brain.fm's success as a productivity tool to date. But the mechanisms behind such effects are not yet understood. Here we report on some of our ongoing work under an NSF Technology-Transfer grant awarded for this purpose.

The Music

All music contains amplitude modulation (quick changes in loudness) at many rates. Differences in amplitude modulation make sounds smooth or rough, and are heard as aspects of timbre and sonic texture in music. Brain.fm's music engine for focus concentrates modulation energy at particular rates (Fig.1) to more dramatically change the listener's brain rhythms (Fig.2).

Figure 1. Acoustic features unique to Brain.fm. The vertical axis on these plots depicts frequency (0-20kHz). Acoustic energy in the lower portion of both plots (warm colors) indicates that these tracks sound dark rather than bright, having little energy at high frequencies. The horizontal axis shows the amplitude modulation spectrum (amplitude fluctuations from 0-100Hz). Left panel: modulation spectrum (averaged over time) of a typical Brain.fm track. Right panel: modulation spectrum of a comparison track with similar timbre, tempo, and feel. Modulation energy is concentrated in Brain.fm and distributed in the comparison music. This is an unusual feature unique to Brain.fm, intended to have particular effects on the brain's rhythms. The results below (Fig.2) show that it has the effects intended.

Figure 2. Synchronized neural activity responding to Brain.fm versus control music. Red shows higher synchronized activity. Left: Brain.fm. Phase-locking values to amplitude modulation patterns in Brain.fm. Right: control music. The same measurement for other music similar to Brain.fm but lacking particular modulation patterns, as depicted in Fig.1. Comparisons to noise and silence are not shown because the control music produced greater phase-locking than either.

The differences in acoustic modulation seen in Figure 1 appear in the brain as differences in synchronized neural activity in Figure 2. Brain.fm can thus be shown to produce very high levels of synchronized activity, and the music is designed to do just this, by having a predominant modulation rate. But will this process we see in the brain (called "entrainment") have effects on task performance?

BEHAVIOR: How does Brain.fm ("Focus") affect performance?

We needed a standard task to evaluate focus, and chose the Sustained Attention to Response Task (SART), which is the current gold standard for evaluating mind-wandering behavior. In this task, digits flash on screen one at a time, and the participant responds as quickly as

possible by a simple rule (e.g., hit a key if the number is not a "3"). At first their response times will be similar from one trial to the next, but the task is quite boring, and over the course of hundreds of trials (several minutes) response times will become increasingly variable as the participant's attention drifts in and out of the task. We found that this mind-wandering behavior was reduced when listening to Brain.fm, compared to silence, noise, and other commercially available music (the control music was chosen to be very similar in tempo and feel, but lacked some of the acoustic features introduced in Brain.fm to drive oscillatory activity).

Figure 3. Change in reaction-time variability ("mind-wandering") over ~10 minutes, under different background sounds. The y-axis depicts the slope of the reaction-time coefficient of variation (RTCV), taken over windows of 10 trials. Higher values indicate that reaction times became more variable over the course of the experiment, suggesting mind-wandering; lower values indicate consistent response times (better). N=45. Error bars depict +/-1 standard error of the mean.

This effect was encouraging, but we noticed that individual people could show very different patterns of performance under the different conditions. We split our participants into two groups based on the comparison between their performance in Brain.fm versus silence; those with more consistent response times in silence (i.e., a lower slope of RTCV) are grouped as Brain.fm-ineffective users for this analysis. The other participants showed more consistent responses in Brain.fm than in silence, and are grouped as Brain.fm-effective users.

Figure 4. Change in reaction-time variability ("mind-wandering"), with participants split into two groups. Same data as the previous figure: effective users (N=19 out of 45) had more consistent response times when listening to Brain.fm as compared to silence. Since the grouping was based only on the comparison to silence, the comparison between the resulting groups in the other conditions (Spotify, Noise) tells us that the Brain.fm-effective users were not better with just any music or sound; instead, the benefit was specific to Brain.fm. N=45. Error bars depict +/-1 standard error of the mean.

These groups of people were doing the same task and had the same sounds entering their ears, so how did they differ, such that their performance would vary so widely with the auditory background?
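The mind-wandering metric described above (slope of RTCV over 10-trial windows) can be sketched as follows. This is an illustrative reconstruction from the description in the text, assuming non-overlapping windows and a least-squares slope, not the study's actual analysis code:

```python
import numpy as np

def rtcv_slope(reaction_times, window=10):
    """Slope of reaction-time coefficient of variation (RTCV) across a session.

    RTCV = std/mean of reaction times within each non-overlapping window of
    trials. The returned slope is positive when responses grow more variable
    over the session (suggesting mind-wandering) and near zero when they
    stay consistent.
    """
    rts = np.asarray(reaction_times, dtype=float)
    n_windows = len(rts) // window
    rtcv = np.array([
        rts[i * window:(i + 1) * window].std() / rts[i * window:(i + 1) * window].mean()
        for i in range(n_windows)
    ])
    # Least-squares slope of RTCV against window index.
    return np.polyfit(np.arange(n_windows), rtcv, 1)[0]

# A simulated participant whose responses grow noisier over 300 trials:
rng = np.random.default_rng(0)
trials = np.arange(300)
rts = 0.5 + rng.normal(0.0, 0.02 + 0.0003 * trials)  # RT noise grows over the session
slope = rtcv_slope(rts)                              # positive: increasing variability
```

Dividing by the window mean makes the measure insensitive to a participant's overall speed, so the slope isolates the drift in consistency rather than differences in baseline reaction time.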

The brain's response to the music affects task performance

It turns out that if we place people into groups based on their task performance while listening to Brain.fm (as in the behavioral data above), we find large differences between these groups in how their brains respond to the music. This is remarkable because the groups are chosen based solely on task performance; the group-selection process is blind to the neuroimaging data. Below, on the left are the participants who became more consistent over time in the task while listening to Brain.fm; on the right are the participants who became less consistent in the task while listening to Brain.fm. Again, both groups were listening to Brain.fm and doing the same task.

Figure 5. EEG analysis. Participants in the left and right panels were grouped by task performance, looking only at their behavioral data. Plots depict power spectral density (PSD) around the dominant modulation frequency in Brain.fm music, indicating the degree to which neural populations were fluctuating at the music's dominant rhythm. All participants were listening to the same music and doing the same task. The remarkable finding is that Brain.fm's neural effects are much clearer in the participants who performed consistently on the visual task.
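One simple way to quantify the kind of PSD effect shown in Figure 5 is to compare EEG power at the music's dominant modulation rate against power in neighboring frequency bands. This sketch is our own illustration: the `entrainment_index` function, the flanking-band comparison, and the 16 Hz example rate are assumptions, not the study's published method:

```python
import numpy as np
from scipy.signal import welch

def entrainment_index(eeg, fs, target_hz, neighbor_hz=2.0):
    """Power at the stimulation rate relative to neighboring frequencies.

    Welch PSD of one EEG channel; values well above 1 suggest the neural
    signal fluctuates preferentially at the target rate.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)   # ~0.25 Hz resolution
    at_target = psd[np.argmin(np.abs(freqs - target_hz))]
    # Average power in flanking bands on either side of the target bin.
    flanks = (np.abs(freqs - target_hz) > 0.5) & (np.abs(freqs - target_hz) <= neighbor_hz)
    return at_target / psd[flanks].mean()

# Synthetic "entrained" EEG: a 16 Hz oscillation buried in broadband noise.
fs = 250
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = 0.5 * np.sin(2 * np.pi * 16 * t) + rng.normal(0.0, 1.0, t.size)
index = entrainment_index(eeg, fs, 16.0)   # well above 1 for this signal
```

A narrow peak at the stimulation rate that towers over its neighbors is exactly the signature the left panel of Figure 5 describes; an unentrained recording would yield an index near 1.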

Since the task and music are unrelated, this phenomenon is hard to explain unless we suppose that the individual's neural response to the music somehow drives task performance (reaction-time consistency). Alternatively, some third factor could underlie both task performance and the neural response to Brain.fm, but it is hard to imagine what this might be. One possibility is that people whose brains entrain more readily in general tend to benefit from Brain.fm music, which, after all, is designed to entrain brain rhythms. In other words, individuals differ in how entrainable they are, and Brain.fm appears to be more effective for people whose brains are better at entrainment.

What's next?

Research under our government grant is only beginning, but these initial studies are already delivering exciting results. We are now using other imaging methods (fMRI) to look at how music interacts with attention and memory, and we plan a wide variety of tasks to examine how music affects behavior on a large scale. A priority now is understanding the large and important differences we observe between people. What differentiates the groups of participants described above? Learning who benefits most, and why, will allow us to further improve the music.

This document highlights one recent finding from our NSF-funded research on Brain.fm focus. Additional results have been presented at conferences (e.g., Auditory Cortex, 2017) and are attached as supplementary material.