The Science Behind Music: How Digital Signal Processing Powers Our Favorite Tunes

Tuesday, 23 May, 2017
By: Dr. Ibrahim Atakan KUBİLAY

What if we lived in a world without music? It's utterly unthinkable. Music has long been an important part of human life. Researchers have found human remains alongside flutes made from animal bones dating back more than 40,000 years [1]. In ancient Greek mythology, the muses were goddesses or water nymphs who inspired humans in the fields of art and science, spurring incredible developments [2]. Although there are many definitions of music, a concise one is humanly audible sound that, when listened to by others, stimulates feelings and emotions. While the border between noise and music has become increasingly blurred in the modern era, we define audio noise as an audible sound that is disturbing to us, which is admittedly a very subjective definition based on perspective.

Moving from the emotional to the physical side, all signals we perceive as sound are nothing but pressure variations in the air. Essentially, our ears are barometers able to detect these tiny pressure variations. When the air is disturbed by a sound source, such as a guitar, invisible oscillations occur, like ripples in a pond. As a result, some air molecules group tightly together (compression) and some move away from each other (rarefaction). Our ears pick up these disturbances in the air, and the signals transmitted from our ears to our brains stimulate emotions.

The Introduction of Digital Signal Processing (DSP)

Around the early 1950s, a machine was introduced that forever transformed how we hear music – the computer – and a new field was born. Digital Signal Processing (DSP) is the alteration and manipulation of digital signals. The term "digital signal" can describe a wide variety of data, provided that it satisfies two conditions: it is discrete (usually defined at fixed intervals) and it consists of values that are readable by a computer. One of the most intuitive examples of a digital signal is stock market data (the value of a given stock at the closing moment). Such data is obviously discrete, as it is defined only once per day. And, as long as we can map the real-number data to a somewhat rounded or truncated form that the computer can process, it satisfies all the conditions for being qualified as a digital data stream.
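As a toy illustration of those two conditions, the hypothetical sketch below (the prices and the two-decimal rounding are invented for the example) turns a short series of real-valued daily closing prices into a discrete, machine-readable stream of integers:

```python
# Hypothetical daily closing prices: the data is discrete (one value per day).
closing_prices = [101.37, 102.05, 99.84, 100.12]

# Round each real-valued price to an integer number of cents so the computer
# can store every value exactly in a finite, machine-readable form.
digital_signal = [round(price * 100) for price in closing_prices]

print(digital_signal)  # [10137, 10205, 9984, 10012] -- a digital data stream
```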

The digital computer (yes, there are also analog computers) requires everything it processes to be digitized. This means that the sound can no longer be continuous; it must be discretized by taking values only at specific points in time (this discretization process is called sampling). The values themselves must be quantized as well, so we no longer have the infinite possibilities of real numbers. As bad as this sounds, today we usually sample an analog sound 44,100 times per second and assign each sample one of 2^16 = 65,536 possible values (this is CD-quality audio). Professional studios use much higher sampling rates and quantization resolutions. Once this conversion is done, we have digital data that the computer can play with, process, enhance, and manipulate at will.
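A minimal sketch of that digitization step, assuming NumPy is available and using a 440 Hz sine wave as a stand-in for the analog sound, might look like this: the signal is sampled 44,100 times per second and each sample is quantized to one of 2^16 = 65,536 levels (a 16-bit integer).

```python
import numpy as np

fs = 44_100                               # sampling rate in Hz (CD quality)
duration = 1.0                            # one second of audio
t = np.arange(int(fs * duration)) / fs    # discrete sampling instants

# A 440 Hz sine wave stands in for the continuous sound being sampled.
analog_like = np.sin(2 * np.pi * 440 * t)

# Quantize each sample to a 16-bit integer (one of 65,536 possible values).
quantized = np.round(analog_like * 32767).astype(np.int16)

print(quantized[:5], quantized.dtype)     # first few 16-bit samples
```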

A goal of computer musicians is to make a piece of music sound as if it were heard in a very large room, with reflections from the walls. These reflected sounds are called reverberation. Importantly, the direct path from the sound source to the listener is the shortest one, and the reflections arrive later and with reduced strength. Therefore, if the same sound, scaled down and delayed by a certain amount of time, is added to the main sound source, the result is an impression of reverberation in a large hall, like an auditorium, while in fact everything is coming from your earphones. To add a scaled-down and delayed version of a signal to the original, the main building blocks of DSP are used: a delay block, a gain block, and a summation block, as in the sketch below. So this application of adding reverberation to a sound fits very nicely with how a DSP system works.
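A minimal sketch of that delay/gain/summation structure, assuming NumPy and a one-dimensional array of samples (the function name and parameter values are invented for the example), could look like this:

```python
import numpy as np

def add_reflection(signal, fs, delay_s=0.25, gain=0.4):
    """Return signal plus a copy scaled by `gain` and delayed by `delay_s` seconds."""
    delay_samples = int(delay_s * fs)             # delay block
    out = np.zeros(len(signal) + delay_samples)
    out[delay_samples:] = gain * signal           # gain block applied to the delayed copy
    out[:len(signal)] += signal                   # summation block: add the direct sound
    return out

# Usage: a short burst of noise gains an audible "reflection" 0.25 s later.
fs = 44_100
dry = np.random.randn(fs) * np.hanning(fs)
wet = add_reflection(dry, fs, delay_s=0.25, gain=0.4)
```

Real reverberators combine many such reflections with different delays and gains (often with feedback), but even this single echo conveys the basic idea.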

DSP is responsible for much of the sound you hear today. It has not only altered how we hear music, it has also greatly enhanced the world around us. The next time you hear a sound, think about where it's coming from, the complexities behind it, and the impressive science that's making it all possible.

About the Author: 

Ibrahim Atakan KUBILAY, Ph.D., works in the Department of Computer Engineering at Dokuz Eylul University, İzmir.


[1] https://en.wikipedia.org/wiki/Prehistoric_music
[2] http://www.greekmythology.com/Other_Gods/The_Muses/the_muses.html
