
2.1 What is a sound?

Let’s start with the basics since, in order to understand how a sound is represented and can be manipulated by a computer, one should have some understanding of what a sound actually is physically. In a nutshell, a sound is a small rapid variation in the air pressure. The variation of pressure is captured by your internal ear and transmitted to the brain, which interprets this signal as a sound.

A microphone is nothing but an artificial ear: it measures the air pressure and converts it into an electrical signal. A sound is therefore usually represented as a function of time giving the air pressure (or, equivalently, the voltage at the microphone's output) at each instant. A typical representation of a sound signal looks like this:

[Figure: waveform of a periodic sound signal, with the period T marked between two successive repetitions of the waveform]
Note that, very often, the signal is (at least approximately) periodic, as in the picture above. The time interval between two identical-looking chunks of the signal, denoted by T in the picture above, is called the period of the signal and is usually measured in milliseconds. The inverse of the period is called the frequency of the signal and is measured in Hertz, abbreviated Hz. Some high-frequency sound signals are measured in kilohertz (thousands of Hertz), abbreviated kHz. A Hertz is the inverse of a second, so a sound with a period of 0.05 seconds has a frequency of 1/0.05 = 20 Hertz. The human ear can hear sounds with frequencies ranging approximately between 20 Hz and 20 kHz; these bounds vary with age, with older people less able to perceive high frequencies. Sounds with frequencies below 20 Hz are called infrasounds and sounds with frequencies above 20 kHz are called ultrasounds.
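To make the period-to-frequency relationship concrete, here is a minimal sketch in Python (assuming NumPy is installed; the variable names and the classify helper are purely illustrative and are not part of Amadeus Pro):

import numpy as np

# Period of the example signal from the text: 0.05 seconds.
period_s = 0.05
frequency_hz = 1.0 / period_s                # 1 / 0.05 = 20 Hz

# Approximate limits of human hearing quoted above.
AUDIBLE_LOW_HZ, AUDIBLE_HIGH_HZ = 20.0, 20_000.0

def classify(freq_hz):
    """Label a frequency as infrasound, audible, or ultrasound."""
    if freq_hz < AUDIBLE_LOW_HZ:
        return "infrasound"
    if freq_hz > AUDIBLE_HIGH_HZ:
        return "ultrasound"
    return "audible"

# A simple periodic "pressure" signal: a sine wave sampled over one second,
# repeating once every 0.05 seconds.
t = np.linspace(0.0, 1.0, 44_100, endpoint=False)    # time axis in seconds
pressure = np.sin(2.0 * np.pi * frequency_hz * t)

print(f"period = {period_s} s -> frequency = {frequency_hz:.0f} Hz ({classify(frequency_hz)})")

Running this prints that a period of 0.05 s corresponds to 20 Hz, which sits right at the lower edge of the audible range.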


