This task is to expand my knowledge of sound, how it is applied in the media sector, and how I will go on to use it for this unit in my trailer.
What Is Sound?
Sound is a type of energy created through vibrations: an object vibrates, causing air particles to move and bump into more air particles, which creates sound waves. These waves travel through air at roughly 340 metres per second and, if within range, can be heard.
A waveform is an image that represents an audio signal or recording; more specifically, it shows the changes in the signal’s amplitude, measured on the y-axis, against time on the x-axis.
A Waveform’s Three Elements
As waves travel they create patterns of disturbance. The “amplitude” is the maximum disturbance of the wave from its undisturbed position; it is not to be confused with the distance between the top and bottom of a wave.
The wavelength is the distance between a point on one wave – such as the “crest” – and the same point on the next.
The frequency of a sound is how many times the air particles vibrate per second. The waveform changes according to the frequency, with a low frequency having fewer waves in a given time and a high frequency having more.
The unit of frequency is the “hertz” (Hz); kilohertz (kHz), megahertz (MHz) and gigahertz (GHz) are used when the frequency is very high.
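Speed, frequency and wavelength are linked: wavelength = speed ÷ frequency. A small sketch using the 340 m/s figure from above (the function name is my own, for illustration):

```python
# Wavelength = speed of sound / frequency (rearranged from v = f * wavelength).
SPEED_OF_SOUND = 340  # metres per second in air, as stated above

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given frequency in Hz."""
    return SPEED_OF_SOUND / frequency_hz

print(wavelength(20))      # lowest audible frequency -> 17.0 metres
print(wavelength(20_000))  # highest audible frequency -> 0.017 metres
```

This shows why low-frequency (bass) waves are physically long while high-frequency waves are only centimetres across.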
What Is A Decibel?
The decibel (dB) is a logarithmic unit in which the intensity of sound is measured.
Human ears are both sensitive and delicate; the lower the decibel, the harder a sound is for us to hear, while the higher the decibel, the more damage it can do.
Human Hearing Range
“Hearing range” describes the frequencies that can be heard by either humans or animals. For a human – though it depends on the individual – the common range is from 20 Hz to 20,000 Hz.
Below are decibel levels a human can experience and what they can do to our hearing:
- 0dB: the threshold of hearing – the weakest sound we can detect.
- 90-95dB: a level where sustained exposure could result in hearing loss.
- 125dB: where pain begins.
- 140dB: where even small exposure can cause permanent harm to hearing.
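Because the decibel is logarithmic, every extra 10 dB means ten times the sound intensity. A sketch of the standard conversion, dB = 10 × log₁₀(I / I₀), where I₀ is the threshold-of-hearing intensity (the function name is my own):

```python
import math

def intensity_to_db(intensity_ratio: float) -> float:
    """Convert a ratio of sound intensity (I / I0) to decibels."""
    return 10 * math.log10(intensity_ratio)

print(intensity_to_db(1))       # 0.0 dB  -> the threshold of hearing
print(intensity_to_db(10**9))   # 90.0 dB -> sustained exposure risks hearing loss
print(intensity_to_db(10**14))  # 140.0 dB -> even brief exposure causes harm
```

Notice that 140 dB is not “140 times” louder than 0 dB – it is a hundred trillion times the intensity, which is why the scale is logarithmic in the first place.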
Analogue And Digital
The difference between analogue and digital is how the sound is represented on a recording device or in software. In an analogue recording, the waveform varies in the same way as – is analogous to – the actual compression and expansion cycles of the sound itself. When an analogue signal is reproduced, the wave is converted back into acoustic energy, and any noise is reproduced along with it.
In digital recording, the wave is sampled at a constant rate, at a selected bit depth.
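That sampling process can be sketched in Python: the wave is measured at a constant rate, and each measurement is rounded to the nearest level the chosen bit depth allows (the function and values here are my own illustration):

```python
import math

def sample_sine(freq_hz: float, sample_rate: int, bit_depth: int, duration_s: float):
    """Sample a sine wave at a constant rate and quantise it to a bit depth."""
    levels = 2 ** (bit_depth - 1)          # signed range, e.g. 16-bit -> +/-32768
    n_samples = int(sample_rate * duration_s)
    samples = []
    for n in range(n_samples):
        value = math.sin(2 * math.pi * freq_hz * n / sample_rate)  # the analogue wave
        samples.append(round(value * (levels - 1)))                # quantised sample
    return samples

# One cycle of a 440 Hz tone at CD quality (44,100 Hz, 16-bit):
cd_samples = sample_sine(440, 44_100, 16, 1 / 440)
print(len(cd_samples))  # 100 samples for one cycle (44100 / 440, rounded down)
```

A higher sample rate gives more measurements per second; a higher bit depth gives each measurement more possible values, so the stored wave is closer to the original.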
Pros of analogue:
- Working with physical equipment can be easier, as computers aren’t always accurate.
- It can sound more natural due to the changes in air pressure.
- Working on long, large sessions won’t be slowed down the way it can be with digital equipment.
Cons of analogue:
- Analogue recordings are both harder to edit and more time-consuming.
- There can be unwanted noise, like hissing, that is difficult to edit out.
- There is no undo button if something goes wrong.
Pros of digital:
- Faster to work with and edit.
- Far easier to store and transport than analogue equipment.
- Unwanted noise, frequencies and distortion can be removed far more easily and in less time than with analogue.
Distortion refers to any form of processing which changes or damages the original signal.
Distortion usually happens when a circuit becomes overloaded. Waveforms get squashed when the peaks are flattened by a circuit that is unable to reproduce the higher levels.
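That flattening of the peaks is called clipping, and it can be sketched in a few lines: any sample beyond what the circuit can reproduce is squashed to the limit (the helper and values are my own illustration):

```python
def clip(samples, limit=1.0):
    """Hard-clip a signal: any peak beyond the limit is flattened to the limit."""
    return [max(-limit, min(limit, s)) for s in samples]

# An overloaded signal whose peaks exceed the 1.0 ceiling:
overloaded = [0.2, 0.8, 1.5, 2.0, 0.9, -1.7, -0.3]
print(clip(overloaded))  # [0.2, 0.8, 1.0, 1.0, 0.9, -1.0, -0.3]
```

The values within the limit pass through unchanged; only the peaks are squashed, which is exactly the “flattened waveform” shape distortion produces.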
Mono and Stereo
Mono – or monaural – sound is when a single channel is used; it can be reproduced through multiple speakers, though each speaker plays a copy of the original signal.
Stereo – or stereophonic – sound is when more channels are used, which are then split into the two speakers for directionality or depth.
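The difference can be sketched: a mono signal is one list of samples every speaker copies, while stereo carries two channels that can differ to place the sound in space (the panning helper and its gain law are my own simplified illustration):

```python
def mono_to_stereo(mono, pan=0.0):
    """Split a mono signal into left/right channels.

    pan = -1.0 is fully left, 0.0 is centre, 1.0 is fully right.
    """
    left_gain = (1 - pan) / 2
    right_gain = (1 + pan) / 2
    left = [s * left_gain for s in mono]
    right = [s * right_gain for s in mono]
    return left, right

left, right = mono_to_stereo([1.0, 0.5], pan=1.0)
print(left, right)  # [0.0, 0.0] [1.0, 0.5] -> the sound comes only from the right
```

With pan at 0.0 both channels are identical, which is effectively mono played through two speakers; once the channels differ, the listener perceives direction.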
The Production Process Of Sound Recording
In the production of sound, the first step is usually the recording process, where a track will be created.
Next is usually the editing process, where the arrangement is looked at, and comping and noise reduction are done before time- and pitch-editing.
Then comes the mixing process where – once the tracks have been arranged – you blend them so they harmonise, balancing faders and applying compression so each sound can be heard clearly.
Next the mix is “bounced”, meaning all the tracks are rendered down into a single file. Maximising loudness and stereo widening are also common at this stage.
It is then converted to an appropriate sample rate and bit depth; for example, for a CD it should be 44.1 kHz and 16-bit.
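The sample rate and bit depth also decide how much space uncompressed audio takes up. A sketch for CD-quality stereo, assuming raw PCM with no compression or headers (the function name is my own):

```python
def audio_size_bytes(sample_rate, bit_depth, channels, seconds):
    """Size of uncompressed PCM audio: rate * (depth / 8) * channels * duration."""
    return sample_rate * (bit_depth // 8) * channels * seconds

# One minute of CD audio: 44,100 Hz, 16-bit, stereo
size = audio_size_bytes(44_100, 16, 2, 60)
print(size)  # 10584000 bytes -> roughly 10.6 MB per minute
```

That figure is why uncompressed audio is listed below as a large-file-size drawback, and why compressed formats exist at all.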
Raw Recording File Formats And Audio Compression Formats
A RAW audio format is for storing uncompressed audio and will not have any header information, like bit depth.
Examples of RAW audio files are:
Pros for using RAW audio files:
- Ideal for mastering sound
- Windows standard, making it widely supported
Cons for using RAW audio files:
- Large file size
Examples of audio compression files include:
Pros for using compressed audio files:
- Small file sizes
- Supported on a range of devices/ software
- Good quality on most applications
Cons for using compressed audio files:
- As lossy file formats, they remove certain elements of the audio
Where Sound Is Used
Games, of course, use sound, though different styles of game use it to varying degrees. In a pixel game you are unlikely to have narration or human speech, and sometimes sound effects are also limited or absent.
However, in cel-shaded and photorealistic games, sound is usually produced with great care, including sound effects (footsteps, gunfire etc.), narration and human speech, as well as a soundtrack to make the game immersive and realistic.
In The Last of Us, a horror game set in a dystopian future, each of the different zombies has been given its own sounds. This was probably done to make the player wary and overly conscious as they navigate the game: if players are quiet enough, they can hear a zombie before it sees them – a good gameplay mechanic.
Movies also rely heavily on audio – not just for dialogue or sound effects like cars honking, but also for music used to set up scenes or even for character introductions.
The French film “Irreversible” (2002) uses low-frequency background noise. Infrasound is inaudible to humans, yet it can induce anxiety and sorrow, as well as physical reactions like heart palpitations, which was deliberately exploited for the thriller. It bypasses the logical part of the brain and – while a natural reaction – this is perfect for a film like Irreversible.
Animation also relies on sound and uses it in much the same way film does; however, in animation it is usually deliberately exaggerated so that it reads clearly and fits into a less realistic environment. For example, in a fight scene a character’s grunt or noise of discomfort will be far more pronounced and louder than it would be in real life.
Overall, sound is very important, especially in the media industry where hooking an audience is key. Sound helps the audience feel emotion and keeps them engaged.