Reverb, delay, and the Haas effect

Below you will find some basic information about reverbs, delays, and the Haas effect that I compiled/copied from several sources mentioned in the references and throughout the text. For more information about (the use of) reverbs and delays see, for example, Mixing with Impact by Wessel Oltheten, Music Production by Hans Weekhout, and Mastering Audio by Bob Katz.

 

Reverb

Reverb (from reverberation) is the persistence of sound after the original sound is produced. It is created when a sound is reflected, causing numerous reflections to build up and then decay as the sound is absorbed by the surfaces of objects in the space, their amplitude decreasing until it reaches zero. Reverberation is frequency dependent. In comparison to a distinct echo (delay), which becomes detectable roughly 50 to 100 ms after the previous sound, reverb is the occurrence and smearing of the reflections that arrive after the initial reflections. Reverberation occurs naturally when a person sings, talks, or plays an instrument acoustically in a room or hall, but it can also be applied artificially during mixing using reverb effects (plugins). Although reverberation improves our experience of recorded tracks by adding a sense of space, it can also reduce speech intelligibility or make a mix muddy.

Basically, there are two types of reverb: algorithmic reverbs and convolution reverbs. Convolution reverb uses an impulse response (IR) to create reverb. An impulse response is a representation of how a signal changes when going through a system (in this case the ‘system’ is an acoustic environment). Impulse responses of both real-life acoustic environments and electronic hardware reverb units can be captured and used. The advantage of convolution reverb is that it can accurately simulate reverb and can sound very natural. The disadvantage is that it is computationally complex and can take up a lot of a computer’s processing power. Algorithmic reverb uses mathematical algorithms to simulate the delays that occur in reverb. The synthesis of echoes can be performed much more efficiently on a computer, using less processing. The trade-off is that these algorithms rarely sound as natural as convolution reverb.
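To make the distinction concrete, here is a minimal sketch in Python (using NumPy and SciPy; the parameter values and the single-comb design are illustrative assumptions, not how any particular plugin works). Convolution reverb is literally a convolution of the dry signal with an impulse response, while the simplest algorithmic reverbs synthesize the tail recursively from delayed feedback:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry, ir, wet_gain=0.3):
    """Convolution reverb: convolve the dry signal (NumPy array) with a recorded IR."""
    wet = fftconvolve(dry, ir)[: len(dry)]  # reverb tail, truncated to the input length
    return dry + wet_gain * wet

def comb_reverb(dry, sr, delay_ms=50.0, feedback=0.6, wet_gain=0.3):
    """Toy algorithmic reverb: one feedback comb filter (real designs use many, plus allpasses)."""
    d = int(sr * delay_ms / 1000)           # loop delay in samples
    y = dry.astype(float).copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]         # delayed output fed back in -> decaying echoes
    return dry + wet_gain * (y - dry)       # keep only the added tail as the wet signal
```

In practice one would normalize levels and use a proper network of combs and allpasses; the point here is only the contrast between convolving with a recorded IR and synthesizing echoes recursively.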

 

Early and late reflections

Reverb is made up of early and late reflections. Early reflections are the first reflections that reach our ears after being bounced off the nearest surfaces, for example the floor or a desk. Early reflections give us a first impression of the size of the room: in a small room they are many and arrive relatively early; in a large room they are fewer and it takes a while before they reach our ears. These early reflections consist of the part of the room sound within the first 5 to 100 ms after the direct sound. Consequently, the direct sound and these early reflections are highly correlated (they are attached to the direct sound). Since early reflections have been reflected relatively few times and have traveled a short distance, their high frequencies are still relatively intact.

In a large and diffuse room, the actual (uncorrelated) reverberation consists of reflections arriving after about 100 ms. This reverberation is detached from the direct sound. The reflections bounce from surface to surface, so they take longer to reach our ears, and there are so many of them that they smear together; consequently, we can no longer perceive them individually. The reverberation therefore sounds like a continuous signal. Because late reflections have been reflected more often and have traveled a longer distance, they contain fewer high frequencies.

The attached early reflections affect our perception of the depth and direction of the sound. The detached reverberation defines the size of the space.

Image: Reverberation: early and late reflections (image copied from Karagioza)
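To illustrate the two regions, here is a sketch of a toy impulse response: a few sparse early taps followed by a dense, exponentially decaying noise tail. The tap times, gains, and the 100 ms split are invented numbers for illustration, not measurements of any real room:

```python
import numpy as np

def toy_room_ir(sr=44100, rt60=1.2):
    """Illustrative impulse response: direct sound, sparse early reflections, dense late tail."""
    n = int(sr * rt60)
    ir = np.zeros(n)
    ir[0] = 1.0                                            # direct sound
    # Early reflections: few, discrete, attached to the direct sound (invented values)
    for t_ms, gain in [(11, 0.6), (23, 0.5), (37, 0.45), (62, 0.35), (88, 0.3)]:
        ir[int(sr * t_ms / 1000)] += gain
    # Late reverberation: noise-like and exponentially decaying (-60 dB at t = rt60)
    t = np.arange(n) / sr
    tail = np.random.randn(n) * np.exp(-6.91 * t / rt60)
    start = int(sr * 0.100)                                # detached reverb after ~100 ms
    ir[start:] += 0.25 * tail[start:]
    return ir
```

Such an IR could be fed straight into the `convolution_reverb` sketch above to hear the effect of the two regions separately.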

 

Reverb types

There are different types of reverb that you will find in most reverb plugins. Five main ones are:

Hall reverbs. These replicate the sound of a concert hall. Because of their gigantic size, they have super-long decays — even as long as several seconds. These reverbs are perfect for thickening up and adding space to strings and pads. Because of their thick, layered sound, halls can really muddy up a mix if you overuse them.

Chamber reverbs. These are similar to halls, delivering a lush, ambience-soaked sound. But they also give you an extra dose of clarity, which safeguards against the washed-out effect inherent in many hall reverbs. Historically, studios built reverb chambers by placing a speaker and a microphone inside of a reflective room, such as a tiled bathroom, a hallway, or a stairwell. A track would be amplified into the speaker, picked up by the microphone(s), then routed back to the recording desk.

Room reverbs. Based on the sound of a smaller acoustic space, room reverbs sound most like the normal ambience we’re used to hearing in the real world. Room reverbs impart a natural color and liveliness to a track. They’re also the easiest to fit inconspicuously into a mix. When used in moderation, these reverbs can add space to a source while maintaining an intimate, in-person character.

Plate reverbs. These reverbs don’t mimic a real-world acoustic space. One of the first types of artificial reverb, the plate reverb was originally produced using a magnetic driver (think of a speaker coil) to initiate (drive) vibrations in a large sheet of metal. The large metal plate (usually 6–7′ long by 3–4′ wide) vibrated via a signal passed from a transducer. The vibrations were then captured with a contact microphone. The result was dense, warm, and inviting.

Spring reverbs. These deliver a sound unlike any other. They operate similarly to a plate reverb with a transducer at one end and a pickup at the other but employ a spring (or multiple) instead of a plate. Because of their small size, spring reverbs are often found in guitar amplifiers; although, standalone spring tanks exist for studio use, as well. Spring reverbs yield a clean, bright sound, and are a must-have final touch for vintage-inflected guitar tracks.


Reverb parameters

The basic parameters that control a digital reverb (plugin) are the decay time (also called reverb time or RT60), the pre-delay (see below), and the early reflections.

Image: Some characteristics of a reverb (image copied from SoundOnSound)

 

Pre-delay. This is the time between the arrival of the direct sound and the arrival of the first reflections.

If the pre-delay is set to a higher value, this gives the impression that the listener is closer to the sound source, since the reflections arrive (much) later than the direct sound. A higher pre-delay also helps to improve vocal intelligibility, since the reflections overlap less with the direct sound. Obviously, the generated reverb may overlap with the next word, but this is not necessarily a problem, since the intensity decays over time and/or the reverb can be set short enough to avoid it. If the pre-delay is set to a sufficiently high value, the reverb will color the direct sound less, because by the time the reflections arrive at the listening position the direct sound has already changed (reducing the amount of comb filtering and interference).
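As a minimal sketch (assuming the wet signal has already been generated separately, for example on an effect bus, and has the same length as the dry signal), pre-delay amounts to shifting the wet signal later in time relative to the dry signal:

```python
import numpy as np

def apply_predelay(dry, wet, sr, predelay_ms=40.0):
    """Shift the wet (reverb) signal so its reflections start after the direct sound."""
    d = int(sr * predelay_ms / 1000)                          # pre-delay in samples
    shifted = np.concatenate([np.zeros(d), wet[: len(wet) - d]])
    return dry + shifted                                      # dry plus later-starting reverb
```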

Density. The number of reflections per unit of time.

Density is generally related to the total number and temporal distribution of the early reflections. It also simulates the manner in which sound is diffused by various reflective materials. For example, surfaces such as marble and glass cause almost no dispersion, while wood, due to its microporous structure, causes a high degree of dispersion.

A high density gives the feeling of a rich and saturated reverb, while a low density ‘thins’ the reflections and makes the reverb sound ‘light’. However, very low values can lead to a ringing effect due to the small number of early reflections. For this reason, care should be taken when reducing the density parameter on instruments with fast transients, such as snare or percussion. For percussive sounds the density is usually set to a higher value, since otherwise the reverb sounds like a flutter echo.

Diffusion. The amount of spread of the reflections.

Diffusion is linked to the density, as well as to other parameters such as the size (of the room), its shape, etc. It applies not only to the early reflections but also to the reverb tail, and reflects the amount of randomness introduced by the reflective surfaces in terms of their type, reflectivity, and placement in the hall. If a room has a strict shape and uniform surfaces, this parameter is low; if you simulate various obstacles of different kinds and materials, the diffusion is large.

Distance. The ratio of early reflections to the reverb tail.

This determines the perceived distance from the sound source.

Size. The width of the stereo image.

Damping. Damping of specific frequencies of the reverb.

This allows you to simulate the specific acoustic characteristics of a room. Different materials absorb different frequencies. If the higher frequencies are damped, the reverb will be less dominant and give the impression that the sound source is more distant.

Low/high cut. Lets you add a high-pass or low-pass filter to the reverb.

Wet/dry mix. The balance between the wet (with reverb) and dry (without reverb) signals.
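A minimal sketch of the last three parameters (damping, low cut, and wet/dry mix), assuming SciPy is available and using plain Butterworth filters as a stand-in for whatever filters a real plugin uses:

```python
import numpy as np
from scipy.signal import butter, lfilter

def shape_reverb(dry, wet, sr, damp_hz=6000.0, low_cut_hz=200.0, mix=0.25):
    """Apply damping (low-pass), a low cut (high-pass) and a wet/dry mix to a reverb signal."""
    b, a = butter(2, damp_hz / (sr / 2), btype="low")      # damping: roll off the highs in the tail
    wet = lfilter(b, a, wet)
    b, a = butter(2, low_cut_hz / (sr / 2), btype="high")  # low cut: keep rumble out of the reverb
    wet = lfilter(b, a, wet)
    return (1.0 - mix) * dry + mix * wet                   # wet/dry balance
```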

 

Image: H-Reverb from Waves. This is one of the reverb plugins that I use; it offers many possibilities to change the nature of the reverb.

 

Masking

Amplitude masking occurs when a louder sound masks a softer sound, especially if the two are in the same frequency range. The direct sound can mask the initial reverberation of that sound. Or, vice versa, the initial reverberation can mask, to some extent, the direct sound (for example, making a vocal less intelligible). By adding a small pre-delay between the direct sound and the onset of the reverberation we can separate the two sounds (temporal unmasking). In addition to amplitude masking there may also be the problem of directional masking during stereo-to-mono reduction. In mono, the direct sound will mask the reverberation (which would normally arrive from different directions), resulting in a less reverberant perception. Therefore, if you use an artificial reverb (plugin), in contrast to a stereo recording of the reverb, you may want to spread the reverberation away from the direct sound. Putting the reverb on the opposite channel from the direct sound is one way of doing this, but it may result in an unnatural effect.

Processing of Reverb

Reverbs can be further processed with, for example, EQ, compression, or modulation if you put the reverb on a separate effect bus in your DAW instead of putting the reverb directly on the audio track. For example, compressing the reverb will give a denser reverb.

 

Delay

A delay (or echo) is nothing more than the original audio signal being repeated again and again after a short period of time: the signal is repeated at regular intervals, decaying with each repetition. The amount of signal fed back into the delay line, and thus how many repetitions you hear, is called feedback. The more feedback, the more often the signal is repeated and the longer the effect carries on.
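A minimal sketch of such a delay in Python (the parameter values are arbitrary; real delay plugins add filtering, saturation, and modulation on top of this basic loop):

```python
import numpy as np

def feedback_delay(x, sr, delay_ms=300.0, feedback=0.4, mix=0.3):
    """Basic echo: repeats of the input at a fixed interval, each scaled by `feedback`."""
    d = int(sr * delay_ms / 1000)                  # delay time in samples
    wet = np.zeros(len(x))
    for n in range(d, len(x)):
        wet[n] = x[n - d] + feedback * wet[n - d]  # previous repeats fed back into the line
    return x + mix * wet                           # dry signal plus scaled echoes
```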

Short delays are usually classified into Doubling Delays and Slapback Delays:

  • Doubling Delay: The delay happens 20ms-40ms after the original sound.
  • Slapback Delay: Usually around 75ms-250ms after the original audio.

 

Delay Parameters
Delay time. The time it takes for the first repetition of the delay to kick in.
Sync. Lets you synchronize the delay’s repetitions with the song’s BPM (see the sketch after this list).
Low/high cut. Lets you add a high-pass or low-pass filter to your delay.
Feedback. The amount of signal you want feeding back into the delay line. The higher the amount, the more echoey it will sound.
Phase. Inverts the phase of the signal.
Wet/dry mix. The balance between the wet (with delay) and dry (without delay) signals.
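For the sync parameter, the relationship between tempo and delay time is simple enough to write down (a small sketch; the quarter-note-per-beat convention and the dotted/triplet factors are the usual ones):

```python
def synced_delay_ms(bpm, note=1/4, dotted=False, triplet=False):
    """Delay time in milliseconds for a given note value at a given tempo."""
    ms = 60000.0 / bpm * (note * 4)    # one beat (quarter note) = 60000 / bpm ms
    if dotted:
        ms *= 1.5                      # dotted note = 1.5x the plain value
    if triplet:
        ms *= 2.0 / 3.0                # triplet = 2/3 of the plain value
    return ms

# e.g. at 120 BPM: quarter note -> 500 ms, eighth note -> 250 ms, dotted eighth -> 375 ms
```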

Image: one of the better delays: EchoBoy from SoundToys.


The Haas Effect

 

Image: Helmut L. Haas, Professor Emeritus, Heinrich-Heine-Universität Düsseldorf, Institute of Neuro- and Sensory Physiology.

 

The Haas Effect, also called the precedence effect or the law of the first wavefront, describes the human psychoacoustic phenomenon of correctly identifying the direction of a sound source heard in both ears but arriving at different times. Due to the head’s geometry (two ears spaced apart, separated by a barrier), the direct sound from any source first enters the ear closest to the source and then the ear farthest away. The Haas Effect tells us that humans localize a sound source based upon the first-arriving sound, provided the subsequent arrivals come within 25-35 milliseconds. If the later arrivals come after that, two distinct sounds are heard. The Haas Effect holds even when the second arrival is louder than the first (even by as much as 10 dB). In essence, we do not “hear” the delayed sound.

It is this precedence effect that allows accurate sound localization in reverberant locations, since only the direct sound determines the perceived source location and the later reverberant reflections are merged into the first sound. The second sound still affects perception, but not in the way you’d expect: because it is still understood by our brain as one sound, you hear a widening of the sound across the stereo field, although you can still pinpoint the exact location the sound is coming from.

Sound arriving at both ears simultaneously is heard as coming from straight ahead, or behind, or within the head. The Haas Effect describes how full stereophonic reproduction from only two loudspeakers is possible.

The Haas effect can help increase definition, depth, and fullness without causing the masking problems that we may have with reverb. Haas found that very short echoes (less than about 1 ms) produce an ambiguous (confused) image, whereas echoes from about 10 ms to approximately 40 ms after the direct sound become fused with the direct sound.

The Haas effect is used for

  • Overcoming directional masking
  • Creating depth in mono without reverb
  • More focus when panning

In pop or classical mixing, we can use delays to take advantage of a very important corollary to the Haas effect, which says that fusion (and loudness enhancement) will occur even if the closely timed echo comes from a different direction. The brain will continue to recognize (binaurally) the location of the original sound as the proper direction of the source. The Haas effect thus allows added delays to enhance and reinforce an original sound without confusing its directionality, as long as the delay is not too long and the level of the delayed signal is not too loud. When the delay is too long or the delayed signal too loud, it starts to be perceived as a discrete echo, which we call the Haas breakdown point. Long delays maximize the definition of the source, as long as we have not reached breakdown.

Haas and mono-compatibility.
Although Haas delays often sound very impressive when heard in stereo, the parts on which they’re used can disappear or, at best, change in tone and level when mixed to mono, due to phase cancellation. When using simple Haas delays, be sure to check the recording in mono for cancellation and comb filtering. Tend to stay above 10 ms to improve mono compatibility. The more complex, diffuse, and numerous the delays, the less likely it is that comb filtering will occur. See also this SoundOnSound article.
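For a plain dry-plus-delayed-copy sum at equal levels, the cancellation frequencies can be predicted directly (a quick sketch, assuming no level difference between the two copies): notches fall at odd multiples of 1/(2·delay).

```python
def comb_notches(delay_ms, count=5):
    """First few cancellation frequencies (Hz) when a signal is summed with an
    equal-level copy of itself delayed by delay_ms."""
    tau = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * tau) for k in range(count)]

# comb_notches(1.0)  -> [500.0, 1500.0, 2500.0, ...]  (notches land right in the midrange)
# comb_notches(10.0) -> [50.0, 150.0, 250.0, 350.0, 450.0]  (lower and more densely packed)
```

This is consistent with the advice above: with very short delays the first notch sits in the midrange, while with delays above roughly 10 ms the notches move down and pack together, which tends to be less objectionable.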

Various studies have been done since, but here are the safe numbers you can work with to achieve fusion of two separate sounds into one:

    • For short sounds like clicks: 1 to 5 ms
    • For longer & more complex sounds: 1 to 40 ms
    • Volume variation must stay within 10 dB

Isn’t Panning the Same Thing as the Haas Delay?
Panning is close to, but not quite the same as, a delay that affects sound localization. When panning, all you’re doing is adjusting the volumes in your two speakers. If you pan to the left, the volume is gradually increased in the left channel as it is decreased in the right channel.

So, in the case of panning, you do experience the location of the sound shifting in either direction, but this happens due to the change in amplitude. Loudness obviously plays into localization, and with panning the shift occurs entirely due to the channel volumes, with zero influence from delays. The way to remember this is that panning is about levels, while delay is about timing (see also the YouTube video below).
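To make the contrast explicit, here is a minimal sketch of level-based panning (a constant-power pan law is assumed for illustration; plugins may use other laws). Note that only the channel levels change; there is no timing difference between the channels:

```python
import numpy as np

def pan(mono, position):
    """Level-based panning: position from -1.0 (hard left) to +1.0 (hard right)."""
    theta = (position + 1.0) * np.pi / 4.0   # map -1..+1 to 0..pi/2
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right])           # shape (2, n): [left, right]
```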

How to Use the Haas Method
To put this method into action, you’ll take your boring mono track in a sparse mix and apply some very specific steps:

  • Duplicate the mono track and pan both versions opposite of each other.
  • Choose which side you want to be the location of the sound and add a delay to the other.
  • Combat phase issues by detuning slightly with a pitch shifter.

Alternatively, you can use an effect bus with a delay which is panned to the opposite channel.
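A minimal sketch of those steps in Python (the 20 ms delay is a choice within the 10-35 ms range discussed above, not a fixed rule, and the optional pitch-shift/detune step is omitted here):

```python
import numpy as np

def haas_widen(mono, sr, delay_ms=20.0):
    """Haas widening: the same signal on both sides, with one side delayed by ~10-35 ms."""
    d = int(sr * delay_ms / 1000)
    left = mono                                                   # the 'anchor' side
    right = np.concatenate([np.zeros(d), mono[: len(mono) - d]])  # delayed duplicate
    return np.stack([left, right])                                # shape (2, n) stereo signal

def mono_fold(stereo):
    """Sum to mono to check for comb filtering / cancellation (see the section above)."""
    return 0.5 * (stereo[0] + stereo[1])
```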

References

The information above (Haas Effect) is based on/copied from:

  • After Helmut Haas’s doctoral dissertation, presented to the University of Göttingen, Göttingen, Germany, as “Über den Einfluss eines Einfachechos auf die Hörsamkeit von Sprache”;
  • Translated into English by Dr. Ing. K.P.R. Ehrenberg, Building Research Station, Watford, Herts., England, Library Communication no. 363, December 1949;
  • Reproduced in the United States as “The Influence of a Single Echo on the Audibility of Speech,” J. Audio Eng. Soc., Vol. 20 (Mar. 1972), pp. 145-159. [here]
  • See also Ledgernote.com
  • Mastering Audio (Bob Katz)
  • Wikipedia
  • Mono compatibility (Sound on Sound)


Published On: December 23rd, 2021 | Last Updated: December 20th, 2023 | Categories: Audio Processing Education, Education