DSP "digital complex limiter" "crest factor" - digital audio soft clipping

Started by deanpk 4 years ago · 28 replies · latest reply 3 years ago · 613 views

Hello - I read an old thread about how, in some circumstances, DSP deals with digital audio peak clipping.

I have a question regarding how DSP processes clipped audio peaks inside a common DAW (digital audio workstation) like Pro Tools. In 32-bit floating-point software like Pro Tools, internal audio processing has great headroom, and the claim is that there is no clipping at all. Yet the waveform as displayed can look clipped.

1. Are there digital complex limiters and crest factor manipulation functioning inside the DAW's DSP when audio 'peaks' in digital audio workstation software such as Pro Tools? Some have termed these functions the 'characteristics of soft clipping'.

2. If DAW software does attenuate clipped audio peaks using some 'algorithm', the energy must go somewhere - it becomes distortion, and these bits of data are merged with the signal. I assume they are merged at peak amplitude. In some music production environments this gives the impression of the music sounding 'louder' - is this because there is actually now more 'energy' (bits) placed at peak amplitude?

There are a lot of parameters I have not specified in this topic - I hope someone with a much greater math/code understanding of this topic is interested in helping me grasp the underlying details of this practical situation.

Thank you - Dean

Reply by dudelsound · January 23, 2021
I doubt there is anything very complex going on at the output of your average DAW - I wouldn't really want there to be. Within the entire signal chain there is a lot of headroom, so within channels and inside or between plugins the audio can have any amplitude the float range allows, but at the output the signal needs to be in the range -1.0 to 1.0, and as far as I know it is simply clipped (if (s > 0.99) then s = 0.99).

Soft clipping is a method with a non-linear amplitude curve that transitions more smoothly between the linear range and cut-off. This kind of distortion is a lot less audible and is part of many mastering setups.

Clipping and soft clipping remove peaks from the signal, thus decreasing the crest factor and increasing the loudness of the material.

Be aware that clipping also introduces DC offsets. Running an already-clipped signal through a high-pass filter might move some samples outside the -1.0 to 1.0 range.
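The DC point is easy to verify numerically. A quick sketch (my own numbers, not from any DAW): asymmetrically clipping a sine wave shifts its mean away from zero, i.e. introduces a DC offset.

```python
import math

# Asymmetric clipping demo: clip only the positive half-cycles of a sine
# and watch the mean (DC component) drift negative.
N = 1000
sine = [math.sin(2 * math.pi * k / N) for k in range(N)]

# Hard clip only the positive excursions at 0.5.
clipped = [min(s, 0.5) for s in sine]

dc_before = sum(sine) / N
dc_after = sum(clipped) / N

print(dc_before)  # ~0.0: a pure sine has no DC component
print(dc_after)   # negative (~-0.1): clipping the tops pulled the mean down
```

A high-pass filter would then re-center this signal around zero, pushing the untouched negative half-cycles further down - which is exactly how previously legal samples can end up outside the -1.0 to 1.0 range.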

Reply by deanpk · January 23, 2021

Thank you, I appreciate your reply. I can add some more detail in response.

The process described - "Soft clipping is a method that has a non-linear amplitude curve and transitions more smoothly between linear range and cut-off." What part of the software processing chain does this, and how? This is the area of interest, the margin between soft and hard clipping - what process manipulates the curve to make it smoother?

"Clipping and soft clipping remove peaks from the signal thus increasing the crest factor and the loudness of the material."

What happens to the extra amplitude (energy... bits...) that was the peaks? I am of the belief that the extra bits have to go somewhere, like conservation of energy. Do they get put back into the audio as a form of distortion, so that there is actually more 'energy' now in the peak area of the waveform?

Thanks for considering.

Reply by rbj · January 23, 2021


In the digital signal world, the waveform is made up of numbers.  A stream of numbers called "samples".  There isn't an "energy" in the sense that you have to worry about any Conservation of Energy theorem.  In an analog signal, the energy that is clipped off goes somewhere (like in a resistor), but not in a digital signal.  The DSP algorithm can do whatever it wants or whatever it's programmed to do with the stream of numbers that is a digital audio signal.

The non-linear soft clipping that reduces peaks is often "memoryless", which means there is no filtering and no analysis over a window of adjacent samples of audio. And there is no delay. It's pretty simple but a bit smoother than the hard clipping which is the simplest.

But there is also a DSP process called "limiting", which is very close to audio level "compression".  In those cases, there *is* analysis done to the audio in which when a narrow peak that exceeds the rails is detected (in advance), the gain of the limiter or compressor is adjusted lower to keep that peak from hard clipping.  Sometimes this might be called "Automatic Gain Control" or AGC.  These methods *do* require some memory and have a small amount of delay in their operation.
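A minimal look-ahead limiter along the lines rbj describes might look like this. This is my own toy sketch, not any DAW's actual code; the function name, `lookahead` length, and `release` constant are all illustrative choices.

```python
from collections import deque

# Toy look-ahead limiter: the output is delayed by `lookahead` samples so
# the gain can be pulled down *before* a peak that would exceed the rails
# reaches the output.
def lookahead_limit(samples, threshold=1.0, lookahead=4, release=1.001):
    buf = deque([0.0] * lookahead)  # the delay line (the "memory")
    gain = 1.0
    out = []
    for x in samples:
        buf.append(x)
        # Smallest gain that keeps everything currently in the window legal.
        peak = max(abs(v) for v in buf)
        needed = threshold / peak if peak > threshold else 1.0
        # Attack instantly on a detected peak; recover ("release") gradually.
        gain = needed if needed < gain else min(needed, gain * release)
        out.append(buf.popleft() * gain)
    return out  # (the last `lookahead` inputs stay in the delay line)

burst = [0.1] * 8 + [1.5] + [0.1] * 8   # a transient that would hard clip
limited = lookahead_limit(burst)
print(max(abs(v) for v in limited))      # never exceeds the 1.0 threshold
```

The key contrast with the memoryless clippers above: the 1.5 peak comes through attenuated but unflattened, at the cost of a few samples of delay.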

Reply by deanpk · January 23, 2021

Excellent. The "DSP process called limiting" - this is the topic I am interested in.

"narrow peak" - in audio this is likely what we call a 'transient peak' in the approx. order of 1-300 of milliseconds.

"the gain of the limiter or compressor" - is this a piece of code in the DSP to which there is no direct user control? Is it an embedded part of the DSP which does 'limiting' right on the margin between audio under and audio over the digital 'rails'?

FYI - the scenario I am studying is digital audio processing within the DSP (an audio engineer might call it AUX bus processing). The user (engineer) has the ability, at certain points in the signal path, to 'exploit' this clipping limiter at the margin of the 'rails', using the AGC/limiting function in a somewhat controllable manner determined by how much gain over the rail the user wishes to add. (This produces a sonic result that is sometimes desired in some types of audio/music production, but certainly not all.)

Is the above an accurate interpretation? I know not every situation is the same, but in terms of the variables we are discussing, is my interpretation consistent with general DSP understanding?

Very interesting thank you.

Reply by dudelsound · January 23, 2021

If I understand you correctly, you want to use a digital signal path that does some sort of clipping as a musical effect - very much like people use tape recorders for saturation, right?

If there is a limiter or soft clipper at the end of a DAW signal chain, then it is usually explicitly put there by the audio engineer (e.g. as a VST/AU/RTAS effect plugin). This is usually not just done by the DAW without telling the user.

Reply by deanpk · January 23, 2021

Thank you for your reply. Engineering the signal and using plugins to achieve analog-style saturation and coloration effects is very common, and I am very familiar with both the analog and digital domains.

My thread is dealing with DSP-level operations that are not intentionally trying to do audio-engineering-style processing, but with how certain amplitude events (digital clipping overages) are managed internally by the DSP at the sample level.

There are some DSP-level processes (as noted in an earlier reply) described as DSP-level 'limiting' - not 'limiting' in the sense that an audio engineer might apply a signal-limiting plugin or an external analog limiting device.

I am exploring how some audio engineers exploit the DSP's internal 'narrow peak crest limiting' margin when soft clipping audio - the sonic effect of 'loudness' this brings to some transient peaks in some music. This method is critically opposed by some engineers and generally is not recommended, but in some music styles it creates a 'loudness' which is not achieved using other methods.

I am trying to come up with a brief description, at the sample level, of what is happening in this process, in an attempt to causally correlate the sample processing with the sonic result.

Reply by dudelsound · January 23, 2021

I have programmed soft clippers and used them for mastering purposes, and I would share the source code if I hadn't written it on company budget :(

Pseudo code for positive samples

brick wall clipping:

y = x;
if (y > thresh) { y = thresh; }

soft clipping:

y = f(x);

f(x) being a function that has a gradient of ~1 @ x=0 and saturates to f(x) = thresh for x->infinity. f(x) = c0*atan(c1*x) is an example.
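To make those constants concrete (my own derivation of one possible choice, not dudelsound's actual code): requiring f'(0) = c0*c1 = 1 and f(x) -> c0*pi/2 = thresh for x -> infinity gives c0 = 2*thresh/pi and c1 = pi/(2*thresh).

```python
import math

# atan soft clipper: unity slope at x = 0, saturating toward `thresh`.
def soft_clip(x, thresh=1.0):
    c0 = 2.0 * thresh / math.pi
    c1 = math.pi / (2.0 * thresh)
    return c0 * math.atan(c1 * x)

# Brick-wall clipper for comparison (both polarities).
def hard_clip(x, thresh=1.0):
    return max(-thresh, min(thresh, x))

# Small inputs pass almost unchanged; large inputs approach the threshold.
print(soft_clip(0.1))    # ~0.099, close to the input
print(soft_clip(10.0))   # ~0.96, squeezed under thresh = 1.0
print(hard_clip(10.0))   # 1.0 exactly: the flat top
```

Note the soft curve never quite reaches the threshold - it only flattens asymptotically, which is where the "smoother transition" comes from.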

Sonically the result can be described as follows:

All clipping creates overtones or harmonics. Hard clipping creates more/higher overtones, soft clipping reduces the high frequency content of the clipping error.

Short impulses (like snare drums, etc.) in x are wide-band (they contain 'all' frequencies, up to very high ones). The clipping noise is masked by the signal, and clipping is only noticeable as a decrease in dynamics or directness.

Generally, wide band signals (like complete mixes of a song) mask a large amount of clipping (exploited in the mastering loudness war of rock/pop songs). For narrow band signals, less clipping can be applied before it is audible.

Hard clipping usually sounds harsher - like a 'crack' for impulsive sounds and like a 'rattling' for continuous sounds. Soft clipping sounds like that with fewer overtones :)
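The overtone claim can be checked numerically. This is my own quick experiment (the 1.5x drive level and the harmonic range summed are arbitrary choices): clip a sine both ways and compare the energy left in the high odd harmonics.

```python
import math

N = 4096
drive = 1.5  # sine amplitude before clipping at +/-1.0

def harmonic(signal, k):
    # Amplitude of the k-th harmonic via projection onto sin(k*theta).
    s = sum(signal[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return abs(2.0 * s / N)

x = [drive * math.sin(2 * math.pi * n / N) for n in range(N)]
hard = [max(-1.0, min(1.0, v)) for v in x]
soft = [(2 / math.pi) * math.atan((math.pi / 2) * v) for v in x]

# Symmetric clipping generates odd harmonics only; sum the high ones.
high_hard = sum(harmonic(hard, k) for k in range(9, 40, 2))
high_soft = sum(harmonic(soft, k) for k in range(9, 40, 2))
print(high_hard > high_soft)  # True: hard clipping is richer up high
```

Intuitively: the hard clipper's flat top has slope discontinuities, whose harmonics die off slowly, while the smooth atan curve's harmonics fall off much faster.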

Reply by deanpk · January 23, 2021

Fantastic information. And I can corroborate your descriptions of the sonic effects - I grasp your points.

In orchestral recording for film, which I do a lot of, we do not clip, of course. But here I am studying the premeditated use of clipping in some 'loudness'-driven music.

If I might describe a process - if you would give a thought to its integrity. To clarify, I do not wish to insert a new processor or plugin effect; I wish to utilize the 'onboard' DSP soft-clip 'limiting' of DAW software like Pro Tools. I think I am attempting to do manually what you designed your plugin for. Viable plugins like that do exist, mostly called simulators, attempting to simulate analog coloration and saturation. I am researching digital 'limiting', not directly the sonic effects of analog (saturation, coloration, etc.), though that might be an outcome.

EXAMPLE: I have a signal routed internally through the AUX busses in the DAW software, starting with an audio track already printed at a standard non-clipping level, below 0dB digital. I send this track through an internal DSP AUX (auxiliary) bus, add gain in the AUX bus until soft clipping occurs (on transient drums), using my ears as a guide, then send/sum this AUX bus at maximum non-clipping amplitude with other non-soft-clipped AUX busses, and print this audio as a wave file to a track. To my ears I now have the effect of adding a type of 'loudness' to the clipped audio track, which is preserved as 'louder' in the final printed waveform. So this soft clipping is done en route, in the AUX busses of the DAW's DSP, on the way to the final mix waveform.

Has the soft clipping process 'put' more bits of amplitude data from the soft-clipped AUX bus into the final waveform than was previously possible? (But note that the final waveform is not clipped - it is simply printed at the maximum amplitude before clipping.)

I realize there is some overlap of sonic effects and subjective 'ears' here. Thus, seemingly or in reality, the process has made those clipped sounds command more bit space at peak amplitude. The downside is the added distortion, which, as you note, can be regulated on transient sounds, might sound 'louder', and possibly has analog-like sonic effects.

When you can.. thank you. Appreciating your code knowledge of what is happening at the sample level.

Reply by dudelsound · January 23, 2021
I don't exactly get what you mean, but I'll try anyway. There is no difference between analog and digital clipping (except for numeric overflow, which is NOT happening in modern DAWs). Both come in hard and soft varieties.

There are no extra bits allocated for louder signals - louder signals are just represented by larger numbers (just like larger voltages or currents in the analog domain).

If you clip a signal with loud transients, you will increase the average loudness while reducing the transients (equally in analog or digital domain). 

If you mix such a signal with its unclipped original, you will get some of the transients back and still increase loudness a little (just like parallel compression).
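A tiny numeric illustration of that parallel-mix point (the sample values are my own made-up example): mix a clipped copy with the original, renormalize the peak, and the average level rises while the transient shape partly survives.

```python
# A five-sample "track" with one transient peak at 1.0.
original = [0.2, 0.3, 1.0, 0.3, 0.2]
clipped = [min(v, 0.5) for v in original]   # hard clip the transient at 0.5

# 50/50 parallel mix, then normalize the peak back up to 1.0.
mix = [0.5 * a + 0.5 * b for a, b in zip(original, clipped)]
peak = max(mix)
mix = [v / peak for v in mix]

avg_original = sum(original) / len(original)
avg_mix = sum(mix) / len(mix)
print(avg_mix > avg_original)  # True: average level went up...
print(max(mix) == 1.0)         # ...at the same normalized peak level
```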

Reply by deanpk · January 23, 2021

Thanks again. I am better informed.

Can I put it in terms of waveform shape? Maybe that clarifies:

A transient waveform that does not clip has a smooth, short crest at peak amplitude. If gain is added to this same waveform such that it clips, the waveform flat-lines at peak amplitude for a period before the smooth crest returns. The clipped waveform stays flat at peak amplitude for longer.

The clipped waveform is louder because more of it now occupies peak amplitude along the flat-lined (clipped) portion of the waveform.

(Given that waveforms are measured in time/duration on the horizontal axis.)

Thanks for dialog.

Reply by deanpk · January 23, 2021

"(except for numeric overflow which is NOT happening in modern DAWs)"

Is this due to the greater headroom of 32-bit floating-point processing?

Reply by dudelsound · January 23, 2021

Yes, mostly. With integers, a large number plus a small number may produce a large negative value. For 16-bit signed integers:

32767 + 1 = -32768
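That wraparound can be emulated explicitly in Python (a sketch of mine - real fixed-point hardware just does this implicitly) by masking a sum to 16 bits and reinterpreting it as a signed value.

```python
# Emulate 16-bit signed integer addition with two's-complement wraparound.
def int16_add(a, b):
    total = (a + b) & 0xFFFF                      # keep only the low 16 bits
    return total - 0x10000 if total >= 0x8000 else total

print(int16_add(32767, 1))    # -32768: the overflow wraps negative
print(int16_add(100, 200))    # 300: in-range sums are unaffected
```

This is why integer overflow in an audio path sounds catastrophic: a sample just over full scale jumps to the opposite rail, rather than flattening the way clipping does.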

Reply by deanpk · January 23, 2021

Thank you, I appreciate that. Looking back through the thread, I feel I have a much better understanding of clipping and how DSP might deal with it.

From an audio production perspective I will attempt a summary: deliberate, regulated digital soft clipping inside the DSP (this might show up on the DSP's peak meters) is a technique suitable in some circumstances. The dynamic and sonic effects it creates are loathed by many engineers, though utilized by many others in particular circumstances. It is sometimes considered an attempt to color and saturate the sound like analog equipment, though that is a somewhat misguided use of the technique, as there are good platform-level plugin simulators for that. The internal DSP clipping/limiting technique can, however, increase 'loudness' in a unique fashion by incorporating digital limiting in the soft-clipping region of the waveform's amplitude. This process produces its own characteristics, determined by the distortion characteristics of the memoryless soft-clipping process engaged in the DSP.

Reply by deanpk · January 23, 2021

One last summary for the archive - clearly I am not claiming 100% truth, just a narrative of what might be happening in some cases.

Deliberate digital soft-clip 'limiting' internal to the DSP - drawn in some fashion from the thread above.

For several reasons - the ongoing development of software/hardware and DSP DAW systems, the concurrent usage of various software/hardware at various stages of development with various proprietary processes, and the technological improvements in computer design, among others - this topic has various possible pathways of understanding. What follows is generalized.

From an audio production perspective, deliberate, regulated digital soft clipping inside the DSP (this might show up on the DSP's peak meters) is a technique suitable in some circumstances. The dynamic and sonic effects it creates are disliked by many producers/engineers and are not appropriate for most circumstances, though they are utilized by other producers/engineers in particular circumstances. It is sometimes considered an attempt to color and saturate the sound like analog equipment, though that is a somewhat misguided use of the technique, as there are platform-level plugin simulators for that purpose. The primary purpose of the DSP clipping/limiting technique is instead to increase 'loudness' in a unique fashion by incorporating digital limiting in the soft-clipping region of the waveform's amplitude. This process produces its own sonic characteristics, determined by the distortion characteristics of the memoryless soft-clipping process engaged in the DSP.

It is generally understood that modern DAW systems utilizing 32-bit floating-point processing have so much headroom that it is not possible to clip internally in the DSP. In practical use this seems to be contradicted, perhaps by production routing/techniques, third-party plugins, and the somewhat arbitrary and unique metering inside DAW software. It is also noted that 32-bit systems produce 32-bit wave files, which must be truncated when used in any process that is not 32-bit; how this is done also seems greatly influenced by the stage of development of the DAW system.

This topic crosses the expertise of several specialized areas of technology. Depending on one's perspective, it starts with a mathematical function, then platform coding, software implementation, and finally user operation with hands, ears, and speakers - one could and should also consider that order in reverse. Each specialty area is a subset of the complete knowledge, and it seems no design engineer drew a path through these areas with creative 'digital limiting' in mind. Unlike, say, recording audio with a microphone and saving it as a file, this path is inherent to the DSP DAW system by design. Thus a logical path of 'digital limiting' is deliberately drawn through the areas of specialty expertise, and it is ultimately judged a positive or negative attribute only by the producer/engineer's ears. One might contemplate that the only such path that exists by design is that of the protective and cautionary processes built into the DSP to keep hard clips from escaping to the speakers and the operator's ears. Perhaps the producer utilizing soft-clipping digital limiting is selectively passing audio through the marginal protective zone at each stage of the DSP; the trade-off is the addition of regulated distortion. Distortion, in DSP terms, is always a function of harmonics and might sound similar to analog distortion (coloration and saturation) in some circumstances, but the primary goal of digital limiting is the effect of loudness.

Reply by dszabo · January 23, 2021

In Pro Tools, and most other DAWs, internal processing is done with floating-point maths, either 32- or 64-bit. So for plug-ins and bussing, clipping more or less doesn't exist. Clipping can occur during recording or playback if the limits of the converter are exceeded. Also, if you are bouncing to a file, clipping can occur for the same reason. No DAW that I'm aware of will automatically deal with clipped audio samples in an elegant way. It is the audio engineer's job to ensure that audio signals do not exceed the limits of the converters/media, which is typically done by watching meters, mixing, applying dynamic range compression/limiting, or saturation/distortion. So while the DAW doesn't clip audio while moving data around internally, it's considered good practice to keep levels below full scale on the meters so you can be sure clipping doesn't happen at the outputs.

Reply by deanpk · January 23, 2021

Thank you Thread contributors.

Points taken on DAW and internal 32-bit DSP processing. Thank you - question at bottom.

Metering in DAWs can also be varied, misleading, and arbitrary. Pro Tools can display internal peak metering several ways; some audio engineers use this as an 'indication' of clipping even though high-bit-depth processing makes clipping implausible - as you note.

Your point about using the peak meters as 'good practice' is followed in nearly all audio engineering situations, in my experience. Certainly at the A>D / D>A converter stage, it would be hard to conceive of any hard clipping during conversion being anything other than an error.

Question: Given your notes on 32-bit internal DSP processing, what is happening to the waveform in Pro Tools when its shape exceeds the rails and visually seems to be "clipping" - not at the A>D / D>A converter stage, just internal bus processing? What is the waveform indicating if it draws over the 0dB line when viewed as audio in a track? This is easy to create and print. It might not sound like hard clipping, but it is over the top rail of the waveform - what is happening? Audio engineers call this clipping: it is an internal signal, printed to a track using busses only, and it can be seen in the waveform to be clipping over 0dB.

What is this wave form representing if not clipping?

Reply by dszabo · January 23, 2021

What the meters show is left to Avid, but one thing you can trust is that if they indicate clipping, then 0dB was exceeded; if they don't, then it wasn't. That's their primary utility, and I've never had a problem with it.

All waveforms you see in the edit window are external audio files and reflect the format specified in the session data. As such, the data in those files can be clipped, but the internal processing is not.

Reply by deanpk · January 23, 2021

Yep, agree - if the Pro Tools meters and waveform indicate clipping, one would think they are clipping. If the file is printed clipped, it is clipped.

I am trying to resolve the notion that, while the internal 32-bit DSP is not clipping, the audio engineer perceives the waveform as clipped and the meters as peaking (in the soft-clipping zone). The wave seems to be clipping internally, by indication of the metering and the waveform as it is drawn - but the 32-bit engine is not clipping internally... no conversion engaged, all internal DSP processing.

Does one just accept the paradox that an engineer is doing things in the bussing that can create soft clipping in a 32-bit environment that has no mathematical clipping? (This seems to be the real-world scenario.)

This seems like a paradox to an audio engineer, yet I am sure, there is an explanation.

Thank you again.

Note: In AUX bussing the audio does not need to be printed to a file; it is soft clipped in the bus routing before it is summed with other audio and becomes a data file. At the point of creating the file (output stage) the audio should not be clipped. This assessment of soft clipping in the bussing might seem absurd, but it is a scenario some engineers deliberately play out in practice. They feel they get a 'loudness' not attainable using plugins. (But worth mentioning again: the final file is not clipped.)

Reply by dszabo · January 23, 2021

To be clear, ProTools does not soft clip anything. There are plug-ins to apply soft clipping, but it’s the user’s obligation to use them or not.

Here’s an example.  Let’s say I have a track, and it isn’t clipped.  I put a gain plug-in on the track with 40dB of gain. The meter will show that it is clipped.  I output that track to bus 1, create a bus track with input bus 1 and output bus 2, and create an audio track with input bus 2.  On the bus track I put a gain plug-in with -40dB of gain. Now record the audio on the new track. Even though the source audio is clipping on the meter, the recorded data is not. This is because all that internal processing is using floating point math.  However, if you move the attenuator to the second audio track and record it again, you will see clipping. This is because the audio was clipped when the file was generated, which is now external to PT.  And once the data is clipped, it’s clipped forever.
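That bus experiment can be mimicked in a few lines (my sketch, with arbitrary sample values): in the float domain, +40dB followed by -40dB is lossless even though the intermediate values are far "over 0dB"; clipping to a fixed [-1, 1] range in between destroys the signal permanently.

```python
import math

GAIN = 10 ** (40 / 20)   # +40dB as a linear factor (100x)

track = [0.05, 0.7, -0.3, 0.9, -0.8]

# Internal float path: boost, then attenuate. Values like 70.0 are "over
# 0dB" on a meter but are still perfectly ordinary floating-point numbers.
boosted = [v * GAIN for v in track]
restored = [v / GAIN for v in boosted]

# Fixed-range path: the boosted signal is hard clipped (as if written to a
# normalized file) before attenuation. The waveform shape is destroyed.
clipped = [max(-1.0, min(1.0, v)) for v in boosted]
after = [v / GAIN for v in clipped]

print(all(math.isclose(a, b) for a, b in zip(track, restored)))  # True
print(after)  # every sample has collapsed to about +/-0.01
```

This is the whole "no internal clipping" story in miniature: the meter's 0dB line is a convention, not a limit of the number format.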

In terms of a paradox, remember that we are talking about the internal processes. The user can simply watch the meters to ensure that no clipping occurs, and you'll be fine. The point is that through the busses and plug-ins, meters may show clipping even though no clipping has occurred. But if you record or play back a clipped signal, you cannot get the unclipped audio back.

Reply by deanpk · January 23, 2021

Very comprehensive and exactly to my practical application. And patient, thank you.

So at 32-bit processing the internal DSP is not clipping even though the meters might indicate peaks. OK.

Now, a practical example: I hear louder audio when I raise the fader over 0dB; it peaks over 0dB on the meters (all internal DSP); it is not getting internally 'limited' by any process; it just keeps getting louder as I raise it over 0dB.

If one is summing a bus that has its transients peaking over 0dB with another bus whose peaks are kept under 0dB - both sending to a bus whose gain structure is restored below 0dB (like your -40dB tool) to store the file - how does the DSP 'pack' the over-0dB audio into summed busses not peaking over 0dB?

How is the DSP summing busses with peaks over 0dB together with busses without, while maintaining the original dynamic range?

It seems like there is some point in the internal bussing/summing path where the audio over 0dB is somehow 'packed in louder' into the now gain-restored signal. (I know the process of mixing - not that - but how does the DSP maintain accurate dynamic range when summing two busses, one peaking and one not, into the same below-0dB file?)

Thank you, thank you, for your time.

Reply by dszabo · January 23, 2021

If you don’t already know about floating-point math, it’s worth your while to look it up on Wikipedia. But to answer your question: digital sampling will always have some quantization error, which basically means that numbers will differ from the correct value by some amount. With fixed-point math, the quantization error is fixed, but with floating point, it changes based on how big the sample value is. If you add two signals, one of which is big relative to the other, the accuracy of the smaller signal will be reduced and it will become distorted. Generally this isn’t a problem, because the distortion is quiet relative to the larger signal, so you probably won’t hear it. Note that this isn’t a function of 0dB or any fixed level, and that it occurs all the time regardless of the peak level.
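A two-line demonstration (mine) of that magnitude-dependent quantization, using Python's 64-bit floats: the larger the companion signal, the worse (or more totally) the small signal is lost in the sum.

```python
big = 1.0e8      # a huge "signal"
small = 1.0e-9   # a tiny one riding on top of it

# Add then subtract: the small signal should come back unchanged, but the
# sum was quantized at the big value's precision, so it vanished entirely.
recovered = (big + small) - big
print(recovered)  # 0.0

# With a closer level match, only a tiny rounding error remains.
big2 = 1.0
recovered2 = (big2 + small) - big2
print(abs(recovered2 - small) < 1e-12)  # True
```

The same effect, scaled down, is the "distortion of the quieter signal" in bus summing - normally far below audibility, but always present.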

Reply by deanpk · January 23, 2021

Astounding information. I thank you for taking the time to explain in these terms. I grasp its practical implications and my incorrect notions of 32-bit DSP processing. Part of my intelligibility problem was not considering a stored file as external to the internal DSP processing - my definitional error from the start; once I got past that, it fell into place. (External for me was playback through an external digital-to-analog hardware converter; a file was still internal... which, in the DSP world, I now get - it is no longer in the DSP, it is externally stored as a file, and in this form the audio can be clipped...)

In gratitude, if you're interested, I will share how some of this has been experienced in a film music mixing studio over the last 22 years.

We adopted a completely digital studio based around the Euphonix System 5 in 1998; our console was number 4. There were a couple of large hardware DSP changes along the way. The DSP was 32-bit, and we could easily hear the benefit when we compared it to the other digital systems of that era that might have been in use, like Pro Tools etc. For our orchestral recording techniques, the 32-bit floating-point processing in 1998 - and later the 40-bit, 32-bit IEEE floating-point, and 32-bit fixed-point formats - was a huge quality improvement: it maintained audio quality across the large dynamic range we mix with in film music, and in the quality of reverb and reverb tails, which are often mixed at very low level, where part of the psychoacoustic effect of the underscore adds to the movie. The bus summing was also much better, for decades. Then Avid purchased Euphonix (and the two tech-support systems I utilized became one, which was helpful). Just for the record, we don't clip or peak in this film-score audio world.

My thread here in the forum has been about work in other music mediums, where over the years various approaches have been taken to get digital 'loudness' in pop/rock audio production. The thread suggests that, prior to 32-bit floating-point processing in DAW systems like Pro Tools, there might have been some DSP function - variously applied or not in proprietary software - that engaged some sort of soft-clipping 'limiter' to keep hard clips from reaching the speakers while working with audio, and that this DSP soft clipping was exploited to achieve 'loudness'. Once 32-bit processing is introduced, that becomes a redundant process, and so DSP engineers started to create soft-clip plugins. My experience in audio traverses this arc, and some of my technical detail was living in both worlds... some of my systems were even in two worlds due to hardware OS restrictions.

Thank you again all who contributed.

Reply by deanpk · January 23, 2021

I also have to note that this information is incredibly helpful:

"If you add two signals, one of which is big relative to the other, the accuracy of the smaller signal will be reduced and become distorted. Generally, this isn’t a problem because the distortion is quiet relative to the larger signal, so you probably won’t hear it."

I feel I can make the following claim:

When the DSP sums audio busses A and B (or A and B and C, etc.) into bus C, the proportional difference in level between A and B will affect the accuracy of the smaller signal in bus C.

When summing two audio busses called A and B in a 32-bit DSP environment, with bus A sending a signal proportionally larger than bus B, into a third bus C, "the accuracy of the smaller signal [B] will be reduced and become distorted"... "and [this] occurs all the time regardless of the peak level".

I realize that in this scenario I am matching events of very different proportion, and the distortion added to bus B due to the relative difference in amplitude might be considered inaudible (a false equivalence), though I feel the logic is correct.

To suggest a sonic outcome: one might consider that the audio of bus A, being of greater proportion, is more accurately defined in summed bus C - it is bussed with less distortion.

Reply by dszabo · January 23, 2021

Sounds like you’ve had a long career with audio. Very very cool. I’d be curious to know what productions you’ve gotten to work on.  It sounds like you’ve got it, but a couple more points of clarification. When we design DSP systems, the bit depth is important, but so is fixed point vs floating point. We don’t generally refer to data as 32 bits without specifying fixed or floating point. The behavior and mechanics of each type are different.

Also, the relative signal level is on a sample by sample basis (each point in the waveform). So generally, a louder signal will have less distortion from the summing operation, because most of the samples are larger.  However, there will be samples where the quieter signal is larger, in which case the opposite is true. Again though, we’re looking at distortion that is much quieter than the audible signal.

Your example of reverb is notable as well. Let’s say we have a signal that has a high dynamic range, with loud and quiet parts, maybe someone talking. We mix in a bit of reverb, but just a bit.  The reverb will be distorted while the person is talking, because it is a lower signal level.  However, when the person is not talking, now the reverb is much louder, so it will not be as distorted. The nice thing about floating point is that the quantization scales based on the signal magnitude, so even very low mix levels for reverb will still be pretty accurate on the ‘tails’.  Cheers

Reply by deanpk · January 23, 2021

Great thanks!

Reply by deanpk · January 23, 2021

My work has been for the film composer Carter Burwell; for the notable productions I will refer you to the IMDB.com database, as there are many, and they are interesting.

And some more details here:


This website has not been updated; the studio pictures are old. The System 5 now resides at Vanderbilt University, NA. We have built a new studio with an AVID S6. Over the years I have worked on nearly all of the large scoring stages in London, Abbey Road, and Hollywood, though I do most of my work in NYC, where I live.

Reply by dszabo · January 23, 2021

Wow, you aren't kidding.  That is a very impressive CV.  The S6 is a beast of a board, like serious big boy stuff.  It's the kind of thing that if I walked into a studio and saw it, my first thought would be, "these guys are not messing around!"  Cheers!

Reply by dszabo · January 23, 2021

The meters in PT will indicate clipping at 0dB. Audio inputs/outputs and recorded audio will all clip at 0dB. Plug-ins and busses clip at +(some big number) dB, such that they essentially never clip.