The inherent sound quality of particular DAWs has been the source of a running debate for years. All things being equal, audio at the same bit depth (say, 24-bit) and sample rate should sound the same in any DAW -- from that narrow technical standpoint there are really no inherent differences in audio quality.
But of course in the real world all things are not equal, and that's undoubtedly what gives rise to the impression that one DAW or another might sound better -- or just sound different. Obviously any use of plug-ins will cause audible differences, as will even the smallest level differences (in overall level, or in the level of particular tracks in a mix). But there are other, less obvious reasons why audio (or a mix) may sound different in different DAWs, reasons that are not necessarily an indication of inherent technical differences in audio quality.
Many people have turned to the familiar null test in an effort to pinpoint potential differences. This would be done by creating the same mix in, say, two different DAWs, lined up precisely (sample-level accuracy) in time, with all identical levels and settings, and no processing (which of course would impart significant differences in sound); these would be bounced, and the bounced files would be lined up in another session. The polarity of one file (either one) would be flipped -- if the files are truly identical, the result should be silence. If not, small bits of audio would indicate differences between them.
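For the curious, here's a minimal sketch of that comparison step in Python, using numpy and the soundfile library (the file names are hypothetical, and it assumes both bounces are the same length, sample rate, and sample-aligned):

```python
# Null-test sketch (hypothetical file names): flip the polarity of one bounce
# and sum. Identical audio cancels to silence; anything left is a difference.
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

mix_a, sr_a = sf.read("bounce_daw_a.wav")  # shape: (samples, channels)
mix_b, sr_b = sf.read("bounce_daw_b.wav")
assert sr_a == sr_b and mix_a.shape == mix_b.shape, "bounces must match exactly"

residual = mix_a - mix_b  # subtraction == summing with one polarity flipped

peak = np.max(np.abs(residual))
if peak == 0:
    print("perfect null -- the files are bit-identical")
else:
    print(f"residual peak: {20 * np.log10(peak):.1f} dBFS")
```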
But even this simple test is not as straightforward as it may seem. Even with identical levels, panning tracks in a test mix may introduce subtle differences that will show up in a null test. On the most basic level, DAWs often express pan position differently (percentages vs. other L/C/R numbering schemes), potentially resulting in slightly different positioning, which may affect audible masking and other subtle aspects of the mix. On a more technical level, different DAWs use different Pan Laws -- the rule that determines how much the DAW compensates the level of a signal as it's panned away from center. So a track panned with a -3 dB compensated Pan Law (the default in Logic) may end up at a different level than that same track panned in Pro Tools with a different default Pan Law; in either case the result will likely be subtly audible, both in normal listening and in a null test.
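To make the numbers concrete, here's a small illustrative sketch of two common pan laws -- not any particular DAW's documented implementation -- showing how the same center-panned track lands at a different level under each:

```python
# Illustrative pan-law math (not any DAW's documented implementation).
# pos runs from -1.0 (hard left) through 0.0 (center) to +1.0 (hard right).
import math

def equal_power_gains(pos):
    """-3 dB-at-center 'equal power' law: cos/sin over a quarter circle."""
    theta = (pos + 1.0) * math.pi / 4.0      # map pos to 0..pi/2
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

def linear_gains(pos):
    """-6 dB-at-center linear law."""
    return (1.0 - pos) / 2.0, (1.0 + pos) / 2.0

for law in (equal_power_gains, linear_gains):
    left, _ = law(0.0)  # a track panned dead center
    print(f"{law.__name__}: {20 * math.log10(left):+.1f} dB per channel at center")
# equal_power_gains: -3.0 dB, linear_gains: -6.0 dB -- the same track, panned
# to the same position, sits at a different level under each law.
```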
If a virtual instrument has random elements (like random or round-robin sample selection, commonly employed in better VIs), those will also produce audible differences in both listening and null tests. And some technical processes will certainly result in audible differences -- the sample-rate conversion algorithms in different DAWs are especially likely to yield different results, and indeed one may be both measurably and audibly better or worse than another. But two truly identical, simple bounced mixes (with no dithering) should null out, and should sound pretty much indistinguishable.
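As a purely illustrative example of why different SRC algorithms won't null against each other, here's a sketch that resamples the same test tone two different ways (one deliberately crude) and compares the results; it requires numpy and scipy:

```python
# Purely illustrative: resample the same 1 kHz tone from 48 kHz to 44.1 kHz
# two different ways and null the results against each other. Different SRC
# algorithms leave a clearly nonzero residual.
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 48000, 44100
t = np.arange(sr_in) / sr_in
tone = 0.5 * np.sin(2 * np.pi * 1000.0 * t)  # one second of a 1 kHz sine

polyphase = resample_poly(tone, sr_out, sr_in)  # filtered polyphase SRC
positions = np.arange(len(polyphase)) * sr_in / sr_out
naive = np.interp(positions, np.arange(sr_in), tone)  # crude linear interpolation

residual = polyphase - naive
print(f"residual peak: {20 * np.log10(np.max(np.abs(residual))):.0f} dBFS")
```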
Another common argument is that the summing algorithms (the math used when multiple tracks are combined into a final stereo mix) in various DAWs sound different/better/worse. There may have been something to this back when hardware-based Pro Tools used different math (fixed-point) than native DAWs (floating-point), but nowadays all DAWs use floating-point math. While some offer options for higher-resolution internal processing (64-bit floating point vs. the usual 32-bit), at the same setting I wouldn't expect a significant difference in sound quality.
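A quick toy test (synthetic signals, not any DAW's actual mix engine) suggests why: summing the same tracks in 32-bit vs. 64-bit floating point does produce measurable differences, but at a level far below anything audible:

```python
# Toy check on summing precision: mix 64 synthetic tracks in 32-bit and
# 64-bit floating point and compare. The difference is real, but on this
# data it sits well below -100 dBFS -- nowhere near audibility.
import numpy as np

rng = np.random.default_rng(0)
tracks = rng.uniform(-0.5, 0.5, size=(64, 48000)).astype(np.float32)

sum32 = tracks.sum(axis=0, dtype=np.float32)   # 32-bit accumulation
sum64 = tracks.astype(np.float64).sum(axis=0)  # 64-bit accumulation

diff = np.max(np.abs(sum32.astype(np.float64) - sum64))
print(f"max difference: {20 * np.log10(diff):.0f} dBFS")
```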
But the debate will certainly rage on. I'm sure there are many people who would challenge my comments, and I welcome them. The industry is full of examples where people swore that "x" was audibly indistinguishable from "y" (like live acoustic players vs. speaker playback, a common demo from speaker manufacturers in days past), while our ears nowadays would clearly perceive the differences. So it's always good to keep an open mind. :-)