Alan Ratcliffe wrote:
Good post, mo!
Another thing (not really related to the OP, but still worth considering) would be converter quality - I've heard older, lower spec interfaces that sounded far better than newer but cheaper models that have "better" specs. A lot of really fine audiophile audio has been recorded at 16/44.1. My wife has my old 20 bit Event/Echo Darla on her work machine, and it sounds wonderful.
Well, Bob Ohlsson has said many a time (along with other luminaries in the field) that converter technology, i.e. the fundamentals of the design, has not changed much, if at all, in the last 20 years. What has changed is the quality of the analogue components used in their construction. For the most part, the same converter chips (Asahi Kasei) have been employed by most of the major manufacturers, like MOTU, M-Audio, etc., so their potential is largely similar. But these companies, seeking to lower production costs and maximize profits, have chosen inferior analogue components, which are the real cause of any degradation in quality. So I think it's safe to say that the earlier converters were probably given more attention, component-wise, and thus made better use of their chips, resulting in better audio quality.

I have a 15-year-old purple-face Apogee AD8000 that I still use regularly, and even though it only has a maximum sample rate of 48kHz, it has a very pleasing sound. In fact, those interfaces typified the sound of the digital revolution from the mid '90s to the early 2000s because they were found in many of the world's top music, film-scoring and mixing studios. Plus, it's built like a tank.
Regarding 16/44.1, I recently attended a symposium put on by Prism and SADiE (now under the same company umbrella) where Graham Boswell, the co-founder of Prism and a pioneer of much of Neve's digital systems, demonstrated that even 16-bit audio gives us all the resolution we need, provided the right dithering and noise shaping are employed.

It's a heavy set of concepts, but he explained that bit depth is not actually related to absolute resolution. If you're interested I'll try to explain, but for all intents and purposes I'll just say this: down at the 16-bit noise floor without dither (around -93dBFS), low-level information, such as the end of a reverb tail, never reaches the first quantization step, so what does survive gets truncated into a square wave. He was able to show that when a low signal is attenuated it does not fade smoothly but ends abruptly in digital distortion - the square wave. By adding dither - randomized low-level noise - you essentially exchange about 3dB of noise floor for resolution as those signals fade out. Your noise floor then sits at around -90dB, but you get a smooth fade-out of the low-level signals. A noise-shaping filter is then applied to shift that noise into a region where the ear is less sensitive, giving you the extra resolution without the audible effect of the added noise. A brilliantly elegant solution.
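For anyone who'd rather see it than take it on faith, here's a minimal NumPy sketch of the same idea (my own illustration, not Boswell's actual demo - all the function names and numbers are just for the example): a tone fading below the 16-bit step size collapses into a few hard steps when plainly requantized, fades smoothly when TPDF dither is added, and a simple first-order error-feedback loop then pushes the added noise up the spectrum.

```python
import numpy as np

FS = 44100                 # sample rate
BITS = 16
Q = 2.0 ** -(BITS - 1)     # one quantization step for signals in [-1, 1)

def quantize(x):
    """Round to the nearest 16-bit step (no dither)."""
    return np.round(x / Q) * Q

def quantize_tpdf(x, rng):
    """Add triangular (TPDF) dither of +/- 1 LSB before rounding."""
    dither = (rng.random(x.shape) - rng.random(x.shape)) * Q
    return np.round((x + dither) / Q) * Q

def quantize_noise_shaped(x, rng):
    """TPDF dither plus first-order error feedback: each sample's
    quantization error is subtracted from the next input sample,
    which tilts the noise spectrum toward high frequencies."""
    y = np.empty_like(x)
    err = 0.0
    for n in range(len(x)):
        d = (rng.random() - rng.random()) * Q
        y[n] = np.round((x[n] - err + d) / Q) * Q
        err = y[n] - (x[n] - err)   # error of this step, fed back next step
    return y

rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
# 1kHz tone fading from -60dBFS down to -100dBFS, past the 16-bit floor
fade = 10 ** (-60 / 20) * 10 ** (-40 * t / t[-1] / 20)
x = fade * np.sin(2 * np.pi * 1000 * t)

plain = quantize(x)
dithered = quantize_tpdf(x, rng)
shaped = quantize_noise_shaped(x, rng)

# Last 0.25s: the signal is around one LSB, so undithered rounding
# leaves only a couple of hard levels - the "square wave" ending.
tail = slice(-FS // 4, None)
print("distinct levels, undithered tail:", len(np.unique(plain[tail])))
print("distinct levels, dithered tail:  ", len(np.unique(dithered[tail])))

# Where does the quantization error energy sit in the spectrum?
for name, y in [("flat dither ", dithered), ("noise shaped", shaped)]:
    E = np.abs(np.fft.rfft(y - x)) ** 2
    half = len(E) // 2
    print(name, "error energy below/above ~11kHz:",
          round(E[:half].sum() / E[half:].sum(), 2))
```

The undithered tail collapses to two or three fixed levels while the dithered one keeps many, and the shaped version shows most of its error energy above ~11kHz. Mastering-grade noise shapers use psychoacoustically weighted filters rather than this crude first-order loop, but the principle is the same: total noise power goes up slightly while the audible portion goes down.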
Cheers!