adamlloyd83 wrote:
I simply upsample because the recordings seem to sound bigger and more open when I master them at higher sampling rates. I don't really know why. Well, I sort of do, but I try to use my ears rather than intellectually justify it. But I'm not talking about plugins that upsample internally; I've never had great results with that in my experience. I use an iZotope resampler and master in Pro Tools in a session at the new sample rate. I seem to get fewer artifacts from limiters, and again, the recordings sound bigger and seem to have more depth than when I've tried mastering at 44.1k...
Yes, I can imagine that certain limiter algorithms, when pushed hard, will sound cleaner at higher sampling rates, so I do believe there may be justification for it on that basis, the insanity of the "loudness wars" aside. At the levels I like recordings to be mastered at I doubt you'd hear the difference, but the last time anybody mastered at those levels was at least 10 years ago. Here's hoping things might change!
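The reason a hard-driven limiter can sound cleaner at a higher rate is aliasing: the nonlinearity generates harmonics above Nyquist, which fold back into the audible band. Here is a minimal sketch of that effect (my own illustration, not anything from iZotope): hard-clip a 10 kHz sine at two sampling rates and measure how much spectral energy lands away from the genuine harmonics.

```python
import numpy as np

def aliased_energy(fs, f0=10000.0, n=1 << 15):
    """Hard-clip a sine at sampling rate fs and return the fraction of
    spectral energy NOT near the genuine sub-Nyquist harmonics,
    i.e. energy that has aliased back into the band."""
    t = np.arange(n) / fs
    x = np.clip(1.5 * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)  # crude limiter
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # Odd harmonics of a symmetric clip that genuinely fit below Nyquist
    harmonics = [k * f0 for k in range(1, 20, 2) if k * f0 < fs / 2]
    mask = np.ones_like(freqs, dtype=bool)
    for h in harmonics:
        mask &= np.abs(freqs - h) > 200.0  # exclude a band around each one
    return float(np.sum(spec[mask] ** 2) / np.sum(spec ** 2))

# At 44.1k the 3rd harmonic (30 kHz) folds back; at 96k it is representable,
# so the aliased fraction is noticeably smaller.
print(aliased_energy(44100))
print(aliased_energy(96000))
```

The aliased fraction is larger at 44.1 kHz, which is consistent with limiters sounding "cleaner" when run at higher rates (or internally oversampled).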
adamlloyd83 wrote:
I spoke with the dude that programmed the Izotope algorithm a while ago and I believe he said that with most modern resampling software, the samples are multiplied to a value that is divisible by both the source rate and the destination rate, so the whole 88.2 > 44.1 being "better math" is actually not true...
I'd say that is definitely not true. The ratio between 88.2 and 96kHz puts much more demand on the interpolation filtering than the ratio between 88.2 and 44.1kHz. It boils down to the transition band in the filter specification being much narrower than in the 88.2 to 44.1 case. Many of the converters out there have been designed for real-time use with low CPU demand in mind, and because of that the filter design is often compromised. Maybe the iZotope algorithm uses direct evaluation of the reconstruction formula, in which case this may not apply to it, but generally, considering the converters out there, it does matter, because they use the upsample-by-L, decimate-by-M approach. If you care to look at this site, you can see that many commercial converters actually perform quite poorly:
http://src.infinitewave.ca/
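To see why the ratio matters, reduce the two rates to a rational factor L/M: the converter upsamples by L, filters, then decimates by M. A quick sketch with the standard library:

```python
from fractions import Fraction

def conversion_ratio(src, dst):
    """Rational resampling factor L/M for a src -> dst rate conversion:
    upsample by L, lowpass filter, decimate by M."""
    r = Fraction(dst, src)
    return r.numerator, r.denominator

print(conversion_ratio(88200, 44100))  # (1, 2): trivial 2:1 decimation
print(conversion_ratio(88200, 96000))  # (160, 147): a far harder filter job
```

88.2 to 44.1 is a clean 2:1 decimation, while 88.2 to 96 requires an upsample-by-160, decimate-by-147 structure whose anti-imaging/anti-aliasing filter has a much tighter transition band. So "the samples are multiplied to a common value" doesn't make the two cases equally easy.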
adamlloyd83 wrote:
Could you explain this part a little more for me? >>> "...use Har-Bal on the lowest sampling rate version. Why? Because this will give the best frequency selectivity in Har-Bal (ie. more spectrum control) due to the FIR length being fixed." (especially the last part, spectrum control and FIR length) I mean, how is it different from if I'm working with a song that was originally recorded at 96k, and then I use Har-Bal? I feel like when I look at the upsampled version in HB, compared to the version at 44.1, I see more harmonic detail and can make really subtle, detailed tweaks (this part could be my imagination, I don't know). I've found over the years, as I've trained my ears, that the less drastically I use HB, the better the result (but that HB is still an extremely useful tool for me, for tightening and focusing the tone of the track).
The frequency selectivity of an FIR filter (ie. the inverse bandwidth of the narrowest notch or peak it can realize) is determined by the filter length in samples and the sampling rate. It's basically proportional to the filter length divided by the sampling rate. Hence, because the length is fixed, if you increase the sampling rate the selectivity is reduced.
I'm not sure why you find you can do better at high rates. Maybe the plots look less confusing? In any case, the place where the loss in selectivity comes into play is at the low-frequency end of the spectrum. Above a couple of hundred Hz there's more than enough selectivity to deal with the spectrum with equal authority at either rate. It's only below 200Hz that the difference in selectivity will show up.
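Putting rough numbers on that (8192 taps is a hypothetical length for illustration; the actual Har-Bal FIR length isn't stated here): the narrowest feature a fixed-length FIR can realize is on the order of one DFT bin, fs divided by the tap count.

```python
def selectivity_hz(fs, n_taps=8192):
    """Approximate narrowest spectral feature (~one DFT bin) an FIR of
    n_taps samples can realize at sampling rate fs. 8192 taps is a
    hypothetical length chosen purely for illustration."""
    return fs / n_taps

for fs in (44100, 96000):
    print(f"{fs} Hz: ~{selectivity_hz(fs):.1f} Hz minimum feature width")
# ~5.4 Hz at 44.1 kHz vs ~11.7 Hz at 96 kHz: below 200 Hz that is
# roughly 37 resolvable bands versus 17.
```

Same filter length, more than twice as coarse at 96k, and the coarseness only bites in the bottom couple of hundred Hz, which is exactly where bass and kick shaping happens.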
regards,
Paavo.