Would it though? I mean, it'd obviously take a while to collect enough samples, but 2.4 million samples at 24 bits would only be ~8 MB of data. I suppose it depends on how much window overlap you're attempting?
-Russ
At the risk of derailing this thread, it's not about the size of the data registers so much as the number of calculations we're asking the computer to do. There's a reason FFTs are commonly used for
CPU heating tests. And remember, the whole point is that it's happening in real time, which is 24 times per second in Smaart's case. (In IR mode you can go up to 512k, and you can rock a 4M FFT in REW, but it takes a while.)
Operational complexity for a basic FFT is O(N log N), so a 16k FFT works out to roughly 230,000 operations per transform (N·log2 N), while a 2.4M-point FFT needs over 50 million, more than 200 times the work. On top of that there are the per-bin thresholding and averaging buffers (between 5 and 7 per engine depending on configuration, if memory serves), each holding over a million bins instead of about 8,000, times whatever averaging depth you're using.
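If anyone wants to put rough numbers on that, here's a quick back-of-the-envelope sketch (Python, purely illustrative; it uses N·log2(N) as a stand-in for per-transform cost, which of course varies by implementation):

```python
# Back-of-the-envelope comparison: N * log2(N) as a rough proxy for radix-2 FFT
# work per transform. Illustrative only, not a model of any particular analyzer.
from math import log2

for n in (16_384, 2_400_000):
    ops = n * log2(n)      # rough proxy for butterfly operations per transform
    bins = n // 2          # usable frequency bins for a real-valued input
    print(f"{n:>9}-pt FFT: ~{ops / 1e6:.1f}M ops per transform, {bins:,} bins to process")

# Russ's storage point checks out, by the way: the raw audio itself is tiny.
print(f"raw data per 2.4M-sample window at 24 bits: {2_400_000 * 3 / 1e6:.1f} MB")
```

Storage really is a non-issue; it's the per-update math, done 24 times a second, that adds up.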
If you are having trouble sleeping:
http://www.cmlab.csie.ntu.edu.tw/cml/dsp/training/coding/transform/fft.html

So you're talking about increasing the CPU demands of a transfer function engine by about three orders of magnitude over the standard MTW setting. (This is a silly thought experiment, because as you pointed out there's no reason I can think of to run a time record that long at virtually 100% overlap, and a time record that long wouldn't be of much practical use for audio purposes anyway.)
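Just to put numbers on the "virtually 100% overlap" part, here's a small sketch assuming 48 kHz and 24 updates per second as above:

```python
# Overlap math for the thought experiment: a 2.4M-sample window updated
# 24 times per second at a 48 kHz sample rate (assumed figures).
fs = 48_000          # sample rate (Hz)
n = 2_400_000        # window / FFT length (samples)
rate = 24            # spectrum updates per second

window_sec = n / fs              # ~50 seconds of audio in every window
hop = fs // rate                 # 2,000 new samples arrive per update
overlap = 1 - hop / n            # fraction of the window reused each update
print(f"{window_sec:.0f} s window, {hop} new samples per update, {overlap:.2%} overlap")
```

Each update would be re-transforming roughly 50 seconds of audio just to look at about 42 ms of new material.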
Now, that being said, some of the "norms" of software audio analyzers have historical roots. As I'm sure most of you know, when personal computers were becoming accessible (and I was a wee lad), CPU power was extremely limited compared to what we have today, and in the move from dedicated hardware DSP cards to personal computers, a lot of decisions were made simply based on what the machines could handle. That's part of the benefit behind Fixed PPO / MTW / etc. With MTW at 48 kHz we're only asking the computer to handle about 800 data points at 24 fps rather than 8,000+ for a 16k FFT, and we still get the resolution where we need it (LF) without the excess resolution where we don't (HF). (It has other benefits too, including the fact that FPPO/MTW makes the coherence trace behave in a way that is usefully representative of human perception, but that's another topic entirely.)
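To illustrate the principle only (the FFT lengths and crossover frequencies below are invented for the example, not Smaart's actual MTW configuration), a fixed points-per-octave scheme might look something like this:

```python
# Rough sketch of the fixed points-per-octave idea: long FFTs only at low
# frequencies, progressively shorter ones higher up, so resolution per octave
# stays roughly constant. FFT sizes and band edges are made up for illustration.
from math import log2

fs = 48_000
bands = [            # (FFT length, band low Hz, band high Hz)
    (65_536, 20, 80),
    (16_384, 80, 320),
    (4_096, 320, 1_280),
    (1_024, 1_280, 5_120),
    (256, 5_120, 20_480),
]

total = 0
for n, lo, hi in bands:
    df = fs / n                      # bin spacing for this FFT length
    points = int((hi - lo) / df)     # bins actually kept for this band
    ppo = points / log2(hi / lo)     # resulting points per octave
    total += points
    print(f"{n:>6}-pt FFT covers {lo:>5}-{hi:>5} Hz: {points:>3} points (~{ppo:.0f}/octave)")

print(f"total points per update: {total}  (vs. 8,192 bins from a single 16k FFT)")
```

The exact numbers don't matter; the point is that you process a few hundred points per update instead of thousands, with the fine resolution concentrated down low where you actually need it.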
Silliness aside, there is something worth considering here: on a modern machine, under normal use, Smaart can run multiple TF engines and a live average and still sit comfortably under 10% CPU. So the question becomes: now that modern machines have so much processing power available, what if we re-evaluated some of these norms and let the analyzer stretch its legs and use a little more of the available power? What cool stuff would we be able to do? v9 is going to do some really neat things that just wouldn't have been practical in the past, although I can't say too much more just yet.