So how would you rate the following?
Do you want a simple number that is not attached to anything (let's say 130dB)? If so, where did that come from?
Let's say that a speaker has a 100dB sensitivity and 1000 watt power capacity.
That is a 30dB rise (10 x log10(1000W) = 30dB), for a 130dB max output.
HOWEVER, let's say the rated response is -3dB at 30Hz, but that includes a 10dB boost at 30Hz.
That would be 97dB at 30Hz.
So you might "assume" that you would get a 30dB rise so 127dB at 30Hz.
HOWEVER, because you were already adding 10dB to get the -3dB at 30Hz, you can only add 20dB (NOT 30dB) to the 97dB.
So NOW the max output at 30Hz is 117dB, NOT the 127dB you might "assume" if you don't take the processing boost into account.
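The arithmetic above can be sketched in a few lines; the numbers are the example figures from this post, not any real speaker's spec:

```python
import math

# Example figures from the discussion above (not a real product's spec)
sensitivity_db = 100.0   # dB SPL @ 1W/1m
rated_power_w = 1000.0   # rated power capacity, watts
boost_db = 10.0          # processing boost applied at 30Hz

power_gain_db = 10 * math.log10(rated_power_w)   # 1000W is 30dB above 1W
naive_max = sensitivity_db + power_gain_db       # 130dB broadband "assumption"

spl_30hz = sensitivity_db - 3.0                  # 97dB: the -3dB point at 30Hz
# The boost has already spent 10dB of the available gain, leaving only 20dB:
real_max_30hz = spl_30hz + (power_gain_db - boost_db)   # 117dB, not 127dB
```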
You MUST dig a bit deeper to get the REAL story.
And what if you don't know how much processing was added? Was it 3dB or 15dB?
Does the speaker exceed xmax at those frequencies with the boost?
All sorts of things can be hidden in the "magic processing" that do not show up until you turn the system up and find out where it "stops" putting out.
With all due respect, I think it gets a lot easier than your example if we just stick to measurements for discussing sensitivity and max output.
Besides, the second we start talking wattage ratings to determine max output, it becomes all bogus anyway...
For sensitivity, I could live with a single number...if that number was simply SPL vs a reference voltage (like you guys do).....and if that voltage and SPL were time-averaged to get a stable look while using band-limited pink noise. I'd also ask for measured average current over the same period, for a more meaningful nominal impedance spec.
I'd say let the manufacturer choose whatever bandpass he wants to claim for the intended use....just disclose the exact HP and LP filters being used.
This would help keep the low end f-3 more realistic, as claiming an unobtainable f-3 would lower the measured SPL vs the reference voltage. The sensitivity spec would suffer from claiming too low a response.
If a manufacturer wants to include peaking or other filters in their processing, fine, just disclose them.
But even if they weren't disclosed, if sensitivity was simply measured as proposed, a 10dB boost like you mentioned would require a corresponding drop in overall voltage level to keep the reference voltage intact. So again, the sensitivity spec would suffer, once again forcing the manufacturer to be a little more circumspect.
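A rough sketch of why that self-policing works, assuming (my assumption, not a standard method) band-limited pink noise modeled as equal power per octave band, with the boost applied to one band:

```python
import math

# Illustrative model: equal power per octave band (values are invented)
octave_bands = 5    # e.g. a 5-octave measurement bandpass
boost_db = 10.0     # undisclosed boost applied to one band

flat_power = octave_bands * 1.0                         # 1 unit per band
boosted_power = (octave_bands - 1) + 10 ** (boost_db / 10)
rms_rise_db = 10 * math.log10(boosted_power / flat_power)
# The master gain must drop by rms_rise_db to hold the reference voltage,
# so the measured sensitivity number falls by the same amount.
```

With these invented numbers the boost raises the broadband RMS by roughly 4.5dB, and the published sensitivity would drop by the same amount.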
Max output could be measured using the exact same filters in place, for both short term max output and AES time ratings.
Just measure average voltage, current, and SPL, ....cold, and then at the end of long term time trials.
Comparing short term cold and AES, would give us real world compression too.
Take this relatively simple method for measuring both sensitivity and max output, throw in raw magnitude and phase curves along with cold and hot impedance sweeps....and damn, we'd have some really useful disclosure, huh? Who knows, distortion could possibly even make it to the spec table someday if we could get off to a new start.