In the digital domain, isn't everything represented by just a number (or rather, a sequence of bits that represents a number)? If that's the case, digital clipping occurs when you've hit the "biggest number" the system allows. So why not just make the "biggest number" even bigger? Then you'd have ungodly amounts of internal gain available before clipping, and at output time you'd simply attenuate everything back down to reasonable analog output levels.

I suppose this would cost more, since instead of a 16-bit or 24-bit signal path you might now need a 32-bit one, and the components and such would be more expensive... but at least you'd have gobs of headroom, right? Something like the sketch below is what I'm imagining.
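For illustration, here's a minimal sketch in Python/NumPy of the idea (the test tone, the +18 dB gain figure, and the saturating 16-bit path are all just assumptions for the example): apply the big internal gain in a wide intermediate format, then attenuate back down before quantizing to 16-bit at the output.

```python
import numpy as np

# Hypothetical test signal: a 1 kHz sine at -6 dBFS, 48 kHz sample rate, 1 second long.
sr = 48000
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)

gain = 8.0  # about +18 dB of internal gain

# Narrow path: quantize to 16-bit first, then boost inside a 16-bit range.
# Anything past the "biggest number" (32767) gets pinned there -> hard clipping.
x16 = np.round(signal * 32767).astype(np.int16)
boosted_16 = np.clip(x16.astype(np.float64) * gain, -32768, 32767).astype(np.int16)

# Wide path: boost in 64-bit float, where the "biggest number" is effectively unreachable.
boosted_wide = signal.astype(np.float64) * gain  # peaks at 4.0, way "over" full scale, but nothing clips
out = boosted_wide / gain                        # attenuate back into the output converter's range
y16 = np.round(out * 32767).astype(np.int16)     # quantize for the 16-bit output stage

print("16-bit path peak:", boosted_16.max())     # pinned at 32767 -> waveform flattened, audible distortion
print("wide path peak:  ", np.abs(y16).max())    # ~16384 -> original signal comes back intact
```

Running it, the narrow path comes out squared-off at full scale while the wide path's output matches the original tone, which is basically the "gobs of headroom" scenario I'm asking about.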