I hear where you're coming from. But until then, I think it would be prudent to have an analog. Well, speaking of analog: I did a show a couple of weeks ago where we had over 20 short-set acts over the course of 5 hours, ranging from duos & trios to full bands and talking heads, to a couple of choirs. It would have made my life a ton easier if I had had that virtual outboard for all my vocal channels and group compressors, along with the verb & delay controls. As it was, I was jumping through layers and plugin screens like mad, keeping up with all the changes on the fly. The show would have been much easier to do on an "old skool" analog desk with outboard, but that's not the desk we have anymore. While digital offers so much more overall, in that way it offers far less. It shouldn't be that way.
OK, but as long as I am speaking hypothetically, let's imagine a future when all those wannabe bands show up with their USB thumb drive (or the future equivalent) and plug in their total band repertoire/playlist, effects needs (with presets), monitor mixes, frequency response template, yadda yadda. To perform, they wirelessly select their next song from on stage, while you sit back and sip a cool one in the quiet, heated/air-conditioned trailer until tear-down. The convergence of recording, practice, and live performance technology works with this new cybernetic-assisted mix environment.

Of course, this is just my wild-assed guess about one possible future, and not remotely around the corner any time soon, especially if almost everybody lacks the vision to imagine a different way. I am just pointing out that digital technology offers a power largely untapped to help us with anticipated decision making. If we expect fader moves, perhaps we can write rules for how the faders need to move (again, this works best with fixed output targets, so it can respond to changing inputs and maintain desired output levels). Right now this is all undefined, so soft. For those who see mixing as high art, this is surely unimaginable. My apologies, I just call it like I see it (in my crystal ball).

JR

PS: No availability dates in my crystal ball.
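For what it's worth, the "write rules for how the faders need to move" idea can be sketched in a few lines. Everything below (the -18 dBFS target, the boost cap, the smoothing constant, the `fader_rule` name) is my illustrative assumption, not any real console's automation API:

```python
# Rough sketch of a "fader rule" with a fixed output target, as described
# above. All names and numbers (target level, boost cap, smoothing) are
# illustrative assumptions, not a real console feature.

def fader_rule(input_dbfs, target_dbfs=-18.0, max_boost_db=6.0,
               smoothing=0.2, prev_gain_db=0.0):
    """Return a smoothed gain (dB) nudging the channel toward target_dbfs."""
    error_db = target_dbfs - input_dbfs            # distance from target
    desired_gain_db = min(error_db, max_boost_db)  # cap boost (feedback safety)
    # One-pole smoothing so the virtual fader glides instead of jumping.
    return prev_gain_db + smoothing * (desired_gain_db - prev_gain_db)

# Simulate a vocal channel whose level drifts over a few control frames:
# the rule keeps chasing the -18 dBFS target as the input moves.
levels = [-24.0, -20.0, -15.0, -12.0, -18.0]
gain = 0.0
outputs = []
for lvl in levels:
    gain = fader_rule(lvl, prev_gain_db=gain)
    outputs.append(lvl + gain)
```

The point is only that, once you pick a fixed output target, the fader move becomes a rule a computer can follow; the human decision shrinks to choosing the target and deciding when the rule applies.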
To an extent I think you are right. To produce the "high art" of mixing, though, there is still no computer algorithm that can achieve it - though I do appreciate that modern chart music is now so formulaic that it could well have been written by a robot, and mixing it is therefore also formulaic... Backing track up, mute the mics in case they actually try to sing, remember to unmute between each "song"... Another Britney gig done.

On second thoughts, give that one to the computer; it would be less soul-destroying.

At the point where AI can make decisions and program and create art at the level of any master in any medium, frankly, we had all better watch out. But whilst there are bands like Led Zeppelin or the Foo Fighters or Aerosmith, we will all still be in work.

Isn't that right, HAL.
We get to express our creativity and art in different ways. I suspect the 80/20 rule is in play when it comes to mixing duties. Why not offload the 80% of nonsense so you, or the artists on stage, can handle the other 20%? No offense intended.

JR

PS: I am rather looking forward to a computer driving my car for me. Even if I can do it better.
Whilst? Have you been reading your thesaurus again, Dave?
Mixing a show is not exactly rocket science either (sorry, no offense).
What I picture are ultra-linear mics at every instrument and a measurement mic at FOH. A mixer that can be told to "make every mic channel sound like the source" could be great for people who only want to move faders up and down in level. Comparing all the mics could also let the mixer identify bleed between mics, and possibly cancel it, too.
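The "compare the mics to find bleed" part isn't far-fetched mathematically. Here's a toy sketch (the numbers, the single-gain model, and the `dot`/`fader` style names are my assumptions, not any real console's method): estimate how much of one mic's source leaks into another with a least-squares gain, then subtract it.

```python
import random

# Toy sketch of bleed estimation/cancellation between two channels.
# Real bleed is delayed and filtered, so a serious version would fit a
# short FIR filter per mic pair, not a single gain coefficient.

random.seed(0)
n = 4800  # one "analysis window" of samples

source_a = [random.gauss(0.0, 1.0) for _ in range(n)]  # e.g. snare
source_b = [random.gauss(0.0, 1.0) for _ in range(n)]  # e.g. vocal

mic_a = source_a                                            # snare mic: clean
mic_b = [b + 0.3 * a for a, b in zip(source_a, source_b)]   # vocal mic + bleed

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

# Least-squares estimate of the bleed gain, then cancel it.
bleed = dot(mic_b, mic_a) / dot(mic_a, mic_a)
cleaned_b = [b - bleed * a for a, b in zip(mic_a, mic_b)]

print(round(bleed, 2))  # recovers roughly the 0.3 we injected
```

After subtraction, `cleaned_b` is (to numerical precision) uncorrelated with the snare mic, which is exactly the "identify bleed and cancel it" behavior described above, in miniature.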
But for more advanced mixing this isn't a solution. After all, remember that much of what we do is actually playing to psychoacoustic phenomena, and to how what we hear is affected by other senses. Often we will EQ and compress a signal so that it doesn't sound very "true to the source", but once it goes into the mix it tricks the listener into hearing a "full sound" - all while not masking other important sources.
The level of exaggeration/under-exaggeration is a very dynamic parameter that changes for every source and occasion - so I still see humans making these decisions.
PS: We long ago found out how to make a computer play a synthesizer. Why are there still musicians on stage? We could just "write rules" for the synthesizer to follow...