
Author Topic: "Multi cellular array" vs "Single Source array"  (Read 37917 times)

Stephen Kirby

Re: "Multi cellular array" vs "Single Source array"
« Reply #90 on: July 29, 2016, 01:52:31 PM »

Having worked with some audio beam-forming technologies, I can understand the concept and how it would allow multiple drivers to correlate at some point in the audience. What I don't quite get is how you get them to correlate at multiple points in the audience. I know MLA doesn't claim to use beam forming as in Anya and others. But how else do you deal with multiple arrivals at multiple points in space, regardless of whether it's 10 feet in front of the speakers or 100 feet out in the audience?

Tom Danley

Re: "Multi cellular array" vs "Single Source array"
« Reply #91 on: July 29, 2016, 02:20:54 PM »

Quote from: Stephen Kirby on July 29, 2016, 01:52:31 PM (see reply #90 above)

Hi Steve
There are two separate ways to examine the issue you raised. The popular steady-state model shows that the different distances add phase shift to each source according to the path-length differences between one source and another. At 0 degrees difference, and at every whole multiple of 360 degrees, there is constructive addition, while at every odd multiple of 180 degrees there is cancellation or destructive interference. When plotted as a polar or spherical plot, this is the 3D view or manifestation of comb filtering, where at a given frequency one can move from a cancellation notch to an additive lobe. It is possible to use delay or phase manipulation so that there is the best combination of summation and cancellation at some point in front, or at some combination of locations. This is how sonar and radar arrays work, following Huygens' principle.
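A minimal numeric sketch of this steady-state picture, summing two idealized point sources at one seat (the spacing, seat position and frequency range are arbitrary illustrative values, not any manufacturer's data):

Code:
# Minimal sketch of the steady-state picture above: two idealized point
# sources summed at a single seat.  Spacing, seat position and frequency
# range are arbitrary illustrative values, not any manufacturer's data.
import numpy as np

c = 343.0                                  # speed of sound, m/s
sources = [(0.0, 0.0), (0.0, 0.5)]         # two sources 0.5 m apart (x, y in m)
seat = (10.0, 1.0)                         # one listening position

freqs = np.linspace(100.0, 10000.0, 2000)  # analysis frequencies, Hz
total = np.zeros(freqs.shape, dtype=complex)

for sx, sy in sources:
    r = np.hypot(seat[0] - sx, seat[1] - sy)               # path length to the seat
    total += np.exp(-1j * 2 * np.pi * freqs * r / c) / r   # phase shift plus 1/r level

level_db = 20 * np.log10(np.abs(total))
# Notches fall where the path-length difference is an odd multiple of a half
# wavelength (180 deg); full addition where it is a whole number of wavelengths.
print("deepest notch near %.0f Hz" % freqs[np.argmin(level_db)])

Sweeping the seat across the coverage area instead of sweeping frequency traces out the lobes and notches of the polar plot described above.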

The “other way” to look at it is not steady state but a transient view.
Here there is no way to make a single transient arrive and sum coherently into one impulsive event when more than one set of path lengths is involved: at point A out front there is one set of path lengths to compensate, while at point B there is a different set.
This is how one can have it sound good at the mix position while it is very different everywhere else, because the path lengths are different everywhere else.
You simply cannot fix this time-of-arrival or transient issue globally when you have spatially separated sources, only for one location at best. Much of music is more steady state, but things like transients, voice intelligibility and music articulation require preserving time more closely to replicate the input signal.
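A minimal sketch of the transient point with the same kind of idealized geometry (positions are arbitrary, and this is not any product's processing): a delay chosen to time-align the arrivals at seat A leaves them misaligned at seat B.

Code:
# Minimal sketch of the transient point above (illustrative positions, not any
# product's processing): a delay chosen to time-align two sources for seat A
# cannot also align them for seat B, because the path-length difference differs.
import numpy as np

c = 343.0
src = [(0.0, 0.0), (0.0, 0.5)]             # two sources 0.5 m apart
seat_a = (10.0, 0.0)                       # e.g. the mix position
seat_b = (5.0, 4.0)                        # somewhere else in the audience

def arrivals_ms(seat, delays=(0.0, 0.0)):
    """Arrival time (ms) of an impulse from each source at a seat."""
    return [1000.0 * (np.hypot(seat[0] - sx, seat[1] - sy) / c + d)
            for (sx, sy), d in zip(src, delays)]

# Delay the earlier-arriving source so both arrivals coincide at seat A.
t0, t1 = arrivals_ms(seat_a)
gap_s = (t0 - t1) / 1000.0
delays = (0.0, gap_s) if gap_s > 0 else (-gap_s, 0.0)

print("seat A arrivals (ms):", arrivals_ms(seat_a, delays))  # now identical
print("seat B arrivals (ms):", arrivals_ms(seat_b, delays))  # still spread apart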
Hope that helps
Tom Danley

Cailen Waddell

Re: "Multi cellular array" vs "Single Source array"
« Reply #92 on: July 29, 2016, 03:09:30 PM »

Quote from: Tom Danley on July 29, 2016, 02:20:54 PM (see reply #91 above)

This is the understanding of physics I have.

The voodoo I don't understand is how a product like MLA can claim phase coherency with the multiple arrival points of a multi-cellular point source, or whatever it is called.

Tom, I'm not asking you to explain it; I'm simply saying that I understand the physics the way you have explained them, but cannot align the marketing copy of MLA with those physics. I assume that perhaps I don't understand enough yet :). Of course, there is also the possibility that I understand the physics correctly...

Lee Buckalew

Re: "Multi cellular array" vs "Single Source array"
« Reply #93 on: July 29, 2016, 03:26:44 PM »

Quote: But how much "different" can they really be?
I will admit to not knowing as much engineering as some members here, but I did watch some of the product info on these systems.
Once the physical orientation of the drivers has been established, what adjustments can be made?
Basically, time/phase, EQ and level between sources are about it, unless I am missing something. So the DSP and amplifier end of things is not all that revolutionary.

The most significant difference between the Cellular Drive process, as I currently understand it, and others is the modeling: both the models used to describe each individual cabinet and the model of the coverage plane.
Cellular Drive starts with a virtual model of the room created in section.
You then set the coverage requirements and goals, and the software calculates the acoustic source that would be necessary to create the user-defined coverage and goals.
The software then uses the individual cabinet balloon data, based on the relative position of each cabinet in the array, and tests every combination of inter-cabinet angle to determine which interactions come closest to creating the previously defined target result. It then uses elemental equalization to configure each cell to behave, at the listening plane, as required to create the previously defined acoustic model.

Martin is not modeling to create specific interactions at the cabinet/driver and assuming or expecting that result to create a specific coverage in the room (because it doesn't); they are creating a model that produces a known result along the listening plane by making the individual cells within the array function together as a CDPS. They don't care what is happening at the driver or cabinet other than as the initial information needed for the model to be accurate. The drivers/cabinets must be designed from the beginning with the physical characteristics required to allow the software to create the required interactions.
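For readers who want a feel for the general idea described above (optimize the per-cell drive so the summed result along the listening plane matches a target), here is a deliberately simplified, hypothetical sketch. It is not Martin's Cellular Drive algorithm; the straight hang, single analysis frequency, flat-coverage target and least-squares solve are all illustrative assumptions:

Code:
# A deliberately simplified, hypothetical sketch of the general idea described
# above: choose per-cell drive so the summed result along a listening plane
# matches a target.  This is NOT Martin's Cellular Drive algorithm; the straight
# hang, single frequency, and least-squares solve are illustrative assumptions.
import numpy as np

c, f = 343.0, 2000.0
k = 2 * np.pi * f / c

# Section view (x = distance out, y = height): a straight hang of 12 "cells".
cells = [(0.0, 6.0 - 0.25 * n) for n in range(12)]
# Listening plane sampled from 5 m to 40 m out, ear height 1.2 m.
seats = [(float(x), 1.2) for x in range(5, 41)]

# Propagation matrix: complex pressure at each seat per unit drive of each cell.
A = np.array([[np.exp(-1j * k * np.hypot(sx - cx, sy - cy)) /
               np.hypot(sx - cx, sy - cy)
               for (cx, cy) in cells] for (sx, sy) in seats])

target = np.ones(len(seats), dtype=complex)       # goal: equal level at every seat
w, *_ = np.linalg.lstsq(A, target, rcond=None)    # per-cell complex drive (gain and phase)

achieved_db = 20 * np.log10(np.abs(A @ w))
print("seat-to-seat spread: %.1f dB" % (achieved_db.max() - achieved_db.min()))

A real system would of course work broadband, use measured balloon data per cell and constrain the drive levels; the sketch mainly shows why the number of cells matters, since fewer columns in A means fewer degrees of freedom with which to hit the target.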


Quote: Of course, the application of the DSP modeling is quite important, but it also looks like the physical layout of components has become very critical to these types of systems.

Physical layout of components is critical, as is the actual performance of the drive components.


Quote: The EAW info was more in depth and I grasp the concept better than with the limited info on Martin's Cellular Drive.

I would doubt that you will see truly in depth engineering info on the Cellular Drive process since it is patented. 
Others would have to answer that as I am not in the know.   :D


Quote: I do like the idea of not curving the array as EAW has done.

A great deal of initial testing was done with O-Line during early development of the Cellular Drive concept.  There is far too much to go into on a web forum but part of that testing showed the limitations of a flat hung array vs. curved in terms of coherence and in terms of the quantity of individual drive cells required.  I do not know enough about what EAW is doing to discuss their processing implementation.


Quote: I understand that my comments only brush the surface of what is going on, and I am probably missing something here, but are these 2 systems really that radically different from each other? Drivers organized a specific way, then application of time/phase, EQ and level.
Is it the DSP/amplifier configuration or the physical layout of components that really sets them apart?
Certainly different from the average "line array".

Standing by for more education :)

The approach of the two systems, the starting point itself, is completely different.


One very important point is that no system can do everything asked of it. There are some obvious limitations to Cellular Drive (and some not so obvious), among them the requirement to have enough cells to create the required interactions.



Lee

Keith Broughton

Re: "Multi cellular array" vs "Single Source array"
« Reply #94 on: July 29, 2016, 03:51:31 PM »

Quote from: Lee Buckalew: "...among them the requirement to have enough cells to create the required interactions."
Thanks for that comprehensive response Lee. Appreciated :)

Your quoted text describes a very common fault in most of the "conventional" line arrays I see deployed:
not enough cabinets, and not deployed in the correct physical space to get good results.

Lee Buckalew

Re: "Multi cellular array" vs "Single Source array"
« Reply #95 on: July 29, 2016, 04:05:27 PM »

Quote: Due respect - I've heard MLA, it sounds good, it's a tool in the box. It doesn't sound better or worse than a properly deployed line array to me, but perhaps I am a cretin.

To my ears and testing, every line array that I have ever worked with has had some significant comb-filtering issues.  Some are far better than others but all have them.  Much of what we perceive at the highest frequencies as horn throat distortion (that really nasty sound of the EV Manifold Technology boxes that people equated with them "sounding" so loud) is really comb-filtering interactions creating the distortion. 

These interactions are not present with any properly laid out MLA system that I have used.  That said, there are certainly still comb-filter interactions between adjacent arrays such as mains and out-fill. 

That said there are a number of great sounding line array solutions out there.


Quote: It has been suggested in a previous thread that I really need to take a whole-day class to properly understand it. That's not in the cards.

Then it would seem that you don't care enough about it to want to learn more in depth. The suggestion was not for one day; it was to attend an MLA training event, which is three days. That is a starting point.
Suggesting that it is not worth the time to learn more about it in depth, but then spending time discussing it, is like making comments comparing SMAART and SYSTUNE while being unwilling to take a training class on those systems. As with learning SMAART or SYSTUNE, the training on an MLA system only begins with the classes.


Quote: But don't mistake my eyes glazing over for me not understanding. My eyes glaze over because I've watched the videos and read the material, and I simply don't have a good enough understanding of what is going on to understand how this array does not have the same problems a traditional line array has.

And yet spending time in a class where you can learn about this directly from the factory people involved and ask direct questions is "not in the cards". ???


Quote: Point source physics are a bit easier to understand and have been around a while so I can tolerate the fanboys when they are right because I understand what's going on under the hood.

And of course point source physics don't actually work when you try to apply them to actual speaker cabinets. Danley is using physics to allow multiple drivers to behave as a single source from a given horn. This is not the same thing as the physics model of a point source.
Utilizing multiple point-source models results in the same interactions that are discussed with Cellular Drive and MLA. With two modeled point sources and the smallest of spaces between them, you begin to alter the balloon data that is generated. If you take it beyond theory and actually place that point source into a cabinet, you create interactions of the source with the cabinet (reflection, diffraction, etc.).
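This is easy to check numerically. A minimal sketch with two ideal, in-phase point sources (the 10 cm spacing and 4 kHz frequency are arbitrary choices) shows the balloon departing from that of a single point source:

Code:
# Minimal check of the point above: two ideal, in-phase point sources with even
# a small spacing no longer produce the uniform balloon of a single point
# source.  The 10 cm spacing and 4 kHz frequency are arbitrary choices.
import numpy as np

c, f, d = 343.0, 4000.0, 0.1
k = 2 * np.pi * f / c
angles = np.radians(np.arange(0, 181))     # angle from the line joining the sources

# Far-field level of two in-phase monopoles spaced d apart, normalized so that
# a single point source would read 0 dB at every angle.
response = np.abs(np.cos(0.5 * k * d * np.cos(angles)))
level_db = 20 * np.log10(np.maximum(response, 1e-6))

for deg in (0, 30, 60, 90):
    print("%3d deg: %6.1f dB" % (deg, level_db[deg]))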


Quote: Of course Martin is under no obligation to convince me and I'm not going to buy one, so it doesn't really matter whether I get it. But I'm always going to think their claims are suspect until it's clear how/if they are able to do what they claim.

I think that's what the heart of this thread is: understanding the physics of what's going on. Unfortunately, without the participation of some more companies' engineers, we may not be able to have that level of discussion.


I doubt that there will be any in depth physics discussions of the actual process as it is intellectual property protected by a patent. 

I don't know if there are any factory people from Martin who would/could participate.  I don't know what would be allowed by PSW and what would be allowed by Martin or Loud.

Russell Ault

Re: "Multi cellular array" vs "Single Source array"
« Reply #96 on: July 29, 2016, 04:34:26 PM »

Quote from: Lee Buckalew: I doubt that there will be any in depth physics discussions of the actual process as it is intellectual property protected by a patent.

Actually, that's the real joy of patents: in return for legal protection, you have to immediately tell the whole world "how you did it" (with enough detail that someone else can do it too when the patent expires).

For example: US 20140348355 - Speaker Configuration.

-Russ

Stephen Kirby

Re: "Multi cellular array" vs "Single Source array"
« Reply #97 on: July 29, 2016, 04:56:21 PM »

Quote from: Tom Danley on July 29, 2016, 02:20:54 PM (see reply #91 above)
Thank you, Tom. This pretty much elaborates on what I was asking: getting the impulse to be uniform over a large area. Maybe because I'm a musician, this is important to me.

I know that audiophiles are derided here, but in that world temporal coherence is often referred to as "pace", meaning that the leading transients of all the overtones are aligned in time. One doesn't hear the upper overtones of a bass drum separated from the fundamental. The more accurate this is, the more the groove or pace of the music is maintained. That is probably why I tend to prefer panel speakers for home listening, although they are obviously not scalable for large-area/high-SPL SR.
Getting steady-state sine waves aligned within the 120° that Merlijn refers to as coupling without significant cancellation is one thing. It gets a bit tougher when the waveforms have steeper rise times (as most instruments other than a flute do), thus involving higher bandwidth, or alignment at higher frequencies. But getting the initial transient of everything coherent in time is tougher still. How you get it coherent in time at multiple points in space, with multiple path lengths, is the part I can't get. Maybe this isn't happening, and getting phase coherence by controlling addition and cancellation over an area is a sufficient improvement that it markedly improves sound quality.
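For reference, the 120° coupling criterion corresponds to a path-length (or arrival-time) difference of no more than one third of a wavelength; a few quick numbers (plain arithmetic, not tied to any product):

Code:
# Quick numbers for the 120-degree coupling criterion mentioned above: 120 deg
# is one third of a wavelength, so the allowable path-length (or arrival-time)
# difference shrinks as frequency rises.  Plain arithmetic, no product specifics.
c = 343.0   # speed of sound, m/s
for f in (100, 1000, 4000, 10000):
    wavelength = c / f
    max_path = wavelength / 3.0            # path difference for <= 120 deg offset
    max_time_ms = 1000.0 * max_path / c
    print("%6d Hz: <= %.3f m  (%.3f ms)" % (f, max_path, max_time_ms))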

Roland Clarke

Re: "Multi cellular array" vs "Single Source array"
« Reply #98 on: July 29, 2016, 05:36:52 PM »

The only thing that makes me at all nervous about the whole "multicellular" technology is the apparent lack of candour about it. We all know that even with the best-designed horn technology there are limits to off-axis response control; clever manipulation of time, phase, level and frequency response can assist in controlling this, but it still isn't a magic bullet.

Unless I'm getting it completely wrong, what it boils down to is that they are doing what most other line array manufacturers are doing, but taking it down to a driver-by-driver basis rather than cab-by-cab. To suggest that it's not a line array is, for me, to ignore the fact that I doubt it can create a completely coherent wavefront any more than any line array can create a genuinely curved wavefront.

Stephen Kirby

Re: "Multi cellular array" vs "Single Source array"
« Reply #99 on: July 29, 2016, 05:54:42 PM »

Roland, I don't think they are claiming a coherent wavefront, only some level of coherence at the listening position. There has to be some level of interference optimized to tune things at the audience's ears.

I used to be involved in holographic optics. Holograms are made by creating an interference pattern at a point in space where you have placed a recording medium. It takes a lot of wavefront manipulation to get a particular interference pattern that does what you want. In that case it was a three-dimensional sort of Fresnel lens based on the change in refractive index in the exposed regions compared to the unexposed regions. This is probably the closest analog to MLA that I can think of.
