Perhaps as many as ten years ago I had a realization about why processes like stereo widening and mid/side processing can widen the stereo image beyond the angle of separation of the speakers reproducing the sound. Although there is an explanation for how re-mixing (in terms of panning) can work when sound is recorded with a cardioid and figure-of-eight microphone pair, I am unaware of a general explanation of why a normally panned instrument in a mix can end up sounding wider than the speakers after further mid/side processing (i.e. boosting the side channel and translating back to stereo left/right).
However, after an epiphany from studying head-related transfer functions (HRTFs) and how they can be approximated from a weighted sum of the left and right signals reproduced by a stereo pair of loudspeakers, it all became clear. Using measured HRTFs together with a mathematical model of the listening arrangement, it can be demonstrated why it is possible to pan an instrument wider than the speakers with a simple process: placing a low-level inverted image of one channel in the opposing one.
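To make the basic trick concrete, here is a minimal sketch of the cross-feed described above. The symmetric form, the function name, and the gain value are my illustrative assumptions, not the exact Har-Bal WidePan algorithm:

```python
def wide_pan_sketch(left, right, g=0.2):
    """Widen the stereo image by subtracting a low-level copy of each
    channel from the opposite channel.  The gain g (here 0.2, i.e. an
    assumed value somewhere in the 0.1-0.3 range) controls how much
    inverted cross-feed is added."""
    out_l = [l - g * r for l, r in zip(left, right)]
    out_r = [r - g * l for l, r in zip(left, right)]
    return out_l, out_r

# A signal hard-panned left acquires a small inverted image on the right,
# which is what pushes the perceived position outside the speaker.
l, r = wide_pan_sketch([1.0, 0.0], [0.0, 1.0], g=0.2)
```

The actual plugin maps a pan angle to gains via the HRTF model; this sketch only shows the inverted-image mechanism the text describes.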
Indeed, at least anecdotally, some mix engineers seem to be aware of this (see for instance the recording of “Parachutes” by Coldplay, which uses it to place the acoustic guitar wide, and “Little Snake” by Francis Dunnery, which does likewise with an otherwise monophonic recording of guitar and vocal). What is missing is a formal explanation and a general codification of the process, in a manner similar to conventional pan controls on a mixing desk.
Whilst I’ve been aware of the reason why this works for all those years, it is only now that I have progressed the idea to the stage of actually formalizing and testing it. So now I have a family of plugins that implement the “Wide Pan” algorithm to extend panning beyond the positions of the loudspeakers. Here you can find downloads for the Har-Bal WidePan plugin on various platforms. The algorithm implicitly assumes that the stereo loudspeakers and the listener form an equilateral triangle (speakers at -30 and +30 degrees of azimuth relative to the listener). To my ears it is effective in increasing the spread by at least 10 to 15 degrees in either direction, and possibly more, but the wider it gets the less noticeable any further position change becomes. The algorithm was encoded to cater for azimuthal angles of up to ±90 degrees, but beyond ±60 degrees little further change is evident. Note that the algorithm will not be effective for headphone listening in the manner described because it relies on the left-to-right (and vice versa) cross-talk that occurs when listening to loudspeakers in free space. However, it is unlikely to degrade the headphone listening experience, and it will similarly have little effect on mono compatibility, because for the wide-panned scenarios it is only adding a negative image of one channel to the other.
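The mono-compatibility claim is easy to check numerically. Assuming the symmetric inverted cross-feed form with a gain g (my assumption for illustration), the mono fold-down of the widened signal is simply the original mono fold-down scaled by (1 - g), so nothing cancels out:

```python
def mono_fold(left, right):
    """Fold stereo to mono by averaging the two channels."""
    return [0.5 * (l + r) for l, r in zip(left, right)]

# hypothetical test signal
left = [0.5, -0.3, 0.8, 0.1]
right = [-0.2, 0.6, 0.4, -0.7]
g = 0.2  # assumed cross-feed gain

# symmetric inverted cross-feed (illustrative, as above)
wide_l = [l - g * r for l, r in zip(left, right)]
wide_r = [r - g * l for l, r in zip(left, right)]

# wl + wr = (l - g*r) + (r - g*l) = (1 - g) * (l + r),
# so the mono fold is only attenuated by (1 - g), not altered in shape
mono_wide = mono_fold(wide_l, wide_r)
mono_orig = mono_fold(left, right)
for mw, mo in zip(mono_wide, mono_orig):
    assert abs(mw - (1 - g) * mo) < 1e-12
```

This is why the widening costs little in mono compatibility: the fold-down keeps its spectral balance and only loses a fixed amount of level.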
Ultimately I plan to document the entire algorithm as a scientific article and hopefully have it published in an appropriate journal, as well as make the demonstration source code publicly available and completely unencumbered (i.e. free of IP restrictions), in the hope that it may find use in digital mixing equipment and DAWs, although I am fully aware that this may never happen.