
Thoughts on stereo field

DSP related issues, mathematics, processing and techniques

Postby tulamide » Sat Feb 27, 2016 9:40 am

It's a bit theoretical, and I only need confirmation that the basic thinking is right, or advice if it's wrong.

My assumptions:
If two signals are the same and I send them to the left and right inputs of an audio chip, the result is mono.
If two signals are totally different and I send them to the left and right inputs of an audio chip, the result is binaural.
If two signals are basically the same but with slight differences and I send them to the left and right inputs of an audio chip, the result is stereo.

Regarding the last assumption: if I take that signal and split it using synchronized low- and high-pass filters (sync = same cutoff), then mathematically smooth the low signal's channels towards 0 (for an extreme example, a left-channel series of -1, +0.5 becomes -0.5, +0.25), I basically turn the low-frequency signal towards mono. Accordingly, if I mathematically sharpen the high signal's channels towards the maximum, I basically turn the high-frequency signal towards binaural. As a result the signal will gain more stereo depth (aka a Stereo Imager).
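A minimal sketch of the split-and-scale idea described above (Python/numpy; the one-pole filter choice, function names and width parameters are my own illustrative assumptions, not a definitive implementation):

```python
import numpy as np

def stereo_imager(left, right, fs=44100.0, cutoff=200.0,
                  low_width=0.5, high_width=1.5):
    # One-pole low-pass; using the same coefficient on both channels
    # keeps the cutoffs synchronized, as described above.
    a = np.exp(-2.0 * np.pi * cutoff / fs)

    def lowpass(x):
        y = np.empty_like(x)
        state = 0.0
        for i, s in enumerate(x):
            state = (1.0 - a) * s + a * state
            y[i] = state
        return y

    lo_l, lo_r = lowpass(left), lowpass(right)
    hi_l, hi_r = left - lo_l, right - lo_r   # complementary high band

    def rescale(l, r, width):
        mid = 0.5 * (l + r)                  # common (mono) part
        side = 0.5 * (l - r) * width         # width < 1: towards mono
        return mid + side, mid - side        # width > 1: wider

    lo_l, lo_r = rescale(lo_l, lo_r, low_width)
    hi_l, hi_r = rescale(hi_l, hi_r, high_width)
    return lo_l + hi_l, lo_r + hi_r
```

With both widths at 1.0 the signal passes through unchanged; `low_width` below 1.0 pulls the low band towards mono and `high_width` above 1.0 pushes the high band apart, which is exactly the narrowing/widening split in the post.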

Don't invest too much time going into depth. First I just need to know whether these basic thoughts are in any way correct.
tulamide
 
Posts: 2109
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: Thoughts on stereo field

Postby RJHollins » Sat Feb 27, 2016 10:52 am

Hi T,

The only issue I'd question is the use of the term 'binaural'. Human hearing is binaural.

Maybe only semantics.

As for the second part: again, it might be semantics, but stereo 'widening' seems a more common description.

I'm not familiar with the programming technique you describe to narrow the LF and widen the HF.
But you did send me off to read up on a topic we've taken for granted.
RJHollins
 
Posts: 1399
Joined: Thu Mar 08, 2012 7:58 pm

Re: Thoughts on stereo field

Postby Spogg » Sat Feb 27, 2016 12:09 pm

tulamide wrote:
If two signals are the same and I send them to the left and right inputs of an audio chip, the result is mono.


I would question this assumption, tulamide. If you have exactly the same signal going to the left and right channels, this is still stereo, because you have 2 channels, left and right. The illusion our brain creates is that the source appears to be centred in the stereo field. I'm sure you know this, so I think this is just semantics, but I know you like accuracy ;)

Cheers

Spogg
Spogg
 
Posts: 2453
Joined: Thu Nov 20, 2014 4:24 pm
Location: Birmingham, England

Re: Thoughts on stereo field

Postby tulamide » Sat Feb 27, 2016 12:51 pm

Thanks, RJ and Spogg.

You are right, the words I used seem to be wrong. So 'dual mono' is probably the right term for assumption one. But I'm unsure about the other two.
tulamide
 
Posts: 2109
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: Thoughts on stereo field

Postby KG_is_back » Sat Feb 27, 2016 1:05 pm

The stereo field may be thought of as a two-dimensional vibration. You can use the left and right channels to map the signal onto a plane with a stereoscope:
(sorry, couldn't find an image under 400px wide...)
https://www.meldaproduction.com/images/screenshots/MStereoScope00.jpg?id=101

The left and right channels represent the diagonal lines. The mid part is represented by the vertical axis and the side part by the horizontal axis. When the left and right signals are identical and in phase, the signal oscillates along the mid axis and is pure mono. If they are identical but oscillate out of phase, the signal oscillates along the side axis. Such a signal corresponds to a mono sound coming directly from one side.
A stereo signal is simply a signal with both mid and side components. The ratio of the mid and side components is the stereo width.
KG_is_back
 
Posts: 1216
Joined: Tue Oct 22, 2013 5:43 pm
Location: Slovakia

Re: Thoughts on stereo field

Postby adamszabo » Sat Feb 27, 2016 2:01 pm

Spogg wrote:
tulamide wrote:
If two signals are the same and I send them to the left and right inputs of an audio chip, the result is mono.


Yes, if two signals are perfectly identical they can be called "mono". For example, if your audio file is mono, the exact same signal will come out of both the left and right speakers; but if you have a stereo file that still carries the exact same signal on both channels, you will get the same result.
adamszabo
 
Posts: 452
Joined: Sun Jul 11, 2010 7:21 am

Re: Thoughts on stereo field

Postby tulamide » Sun Feb 28, 2016 7:37 am

Thanks a lot, guys.

KG, this visual explanation was helpful to understand the basics. But now I want more :mrgreen: Do you happen to know where I can find info about the actual calculations, and in a form that I can understand (most engineers use words and signs and explanations that don't help me much, that's my weak spot)? If there is any tutorial or the like that explains it much like Numberphile, that would be awesome!
tulamide
 
Posts: 2109
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: Thoughts on stereo field

Postby KG_is_back » Sun Feb 28, 2016 5:17 pm

tulamide wrote:Thanks a lot, guys.

KG, this visual explanation was helpful to understand the basics. But now I want more :mrgreen: Do you happen to know where I can find info about the actual calculations, and in a form that I can understand (most engineers use words and signs and explanations that don't help me much, that's my weak spot)? If there is any tutorial or the like that explains it much like Numberphile, that would be awesome!


Unfortunately, it all gets crazy once you take into consideration how human hearing actually works.
Here is a scheme of the human ear:
http://www.yamahaproaudio.com/global/en/training_support/selftraining/audio_quality/chapter4/01_ear_anatomy/
Basically, the inner ear contains the basilar membrane, which is differentially tuned. One end is stiff and thin and resonates at high frequencies, while the other end is thick and loose and resonates at lower frequencies. Along the basilar membrane is a row of neural cells (roughly 7000). The cells have several cilia (thin fibre sensors) pointing at and touching the membrane. They produce regular neural pulses; the frequency of the pulses depends on how many cilia are stimulated by touching the membrane. When the basilar membrane vibrates, it causes the frequency of the neural pulses to oscillate, which your brain interprets as a sound.
Simply put, your ear is a set of 7000 microscopic microphones, each tuned to a narrow frequency range. They also have a relatively low "sample rate" - in the case of high frequencies, you can't correctly tell the phase of the sound.

The real deal is how your brain interprets the signal. Your brain can tell the direction of a sound in several ways:
1. Amplitude difference. When a sound is coming from a side, one of your ears is in acoustic shadow, and thus the sound is lower in volume there. This is particularly true for higher frequencies (>1000 Hz), because lower frequencies can pass around your head. This is also how you tell whether a sound is coming from the front or back, because the ears generally point forwards.
2. Phase difference. Because your ears are in slightly different positions, they pick up the sound with different phase. This is particularly useful for lower frequencies (<500 Hz), because the human ear has low time resolution.
3. Timing difference. The sound hits your ears at slightly different times. Your brain can calculate this time from the phase differences at different frequencies. It is particularly useful with sounds that are transient and frequency-rich.
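Cues 1 and 3 can be turned into numbers with a rough sketch like the following (Python; the constants, function name and the sine-law/Woodworth-style approximations are my own illustrative assumptions, not from the post):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def binaural_cues(azimuth_deg, fs=44100):
    """Rough amplitude and timing cues for a source at `azimuth_deg`
    (0 = straight ahead, +90 = hard right)."""
    az = math.radians(azimuth_deg)
    # Cue 1, amplitude difference: simple constant-power sine-law gains,
    # a crude stand-in for real head shadowing (which is frequency dependent).
    gain_r = math.sqrt(0.5 * (1.0 + math.sin(az)))
    gain_l = math.sqrt(0.5 * (1.0 - math.sin(az)))
    # Cue 3, timing difference: Woodworth-style interaural time delay,
    # converted to whole samples at the given sample rate.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    delay_samples = round(abs(itd) * fs)
    return gain_l, gain_r, delay_samples
```

For a source straight ahead the gains are equal and the delay is zero; for a source hard right the right ear gets the louder, earlier signal, with an inter-ear delay on the order of a few dozen samples at 44.1 kHz.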

Your brain can also separate different sounds that happen to occur simultaneously, by grouping harmonics/overtones together and then analysing them separately. The brain also filters out echoes and most of the reverberation, unless they occur more than ca. 50 ms after the original sound (google the Haas effect), at which point the echo is perceived as a separate sound source. You can also work out the general shape of the room from the echoes (yes, humans have echolocation), though humans are generally poor at this (the exception being many blind people).
It also cross-correlates this data with your memory and with your visual perception, and assigns the sounds to particular objects/persons. What finally reaches your consciousness is the interpretation of the sound in a more semantic way - not the actual frequency/phase/stereo/wave diagram that your ears are capturing. This is particularly true in the case of speech and screams - you don't even neurologically perceive them as sounds unless you are consciously trying to.

All of these effects can be exploited. I once created a plugin which simulates them (I can't find it, unfortunately). The funny thing was when I created contradicting cues (the amplitude suggesting that the sound is coming from the left, but the time delay suggesting it is coming from the right). It was really nauseating and distressing.
KG_is_back
 
Posts: 1216
Joined: Tue Oct 22, 2013 5:43 pm
Location: Slovakia

Re: Thoughts on stereo field

Postby tulamide » Sun Feb 28, 2016 10:57 pm

Thanks for this tutorial!

I can follow a lot of it - for example, the three points you mentioned. But I can only imagine how to use them to move around in the stereo field. What I still don't understand are the mechanics that actually widen an already existing stereo field. How do you actually tweak the source to make that sound from the left come from far left? From stereo to mono you just do (l+r)/2, and all is fine. I know that it's not so easy the other way 'round. But there must be some algorithm, right?
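There is such an algorithm: the most common one scales the side signal of the mid/side decomposition discussed earlier in the thread. A minimal sketch (plain Python, hypothetical function name):

```python
def widen(left, right, width):
    """Scale the side signal to change stereo width.
    Note: the mono downmix (l+r)/2 is unaffected by `width`,
    which is why this widening stays mono-compatible."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)               # the (l+r)/2 mono part
        side = 0.5 * (l - r) * width      # the widened difference part
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

The nice property is visible in the math: out_l + out_r = 2*mid = l + r regardless of `width`, so the mono sum never changes while the left/right difference grows or shrinks.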
tulamide
 
Posts: 2109
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: Thoughts on stereo field

Postby RJHollins » Sun Feb 28, 2016 11:26 pm

I'm sure you may have seen or known this, but here are a couple of excerpts:
The Haas Effect



The Haas effect, or the Precedence Effect, is a psychoacoustic effect described by Helmut Haas as the ability of our ears to localize sounds coming from anywhere around us.

In short, our ears determine the position of a sound based on which ear perceives it first; its successive reflections (arriving within 1-35 ms of the initial sound) give us the perception of depth and spaciousness. Pretty simple!

In general, we use our pan knobs to position sounds within the stereo field. Let's discuss panning briefly… If we have a sound coming out of a stereo pair of speakers at equal volume, our ears will interpret the sound as coming from the middle. So panning is not much more than the amount of volume you send to each speaker… Let's remember that our ears depend not only on volume, but also on time and frequency differences for the localization of sounds.


The Haas Effect – How To

The concept of the Haas effect can be applied in order to get a wide, open and spacious sound resulting in a more realistic sense of depth. In our example, we’ll use a stereo-delay plug-in to achieve this effect. There are 3 things to remember:

1) Set the delay time on the side you want the sound to be perceived as coming from to '0' (no delay)

2) Set the delay time on the opposite side anywhere from 1 ms – 35 ms. Solo your track and increase the delay time starting from ‘0’ and listen!

3) Watch for a possible loudness increase since you are converting your mono track into a stereo track when you insert the stereo delay plug-in. My suggestion would be to set the ‘MIX’ control on both sides of the stereo-delay to 50%. Adjust your track volume accordingly.
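The three steps above can be sketched as a tiny offline Haas delay (plain Python; the function and parameter names are illustrative, not from any plug-in):

```python
def haas_widen(mono, fs=44100, delay_ms=15.0, mix=0.5):
    """Haas-effect widening of a mono buffer: the 'leading' side gets
    no delay (step 1), the other side a 1-35 ms delay (step 2),
    blended dry/wet at `mix` to limit the loudness increase (step 3)."""
    d = round(fs * delay_ms / 1000.0)         # delay in whole samples
    left = list(mono)                          # leading side: 0 ms delay
    right = [(1.0 - mix) * s +                 # dry half
             mix * (mono[i - d] if i >= d else 0.0)  # delayed (wet) half
             for i, s in enumerate(mono)]
    return left, right
```

Because the first arrival is on the left, the sound is perceived as coming from the left even though both channels carry it; swapping the return values would pull it to the right instead.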

Haas Effect: also called the precedence effect, this describes the human psychoacoustic phenomenon of correctly identifying the direction of a sound source heard in both ears but arriving at different times. Due to the head's geometry (two ears spaced apart, separated by a barrier), the direct sound from any source first enters the ear closest to the source, then the ear farthest away.

The Haas Effect tells us that humans localize a sound source based upon the first arriving sound, if the subsequent arrivals are within 25-35 milliseconds. If the later arrivals come after this, then two distinct sounds are heard. The Haas Effect holds even when the second arrival is louder than the first (even by as much as 10 dB). In essence we do not "hear" the delayed sound.

This is the hearing example of human sensory inhibition, which applies to all our senses. Sensory inhibition describes the phenomenon where the response to a first stimulus causes the response to a second stimulus to be inhibited; i.e., sound first entering one ear causes us to "not hear" the delayed sound entering the other ear (within the 35-millisecond time window). Sound arriving at both ears simultaneously is heard as coming from straight ahead, or behind, or within the head. The Haas Effect describes how full stereophonic reproduction from only two loudspeakers is possible.

(After Helmut Haas's doctoral dissertation, presented to the University of Gottingen, Germany, as "Über den Einfluss eines Einfachechos auf die Hörsamkeit von Sprache"; translated into English by Dr. Ing. K.P.R. Ehrenberg, Building Research Station, Watford, Herts., England, Library Communication no. 363, December 1949; reproduced in the United States as "The Influence of a Single Echo on the Audibility of Speech," J. Audio Eng. Soc., Vol. 20 (Mar. 1972), pp. 145-159.)


Just following the thread ...
RJHollins
 
Posts: 1399
Joined: Thu Mar 08, 2012 7:58 pm
