What should I make next

For general discussion related to FlowStone

What should I make next

Postby BobF » Sun Jul 03, 2016 5:10 pm

Hello gang,

Ok, here is another crazy idea. I am working on 3 or so new modules now, which I will post within the next month or two, but after that I have no idea. So the question is: what would you all like to see made? Maybe I can make it and maybe I can't, but then maybe someone else will, or you will get a few versions of the same thing. So post your requests, and in the upcoming months we will see what we come up with.

Later then, BobF.....
BobF
 
Posts: 598
Joined: Mon Apr 20, 2015 9:54 pm

Re: What should I make next

Postby tester » Sun Jul 03, 2016 9:27 pm

I have a heavy idea to explore. :-]

True/realistic sound spatialization.

To clarify: we are speaking about individual sound objects, for which we would like to specify:

1) Size and distribution: is it a small sound source or a large one? Is it single-pointed or multi-pointed? Mostly solid or mostly diffused?

2) Position on the vertical axis: generally I would limit the design to sounds placed relatively close to the body, not far away. With sounds close to the body, in binaural setups it's relatively easy to get the sensation of a sound that is above the head or a little below the hips.

3) Position on the horizontal axis: the most problematic part of current binaural designs is back/front sound identification.

4) True spatialization always gives the sense that the sound is outside the headphones, not between them.

So it's not a case of trying to add a general space to a general sound and expecting that "everything is in place" around the listener (that's probably impossible); rather, we want to shape a sound source around the listener.

*

If I understand this topic correctly, we probably need two apps.

One for creating binaural/spatial cues (or one that accepts such databases and converts them to something FlowStone-ish), and one for applying the spatialization. Perhaps there are now better physical models and we don't need the capturing app; I don't know.

On the technical side, it would probably be a virtual sphere with multiple points on it. Each point represents a certain type of filter (fed with binaural cues), and the spatial sense would be created by mixing between 3 or more filters as the sound source moves across the space.
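The point-mixing idea above can be sketched quickly. This is a minimal Python illustration (not FlowStone code), and everything in it is a hypothetical placeholder: the random "impulse responses" stand in for real binaural filters, and the inverse-angle weighting is just one plausible blending rule.

```python
import numpy as np

def nearest_filter_mix(direction, points, irs, k=3):
    """Blend the k filter points nearest to `direction` on the unit sphere.

    direction : unit vector toward the virtual source
    points    : (N, 3) array of unit vectors, one per stored filter
    irs       : (N, L) array of impulse responses, one per point
    """
    # Angular closeness via dot product (all vectors are unit length).
    cos_angles = points @ direction
    nearest = np.argsort(cos_angles)[-k:]          # k closest points
    # Inverse-angle weights, normalised to sum to 1.
    angles = np.arccos(np.clip(cos_angles[nearest], -1.0, 1.0))
    weights = 1.0 / (angles + 1e-6)
    weights /= weights.sum()
    # Weighted sum of the impulse responses; convolve the source with it.
    return (weights[:, None] * irs[nearest]).sum(axis=0)

# Usage: blended filter for a source above and in front of the listener.
rng = np.random.default_rng(0)
points = rng.normal(size=(32, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)
irs = rng.normal(size=(32, 64))
ir = nearest_filter_mix(np.array([0.0, 0.7071, 0.7071]), points, irs)
source = rng.normal(size=1024)
binaural = np.convolve(source, ir)   # apply the blended filter
```

Moving the source smoothly across the sphere then amounts to recomputing the weights per block and cross-fading, which is cheap compared to the convolutions themselves.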



*

A long time ago there was a plugin called Longcat H3D. It did a pretty nice job, but it had its drawbacks too. One of them was the lack of polar coordinates. Besides, the best results with such binaural/spatial sounds come when the sounds are in motion, and automation wasn't the H3D's strong point. And now the company is dead. I had to virtualize a whole system with the plugin installed, to avoid its web registration each time.

*

Crazy enough? ;-)

*

Now, there are some issues with binaural approaches. One of them is this: when I record spatial sounds with my own ears (Soundman OKMs), I and many of my friends can hear very deep vertical spatialization, but other people don't hear it (for them, the sound, instead of going down, moves far away at shoulder level or so). This is a recording captured by Zuccarelli. And some of my examples: sample1 and sample2

And some crazy stuff I do: Clay dissolving in water (stereo) and Clay dissolving in water (re-recorded from mobile and via OKMs). :mrgreen:
Need to take a break? I have something right for you.
Feel free to donate. Thank you for your contribution.
tester
 
Posts: 1786
Joined: Wed Jan 18, 2012 10:52 pm
Location: Poland, internet

Re: What should I make next

Postby RJHollins » Mon Jul 04, 2016 1:01 am

Back in the day, we used a unit called 'Q-Sound' from QSound Labs [early 1990s].
From the Wiki:
QSound is the original name for a positional three-dimensional (3D) sound processing algorithm from QSound Labs that creates 3D audio effects from multiple monophonic sources and sums the outputs to two channels for presentation over regular stereo speakers.
RJHollins
 
Posts: 1571
Joined: Thu Mar 08, 2012 7:58 pm

Re: What should I make next

Postby Spogg » Mon Jul 04, 2016 8:20 am

tester wrote:I have a heavy idea to explore. :-]

True/realistic sound spatialization.



Fascinating idea, but with some big hurdles:

In real life, when we move our head even slightly, the spatial cues change accordingly. This can't be reproduced with headphones unless we use a VR-type headset which tracks head movement and angle in all inclinations and directions, and adjusts the virtual sound source processing to match. That way, if you turn around 180 degrees, left and right become swapped, for example.

We all have varying ear shapes, and we are used to the colouration and directional effects of our own ears. This places a limitation on binaural in-ear recording and replay, unless you record within your own ears and replay only for yourself. I can visualise a system that creates a 3D model of one's ears and makes subtle corrections to match the model, but you are still left with the head-movement issue above.

If we use speakers, we need a multi-channel setup with surround sound in 3 dimensions to give us vertical as well as lateral auditory cues. This removes the need for head tracking, but relies on the listener being in the exact sweet spot to get the required phantom-centring effect, as in normal stereo.

However, given these limitations of principle, it should still be possible to make a module which comes closer to spatialization than regular stereo and gives a more interesting spatial sound. It would need to make use of the Haas effect and appropriate filter and level tweaks on a per-input basis. I can visualise such a plugin for mono tracks and may even have a bash at it sometime.
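The Haas-effect panning mentioned above can be sketched in a few lines: delay the far-ear channel by a fraction of a millisecond and attenuate it slightly, and the brain localises the source toward the earlier, louder channel. A rough Python sketch follows; the maximum delay and attenuation values are assumptions picked for illustration, not measured data.

```python
import numpy as np

def haas_pan(mono, sr, azimuth_deg, max_delay_ms=0.7, max_atten_db=6.0):
    """Position a mono signal left/right using interaural time and level cues.

    azimuth_deg: -90 (hard left) .. +90 (hard right).
    """
    frac = abs(azimuth_deg) / 90.0
    delay = int(round(frac * max_delay_ms * 1e-3 * sr))   # time cue, in samples
    gain = 10.0 ** (-frac * max_atten_db / 20.0)          # level cue on far ear
    near = mono
    far = np.concatenate([np.zeros(delay), mono * gain])[: len(mono)]
    # Source to the right: the right ear is the near (earlier, louder) ear.
    if azimuth_deg >= 0:
        return np.stack([far, near], axis=0)   # rows: (left, right)
    return np.stack([near, far], axis=0)

# Usage: a 440 Hz tone perceived to the right of centre.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
stereo = haas_pan(tone, sr, azimuth_deg=45)
```

A per-input module would add the filter tweaks Spogg mentions (e.g. gentle high shelving on the far ear) on top of this delay-plus-gain core.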

Like I said, a fascinating idea!

Cheers

Spogg
Spogg
 
Posts: 3358
Joined: Thu Nov 20, 2014 4:24 pm
Location: Birmingham, England

Re: What should I make next

Postby tester » Mon Jul 04, 2016 9:31 am

I'm not speaking about VR; we don't have a general market for that yet (in terms of recording designs). I'm speaking about sound realism (the "outside the headphones" effect), mainly about a deep vertical plane for sounds close to the body, and briefly about sounds moving around, with front/back distinction. Your back is always behind you, isn't it?

As for ear shapes, I think yes and no. I think most folks are missing something crucial. As an example, I did some experiments on my own binaural recordings versus what Zuccarelli did, filtering various bands out of the recordings, and I noticed something strange. 1) When I remove various bands from my binaural files, specific vertical elevation levels disappear (and the sound moves far away behind me); these vertical coordinates depend strictly on which bands are filtered out, all across the spectrum. 2) In the case of Zuccarelli's file, the whole vertical plane disappears when I filter out only one single band: 8000-9500 Hz. For me the effect was still there when modifying this band by a few decibels either way, or by adding some pre-delay to that band. I guess the spatiality is made up of outer-ear characteristics and some in-head characteristics as well (acoustic shadowing/filtering).
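The band-removal listening test described above is easy to reproduce offline. A rough Python sketch, using FFT bin masking as a crude brick-wall band-stop (fine for an experiment, though a proper filter would avoid the edge ringing):

```python
import numpy as np

def remove_band(signal, sr, lo_hz, hi_hz):
    """Crude band-stop: zero the FFT bins between lo_hz and hi_hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[(freqs >= lo_hz) & (freqs <= hi_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Usage: one tone inside the 8000-9500 Hz band, one outside it.
sr = 48000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 8800 * t) + np.sin(2 * np.pi * 1000 * t)
filtered = remove_band(sig, sr, 8000, 9500)   # 8.8 kHz component removed
```

Running a binaural file through this with different band choices and listening for where the vertical image collapses is exactly the kind of experiment described in the post.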

I don't think we must replicate "the exact human ear, or sound location to the centimetre". I rather think we should focus on creating a "realism space" as wide as possible, so the experimental design may not even look like a human ear or head if that gives better results. The effect will vary from person to person, but it will be great for almost everyone.

As for speakers: sure, it's mostly for headphones, but a dedicated 2-speaker design should also do for some scenarios (tested).
tester
 
Posts: 1786
Joined: Wed Jan 18, 2012 10:52 pm
Location: Poland, internet

Re: What should I make next

Postby Spogg » Mon Jul 04, 2016 9:45 am

Interesting!

If you take a mono sound source, what do you have to do to it to make it appear to come from above or below, and what is the difference between above and below processing? I'm talking about headphones, of course.

Also can you recommend some good further reading on the principles?

Cheers

Spogg
Spogg
 
Posts: 3358
Joined: Thu Nov 20, 2014 4:24 pm
Location: Birmingham, England

Re: What should I make next

Postby tester » Mon Jul 04, 2016 11:41 am

In H3D, they used mono-stereo narrowing to make the sound source more mono or more stereo overall. A mostly-mono sound was easier to place, because placement was based on traditional mixing combined with HRTF mixing.

To some degree, for vertical elevation and even horizontal placement, you technically don't even need a stereo output. A single ear per se is asymmetric. If you hear good spatiality in that Zuccarelli file, try using only one ear.

My understanding is this. On one side you have the outer ear's size and shape, the ear canal, the head: the external shadowing. Most of the research was focused on that part when I explored this topic, but that was some years ago.

On the other side, there is something happening within the head's interior (and there was not much about that). If I understand correctly, the eardrum is bidirectional. Plus, inside your head you have structures like the cochlea. This is enough to add an asymmetric filter/shadowing that produces some acoustic shaping from within.

When doing physical experiments, I would focus on both parts: general ears (I have a silicone pair of them; this is enough to add some sense of "outsideness"), and some mad-science head interior, to emphasise the other spatiality cues.

As for papers, well... the problem is, I don't trust the "research" in that area. When I was searching for information (again, some years ago), folks repeated the theories of other folks; there was no innovation, no openness. So at some point I came to the conclusion that I'd like to work on my own design (but then I had some long-term health problems and had to focus on other things; still recovering).

As for the plugin/app: I suspect it would have to be a set of spheres (a few sizes) with polar coordinates, on which multiple points are located. Each point represents a "filter" (or distortion unit) that influences the sound source. When manipulating the sound source, the output is probably combined from 3 (on a single sphere) or more points. But how to make it, and how to make a capturing design for getting the spatial cues, is too much for me right now.
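The multi-sphere part of this idea (several sphere sizes, for sources at different distances) could look roughly like the sketch below. Everything here is a hypothetical placeholder: the radii, the flat gain curves standing in for real filters, and the linear radial blend are just one way to fill in the design.

```python
import numpy as np

def sphere_to_cartesian(azimuth, elevation):
    """Polar coordinates (radians) to a unit direction vector."""
    return np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

def radial_blend(radius, radii, filters_per_sphere):
    """Blend between the two stored sphere sizes bracketing `radius`.

    radii              : sorted distances of the stored spheres
    filters_per_sphere : one filter (here just a gain curve) per sphere
    """
    radius = np.clip(radius, radii[0], radii[-1])
    hi = np.searchsorted(radii, radius)
    if hi == 0:
        return filters_per_sphere[0]
    lo = hi - 1
    frac = (radius - radii[lo]) / (radii[hi] - radii[lo])
    return (1 - frac) * filters_per_sphere[lo] + frac * filters_per_sphere[hi]

# Usage: a source halfway between the 0.2 m and 1.0 m spheres, to the left.
radii = [0.2, 1.0, 3.0]
filters_per_sphere = [np.full(8, g) for g in (1.0, 0.5, 0.1)]  # toy "filters"
mixed = radial_blend(0.6, radii, filters_per_sphere)
direction = sphere_to_cartesian(np.pi / 2, 0.0)
```

Combined with the 3-point angular mixing on each sphere, this gives the full polar-coordinate lookup the post describes: pick a radius, blend the two bracketing spheres, then blend the nearest points on each.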
tester
 
Posts: 1786
Joined: Wed Jan 18, 2012 10:52 pm
Location: Poland, internet

Re: What should I make next

Postby Spogg » Mon Jul 04, 2016 12:26 pm

Here's an interesting article on what's been discussed here:

https://en.wikipedia.org/wiki/Sound_localization

I'm very tempted to experiment with all this...

Cheers

Spogg
Spogg
 
Posts: 3358
Joined: Thu Nov 20, 2014 4:24 pm
Location: Birmingham, England

Re: What should I make next

Postby Walter Sommerfeld » Mon Jul 04, 2016 2:36 pm

Hi BobF,

for a long time now I've been looking for a way to create a 'Small Speaker Bass Enhancer'.

Stolen Text:
It uses a technique called the phantom fundamental, which was already known to
pipe organ builders hundreds of years ago.
Psychoacoustics shows that a fundamental component can be heard even if it is only
represented by its harmonics. An example would be two organ pipes playing at 100 Hz and
150 Hz, where our brain creates a fundamental at 50 Hz.
Using that technique, small speakers with a cut-off frequency of 100 Hz can be used to
create a 50 Hz phantom component which is there only because the psychoacoustic part of
our brain extrapolates it.

Cut-Off Frequency [Hz]:
The frequency range is split at the cut-off frequency, to create harmonics for all tones
below that frequency.

Original Bass [Hz]:
Adds the original bass below the cut-off frequency to the output. That is only useful if
your speakers can reproduce those low frequencies.

Extended Bass [Hz]:
Adds the harmonics of the original bass fundamental to the output signal, without the
fundamental itself.

Gain [dB]:
The signal needs to be attenuated, because the method increases the output amplitude.

...it makes sense to place a Compressor and Limiter behind the Small Speaker
Bass Enhancer.
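The phantom-fundamental process quoted above can be sketched as: isolate the bass below the cut-off, run it through a nonlinearity to generate harmonics at 2f, 4f, ..., keep only the harmonics above the cut-off, and mix them with the rest of the signal. A rough Python sketch; the full-wave rectifier and brick-wall FFT filters are simplifications for illustration, not how a production enhancer would be built.

```python
import numpy as np

def phantom_bass(signal, sr, cutoff_hz):
    """Sketch of a phantom-fundamental bass enhancer."""
    def band(x, lo, hi):
        # Crude brick-wall band-pass via FFT bin masking.
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
        spec[(freqs < lo) | (freqs > hi)] = 0.0
        return np.fft.irfft(spec, n=len(x))

    bass = band(signal, 0, cutoff_hz)                    # the inaudible bass
    harmonics = band(np.abs(bass), cutoff_hz, sr / 2)    # rectify, keep overtones
    upper = band(signal, cutoff_hz, sr / 2)              # rest of the signal
    out = upper + harmonics
    return out / max(np.max(np.abs(out)), 1e-12)         # the "Gain" attenuation

# Usage: a 60 Hz fundamental becomes energy at 120 Hz, 240 Hz, ...
sr = 48000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 60 * t)
out = phantom_bass(sig, sr, cutoff_hz=100)
```

A small speaker rolling off below 100 Hz can reproduce the 120/240 Hz harmonics, and the listener's brain supplies the missing 60 Hz, which is exactly the organ-pipe effect from the quoted text.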


and/or this:
...three major technologies of bass enhancement. The Harmonic Bass Enhancer uses psychoacoustics to calculate precise harmonics that are related to the fundamental tones of the sound; when these harmonics are combined with the original, it creates the effect of lower, deeper frequencies. Linkwitz Bass Enhancement technology uses a Linkwitz filter to regenerate the bass frequencies and then combines them with the original. The Resurrection Bass Enhancer analyses the spectrum of the bass and then generates bass frequencies based on psychoacoustic codes.
Walter Sommerfeld
 
Posts: 249
Joined: Wed Jul 14, 2010 6:00 pm
Location: HH - Made in Germany

Re: What should I make next

Postby tulamide » Mon Jul 04, 2016 4:01 pm

Spogg wrote:Here's an interesting article on what's been discussed here:

https://en.wikipedia.org/wiki/Sound_localization

I'm very tempted to experiment with all this...

Cheers

Spogg

3D sound tech is used a lot in gaming; it's how, in a 3D game, you can locate sound sources even without a surround system. The tech itself is definitely light on CPU: the Web Audio API supports 3D sound, and it is used in web games via HTML5.
http://www.html5rocks.com/en/tutorials/webaudio/positional_audio/

I'm not sure if you will ever need this, but here's a simple way to map 3D data on a 2D plane:
x2d = xOffset + scale * x3d / ( z3d + distance );
y2d = yOffset + scale * y3d / ( z3d + distance );

offset, scale and distance refer to the virtual camera (or in this case, the virtual listener).
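tulamide's formula runs essentially as written. Here it is transcribed into Python with a quick check of the shrink-with-distance behaviour; the default parameter values are arbitrary examples, not anything from the original post.

```python
def project(x3d, y3d, z3d, x_offset=0.0, y_offset=0.0, scale=100.0, distance=4.0):
    """Perspective projection of a 3D point onto a 2D plane.

    Points farther away (larger z3d) land closer to the offset point,
    which is exactly the shrinking-with-distance effect perspective needs.
    """
    x2d = x_offset + scale * x3d / (z3d + distance)
    y2d = y_offset + scale * y3d / (z3d + distance)
    return x2d, y2d

near = project(1.0, 1.0, 0.0)   # close point, far from centre
far = project(1.0, 1.0, 6.0)    # distant point, pulled toward centre
```

For a spatializer GUI this would map a virtual source position onto a 2D panning pad, with the listener at the offset point.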
"There lies the dog buried" (German saying translated literally)
tulamide
 
Posts: 2714
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

