Question about audio delays in polyphonic sections


Postby Spogg » Fri Oct 28, 2016 12:43 pm

Here’s a question for DSP and Flowstone gurus.

I’ve been playing around with 112dB’s new Cascade synth.
https://www.112db.com/instruments/cascade/

This is a very impressive new idea for me. By carefully auditioning and experimenting with the cascade section, using the on-board impulse and the delay settings, I’ve managed to reproduce the cascade effect’s signal flow. I know it’s valid because mine responds and sounds the same. In effect it’s a huge diffusion “reverb” that builds at a rate set by the delays (32 max), with zero overall or per-stage feedback.
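To show what I mean by the structure, here’s a rough Python sketch of a feed-forward cascade as I understand it: each stage is just a delay feeding the next, with every stage’s output summed into the wet signal and no feedback anywhere. The stage count, delay times and gains are mine for illustration, not 112dB’s actual values:

```python
# Rough sketch of a feed-forward delay cascade (my reading of the structure,
# not 112dB's code). Each stage delays the signal, adds its output to a wet
# sum, and feeds the next stage; there is no feedback path anywhere.
import numpy as np

def cascade(dry, delay_samples, gains):
    """dry: 1-D float array; delay_samples/gains: one entry per stage."""
    wet = np.zeros_like(dry)
    stage_in = dry
    for d, g in zip(delay_samples, gains):
        stage_out = np.zeros_like(stage_in)
        stage_out[d:] = stage_in[:len(stage_in) - d]   # pure delay, no feedback
        wet += g * stage_out
        stage_in = stage_out                           # cascade into next stage
    return wet

# Example: 8 stages with growing delays build a diffusion-like tail from an impulse.
x = np.zeros(48000)
x[0] = 1.0                                             # impulse at 48 kHz
y = cascade(x, delay_samples=[300 * (i + 1) for i in range(8)], gains=[0.7] * 8)
```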

However, I can only make it in blue (mono). The synth itself has a render system, so it creates a voice for every single note and reserves RAM to play it when keys are pressed. They ask you to set the number of keys available on your keyboard, so if you jump from 25 keys to 88 keys you see a huge increase in the render time, and the displayed RAM in use changes accordingly. This is accompanied by a 60% spike in CPU on core 1 of my i7.

In pre-render mode you can set how far into the cascade you will be when a note is sounded, and in this mode any change on the on-board synth triggers a new full render. So it must silently pre-create the whole sound for each note and then move a start pointer to a point within the “sample” it created. If you don’t pre-render, you get the whole build-up as you would in a conventional algorithmic reverb.
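As far as I can tell the pre-render works something like this. Here’s a hypothetical Python sketch of my reading of it (the names, envelope and 10-second length are just placeholders, nothing to do with their actual code):

```python
# Hypothetical sketch of the pre-render idea as I understand it: render the
# full tail for each key once, keep it in RAM, then start playback at an
# adjustable offset into that buffer when the key is pressed.
import numpy as np

SR = 48000
prerendered = {}                          # MIDI note -> rendered tail

def render_note(note):                    # stand-in for the real (expensive) render
    return np.random.randn(10 * SR) * np.exp(-np.linspace(0.0, 6.0, 10 * SR))

def prerender(notes):
    for n in notes:                       # done silently, before playing
        prerendered[n] = render_note(n)

def note_on(note, start_seconds=0.0):
    start = int(start_seconds * SR)       # "move the start pointer"
    return prerendered[note][start:]      # what the voice actually plays

prerender(range(36, 36 + 61))             # e.g. a 61-key range
voice = note_on(60, start_seconds=2.5)    # start 2.5 s into the cascade
```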

In Flowstone I’ve never been able to use an audio delay in a polyphonic section because you get stream interruptions (clicks) with every new note. Presumably FS addresses a new area of RAM each time a note sounds and closes it after the release time has finished.

My question: is there any way around this? I would need an optimised delay of up to 10 seconds that can be used in a polyphonic situation, with 32 instances of the delay per note. This delay would have to hold on to its RAM allocation for all time. A silent render process could then play, say, 8 notes to initialise the delays before actual playing.
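In pseudo-code terms, this is the sort of delay I’m after (Python just for illustration; the class, pool size and numbers are made up):

```python
# Sketch of a delay that holds on to its RAM for all time: buffers allocated
# once per voice at load time and never re-allocated on note-on, so there is
# nothing to interrupt the stream. 16 voices x 32 delays x 10 s of float32 at
# 48 kHz is roughly 1 GB, so the RAM cost adds up fast.
import numpy as np

SR, MAX_DELAY_S, DELAYS_PER_VOICE, VOICES = 48000, 10, 32, 16

class PersistentDelay:
    def __init__(self):
        self.buf = np.zeros(MAX_DELAY_S * SR, dtype=np.float32)  # held forever
        self.write = 0

    def process(self, x, delay_samples):
        read = (self.write - delay_samples) % len(self.buf)
        y = self.buf[read]
        self.buf[self.write] = x
        self.write = (self.write + 1) % len(self.buf)
        return y

# One fixed pool created up front - note-on just picks a free voice,
# and a silent "render" pass could push a few notes through to warm it up.
voices = [[PersistentDelay() for _ in range(DELAYS_PER_VOICE)]
          for _ in range(VOICES)]
```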

I think this is probably impossible with Flowstone but if there are any possibilities I’d love to know.

Cheers

Spogg

Re: Question about audio delays in polyphonic sections

Postby martinvicanek » Fri Oct 28, 2016 7:58 pm

Sounds to me like wavetable synthesis with the wave tables created on the fly in a particular way.
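Something along these lines, in a minimal Python sketch (the saw table here is just a stand-in for whatever the render would actually produce):

```python
# Minimal wavetable sketch: build a table at note-on ("on the fly"), then
# read it with a phase accumulator at the note's frequency.
import numpy as np

SR, TABLE_LEN = 48000, 2048

def make_table():                                  # stand-in for the real render
    n = np.arange(TABLE_LEN)
    return sum(np.sin(2 * np.pi * k * n / TABLE_LEN) / k for k in range(1, 32))

def play(table, freq, seconds):
    phase_inc = freq * TABLE_LEN / SR              # table samples per output sample
    phases = (np.arange(int(seconds * SR)) * phase_inc) % TABLE_LEN
    return table[phases.astype(int)]               # nearest-sample lookup

out = play(make_table(), freq=220.0, seconds=1.0)
```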

Re: Question about audio delays in polyphonic sections

Postby Spogg » Sat Oct 29, 2016 8:37 am

martinvicanek wrote: Sounds to me like wavetable synthesis with the wave tables created on the fly in a particular way.


Would it be possible for you to elaborate, Martin?
I've done thought experiments on how the synth might be emulated in FS using a sample-player system, but it's the polyphony that defeats me at the moment.
If I render one sample and play it polyphonically, the characteristics of each note would be different: slower or faster as well as higher or lower in pitch. That's not what the Cascade sounds like, and pitch-shifting and/or time-stretching each note would only be effective across a narrow range and would introduce artefacts not heard in the Cascade synth - it sounds very clean.
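Just to put numbers on why one shared render doesn't work:

```python
# Repitching a single render by resampling changes its duration too:
# the playback rate for a shift of s semitones is 2**(s/12).
render_seconds = 10.0
for s in (-12, 0, 7, 12):
    rate = 2 ** (s / 12)
    print(f"{s:+3d} semitones: rate {rate:.3f}, tail lasts {render_seconds / rate:.2f} s")
# +12 semitones plays at double speed, so the 10 s cascade collapses to 5 s.
```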

The Cascade synth works a bit like the old Mellotron. Every individual note has its own "recording". The pre-render function is a bit like pulling the tape forward for the note-start point, once the recording (render) has been made.

Maybe the answer is to create a drum-machine style of synth where each "pad" is a note and the start pointer can be shifted. So that's, say, 61 individual "pads", one for each key on my 5-octave keyboard, plus a system that can render 10-second samples in very fast non-real time. If it had to be done in real time it would take about 10 minutes to create all the samples sequentially, and that's no good.
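Something like this offline loop is what I have in mind (a Python sketch with made-up names, nothing FlowStone-specific):

```python
# Render every "pad" offline: 61 keys x 10 s = 610 s of audio, i.e. about
# 10 minutes if done in real time, but this loop just runs as fast as the
# CPU allows. The sine/decay body is only a placeholder for the real DSP.
import numpy as np

SR, SECONDS, KEYS = 48000, 10, range(36, 36 + 61)

def render_cascade_for_note(note):
    t = np.arange(SECONDS * SR) / SR
    return np.sin(2 * np.pi * 440.0 * 2 ** ((note - 69) / 12) * t) * np.exp(-t)

pads = {note: render_cascade_for_note(note) for note in KEYS}   # non-real-time
```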

Before I burn my brains out on this - or give up - have you any suggestions or comments?

BTW I'm developing the mono effect version of this, and it's shaping up to be a great creative toy in its own right. Watch out for the Quilcom Waterfall!

Cheers

Spogg

Re: Question about audio delays in polyphonic sections

Postby martinvicanek » Mon Oct 31, 2016 1:15 am

On second thought (reading), you have 4 delays with 8 taps each. For the pre-render you'd have to store the contents of the 4 delays for each note. I don't see how you would handle pitch bending, though.
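i.e. something along these lines (a quick Python sketch with arbitrary tap times, just to show the structure):

```python
# One way to picture "4 delays with 8 taps each": each line is a single
# circular buffer read at 8 different offsets, still with no feedback.
import numpy as np

SR = 48000

class MultiTapDelay:
    def __init__(self, max_seconds, taps, gains):
        self.buf = np.zeros(int(max_seconds * SR))
        self.write = 0
        self.taps = [int(t * SR) for t in taps]          # tap times in samples
        self.gains = gains

    def process(self, x):
        self.buf[self.write] = x
        y = 0.0
        for t, g in zip(self.taps, self.gains):
            y += g * self.buf[(self.write - t) % len(self.buf)]
        self.write = (self.write + 1) % len(self.buf)
        return y

# 4 lines, 8 taps each (arbitrary times and gains).
lines = [MultiTapDelay(10, taps=[0.05 * (i + 1) * (k + 1) for i in range(8)],
                       gains=[0.5] * 8) for k in range(4)]
```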

