
DSP code box - flawed?


Postby tulamide » Mon Jun 01, 2020 7:17 pm

This might cause controversy. I was asking myself why I have had so many difficulties with the DSP code editor for so long, and I think I spotted two main reasons. But first, a little bit about me.

I come from the graphics world: game development, logo design, 3D graphics (using Cinema4D), and so on. Something used a lot there is the "pixel shader". A shader is a small, specialised program (mostly running on the graphics card; vertex shaders, for example, deal with 3D geometry), and a pixel shader accesses an endless stream of pixels throughout the lifetime of the application, be it a game, Cinema4D or Photoshop.

A pixel is a 32-bit value (just like an audio sample in Flowstone), and to fill one FullHD frame you need over 2 million of them (1920 x 1080 = 2,073,600). Games and applications mostly run at 60 frames per second (fps), so that's 124,416,000 (124 million) pixels per second. Compare that to the 44,100 samples per second of audio, and as a game developer you mildly smile.
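The arithmetic above, spelled out in a few lines of Ruby for anyone who wants to check it (the figures are the ones given in the post):

```ruby
# Pixel throughput of one FullHD frame at 60 fps vs. CD-quality audio.
pixels_per_frame = 1920 * 1080              # 2,073,600 pixels per frame
pixels_per_second = pixels_per_frame * 60   # 124,416,000 pixels per second
audio_samples_per_second = 44_100

ratio = pixels_per_second / audio_samples_per_second.to_f
puts "#{pixels_per_second} pixels/s vs #{audio_samples_per_second} samples/s (~#{ratio.round}x)"
```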

I created pixel shaders for DirectX (the interface to the graphics card) for various tasks, so of course I had to take care of speed and efficiency. For a graphics card the resolution is not so important; more important is how many shaders are used, and what kind. Pixel shaders are the most demanding of them, and I once tested a very complex shader algorithm applied to over 58,000 objects. I had to turn the framerate down to 30 (so "just" 62 million pixels per second), but that rate was stable. On hardware from 10 years ago!

In short: I know the concept of "one sample at a time", I am trained in writing efficient, speed-optimised code, and I have dealt with this concept on a much more demanding level.

So why do I have these issues? I think there are two main points. From the examples below you can see that I tend to visualize my algorithms. So far I have failed to manage that for DSP code.

And secondly, the DSP editor feels unfinished. I always have the impression that it was planned to do much more, that preparations were made in this regard which now complicate things, but that it was never finished and was left in that awkward state.

In a pixel shader I can directly address the current pixel. I don't need to define a stream, as that is the whole purpose of a pixel shader: working on a pixel stream! So you write something like current.getRed() (call it current, this, or self; the name doesn't matter for this example) and you get the value of the red channel of this pixel.

But that isn't all. A pixel shader is also aware of its context! I can easily access any pixel of the current frame, just by pixel(x, y).getRed(), for example. This is read access only, of course; only the current pixel can be altered. But the good thing about this is that I can alter the current pixel based on its neighbours! This is used for all kinds of useful stuff, like blur algorithms, color correction and interpolations. No latency is introduced, since it is all one frame that is about to be drawn (just like the underlying buffer in Flowstone, from which the current sample is drawn). I can calculate distances, angles, color predictions and much more just from this little feature. And there is much more. All of it is logical.
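The neighbour-access idea can be sketched in plain Ruby. This is a rough model only, not real shader code: `box_blur` is a made-up name, and the nested array stands in for the framebuffer that a GPU would process per-pixel.

```ruby
# Sketch of the read-neighbours/write-current idea: each output pixel is the
# average of the current pixel and its valid neighbours. The source frame is
# only ever read; only the current output pixel is written.
def box_blur(frame)
  h = frame.length
  w = frame.first.length
  Array.new(h) do |y|
    Array.new(w) do |x|
      neighbours = [[y, x], [y, x - 1], [y, x + 1], [y - 1, x], [y + 1, x]]
                   .select { |py, px| py.between?(0, h - 1) && px.between?(0, w - 1) }
                   .map { |py, px| frame[py][px] }
      neighbours.sum / neighbours.length.to_f
    end
  end
end
```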

The DSP code editor, however, is illogical. Although the last sample is stored in a buffer, I can't access it. Instead it needs to be stored a second time, inside the DSP editor, with a float definition. It also introduces latency if I need access to samples other than the current one. Time-consuming, illogical and inefficient.
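The "store it a second time" pattern being complained about, modelled in Ruby (a sketch; `OneSampleSmoother` is a made-up name, not FlowStone DSP syntax): a per-sample process has no buffer to look into, so any access to the previous sample means keeping your own copy as state, like a `float prev;` in DSP code.

```ruby
# Per-sample processing with self-managed history: the previous sample has to
# be duplicated into our own state variable, and the two-point average comes
# out one sample "late" relative to the input.
class OneSampleSmoother
  def initialize
    @prev = 0.0               # our own copy of the last sample: the "second storage"
  end

  def process(sample)
    out = (sample + @prev) * 0.5
    @prev = sample
    out
  end
end
```

For example, feeding in 1.0 then 0.0 yields 0.5 both times, since each output mixes the current sample with the stored previous one.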

Given the comparisons, what are your thoughts?


Examples:

ellipse pixel shader
https://www.dropbox.com/s/gvvfwe1m56l1e ... x.png?dl=0

visualization of an algorithm that sets an object a relative distance apart from a circular area that encompasses a defined group of objects
https://www.dropbox.com/s/ydpqvv89n2rgs ... e.png?dl=0

application that builds patterns with self-programmed pixel shaders (and cubic interpolation color range)
https://www.dropbox.com/s/3x98ztyi7vp6222/1.png?dl=0
...and example renderings created with above application
https://www.dropbox.com/sh/qrvcif6mfjv6 ... 4HnBa?dl=0

C-DEX, automatic realtime color palette animation
https://www.dropbox.com/s/7n88imi66jgvs ... n.png?dl=0

a pixel shader applied to over 58,000 objects, running at 30 fps - 8 years ago!
https://www.dropbox.com/s/u6uyqiofbdkqs ... 3.png?dl=0

top down 2D water shader
https://www.dropbox.com/s/dqylddzub1xkqhw/Water.fx?dl=0

the hardware of that time (from 10 years ago)
https://www.dropbox.com/s/5p7hmy0z5yva2 ... o.png?dl=0
"There lies the dog buried" (German saying translated literally)
tulamide
 
Posts: 2714
Joined: Sat Jun 21, 2014 2:48 pm
Location: Germany

Re: DSP code box - flawed?

Postby deraudrl » Mon Jun 01, 2020 10:00 pm

tulamide wrote:But that isn't all. A pixel shader is also aware of its context! I can easily access any pixel of the current frame, just by pixel(x, y).getRed(), for example. This is read access only, of course; only the current pixel can be altered. But the good thing about this is that I can alter the current pixel based on its neighbours! This is used for all kinds of useful stuff, like blur algorithms, color correction and interpolations. No latency is introduced, since it is all one frame that is about to be drawn (just like the underlying buffer in Flowstone, from which the current sample is drawn). I can calculate distances, angles, color predictions and much more just from this little feature. And there is much more. All of it is logical.


I fully admit I am unclear about how FS buffering is performed internally, but if you're comparing a DAW/VST-sized sample buffer with a video frame, I think you've got a conceptual mismatch in your implied definition of "sample".

Ignoring polyphony, FS is operating on a one-dimensional stream, of which the current sample is just that. But in the graphics case, the stream is two-dimensional: each frame is a sample, regardless of how the GPU hardware feeds individual component pixels to your processing. Yes, you can trivially access neighboring pixels of the one you are interested in, but they are components of the same sample. And if you actually wanted to process the video stream over multiple samples (i.e. frames), that would involve latency, same as in the audio world.

(Note: I'm quite willing to believe the way FS exposes its internal buffer to the DSP component is suboptimal, regardless of anything I said above.)
I keep a pair of oven mitts next to my computer so I don't get a concussion from slapping my forehead while I'm reading the responses to my questions.
deraudrl
 
Posts: 239
Joined: Thu Nov 28, 2019 9:12 pm
Location: SoCal

Re: DSP code box - flawed?

Postby adamszabo » Tue Jun 02, 2020 6:14 am

Yeah, I think it's just that simple: in the audio world you have to think sequentially; everything happens one after another. I am pretty sure that if you were developing audio apps in C++, you would have the same buffer code and such as in FlowStone.
adamszabo
 
Posts: 667
Joined: Sun Jul 11, 2010 7:21 am

Re: DSP code box - flawed?

Postby tulamide » Tue Jun 02, 2020 11:18 am

deraudrl wrote:I fully admit I am unclear about how FS buffering is performed internally, but if you're comparing a DAW/VST-sized sample buffer with a video frame, I think you've got a conceptual mismatch in your implied definition of "sample".
No, I haven't. But you are slightly off. A frame is nothing other than a block of memory allocated to store the pixels. It is also known as a "framebuffer", if you want to google it.
A frame is to a pixel shader what a buffer is to the DSP code editor. Flowstone itself even calls this buffer a "frame" (see Ruby frames). I made a framebuffer example a few years ago, and if you look at it, you will notice that I have full access to the buffer that is hidden in the DSP code editor. Furthermore, you will see that I suddenly have no issue at all doing fancy things, like momentary RMS (used to calculate LUFS, btw.) and basically all the things that I can't do with the DSP code editor, since it is so flawed. That's too bad, because Ruby as a scripting language is way too slow to be used for anything other than the proof of concept I published somewhere in "examples".

adamszabo wrote:Yeah, I think it's just that simple: in the audio world you have to think sequentially; everything happens one after another. I am pretty sure that if you were developing audio apps in C++, you would have the same buffer code and such as in FlowStone.
Nope. Flowstone is the only app I know of (besides SynthEdit) that deals with the sound buffer in this limited way. If you program in C/C++ or use any sound library for C++ (incl. JUCE), you always have access to the sound buffer and can work on any sample in any order you like. You could, for example, smooth a waveform by doing something like buffer(index) = (buffer(index - 1) + buffer(index + 1)) * 0.5 (not a thrilling example, but it shows the way you'd work with the buffer).
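The smoothing snippet above, as runnable Ruby (an illustration; `smooth` is a hypothetical helper, with the edge samples clamped so the first and last indices stay in range):

```ruby
# With random access to the whole buffer, each output sample can be computed
# from its neighbours in any order, exactly as the C++-style snippet suggests.
def smooth(buffer)
  buffer.each_index.map do |i|
    left  = buffer[[i - 1, 0].max]                 # clamp at the start
    right = buffer[[i + 1, buffer.length - 1].min] # clamp at the end
    (left + right) * 0.5
  end
end
```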

Yes, Flowstone is an exception, not the rule.

Re: DSP code box - flawed?

Postby Spogg » Tue Jun 02, 2020 2:40 pm

Very interesting topic!

I don’t have a lot to add, but I do wonder why Ruby can do more with Frames than DSP can with the audio buffer. It just uses a lot more CPU to achieve it.

I would say that Maik has made a lot of great additions to the DSP code for the FS4 alpha, which indicates to me that this could have been done by DSPR before. So I would say the version 3.x DSP code was indeed a "work in progress" more than it being "flawed".

One thing that surprised me when I first started to grapple with DSP was the lack of conditional jumps, especially since the code gets compiled into assembler before running. I had assumed that it was somehow speed-related, which is why we have to resort to bit masking, which is about as unintuitive as knitting, to me anyway.
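Roughly what bit masking buys you, sketched arithmetically in Ruby. This is an illustration only, not FlowStone ASM (which uses SSE comparison masks rather than multiplication); `select` and `clip` are made-up names.

```ruby
# Branchless "select": choose between two values without a conditional jump.
# cond must be 1.0 (true) or 0.0 (false); in SIMD code the comparison itself
# produces an all-ones/all-zeros bit mask that plays this role.
def select(cond, if_true, if_false)
  cond * if_true + (1.0 - cond) * if_false
end

# A hard clip written without branching in the signal path.
def clip(x, limit)
  over = x > limit ? 1.0 : 0.0
  select(over, limit, x)
end
```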

My own ongoing gripe is that FS plugins seem to use something like 10 times the CPU of a similar plugin coded in C++, for example. But that's another topic of course, as is OpenGL support.

Cheers

Spogg
Spogg
 
Posts: 3358
Joined: Thu Nov 20, 2014 4:24 pm
Location: Birmingham, England

Re: DSP code box - flawed?

Postby adamszabo » Tue Jun 02, 2020 2:58 pm

Spogg wrote:My own on-going gripe is that FS plugins seem to use something like 10 times the CPU of a similar plugin coded in C++ for example.


Nope, not if you code it in assembler, and optimize everything.

Re: DSP code box - flawed?

Postby k brown » Tue Jun 02, 2020 4:46 pm

That would explain why most of Martin's things have such a light load.
Website for the plugins : http://kbrownsynthplugins.weebly.com/
k brown
 
Posts: 1198
Joined: Tue Aug 16, 2016 7:10 pm
Location: San Francisco, CA USA

Re: DSP code box - flawed?

Postby trogluddite » Fri Jun 05, 2020 4:36 pm

There are no buffers (except Ruby Frames, or ones you code yourself) :o
That's what "one sample at a time" truly means. :geek:

Yes, really; not even "hidden" ones reserved for Myco. The only buffers are the VST/ASIO/DS inputs/outputs (which we can't access, and accessing them would be silly; it would bypass the upstream/downstream parts outside the DSP code). There are NO buffers in between the primitives/modules; they aren't needed, since every prim does one sample in -> one calculation -> one sample out -> next prim. The whole schematic is compiled into one big "function", which likewise takes one sample in and puts one sample out. Only the "hidden" FlowStone "engine" iterates over the array buffers (ASIO/VST), with the "compiled" schematic "function" as the loop body; not for use by the plebs! :lol:

+1 to @deraudrl: an "object model" is about the object's semantics, not how you implement it... First decide what a thing "represents" and what it will be used for; only then decide what kind of code, etc. It's a very important principle of OOP (and of coding in general).

Semantics:
ONE sample = ONE video frame = a "snapshot" = ONE moment in time.
The "DSP buffer" (not real) = Ruby Frame = video clip = a "sequence" = many "snapshots" in chronological order.

Implementation:
Array = a storage type (likewise hash, linked list, OOP object, struct, POD) = a way to organise a "chunk of memory".
Any of them can hold a snapshot or a sequence (or many other things). Benchmarks + coder-friendliness -> decide which.

An array with a "time" index puts it in a different category (a sequence). Ruby "Frame" is a rubbish name IMHO (no surprise; it's DSPr's Ruby API! :lol: ). A Ruby Frame is not analogous to a video frame; it's analogous to a video "clip": a "sequence", not a "snapshot".

Is SynthEdit the only other "exception"? Yes(ish); it is also coding with pretty boxes and lines and "one sample at a time", which is not a coincidence. But it's not the "only" other; in e.g. Lisp, Haskell, Forth, Z80 machine code, etc., you make a buffer if you want one. Not everything is OOP! ;)

No buffers is because of "one sample", which is because of making things nice for non-coders, novices, and folks who think in pictures. It is part of how FlowStone works, and it will not change. Maybe it is a "flaw", maybe a "compromise", maybe a "constraint", maybe "trog and tulamide talking Martian again"! :lol:

To succeed with ASM/DSP, forget buffers. There are no buffers. DSP is buffer-less. It had a bufferectomy. It is buffer-phobic. The one true sample (at a time) leads us to nirvana. :ugeek:

Please excuse my abrupt words; still a bit of a wobbly head. :?

PS) I have a special secret to tell you: FlowStone is DIFFERENT to C++! :o Yes, really! :o That's why, when you look, you see lots of differences! Also((is(different(dont(need(parenthis)like(Lisp))and)Haskell))code)))! And C++ is not like Mandarin or Swahili either. SOMETIMES C++ is a bit like C++, though. Haha! :lol:
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
trogluddite
 
Posts: 1730
Joined: Fri Oct 22, 2010 12:46 am
Location: Yorkshire, UK

Re: DSP code box - flawed?

Postby tulamide » Fri Jun 05, 2020 5:12 pm

trogluddite wrote:There are no buffers (except Ruby Frames, or ones you code yourself) :o
That's what "one sample at a time" truly means. :geek:

Yes, really; not even "hidden" ones reserved for Myco. The only buffers are the VST/ASIO/DS inputs/outputs (which we can't access, and accessing them would be silly; it would bypass the upstream/downstream parts outside the DSP code). There are NO buffers in between the primitives/modules; they aren't needed, since every prim does one sample in -> one calculation -> one sample out -> next prim. The whole schematic is compiled into one big "function", which likewise takes one sample in and puts one sample out. Only the "hidden" FlowStone "engine" iterates over the array buffers (ASIO/VST), with the "compiled" schematic "function" as the loop body; not for use by the plebs! :lol:

+1 to @deraudrl: an "object model" is about the object's semantics, not how you implement it... First decide what a thing "represents" and what it will be used for; only then decide what kind of code, etc. It's a very important principle of OOP (and of coding in general).

Semantics:
ONE sample = ONE video frame = a "snapshot" = ONE moment in time.
The "DSP buffer" (not real) = Ruby Frame = video clip = a "sequence" = many "snapshots" in chronological order.

Implementation:
Array = a storage type (likewise hash, linked list, OOP object, struct, POD) = a way to organise a "chunk of memory".
Any of them can hold a snapshot or a sequence (or many other things). Benchmarks + coder-friendliness -> decide which.

An array with a "time" index puts it in a different category (a sequence). Ruby "Frame" is a rubbish name IMHO (no surprise; it's DSPr's Ruby API! :lol: ). A Ruby Frame is not analogous to a video frame; it's analogous to a video "clip": a "sequence", not a "snapshot".

Is SynthEdit the only other "exception"? Yes(ish); it is also coding with pretty boxes and lines and "one sample at a time", which is not a coincidence. But it's not the "only" other; in e.g. Lisp, Haskell, Forth, Z80 machine code, etc., you make a buffer if you want one. Not everything is OOP! ;)

No buffers is because of "one sample", which is because of making things nice for non-coders, novices, and folks who think in pictures. It is part of how FlowStone works, and it will not change. Maybe it is a "flaw", maybe a "compromise", maybe a "constraint", maybe "trog and tulamide talking Martian again"! :lol:

To succeed with ASM/DSP, forget buffers. There are no buffers. DSP is buffer-less. It had a bufferectomy. It is buffer-phobic. The one true sample (at a time) leads us to nirvana. :ugeek:

Please excuse my abrupt words; still a bit of a wobbly head. :?

PS) I have a special secret to tell you: FlowStone is DIFFERENT to C++! :o Yes, really! :o That's why, when you look, you see lots of differences! Also((is(different(dont(need(parenthis)like(Lisp))and)Haskell))code)))! And C++ is not like Mandarin or Swahili either. SOMETIMES C++ is a bit like C++, though. Haha! :lol:

You probably wrote this in a rush, because it is not true at all!

- No PC in the world (future quantum PCs aside) is able to work on "1 sample at a time" without buffering, because that would require a precise timer of 1/44100 s, which is impossible with the hardware that exists. Apart from that, the PC would be totally busy with just that one process. Of course so-called "double buffering" is used, and it doesn't matter whether you sync to ASIO buffers or have your own system. A buffer is a buffer!

- Your semantics are so far off that I'm sure you never worked on games. A pixel is a "sample" from the framebuffer, just as a sample is taken from the audio buffer. The word "sample" describes one of the many parts into which a rendition of an analog sound is divided. It does not render the true analog signal, but takes "samples" at a fixed rate; that's why it's called that. Not so difficult to understand, so I'm really surprised about this!

- A video sequence is a bunch of prerendered scenes, placed in a certain order with effects applied. It has nothing to do with a framebuffer as used by graphics cards and audio software. A video sequence is not comparable to anything in Flowstone.

- The last part I won't comment on, because it clearly comes from your currently challenged health, and I don't want to be an asshole for having replied to it. We can talk about it again when you feel better!

Re: DSP code box - flawed?

Postby trogluddite » Fri Jun 05, 2020 7:03 pm

Oh no, of course FlowStone's "one sample at a time" does not mean "at the exact time that the sound happened". Asynchronous interfaces (VST/soundcard) need buffers, yes, of course (as I said)! "One sample at a time" means "one sample goes through the whole schematic in one chunk of CPU time" (-ish!); probably "at once" would have been a better name than "at a time". Maybe this Ruby helps you see how I mean it...
Code:
# You think this....
tempBuffer1 = asioInputBuffer.map{|sample| dspFunction1(sample)}
tempBuffer2 = tempBuffer1.map{|sample| dspFunction2(sample)}
tempBuffer3 = tempBuffer2.map{|sample| dspFunction3(sample)}
# ...etc...
asioOutputBuffer = tempBufferN.map{|sample| dspFunctionN(sample)}
# NB) And you would like the DSP code ("dspFunction") to have access to the "tempBuffers".

# This is FlowStone...
asioOutputBuffer = asioInputBuffer.map do |sample|
  tempSample1 = dspFunction1(sample)
  tempSample2 = dspFunction2(tempSample1)
  tempSample3 = dspFunction3(tempSample2)
  tempSample4 = dspFunction4(tempSample3)
  # ...etc...
  dspFunctionN(tempSampleN)
end
# The "do...end" code block is the whole schematic, "compiled".
# DSP code is a function taking one sample and returning one sample (per link).
# The "tempSample"s are the streamin/streamout links.
# "asioInputBuffer" and "asioOutputBuffer" are the only buffers needed.
# No "tempBuffers" for the "dspFunctions", not even "hidden" ones.
# Possible because everything inside the code block is synchronous (same clock, same CPU thread).

(Nice to do it in code; easier than human talking!)

I'll try to explain the other part better by defining words. I think we're using different meanings for the same words, hence the confusion.

You have a complaint: DSP code only has the "now" sample, not "past" or "future" samples. (BTW: sometimes I would like those as well.)

To access a "past/future" sample, you need to store it in memory with a "time" index (a "buffer", "frame", "array"; the name doesn't matter).

Having/not having a time dimension = a "semantic feature" (the context doesn't matter; things with different names can have it).

"frame" (video) and "frame" (Ruby) = "glosses" (meanings of the same word, but the contexts differ, so the semantics can differ: one has a "time" index, the other doesn't).

That is how I mean those words; maybe you mean something different. That's OK, we can find out how to "translate"! The main thing is: to solve your "complaint", a "storage thing" (whatever its name) needs a "time" index, so that you can reach a "one moment in time thing" ("sample", "pixel", whatever its name) from a different time (not just the "now" that DSP code has). And I hope the Ruby example shows why that can't happen here: the whole schematic is "one function" for "one sample", not pieces with buffers in between; only the "interface" buffers in the "wrapper".
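A minimal sketch of such a "storage thing with a time index", in Ruby (a hypothetical `RingBuffer`, just to make the idea concrete; it is the kind of structure you would code yourself, since DSP code doesn't provide one):

```ruby
# Minimal ring buffer: write one sample per tick; read(n) returns the sample
# from n ticks ago (0 = the most recent write). The "time index" is the
# distance back from the write position.
class RingBuffer
  def initialize(size)
    @data = Array.new(size, 0.0)
    @pos = 0
    @size = size
  end

  def write(sample)
    @data[@pos] = sample
    @pos = (@pos + 1) % @size
  end

  def read(ago)
    @data[(@pos - 1 - ago) % @size]   # Ruby's % keeps the index non-negative
  end
end
```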

Sorry if I offended you; I truly was just trying to joke around. Sense-of-humour failure, so no jokes this time! :oops:
