tulamide wrote:DSP editor not offering the buffer. My point was, that it should do so.
Well, "should" is an opinion, so I have to allow you that! But DSPr long ago decided that "one sample at a time" is easier to understand for novices, non-coders, analogue-electronics engineers, etc. - and the whole architecture is now built around that. DSPr have always felt very strongly that SM/FS should be a "sandbox" where users do not know or see "technical details" (I used to argue a lot with Malc about that!). Also, as my Ruby example showed, "one sample" can eliminate many temporary buffers/arrays, so it can be more efficient and use less memory (you store old samples only if you need them). Both styles have advantages and disadvantages.
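To illustrate the memory point, here is a hypothetical sketch of the two styles (the class and method names are mine, not anything from the FS API). The per-sample smoother only ever stores the one old value it actually needs, while the buffer version allocates a whole temporary array per block:

```ruby
# Per-sample style: the only "history" kept is the previous output value.
class OnePoleSmoother
  def initialize(coeff)
    @coeff = coeff
    @prev = 0.0          # the single stored old sample
  end

  # Called once per sample tick, like a DSP code component.
  def process(x)
    @prev = @prev + @coeff * (x - @prev)
  end
end

# Buffer style: the same filter, but a whole temporary array in and out.
def smooth_buffer(input, coeff)
  prev = 0.0
  input.map { |x| prev = prev + coeff * (x - prev) }  # allocates a new buffer
end
```

Both produce identical output; the difference is purely how much intermediate storage exists at any moment.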
Also, you said "the" buffer (singular), as Spogg did too. I wonder if this is part of the misunderstanding. As my Ruby example was meant to show, to have DSP code buffers you must have a buffer for every streamin/streamout - the DS/ASIO buffer alone can never be enough (it can't even be "updated" by each component, as there may be parallel audio paths needing the original data). Also, having buffers with "future" samples doesn't mix well with "single sample" - the schematic should be "all buffer" or "all single sample" - mixing them together will add latency while buffers are filled and emptied (see below about Ruby Frames).
tulamide wrote:The framebuffer you access with Ruby is the buffer as supplied by the sound driver (ASIO/DirectSound)
No, this is not so - and in fact, Ruby Frames bring exactly that problem of mixing "buffered" with "single sample". It can't be the ASIO/DS buffer, because there may be many components upstream of the mono2frame - it has to be an "inter-component" buffer. Not only is there copying, but the copying happens "one sample at a time" - equivalent to a DSP code writing each sample into a mem. This is why using Ruby Frames incurs a latency (yes, real latency, and easy to prove with a simple schematic) - because the buffer cannot be passed to Ruby until it is completely filled. The User Guide quote is badly phrased IMHO - it mixes up "wrapper" concepts (request of a buffer) with "inside sandbox" concepts (storage of the samples) - though, to be fair, the process is so indirect that it can't really be summed up in one sentence.
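A toy model of that collection process, to make the latency concrete (this is my own sketch of how a mono2frame-style collector must behave, not the real FS internals). Samples arrive one tick at a time, so the first sample of each frame has to wait until the frame is full before Ruby can see it:

```ruby
FRAME_SIZE = 4  # tiny frame, just for illustration

class FrameCollector
  def initialize(&on_frame)
    @frame = []
    @on_frame = on_frame
  end

  # Called once per sample tick, like the per-sample stream.
  def push(sample)
    @frame << sample          # the "one sample at a time" copy
    if @frame.size == FRAME_SIZE
      @on_frame.call(@frame)  # only now can the Ruby code see the data
      @frame = []
    end
  end
end
```

With a frame size of N, the first sample of every frame is delivered N-1 ticks after it arrived - that delay is the unavoidable latency, before any processing even happens.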
tulamide wrote:But Flowstone makes use of SSE.
SSE is just an example of SIMD (Single Instruction Multiple Data). The "Multiple Data" can be whatever the programmer chooses. Yes, it was marketed as an aid to stream processing (hence the SSE/MMX names), and stream processing uses buffers - but SIMD instructions are a form of parallelisation, not buffering or block processing. In FS the "Multiple Data" is four samples from the same moment in time, so it is not buffer-related.
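A toy Ruby model of the distinction (the lane layout here is my assumption for illustration, not a statement of the exact FS internals): the four SSE lanes hold four independent signals at the same time instant, not four consecutive samples of one signal.

```ruby
# One "instruction", four data: the same operation applied to every lane
# at once, like a single SSE mulps instruction.
def simd_gain(lanes, gain)
  lanes.map { |s| s * gain }
end

# Four parallel channels, all at sample time t=0 - NOT a 4-sample buffer
# of one channel.
tick0 = [1.0, 2.0, 3.0, 4.0]
simd_gain(tick0, 0.5)
```

Note there is no time axis anywhere in that code - which is exactly why SIMD use doesn't imply buffering.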
I must stress that I am not guessing here. I worked closely with Malc for many years as a Beta-tester, and I learned a great deal about the inner workings of FS. You certainly understand buffering well, and your intuitions about the FS engine are very intelligent, there is no doubt about that, but they apply only to the outer "wrapper" of FS. Inside the "sandbox", everything is conceptually different; it's not just that buffers aren't exposed, they're simply not normally used at that level, and making them available is very awkward (e.g. Ruby Frame latency). Whether buffers "should" be available or not, the simple fact is that, with the current architecture, there would be a high cost for doing it - and not many people have ever asked for it, so it won't be a big dev' priority (we who have used Juce and similar are very few here).
Finally: if you really are still unconvinced, I am happy for you to ask if MyCo will critique my posts. If anyone will know, it is him (and if he doesn't, we're really in trouble!!) I think he would criticise me for over-simplifying, and for missing a few buffers that do exist (between CPU threads, for example) - but I haven't been wrong yet when he has judged my comments on Slack or I've posted bug reports.
BTW: I have used buffered systems too, and I'm curious why you say it makes the code easier to write (setting technical background aside). For example, you mentioned smoothing using sample[n-1], sample[n], sample[n+1]. In this case...
- At the first sample of the buffer, sample[n-1] is part of the previous buffer. It's gone now.
- At the last sample of the buffer, sample[n+1] doesn't exist yet; it's the first sample of the next buffer ("reading the future").
My experience is that once corner-cases like those are dealt with (e.g. handling buffer transitions), working directly with the buffers isn't much simpler. I find most of the DSP code difficulties are due to lack of branching in SSE rather than lack of buffer access. Different "familiarity" between us, I suppose, and no criticism intended.
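Those corner-cases can be sketched like this (a hypothetical 3-tap smoother of my own devising, not anything from FS): the "gone" samples have to be carried over between buffers, and the "future" sample means the last output of each buffer can't be emitted until the next buffer arrives.

```ruby
class ThreeTapSmoother
  def initialize
    @carry = [0.0, 0.0]   # last two samples of the previous buffer
  end

  # Output for sample n is the mean of n-1, n, n+1. Because n+1 of the
  # final sample lives in the NEXT buffer, output lags input by one
  # sample - buffer access alone doesn't remove the boundary bookkeeping.
  def process(buffer)
    extended = @carry + buffer
    out = (1..extended.size - 2).map do |n|
      (extended[n - 1] + extended[n] + extended[n + 1]) / 3.0
    end
    @carry = extended.last(2)
    out
  end
end
```

Once you've written the carry-over logic, you've effectively re-implemented the per-sample "store old values" approach anyway - which is roughly my point above.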