martinvicanek wrote: MIDI would be my preferred choice
Likewise; and there are a variety of reasons to expect your results...
- Very low hardware/driver latency and jitter are key selling points for music gear. Not so much for generic keyboards/mice.
- The VST Host (or FS) will be talking directly to MIDI/audio hardware drivers. Keyboard/mouse events are always subject to the many whims of the operating system.
- Once received by a VST host, MIDI events are (usually) time-stamped to define their timing relative to audio (it's part of the VST plugin protocol).
- FlowStone/SynthMaker tries very hard to honour the VST host's MIDI time-stamps (i.e. to keep sample-accurate MIDI sync'). Other events may not have such strict latency guarantees, if any.
TL/DR version of the explanation ("geekery") that comes later:
- To meet the VST specification, MIDI events are usually processed when a specific sample is reached during the filling of the current ASIO audio output buffer, as specified by a timestamp provided by the VST host.
- "Green" GUI/timer events are simply added to a queue, and get processed whenever there is CPU power available during the "spare time" in between filling audio buffers.
- Ruby is... erm... more complicated, and very likely to have changed since I last checked!
- It seems unlikely that a GUI event would ever be faster, but even if there are rare circumstances where it could be, MIDI is almost certainly more precise, reliable, and repeatable.
NB) The geekery below applies to early SynthMaker through to FlowStone v3.0.x. I can't see any reason for newer versions to work much differently in principle, but I've yet to play with them, and I wouldn't want to underestimate what Maik is capable of!

tulamide wrote: ... MIDI ... uses the same event system.
Very nearly, but not quite...
There has always been a separation between sample-accurate (high-priority/"time-stamped") events and asynchronous (low-priority/"queued") events. The two kinds run essentially the same code AFAIK, but they have their own dedicated CPU threads, and their priority status usually propagates through the schematic along with their cascade of triggers. [IIRC, some nasty bugs in early SynthMaker versions were caused by getting the threading wrong.]
Why complicate things like this? Because of CPU saving and the nature of the VST protocol...
Incoming MIDI events come via the VST host (or FS), not directly from a MIDI driver. Specifically, for each ASIO audio buffer, there will be a corresponding MIDI buffer spanning the same time range. Each MIDI event in this buffer has a time-stamp - i.e. the index of the sample in the audio output buffer "when" the event should happen.
When generating the audio output from the plugin, audio stream ("blue") calculations are interrupted at each of the time-stamped samples so that the consequences of the corresponding MIDI events can be computed at the right "times" within the current buffer. This is how sample-accurate MIDI is done generically, not just a FlowStone thing. Note that the plugin latency is no different than for processing an audio input, nor is any jitter introduced.
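To make the buffer-splitting idea concrete, here is a minimal sketch of how a plugin's per-block processing might interleave audio rendering with time-stamped events. The function and parameter names are illustrative only - this is not the FlowStone internals or the actual VST SDK API, just the generic technique described above:

```python
def process_block(events, num_samples, render_audio, apply_event):
    """Render `num_samples` of audio, applying each (timestamp, event)
    pair exactly at its sample offset within the block.

    `events`       - list of (sample_offset, event) pairs from the host
    `render_audio` - callback that renders samples [start, end)
    `apply_event`  - callback that updates plugin state (note on/off, CC...)
    """
    cursor = 0
    for timestamp, event in sorted(events):
        # Render the audio stream up to the event's sample offset...
        render_audio(cursor, timestamp)
        # ...then apply the event's consequences at exactly that sample.
        apply_event(event)
        cursor = timestamp
    # Render the remainder of the block after the last event.
    render_audio(cursor, num_samples)
```

For example, a note-on stamped at sample 17 of a 64-sample block causes the stream code to run for samples 0-16, then the event is processed, then rendering resumes for samples 17-63. That interruption mid-buffer is exactly where the CPU cost mentioned below comes from.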
However, there is a price to pay for sample-accuracy: interrupting stream calculations to process events can dramatically slow down the filling of an output audio buffer (the soundcard wants it in a hurry!). If you do a lot of Green/Ruby processing on MIDI events, it can sometimes show up as significant CPU spikes at each event.
In principle, this "time-stamped" system could also be used for keyboard/mouse/timer events, etc.; but these events have such inherently sketchy timing that it isn't worth paying the price. Such "low priority" events are added to a queue. While an audio buffer is being filled, these events just step aside and wait in the queue. During the "spare time" between filling audio buffers, FlowStone processes as many items as possible from the queue, according to how much spare CPU power is available.
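A toy sketch of that queueing scheme, assuming a simple per-gap "budget" (a real implementation would budget by elapsed time rather than by event count, and the names here are made up for illustration):

```python
import collections

class LowPriorityQueue:
    """Sketch of asynchronous event handling: GUI/timer events pile up
    while the audio thread is busy, then as many as the spare time allows
    are drained between buffers. Anything left over simply waits."""

    def __init__(self):
        self._queue = collections.deque()

    def post(self, event):
        # Called from keyboard/mouse/timer sources; never blocks audio.
        self._queue.append(event)

    def drain(self, budget):
        """Process up to `budget` queued events during one spell of
        spare time; return the events that were handled."""
        handled = []
        while self._queue and len(handled) < budget:
            handled.append(self._queue.popleft())
        return handled
```

The key property is visible in the interface: `post()` makes no promise about *when* an event will be handled, which is precisely down-side (1) below.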
The main down-sides to this CPU saving are...
(1) How long a low-priority event will wait in the queue cannot be predicted.
(2) Low-priority events cannot change "blue" (streamin) values mid-way through an audio buffer.
(3) So many queued events might get processed during one spell of "spare-time" that "blue" stream components do not "see" all of the rapidly changing "green" values (this is one of the reasons why turbo-charged tickers never work the way people expect!)
[NB: I have demonstrated all of this several times before on the forum.]
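Down-side (3) is easy to demonstrate in miniature. In this hypothetical sketch (not FlowStone code), several queued "green" updates are drained in one spell of spare time, but the "blue" side only samples the shared value once per buffer, so the intermediate values are never observed:

```python
green_value = 0      # the "green" value a ticker keeps updating
seen_by_blue = []    # what the "blue" stream side actually observes

def process_queued_events(events):
    """Drain several queued green updates in one spell of spare time."""
    global green_value
    for v in events:
        green_value = v          # each update overwrites the last

def fill_audio_buffer():
    """Blue samples the green value once per buffer fill."""
    seen_by_blue.append(green_value)

process_queued_events([1, 2, 3, 4])  # a turbo-charged ticker fired 4 times
fill_audio_buffer()
# blue saw only the final value; 1, 2 and 3 were never observed
```

This is exactly why driving a ticker faster than the buffer rate cannot make green values reach the stream side any faster - the extra updates are simply overwritten before blue ever looks.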
I won't complicate things further by bringing Ruby into it; but in essence, prioritising events according to their source allows FS to balance performance against its intuitive graphical programming style. Occasionally it can trip us up, but I think it's a very good compromise on the whole.