FFT-based Audio Analyzer
Re: FFT-based Audio Analyzer
MyCo wrote: How do you like LTSpice? If you don't know why I ask this, let me know.
steph_tsf wrote: No idea, really. I'm intrigued. Please tell me ...
I've seen the image in your minilab application; it's taken from LTSpice. It has that unique style.
Is your LTSpice project a working one, or just an idea?
I did an AVR 8-bit compiler in SynthMaker years ago... it was kind of stupid, but you could connect modules to each other, and it generated assembler code that could then be assembled. So my first choice for such a "schematic" editor would be FlowStone, not LTSpice.
My communication plan for my multi-processor board is quite simple: there is one controller that handles I/O and slowly changing data, e.g. the green part in FlowStone. It also handles the communication with the codec. The other 8 controllers are connected only to this main controller; they don't communicate with each other. The main controller knows from the "user's schematic" how the sub-controllers are connected. So when a new codec sample clock arrives, it passes the data from the ADC to the first controller in the "user's schematic" sub-controller order, which shifts out its previous processing result at the same time. The main controller can pass some additional data to the sub-controller after that. As soon as slave select is released, the sub-controller starts processing. The main controller goes on to the next sub-controller and the same thing happens. When it has communicated with all sub-controllers it can do its slow data-processing tasks until the next codec sample clock arrives, at which point it outputs the result of the previous run.
This method is very flexible. You can configure the sub-controllers in parallel or in series with each other, or even mixed. Each sub-controller can be equipped with special hardware, e.g. not every sub-controller requires external RAM.
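Roughly, the per-sample round robin looks like this (a toy Ruby model, series case only; the class and the stand-in DSP blocks are just for illustration, the real thing is SPI traffic between the controllers):
Code:
class SubController
  def initialize(&dsp)
    @dsp = dsp                 # stand-in for the code running on the sub-controller
    @previous_result = 0.0
  end

  # Models one SPI transaction: the new input is shifted in while the result
  # of the previous processing run is shifted out, then processing starts.
  def exchange(input)
    out = @previous_result
    @previous_result = @dsp.call(input)
    out
  end
end

# One codec sample clock tick on the main controller: visit the sub-controllers
# in the order given by the user's schematic, then hand the last value to the DAC.
def codec_tick(adc_sample, chain)
  data = adc_sample
  chain.each { |sub| data = sub.exchange(data) }
  data                         # result of the previous run goes to the codec DAC
end

chain = [SubController.new { |x| x * 0.5 },    # made-up DSP tasks
         SubController.new { |x| x + 1.0 }]
dac_value = codec_tick(1.0, chain)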
About the whole Linux stuff I can't say anything. I've been a software developer for 15 years now and have never written a single line for Linux... or wait... I submitted a bugfix for the VDR system in my PVR. It runs Kubuntu:
http://wiki.reel-multimedia.com/index.p ... Avantgarde
PS: For my project, I would use the WM8731, too. I've used it on my Spartan board as well.
MyCo - Posts: 718 - Joined: Tue Jul 13, 2010 12:33 pm - Location: Germany
Re: FFT-based Audio Analyzer
MyCo wrote: Is your LTSpice project a working one, or just an idea?
Just an idea at the moment. I paid for the FlowStone licence on June 5th, 2013, to determine whether FlowStone was a better option as a graphical front-end in such a context. I had a weird idea. Say you are editing a schematic using FlowStone. Can a Ruby module within that schematic read the actual netlist and act as a rudimentary compiler, generating assembly code chunks for a Microchip PIC32 or ARM Cortex-M4? Later on, when such a compiler can recognize more advanced features in the schematic, it can be encapsulated into a sophisticated DLL called by FlowStone.
MyCo wrote: My communication plan for my multi processor board is quite simple [...] This method is very flexible. You can configure the sub-controllers in parallel or in series, or even mixed. Each sub-controller can be equipped with special hardware, e.g. not every sub-controller requires external RAM.
Try presenting the method on the diyAudio forum, in the Open Source DSP XOs thread: http://www.diyaudio.com/forums/digital-line-level/195791-open-source-dsp-xos-44.html. Over there, start reading the posts from Abraxalito (Richard Dudley) and steph_tsf. The latter is myself; I'm in correspondence with Abraxalito. Abraxalito's idea is to embed tiny DSPs into the Cinch sockets that you need to connect anyway for forming a digital crossover arrangement, just as an illustration. Think in terms of LEGO bricks dedicated to audio. Say you have one digital source, stereo on an S/PDIF Cinch. Your Cinch will embed a tiny DSP, possibly an LPC1113FBD48 (twin SSP) or a Microchip PIC32MX1 (twin I2S) (or MX2, adding USB), conveying 16-bit audio at 44.1 kHz. You also need to output a networked SERIAL at 44.1 kbaud, for interchanging parameters with other AudioBRICKs. That first AudioBRICK will do the global equalization, room correction, baffle step correction, psychoacoustics, remote control receiver, etc. If you want to implement a 3-way XOVER, you purchase three more AudioBRICKs. Say between 15 and 25 eur for an AudioBRICK, sold through Sparkfun and Watterott. This way we start, producing two different PCBs:
- AudioBRICK Digital_In (S/PDIF_in + Toslink_in + IR RC receiver + µC + networked SERIAL + I2S_out)
- AudioBRICK Analog_Out (I2S_in + µC + networked SERIAL + stereo DAC with volume control)
Later on, we may produce:
- AudioBRICK Digital_Out (I2S_in + µC + networked SERIAL + I2S_out + S/PDIF out + Toslink_out)
- AudioBRICK NOSDAC (I2S_in + µC + networked SERIAL + Non-OverSampling DAC with volume control)
- AudioBRICK Booster (I2S_in + Cortex-M4 µC at 150 MHz + networked SERIAL + I2S_out)
The AudioBRICK Booster is not a stripped-down AudioBRICK Digital_Out. It embeds a more powerful µC, for executing FIR filters or elaborate adaptive psychoacoustics, or processing the audio signal in the frequency domain using a real-time FFT followed by an iFFT, or executing some elaborate upsampling or downsampling. You can use as many Boosters as you need, connecting them in series (the output of one feeding the input of the next one) or in parallel (processing the same audio stream in different ways).
My suggestion is to never solder the Cinch and the Toslink on the PCB. This way the modules remain affordable, flat and lightweight, so they don't generate unnecessary shipping costs. We can ship them worldwide using an ordinary, padded envelope. If the customer needs a Cinch or an S/PDIF, he can order it separately from Sparkfun or Watterott.
When do we start this, actually?
I'm very excited about this, especially with Flowstone becoming the graphical programming front-end for this.
We will create the corresponding AudioBRICK Flowstone components, delivered through a library.
Using Flowstone, we drop various AudioBRICK components on the worksheet.
We connect their audio streams using Blue links.
How do we represent the semantics of the networked SERIAL bus? It is a bidirectional multipoint serial bus. Green links, perhaps? This fits with a new message arriving as a trigger. The incoming message can be considered as an Array, so the Green connectors involving the SERIAL bus may be Arrays. Inside the FlowStone module, we find an Array parser implemented in Ruby code, reading the Array using the (@in) method, just as usual.
When a SERIAL bus message needs to be sent, Ruby code builds an Array and sends it using the (output "output_name", array_name) method, just as usual.
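As a rough illustration of that parser, in plain Ruby (the frame layout with destination, command and value fields is invented for the example; only the Array-in / Array-out convention comes from the description above):
Code:
# Hypothetical SERIAL message layout: [destination, command, value, ...].
# The field meanings and command codes are invented for this sketch.
CMD_VOLUME = 0x01
CMD_MUTE   = 0x02

# Called with the Array read from the green input of the module.
def parse_serial_message(msg, my_address)
  dest, command, *payload = msg
  return nil unless dest == my_address      # ignore messages meant for other bricks
  case command
  when CMD_VOLUME then { volume_db: payload[0] }
  when CMD_MUTE   then { mute: payload[0] != 0 }
  end
end

# Builds the Array that would then be handed to: output "serial_out", msg
def build_serial_message(dest, command, *payload)
  [dest, command, *payload]
end

msg = build_serial_message(3, CMD_VOLUME, -20)
parse_serial_message(msg, 3)   # => { volume_db: -20 }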
The infrared remote receiver is a particular FlowStone component, a kind of simplified Wiimote. We can create it, showing the layout of the associated remote control. When clicking a button on the graphical representation of the remote control, this particular FlowStone component outputs an integer or an Array of integers. The AudioBRICK Digital_In has a dedicated input for that. This way we can also simulate, program and debug all remote control actions.
This really is a little disruptive innovation.
Not sure about the name yet.
Need to find the correct resonance with Audio, Tilera, Meccano, Lego, Domino, Kewlox, Arduino, Teensy, Quickstep, Modulart, MiniDSP, Breadboard, etc.
Any idea welcome.
Steph
steph_tsf - Posts: 249 - Joined: Sun Aug 15, 2010 10:26 pm
Re: FFT-based Audio Analyzer
Just to clarify: my project is not going to be a pure effects board. Its main target is to be a synth environment. That's why I need 9 controllers. For effects you could basically use one single M4F, because it is very powerful. Also, I don't like that "you have to buy more to do more" mentality. The problem with it is that the intercommunication takes way too many resources. Sure, you can use DMA, but you basically need at least two two-way serial links, and you'll also need error correction (because you don't know how the user connects this stuff) so the whole thing doesn't lock up.
I just need a master, and it is responsible for the whole system. The other controllers are just dumb slaves. All interconnections have a fixed length and therefore fixed properties (resistance, inductance, capacitance). I just have to calibrate my clock rate once and it works perfectly every time. I can also calculate exactly the delay of every sub-controller, so I can match exactly the sample block it is working on to the sample block that is currently being transferred to the DAC.
The benefit of having a fixed system is also that your programming environment knows everything about it (like the Arduino). There are no options that you have to check, because the resources are always the same.
One big problem that your multi-master setup will have is: how do the masters identify themselves? Or the other way around: how do you send a signal from board x to board y without actually talking to board z? And how do you program all of the boards in a simple way?
I've been working on this idea for 5 years or more. I never found the time to actually go for it, and most of the time I was disappointed because the controllers available weren't powerful enough. The STM32F4 changed that, because they are powerful and the price range is very good.
MyCo - Posts: 718 - Joined: Tue Jul 13, 2010 12:33 pm - Location: Germany
Re: FFT-based Audio Analyzer
MyCo wrote: How do the masters identify themselves? How do you send a signal from board x to board y without actually talking to board z? And how do you program all of the boards in a simple way? I've been working on this idea for 5 years or more.
With FlowStone used as the GUI, the setup is easy and intuitive. The geometry is determined by the way you physically connect the Bricks on your breadboard. They can be in parallel, in series, or both. Using the supplied FlowStone Brick components, you duplicate the breadboard arrangement on the FlowStone worksheet. You connect the available audio streams using Blue links. Each Brick is physically equipped with a DIL-4 switch defining its address, from 0 to 15. On the FlowStone worksheet you provide that info. FlowStone thus knows everything about your particular hardware and the physical audio interconnections, inputs and outputs.
For the Bricks to be able to read and exchange parameters, a serial bus is used on the breadboard. It can be I2C, or any simple collision-proof half-duplex protocol that you can emulate using a single GPIO pin (or a pair), read and written at Fs (44.1 kHz). Your Fs interrupt starts with some 25 lines of code dealing with this, which is not an issue for a PIC32 or an ARM Cortex-M4.
The master is the µC sitting in the audio-input Brick. The master receives the infrared remote control codes on a port pin, decodes them, and transforms them into messages that it sends, as master, on the serial bus.
All slaves listen to, respond to and execute the messages. This way, using the remote control, you can control the volumes and plenty of other settings. If there is USB on the master µC, that USB can be used for exchanging more elaborate messages with FlowStone. This way, the master can tell FlowStone which remote control codes it is processing.
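Purely for illustration, here is a toy Ruby model of what those first lines of the Fs interrupt could do on the transmit side (one bus bit shifted out per sample tick; the 8N1-style framing and the class name are my own assumptions, not a fixed spec):
Code:
# Toy model of a bit-banged 44.1 kbaud transmitter clocked by the Fs interrupt.
# One bus bit is shifted out per sample tick; the framing is a simple 8N1 guess.
class FsSerialTx
  def initialize
    @shift_reg = []        # pending bits of the current byte frame
    @queue     = []        # bytes waiting to be sent
  end

  def send_byte(b)
    @queue << b
  end

  # Called once per Fs tick (44.1 kHz); returns the bus level to drive (0/1).
  def tick
    if @shift_reg.empty? && !@queue.empty?
      b = @queue.shift
      bits = (0..7).map { |i| (b >> i) & 1 }   # LSB first
      @shift_reg = [0] + bits + [1]            # start bit, data, stop bit
    end
    @shift_reg.empty? ? 1 : @shift_reg.shift   # idle level is 1
  end
end

tx = FsSerialTx.new
tx.send_byte(0x55)
levels = Array.new(12) { tx.tick }   # bus levels over the next 12 Fs ticks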
In all this there is still no fundamental requirement for FlowStone to generate and send executable code to the various µCs. The various µCs can be loaded with some generic software, getting the required flexibility from a set of parameters.
Experienced PIC32 or Cortex-M4 programmers can create a DSP application inside the µC. In parallel, they can create the associated Flowstone component, with some Ruby code inside, dealing with the messages.
It is only much later, as a kind of Stage 2, that more elaborate messages may be designed, involving code upload.
What do you think about such incremental method? Does it meet your requirements?
steph_tsf - Posts: 249 - Joined: Sun Aug 15, 2010 10:26 pm
Re: FFT-based Audio Analyzer
steph_tsf wrote: What do you think about such an incremental method? Does it meet your requirements?
No, this would be far too slow for the stuff that I have planned. And when you connect them in parallel, how do you merge the signals again? When they are sharing the same bus, you have to provide collision management. That's not an easy task... and it takes a lot of wait states that you can't use for any processing without losing synchronization with the other controllers.
My plan puts the code that is needed at the place where it is needed. And there is nothing else in the controller than: read input, process, store in the output buffer. That can be done by DMA; the controller itself has nearly no work to do for the input and output except reading from memory and writing to it. So for your main processing you get roughly 3000 instructions per sample at 48 kHz.
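Just to show roughly where that number comes from (assuming the 168 MHz maximum core clock of an STM32F407; the real budget is a bit lower once I/O and interrupt overhead are subtracted):
Code:
# Best-case instruction budget per audio sample, at one instruction per cycle.
core_clock_hz = 168_000_000   # assumed STM32F407 maximum clock
sample_rate   = 48_000
puts core_clock_hz / sample_rate   # => 3500 cycles available per sample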
MyCo - Posts: 718 - Joined: Tue Jul 13, 2010 12:33 pm - Location: Germany
Re: FFT-based Audio Analyzer
Did some housekeeping. The FFT-based Audio Analyzers got streamlined and updated.
Three sorts of time-domain magnitude filters (executed before the dB conversion) are now available; all three are sketched below the list:
- bruteforce AVG (AVG routine using Ruby code from MyCo)
- IIR 1st-order Lowpass (IIR routine using Green code from MyCo)
- IIR BiQuad Lowpass Butterworth (IIR BiQuad routine using Green Code, derived from above)
See the three attached .fsm
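For readers without the .fsm files, the three smoothers amount to the following textbook forms. These are plain Ruby sketches, not MyCo's actual routines; each one runs on the array of FFT magnitudes, once per frame, before the dB conversion:
Code:
# 1) Brute-force average of the last N magnitude frames, per FFT bin.
def avg_frames(frames)                 # frames: Array of magnitude Arrays
  n = frames.length.to_f
  frames.transpose.map { |bin| bin.sum / n }
end

# 2) First-order IIR lowpass, one state value per bin: y += a * (x - y).
def iir1_smooth(mags, state, a)
  mags.each_index { |i| state[i] += a * (mags[i] - state[i]) }
  state
end

# 3) Butterworth lowpass BiQuad (Q = 0.707) per bin, transposed direct form II.
#    coef = [b0, b1, b2, a1, a2]; z1 and z2 hold one state pair per bin.
def biquad_smooth(mags, z1, z2, coef)
  b0, b1, b2, a1, a2 = coef
  mags.each_index do |i|
    x = mags[i]
    y = b0 * x + z1[i]
    z1[i] = b1 * x - a1 * y + z2[i]
    z2[i] = b2 * x - a2 * y
    mags[i] = y
  end
  mags
end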
Todo list:
- add a phase plot
- improve the X axis labels, especially with a log X axis, getting a 1-2-5-10 progression as the major step (see the sketch below)
- with a log X axis, above a critical frequency the draw engine needs to operate differently, to avoid drawing lines that are less than a pixel apart on the X axis, which is a waste of CPU resources
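Two small Ruby sketches of what those last two items could look like (the pixel-mapping callback is an assumption, not code from the .fsm):
Code:
# 1-2-5-10 major ticks covering [f_min, f_max] on a log frequency axis.
def major_ticks(f_min, f_max)
  ticks = []
  decade = 10 ** Math.log10(f_min).floor
  while decade <= f_max
    [1, 2, 5].each do |m|
      f = m * decade
      ticks << f if f >= f_min && f <= f_max
    end
    decade *= 10
  end
  ticks
end
major_ticks(20, 20_000)   # => [20, 50, 100, 200, 500, 1000, 2000, 5000, 10000, 20000]

# Drop points that land less than one pixel apart on the X axis.
# The x_of callback is assumed to map a frequency to a pixel coordinate.
def decimate(freqs, &x_of)
  last_x = -Float::INFINITY
  freqs.select do |f|
    x = x_of.call(f)
    keep = (x - last_x) >= 1.0
    last_x = x if keep
    keep
  end
end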
Attachments:
- FFT-based Audio Analyzer_GreenLines LinLogF_ gain only_IIR BiQuad Q 0.707.fsm (151.48 KiB, downloaded 1371 times)
- FFT-based Audio Analyzer_GreenLines LinLogF_ gain only_IIR.fsm (78.83 KiB, downloaded 1316 times)
- FFT-based Audio Analyzer_GreenLines LinLogF_ gain only_AVG Ruby MyCo.fsm (174.79 KiB, downloaded 1334 times)
steph_tsf - Posts: 249 - Joined: Sun Aug 15, 2010 10:26 pm
Re: FFT-based Audio Analyzer
MyCo wrote: No, this would be far too slow for the stuff that I have planned. And when you connect them in parallel, how do you merge the signals again? When they are sharing the same bus, you have to provide collision management. That's not an easy task... and it takes a lot of wait states that you can't use for any processing without losing synchronization with the other controllers.
I'd like to help you. The idea of using FlowStone for setting up an array of inexpensive PIC32 or ARM Cortex-M4 chips must be investigated right now. Here is my suggestion: create a new thread on this forum, attach a .fsm as an example, and ask the community how an array of PIC32 or Cortex-M4 chips can be persuaded to execute in real time, with their own physical audio inputs and outputs, what FlowStone is actually simulating.
Indeed, your .fsm is there so FlowStone can define, and hence simulate, the behaviour of the array.
Allow a USB connection for establishing proper communication between FlowStone and the array.
I guess you don't need audio over USB, do you?
USB would only serve for command and status between FlowStone and the array, wouldn't it?
I don't know if you realize it, but obeying the above paradigms ensures simplicity and scalability.
You need to profile your application and determine the required baud rate on the common serial bus.
If the application can cope with a relatively slow communication speed like 44,100 baud, you are okay, and the system will remain as simple as my description above.
If the application requires megabits per second on the common serial bus, an array of inexpensive PIC32 or ARM Cortex-M4 is not an appropriate architecture.
I'm fully aware that the architecture I'm promoting here is not appropriate for materializing something like a modular digital Formant, the famous Elektor synthesizer.
Each µC only features one I2S-in. Thus, each µC can combine at most two external digital audio channels.
Each µC only features one I2S-out. Thus, each µC can physically split an existing channel into at most two external digital audio channels.
Those are indeed severe architectural limitations.
Based on such an architecture, a 3-way digital Xover (splitting two physical channels into six physical channels) needs a parallel approach. Four chips are needed:
- one as front-end, delivering some processed stereo audio, such as a global equalization, as an intermediate signal
- one as bass channel, extracting the bass content and equalizing the bass speaker
- one as medium channel, extracting the medium content and equalizing the medium speaker
- one as high channel, extracting the treble content and equalizing the tweeter
Four STM32F4 chips, each costing about 10 eur, this is not a major issue.
Four PIC32MX2 chips, each costing less than 3 eur, wait a minute, this sounds optimal indeed.
If you manage to sketch something better, please tell us. This will be highly attractive.
steph_tsf - Posts: 249 - Joined: Sun Aug 15, 2010 10:26 pm
Re: FFT-based Audio Analyzer
Phase plot now added.
Added a correlator (easy, as this is a FlowStone primitive).
The D.U.T. is a fully parametrizable IIR BiQuad.
See attached .fsm
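For reference, the usual RBJ audio-EQ-cookbook form of such a lowpass biquad, written out in Ruby (this is not the .fsm's own implementation, only the standard math behind it):
Code:
# Standard RBJ cookbook lowpass biquad coefficients (normalized by a0).
# With q = 0.7071 the response is the Butterworth (maximally flat) lowpass.
def lowpass_biquad(fc, q, fs)
  w0    = 2.0 * Math::PI * fc / fs
  alpha = Math.sin(w0) / (2.0 * q)
  cosw0 = Math.cos(w0)
  a0 = 1.0 + alpha
  b  = [(1.0 - cosw0) / 2.0, 1.0 - cosw0, (1.0 - cosw0) / 2.0].map { |c| c / a0 }
  a  = [-2.0 * cosw0, 1.0 - alpha].map { |c| c / a0 }
  b + a   # [b0, b1, b2, a1, a2]
end

b0, b1, b2, a1, a2 = lowpass_biquad(1000.0, 0.7071, 44_100.0)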
Attachments:
- FFT-based Audio Analyzer_GreenLines LinLogF_ IIR BiQuad Q 0.707.fsm (266.22 KiB, downloaded 1351 times)
- FFT-based Audio Analyzer_GreenLines LinLogF_ IIR BiQuad Q 0.707y.jpg (24.87 KiB, viewed 29725 times)
steph_tsf - Posts: 249 - Joined: Sun Aug 15, 2010 10:26 pm
Re: FFT-based Audio Analyzer
Added a delay compensation for the phase plot.
See attached .fsm
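For what it's worth, the compensation is just the linear-phase term of a pure delay, something along these lines (Ruby sketch; expressing the delay in samples is my assumption, not necessarily how the .fsm does it):
Code:
# Remove the linear phase contributed by a pure delay of delay_samples,
# then wrap the result back into the -180..+180 degree range.
def compensate_phase(phase_deg, freq_hz, delay_samples, fs)
  p = phase_deg + 360.0 * freq_hz * delay_samples / fs
  ((p + 180.0) % 360.0) - 180.0
end

compensate_phase(-150.0, 1000.0, 64, 44_100.0)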
Attachments:
- FFT-based Audio Analyzer_GreenLines LinLogF_ IIR BiQuad Q 0.707y.jpg (25.29 KiB, viewed 29715 times)
- FFT-based Audio Analyzer_GreenLines LinLogF_ IIR BiQuad Q 0.707.fsm (259.62 KiB, downloaded 1309 times)
steph_tsf - Posts: 249 - Joined: Sun Aug 15, 2010 10:26 pm
Re: FFT-based Audio Analyzer
I just discovered the LPC43xx series of ARM controllers... what a beast! It has two ARM cores, each running at 204 MHz. NXP targets this monster especially at audio and video, and has some great examples for it. It can do 8-channel audio (and maybe even more).
Its internal structure with the two cores is exactly how I planned my project. In their examples they set up the M0 core to do all the I/O stuff, and the M4 core is used only for audio processing.
And the best thing: This thing is cheap!!! ~6€ for the cheapest one (but it has no flash). So this will be my new favorite.
MyCo - Posts: 718 - Joined: Tue Jul 13, 2010 12:33 pm - Location: Germany