Wind is normally the enemy of sound recordists, but going through some recordings from last year I found this one of ex-hurricane Ophelia from 16 October 2017. Ophelia had been pretty nasty originally and was still bad by the time it reached Ireland.
I recorded it in Glastonbury in the south-west, by finding a sheltered spot and pointing the mic in a windshield at a bunch of trees, which made a good recording given the wind. The key was that I had good shelter at the mic, but the trees were exposed to the full force of the wind.
The storm dragged up a load of Saharan dust, making the sky the sickly yellow in the pic.
I went to an open day in October run by the mind people, makers of the Vilistus EEG interface. It was an opportunity to see this in action and ask questions; the day cost £85, which wasn’t too bad, and there were about five other people there. It was run in some anonymous hotel near a football ground in Birmingham just off the M6, and led by Stephen Clark, who knew the product well.
It was an interesting day. The Vilistus 4 box is a digitising interface, but the analogue signal conditioning is done in the sensor boxes, which adds some cost to the overall system. Their default software looks fine even for Mind Mirror work since it seems to have the filter bank in it; the extra cost of the Mind Mirror package probably covers extra training. You seem to get the Vilistus Pro software with the box. I haven’t seen any of the units come up on eBay.
I learned that the protocol between the Vilistus box and the computer is OpenEEG P3, which was good to know. Stephen did warn that a lot of the older code from the OpenEEG project assumed there were only 6 active slots, rather than following the protocol specification, which lets the source say whether there are 6 or 8 slots of data. Vilistus use 8 slots, so code assuming 6 would barf.
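I haven’t chased down the P3 packet layout myself, so the header fields in the sketch below are invented; the point is only that a parser should take the slot count from the packet rather than baking in 6.

/* Illustrative parser sketch: the sync bytes and header layout here are my
 * assumptions, not the P3 spec. The point is that nslots comes from the
 * packet itself, so 6- and 8-slot sources both work. */
#include <stdint.h>

#define MAX_SLOTS 8

typedef struct {
    uint8_t  counter;            /* rolling packet counter */
    int      nslots;             /* 6 or 8, as announced by the source */
    uint16_t slot[MAX_SLOTS];    /* raw channel values */
} Frame;

/* Returns bytes consumed, or -1 if the buffer doesn't hold a valid frame. */
int parse_frame(const uint8_t *buf, int len, Frame *out)
{
    if (len < 4 || buf[0] != 0xA5 || buf[1] != 0x5A)   /* assumed sync bytes */
        return -1;
    out->nslots  = buf[2] & 0x0F;                      /* assumed count field */
    out->counter = buf[3];
    if (out->nslots < 1 || out->nslots > MAX_SLOTS)
        return -1;
    if (len < 4 + 2 * out->nslots)
        return -1;
    for (int i = 0; i < out->nslots; i++)              /* big-endian 16-bit slots */
        out->slot[i] = (uint16_t)((buf[4 + 2*i] << 8) | buf[5 + 2*i]);
    return 4 + 2 * out->nslots;
}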
He did say the existing API would allow the Vilistus Pro software to continually dump the values of the filter slots to a text file that could be read by a program to display the output on LEDs; obviously I would get to build the interface and write the program 😉
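Roughly what I have in mind for that program is sketched below; the file name, the one-line-per-update comma-separated format and the 0-to-1 scaling are all my guesses, and the printout stands in for real GPIO writes.

/* LED-driver sketch: read the newest line of the dump file and turn each
 * filter-slot value into a bar. File path, format and scaling are assumptions;
 * swap the printf loop for GPIO writes to drive real LEDs. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_SLOTS      8
#define LEDS_PER_SLOT 10

int main(void)
{
    char line[512], last[512] = "";
    FILE *fp = fopen("/tmp/vilistus_slots.txt", "r");   /* hypothetical path */
    if (!fp) { perror("dump file"); return 1; }
    while (fgets(line, sizeof line, fp))                 /* keep only the latest line */
        strcpy(last, line);
    fclose(fp);

    char *tok = strtok(last, ",");
    for (int slot = 0; tok && slot < MAX_SLOTS; slot++, tok = strtok(NULL, ",")) {
        int lit = (int)(atof(tok) * LEDS_PER_SLOT + 0.5); /* value assumed 0..1 */
        if (lit < 0) lit = 0;
        if (lit > LEDS_PER_SLOT) lit = LEDS_PER_SLOT;
        printf("slot %d: ", slot);
        for (int i = 0; i < LEDS_PER_SLOT; i++)
            putchar(i < lit ? '#' : '.');
        putchar('\n');
    }
    return 0;
}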
The Vilistus Pro software did show correlations well, most clearly on a display where they showed heart rate against a trigger for breathing in and out. The heart rate slows a teeny bit on breathing out relative to breathing in, although this effect fades with age: it was clear on the 25-year-old student and not really visible on a 50-something lady on the course. EEG was tough to get going on the day, although it was demonstrated using disposable electrodes on the forehead. This isn’t the optimal placement for Mind Mirror, but you can’t use disposable electrodes on areas of the scalp covered by hair.
The trouble is this rig would be about £1200 all in, and I’m not yet sure I am £1200 interested in the Mind Mirror. I did get a much better feel for using this in the field, and I’m aware that while I have been able to solve the digitising side of things using the PIC, I still need to sort out the EEG differential amplifier and the electrode problem.
Vilistus seem to have solved a lot of that, but even the electrode set is ~£200, so the bundle would be the way to go. One to mull over really, to work out whether I want the functionality or the engineering challenge. I could probably knock £500 off going DIY if the development went OK, but experience shows just one or two PCB fails or wrong turns can wipe out the savings on a one-off project where there’s a COTS solution.
We are using 160901 to make compost tea. Although the temperature has fallen to ambient, it’s still a bit early: the heap is only seven weeks old, and while all the green plant material has gone and isn’t recognisable for what it was, the woodchip takes longer to break down. As such the tea will be mainly bacterial; the fungi, which are better at decomposing woody material, take longer to develop. But sometimes you shouldn’t let the perfect be the enemy of the good.
Looking back at the success we had with the beans, there was a latency of a few months between putting the compost organisms out and seeing the full benefit, with only modest results after the first couple of months, so we want to get this out now to do its work over the winter. Continue reading “Making compost tea”
The Fonnereau Way has been used since the mid-1800s, although it has been the subject of a fight: an incoming resident at the Westerfield end tried on several occasions to block it up and have it stopped up. Network Rail has also had it in for the pedestrian level crossing, but has likewise failed to have it struck off.
Becoming a housing estate will clearly change this part of the Fonnereau Way, so I walked it to capture some pictures and soon-to-be-historical sounds from the route. The farmland is intensively farmed and heavily sprayed, as I’ve observed a few times, so it’s quite possible that being turned into a housing estate may actually increase the biodiversity. Although the birds will be persecuted by hundreds of domestic cats and the gardens will no doubt be tiny, the farmland doesn’t support that many birds at the moment.
The Fonnereau Way starts from Christchurch Park, but I started where the changes will be made, where it crosses Valley Road. In the local plan all vehicle access will be from Henley Road rather than Valley Road.
and it’s a noisy place. It gets better quickly as the old path threads its way past some sports facilities and the playing fields
before reaching farmland
There are a few birds in the farmland, but to be honest the urban Brunswick Road Rec has more diversity to my ears; out here the birds are few and far between
I started redecorating the lab, so the EEG project is now relegated to an autumn/winter project 😉 Which is a shame, as I’d got close to replicating the Mind Mirror system in OpenEEG and getting a hardware gizmo set up using a PIC. The best laid plans of mice and men…
It’s basically a single-channel digital oscilloscope, but it works with Picotech’s PicoScope software, which has all sorts of features that are new to me, like software RS232 decoding, click-to-set trigger levels, and long-persistence simulation.
I have a decent Tek 2245A analogue scope, which computes frequency and voltage levels from cursors on the traces,
This is now very old, from 1989. It does most of what I want/need, and for most of my design career I worked with analogue ‘scopes, with the logic analyser as a separate piece of gear. However, despite its measly 100kHz bandwidth the Pico did show me some of the attraction of a more modern approach. Every so often I’ve toyed with the idea of getting a Chinese scope, something like a Rigol 2000-series or similar. So far I haven’t cracked. There’s a lot to be said for a standalone scope, but I wonder if the combination of my regular analogue bench scope and a Pico might be even better.
but it would be a terrible thing to give this to a beginner. I could only make this thing trigger properly because I’ve used analogue scopes for years and had some feel for what should happen: all too often on the FPGA scope, if the vertical trigger wasn’t in range you simply didn’t get to see anything useful at all, so you couldn’t tell which way to shift the trigger point. And the user interface is revolting. Too much clickety-click of two separate left-centre-push-right buttons for my liking.
PicoScope is far better thought out, although it still doesn’t give you as much control of input sensitivity and offset as a regular bench scope. But it, and the associated DC-coupled arbitrary waveform generator, will be a great tool for testing the OpenEEG filters at sub-audio frequencies. And unlike the typical fly-by-night USB scopes, the software supports legacy models back to when Pico started, because that is of course always the problem with any hardware that depends on a piece of software running on some other device: it easily becomes orphaned before its service life is over. See pretty much any hardware made by Apple that is more than three or four years old 😉
The DrDAQ does pretty much all that I want for the EEG work, but the AWG doesn’t support frequency sweep mode, which is a shame; I’d need to go for something like the 2206B at £250 to get that. So I’ll probably do it the old way instead and set the AWG to output a single frequency, stepping through the frequency range by hand. What isn’t clear is the frequency resolution of the AWG.
The Meare at Thorpeness is only three feet deep, and even a light breeze seems to rock these boats, making a lot of noise.
A nice place in the summer – not so rammed with people as nearby Aldeburgh can be, and the boating lake is fun. Easy reach of the beach, too. The lake gets a good view of the whimsical House in the Clouds water tower
The Peter Pan-themed lake and the House in the Clouds are the creation of Scottish barrister Glencairn Stuart Ogilvie at the start of the 1900s
Now I have convinced myself that I can get a version of the OpenEEG hardware to run into EEGmir, I want to see if I can reproduce one of the Cade-Blundell filters. I have an analogue simulation from earlier to compare against. The filter specification format in EEGmir is the same as in fiview from Jim Peters’ site1, and since that displays the transfer function it looks like a good place to start.
A tale of Linux graphical display woe…
The Windows version doesn’t run, beats me why. So I try it on Linux. My most powerful Linux computer is an Intel NUC, but because Debian is hair-shirt purist and therefore snippy about NDAs and proprietary drivers, I think it doesn’t like the graphics drivers. It was tough enough to get the network port working, and Xserver and VNC are deeply borked on that machine. If something is stuffed on Linux then it’s reload from CD and start again, because I haven’t got enough life left to trawl through fifty pages of line noise telling me what went wrong. So I’m stuck with the command line there. Instead I try fiview on the Pi, and this fellow sorts me out with TightVNC on the Pi, which is a relief; trying to get a remote graphical display on a Linux box seems to be an endless world of hurt, and I only have a baseband video monitor on the Pi console.
Simulating the 9Hz Blundell filter
I already have SDL 1.2 on the Pi, so it goes. Let me try the 9Hz channel, which was the highest Q of the Cade-Blundell filters. If you munge the order and bandwidth specs you get fc=9Hz BW=1.51.
Converting that to fiview-speak, that is
fiview 256 -i BpBe2/8.22-9.72
which in plain English means: simulate, at a sampling rate of 256Hz, a 2nd-order bandpass Bessel IIR between 8.22 and 9.72Hz. So let’s hit it.
Unfortunately the amplitude axis is linear, which is bizarre. Maybe, mindful of their 10-bit (1024-level) resolution, OpenEEG didn’t want to see the horror of the truncation noise and hash. I can go on Tony Fisher’s site (he wrote the base routines Jim Peters used in fiview) and have another bash
Running the analogue filter with the same linear frequency display I get
which shows the same response2. H/T to the bilinear transformation for that. I had reasonable confidence this would work; I did once cudgel my brain through this mapping of the imaginary axis of the s plane onto the unit circle when I did my MSc. Thirty summers have left their mark on the textbook and faded the exact details in my memory 😉 But I retained enough to know I’d get a win here.
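For the record, and hedged by those thirty summers, the mapping and its frequency warping are:

\[ s = \frac{2}{T}\,\frac{1 - z^{-1}}{1 + z^{-1}}, \qquad \omega_{\mathrm{analogue}} = \frac{2}{T}\tan\!\left(\frac{\omega_{\mathrm{digital}}\,T}{2}\right), \qquad T = \frac{1}{f_s} \]

At 9Hz with fs = 256Hz the warp factor tan(πf/fs)/(πf/fs) works out at about 1.004, i.e. roughly 0.4%, which is why the two plots sit on top of each other.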
It’s not strictly the same because of the increasing effect of the frequency warping of the bilinear transformation as the frequency approaches fs/2. But in practice, given the small fractional bandwidth of the filters, the warping only gives the upper stopband a subtly different shape in the tails; I struggle to see it here. ↩
Now that I can get signals into the OpenEEG modP2 format, the next stage is to qualify the filtering used within eegmir and to put an antialiasing filter in front of the ADC. The sampling rate is only 256Hz, so the highest representable frequency is 128Hz. Anything above that will alias down, particularly frequencies within +/- 50Hz of 256Hz, which will be aliased into 0-50Hz and corrupt my area of interest. This includes the fourth, fifth and sixth harmonics of the 50Hz power frequency and the second harmonic of the 100Hz full-wave rectifier ripple tossed onto the powerline by every switched-mode power supply in the neighbourhood.
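For anyone checking my working, the fold-back arithmetic is the standard formula, nothing eegmir-specific:

\[ f_{\mathrm{alias}} = \left|\, f_{\mathrm{in}} - f_s \cdot \mathrm{round}\!\left(f_{\mathrm{in}}/f_s\right) \right| \]

so with fs = 256Hz, 250Hz lands at 6Hz, 300Hz at 44Hz and 200Hz at 56Hz, all of it in or hard up against the EEG band.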
OpenEEG are good enough to put their schematic up on the Web, so I simulated their antialiasing filter.
Hmm, colour me underwhelmed. At a 10-bit resolution the steps are 1/1024, so quantisation noise is 20×log(1/1024) or about -60dBFS. So you’d like to be 60dB down at fs/2 of 128Hz, which is where I’ve drawn the line. We are at, …drum roll…, -16dB by then. At least the crap there gets aliased to the high frequencies, but by fs we are at -26dB. Nice try, but no cigar. I guess that’s the price I pay for saving myself the grunt of lining up all those analogue filters. TANSTAAFL and I get to try harder here. At least there are only two of these filters.
Elliptic filter design
The obvious way here would be to get an elliptic filter and target a notch at fs/2 and another at fs. I had thought there would be an online calculator by now, but perhaps nobody makes analogue filters any more1. So it’s back to the Williams book. It’s all about the ratio between stopband and passband. The stopband is non-negotiable at fs/2: say 120Hz, so hopefully a notch will be dropping in just beyond that at 128Hz. I have flexibility on the passband; the Mind Mirror goes up to 38Hz, so if I choose a passband cutoff of 60Hz I get a steepness of 2. I’m easily prepared to take a passband ripple of 0.3dB (ρ=25%)2, so I am after a C [order] 25 [Θ] filter.
From Table 2-2 I want Θ=30° for my steepness of 2, so I want a C ? 25 30 filter, with only the order to determine. I’d really like that to be 3 rather than 5 😉 Sadly I look up C 03 25 283 and the stopband is only 30dB. Shifting to Θ=20° would give me a steepness of 3 and a stop of 40dB, so my passband comes down to 40Hz.
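If I’ve remembered the convention behind Williams’ tables correctly, Θ is just the modular angle, the angle whose sine is the reciprocal of the steepness:

\[ \Theta = \arcsin\!\left(\frac{f_{\mathrm{pass}}}{f_{\mathrm{stop}}}\right) \]

so 60Hz against 120Hz is a steepness of 2 and Θ = 30°, while pulling the passband down to 40Hz gives a steepness of 3 and Θ = arcsin(1/3) ≈ 19.5°, which is the 20° column.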
A C 05 25 32 would give me a stop of 60dB. I will give some of that up to component tolerances, but it’s better than 16dB and gives me some chance of fighting all that mains rubbish, so let’s take a look.
It’s not bad. I’d probably want to shift the corner frequency down by 5Hz. It’s good that it isn’t anywhere near as sensitive to component values as the Cade-Blundell bandpass ones were; the shifts due to preferred values are noticeable, but the traces are close. For comparison the original OpenEEG line is in blue. The filter is complex, but not terrible, and I can take solace that this is the quid pro quo for not having to line up all those 54 filter centre frequencies 😉 Continue reading “Modding the OpenEEG analogue to digital converter and comparing with OpenBCI”
So far I have inched my way towards a Mind Mirror-compatible EEG in a theoretical way, but to make it work in real life I need a way of getting signals into the machine. You can buy a board made by Olimex for a reasonable £50; you get optoisolation and everything, and it’s probably the most cost-effective way. Trouble is I don’t know that EEGmir works yet, so I want to do it cheaper, and also now. A Microchip PIC16F88 will do the job here, and I have a few 🙂
I tinkered with this SPBRG calculator to find a suitable crystal for the PIC16F88 that would match both the 256Hz sampling rate and the baud rate. The first run of EEGmir showed me nothing at all.
Inquiring further, it seems the Raspberry Pi gets shirty about a 3% baud rate error at 57600 baud. I set up a test PIC to pump out an endless string of As, and when I brought up minicom they showed up as Ps. This is not good.
I needed to go and find a 3.6864MHz crystal, which gets you down to 0% error at 57.6k, and by a fortunate stroke of luck Fosc/4 divides down integer-wise to 256Hz. Nice. So I did that, sending a bunch of As in the data frames to the Pi, after padding down the 5V TTL signal from the PIC.
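The sums, for the record. This little check program is mine, not part of the project, and it assumes the usual high-speed USART formula baud = Fosc/(16*(SPBRG+1)) that the SPBRG calculators use.

/* Quick crystal sanity check: assumes baud = Fosc / (16 * (SPBRG + 1)),
 * i.e. the PIC USART in high-speed asynchronous mode. */
#include <stdio.h>
#include <math.h>

static void check(double fosc)
{
    const double target = 57600.0;
    int spbrg = (int)lround(fosc / (16.0 * target)) - 1;  /* nearest divider */
    double baud = fosc / (16.0 * (spbrg + 1));
    double err  = 100.0 * (baud - target) / target;
    double cycles_per_frame = (fosc / 4.0) / 256.0;       /* instruction cycles per 256Hz frame */
    printf("Fosc=%.4fMHz  SPBRG=%d  baud=%.0f (%+.2f%%)  cycles/frame=%.2f\n",
           fosc / 1e6, spbrg, baud, err, cycles_per_frame);
}

int main(void)
{
    check(4000000.0);   /* a common 4MHz crystal: 8.5% baud error, 3906.25 cycles/frame */
    check(3686400.0);   /* 3.6864MHz: 0% error and exactly 3600 cycles per frame */
    return 0;
}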
Minicom showed the As OK from the test PIC, but it wouldn’t let go of the TTY until I rebooted. EEGmir comes up and shows me a load of gobby stuff about data errors. Pressing F12 shows it is assessing jitter
and telling me I have a sampling rate of 325Hz. The nice thing about hardware is you can get a second opinion. Sometimes it’s the smoke pouring out of something, but here it’s in the frame rate of the signal, as I gave myself a sync pulse on a spare PIC pin to synchronise my scope to. So I appeal the outrageous assertion that I am running too fast
and get handed down the verdict of guilty as charged: I did screw up. And I didn’t wait for the camera to focus.
Let’s look on the bright side. This PIC is sending out data at the right baud rate, in more or less the right frames, just too damn fast. And EEGmir is reading from the Pi serial port and struggling manfully to make some sense of it. The (256Hz) on the jitter display even gives me hope it might adapt if I choose to run at 128Hz. Oh, and I find that the escape key is the quit command in EEGmir, which saves having to go and find the PID and do a kill -9 on it, which always feels a bit bush league.
The sampling rate error was because I failed to wait for TMR1, which I was using to define the frame rate, to time out. Doing that fixed the sampling rate; it’s now 256.04Hz according to EEGmir. It’s still hollering about data errors, so I probably failed to understand the OpenEEG2 protocol somehow. Continue reading “OpenEEG2 ADC”
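For my own notes, the frame-pacing fix amounts to something like the sketch below. The register names are XC8-style, the reload value assumes the 3.6864MHz crystal (65536 - 3600 = 61936) and sample_channels()/send_frame() stand in for my own routines; it isn’t the actual firmware.

/* Frame pacing sketch, not the real firmware: XC8-style register names, and
 * the peripheral setup (T1CON, USART, ADC) is assumed to happen elsewhere. */
#include <xc.h>

#define TMR1_RELOAD 61936u   /* 65536 - 3600: one 256Hz frame at Fosc/4 = 921.6kHz, 1:1 prescale */

void sample_channels(void);  /* read the ADC inputs (defined elsewhere) */
void send_frame(void);       /* push the packet out of the USART (defined elsewhere) */

static void wait_for_frame_tick(void)
{
    while (!PIR1bits.TMR1IF)                     /* the wait I originally skipped */
        ;
    TMR1H = (unsigned char)(TMR1_RELOAD >> 8);   /* reload for the next frame */
    TMR1L = (unsigned char)(TMR1_RELOAD & 0xFF);
    PIR1bits.TMR1IF = 0;                         /* clear the overflow flag */
}

void main(void)
{
    for (;;) {
        wait_for_frame_tick();                   /* 256 frames per second */
        sample_channels();
        send_frame();
    }
}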
In my library/Google trawl I turned up EEGMIR, which is to be found here. This uses regular C code to run the IIR filters; the implication is that this is a digital implementation of analogue filters, probably achieved by transforming the s plane to the z plane and predistorting the response. This would save me heroic amounts of tweaking analogue filters. If I could run it on a Raspberry Pi, I could get my Mind Mirror 1 LED display by extending the display code and using the GPIO.
But first I need to characterise the program, compile it on the Pi and get it working. And the program is 14 years old… I’m not a C guru, though I have used the language, not professionally but in its bastardised form for the Arduino, and I’m not a DSP guru either. So I’m hopelessly out of my depth. I do like the way Jim Peters took an interesting approach to the amplitude display of the bands, downconverting each bandpass output with its centre frequency fc to make a direct-conversion receiver to DC. Done with an IQ demodulator it works better than the amateur radio hardware implementation.
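The idea as I picture it is sketched below; this is my own sketch, not Jim Peters’ code: multiply the bandpass output by a quadrature pair at the band centre, low-pass each arm, and take the magnitude.

/* Direct-conversion amplitude detector: the concept as I read it, not eegmir's
 * implementation. x is one sample of the bandpass filter output. */
#include <math.h>

#define TWO_PI 6.283185307179586

typedef struct {
    double fc, fs;       /* band centre and sample rate, Hz */
    double phase;        /* local oscillator phase, radians */
    double i_lp, q_lp;   /* one-pole low-pass state for each arm */
    double alpha;        /* low-pass smoothing coefficient, 0..1 */
} IQDetector;

void iq_init(IQDetector *d, double fc, double fs, double alpha)
{
    d->fc = fc; d->fs = fs; d->alpha = alpha;
    d->phase = 0.0; d->i_lp = 0.0; d->q_lp = 0.0;
}

/* Feed one sample, get back the current band amplitude estimate. */
double iq_update(IQDetector *d, double x)
{
    double i = x * cos(d->phase);                /* mix down to DC... */
    double q = x * sin(d->phase);                /* ...in quadrature */
    d->phase += TWO_PI * d->fc / d->fs;
    if (d->phase > TWO_PI) d->phase -= TWO_PI;

    d->i_lp += d->alpha * (i - d->i_lp);         /* low-pass each arm */
    d->q_lp += d->alpha * (q - d->q_lp);

    return 2.0 * sqrt(d->i_lp * d->i_lp + d->q_lp * d->q_lp);   /* envelope */
}

But first things first. Does it compile?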
Compiling eegmir on the Pi
I get a new Raspberry Pi B+ V2 and a copy of jessie-lite. If you are starting from scratch, use a regular Pixel Jessie install: eegmir is a graphical program, though it looks ugly, so you need the X Window system.
I had to install Pixel, and X. That’s why you should have started with a full install: EEGMIR is a graphical display program, whereas nearly everything else I use a Pi for is command line. I don’t normally bother with the desktop on a Pi because I run these guys headless.
do ./mk a
The compiler screams: I need something called SDL. Due to the age of the program SDL2 doesn’t work, so install SDL 1.2.
=== page_bands.c
tmp-linux/page_bands.c: In function ‘draw_signal’:
tmp-linux/page_bands.c:335:4: error: label at end of compound statement
no_more_data:
^
FAILED
Hmm. I’m in trouble now. I look at Jim Peters’ code in page_bands.c, where he makes a leap out of some nested loops:
//
// Draw the signal area
//
static void
draw_signal(PageBands *pg, int xx, int yy, int sx, int sy, int tsx) {
int tb= 1; // Timebase -- samples/pixel
int a, b;
[...]
if (oy0 < sy && oy1 >= 0) {
if (oy0 < 0) oy0= 0;
if (oy1 >= sy) oy1= sy-1;
vline(xx + ox, yy+oy0, oy1-oy0+1, pg->c_sig1);
}
}
}
no_more_data: // <- COMPILER MOANS ABOUT THIS
}
}
I’m in pretty deep trouble here. I don’t really understand what’s going on. I invoke the spirit of the Big G on the error message and I am educated like so
case 5:
    // here you need to add statement
    // if you don't want to do anything simple break statement will work for you
    break;
to lob in a break statement after that no_more_data: label. I am hacking; I’m not proud of it, but sometimes you have to keep the wheels turning to make progress 😉 The compiler is now happy, with a modest amount of bellyaching:
=== fidlib/fidlib.c
In file included from fidlib/fidlib.c:622:0:
fidlib/fidmkf.c:151:1: warning: conflicting types for built-in function ‘csqrt’
csqrt(double *aa) {
^
fidlib/fidmkf.c:175:1: warning: conflicting types for built-in function ‘cexp’
cexp(double *aa) {
^
I throw caution to the winds and run the program. It now comes up but spits bricks on the command line
pi@raspieeg:~/eegmir/eegmir-0.1.12 $ ./eegmir
eegmir: Unable to open serial device: /dev/ttyS0
Maybe I need to detach ttyS0. You do that with raspi-config: turn off terminal output over serial but keep the hardware enabled. It still moans about ttyS0, and that’s because on the Pi this should be ttyAMA0
I change ttyS0 to ttyAMA0 in eegmir.cfg
It now responds, though glacially slowly over X, to F2 (MM), F3 (display test), F4 (exponential frequency map) and F10 (jitter calc). I take the hit and run it on a real composite video display. My cable was a camcorder cable, so I needed to use the right audio phono for the video. Ain’t Google marvellous.
Responsiveness is much improved, and my addition of the break statement has not obviously borked the program. In Googling there was talk of some versions of gcc letting a label with no following statement pass and some versions getting shirty; maybe this was different 14 years ago.
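For reference, the patch is just these two lines where the label sits in draw_signal(); going by that gcc chatter, a bare ‘;’ (an empty statement) after the label would have been the more orthodox fix, but the break compiled and, so far, behaves.

no_more_data:   // C before C23 wants a statement after a label
    break;      // the hack; an empty ';' would also satisfy the compiler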
I observed a lack of settling to zero on the IIR filters at the low frequencies, which corroborates the feeling I got reading about the effects of truncation of the filter coefficients being worse close to the sampling frequency and close to zero. After all, I can guarantee, dead certain, that the input is digital silence, because there is no input.
The jitter test screen on F10 moans at me that it can’t work out the jitter. Can’t really argue with that, because there is no input. I need to go fix that next.
Pressing F11 gives me
So I jack a pair of cans across the audio output of the Pi and I get to hear what sounds to me like a 1kHz tone
Jim Peters GPL2’d it, so I have retained the same licence on GitHub
Conclusion – it works in principle
So far I surmise that I haven’t mortally wounded the program by tossing in that arbitrary break statement, and that it will run on a Pi. I have no idea if I have enough MIPS for decent performance: a Raspberry Pi 2 has 4,744 MIPS whereas a 2003-vintage Pentium 4 had 9,726 MIPS, and since I am using a Pi B+, which is slower than the Pi 2, I may be short of processing grunt. But for that I need a signal.
Rummaging around looking for the HDMI-to-VGA adapter I had in the loft, I found a Pi 2 sitting unloved, so I swapped the B+ for the Pi 2 for an instant hardware upgrade. There is a comparison of the performance of the B+ and the 2 here. The program is more responsive now, so I do the whole
and then recompile. This time it recompiles all the program components, so I figure something changed under the hood to get all those four cores working for me. I get the same griping about the conflicting types.
I find out how to boost the bar gain, to take a better look at that suspected truncation noise in the low frequency filters. That’s the b key followed by a number
This doesn’t really trouble me: that’s lifted by 100 times, I will do gain control in the analogue domain, and the Mind Mirror didn’t EQ individual channels or do any levelling other than master gain. But it shows that the 0.75Hz Mind Mirror channel could be ‘interesting’ to add. Truncation noise seems to get worse as you approach fs/2 and 0; fs/2 is 128Hz so I am well away from that, but I could benefit from halving fs, which is something to bear in mind in the hardware design, along with testing whether the software will adjust.