Last week, on October 9, I went to London for a few hours to participate in the UBhave conference. It was a kick-off event for the UBhave project, on the theme of “Making Multidisciplinary Research Work”.
UBhave is a really cool project (http://ubhave.org/) focused on using mobile phones and social networks for behavior-changing interventions. Can you make people exercise more, socialize, or generally improve their quality of life, thanks to access to data about them, the ability to do almost-real-time research, and a direct feedback channel (in the form of smartphone apps, SMS, social network posts, etc.)?
There were some great speakers at the conference (http://ubhave.org/conference2012/), including Prof. Kevin Patrick from UCSD (advisor of Ernesto Ramirez @e_ramirez, a well-known Quantified Self life hacker) and Dr Niels Rosenquist from Mass General, co-author of one of the coolest social media analysis projects ever (Pulse of the Nation).
It was really fun to see the common ground between my two projects (Smartphone Brain Scanner and SensibleDTU; more about the latter soon). We are living in times when brain-scanning people and social-network people can (start to) talk the same language. Über-exciting.
Android doesn’t like USB hubs in general, and I couldn’t make it talk to two Emotiv EPOC dongles (which use the hidraw module) through any hub I tried. But where there’s a will there’s a way: using a USB keyboard that includes a hub, and connecting the dongles to the keyboard before connecting it to the phone/tablet, makes all the devices (the keyboard and multiple USB dongles) get recognized and properly mounted in the system as /dev/hidrawX. The Sun Type 7 keyboard is easy to take apart and supports 3 devices. It ain’t pretty, but it doesn’t require external power and works fine.
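A quick way to sanity-check the contraption on any Linux-based system (including a rooted Android shell with Python available) is to list the hidraw nodes that appeared and pull one raw report from a dongle. This is only an illustrative sketch, not part of SBS2 itself; the 32-byte report size matches what the EPOC dongle delivers:

```python
import glob
import os

def list_hidraw_devices():
    """Return the /dev/hidrawX device nodes currently present."""
    return sorted(glob.glob("/dev/hidraw*"))

def read_packet(path, size=32):
    """Read a single raw report (the EPOC dongle sends 32-byte packets)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.read(fd, size)
    finally:
        os.close(fd)

if __name__ == "__main__":
    devices = list_hidraw_devices()
    # With two dongles behind the keyboard hub you should see the
    # keyboard itself plus one node per dongle, e.g. hidraw0..hidraw2.
    print("hidraw nodes:", devices)
```

If a dongle is missing from the list, the hub enumeration failed and it is back to replugging things in a different order.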
Smartphone Brain Scanner 2 now supports using multiple headsets on a single device. This can be used for synchronized data acquisition or gaming on a single device. And for some other stuff that I should soon be able to share.
Finally, a little peace in between periods of craziness.
Two months ago SmartphoneBrainScanner2 went open source and found a home at http://code.google.com/p/smartphonebrainscanner2/ The source is there, the wiki is there, and even some bugs are starting to appear. It’s awesome to see the project come alive, escaping your own machine into the wild.
The code is open. As open as it can be: there is a single binary blob, required if you want to use the Emotiv EPOC hardware. Having all the algorithms and pipelines open is extremely important, and not only because you want people to be able to contribute to your project. If you are serious about the results, black boxes are seldom a good idea, especially when the software and methodology are far from mature.
What can SmartphoneBrainScanner2 currently do? Well, it allows you to build applications in Qt (4.x, but 5.x should also work) that will run on various platforms/devices. Our focus is on mobile devices, primarily Android. Yes, we use Qt for Android, and except for the occasional bug we find, everything works amazingly well. With the intensification of Digia’s work on Qt for Android and iOS, we are starting to feel that the initial decision to go with Qt was the future-proof one.
How have these two months of having SBS2 on Google Code been? We went live on July 25, and since then we have had 971 visitors. Not bad for a, let’s be honest, niche project. And it seems that the attention is becoming less dependent on announcements (the spikes in the early phases).
Of course, it is not really the website traffic that counts for the project, but how it is used. And a lot of really fun stuff is currently being done with SBS2: different hardware, new algorithms and applications. I hope life will be calm enough to write about them.
Connecting a low-cost off-the-shelf neuroheadset directly to a mobile phone or a tablet and performing real-time signal analysis and visualization would have been pretty much science fiction 5 years ago. Yes, 5 years ago we had the Nokia N800, the Nokia N95, and the iPhone. And tablets were simply a different thing from what we have now.
Today we can do it. The technology is there, in the form of affordable EEG headsets (think Emotiv or NeuroSky) and powerful mobile devices (mobile phones, tablets, and everything in between, from 3 to 10 inches). The software is there: hackable mobile OSes and the multiplatform framework Qt (so we don’t have to rewrite everything for every single device). And the knowledge is there: sophisticated and fast methods of analyzing EEG signals. All the pieces seem to be in place.
There is something more: motivation. Why do we feel it is worth spending months of work of not-so-dumb people to develop a mobile solution, instead of simply using standard high-quality (and quite pricey) setups?
There are several reasons for that. Some of them are even pretty good.
Portable in Greek means ‘better’. Well, no, not really. Portability for us is about a setup that can be easily deployed pretty much anywhere and is self-contained. No need for power, a network connection, or furniture (naturally, up to some sane limits).
A mobile setup allows the user to move around without a spaghetti of cables dangling around them. Standing, talking, and walking around are very difficult with a classical EEG setup (although of course possible).
A cheap setup with off-the-shelf components lowers the entry barrier for researchers, enthusiasts, and eventually regular users. One doesn’t need to be an EEG expert to start playing with such a setup just for fun. And for a researcher, buying 30+ EEG headsets and the same number of mobile phones or tablets suddenly doesn’t sound so crazy.
A real-time approach is necessary for many end-user-oriented techniques, such as neurofeedback or brain-computer interfaces (BCI). There is no time to analyze the data offline for minutes or hours: the results must be calculated and delivered right on the spot. This in itself creates many unique challenges.
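The core of that constraint can be pictured as a sliding-window loop: samples keep arriving, and feedback must be emitted while the window is still fresh. A toy sketch (the names and the variance-threshold ‘feature’ are mine for illustration; a real system would extract something meaningful, like band power):

```python
from collections import deque
import statistics

def neurofeedback_loop(samples, window=128, step=32, threshold=1.0):
    """Consume a stream of EEG samples; yield feedback once per step.

    Toy feature: signal variance over the last `window` samples.
    At 128 Hz, window=128 means feedback based on the last second.
    """
    buf = deque(maxlen=window)
    for i, s in enumerate(samples):
        buf.append(s)
        # Once a full window is buffered, emit feedback every `step` samples,
        # i.e. 4 times per second here -- no offline pass over the recording.
        if len(buf) == window and (i + 1) % step == 0:
            feature = statistics.pvariance(buf)
            yield "relax" if feature > threshold else "ok"

# Two seconds of flat signal -> 5 windows, all below threshold:
feedback = list(neurofeedback_loop([0.0] * 256))
```

The point is structural: everything the user sees is computed from a bounded buffer, in bounded time, as the data flows.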
Sophisticated analysis of the EEG signal means going beyond looking at single electrodes in the time or frequency domain. As I mentioned in my previous post, one of the approaches we are using is source reconstruction, where we try to work with the actual brain activity, not just the measured signal.
A hack-away approach to the whole system comes from the belief that proper end-user applications can already be created with this setup. We do not make plugins for MATLAB or dependency-heavy pipelines: everything is written in Qt, can be compiled for any platform, and will run as long as we can deliver raw packets from the USB dongle (or, when the dongle disappears in the future, deliver packets directly via Bluetooth). SmartphoneBrainScanner2 exposes both raw data and higher-level extracted features (e.g. reconstructed sources), so you can plug into the data stream wherever you find it suitable.
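The ‘plug in wherever you like’ idea amounts to a publish/subscribe pattern on the data stream. A minimal sketch (the class and channel names here are hypothetical, not the actual SBS2 C++ API):

```python
class DataStream:
    """Toy dispatcher: the acquisition side publishes packets, and any
    number of consumers subscribe to raw data or extracted features."""

    def __init__(self):
        self._subscribers = {"raw": [], "features": []}

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, data):
        for cb in self._subscribers[channel]:
            cb(data)

# A raw-data consumer (e.g. a recorder) and a feature consumer
# (e.g. a neurofeedback display) coexist on the same stream:
stream = DataStream()
raw_log, feat_log = [], []
stream.subscribe("raw", raw_log.append)
stream.subscribe("features", feat_log.append)
stream.publish("raw", b"\x00" * 32)          # one 32-byte EEG packet
stream.publish("features", {"alpha": 0.7})   # e.g. reconstructed source power
```

An application that only needs high-level features never touches the raw packets, and vice versa.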
Using a low-cost mobile setup naturally poses many challenges. There are few electrodes, and those that are there are not placed optimally for many applications. The signal gets much noisier once we allow users to move around. We need to estimate certain parameters (e.g. noise) in real time, instead of simply estimating them on the whole signal in post-analysis. And so on.
But on the other hand, we currently have several BSc and MSc students working with the system, designing and implementing simple interfaces for neurofeedback and conducting simple experiments confirming established (and some not-so-well-established) paradigms. Folks who had never worked with EEG systems before received a short crash course and could start hacking away. This is a great experience.
There is no denying that what we are doing is, somewhere down there, inspired by the story of Kinect: a sensor developed with really serious research behind it, which was supposed to be a gaming controller but instead is best known for enabling researchers and hobbyists to build amazing human-computer interaction systems. Kinect didn’t really bring anything technologically new to the table: systems for accurate skeleton tracking, voice recognition, and depth sensing had been available on the market for a long time. What Kinect did bring was a ridiculously low entry point: buy one and start playing. Buy 10 if you feel like it. Work with raw data or let the software do the tracking for you.
This lowering of the entry barrier is the real value we are trying to create in this project. It includes software, algorithms, and the definition of novel approaches to EEG experiments. From the researcher’s perspective, SmartphoneBrainScanner2 is a lab in a pocket: a self-contained, inexpensive mobile solution that can deliver stimuli, collect responses, and provide a framework for real-time analysis of the data. On multiple subjects. Hobbyists and all-around hackers will be able to use it to quickly create brain-computer interfaces that work on different time scales: from game-like control in a window of milliseconds (e.g. mu suppression) all the way up to slow changes in the user’s state (e.g. relaxation) taking minutes or hours. And finally users, hopefully sooner rather than later, will be able to use the applications to interface with machines, or as a means of self-improvement (neurofeedback training). Not in a lab, an hour a day for a week, but at home, as much and for as long as needed. Current work in neurofeedback looks very promising [Zoefel2011], but the really interesting question is: what will happen after weeks or months of training ourselves?
In the next part I will write about the software we are creating, and hopefully I will be able to give some idea of when everything will go fully open. The fun is just beginning, so stay tuned.
[Zoefel2011] Zoefel, B., Huster, R. J., & Herrmann, C. S. (2011). Neurofeedback training of the upper alpha frequency band in EEG improves cognitive performance. NeuroImage, 54(2), 1427-1431. doi:10.1016/j.neuroimage.2010.08.078
Just one year ago our team at the Technical University of Denmark (DTU) started Project Smartphone Brain Scanner. We obtained an Emotiv EPOC gaming neuroheadset and asked: ‘can we make it fully mobile?’. This is how it all began.
If you managed to escape all the hype we got in the media, check out the press tab.
The first platform of choice was the Nokia N900 mobile computer, a fully fledged portable Linux box (that can even make calls). Thanks to community efforts, a kernel with USB host mode was already there, allowing us to use the EPOC dongle. I compiled a hidraw module and voilà: data was flowing in 32-byte packets to /dev/hidraw1.
Data from the Emotiv EPOC is encrypted. It is part of the company’s business model to sell different licenses (i.e. decryption keys) with different dongles, giving access to different levels of data (game controller, extracted features, raw EEG). As a university, we purchased our first and subsequent sets as Research editions, giving us access to everything in the signal stream (including raw EEG data). But only on Microsoft Windows.
Initially we used parts of the code from the Emokit project. Some enterprising folks had broken the encryption scheme and published the keys. This allowed us to access decrypted and demultiplexed data on the phone, initially using Python and later C++. This was awesome, but just the beginning.
Raw data from the EEG (here 14 channels at 128 Hz) is just… well, numbers flowing in. A wide range of EEG analysis methods exists (notably collected in the MATLAB toolbox EEGLAB). How much could we do on the mobile phone, and in real time?
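To get a feel for what those numbers can become: one of the most basic EEG computations is band power over a short window. A toy, dependency-free sketch using a naive DFT (SBS2 itself uses optimized C++; this is just to show the shape of the computation at 128 Hz):

```python
import cmath
import math

def band_power(samples, fs, lo, hi):
    """Average power of the frequency bins in [lo, hi] Hz, via a naive DFT."""
    n = len(samples)
    powers = []
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq <= hi:
            coef = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            powers.append(abs(coef) ** 2 / n)
    return sum(powers) / len(powers)

# One second of a synthetic 10 Hz 'alpha' oscillation sampled at 128 Hz:
fs = 128
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(signal, fs, 8, 12)   # strong: contains the 10 Hz bin
beta = band_power(signal, fs, 13, 30)   # near zero for this pure tone
```

A one-second window at 128 Hz is tiny; even a naive O(n²) DFT is instant. The expensive part, as described below, starts when you go after the sources.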
Carsten Stahlhut, my colleague (currently a post-doc), does remarkable work in the field of source reconstruction of brain activity. To explain it in very simple terms: the current we measure with an EEG electrode is a sum of all activations in the brain, propagated through the brain itself, the scalp, the skin, etc. So what we measure with electrode A doesn’t necessarily originate in the part of the brain closest to that electrode. The main task is to reconstruct the sources of the signal in the brain, instead of focusing just on the measured current. Carsten’s thesis is a good read if this sounds interesting (and you don’t mind some badass math).
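In the standard linear formulation (a textbook version, with my choice of symbols, not necessarily the exact variant used in Carsten’s work), the measured signal is the source activity mixed through a forward ‘lead field’ matrix, and reconstruction means inverting that badly underdetermined mapping, for example with a regularized minimum-norm estimate:

```latex
% Forward model: M electrodes (here 14), N sources (here ~1000 vertices):
y_t = A\, s_t + \varepsilon_t, \qquad
y_t \in \mathbb{R}^{M},\quad s_t \in \mathbb{R}^{N},\quad N \gg M

% Regularized minimum-norm estimate of the sources at time t:
\hat{s}_t = A^{\top}\!\left( A A^{\top} + \lambda I \right)^{-1} y_t
```

With N in the thousands and a new estimate needed for every sample window, it is easy to see where the computational cost discussed below comes from.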
Source reconstruction is computationally expensive. We are talking about a 1028-vertex brain model (every vertex is a potential source of the signal), matrices of size 1028×1028 and larger, real-time AES decryption, recording the data, visualization, and so on. We needed to overclock the N900 (again, the power kernel is an amazing community effort) as much as we could, going up to 1.15 GHz. Some phones do not like it so much and become unstable; luckily we have plenty to choose from. Still, some simplifications of the calculations are implemented, all in pursuit of hard real-time operation of the system.
In the next installment I will try to explain why we pursued this mobile, real-time, and complex approach to EEG at all. Why not just settle for either simple 2-3 electrode portable systems, or big-and-serious but non-mobile solutions? Stay tuned!