Glass: EEG


Resistance is futile…

Today we checked whether Glass can be used with the Smartphone Brain Scanner hardware (Emocap) without much interference.

The setup used a full 14-channel cap with recording on an Android tablet; a 20 Hz blinking pattern was shown in a video on a computer screen (roughly matching the Glass screen size), which the participant attended to through the Glass.

The resulting power spectrum from the O2 electrode shows a clear peak around 20 Hz when the stimulus (flicker) was on and no peak with the stimulus off. There is no significant difference between Glass being off and on (idle).
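For reference, a minimal sketch of how such a spectrum check can be done offline in Python, assuming the O2 samples have been exported to a NumPy array; the file name, the 128 Hz sampling rate (typical for the EPOC hardware), and the window length are illustrative only:

```python
# Minimal sketch: power spectrum of one EEG channel around a 20 Hz flicker.
# The file name, sampling rate, and window length are illustrative only.
import numpy as np
from scipy.signal import welch

fs = 128.0                             # sampling rate in Hz (assumed, EPOC-like)
o2 = np.load("o2_channel.npy")         # hypothetical export of the O2 electrode

# ~4 s Welch windows give roughly 0.25 Hz frequency resolution.
freqs, psd = welch(o2, fs=fs, nperseg=4 * int(fs))

# Compare power in a narrow band around the 20 Hz stimulus to the overall peak.
band = (freqs > 19.5) & (freqs < 20.5)
print("Peak frequency: %.2f Hz" % freqs[np.argmax(psd)])
print("Mean power 19.5-20.5 Hz: %.3e" % psd[band].mean())
```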

Glass should thus be compatible with EEG recording, and since we get a nice, strong response to a visual stimulus from a single electrode, we should be able to build something really interesting.

Next step: stimulus directly from Glass (and maybe user response?).

Power spectrum with Glass off.

Power spectrum with Glass on.

Glass: Gadget

What is Glass about?

As Thad Starner (@ThadStarner), technical lead on the project, puts it, Google as a company is about delivering information to the user as quickly as possible, preferably even before this information is requested.

Autocomplete in Search, Priority Inbox in Gmail, Google Now, the Knowledge Graph: all of these are about reducing the time between when information is wanted or needed and when it is presented to the user.

Glass is the next iteration. Your computer is powerful but slow: if you are walking down the street, you will not pull the laptop out of your backpack to check the time or email. But when an attachment will not open on your phone, you will end up using your laptop. Different devices, different tasks.

Glass is insanely limited. The screen is not good for anything beyond pre-chewed information. You can literally watch the battery percentage going down in front of your eyes. Input is hard and error-prone.

And it is supposed to be limited. Think of Glass as a hardware interface for the notification bar, Google Now, and Search. It is not for browsing the web or even your email inbox. The content is either requested or pushed to the user, but in a very condensed way that requires only a quick glance.

This extraction of knowledge is the real bottleneck of the system. Understanding complex speech is hard. Returning relevant results is hard. Presenting information in a condensed way is hard. Thus, building a good Glass is hard.

I am visiting my parents for Christmas in Poland. Trying navigation, I found it impossible to get Glass to navigate to the place I wanted, Polish street names being impossible to spell. So there I was, sitting with my Glass, smartphone, and laptop, not being able to tell Glass what I wanted. I ended up quickly putting together a Mirror API app with the sole purpose of starting navigation to the street I wanted.
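The core of such an app is a single timeline insert with a built-in NAVIGATE menu item. Here is a rough sketch, assuming an authorized Mirror API credential is already in place; the coordinates, card text, and function name are placeholders:

```python
# Rough sketch: a Mirror API timeline card that offers navigation to a fixed
# location. Assumes an authorized httplib2 `http` object already exists;
# coordinates and text are placeholders.
from apiclient.discovery import build  # google-api-python-client

def insert_navigation_card(http):
    service = build("mirror", "v1", http=http)
    card = {
        "text": "Navigate to the street I actually meant",
        "location": {"latitude": 52.2297, "longitude": 21.0122},  # placeholder coordinates
        "menuItems": [{"action": "NAVIGATE"}],  # built-in action: navigate to the card's location
    }
    return service.timeline().insert(body=card).execute()
```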

Functionally, Glass does not do anything your phone would not do. That does not mean it is useless: your laptop can do anything your phone can do, but because of the phone’s form factor, you still use the phone on the go, during meetings, in the toilet. But when an email turns out to be just too long to type comfortably on the virtual keyboard, and your laptop is sitting right there, you switch.

As Glass does not aspire to replace the phone, it competes on being faster than reaching into your pocket, unlocking the phone, and doing a search, checking email, or taking a picture. If Glass fails to recognize your query the first time, you would have been better off using your phone for the search. Tough competition. If the screen does not activate on the first horse-like head jerk, it would have been more efficient to check the time on your watch.

There are certain use cases where the Glass form factor is a killer feature: when having it hands-free or always on really makes the difference. Taking pictures or recording videos is fun; you can snap casual stuff you would normally not consider worth taking your phone out for, and you get the POV. It is not really for professionals, who have been using POV cameras for years; if you are doing serious bungee jumping, there is a good chance you have a pro camera for that. For pictures and videos, Glass is for casual users, quietly accompanying them through the day.

Navigation is fun. It is cool to have a stopwatch in the corner of your eye. Or a compass. Or quickly check if the email you just received is worth pulling your phone out. Or ask about the movie you are watching. Those things generally work, and when they do, they feel like magic. But if they fail on the first try, you either continue pushing it (after all you have paid serious money for this thing), or you give up and use your phone. And every time this happens, you think: it is not ready, not good enough.

Glass as a gadget is not ready for prime time. The problem it solves is too small and the solution is not good enough. There are, however, other interesting things to do with it, which I am studying and will describe. After all, it is a device with an uplink and a downlink directly into our eyes. Until we get into our brains (also working on it), it gets us as connected as we can be. There are some fun things we can do with it.

Glass: Beginning


I finally got myself Google Glass. It is one of the few gadgets I’m really excited about, as new smartphones, tablets, or even smartwatches have not really been bringing anything new to the table recently. This one is different.

I want to write about the experience of using it as a personal device, the way it was designed. But I am even more excited to explore the privacy dimension; I do not think it changes the objective privacy concerns significantly, but it makes people react strongly and start asking ‘are you recording me?’. Can we learn from those reactions about privacy in general? After all, recording video is just one possible, and not even that insanely interesting, channel.

And can we do something truly better with Glass and other wearables? By better, I mean not finding cat images faster, but improving the way we work together as a society. So far we have not been very successful, despite all the communication channels and a smartphone in almost every pocket (in developed countries, that is). Does instant access to people’s eyes change anything? Will it solve the problems of epidemics, global warming, and hunger?

Talk by Farrah Mateen on Neurology and Technology for Low-Income Populations

If you happen to be in Copenhagen, check out this talk by Farrah Mateen (MD, PhDc, Massachusetts General Hospital & Harvard Medical School, Boston) about Neurology and Technology for Low-Income Populations.

The talk will take place on Monday, October 7, at 2 pm.

The place is DTU, Building 324, room 030.

Abstract:

Neurological disorders, including stroke, epilepsy, dementia, and head trauma, heavily burden people in low- and middle-income countries and account for a high burden of morbidity and mortality globally. Technologies for brain disorders have lagged behind innovations for other health problems. This lecture will discuss some of the possible points of technological intervention in low-income settings, including electroencephalogram (EEG), rapid diagnostic tests, and enhanced telecommunications for education, health care provision, and research on brain disorders in least developed settings.


Everyone is welcome!

SmartphoneBrainScanner2 on BB Playbook

Recently I got a BB Playbook from RIM (thanks Ash) to play around with and see how research-friendly it is. Here in the lab we pay close attention to the developments around BB10: we like open, hackable, and mainstream platforms. As folks at RIM were out of BB10 dev units, they sent me a Playbook, just to touch base and check where the platform is and where it is going.

One of our large projects, the one where I do most of the coding, is SmartphoneBrainScanner2. It’s a framework for building real-time, multi-platform apps for EEG (check out other posts on this blog). So the very first thing I did with Playbook (after updating it…) was to try to compile and run an example SBS2 app on it.

The framework we use is written entirely in Qt4; the example app visualizing brain cortical activity uses OpenGL ES 2.0 for rendering. The framework is growing fast, but as we try to keep the number of platform-specific hacks to a minimum (and you always have hacks in the mobile applications world…), the app compiled cleanly and nicely with the BB10 native toolchain. There was some learning curve around the package structure (where to put the large files we use for math models, or how to do icons), but otherwise it just worked™: brain on Playbook. Not much more to write; code that was originally started for the Nokia N900 still compiles for Android and QNX (and hopefully it will stay that way for BB10).

BB Playbook and Nexus 7 running SmartphoneBrainScanner2

Although the software compiled pretty much out of the box, the Playbook does not support USB host mode, which we use for connecting the USB dongle of the EEG neuroheadset. Streaming the data over the network instead, however, the system worked perfectly fine. So if we ever see BB10 devices supporting USB host mode (and hidraw), SmartphoneBrainScanner2 will automagically become available on the new platform. The power of Qt at its fullest.
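To give a rough idea of what the network fallback amounts to, here is a minimal sketch of a client pulling raw data frames over TCP; the host, port, and 32-byte frame size are purely illustrative and not the actual SBS2 protocol:

```python
# Minimal sketch: read a raw EEG stream over TCP instead of a local USB dongle.
# Host, port, and the 32-byte frame size are illustrative only; the real SBS2
# network protocol is handled inside the framework.
import socket

FRAME_SIZE = 32  # hypothetical size of one raw data frame in bytes

def read_frames(host="192.168.0.10", port=12345):
    with socket.create_connection((host, port)) as sock:
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:                     # server closed the connection
                break
            buf += chunk
            while len(buf) >= FRAME_SIZE:     # emit complete frames only
                frame, buf = buf[:FRAME_SIZE], buf[FRAME_SIZE:]
                yield frame                   # hand the raw frame to the decoder
```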

My general impressions from playing with the NDK and putting stuff on the device were quite positive. A plus for using standards to do things (such as transfers over SSH), but the whole procedure could be more streamlined (keeping an SSH session active in one terminal, transferring files in another, that kind of thing).

For research we are interested mainly in two things: hackability and support for sensors. I’m hoping for BB10 to have a robust architecture and to behave as an RTOS should. If we could do proper low-delay audio processing on it, integrated with EEG, that would be fantastic. And if we could deploy applications collecting (various kinds of) sensor data in a stable manner over weeks or months, we would probably buy a few hundred units right away (only recently we purchased 200 Samsung Galaxy Nexus phones for an experiment).

For teaching mobile development, we need platforms that are fairly stable, offer streamlined development, support a multi-platform SDK, and give access to various things on the device (including raw sensor data, background processes, etc.). Currently we teach Android for all these reasons, but to be perfectly honest, I’m growing tired of the sometimes ridiculous bugs/features in Android Java (my favorite so far) and Google’s attitude of don’t-fix-just-run-forward (it may be possible that audio latency has been fixed for some devices in Android 4.1; yup, after all these years, we may see a decent audio architecture in Android). Qt on Android is not yet ready to be taught in the classroom, but boy, I would love to go back to teaching QML on embedded devices. We tried it once with N9s and it was so much fun for everyone.

So RIM, please do it right. And think about your presence in research and teaching. You want to get to them while they are (professionally) young…

UBhave in-N-out

Last week, on October 9, I went to London for a few hours to participate in the UBhave conference. It was a kick-off event for the UBhave project, on the theme of “Making Multidisciplinary Research Work”.

UBhave is a really cool project (http://ubhave.org/) focused on mobile phones and social networks used for behavior-changing interventions. Can you make people exercise more, socialize, or generally improve their quality of life thanks to access to data about them, the ability to do almost-real-time research, and a direct feedback channel (in the form of smartphone apps, SMS, social network posts, etc.)?

There were some great speakers at the conference (http://ubhave.org/conference2012/), including Prof. Kevin Patrick from UCSD (advisor of Ernesto Ramirez @e_ramirez, a well-known Quantified Self life hacker) and Dr. Niels Rosenquist from Mass General, co-author of one of the coolest social media analysis projects ever (Pulse of the Nation).

It was really fun to see the common ground for both my projects (Smartphone Brain Scanner and SensibleDTU, more about the latter soon). We are living in times when brain scanning people and social network people can (start to) talk the same language. Über-exciting.

Multiple Emotiv EPOC headsets

Android doesn’t like USB hubs in general, and I couldn’t make it talk to two Emotiv EPOC dongles (which use the hidraw module) through any hub I tried. But where there’s a will there’s a way: using a USB keyboard that includes a hub, and connecting the dongles to the keyboard before connecting it to the phone/tablet, makes all the devices (keyboard and multiple USB dongles) get recognized and properly mounted in the system as /dev/hidrawX. A Sun Type 7 keyboard is easy to take apart and supports 3 devices. It ain’t pretty, but it doesn’t require external power and works fine.
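As a quick sanity check that the dongles actually show up, something along these lines can be run on a rooted device (or a Linux box); the 32-byte report size assumed here matches what we expect from the EPOC dongle, but treat the details as illustrative:

```python
# Quick sanity check: list hidraw nodes and try to read one raw report from
# each, without blocking. The 32-byte report size is an assumption about the
# encrypted EPOC HID reports; decryption and parsing happen elsewhere.
import glob, os

REPORT_SIZE = 32  # assumed size of one encrypted EPOC HID report

for path in sorted(glob.glob("/dev/hidraw*")):
    try:
        fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
    except OSError as err:
        print("%s -> cannot open (%s)" % (path, err))
        continue
    try:
        report = os.read(fd, REPORT_SIZE)
        print("%s -> %d bytes" % (path, len(report)))
    except OSError:
        print("%s -> opened, no data waiting" % path)
    finally:
        os.close(fd)
```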

Smartphone Brain Scanner 2 now supports using multiple headsets on a single device. This can be used for synchronized data acquisition or gaming on a single device. And for some other stuff that I should soon be able to share.