We have just published a paper in Nature Scientific Reports, The Strength of the Strongest Ties in Collaborative Problem Solving (http://www.nature.com/srep/2014/140613/srep05277/full/srep05277.html).
The paper shows that networking does not improve team performance. We showed that for teams of knowledge workers, only their strongest ties (best friends, or people you spend a lot of time with) had an effect on performance. None of the weaker, networking-type ties affected performance. We also showed that a team’s strongest ties are the best predictor of how the team will perform. They predict performance better than any other factor we looked at, such as the technical abilities of its members, how knowledgeable they are about the topic at hand, or even their personalities. In fact, once you account for a team’s strongest ties, none of these other factors matters.
When solving problems in a competitive environment, it does not matter what or how many people you know; the only things that matter are your strongest ties.
In my line of research (social, and not only social, networks), the use of Twitter is pretty common. We use it to announce and learn about talks, articles, and smaller pieces of work (such as this post). Nothing fancy.
A few days ago, out of curiosity, I created a paid Twitter campaign to have one of my tweets—containing a link to a recently published paper—promoted. Disclaimer: I paid for the campaign from my private funds.
The promoted tweet was pretty bare-bones:
This was my first experience with creating a campaign on Twitter, so I didn’t do any optimization. The goal was also not to actually bump the views/downloads of the paper (although I was ready to be surprised). I simply dropped in 17 Twitter handles as a seed, all of them researchers from the field or general science people and institutions. This resulted in Twitter estimating an audience of 566k, including my followers, users like them, and users like the followers of any of the 17 accounts. This was before choosing the required target location.
I initially set the bid really low, at $0.50. It is important to note that Twitter charges you for interactions with your tweet, not for impressions. The bid is how much you are willing to pay for a click, should the user decide to make one. The higher the maximum bid, the more users you can eventually reach, but at some point the money runs out. As I decided to go all-in and spend $40 on the fun, I never stood a chance of reaching hundreds of thousands of users. Maybe if I hadn’t put the link to the paper there (which would limit the engagement options), but that would probably defeat the purpose.
After including location targeting, for which I could only choose the UK and US, Twitter estimated I would be able to win bids and have my tweet appear for 2k users. This is how it happened:
You can see a pretty standard pattern: my tweet immediately made its way to the low-hanging-fruit users, flattened through the day, peaked right after 5 pm, gained some impressions over the evening, and finally died. It had reached the audience for which the bids were low, and it didn’t move any further. In that period, 10 out of 2,032 users clicked the link…
Not ready to give up, and with $35 still to burn, I increased the max bid to (a relatively high) $2.50. Immediately (at the red line), my tweet was back in the game, peaking at around 1,700 impressions in the first hour. At around 10 pm I ran out of money.
After increasing the bid, I reached another 2,260 users (before running out of money). Among the first 2k users, however, I got 10 engagements; among the second 2k, I got 36. This was not a proper experiment (I did no A/B testing), but it seems pretty clear that Twitter users are not all equal in their engagement with particular content, and the bid value reflects it. Personally, as a user, I would be very interested to learn about the bids that were placed to get into my timeline.
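For the curious: the gap between the two engagement rates (10/2,032 vs. 36/2,260) is large enough that it is unlikely to be pure chance. A quick back-of-the-envelope two-proportion z-test on those numbers (my own sketch, not part of the campaign analytics) makes the point:

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z-statistic for H0: the two engagement rates k1/n1 and k2/n2 are equal."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p2 - p1) / se

# 10 engagements out of 2,032 users (low bid) vs. 36 out of 2,260 (high bid)
z = two_proportion_z(10, 2032, 36, 2260)
print(round(z, 2))  # comfortably above 1.96, i.e. significant at the 5% level
```

So even with these small counts, the higher-bid audience engaged at a genuinely higher rate, not just a noisier one.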
Because I was not charged for impressions, my tweet was inserted into a relatively high number of timelines. That is arguably useless if there is no resulting interaction, but at least I got to learn how Twitter sees the demographics of what I understand as my research target population. Without a null population I don’t know how interesting those results are; next time I should run a similar campaign promoting the latest Justin Bieber album, just to get a baseline. The interests Twitter spat out do not strike me as particularly telling. The F/M ratio is, sadly, what I would expect, so it may be accurate.
Finally, after the campaign, Twitter suggests how to improve the next handover of my money to them, including suggested seeds to add. No surprises here; it seems I ended up with a more popular-science than hardcore-science crowd.
The most interesting lesson for me was the reaction of some of the folks who encountered my promoted tweet.
Have just seen a "promoted" tweet linking to a PLOS ONE paper promoted by the first author. So that's a thing now…—
Micah Allen (@neuroconscience) June 04, 2014
Paying to have (tweets about) a paper promoted does feel a bit strange. On the other hand, we often pay higher publication fees to ensure Open Access publication, so why not pay to disseminate the existence of such a paper? With the concept of altmetrics getting stronger, such practices could, however, result in (highly) paying authors having more impact. As far as I can tell, PLOS did not pick up my promoted tweet as ‘shares’. I am not sure if it influenced (or had the potential to influence) any other metrics.
Putting an ad about a paper on a building or on TV would be ridiculous. Promoting it on Twitter (and other similar media) is totally within the reach of a regular first author. And it does not feel that ridiculous?
It was an interesting experience; I would say I got my $40 worth. There may be potential in using such campaigns to piggyback on Twitter’s analytics and better understand our research target population. Oh, and I’m trying very hard not to promote the tweet linking to this article, just to make a meta-point.
And there is a larger discussion lurking in the shadows, about paying to promote your research in social media. Would you?
Today we checked if Glass can be used with the hardware of Smartphone Brain Scanner (Emocap) without much interference.
The setup was a full 14-channel cap recording on an Android tablet, with a 20Hz blinking pattern shown in a video on a computer screen that the participant attended to through the Glass (roughly matching the Glass screen size).
The resulting power spectrum from the O2 electrode shows a nice peak around 20Hz when the stimulus (flicker) was on and no peak when it was off. There is no significant difference between Glass being off and on (idle).
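The effect we are looking for can be illustrated with a toy simulation: a single-bin DFT (Goertzel-style) power estimate at the flicker frequency cleanly separates a "stimulus on" signal from a "stimulus off" one. This is a hedged sketch on synthetic data, not our actual analysis pipeline; the 128 Hz sampling rate is an assumption matching the Emotiv headset we use.

```python
import math
import random

def band_power(signal, fs, f0):
    """Power at a single frequency f0 via a direct DFT projection."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * f0 * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f0 * i / fs) for i, x in enumerate(signal))
    return (re * re + im * im) / n

random.seed(0)
fs, n = 128, 512                     # assumed sampling rate and window length
t = [i / fs for i in range(n)]
# "stimulus on": 20 Hz sinusoid (the SSVEP response) buried in noise
flicker_on = [math.sin(2 * math.pi * 20 * ti) + random.gauss(0, 0.5) for ti in t]
# "stimulus off": noise only
flicker_off = [random.gauss(0, 0.5) for _ in t]

print(band_power(flicker_on, fs, 20.0) > 10 * band_power(flicker_off, fs, 20.0))
```

The same idea, applied to the real O2 signal with a proper spectral estimator, is what produces the on/off peak difference described above.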
Glass should be fully compatible with EEG recording, and since we get a nice, strong signal from a single electrode in response to a visual stimulus, we should be able to build something really interesting.
Next step: stimulus directly from Glass (and maybe user response?).
What is Glass about?
As Thad Starner (@ThadStarner), technical lead on the project puts it, Google as a company is about delivering the information to the user as quickly as possible. Preferably even before this information is requested.
Autocomplete in Search, Priority Inbox in Gmail, Google Now, the Knowledge Graph: all of those are about reducing the time between the moment information is wanted or needed and the moment it is presented to the user.
Glass is the next iteration. Your computer is powerful but slow: if you are walking down the street, you will not pull the laptop from your backpack to check time or email. But when the attachment will not open on your phone, you will end up using your laptop. Different devices, different tasks.
Glass is insanely limited. The screen is not good for anything beyond pre-chewed information. You can literally watch the battery percentage going down in front of your eyes. Input is hard and error-prone.
And it is supposed to be limited. Think about Glass as a hardware interface for the notification bar, Google Now, and Search. It is not for browsing the web or even your email inbox. The content is either requested by or pushed to the user, but in a very condensed way that requires only a quick glance.
The extraction of knowledge required here is the real bottleneck of the system. Understanding complex speech is hard. Returning relevant results is hard. Presenting information in a condensed way is hard. Thus, building a good Glass is hard.
I am visiting my parents for Christmas in Poland. Trying navigation, it was impossible to get Glass to navigate to the place I wanted, Polish street names being impossible to spell. So there I was, sitting with my Glass, smartphone, and laptop, unable to tell Glass what I wanted. I ended up quickly putting together a Mirror API app with the sole purpose of starting navigation to the street I wanted.
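For reference, the workaround boils down to inserting a timeline item that carries a location and the built-in NAVIGATE menu action through the Mirror API; tapping the card on Glass then starts navigation without any speech input. A minimal sketch of the payload (the coordinates and display name are made up, and OAuth token handling is omitted):

```python
import json

# Timeline item with a location and the Mirror API's built-in NAVIGATE action.
# POSTing this JSON to https://www.googleapis.com/mirror/v1/timeline with a
# valid OAuth 2.0 bearer token makes the card appear on Glass.
timeline_item = {
    "text": "Navigate to the street",
    "location": {                      # hypothetical coordinates
        "latitude": 52.2297,
        "longitude": 21.0122,
        "displayName": "Parents' place",
    },
    "menuItems": [{"action": "NAVIGATE"}],
}

payload = json.dumps(timeline_item)
print(payload)
```

Because the location is supplied as coordinates rather than a spoken street name, the spelling problem disappears entirely.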
Functionally, Glass does not do anything your phone would not do. That does not mean it is useless: your laptop can do anything your phone can do, but because of the phone’s form factor, you still use the phone on the go, during meetings, in the toilet. But when an email turns out to be just too long to type comfortably on the virtual keyboard, and your laptop is standing right there, you switch.
As Glass does not aspire to replace the phone, it competes on being faster than reaching into your pocket, unlocking the phone, and doing a search, checking email, or taking a picture. If Glass fails to recognize your query the first time, you would be better off using your phone for the search. Tough competition. If the screen does not activate on the first horse-like head jerk, it would be more efficient to check the time on your watch.
There are certain use cases where the Glass form factor is a killer: when having it hands-free or always on really makes the difference. Taking pictures or recording videos is fun; you can snap casual stuff you would normally not consider worth taking your phone out for, and you get the POV. It is not really for professionals, who have been using POV cameras for years; if you are doing serious bungee jumping, there is a good chance you have a pro camera for that. For pictures and videos, Glass is for casual users, quietly going with them through the day.
Navigation is fun. It is cool to have a stopwatch in the corner of your eye. Or a compass. Or to quickly check whether the email you just received is worth pulling your phone out for. Or to ask about the movie you are watching. These things generally work, and when they do, they feel like magic. But if they fail on the first try, you either keep pushing (after all, you have paid serious money for this thing), or you give up and use your phone. And every time this happens, you think: it is not ready, not good enough.
Glass as a gadget is not ready for prime time. The problem it solves is too small and the solution not perfect enough. There are, however, other interesting things to do with it, which I am studying and will describe. After all, it is a device with an uplink and downlink directly into our eyes. Until we get into our brains (also working on it), it gets us as connected as we can be. There are some fun things we can do with it.
I finally got myself Google Glass. It is one of the gadgets I’m really excited about, as new smartphones, tablets, or even smartwatches have not really been bringing anything new to the table recently. This one is different.
I want to write about the experience of using it as a personal device, the way it was designed. But I am even more excited to explore the privacy dimension; I do not think it changes the objective privacy concerns significantly, but it makes people react strongly and start asking ‘are you recording me?’. Can we learn from those reactions about privacy in general? After all, recording video is just one possible, and not even that insanely interesting, channel.
And can we do something truly better with Glass and other wearables? By better, I mean not finding cat images faster, but improving the way we work together as a society. So far we haven’t been very successful with all the communication channels, even with a smartphone in almost every pocket (in developed countries, that is). Does instant access to people’s eyes change anything? Will it solve the problems of epidemics, global warming, and hunger?
If you happen to be in Copenhagen check out this talk by Farrah Mateen (MD, PhDc Massachusetts General Hospital & Harvard Medical School, Boston) about Neurology and Technology for Low-Income Populations.
The talk will take place on Monday, October 7, at 2 pm.
The place is DTU, Building 324, room 030.
Neurological disorders, including stroke, epilepsy, dementia, and head trauma, heavily burden people in low- and middle-income countries and account for a high burden of morbidity and mortality globally. Technologies for brain disorders have lagged behind innovations for other health problems. This lecture will discuss some of the possible points of technological intervention in low-income settings, including electroencephalogram (EEG), rapid diagnostic tests, and enhanced telecommunications for education, health care provision, and research on brain disorders in least developed settings.
Everyone is welcome!
Recently I got a BB PlayBook from RIM (thanks, Ash) to play around with and see how research-friendly it is. Here in the lab we pay close attention to the developments around BB10: we like open, hackable, and mainstream platforms. As the folks at RIM were out of BB10 dev units, they sent me a PlayBook, just to touch base and check where the platform is and where it is going.
One of our large projects, the one where I do most of the coding, is SmartphoneBrainScanner2. It’s a framework for building real-time, multi-platform EEG apps (check out other posts on this blog). So the very first thing I did with the PlayBook (after updating it…) was to try to compile and run an example SBS2 app on it.
The framework is entirely written in Qt4; the example app, visualizing brain cortical activity, uses OpenGL ES 2.0 for rendering. The framework is growing fast, but as we try to keep the number of platform-specific hacks to a minimum (and you always do have hacks in the mobile applications world…), the app compiled cleanly and nicely with the BB10 native toolchain. There was some learning curve around the package structure (where to put the large files we use to keep math models, or how to do icons), but otherwise it just worked ™: Brain on PlayBook. Not much to write; code originally begun for the Nokia N900 still compiles for Android and QNX (and hopefully it will remain like that for BB10).
Although the software compiled pretty much out of the box, the PlayBook does not support USB host mode, which we use for connecting the USB dongle of the EEG neuroheadset. Streaming the data over the network, however, the system worked perfectly fine. So if we ever see BB10 devices supporting USB host mode (and HIDRAW), SmartphoneBrainScanner2 will automagically become available on the new platform. The power of Qt in its fullest.
My general impressions from playing with the NDK and putting stuff on the device were quite positive. A plus for using standards to do stuff (such as transfers over SSH), but the whole procedure could be more streamlined (keeping an SSH session active in one terminal while transferring stuff in another, such things).
For research we are interested mainly in two things: hackability and support for sensors. I’m hoping for BB10 to have a robust architecture and to behave as an RTOS should. If we could do proper low-delay audio processing on it, with integration with EEG, that would be fantastic. And if we could deploy applications collecting (various) sensor data in a stable manner over weeks or months, we would probably buy a few hundred units right away (only recently we purchased 200 Samsung Galaxy Nexus units for an experiment).
For teaching mobile development, we need platforms that are fairly stable, offer streamlined development, support a multi-platform SDK, and give access to various stuff on the device (including raw sensor data, background processes, etc.). Currently we teach Android for all these reasons, but to be perfectly honest, I’m growing tired of the sometimes ridiculous bugs/features in Android Java (my favorite so far) and of Google’s don’t-fix-just-run-forward attitude (audio latency may have been fixed for some devices in Android 4.1; yup, after all these years, we may see a decent audio architecture in Android). Qt on Android is not yet ready to be taught in the classroom, but boy, I would love to go back to teaching QML on embedded devices. We tried it once with N9s and it was so much fun for everyone.
So RIM, please do it right. And think about your presence in research and teaching. You want to get to them while they are (professionally) young…