Posted on 14.08.11
Feature written for The Escapist.
It takes a village to raise a child, and in the case of the Pandora, it takes a community – a friendly, supportive, and above all patient community – to create an ambitious handheld gaming device.
“Pandora’s Box” (The Escapist, Issue 307):
When I ask Michael Mrozek, aka EvilDragon – and henceforth ED – what he likes best about the Pandora, the open-source gaming handheld he helped create, he says, “The community.” Not the unit’s 10+ hours of battery life or its beautiful hi-res screen or the wonderfully tactile D-pad or the twin analog nubs or the Linux OS with full X desktop or the amazing amount of homegrown software sprouting up by the day, but the community. “Seriously. And all the nice people helping us out, the devs and everyone else … you can’t thank those guys enough. They have become close friends for me.”
The Pandora nearly didn’t make it. With a gestation longer than an elephant’s, its development has seen DS incarnations come and go. Rashly billed by its creators for release back in 2007 – well before the DSi hit stores – production only hit its stride in recent months, just as the 3DS became everyone’s favorite new toy. Worse, many pre-orders placed in late 2008 at just over $300 have yet to be fulfilled. The Pandora was beleaguered by production snags: unreliable suppliers, faulty parts and the team’s inexperience made a slow process slower. Because of the excessive gap between payment and delivery, PayPal canceled all pre-orders and credit card companies enforced refunds. But the team’s openness – and the understanding that these are just a bunch of guys with a crazy idea and day jobs – inspires goodwill, and pre-orders were re-ordered. Those still waiting are rewarded with photos of stacked LCD cables, nubs, screens, batteries, cases and a video of the kitchen table around which these pieces are assembled by hand.
Continue reading at The Escapist >>
Posted on 14.08.11
Science From the Archives #2: part of a series written for the I, Science blog that revisits science articles from 10 years ago.
While some of us await free WiFi, decade-old technology changes lives in WiFi-free parts of the world.
When nobody’s fussing about WiFi in schools and when there’s a lull between reports on the injurious effects of mobile phones I suspect few of us give much thought to the fizzing ocean of radio waves crashing over us silently and invisibly, continuously and ceaselessly. We take it for granted that the phone in our hand can access any of a myriad of data services on demand. If we spare a thought at all for the ubiquity of the wireless carriers it’s when they’re suddenly absent and our phone becomes an oddly functionless device.
The jargon-strewn story of mobile data networks is one best told at bedtime. But most of us are anyway well aware of the essential plot points in this dynastic creep of technical standards: the original usurper 2G holding on to the crown far longer than anyone intended, handing sovereignty to siblings 2.5G and 2.75G before finally giving way to the true successor 3G, whose beneficent reign made possible the rise and rise of the iPhone. Now we hear tell of the mighty 4G – capable of surpassing all that went before and awaiting coronation.
Looking back at an article in Scientific American from 10 years ago I was surprised to learn how the history of mobile data networks could have taken a very different – and far more community-spirited – course.
In the August 2001 piece on the future of mobile telephony “the cryptically named 802.11b” – less cryptically known as Wi-Fi – is heralded as a “dark-horse challenger” to the established dynasty. Sure, Wi-Fi has since become almost as ubiquitous as mobile data coverage in its way, but that’s the point: what this article was envisaging was Wi-Fi supplanting the XG alternatives.
In 2001 wireless internet was “still a geek thing, requiring fiddling, configuring and tolerance for imperfections”. But in those days of early adoption, idealistic collectives around the world, including consume.net from London, had a wonderful vision of a shared, openly-available wireless internet: “The dream is that if everyone sticks a base station in the window, anyone will be able to access the Net from anywhere in town”. In such a world, as long as you weren’t too far from some kindly person’s window, you’d never need that 3G connection.
Back at the turn of the millennium when phones were phones, WAP was wheeled out by network providers in a coordinated fit of hyperbole. WAP (wireless application protocol) was a technical standard for the provision of mobile data services – such as stripped-down, no-frills web pages – over slower network connections (mostly still 2G at the time). But it was pitched as a technology that would bring the full wonder of the web to a mobile handset. Inevitably, the reality of using a tiny, monochrome screen fell far short of the thrilling cybertopia apparently awaiting those who surfed the BT Cellnet. In this context readily available Wi-Fi for phones should have taken off in a flash.
But of course people never put base stations in their windows. Far from it: we keep access to our wireless internet well secured. What if the strangers next door clogged up our bandwidth watching reruns of The Apprentice on iPlayer? Today there’s even a strong legal incentive to keep swashbuckling piggybackers off our network in case they get up to some pirating in our name.
Sadly, that dream shared by consume.net is almost quaint by current standards. But such a vision would only have been practical in densely populated areas anyway – and areas that had access to broadband internet in the first place. In thinking about all of this I was led to coverage of the recent Activate “summit” in London. This event brings together innovators in mobile technology for a series of largely philanthropic talks. This year also saw the inaugural (H)activate, a rapid prototyping exercise in which intrepid participants are invited to develop in two days a mobile technology application that can change the world. The winning team built Safe Trip, an app to help people at risk of trafficking easily keep in contact with friends, family and support agencies.
But it was the talks that made me reflect on how much of our connected lives we take for granted. As South African entrepreneur Herman Heunis pointed out, “the reality is that the mobile phone will be for many people in Africa the only connection to the internet for many, many years” – the phone is “the remote control of their universe”. Heunis talked about how MXit – a messaging / social-media platform running on millions of phones – had to accommodate many of the old mobile technologies. “What you must remember is that 90+% of all phones in Africa are not smartphones and it will remain like that because phones are not just dumped into a dustbin – they are handed down from father to mother to child …”
Anna Kydd, director of the SHM Foundation, has similarly found that older mobile technologies can still be life-changing in the emerging world. She described a pilot study carried out with a group of people in Mexico living with HIV/AIDS: “what we found was that there were very high levels of social isolation and stigma so, although HIV/AIDS medication is free for everyone in Mexico, the quality of life is very low because they have very little chance to ever exchange information with other people living with the condition”. Because of Mexico’s centralised health service some of the participants from rural areas were travelling 8 hours to a clinic and had little or no contact with others in a similar situation.
Kydd found that simply exchanging text messages was enough to improve the situation of most of her participants. During the study, 40 participants sent an amazing 250,000 text messages in 3 months. Levels of anxiety and depression were seen to decrease significantly and by sharing information the patients’ knowledge of their medical treatment improved. Many of those who took part in the study would not have attended a support group had it been face to face but the anonymity of the mobile network encouraged intimacy. Kydd quoted the promising words of one of her participants: “I felt very sad and depressed and did not want to take my medication but listening to my friends in a group I started to feel happier and excited about life.” Kydd is now set to run a pilot in the UK with up to 1000 people taking part.
We’re not as willing to share as some had once hoped, but our need to connect means we’ll find a way whatever the technology.
Posted on 03.08.11
Science From the Archives #1: part of a series written for the I, Science blog that revisits science articles from 10 years ago.
In August 1999 Physical Review Letters, one of the most prestigious journals in physics, published a report from a US team at Lawrence Berkeley National Laboratory announcing the successful synthesis of the heaviest atom ever observed – element 118. This team of nuclear scientists were claiming to have created a new kind of matter, to have observed a substance never before seen. Then barely two years later, 10 years ago this week, New Scientist reported that the team had retracted their results.
The trouble started when two other labs – the Centre for Heavy Ion Research in Germany and the Riken Institute in Japan – were unable to replicate the findings. By convention, a new discovery in the field won’t be accepted (and the element in question not officially named) until the original result can be independently corroborated. But synthesising an unstable element is no simple endeavour, requiring rare expertise, vastly expensive machinery, and luck. Perhaps these other labs were just being unlucky.
It was when the original Lawrence Berkeley team failed repeatedly to reproduce its own findings that serious doubts set in – followed by a thorough internal enquiry. Eventually, the investigating committee were to find that data from the original experiments had been altered – allegedly by Dr Victor Ninov, a highly respected scientist – to make it look as if element 118 really had made a fleeting appearance.
Synthesising an element such as element 118 involves throwing together atoms of other elements in a particle accelerator in the hope that they’ll briefly stick, forming the new unstable element for a tiny fraction of a second. The atoms thrown together need to have the right numbers of protons in their nuclei – for instance, lead (with 82 protons) and krypton (with 36 protons) might be fused to make element 118. Since the synthesised element would only stick around for an instant, evidence for its existence is sought by looking for signs of its decay chain – a series of elements each decaying into the next before something relatively stable is reached. What the researchers claimed they had found was a chain from element 118 through elements 116, 114, 112, all the way to seaborgium, element 106, whose most stable isotope has a whopping half-life of nearly 2 minutes. The problem was that Dr Ninov was the only member of the team at the time with the specific expertise to interpret the raw data. All empirical evidence from the actual experiments went through this one person.
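The proton bookkeeping here is simple enough to sketch in a few lines of Python (purely an arithmetical illustration of the atomic numbers involved, not a model of the physics):

```python
# Proton bookkeeping for the claimed synthesis and decay chain.
# Fusing lead (82 protons) with krypton (36 protons) would give a
# nucleus with 118 protons; each alpha decay then removes 2 protons.

def fuse(z1, z2):
    """Atomic number of the fusion product (protons simply add)."""
    return z1 + z2

def alpha_decay_chain(z, stop_at):
    """Atomic numbers visited as the nucleus sheds alpha particles."""
    chain = [z]
    while z > stop_at:
        z -= 2          # an alpha particle carries away 2 protons
        chain.append(z)
    return chain

element_118 = fuse(82, 36)                  # lead + krypton
print(element_118)                          # 118
print(alpha_decay_chain(element_118, 106))
# [118, 116, 114, 112, 110, 108, 106] – ending at seaborgium
```

Each step down the chain – 118, 116, 114, 112 and so on to 106 – is exactly the signature the researchers claimed to have read out of the raw data.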
In its report the investigating committee registered surprise at this arrangement: “The committee finds it incredible that not a single collaborator checked the validity of Ninov’s conclusions of having found three element 118 decay chains by tracing these events back to the raw data tapes.”
But this is often how things work in small teams at the cutting edge of science. Very few people are experts in everything. Collaborative science depends enormously on honesty and mutual trust. As the New York Times noted in an article on the scandal, “Everyone was working from the numbers Dr. Ninov had gleaned from his own analysis. No one felt a need to go back and examine the original raw data.” (Perhaps equally surprising was the revelation that the team were also depending on software that they knew to be buggy: “The initial suspect was the analysis software, nicknamed Goosy, a somewhat temperamental computer program known on occasion to randomly corrupt data”, the New York Times reported.)
Element 118 was legitimately synthesised several years later in 2006 by collaborating scientists from Dubna in Russia and Lawrence Livermore National Laboratory in the US. However, because of that need for independent corroboration the discovery has only recently been approved by the International Union of Pure and Applied Chemistry. It is now official that at least three atoms of element 118 once existed for the briefest of instants. Part of the reason for the delay is likely the need to work with californium, used in the 2006 synthesis. “Not many labs in the world either want to work with it or have the capabilities to work with it”, observed Mark Stoyer, one of the US researchers.
This is now the way of things at the cutting edges of science, and of physics in particular: very few individuals, in increasingly few laboratories around the world, have the capabilities to carry out ground-breaking research. What does this mean for the collaboration and the corroboration so essential to science?
A couple of weeks ago the internet was all of a twitter with reports of a breakthrough at the Large Hadron Collider. Had the elusive Higgs been spotted? It is quite possible that we really are close to pinning down, like an exotic butterfly, this decade’s most desirable specimen. But when Cern scientists report a blip in the detectors we must bear in mind – as they themselves obviously do – all of the things other than the Higgs that might explain it, such as a flawed model, human error, or some form of interference. The Guardian’s Ian Sample revealingly discusses on his blog the difficulty of writing about this kind of event in the mainstream media: reporting a “potential glimpse” with all the necessary caveats just isn’t exciting news. Though he chose his words carefully, his piece in the Guardian still had to appear under the headline “Cern scientists suspect glimpse of Higgs boson God particle”, which is just what Sample had been trying not to say.