All bodies are getting assistance from technology all the time, yet some are stigmatized. Abler is one woman's quest to rectify this.
Without technology, the human body is a pretty limited instrument. We cannot write without a pen or pencil, nor eat hot soup without a bowl and, perhaps, a spoon.
And yet, only certain technologies are labeled "assistive technologies": hearing aids, prostheses, wheelchairs. But surely our pens and pencils, bowls and spoons assist us as well. The human body is not very able all on its own.
My curiosity about how we think about these camps of "normal" and "assistive" technologies brought me to Sara Hendren, a leading thinker and writer on adaptive technologies and prosthetics. Her wonderful site, Abler, was recently syndicated by Gizmodo. I talked to her about why crutches don't look cool, where the idea of "normal" comes from, and whether the 21st century might bring greater understanding of human diversity.
Neil Harbisson is the first person on the planet to have a passport photo that shows his cyborg nature — in his UK passport, he's wearing a head-mounted device called an eyeborg. The color-blind artist says the eyeborg allows him to see color, and he wants to help other cyborgs like himself gain more rights.
Anyone who has ever gotten a passport photo knows Harbisson has accomplished something that once seemed bureaucratically impossible. Other people with cyborg headgear, like Steve Mann, have had their gear forcibly removed and been refused entrance into buildings for wearing devices on their heads. But with a passport photo that shows the eyeborg as part of Harbisson's face, somebody trying to rip his augmentation off would be committing a violent crime equivalent to injuring his face.
Dezeen has a fascinating interview with Harbisson, where he talks about how his body adapted to the device he now thinks of as an integral part of himself.
A computer program called the Never Ending Image Learner (NEIL) is now running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them. And as it builds a growing visual database, it is gathering common sense on a massive scale.
NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.
But NEIL also makes associations between these things to obtain common sense information: cars often are found on roads, buildings tend to be vertical, and ducks look sort of like geese.
“Images are the best way to learn visual properties,” said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. “Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well.”
Since late July, the NEIL program has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.
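The association-mining step described above can be sketched as simple co-occurrence counting over labeled images. This is an illustrative toy, not NEIL's actual pipeline: the label sets and the threshold below are invented, and the real system learns its detectors and relationships jointly from millions of images.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets, standing in for the output of
# NEIL's object detectors on five images.
images = [
    {"car", "road", "building"},
    {"car", "road"},
    {"duck", "water"},
    {"goose", "water"},
    {"building", "road"},
]

# Count how often each pair of labels appears in the same image.
pair_counts = Counter()
for labels in images:
    for a, b in combinations(sorted(labels), 2):
        pair_counts[(a, b)] += 1

# Pairs seen together repeatedly become candidate "common sense" links,
# e.g. "cars often are found on roads."
associations = {pair for pair, n in pair_counts.items() if n >= 2}
print(sorted(associations))  # [('building', 'road'), ('car', 'road')]
```

At NEIL's scale the same idea, applied across millions of images and thousands of labels, yields associations like "buildings tend to be vertical" without a human ever stating them.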
Every month, Idle Screenings will beam a strange new video artwork to your unused screen.
The first-person dash through the maze of thick, red brick walls. The tangle of pipes in space, forever growing and folding. Those flying toasters. The screensavers of the 1990s are strangely memorable, although I guess it makes sense when you consider that the 1990s were the peak of the CRT era, a time when screens actually needed saving. Today’s LCDs, impervious to burn-in, don’t really need the things, and when you do actually see them, they’re often just anodyne defaults: abstract art, maybe, or rainforest scenes. Idle Screenings, a free app for Mac and Windows, brings back some of that good old After Dark weirdness. Every month, it will beam a strange new video artwork to your unused screen.
“We were interested in using a screensaver for this purpose because it seemed that this was a kind of maligned and shelved technology,” says Mitch Trale, the app’s co-creator. “We thought it would be good to animate this disused space, and to present an alternative to the shareware relics available today.” After all, if we have to live in a world full of screens, why not live in a world full of screens showing art?
You won't find a more brazen declaration of techno-utopian libertarian fantasy than this start-up founder's speech.
When Silicon Valley executives start borrowing metaphors from “The Godfather,” maybe we should start to pay closer attention. On Oct. 19, while laying out his vision for the techno-utopian future, Balaji Srinivasan, the co-founder of a genomics company that does DNA testing, compared Silicon Valley’s impact on the established power centers and industries of the United States to that infamous scene in which the Mafia convinces an L.A. studio boss to give a coveted movie role to a friend of la famiglia.
“By accident, we put a horse head in their bed,” said Srinivasan, with a slight smile.
Think about that for a second. Srinivasan, in the course of explaining why he thinks the technological elite could and should opt out of American politics, cited the murder of a horse by ruthless mobsters as a definition of Silicon Valley disruption. It’s hard to read that message as anything other than: do what we say, or else.
Srinivasan didn’t stop there. Silicon Valley’s “hit list,” he argued, had already knocked off newspapers and the music industry. Next up: “We’re going after advertising, television, book publishing.” Higher education “is next in the gunsights.” That’s three lethal metaphors, brought to you by a man arguing that Silicon Valley should secede from the United States.
We're all cyborgs now: at least 11 companies are battling for your face.
If French startup Optinvent is right, what we all really want is a bigger screen directly in front of our eyes. Sure, Google Glass may be the best-known example of a heads-up display, but Kayvan Mirza says that Glass simply doesn’t cut it.
Over breakfast at Blue Bottle Coffee on Wednesday, Optinvent’s CEO demonstrated a mockup of the company’s new ORA-S for Ars. Unlike Glass, the ORA-S, as currently designed, is a large and very industrial plastic pair of sunglasses with the viewing prism mounted directly in the field of view. Glass’ prism, by contrast, sits just above the natural line-of-sight and has no other lenses to get in the way.
“It’s much bigger in terms of display size than Google,” Mirza told Ars, noting that the ORA-S has a 16:9 aspect ratio and a field of view of 25 degrees. “It’s got three times the surface area. It’s much brighter and has higher resolution.”
Optinvent is one of a slew of better-funded companies believed to be working on similar wearable computing devices (for instance, Microsoft and Samsung are both getting in on the game). The French company, based in the northwestern city of Rennes, has been working on wearable computing since 2007, and it has held related patents for years, long before Google announced Glass.
These days, wearable computing is big business. In 2012, the total value of the market for such tech, ranging from hearing aids to wrist-worn fitness devices, reached nearly $9 billion. According to IHS Global Insights, an industry analysis firm, that is expected to reach at least $19 billion by 2018.
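Those two figures imply a brisk pace of growth. A quick back-of-the-envelope check, treating the quoted $9 billion and $19 billion as exact and assuming six years of compounding:

```python
# Implied compound annual growth rate for the wearables market,
# from roughly $9B in 2012 to roughly $19B in 2018 (six years).
start, end, years = 9e9, 19e9, 6
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 13.3%
```

Even the "at least" floor of the forecast works out to double-digit annual growth.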
It's an uncomfortable truth, but scientists say most people have an ingrained racial bias. Now a team has shown that a short stint in a virtual world could reduce it. But could this have a longer-lasting effect?
Racism is an issue that still pervades many societies.
In England and Wales, there have been 106 fatal racist attacks since the killing of teenager Stephen Lawrence in 1993, according to the Institute of Race Relations. It also reports thousands of racist incidents recorded by the police each year.
The issue is complicated by the fact that many biases are ingrained over long periods of time.
Scientists have now found that this ingrained racial bias was reduced when participants were immersed in a virtual body of a different race.
To test their implicit racism, a team led by Mel Slater at the University of Barcelona gave participants what's called an implicit association test several days before the experiment. They were given the same test again after their experience in virtual reality.
Only the participants who had been placed in a dark-skinned virtual body showed a decrease in their implicit bias scores.
Another unrelated study had similar results. A team found that when a dark virtual rubber hand was stroked at the same time as the participant's own (out of sight) hand was touched, implicit racism subsequently decreased. This work was led by Manos Tsakiris at Royal Holloway University of London.
Both teams say it's promising that two separate experimental settings show this effect.
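The before-and-after measurement described above can be sketched as a simplified bias score: the slower someone is on the "incongruent" pairings of an implicit association test relative to the congruent ones, the stronger the implied bias. The reaction times below are invented, and this is a crude stand-in for the D-score used in real IAT research, not the Barcelona team's actual analysis.

```python
import statistics

def iat_bias_score(congruent_rts, incongruent_rts):
    """Simplified IAT effect: the mean slowdown on incongruent
    pairings, scaled by the spread of all reaction times (ms)."""
    all_rts = list(congruent_rts) + list(incongruent_rts)
    pooled_sd = statistics.stdev(all_rts)
    return (statistics.mean(incongruent_rts)
            - statistics.mean(congruent_rts)) / pooled_sd

# Invented reaction times for one participant, tested days before
# and then again after the virtual-reality session.
before = iat_bias_score([650, 700, 680], [820, 860, 840])
after = iat_bias_score([660, 690, 700], [720, 750, 730])
print(before > after)  # the bias score dropped after embodiment
```

In the studies above, it is this kind of before/after drop, averaged over many participants, that counted as a reduction in implicit bias.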
As technology expands our communicative reach, new opportunities to be rude inevitably arise. Some people overreact to this incivility by turning to uniform and mechanical etiquette rules, hoping to make things better by constraining choices and limiting situational judgment. But for societies that value diversity and autonomy, general mandates—like expecting everyone to turn off their cell phones in theaters—only work in exceptional cases.
The implications of such a decision would be profound.
Net neutrality is a dead man walking. The execution date isn’t set, but it could be days, or months (at best). And since net neutrality is the principle forbidding huge telecommunications companies from treating users, websites, or apps differently — say, by letting some work better than others over their pipes — the dead man walking isn’t some abstract or far-removed principle just for wonks: It affects the internet as we all know it.
Once upon a time, companies like AT&T, Comcast, Verizon, and others declared a war on the internet’s foundational principle: that its networks should be “neutral” and users don’t need anyone’s permission to invent, create, communicate, broadcast, or share online. The neutral and level playing field provided by permissionless innovation has empowered all of us with the freedom to express ourselves and innovate online without having to seek the permission of a remote telecom executive.
But that freedom won’t survive much longer. The DC Circuit, the second most powerful court in the nation behind the Supreme Court, appears poised to strike down the nation’s net neutrality law, a rule adopted by the Federal Communications Commission in 2010. Some will claim the new solution “splits the baby” in a way that somehow doesn’t kill net neutrality and so we should be grateful. But make no mistake: despite eight years of public and political activism by multitudes fighting for freedom on the internet, a court decision may soon take it away.
Today, there is almost no anonymity online. Many people, in fact, strive for the opposite: total publicity for their professional goals, copyrighted materials, and intellectual property. In a world with new value systems, it no longer makes sense to hide your intellectual property, and stopping a new idea from being implemented makes even less sense. Perhaps it will one day even be considered a crime. This is not, however, an argument for abolishing copyright or condoning its infringement.
Against the backdrop of the new developments and opportunities in today’s information-centric culture, copyright registration can be an obsolete means to an ineffective end. In many cases it even limits an industry’s development and, oddly enough, infringes on the rights of authors. Our current intellectual property system benefits corporations by complicating the process through which content creators protect their rights. In an era where opportunities and innovations abound, the system is almost a tragic comedy.