The book business is merging into the magazine business as more publishers sell literature via subscription to highly targeted clusters of readers. High-profile literary studio Plympton is leading the charge with its $5-a-month iOS service Rooster.
One way to address our farming challenges is through various high-tech practices collectively known as precision agriculture. Precision farmers use technology such as self-steering tractors and aerial drones to find ways of more efficiently using water, fertilizer, and other resources. Farmbot is an open source precision agriculture machine, meaning anyone can take the designs and build their own.
Why an app that reminds you to text your partner might not be the best idea. If you’re looking to add a digital spark to your relationship this Valentine’s Day, you can download the new app Romantimatic. Romantimatic will send you scheduled reminders to contact your significant other and give you pre-set messages to fire off. The pre-set messages include simple, straightforward classics like “I love you” and “I miss you.” Or maybe that doesn’t sound appealing. It sure doesn’t to me. In that case, I recommend you follow my lead: take a solemn oath before the Greek god Eros and vow to never, ever go this far down the outsourced-sentiment rabbit hole.
Move over, fingerprints, iris scans and facial recognition, because a new form of biometric identification may soon be joining you – body odor. According to scientists at Spain's Universidad Politécnica de Madrid, people's unique scent signatures remain steady enough over time to allow for an ID accuracy rate of approximately 85 percent.
First of all, why is another form of biometric ID even needed? Well, as the researchers point out, people can sometimes be reluctant to place their finger or eye up to a scanner, particularly if they've got a criminal record. Odor-reading sensors, however, could conceivably just give those people an unobtrusive sniff as they passed by.
The university is now developing a system in collaboration with tech firm Ilía Sistemas. Although it's still not as accurate as police bloodhounds when it comes to identifying people by their smell, and despite potentially odor-altering factors such as changes in test subjects' diets, its margin of error is currently down to about 15 percent.
Students have created a "wearable" book that enables you to feel the characters' feelings as you read the story
Researchers at Massachusetts Institute of Technology have created a "wearable" book which allows the reader to experience the protagonist’s emotions.
Using a combination of sensors, the book senses which page the reader is on and triggers vibration patterns through a special vest.
"Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations," the researchers said.
The vest contains a personal heating device to change skin temperature and a compression system to convey tightness or loosening through airbags.
The vest also changes vibrations to match the mood of the book.
Google is on a shopping spree, buying startup after startup to push its business into the future. But these companies don’t run web services or sell ads or build smartphone software or dabble in other things that Google is best known for. The web’s most powerful company is filling its shopping cart with artificial intelligence algorithms, robots, and smart gadgets for the home. It’s on a mission to build an enormous digital brain that operates as much like the human mind as possible — and, in many ways, even better.
Yesterday, Google confirmed that it has purchased a stealthy artificial intelligence startup called DeepMind. According to reports, the company paid somewhere in the mid-hundreds of millions of dollars for the British outfit. Though Google didn’t discuss the price tag, that enormous figure is in line with the rest of its recent activity.
The DeepMind acquisition closely follows Google’s $3.2 billion purchase of smart thermostat and smoke alarm maker Nest, a slew of cutting-edge robotics companies, and another AI startup known as DNNresearch.
Google is looking to spread smart computer hardware into so many parts of our everyday lives — from our homes and our cars to our bodies — but perhaps more importantly, it’s developing a new type of artificial intelligence that can help operate these devices, as well as its many existing web and smartphone services.
Though Google is out in front of this AI arms race, others are moving in the same direction. Facebook, IBM, and Microsoft are doubling down on artificial intelligence too, and are snapping up fresh AI talent. According to The Information, Mark Zuckerberg and company were also trying to acquire DeepMind.
Microsoft researchers have built an elevator that guesses whether an approaching person wants to get on.
Microsoft researchers have enabled elevators in a company building to detect the likelihood that a person walking by will want to board it. The camera in a Microsoft Kinect — positioned in the ceiling — tracked for months the behaviors of people who got on the elevators vs. those who bypassed the elevators on their way to a nearby cafeteria. That data fed an artificial intelligence system, which taught itself to identify the behaviors indicating who wanted to board an elevator and who didn’t. Soon the elevator doors would automatically open as a person intending to board approached.
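The pipeline described above – track walkers, label who actually boarded, and learn a decision rule from that data – can be sketched with a toy classifier. Everything below is invented for illustration (the feature names, numbers, and logistic-regression model are assumptions; the real system learned from Kinect depth tracking):

```python
import math
import random

random.seed(0)

# Hypothetical features a ceiling camera might extract from a walker's track:
# approach speed (m/s), heading alignment with the doors (cosine), and
# closest distance to the doors (m). Label: 1 = boarded, 0 = walked past.
def synthetic_track(boarded):
    if boarded:   # slower, aimed at the doors, comes close
        return [random.gauss(0.8, 0.2), random.gauss(0.9, 0.1),
                random.gauss(0.5, 0.2)], 1
    return [random.gauss(1.4, 0.2), random.gauss(0.2, 0.2),
            random.gauss(2.0, 0.4)], 0

data = [synthetic_track(i % 2 == 0) for i in range(400)]

# Plain logistic regression trained with stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
        g = p - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def wants_to_board(track):
    """Open the doors if the learned score says this walker will board."""
    return sum(wi * xi for wi, xi in zip(w, track)) + b > 0

print(wants_to_board([0.7, 0.95, 0.4]))  # slow, door-facing, nearby
print(wants_to_board([1.5, 0.1, 2.2]))   # brisk walk-by
```

The point of the sketch is the workflow, not the model: months of labeled observations stand in for the synthetic data, and any classifier that separates "boarding" from "passing" trajectories would do.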
The project was shepherded by Eric Horvitz, the co-director of Microsoft’s research lab in Redmond, Wash., which houses some of the company’s 1,100 scientists and engineers. Horvitz’s team has begun a second phase of the project experimenting with human-like interactions between elevators and the people riding them.
“Something as stodgy and old-fashioned as an elevator could have really cute gestures and curiosities and say ‘Are you coming?’ with a door motion,” Horvitz explained.
WHAT WOULD THE WORLD BE LIKE IF ONE COULD SEE THROUGH THE EYES OF ANOTHER? WOULD IT HELP US TO UNDERSTAND EACH OTHER? WOULD IT HELP US TO UNDERSTAND OURSELVES?
THE MACHINE TO BE ANOTHER is an open-source art investigation into the relationship between identity and empathy, developed through low-budget experiments in embodiment and virtual body extension.
Designed as an interactive performance installation, the ‘Machine’ offers users the possibility of interacting with a piece of another person’s life story by seeing themselves in the body of this person and listening to his/her thoughts inside their mind.
The performer is someone interested in sharing a story about his or her life. This role can be taken by an actor interpreting a real situation, or by any person (e.g. a member of the public) who wants to share an episode from his or her life. In either case, the performer's story is experienced by another person: the user.
BYU engineer Dah-Jye Lee has created an algorithm that can accurately identify objects in images or video sequences — without human calibration.
“In most cases, people are in charge of deciding what features to focus on and they then write the algorithm based off that,” said Lee, a professor of electrical and computer engineering. “With our algorithm, we give it a set of images and let the computer decide which features are important.”
Humans need not apply
Not only is Lee’s genetic algorithm able to set its own parameters, but it also doesn’t need to be reset each time a new object is to be recognized — it learns them on its own.
Lee likens the idea to teaching a child the difference between dogs and cats. Instead of trying to explain the difference, we show children images of the animals and they learn on their own to distinguish the two. Lee’s object recognition does the same thing: Instead of telling the computer what to look at to distinguish between two objects, the researchers simply feed it a set of images and it learns on its own.
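A genetic algorithm that "decides which features are important" can be sketched in miniature: a chromosome is a mask over candidate features, and its fitness is how well a simple classifier does using only the masked-in features. The data, masks, and classifier below are all invented for illustration; they are not Lee's actual method:

```python
import random

random.seed(1)

# Toy data: two "objects" described by 6 measurements; only features 0 and 3
# actually separate the classes, the rest are noise.
def sample(cls):
    x = [random.gauss(0, 1) for _ in range(6)]
    if cls == 1:
        x[0] += 3.0
        x[3] -= 3.0
    return x, cls

train = [sample(i % 2) for i in range(200)]
holdout = [sample(i % 2) for i in range(100)]

def accuracy(mask, data):
    # Nearest-centroid classifier restricted to the features the mask selects.
    feats = [i for i, on in enumerate(mask) if on]
    if not feats:
        return 0.0
    cents = {}
    for c in (0, 1):
        pts = [x for x, y in train if y == c]
        cents[c] = [sum(p[i] for p in pts) / len(pts) for i in feats]
    correct = 0
    for x, y in data:
        d = {c: sum((x[i] - m) ** 2 for i, m in zip(feats, cents[c]))
             for c in (0, 1)}
        correct += (min(d, key=d.get) == y)
    return correct / len(data)

# Genetic algorithm: keep the fittest masks, breed and mutate the rest.
pop = [[random.randint(0, 1) for _ in range(6)] for _ in range(20)]
for _ in range(30):
    pop.sort(key=lambda m: accuracy(m, train), reverse=True)
    survivors = pop[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(survivors, 2)
        cut = random.randint(1, 5)
        child = a[:cut] + b[cut:]            # crossover
        if random.random() < 0.2:
            child[random.randrange(6)] ^= 1  # mutation
        children.append(child)
    pop = survivors + children

best = max(pop, key=lambda m: accuracy(m, train))
print(best, round(accuracy(best, holdout), 2))
```

The evolved mask tends to retain the informative features and drop the noisy ones, which is the "humans need not apply" idea in one toy loop: no one told the program which measurements matter.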
For as long as we’ve been imagining emotionally intelligent machines, we have pictured something at least mildly resembling the human form. From George Lucas’ C-3PO to the recently-developed Robokind Zeno R25, our vision for robotic companionship has typically involved two arms and two legs. Taking a different approach is inventor of the EmoSpark console Patrick Levy Rosenthal, who aims to bring artificial intelligence to consumers in the form of a cube small enough to fit in the palm of your hand.
The EmoSpark console is a 90 x 90 x 90 mm (3.5 x 3.5 x 3.5 in) Wi-Fi and Bluetooth enabled cube that interacts with a user’s emotions using a combination of content analysis and face-tracking software. In addition to distinguishing between each member of the household, the device uses custom developed technology that Rosenthal says enables it to differentiate between basic human feelings and create emotion profiles of not just everybody it interacts with, but also itself.
“While the technology behind face-tracking is well established, what we've done differently is use it to track and process different emotions," Rosenthal tells Gizmag. "The EmoSpark Cube contains a unique chip invented by myself called the Emotional Processing Unit. This allows the cube to build up its own Emotional Profile Graph (EPG) as it interacts with its users. The cube saves all this information and, just like a fingerprint, will over time keep an emotional print of each family member with which it interacts.”
An artificial-intelligence system has learned to spot the telltale language people use when lying in court or in fawning online book reviews
LAWYERS and judges use skill and instinct to sense who might be lying in court. Soon they may be able to rely on a computer, too.
An AI system trained on false statements is highly accurate at spotting deceptive language in written or spoken testimony. It can also be used to weed out fake online reviews of books, hotels and restaurants.
The system is the work of computational linguists Massimo Poesio at the University of Essex in Colchester, UK, and Tommaso Fornaciari at the Center for Mind/Brain sciences in Trento, Italy. It is based on a technique called stylometry, which counts how often certain words appear in a passage.
The method is often applied to determine who wrote a piece of text, but software can employ it to pick out deception instead. The strategy is to seek out the overuse of linguistic hedges such as "to the best of my knowledge", or overzealous expressions such as "I swear to god".
"But all previous studies had used deceptive texts created in the lab," Poesio says. "What has been missing was a system that could work on real-world lies."
So he and Fornaciari trained a machine learning system by feeding it Italian courtroom depositions and statements by defendants known to have committed perjury. The researchers say it is now nearly 75 per cent accurate at indicating whether a defendant or witness is being deceptive. "We can achieve an accuracy that is way above chance," says Poesio.
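The core stylometric move – counting how often telltale phrases appear in a passage – is easy to demonstrate. The marker lists and threshold below are illustrative toys; Poesio and Fornaciari's system learns its cues and weights from real courtroom data rather than using a hand-picked list:

```python
import re

# Hypothetical marker phrases: linguistic hedges and overzealous expressions
# of the kind the article mentions. The real system learns these from data.
HEDGES = ["to the best of my knowledge", "as far as i know", "i believe"]
OATHS = ["i swear to god", "honestly", "believe me"]

def marker_rate(text):
    """Occurrences of hedge/oath phrases per 100 words: a toy stylometric feature."""
    t = text.lower()
    words = len(re.findall(r"[a-z']+", t)) or 1
    hits = sum(t.count(p) for p in HEDGES + OATHS)
    return 100.0 * hits / words

def looks_deceptive(text, threshold=3.0):
    # Threshold picked for the demo, not learned from depositions.
    return marker_rate(text) > threshold

truthful = "I left the office at six and drove straight home."
shifty = ("I swear to god I left at six, honestly, and to the best of my "
          "knowledge I drove straight home, believe me.")
print(looks_deceptive(truthful), looks_deceptive(shifty))
```

In the real system these per-phrase counts would feed a trained classifier rather than a fixed threshold, which is how it reaches the reported ~75 percent accuracy on real-world lies.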
Cognitive computers — machines capable of learning, rather than simply following programming — may one day be able to mimic human brains. But first, they're being used to invent chocolate burritos and Swiss-Thai asparagus quiche. IBM, a leader in the field of cognitive computing which has been working with DARPA since 2008 on a project to create a computer that thinks as people do, has partnered with the Institute of Culinary Education to take the IBM Food Truck on a tour around the United States. On its travels, top chefs will be serving meals dreamed up by a computer.
It comes down to dopamine, one of the brain’s basic signaling molecules. Emotionally, we feel dopamine as pleasure, engagement, excitement, creativity, and a desire to investigate and make meaning out of the world. It’s released whenever we take risks, or encounter novelty. From an evolutionary standpoint, it reinforces exploratory behavior.
More importantly, dopamine is a motivator. It’s released when we have the expectation of reward. And once this neurotransmitter becomes hardwired into a psychological reward loop, the desire to get more of that reward becomes the brain’s overarching preoccupation. Cocaine, widely considered the most addictive drug on the planet, does little more than flood the brain with dopamine and block its reuptake (much as SSRIs block the reuptake of serotonin).
Video games are full of novelty, risk-taking, reward-anticipation, and exploratory behavior. They’re dopamine-production machines dressed up with joysticks and better graphics. And this is why video games are so addictive.
Dutch scientists have developed the world's smallest autonomous flapping drone, a dragonfly-like beast with 3-D vision that could revolutionise our experience of everything from pop concerts to farming.
"This is the DelFly Explorer, the world's smallest drone with flapping wings that's able to fly around by itself and avoid obstacles," its proud developer Guido de Croon of the Delft Technical University told AFP.
Weighing just 20 grammes (less than an ounce), around the same as four sheets of printer paper, the robot dragonfly could be used in situations where much heavier quadcopters with spinning blades would be hazardous, such as flying over the audience to film a concert or sporting event.
The Explorer looks like a large dragonfly or grasshopper as it flitters about the room, using two tiny low-resolution video cameras—reproducing the 3-D vision of human eyes—and an on-board computer to take in its surroundings and avoid crashing into things.
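Two cameras give depth the same way two eyes do: the nearer an obstacle, the larger the disparity between its positions in the two images. A minimal sketch of that geometry follows; the focal length, baseline, and safety margin are made-up numbers, not the DelFly's actual parameters:

```python
# Pinhole stereo model: depth = focal_length * baseline / disparity.
FOCAL_PX = 120.0    # focal length in pixels (assumed for a tiny camera)
BASELINE_M = 0.06   # separation between the two cameras in metres (assumed)

def depth_from_disparity(disparity_px):
    """Distance to a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # no disparity: effectively at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

def should_turn(disparity_px, safety_m=0.5):
    """Steer away from anything closer than the safety margin."""
    return depth_from_disparity(disparity_px) < safety_m

print(depth_from_disparity(9.0))  # a point 9 px apart in the two images
print(should_turn(20.0))          # large disparity means a close obstacle
```

On the real drone, the hard part is the matching step that produces those disparities from two low-resolution images fast enough for a 20-gram onboard computer; the avoidance rule itself stays this simple.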
And like an insect, the drone, which has a wingspan of 28 centimetres (11 inches), would feel at home flying around plants.
"It can for instance also be used to fly around and detect ripe fruit in greenhouses," De Croon said, with an eye on the Netherlands' vast indoor fruit-growing business.
"Or imagine, for the first time there could be an autonomous flying fairy in a theme park," he said.
With a chip under your skin, you can do everything from unlocking doors to starting motorbikes, says Frank Swain, who has been trying to get his own implant.
A few years ago, I perched on the edge of my bed in a tiny flat, breathing in a cloud of acetone fumes, using a scalpel to pick at the corner of an electronic travel card. More than 10 million Londoners use these Oyster cards to ride the city’s public transport network. I had decided to dissect mine. After letting the card sit in pink nail polish remover for a week, the plastic had softened enough that I could peel apart the layers. Buried inside was a tiny microchip attached to a fine copper wire: the radio frequency identification (RFID) chip.
My goal was to bury the chip under my skin, so that the machine barriers at the entrance to the Underground would fly open with a wave of my hand, as if I was some kind of technological wizard. But although I had the chip and an ex-Royal Marines medic willing to do the surgery, I failed to get my hands on the high-grade silicone I’d need to coat the chip to prevent my body reacting against it. Since then, people have used the technique I helped popularise to put liberated Oyster chips in bracelets, rings, magic wands, even fruit, but the prize for first London transport cyborg is still up for grabs.
The person who does will find themselves inducted into the community of “grinders” – hobbyists who modify their own body with technological improvements. Just as you might find petrol heads poring over an engine, or hackers tinkering away at software code, grinders dream up ways to tweak their own bodies. One of the most popular upgrades is to implant a microchip under the skin, usually in the soft webbing between the thumb and forefinger.
On a romantic comedy that posits (virtual) romance as a commodity.
Even though we spend more time staring into screens than into a lover's eyes, it's hard to believe that anyone would ever choose to be in a relationship with a machine. The makers of such technologies will need to present such relationships as normal, familiar, and it seems plausible that they will turn to vintage objects and skeuomorphic iconography such as we see in the technology of Her to do so. Still, I predict that the backlash to this evolved form of human-computer interaction will be even more fervent than the current rhetoric of digital dualism, which preaches that the URL and the IRL spheres are and should be separate. We'll be urged to disconnect, go outside, and be with real people in real life. It will be difficult to accept that this is real life, and real love too, and that the other is not. Her unapologetically explores the uncomfortable truth of our coming reality.
Umoove has developed innovative face tracking technology that allows users to navigate a game on their iOS device just by facing in the direction they want to go.

Giving new meaning to "tilt to steer," Israeli tech startup Umoove has developed face- and eye-tracking software for mobile devices that translates gentle head tilts and nods into in-game movements. The company has released the Umoove Experience, a free app for iOS that demonstrates the technology, but hopes third-party developers will integrate it into their own titles on both iOS and Android devices.
Since its inception, Oculus' virtual reality headset has been hailed as a tool for immersive gaming. But as its presence at Sundance shows, the cinematic applications are undeniable.
PARK CITY, Utah – I never thought I’d ever say this, but I’m onstage with Beck. He’s wearing his usual hat-and-blazer combo, and covering one of my favorite David Bowie songs. Out past the crowd is a full choir — a few faces I recognize because they played with Beck during last year’s Station to Station rolling art extravaganza — and a massive musical ensemble. People are cheering and taking photos. It’s incredible. Then I look down. Instead of seeing knees or feet, I see Beck’s Chelsea boots.
That’s when my brain reminds me I’m not actually on stage.
Instead, I’m sitting in a chair at the Sundance Film Festival’s New Frontier installation, wearing an Oculus Rift virtual reality headset. I’m watching a retooled version of the 360-degree interactive video of Beck’s live performance of “Sound and Vision” that he and Chris Milk made for Lincoln last year. My I’m-a-rockstar dream is shattered, but it’s possible that this might actually be cooler than performing with a folk hero — I get to have all the fun of performing without worrying about singing off-key or being incapacitated by stage fright.
“The first time I tried Chris Milk’s Beck experience in VR, it fundamentally changed the way I thought about, frankly, audio in VR,” says Nate Mitchell, Oculus’ vice president of product, “and the impact a live concert could have on me in virtual reality.”
It’s not just concerts. All kinds of filmed entertainment, from documentary films to CGI masterpieces, are going to change. When the first Oculus prototype popped up in summer of 2012, everyone raved about how it would revolutionize the way we play videogames. But it’s got all the components to change the way we watch films, or create an entirely new kind of visual experience. That’s why Mitchell and the Oculus team hit Sundance: They want to know what filmmakers can do with their system.
“Games are our passion,” Mitchell says, “but when you take it and show it to people here, they’re like, ‘I have something. I have an idea and I want to take people someplace new.’”
Measuring a 27-dimensional quantum state is a time-consuming, multistage process using a technique called quantum tomography, which is similar to creating a 3D image from many 2D ones.
Researchers have instead been able to apply direct measurement to do this in a single experiment with no post-processing. In a new paper they demonstrate direct measurement of the quantum state associated with the orbital angular momentum of light.
The direct measurement technique offers a way to directly determine the state of a quantum system. It was first developed in 2011 by scientists at the National Research Council Canada, who used it to determine the position and momentum of photons. Last year, a group of Rochester/Ottawa researchers led by Boyd showed that direct measurement could be applied to measure the polarization states of light. The new paper is the first time this method has been applied to a discrete, high dimensional system.
Such direct measurements of the wavefunction might appear to be ruled out by the uncertainty principle – the idea that certain properties of a quantum system can be known precisely only if others are known poorly. However, direct measurement involves a "trick" that makes it possible.
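The "trick" is weak measurement. In the original position–momentum version of the technique (Lundeen et al., 2011, sketched here from the published method rather than from this article), one measures the projector onto position $x$ only weakly, so the state is barely disturbed, and then post-selects on momentum $p = 0$. The resulting weak value is directly proportional to the wavefunction:

```latex
% Weak value of the position projector \pi_x = |x\rangle\langle x|,
% post-selected on zero momentum:
\langle \pi_x \rangle_w
  = \frac{\langle p{=}0 \,|\, x \rangle \langle x \,|\, \psi \rangle}
         {\langle p{=}0 \,|\, \psi \rangle}
  = k\,\psi(x),
\qquad
k = \frac{1}{\sqrt{2\pi\hbar}\;\varphi(0)} \ \text{(an $x$-independent constant)}
```

The real and imaginary parts of this weak value, read off the measurement pointer, give $\mathrm{Re}\,\psi(x)$ and $\mathrm{Im}\,\psi(x)$ directly. The new work replaces the continuous position basis with the 27 discrete orbital-angular-momentum states, weakly measuring the projector onto each basis state in turn.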
Precision sensors made from carbon nanotubes mimic animal whiskers
Robot lovers, rejoice: The world is one step closer to “robocat.” Many mammals use special hairs on their faces to feel for unseen objects. Researchers realized artificial whiskers could help robots sense the world around them, but until now, attempts at whisker-like sensors have been bulky and inefficient. Using cutting-edge materials, a team of researchers has now developed electronic whiskers with a sensitivity and size mimicking their natural counterparts.

The team coated flexible strands of silicone rubber with a mix of carbon nanotubes (long chains of carbon atoms) and silver nanoparticles (tiny clusters of silver atoms). The carbon nanotubes added flexibility and durability, while the silver nanoparticles provided a way to measure small changes in strain on the whiskers. As each whisker flexes, its electrical resistance changes. By running a current through the whisker, the researchers measured the change in resistance and, therefore, the amount of flex.

This design proved 10 times more sensitive than previous efforts, with each whisker capable of detecting the pressure equivalent of a dollar bill resting on a table, the researchers report online this week in the Proceedings of the National Academy of Sciences. The team says its techniques could one day help engineers create better wearable electronics, such as flexible heart monitors, and better sensors for robots. Until then it might be worth brainstorming names for your robotic kitty.
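The resistance-to-flex readout can be sketched with a standard voltage-divider circuit: the whisker sits in series with a fixed resistor, and bending shifts the output voltage. All component values and the gauge factor below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Toy readout for a piezoresistive e-whisker in a voltage divider.
V_SUPPLY = 3.3       # volts across the divider (assumed)
R_REF = 10_000.0     # ohms, fixed reference resistor (assumed)
R_REST = 10_000.0    # ohms, whisker resistance when unbent (assumed)
GAUGE_FACTOR = 50.0  # (dR/R) per unit strain; sensitivity is assumed

def whisker_resistance(v_out):
    """Invert the divider relation v_out = V * R_w / (R_w + R_REF)."""
    return R_REF * v_out / (V_SUPPLY - v_out)

def strain(v_out):
    """Fractional resistance change divided by the gauge factor."""
    r = whisker_resistance(v_out)
    return (r - R_REST) / R_REST / GAUGE_FACTOR

# Unbent whisker: the divider sits at half the supply, so strain reads zero;
# any bend pushes v_out away from the midpoint.
print(round(strain(V_SUPPLY / 2), 6))
```

The paper's tenfold sensitivity gain corresponds, in this sketch, to a much larger gauge factor: the same tiny bend produces a much bigger, easier-to-measure resistance change.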
A handful of companies are developing algorithms that can read the human emotions behind nuanced and fleeting facial expressions to maximize advertising and market research campaigns.
Back when Google was first getting started, there were plenty of skeptics who didn’t think a list of links could ever turn a profit. That was before advertising came along and gave Google a way to pay its bills — and then some, as it turned out. Thanks in part to that fortuitous accident, in today’s Internet market, advertising isn’t just an also-ran with new technologies: Marketers are bending innovation to their needs as startups chase prospective revenue streams.
Major corporations including Procter & Gamble, PepsiCo, Unilever, Nokia and eBay have already used these emotion-detection services.
Companies building the emotion-detecting algorithms include California-based Emotient, which released its product Facet this summer; Massachusetts-based Affectiva, which will debut Affdex in early 2014; and U.K.-based Realeyes, which has been moving into emotion detection since launching in 2006 as a provider of eye-movement user-experience research.
Parents will soon be able to track, log and judge the every move of their offspring. God help those children. By Rory Carroll
For those who think the NSA is the worst invader of privacy, I invite you to share an afternoon with Aiden and Foster, two 11-year-old boys, as they wrap up a Friday at school. Aiden invites his friend home to hang out and they text their parents, who agree to the plan.
As they ride the bus, Foster's phone and a sensor on a wristband alert the school and his parents to a deviation from his normal route. The school has been notified that he is heading to Aiden's house, so the police are not called.
As they enter the house, the integrated home network recognises Aiden and pings an advisory to his parents, both out at work, who receive the messages on phones and tablets.
The system also sends Foster's data – physical description, address, relatives, health indicators, social media profile – to Aiden's parents, who note he has a laptop. Might the boys visit unsuitable sites? No, because Foster's parental rating access, according to his profile, is limited to PG13, as is Aiden's.
Foster spots a cookie jar and reaches in. Beep beep! His wristband vibrates to warn him the cookies contain gluten, and he is allergic.