Japanese high school student Saya has flawless skin, shiny hair and her uniform is always clean. That's not surprising since her parents are computer graphics artists who worked together to design their own virtual child.
When Saya was first unveiled around October last year by creators Teruyuki and Yuka Ishikawa, she stunned viewers. She's strikingly realistic -- from her perfectly crafted features and ultra-detailed skin to the way the light catches each individual strand of hair. People were left scratching their heads in amazement.
This month Saya got an upgrade of sorts, bringing her into motion. A video shown at this year's CEATEC, or Combined Exhibition of Advanced Technologies, captures her smiling and nodding at the camera.
It's hard to stop saying "she" and "her" instead of "it." That's because Saya's CGI-designer parents have effectively crossed the uncanny valley. The term, coined by Japanese robotics professor Masahiro Mori, refers to our natural sense of unease around things that are almost-but-not-quite human. With Saya, that unease never surfaces. Teruyuki and Yuka Ishikawa have somehow reached the other side of the valley and created a character that is both appealing and fascinating to look at.
Last year a new Japanese celebrity burst onto the scene. But "Saya" was a different kind of star, because she is the product of a Tokyo computer lab. And like all "parents", her creators have big ambitions for her, writes the BBC's Yvette Tan.
"'I think I've seen her somewhere' or 'She looks like someone I know' are what people usually say when they see Saya," says Yuka Ishikawa, one half of the husband and wife graphic artist team behind Saya.
When the couple first posted pictures of the hyper-realistic schoolgirl online last year, it was a revelation about what can be achieved with computer design.
Her slightly askew school tie, heavily fringed hair, freckled skin and teenage pout left thousands trying to work out whether or not she was a real person.
Is the throwaway era about to end? The past half century has given us toasters that are irreparable after a minor fault, T-shirts that quickly shrink or fade, and vacuum cleaners that need replacing after a few years. “Planned obsolescence” means old smartphones may perform worse after necessary updates, and products ranging from clothing to spectacles are regularly redesigned to encourage new purchases.
However, the Swedish government’s plan to reduce the rate of VAT levied on repair work from 25% to 12% is the latest sign that Europeans are beginning to question the “take, make and throw away” culture of consumerism that lies at the heart of industrialised economies.
In France, planned obsolescence is now punishable by two years’ imprisonment and a fine of up to €300,000. Spain recently became the first country to set a target designed to increase reuse. Meanwhile Germany’s environment agency, the UBA, has commissioned research on the lifetime of electrical goods in order to develop strategies against obsolescence.

An EU end to throwaway culture
These policies need to be understood in the context of European Union initiatives aimed at advancing sustainability, notably on waste and the “circular economy”, in which materials are kept in use for as long as possible and ultimately recycled. For instance the Waste Framework Directive, approved in 2008, requires each member state to produce a waste prevention programme; the first programmes were due by the end of 2013. To its credit, the UK’s then-coalition government was the first to do so. The minister overseeing waste policy, Dan Rogerson, even proclaimed that “products should be designed … with longer lifetimes, repair and reuse in mind”.
A 2015 EU Action Plan added to the momentum, committing the European Commission to investigate the extent of planned obsolescence and take action where necessary.
And the EcoDesign Directive, which has primarily been used to address energy efficiency, is also to be applied to product lifetimes. The directive already requires vacuum cleaners sold in the EU from September 2017 to have motors designed to last at least 500 hours. Other products may soon be subject to similar requirements.
Is it safe to assume that a gold medalist at the Olympics practiced more than a silver medalist—and that a silver medalist practiced more than a bronze winner? Definitely not, according to a new analysis, which looked at nearly 3,000 athletes. The study found that although becoming world class takes an enormous amount of practice, the success of elite athletes cannot be predicted based on the number of hours they spend in careful training.
In 1993 Swedish psychologist K. Anders Ericsson published a highly influential paper that suggested performance differences between mediocre musicians and their superior counterparts—as determined by the evaluations of their professors—were largely determined by the number of hours they spent practicing. He would later publish work extending his theory to other pursuits, including sports, chess and medicine. Ericsson emphasized that there was no upper bound to the effect that deliberate practice had on success in these areas—the world’s best athletes, musicians and doctors were simply the ones who practiced the most. His work would eventually be popularized by journalist Malcolm Gladwell and others as the “10,000-hour rule,” which suggests that top performance in virtually any field is simply a matter of putting in 10,000 hours of work.
But a new study published in Perspectives on Psychological Science shows—as others have—that deliberate practice is just one factor that makes world sports champions. “More or less across the board, practice will improve one’s performance,” says Brooke Macnamara, a psychologist at Case Western Reserve University and lead author of the study. At a certain level of success, however, other factors determine who is the absolute best, she says.
Macnamara and her colleagues analyzed 34 studies that—put together—had tracked the number of hours 2,765 athletes had practiced. Those studies also recorded the athletes’ achievements, as determined by objective measures such as race times, expert ratings of performance or membership in elite groups. For sports at all levels, including athletes performing at a state level or in clubs, deliberate practice could explain 18 percent of the differences in achievement between athletes. But when the researchers looked only at the very best competitors—those who had competed in the Olympics or other world competitions—differences in the number of hours they had practiced explained just 1 percent of the difference in their performance at sporting events. “This suggests that practice is important to a point, but it stops differentiating who’s good and who’s great,” Macnamara says. At the national and global level, a poorly understood mixture of genetics, psychological traits and other factors influences performance.
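The "percent of differences explained" figures above come from squaring the correlation between practice hours and achievement. A minimal sketch of that calculation, using invented practice and performance numbers (not data from Macnamara's study):

```python
# Hypothetical data for 8 athletes: practice hours and a performance score.
# (Illustrative numbers only -- not taken from the study itself.)
hours = [4000, 5500, 6000, 7500, 8000, 9000, 9500, 10000]
score = [55, 60, 72, 70, 78, 74, 85, 80]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from raw deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(hours, score)
# "Variance explained" is the square of the correlation: a correlation of
# about 0.42 corresponds to the ~18 percent figure for all athletes, while
# a correlation of about 0.1 corresponds to the ~1 percent figure for elites.
print(f"r = {r:.2f}, variance explained = {r ** 2:.0%}")
```

The squaring is why even a moderate-looking correlation translates into a small share of explained variance — which is exactly the pattern the study reports at the elite level.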
Vance Bergeron was once an amateur cyclist who rode 7,000 kilometres per year—much of it on steep climbs in the Alps. But in February 2013, as the 50-year-old chemical engineer was biking to work at the École Normale Supérieure in Lyons, France, he was hit by a car. The impact sent him flying through the air and onto his head, breaking his neck. When he woke, he learnt that he would never again move his legs on his own, and would have only limited use of his arms.
Confined to bed for months while his body did what healing it could, Bergeron began to look for a way back to cycling. He started to study neuroscience, with an emphasis on research into robotic prostheses that could turn people like him into 'cyborgs': combinations of human and machine. He learnt that some of these prostheses used a technique known as functional electrical stimulation (FES) to deliver electrical signals to atrophied limbs or the stumps of missing ones, causing the muscles to contract and restoring some function.
As soon as Bergeron had recovered enough to use a wheelchair, he took that idea back to the lab, where he switched his research focus to neuroscience. Using himself as a guinea pig, he and his team worked out how to stimulate the nerves in his legs so that his muscles would flex and pedal a bike. “I have become my own research project and it's a win–win,” he says.
Even with regular exercise sessions to build muscle, Bergeron's artificially stimulated legs have produced at most 20 watts of power, barely one-tenth of the 150–200 watts produced by an average cyclist. But he and his team are building the FES controller and electrodes into a carbon-fibre recumbent tricycle that he hopes will help him to do better—and perhaps win a medal on 8 October, when he takes his machine to Zurich, Switzerland, to race against other FES cyclists in the Cybathlon: the first cyborg Olympics.
Breathable clothing is important for soldiers looking to avoid heat stress and exhaustion, but in some situations, added protection is needed against biological and chemical agents. Current protective equipment struggles to effectively offer both at once, but now scientists at Lawrence Livermore National Laboratory (LLNL) have developed a material that begins to bridge the gap, using carbon nanotubes to actively block contaminants while still allowing water vapor to escape.
The material is a flexible polymer membrane containing an array of aligned carbon nanotubes (CNTs), which function as extremely tiny pores. The key to how they block biological agents is simple: the tubes have a diameter of under 5 nanometers, more than 5,000 times thinner than a human hair and, crucially, less than half the size of most bacteria and viruses. Sweat, in the form of water vapor, can easily escape from the wearer's skin through these pores, yet bacteria are simply too big to pass through.
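To put those scales side by side, here is a rough back-of-envelope comparison. The typical sizes below are approximate textbook values assumed for illustration, not measurements from the LLNL paper:

```python
# Rough size comparison in nanometres (approximate typical values,
# assumed for illustration only).
sizes_nm = {
    "CNT pore": 5,
    "water molecule": 0.3,      # ~0.3 nm across -- escapes easily as vapor
    "dengue virus": 50,         # roughly 40-60 nm -- blocked
    "typical bacterium": 1000,  # ~1 micrometre -- blocked
}

pore = sizes_nm["CNT pore"]
for name, size in sizes_nm.items():
    if name == "CNT pore":
        continue
    verdict = "passes" if size < pore else "blocked"
    print(f"{name}: {size} nm -> {verdict} ({size / pore:.1f}x pore size)")
```

The orders-of-magnitude gap is the whole trick: the pore sits comfortably between the size of a water molecule and the size of the smallest pathogens, so the membrane can be breathable and protective at once.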
"We demonstrated that these membranes provide rates of water vapor transport that surpass those of commercial breathable fabrics like GoreTex, even though the CNT pores are only a few nanometers wide," says Ngoc Bui, the lead author of the paper.
During filtration tests, the nanotube membranes were exposed to liquid solutions containing dengue virus and successfully kept the bugs out, even when the material was wet.
After robot cars and robot rescue workers, US research agency Darpa is turning its attention to robot hackers.
Best known for its part in bringing the internet into being, the Defence Advanced Research Projects Agency has more recently brought engineers together to tackle what it considers to be "grand challenges".
These competitions try to accelerate research into issues it believes deserve greater attention - they gave rise to serious work on autonomous vehicles and saw the first stumbling steps towards robots that could help in disaster zones.
Next is a Cyber Grand Challenge that aims to develop software smart enough to spot and seal vulnerabilities in other programs before malicious hackers even know they exist.
"Currently, the process of creating a fix for a vulnerability is all people, and it's a process that's reactive and slow," said Mike Walker, head of the Cyber Grand Challenge at Darpa.
This counted as a grand challenge, he said, because of the sheer complexity of modern software and the fundamental difficulty one computer had in understanding what another was doing - a problem first explored by computer pioneer Alan Turing.
We’ve seen how design can keep us away from harm and save our lives. But there is a more subtle way that design influences our daily decisions and behavior – whether we know it or not. It’s not sexy or trendy or flashy in any way. I’m talking about defaults.
Defaults are the settings that come out of the box, the selections you make on your computer by hitting enter, the assumptions that people make unless you object, the options easily available to you because you haven’t changed them.
They might not seem like much, but defaults (and their designers) hold immense power – they make decisions for us that we’re not even aware of making. Consider the fact that most people never change the factory settings on their computer, the default ringtone on their phones, or the default temperature in their fridge. Someone, somewhere, decided what those defaults should be – and it probably wasn’t you.
Another example: In the U.S., when you register for your driver's license, you're asked whether you'd like to be an organ donor. We operate on an opt-in basis: that is, the default is that you are not an organ donor. If you want to donate your organs, you need to actively check a box on the DMV questionnaire. Only about 40 percent of the population is signed up to be an organ donor.
In other countries such as Spain, Portugal and Austria, the default is that you’re an organ donor unless you explicitly choose not to be. And in many of those countries over 99 percent of the population is registered. A recent study found that countries with opt-out or “presumed consent” policies don’t just have more people who sign up to be donors, they also have consistently higher numbers of transplants.
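In software terms, the power of a default is visible in a single function signature: whoever writes it has chosen for every caller who doesn't override it. A hypothetical sketch (the function and field names are invented for illustration):

```python
# Two versions of the same registration form, differing only in one default.
def register_opt_in(name, organ_donor=False):
    """Opt-in system: you are NOT a donor unless you actively check the box."""
    return {"name": name, "organ_donor": organ_donor}

def register_opt_out(name, organ_donor=True):
    """Opt-out (presumed consent): you ARE a donor unless you object."""
    return {"name": name, "organ_donor": organ_donor}

# Most people simply accept the default, so the designer's one-character
# choice (False vs. True) largely decides the population-level outcome.
print(register_opt_in("Alice"))   # {'name': 'Alice', 'organ_donor': False}
print(register_opt_out("Alice"))  # {'name': 'Alice', 'organ_donor': True}
```

Either caller can still override the default explicitly — just as a driver can check or uncheck the box — but the 40 percent versus 99 percent gap suggests most never do.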
Just when you thought your favourite childhood science hero, Sir David Attenborough, couldn't get any more awesome, some genius over at the Lovin Dublin Facebook page has edited his pithy narration over the top of Pokémon Go game play.
The result is both hilarious and nostalgic, seeing as the game is probably the closest thing today's generation will get to actually stalking wild animals National Geographic-style.
The video really has to be seen to be appreciated, but the best part of all is when even Attenborough is sick of goddamn Zubats.
"Bats, with their fluttering zigzag flights, are not easy targets," he explains in the footage above. "That is one bat that will not return to the roost tonight."
For everyone (anyone?) who hasn't tried playing Pokémon Go just yet, it's not too late.
It might just be a game, but it's reportedly helping people to treat their depression and anxiety by getting them out of the house and socialising.
Plus, you get to walk around your neighbourhood, phone in hand, pretending to be a zoologist on the hunt for the next rare species. And, if you're lucky, you might even find it.
Iris recognition, retina scanning, fingerprints, voice recognition—all of these show promise. But there’s one biometric tool that’s secure, effortless and available now. It’s a special kind of face recognition that’s available on some Windows 10 computers—those equipped with an Intel RealSense camera, such as the Surface Pro 4.
RealSense is actually a sophisticated set of three sensors: one each for infrared, color and 3-D perception. Some laptops come with the RealSense camera built in or you can buy one as an external gadget that plugs into your computer’s USB jack.
The feature is called Windows Hello. Actually, Hello can log you into your PC using fingerprint, iris or facial recognition—but the facial thing is by far the most convenient. Once it’s set up, when you sit down in front of your computer, it recognizes your face and logs you in instantly. You can’t fool it with a photograph, a 3-D model of your head or even an identical twin. Thanks to the infrared camera, you can log yourself in even in the dark.
Why do people seek out information about an ex's new relationships, read negative Internet comments and do other things that will obviously be painful? Because humans have an inherent need to resolve uncertainty, according to a recent study in Psychological Science. The new research reveals that the need to know is so strong that people will seek to slake their curiosity even when it is clear the answer will hurt.
In a series of four experiments, behavioral scientists at the University of Chicago Booth School of Business and the Wisconsin School of Business tested students' willingness to expose themselves to aversive stimuli in an effort to satisfy curiosity. For one trial, each participant was shown a pile of pens that the researcher claimed were from a previous experiment. The twist? Half of the pens would deliver an electric shock when clicked.
Twenty-seven students were told which pens were rigged; another 27 were told only that some were electrified. When left alone in the room, the students who did not know which ones would shock them clicked more pens and incurred more jolts than the students who knew what would happen. Subsequent experiments replicated this effect with other stimuli, such as the sound of fingernails on a chalkboard and photographs of repulsive insects.
The drive to discover is deeply ingrained in humans, on par with the basic drives for food or sex, says Christopher Hsee of the University of Chicago, a co-author of the paper. Curiosity is often considered a good instinct—it can lead to new scientific advances, for instance—but sometimes such inquiry can backfire. “The insight that curiosity can drive you to do self-destructive things is a profound one,” says George Loewenstein, a professor of economics and psychology at Carnegie Mellon University who has pioneered the scientific study of curiosity.
Morbid curiosity is possible to resist, however. In a final experiment, participants who were encouraged to predict how they would feel after viewing an unpleasant picture were less likely to choose to see such an image. These results suggest that imagining the outcome of following through on one's curiosity ahead of time can help determine whether it is worth the endeavor. “Thinking about long-term consequences is key to mitigating the possible negative effects of curiosity,” Hsee says. In other words, don't read online comments.
"For a period of time, SD card data recovery has been a tough problem to deal with, but now, it is no longer difficult."
SD Card is short for Security Digital Card, also known as Security Digital Memory Card. It is actually a new generation of memory device based on the semiconductor flash memory device. SD card has been widely using in a variety of portable devices, including digital cameras, personal digital assistants (PDA) and multimedia players.
In 1999, Panasonic put forward the concept of SD card, and Toshiba and SanDisk completed the substantive development. In 2000, these three companies finished the establishment of SD Association (Secure Digital Association, briefly called SDA), and it attracted a large number of companies to participate in this organization: IBM, Microsoft, Motorola, NEC, Samsung, and so on.
For years, scientists have been experimenting with "biobots." Examples include insects fitted with electronic systems that harvest kinetic energy from their wings, insects that use the Xbox's Kinect interface to follow a set path, and insects put to work mapping building interiors. Now, engineers at Washington University in St. Louis are developing a method to tap into the highly tuned olfactory system of locusts, using them like tiny cyborg sniffer dogs to detect the smell of chemicals used in explosives.
Leading the project is Baranidharan Raman, Associate Professor of Biomedical Engineering at WUSTL, who has previously conducted research into the sensory systems of locusts. Those studies determined how the insects' brains light up in response to olfactory stimuli, and found that even when clouded by other smells, locusts are able to single out odors they've been trained to identify.
Since the locusts' natural chemical-sensing system is far more powerful than any artificial one, Raman plans to harness the power of that nose, attaching miniature electronics to the insects that monitor their brains and determine, through their neural activity, which chemical compounds the locusts are detecting.
A concept for remote-controlled “avatar” humanoid robots, presented by ANA, Japan’s largest airline, was named one of the three “top prize concept” finalists at XPRIZE’s recent inaugural Visioneers event.
The ANA AVATAR Team, led by scientist Harry Kloor, PhD, presented an ambitious vision of a future in which human pilots would hear, see, talk, touch, and feel as if a humanoid robotic body were their own — instantaneously across the globe and beyond. Ray Kurzweil is Co-Prize Developer.
The avatar concept, an extension of the avatar portrayed in the movie Avatar, involves haptics, motion capture, AR, VR, artificial intelligence, and deep machine learning technologies.
“The ANA team presented a compelling AVATAR XPRIZE concept,” said Dr. Peter H. Diamandis, executive chairman and founder of XPRIZE. “The Avatar XPRIZE will initiate a revolution in the way we work, travel, explore and interact with each other.”
“The top three Visioneers teams, with prize concepts in the areas of cancer, avatars, and ALS, have been certified by XPRIZE as ‘ready to launch’ and we look forward to working with the teams to finalize the prize designs, secure the necessary funding, and launch one or all of these world-changing competitions,” Marcus Shingles, CEO of XPRIZE told KurzweilAI.
Other top concepts were presented by Deloitte and Caterpillar for conquering cancer and ALS, respectively. The Visioneers Summit is intended to help the XPRIZE strategic approach evolve and advance its prize designs.
In an age when digital information can fly around the connected networks of the world in the blink of an eye, it may seem a little old-timey to consider delivering messages by hand. But that's precisely what Panasonic is doing at CEATEC this week. The company is demonstrating a prototype communication system where data from one person to another is transmitted through touch.
There's very little information on the system available, but Panasonic says that the prototype uses electric field communication technology to move data from "thing-to-thing, human-to-human and human-to-thing." Data transfer and authentication occurs when the objects or people touch, with digital information stored in a source tag instantaneously moving to a receiver module – kind of like NFC tap to connect technology, but with people in the equation as well as devices.
When writing the screenplay for 1968's 2001, Arthur C. Clarke and Stanley Kubrick were confident that something resembling the sentient, humanlike HAL 9000 computer would be possible by the film's namesake year. That's because the leading AI experts of the time were equally confident.
Clarke and Kubrick took the scientific community's predictions to their logical conclusion, that an AI could have not only human charm but human frailty as well: HAL goes mad and starts offing the crew. But HAL was put in an impossible situation, forced to hide critical information from its (his?) coworkers and ordered to complete the mission to Jupiter no matter what. "I'm afraid, Dave," says the robot as it's being dismantled by the surviving astronaut.
If humans had more respect for the emotions that they themselves gave to HAL, would things have turned out better? And will that very question ever be more than an experiment in thought?
The year 2001 came and went, with an AI even remotely resembling HAL looking no more realistic than that Jovian space mission (although Elon Musk is stoking hopes). The disappointment that followed the 1960s' optimism led to the "AI Winter"—decades in which artificial intelligence researchers received more derision than funding. Yet the fascination with humanlike AI is as strong as ever in popular consciousness, manifest in the disembodied voice of Samantha in the movie Her, the partial humanoid Eva in Ex Machina, and the whole town full of robot people in HBO's new reboot of the 1973 sci-fi Western Westworld.
Today, the term "artificial intelligence" is ferociously en vogue again, freed from winter and enjoying the blazing heat of an endless summer. AI now typically refers to such specialized tools as machine-learning systems that scarf data and barf analysis. These technologies are extremely useful—even transformative—for the economy and scientific endeavors like genetic research, but they bear virtually no resemblance to HAL, Samantha, Eva, or Dolores from Westworld.
The augmented-reality game "Pokémon Go" may be the hottest thing in mobile gaming right now, but new advances in computer science could give players an even more realistic experience in the future, according to a new study. In fact, researchers say a new imaging technique could help make imaginary characters, such as Pokémon, appear to convincingly interact with real objects.
A new imaging technique called Interactive Dynamic Video can take pictures of real objects and quickly create video simulations that people, or 3D models, can virtually interact with, the researchers said. In addition to fueling game development, these advances could help simulate how real bridges and buildings might respond to potentially disastrous situations, the researchers added.
The smartphone game "Pokémon Go" superimposes images onto the real world to create a mixed reality. The popularity of this game follows a decades-long trend of computer-generated imagery weaving its way into movies and TV shows. However, while 3D models that can move amid real surroundings on video screens are now commonplace, it remains a challenge getting computer-generated images to look as if they are interacting with real objects. Building 3D models of real items is expensive, and can be nearly impossible for many objects, the researchers said.
Now, Interactive Dynamic Video could bridge that gap, the researchers said.
"When I came up with and tested the technique, I was surprised that it worked quite so well," said study lead author Abe Davis, a computer scientist at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.
Many Pokémon Go players were saddened this week when an update prevented the use of unofficial tracking services. It also made it harder to catch Pokémon on your travels - problems developer Niantic has finally addressed.
The most recent version of the phenomenally successful Pokémon Go banned third party apps such as Pokévision and Pokéhound, which allowed players to see where Pokémon were spawning in the world around them. It also removed the game's 'Nearby' feature, a footstep indicator that was supposed to show how far a Pokémon was from the player, but which rarely worked.
In a blog post, Niantic has revealed what many had suspected - that the server drain caused by the tracking apps was impacting Pokémon Go's already volatile performance. "Running a product like Pokémon Go at scale is challenging. Those challenges have been amplified by third parties attempting to access our servers in various ways outside of the game itself," wrote Niantic founder John Hanke. "We were delayed in [launching the game in Latin America] due to aggressive efforts by third parties to access our servers outside of the Pokémon Go game client and our terms of service. We blocked some more of those attempts yesterday."
It’s not hard to see why Pokémon Go has become so popular. Its simple gameplay and social element have made the augmented reality game an instant hit, eclipsing the daily active user figures of giants such as Candy Crush Saga and Tinder. Yet the company behind Pokémon Go, Niantic, has done very little to promote the game since it launched. Beyond a handful of release notifications from the official Pokémon Go Twitter account, no TV commercials have been commissioned and in-app advertising is minimal.
Niantic has instead relied on word-of-mouth to promote its take on Pokémon, particularly in the form of unofficial viral pictures, videos and social media posts shared online (internet memes) that reference or parody the game. This user-generated content ensures the title is on the lips of the masses, even if many of them haven’t even played it yet.
The term “meme”, coined by biologist Richard Dawkins in his 1976 book The Selfish Gene, refers to an idea, behaviour or style that propagates across culture, just as a successful gene spreads through a population. Internet memes pervade the web in a similar way. Carried via text, images or videos, the idea (often a quip or funny observation) is replicated by being shared and reposted, transforming over time to spawn hundreds of thematic variants.
Even before a realistic-looking Tupac Shakur was "resurrected" on stage for a live performance, holograms had captured the minds and imaginations of many. But the gap between fantasy and reality has narrowed significantly over the past few years. HoloVit, which recently produced a working prototype, is seeking funds on Indiegogo for its personal holography system. HoloVit recording sets and screens are designed to capture and display holograms projected from smartphones, tablets, laptops, or TVs.
There are a number of different hologram technologies currently being developed. Some, like Microsoft's "holoportation," involve complex 3D video capture systems, while others, such as Holho's "hologram generator," employ simple mirrors set upon mobile devices to create the illusion of moving, three-dimensional images. HoloVit is a bit of a cross between the two, with the way it displays floating video on transparent screens.
The system is designed to work with these devices without requiring a projector or other special equipment. In a sense, the smartphone, tablet, laptop, or TV becomes the projector as it faces a HoloVit screen. When set at the optimal distance, images and video come to life, even in brightly lit rooms – a condition that challenges many projectors. One caveat is that only content that has been formatted as a hologram will work.
VR hardware is already capable of tracking your head, your hands, your eyes and in some cases, your feet, but Veeso is claimed to be the first VR headset to capture your face and transmit your expressions – and as a result, your emotions – onto a virtual avatar in real time. With it, the company is emphasizing emotional connections through chat apps and social games like poker.
Like the Samsung Gear VR and Google Cardboard, Veeso is a smartphone-based VR headset, which the company claims is compatible with Android and iOS devices. Unlike those aforementioned headsets, however, Veeso has two infrared cameras mounted on it to capture the wearer's facial expressions.
One of these cameras is located between the eyes to capture pupil movements, eyebrows, and how open or closed the eyelids are, while the second hangs off the bottom of the unit, taking in the jaw, lips and mouth. Together, from what we see in the videos, they seem to do a pretty solid job of covering the whole face and mimicking the facial expressions on a digital avatar in real time.
In his 1963 book God and Golem, the founder of the cybernetics movement Norbert Wiener suggested a compelling thought experiment. Imagine cutting off someone’s hand, he wrote, but leaving intact the key muscles and nerves. Theoretically, a prosthesis could connect directly both to nerves and muscles, giving the subject control of the replacement organ as if it were real.
So far so sensible: this scenario was a reasonable extrapolation at the time, and is close to becoming a reality today. Wiener, however, went further. Having imagined an artificial hand able to replace its original, he wondered why we should not now imagine the addition of an entirely new kind of limb or sensory organ? “There is,” he wrote, “a prosthesis of parts which we do not have and which we never have had.” There was no need to stop at nature. Human-machine integration could in theory blur its boundaries well beyond replacement.
It’s 14 July 2016, and between typing this paragraph and the last I dashed outside with my iPhone to catch a Pokémon lurking next to a tree (a cute orange lizard: Charmander, weight 8.5kg, height 0.6m).
What would Wiener have made of this? I suspect he would have been delighted. While I’m playing Pokémon, my smartphone functions much like a sensory prosthesis. In order to move my avatar around a map, I must move myself. When I get close enough to a target, I hold the device up and through its camera see something superimposed on the world that would otherwise be invisible. It’s like having a sixth sense. My Pokémon-gathering escapades place me somewhere between a cyborg and a stamp collector.
Houseplants have never been known as great conversationalists, but it's possible we just can't hear what they're saying. Swiss company Vivent SARL is hoping to rectify that with its Phytl Signs device, which picks up the tiny electrical signals emitted by plants and broadcasts them through a speaker. The ultimate goal is to translate what the plants are actually "saying."
The system, which is currently the subject of a crowdfunding campaign, features two receptors – a stake that is inserted into the soil next to the plant, and a clip that gently connects to a leaf. These measure the voltage coming from the plant, which feeds into a signal processor. From there the plant-speak is output through a built-in speaker. A smartphone app can also receive raw data from a plant, allowing analysis of the signals using data analysis software.
Unlike current plant monitors on the market that measure environmental metrics like soil moisture and sunlight, the Phytl Signs device is claimed to pick up on whether your plant is thriving or stressed, active or quiet, or besieged by pests. The plant responds immediately to a change in lighting or the cutting of a leaf with a spike in its signal, rendered as an electronic howl akin to a theremin. But the company is still working out how to decode what that audio output means.
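The core idea described above — a quiet baseline voltage punctuated by sharp spikes when the plant is disturbed — maps onto a very simple signal-processing task. The sketch below is purely illustrative (Vivent's actual processing is not public): it flags samples that deviate from a rolling baseline by more than a few standard deviations, the kind of first-pass detector one might run on the raw data the smartphone app exposes.

```python
import statistics

def detect_spikes(samples, window=50, factor=4.0):
    """Return indices of samples that deviate from the rolling mean
    of the previous `window` samples by more than `factor` times the
    rolling standard deviation. A hypothetical spike detector, not
    Vivent's algorithm."""
    spikes = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev > 0 and abs(samples[i] - mean) > factor * stdev:
            spikes.append(i)
    return spikes

# Simulated trace: a quiet plant with slight drift, then a single
# sharp response (e.g. to a leaf being cut) at index 100.
signal = [0.1 + 0.001 * (i % 5) for i in range(100)] + [2.5] + [0.1] * 20
print(detect_spikes(signal))  # the disturbance is flagged at index 100
```

Crowdsourced labels of what stimulus caused each flagged spike — the sharing scheme the company describes — is what would turn a detector like this into a crude "translator."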
To that end, the company encourages device owners to share their data with an online community of fellow users, crowdsourcing the effort to decode and translate the plant signals so they can be understood.
Children with a rare neurological disease were recently given the chance to walk for the first time thanks to a new robotic exoskeleton. These devices – which are essentially robotic suits that give artificial movement to a user’s limbs – are set to become an increasingly common way of helping people who’ve lost the use of their legs to walk. But while today’s exoskeletons are mostly clumsy, heavy devices, new technology could make them much easier and more natural to use by creating a robotic skin.
Exoskeletons have been in development since the 1960s. The first one was a bulky set of legs and claw-like gloves reminiscent of the superhero Iron Man, designed to use hydraulic power to help industrial workers lift hundreds of kilogrammes of weight. It didn’t work, but since then other designs for both the upper and lower body have successfully been used to increase people’s strength, help teach them to use their limbs again, or even as a way to interact with computers using touch or “haptic” feedback.
These devices usually consist of a chain of links and powered joints that align with the user’s own bones and joints. The links are strapped securely to the user’s limbs, and when the powered joints are activated they cause the user’s joints to flex. Control of the exoskeleton can be performed by a computer – for example, if it is performing a physiotherapy routine – or by monitoring the electrical activity in the user’s muscles and then amplifying the force they are creating.
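The second control scheme mentioned above — read muscle activity, then amplify it — can be sketched in a few lines. This is a hypothetical toy model, not any real device's controller: raw EMG is rectified and smoothed into an activation level, which is mapped to a capped motor torque. The gain and torque limit are made-up illustrative values.

```python
def smooth(emg_samples, alpha=0.2):
    """Exponential moving average of rectified EMG samples,
    yielding a slowly varying muscle-activation level."""
    level = 0.0
    out = []
    for s in emg_samples:
        level = alpha * abs(s) + (1 - alpha) * level
        out.append(level)
    return out

def assist_torque(activation, gain=5.0, max_torque=20.0):
    """Map smoothed activation to a motor torque (N*m), capped
    so the joint can never be driven beyond a safe limit."""
    return min(gain * activation, max_torque)

raw = [0.0, 0.5, -0.6, 0.8, -0.7, 0.9]   # simulated EMG burst
levels = smooth(raw)
print(round(assist_torque(levels[-1]), 2))
```

The cap on torque reflects a real design constraint: an amplifying controller must never let a noisy EMG reading drive the joint with dangerous force.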