The human brain is an enigma wrapped in a skull, but the field of neuroscience is beginning to unravel its secrets. What we learn could be used for good – such as how to develop prosthetic limbs and wheelchairs that can be controlled directly by a patient's thoughts – or bad, like the possibility of mind-controlled weaponry. To help us navigate the potentially murky waters of probing and peering into the human mind, researchers from Switzerland have proposed four new human rights relating to limitations on how the brain should be read or manipulated.
Brainwaves can be tracked using electroencephalography (EEG), and that's helping us map out which parts of the brain are involved in which processes, diagnose concussions, guess which number someone's thinking of, help stroke victims regain their motor skills or let "locked-in" people communicate with the outside world. If that's the "out" signal, then transcranial magnetic stimulation (TMS) is the "in": placing a magnetic coil on the back of the skull, the patient's brain can be directly stimulated to boost memory or learning, send messages, treat migraines or even play games.
In the year 2000, logging onto the Internet usually meant sitting down at a monitor connected to a dial-up modem, a bunch of beeps and clicks, and a "You've got mail!" notification. In those days, AOL Instant Messenger was the Internet's favorite pastime, and the king of AIM was SmarterChild, a chatbot that lived in your buddy list.
A chatbot is a computer program designed to simulate human conversation, and SmarterChild was one of the first chatbots the public ever saw. The idea was that you would ask SmarterChild a question — "Who won the Mets game last night?" or "Where did the Dow close today?" — then the program would scour the Internet and, within seconds, respond with the answer. The company that built SmarterChild, a startup called ActiveBuddy, thought it could make money by building custom bots for big companies and made SmarterChild as a test case.
And people did use SmarterChild — a lot. At its height, SmarterChild chatted with 250,000 people a day.
Responding like a human
But most of those people weren't asking SmarterChild about sports or stocks. They were just chitchatting with it, about nothing in particular — like how you'd chat with a friend. "Our goal was to make a bot people would actually use, and to do that we had to make the best friend on the Internet," says Robert Hoffer, one of its creators.
“I’m dying of boredom,” complains the young wife, Yelena, in Chekhov’s 1897 play Uncle Vanya. “I don’t know what to do.” Of course, if Yelena were around today, we know how she’d alleviate her boredom: She’d pull out her smartphone and find something diverting, like BuzzFeed or Twitter or Clash of Clans. If you have a planet’s worth of entertainment in your pocket, it’s easy to stave off ennui.
Unless it turns out ennui is good for us. What if boredom is a meaningful experience—one that propels us to states of deeper thoughtfulness or creativity?
That’s the conclusion of two fascinating recent studies. In one, researchers asked a group of subjects to do something boring, like copying out numbers from a phone book, and then take tests of creative thinking, such as devising uses for a pair of cups. The result? Bored subjects came up with more ideas than a nonbored control group, and their ideas were often more creative. In a second study, subjects who took an “associative thought” word test came up with more answers when they’d been forced to watch a dull screensaver.
A concept for remote-controlled “avatar” humanoid robots, presented by ANA, Japan’s largest airline, was named one of the three “top prize concept” finalists at XPRIZE’s recent inaugural Visioneers event.
The ANA AVATAR Team, led by scientist Harry Kloor, PhD, presented an ambitious vision of a future in which human pilots would hear, see, talk, touch, and feel as if a humanoid robotic body were their own — instantaneously across the globe and beyond. Ray Kurzweil is Co-Prize Developer.
The avatar concept, an extension of the avatar portrayed in the movie Avatar, involves haptics, motion capture, AR, VR, artificial intelligence, and deep machine learning technologies.
“The ANA team presented a compelling AVATAR XPRIZE concept,” said Dr. Peter H. Diamandis, executive chairman and founder of XPRIZE. “The Avatar XPRIZE will initiate a revolution in the way we work, travel, explore and interact with each other.”
“The top three Visioneers teams, with prize concepts in the areas of cancer, avatars, and ALS, have been certified by XPRIZE as ‘ready to launch’ and we look forward to working with the teams to finalize the prize designs, secure the necessary funding, and launch one or all of these world-changing competitions,” Marcus Shingles, CEO of XPRIZE told KurzweilAI.
Other top concepts were presented by Deloitte and Caterpillar for conquering cancer and ALS, respectively. The Visioneers Summit is intended to help the XPRIZE strategic approach evolve and advance its prize designs.
In an age when digital information can fly around the connected networks of the world in the blink of an eye, it may seem a little old timey to consider delivering messages by hand. But that's precisely what Panasonic is doing at CEATEC this week. The company is demonstrating a prototype communication system where data from one person to another is transmitted through touch.
There's very little information on the system available, but Panasonic says that the prototype uses electric field communication technology to move data from "thing-to-thing, human-to-human and human-to-thing." Data transfer and authentication occurs when the objects or people touch, with digital information stored in a source tag instantaneously moving to a receiver module – kind of like NFC tap to connect technology, but with people in the equation as well as devices.
When writing the screenplay for 1968's 2001, Arthur C. Clarke and Stanley Kubrick were confident that something resembling the sentient, humanlike HAL 9000 computer would be possible by the film's namesake year. That's because the leading AI experts of the time were equally confident.
Clarke and Kubrick took the scientific community's predictions to their logical conclusion, that an AI could have not only human charm but human frailty as well: HAL goes mad and starts offing the crew. But HAL was put in an impossible situation, forced to hide critical information from its (his?) coworkers and ordered to complete the mission to Jupiter no matter what. "I'm afraid, Dave," says the robot as it's being dismantled by the surviving astronaut.
If humans had more respect for the emotions that they themselves gave to HAL, would things have turned out better? And will that very question ever be more than an experiment in thought?
The year 2001 came and went, with an AI even remotely resembling HAL looking no more realistic than that Jovian space mission (although Elon Musk is stoking hopes). The disappointment that followed the optimism of the 1960s led to the "AI Winter"—decades in which artificial intelligence researchers received more derision than funding. Yet the fascination with humanlike AI is as strong as ever in popular consciousness, manifest in the disembodied voice of Samantha in the movie Her, the partial humanoid Eva in Ex Machina, and the whole town full of robot people in HBO's new reboot of the 1973 sci-fi Western Westworld.
Today, the term "artificial intelligence" is ferociously en vogue again, freed from winter and enjoying the blazing heat of an endless summer. AI now typically refers to such specialized tools as machine-learning systems that scarf data and barf analysis. These technologies are extremely useful—even transformative—for the economy and scientific endeavors like genetic research, but they bear virtually no resemblance to HAL, Samantha, Eva, or Dolores from Westworld.
The augmented-reality game "Pokémon Go" may be the hottest thing in mobile gaming right now, but new advances in computer science could give players an even more realistic experience in the future, according to a new study. In fact, researchers say a new imaging technique could help make imaginary characters, such as Pokémon, appear to convincingly interact with real objects.
A new imaging technique called Interactive Dynamic Video can take pictures of real objects and quickly create video simulations that people, or 3D models, can virtually interact with, the researchers said. In addition to fueling game development, these advances could help simulate how real bridges and buildings might respond to potentially disastrous situations, the researchers added.
The smartphone game "Pokémon Go" superimposes images onto the real world to create a mixed reality. The popularity of this game follows a decades-long trend of computer-generated imagery weaving its way into movies and TV shows. However, while 3D models that can move amid real surroundings on video screens are now commonplace, it remains a challenge to make computer-generated images look as if they are interacting with real objects. Building 3D models of real items is expensive, and can be nearly impossible for many objects, the researchers said.
Now, Interactive Dynamic Video could bridge that gap, the researchers said.
"When I came up with and tested the technique, I was surprised that it worked quite so well," said study lead author Abe Davis, a computer scientist at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.
Many Pokémon Go players were saddened this week when an update prevented the use of unofficial tracking services. It also made it harder to catch Pokémon on your travels - problems developer Niantic has finally addressed.
The most recent version of the phenomenally successful Pokémon Go banned third party apps such as Pokévision and Pokéhound, which allowed players to see where Pokémon were spawning in the world around them. It also removed the game's 'Nearby' feature, a footstep indicator that was supposed to show how far a Pokémon was from the player, but which rarely worked.
In a blog post, Niantic has revealed what many had suspected - that the server drain caused by the tracking apps was impacting Pokémon Go's already volatile performance. "Running a product like Pokémon Go at scale is challenging. Those challenges have been amplified by third parties attempting to access our servers in various ways outside of the game itself," wrote Niantic founder John Hanke. "We were delayed in [launching the game in Latin America] due to aggressive efforts by third parties to access our servers outside of the Pokémon Go game client and our terms of service. We blocked some more of those attempts yesterday."
It’s not hard to see why Pokémon Go has become so popular. Its simple gameplay and social element have made the augmented reality game an instant hit, eclipsing the daily active user figures of giants such as Candy Crush Saga and Tinder. Yet the company behind Pokémon Go, Niantic, has done very little to promote the game since it launched. Beyond a handful of release notifications from the official Pokémon Go Twitter account, no TV commercials have been commissioned and in-app advertising is minimal.
Niantic has instead relied on word-of-mouth to promote its take on Pokémon, particularly in the form of unofficial viral pictures, videos and social media posts shared online (internet memes) that reference or parody the game. This user-generated content ensures the title is on the lips of the masses, even if many of them haven’t even played it yet.
The term “meme”, coined by biologist Richard Dawkins in his book The Selfish Gene, refers to an idea, behaviour or style that propagates across culture, just as a successful gene spreads through a population. Internet memes pervade the web in a similar way. Carried via text, images or videos, the idea (often a quip or funny observation) is replicated by being shared and reposted, transforming over time to spawn hundreds of thematic variants.
Even before a realistic-looking Tupac Shakur was "resurrected" on-stage for a live performance, holograms have captured the minds and imaginations of many. But the gap between fantasy and reality has narrowed significantly over the past few years. HoloVit, which recently demonstrated a prototype, is seeking funds on Indiegogo for its personal holography system. HoloVit recording sets and screens are designed to capture and display holograms projected from smartphones, tablets, laptops, or TVs.
There are a number of different hologram technologies currently being developed. Some, like Microsoft's "holoportation," involve complex 3D video capture systems while others, such as Holho's "hologram generator," employ simple mirrors set upon mobile devices to create the illusion of moving, three-dimensional images. HoloVit is a bit of a cross between the two, with the way it displays floating video on transparent screens.
The system is designed to work with these devices without requiring a projector or special equipment. In a sense, a smartphone, tablet, laptop, or TV becomes the projector as it faces a HoloVit screen. When set at the optimal distance, images and video come to life, even in brightly-lit rooms – a challenge common to many projectors. One caveat is that only content that has been formatted as a hologram will work.
VR hardware is already capable of tracking your head, your hands, your eyes and in some cases, your feet, but Veeso is claimed to be the first VR headset to capture your face and transmit your expressions – and as a result, your emotions – onto a virtual avatar in real time. With it, the company is emphasizing emotional connections through chat apps and social games like poker.
Like the Samsung Gear VR and Google Cardboard, Veeso is a smartphone-based VR headset, which the company claims is compatible with Android and iOS devices. Unlike those aforementioned headsets, however, Veeso has two infrared cameras mounted on it to capture the wearer's facial expressions.
One of these cameras is located between the eyes to capture pupil movements, eyebrows, and how open or closed the eyelids are, while the second hangs off the bottom of the unit, taking in the jaw, lips and mouth. Together, from what we see in the videos, they seem to do a pretty solid job of covering the whole face and mimicking the facial expressions on a digital avatar in real time.
In his 1963 book God and Golem, the founder of the cybernetics movement Norbert Wiener suggested a compelling thought experiment. Imagine cutting off someone’s hand, he wrote, but leaving intact the key muscles and nerves. Theoretically, a prosthesis could connect directly both to nerves and muscles, giving the subject control of the replacement organ as if it were real.
So far so sensible: this scenario was a reasonable extrapolation at the time, and is close to becoming a reality today. Wiener, however, went further. Having imagined an artificial hand able to replace its original, he wondered why we should not now imagine the addition of an entirely new kind of limb or sensory organ? “There is,” he wrote, “a prosthesis of parts which we do not have and which we never have had.” There was no need to stop at nature. Human-machine integration could in theory blur its boundaries well beyond replacement.
It’s 14 July 2016, and between typing this paragraph and the last I dashed outside with my iPhone to catch a Pokémon lurking next to a tree (a cute orange lizard: Charmander, weight 8.5kg, height 0.6m).
What would Wiener have made of this? I suspect he would have been delighted. While I’m playing Pokémon, my smartphone functions much like a sensory prosthesis. In order to move my avatar around a map, I must move myself. When I get close enough to a target, I hold the device up and through its camera see something superimposed on the world that would otherwise be invisible. It’s like having a sixth sense. My Pokémon-gathering escapades place me somewhere between a cyborg and a stamp collector.
Instead of replacing humans with robots, artificial intelligence should be used more for augmenting human memory and other human weaknesses, Apple Inc. executive Tom Gruber suggested at the TED 2017 conference yesterday (April 25, 2017).
Thanks to the internet and our smartphones, much of our personal data is already being captured, notes Gruber, who was one the inventors of voice-controlled intelligent-assistant Siri. Future AI memory enhancement could be especially life-changing for those with Alzheimer’s or dementia, he suggested.
“Superintelligence should give us super-human abilities,” he said. “As machines get smarter, so do we. Artificial intelligence can enable partnerships where each human on the team is doing what they do best. Instead of asking how smart we can make our machines, let’s ask how smart our machines can make us.
“I can’t say when or what form factors are involved, but I think it is inevitable,” he said. “What if you could have a memory that was as good as computer memory and is about your life? What if you could remember every person you ever met? How to pronounce their name? Their family details? Their favorite sports? The last conversation you had with them?”
Gruber’s ideas mesh with a prediction by Ray Kurzweil: “Once we have achieved complete models of human intelligence, machines will be capable of combining the flexible, subtle human levels of pattern recognition with the natural advantages of machine intelligence, in speed, memory capacity, and, most importantly, the ability to quickly share knowledge and skills.”
Two projects announced last week aim in that direction: Facebook’s plan to develop a non-invasive brain-computer interface that will let you type at 100 words per minute and Elon Musk’s proposal that we become superhuman cyborgs to deal with superintelligent AI.
But trusting machines also raises security concerns, Gruber warned. “We get to choose what is and is not recalled,” he said. “It’s absolutely essential that this be kept very secure.”
News writing has remained the exclusive domain of journalists until relatively recently. But with the arrival of so-called automated or “robo” journalism, this is no longer the case. Automated journalism is software that converts structured data into stories with limited or no human intervention beyond the initial programming. It has already been deployed by several large news organisations – including the Associated Press which uses the technology to write thousands of business stories every year.
Research I conducted recently with Konstantin Dörr of the University of Zurich and Jessica Kunert of the University of Munich looks at the opportunities and limitations of automated journalism. Although these are not new questions, our approach was. We recruited ten journalists from a range of news organisations and gave them hands-on training with robo-writing software from a leading supplier. We then interviewed them afterwards to analyse their views.
Japanese high school student Saya has flawless skin, shiny hair and her uniform is always clean. That's not surprising since her parents are computer graphics artists who worked together to design their own virtual child.
When Saya was first unveiled around October last year by creators Teruyuki and Yuka Ishikawa, she blew viewers away. She's so realistic -- from her perfectly crafted features and the ultra-high detail skin to the way the light catches each individual strand of hair. People were scratching their heads in amazement.
This month has seen Saya get an upgrade of sorts, bringing her into the moving world. This video of Saya shown at this year's CEATEC, or Combined Exhibition of Advanced Technologies, shows her smiling and nodding at the camera.
It's hard to stop saying "she" and "her" instead of "it." That's because Saya's CGI-designer parents have effectively bridged the uncanny valley. The term, coined by Japanese robotics professor Masahiro Mori, refers to our natural sense of unease around things that are almost-but-not-quite human. It doesn't appear with Saya. Teruyuki and Yuka Ishikawa have somehow reached the other side of the valley and created a character that's both appealing and fascinating to look at.
Last year a new Japanese celebrity burst onto the scene. But "Saya" was a different kind of star, because she is the product of a Tokyo computer lab. And like all "parents", her creators have big ambitions for her, writes the BBC's Yvette Tan.
"'I think I've seen her somewhere' or 'She looks like someone I know' are what people usually say when they see Saya," says Yuka Ishikawa, one half of the husband and wife graphic artist team behind Saya.
When the couple first posted pictures of the hyper-realistic schoolgirl online last year, it was a revelation about what can be achieved with computer design.
Her slightly askew school tie, heavily fringed hair, freckled skin and teenage pout left thousands trying to work out whether or not she was a real person.
Is the throwaway era about to end? The past half century has given us toasters that are irreparable after a minor fault, T-shirts that quickly shrink or fade, and vacuum cleaners that need replacing after a few years. “Planned obsolescence” means old smartphones may perform worse after necessary updates, and products ranging from clothing to spectacles are regularly redesigned to encourage new purchases.
However, the Swedish government’s plan to reduce the rate of VAT levied on repair work from 25% to 12% is the latest sign that Europeans are beginning to question the “take, make and throw away” culture of consumerism that lies at the heart of industrialised economies.
In France, planned obsolescence is now punishable by two years’ imprisonment with a fine of up to €300,000. Spain recently became the first country to set a target designed to increase reuse. Meanwhile Germany’s environment agency, the UBA, has commissioned research on the lifetime of electrical goods in order to develop strategies against obsolescence.

An EU end to throwaway culture
These policies need to be understood in the context of European Union initiatives aimed at advancing sustainability, notably on waste and the “circular economy”, in which materials are kept in use for as long as possible and ultimately recycled. For instance the Waste Framework Directive, approved in 2013, requires each member state to produce a waste prevention programme. To its credit, the UK’s then-coalition government was the first to do so. The minister overseeing waste policy, Dan Rogerson, even proclaimed that “products should be designed … with longer lifetimes, repair and reuse in mind”.
A 2015 EU Action Plan added to the momentum, committing the European Commission to investigate the extent of planned obsolescence and take action where necessary.
And the EcoDesign Directive, which has primarily been used to address energy efficiency, is also to be applied to product lifetimes. The directive already requires vacuum cleaners sold in the EU from September 2017 to have motors designed to last at least 500 hours. Other products may soon be subject to similar requirements.
Is it safe to assume that a gold medalist at the Olympics practiced more than a silver medalist—and that a silver medalist practiced more than a bronze winner? Definitely not, according to a new analysis, which looked at nearly 3,000 athletes. The study found that although becoming world class takes an enormous amount of practice, the success of elite athletes cannot be predicted based on the number of hours they spend in careful training.
In 1993 Swedish psychologist K. Anders Ericsson published a highly influential paper that suggested performance differences between mediocre musicians and their superior counterparts—as determined by the evaluations of their professors—were largely determined by the number of hours they spent practicing. He would later publish work extending his theory to other pursuits, including sports, chess and medicine. Ericsson emphasized that there was no upper bound to the effect that deliberate practice had on success in these areas—the world’s best athletes, musicians and doctors were simply the ones who practiced the most. His work would eventually be popularized by journalist Malcolm Gladwell and others as the “10,000-hour rule,” which suggests that top performance in virtually any field is simply a matter of putting in 10,000 hours of work.
But a new study published in Perspectives on Psychological Science shows—as others have—that deliberate practice is just one factor that makes world sports champions. “More or less across the board, practice will improve one’s performance,” says Brooke Macnamara, a psychologist at Case Western Reserve University and lead author of the study. At a certain level of success, however, other factors determine who is the absolute best, she says.
Macnamara and her colleagues analyzed 34 studies that—put together—had tracked the number of hours 2,765 athletes had practiced. Those studies also recorded the athletes’ achievements, as determined by objective measures such as race times, expert ratings of performance or membership in elite groups. For sports at all levels, including athletes performing at a state level or in clubs, deliberate practice could explain 18 percent of the differences in achievement between athletes. But when the researchers looked only at the very best competitors—those who had competed in the Olympics or other world competitions—differences in the number of hours they had practiced explained just 1 percent of the difference in their performance at sporting events. “This suggests that practice is important to a point, but it stops differentiating who’s good and who’s great,” Macnamara says. At the national and global level, a poorly understood mixture of genetics, psychological traits and other factors influence performance.
Vance Bergeron was once an amateur cyclist who rode 7,000 kilometres per year—much of it on steep climbs in the Alps. But in February 2013, as the 50-year-old chemical engineer was biking to work at the École Normale Supérieure in Lyons, France, he was hit by a car. The impact sent him flying through the air and onto his head, breaking his neck. When he woke, he learnt that he would never again move his legs on his own, and would have only limited use of his arms.
Confined to bed for months while his body did what healing it could, Bergeron began to look for a way back to cycling. He started to study neuroscience, with an emphasis on research into robotic prostheses that could turn people like him into 'cyborgs': combinations of human and machine. He learnt that some of these prostheses used a technique known as functional electrical stimulation (FES) to deliver electrical signals to atrophied limbs or the stumps of missing ones, causing the muscles to contract and restoring some function.
As soon as Bergeron had recovered enough to use a wheelchair, he took that idea back to the lab, where he switched his research focus to neuroscience. Using himself as a guinea pig, he and his team worked out how to stimulate the nerves in his legs so that his muscles would flex and pedal a bike. “I have become my own research project and it's a win–win,” he says.
Even with regular exercise sessions to build muscle, Bergeron's artificially stimulated legs have produced at most 20 watts of power, barely one-tenth of the 150–200 watts produced by an average cyclist. But he and his team are building the FES controller and electrodes into a carbon-fibre recumbent tricycle that he hopes will help him to do better—and perhaps win a medal on 8 October, when he takes his machine to Zurich, Switzerland, to race against other FES cyclists in the Cybathlon: the first cyborg Olympics.
Breathable clothing is important for soldiers looking to avoid heat stress and exhaustion, but in some situations, added protection is needed against biological and chemical agents. Current protective equipment struggles to effectively offer both at once, but now scientists at Lawrence Livermore National Laboratory (LLNL) have developed a material that begins to bridge the gap, using carbon nanotubes to actively block contaminants while still allowing water vapor to escape.
The material is a flexible polymer membrane containing an array of aligned carbon nanotubes (CNTs), which function as extremely tiny pores. The key to how they block biological agents is simple: these tubes have a diameter of under 5 nanometers, which is 5,000 times smaller than a human hair, and crucially, less than half the size of most bacteria and viruses. Sweat, in the form of water vapor, can easily escape from the wearer's skin through these pores, yet bacteria are just too big to get in.
"We demonstrated that these membranes provide rates of water vapor transport that surpass those of commercial breathable fabrics like GoreTex, even though the CNT pores are only a few nanometers wide," says Ngoc Bui, the lead author of the paper.
During filtration tests, the nanotube membranes were exposed to liquid solutions containing dengue virus and successfully kept the bugs out, even when the material was wet.
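The size-exclusion mechanism described above — pores wide enough for water vapor but narrower than a virus — can be sketched in a few lines. The diameters below are rough literature values for illustration, not measurements from the paper, and real membrane transport also involves diffusion and surface chemistry that this ignores:

```python
CNT_PORE_DIAMETER_NM = 5.0  # upper bound cited for the membrane's pores


def passes_pore(particle_diameter_nm: float) -> bool:
    """Size-exclusion sketch: a particle can traverse the pore only if
    it is narrower than the pore itself."""
    return particle_diameter_nm < CNT_PORE_DIAMETER_NM


water_vapor_nm = 0.3    # approximate kinetic diameter of a water molecule
dengue_virus_nm = 50.0  # approximate dengue virion diameter

print(passes_pore(water_vapor_nm))   # sweat escapes
print(passes_pore(dengue_virus_nm))  # the virus is blocked
```

The two orders of magnitude between the pore and the virus are what make the filtering robust even when the membrane is wet.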
After robot cars and robot rescue workers, US research agency Darpa is turning its attention to robot hackers.
Best known for its part in bringing the internet into being, the Defence Advanced Research Projects Agency has more recently brought engineers together to tackle what it considers to be "grand challenges".
These competitions try to accelerate research into issues it believes deserve greater attention - they gave rise to serious work on autonomous vehicles and saw the first stumbling steps towards robots that could help in disaster zones.
Next is a Cyber Grand Challenge that aims to develop software smart enough to spot and seal vulnerabilities in other programs before malicious hackers even know they exist.
"Currently, the process of creating a fix for a vulnerability is all people, and it's a process that's reactive and slow," said Mike Walker, head of the Cyber Grand Challenge at Darpa.
This counted as a grand challenge, he said, because of the sheer complexity of modern software and the fundamental difficulty one computer had in understanding what another was doing - a problem first explored by computer pioneer Alan Turing.
We’ve seen how design can keep us away from harm and save our lives. But there is a more subtle way that design influences our daily decisions and behavior – whether we know it or not. It’s not sexy or trendy or flashy in any way. I’m talking about defaults.
Defaults are the settings that come out of the box, the selections you make on your computer by hitting enter, the assumptions that people make unless you object, the options easily available to you because you haven’t changed them.
They might not seem like much, but defaults (and their designers) hold immense power – they make decisions for us that we’re not even aware of making. Consider the fact that most people never change the factory settings on their computer, the default ringtone on their phones, or the default temperature in their fridge. Someone, somewhere, decided what those defaults should be – and it probably wasn’t you.
Another example: In the U.S. when you register for your driver’s license, you’re asked whether or not you’d like to be an organ donor. We operate on an opt-in basis: that is, the default is that you are not an organ donor. If you want to donate your organs, you need to actively check a box on the DMV questionnaire. Only about 40 percent of the population is signed up to be an organ donor.
In other countries such as Spain, Portugal and Austria, the default is that you’re an organ donor unless you explicitly choose not to be. And in many of those countries over 99 percent of the population is registered. A recent study found that countries with opt-out or “presumed consent” policies don’t just have more people who sign up to be donors, they also have consistently higher numbers of transplants.
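The opt-in/opt-out asymmetry is easy to model: if most people simply keep whatever default they are given, the designer's choice of default largely fixes the outcome. A toy sketch — the 20% "active chooser" figure is an assumption for illustration, not a number from the study:

```python
def enrollment_rate(default_enrolled: bool, change_rate: float = 0.2) -> float:
    """Toy model of default effects: `change_rate` is the fraction of
    people who actively flip the default; everyone else keeps it."""
    return 1.0 - change_rate if default_enrolled else change_rate


print(enrollment_rate(default_enrolled=False))  # opt-in: only active choosers enroll
print(enrollment_rate(default_enrolled=True))   # opt-out: everyone else stays enrolled
```

With the same share of active choosers, flipping the default alone moves enrollment from 20% to 80% — the same lever behind the opt-in versus presumed-consent gap described above.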
Just when you thought your favourite childhood science hero, Sir David Attenborough, couldn't get any more awesome, some genius over at the Lovin Dublin Facebook page has edited his pithy narration over the top of Pokémon Go game play.
The result is both hilarious and nostalgic, seeing as the game is probably the closest thing today's generation will get to actually sleuthing wild animals National Geographic-style.
The video really has to be seen to be appreciated, but the best part of all is when even Attenborough is sick of goddamn Zubats.
"Bats, with their fluttering zigzag flights, are not easy targets," he explains in the footage above. "That is one bat that will not return to the roost tonight."
For everyone (anyone?) who hasn't tried playing Pokémon Go just yet, it's not too late.
It might just be a game, but it's reportedly helping people to treat their depression and anxiety by getting them out of the house and socialising.
Plus, you get to walk around your neighbourhood, phone in hand, pretending to be a zoologist on the hunt for the next rare species. And, if you're lucky, you might even find it.
Iris recognition, retina scanning, fingerprints, voice recognition—all of these show promise. But there’s one security tool that’s secure, effortless and available now. It’s a special kind of face recognition that’s available on some Windows 10 computers—those equipped with an Intel RealSense camera, such as the Surface Pro 4.
RealSense is actually a sophisticated set of three sensors: one each for infrared, color and 3-D perception. Some laptops come with the RealSense camera built in or you can buy one as an external gadget that plugs into your computer’s USB jack.
The feature is called Windows Hello. Actually, Hello can log you into your PC using fingerprint, iris or facial recognition—but the facial thing is by far the most convenient. Once it’s set up, when you sit down in front of your computer, it recognizes your face and logs you in instantly. You can’t fool it with a photograph, a 3-D model of your head or even an identical twin. Thanks to the infrared camera, you can log yourself in even in the dark.