When we catch balls, Jeff Hawkins, cofounder of Numenta and author of “On Intelligence,” tells us, we aren’t solving differential equations. A robot, on the other hand, does solve differential equations, requiring roughly 3 trillion calculations for a one-second toss (“Kinematically Optimal Catching a Flying Ball with a Hand-Arm-System,” Berthold Bauml, Thomas Wimbock and Gerd Hirzinger, Institute of Robotics and Mechatronics, 2010).
There’s a big difference between intelligence and intelligent behavior, Jeff would also have you note. Deep Blue, the IBM computer that beat chess grandmaster Garry Kasparov back in 1997, and IBM’s Watson, which defeated two Jeopardy! champions back in 2011, displayed intelligent behavior, not intelligence. When you stop feeding these machines their specialized input, their intelligent behavior ceases. On the other hand, when you lie awake in the warm darkness of your bedroom with your eyes closed, your mind continues processing unabated, thinking, musing, possibly stumbling onto some deep aha moment. Part of the reason you keep ticking in the dark is that there are, in fact, pacemaker-like neurons always active in your brain.
This innate lack of intelligence is the problem underlying all of the machine architectures generating the current hype in AI. Whether it’s the work of Google’s Professor Geoff Hinton, of Professor Fei-Fei Li at the Stanford Vision Lab, or of Professor Yann LeCun at Facebook’s AI lab, the output of these machines is intelligent behavior without any actual intelligence behind it. The machines and their algorithms are as innately intelligent as the cursor at the end of this sentence.
“On Intelligence” makes the compelling case that a more proper way of building intelligent machines is by copying the salient features of the mammalian neocortex, an auto-associative, predictive, pattern-processing memory machine. The recent partnership between IBM and Numenta tells us that Jeff’s ideas have traction.
Consider the cat from the famous thought experiment by the physicist Erwin Schrödinger. The cat can be dead and alive at once, since its life depends on the quantum mechanically determined state of a radioactively decaying atom, whose decay triggers the release of toxic gas into the cat’s cage.
As long as you haven’t measured the state of the atom, you know nothing about the poor cat’s health either—atom and kitty are intimately “entangled” with each other.
Equally striking, if less well known, are the so-called squeezed quantum states: normally, Heisenberg’s uncertainty principle means that you can’t measure the values of certain pairs of physical quantities, such as the position and velocity of a quantum particle, with arbitrary precision.
Nevertheless, nature allows a barter trade: if the particle has been appropriately prepared, then one of the quantities can be measured a little more exactly if you’re willing to accept a less precise knowledge of the other quantity.
In this case the preparation of the particle is known as “squeezing” because the uncertainty in one variable is reduced (squeezed).
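In symbols (a standard textbook formulation, not taken from the article): for the two quadratures X₁ and X₂ of a quantum oscillator, which serve as dimensionless stand-ins for position and momentum, Heisenberg demands ΔX₁·ΔX₂ ≥ 1/4, and a squeezed state with squeezing parameter r > 0 redistributes the uncertainty without violating the bound:

```latex
\Delta X_1 = \tfrac{1}{2}\,e^{-r}, \qquad
\Delta X_2 = \tfrac{1}{2}\,e^{+r}, \qquad
\Delta X_1\,\Delta X_2 = \tfrac{1}{4}.
```

The product of the uncertainties stays at the Heisenberg minimum; squeezing only trades precision in one quadrature for imprecision in the other.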
Schrödinger’s cat and squeezed quantum states are both important physical phenomena that lie at the heart of promising future technologies. Researchers at ETH Zurich have succeeded in combining the two in a single experiment, which they report in Nature.
Lyme disease, swine flu, bubonic plague—many of humanity’s greatest scourges jumped from our animal co-habitants to make us sick. During an outbreak, researchers need to understand where the disease is coming from in order to effectively treat it and stop it from spreading. But with thousands of pests as possible vectors of disease, and with diseases coming from animals more frequently now than ever before, they often have difficulty doing so. Now artificial intelligence can help identify disease-carrying animals with up to 90 percent accuracy, according to a study published yesterday in the journal PNAS.
To create the algorithm, the researchers started with the 217 rodent species known to harbor pathogens that infect humans, as well as 2,000 that don't cause disease as far as we know. The researchers incorporated data from 86 criteria such as the species' geographic distribution, how quickly they reproduce, and their physiological traits such as body size. They gathered similar information for rodent-borne bugs, many of which also cause disease in humans. Then the computer crunched the data to find which combinations of traits are most indicative of disease-carrying pests.
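The workflow described here — assemble labeled species, encode traits as numeric features, fit a model, then inspect which traits drive the prediction — is standard supervised learning. Below is a minimal stand-in sketch using a tiny hand-rolled logistic regression; all data, trait names, and numbers are invented placeholders, not the study's actual model or its 86 real criteria.

```python
import math
import random

random.seed(0)

# Invented stand-in data: one row per species, three numeric traits
# (think body size, litter size, range area); label 1 = known reservoir.
def make_species(n, reservoir):
    shift = 1.0 if reservoir else -1.0
    return [([random.gauss(shift, 1.0),
              random.gauss(shift * 0.5, 1.0),
              random.gauss(0.0, 1.0)],
             1 if reservoir else 0) for _ in range(n)]

data = make_species(200, True) + make_species(200, False)
random.shuffle(data)
train, test = data[:300], data[300:]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny logistic-regression "trait cruncher", fit by gradient descent.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(100):
    for x, y in train:
        g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        w = [wi - 0.05 * g * xi for wi, xi in zip(w, x)]
        b -= 0.05 * g

accuracy = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
    for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")

# The learned weights show which traits drive the prediction.
print("trait weights:", [round(wi, 2) for wi in w])
```

Held-out accuracy on unseen species is the same kind of figure as the study's "up to 90 percent": how often the trait-based model correctly separates reservoirs from non-reservoirs.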
First of all, we (humans) consistently overreact to new technologies. Our default, evolutionary response to new things that we don't understand is to fear the worst.
Nowadays, the fear is promulgated by a flood of dystopian Hollywood movies and negative news that keeps us in fear of the future.
In the 1970s, when DNA restriction enzymes were discovered, making genetic engineering possible, the fear mongers warned the world of devastating killer engineered viruses and mutated life forms.
What we got was miracle drugs and extraordinary increases in food production.
Rather than extensive government regulations, a group of biologists, physicians and even lawyers came together at the Asilomar Conference on Recombinant DNA to discuss the potential biohazards and regulation of biotechnology and to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.
The guidelines they came up with allowed the researchers to move forward safely and continue to innovate, and we've been using them for 30 years.
The cloning of Dolly the sheep in 1997 led to prophesies that in just a few years we would have armies of cloned super-soldiers, parents implanting Einstein genes in their unborn children and warehouses of zombies being harvested for spare organs.
An artificially intelligent algorithm told me I’d enjoy both these things. I’d like the restaurant, the machine told me, because I prefer Mexican food and wine bars “with a casual atmosphere,” and the movie because “drama movies are in my digital DNA.” Besides, the title shows up around the web next to Boyhood, another film I like.
Nara Logics, the company behind this algorithm, is the brainchild (pun intended) of its CTO and cofounder, Nathan Wilson, a former research scientist at MIT who holds a doctorate in brain and cognitive science. Wilson spent his academic career and early professional life immersed in studying neural networks—software that mimics how a human mind thinks and makes connections. Nara Logics’ brain-like platform, under development for the past five years, is the product of all that thinking.
Vancouver-based architect Michael Green was unequivocal at a conference at which I heard him speak a while ago: “We grow trees in British Columbia that are 35 storeys tall, so why do our building codes restrict timber buildings to only five storeys?”
True, regulations in that part of Canada have changed relatively recently to permit an additional storey, but the point still stands. This can hardly be said to keep pace with the new manufacturing technologies and developments in engineered wood products that are causing architects and engineers to think very differently about the opportunities wood offers in the structure and construction of tall buildings.
Green himself produced a book in 2012 called Tall Wood, which explored in detail the design of 20-storey commercial buildings using engineered timber products throughout. Since then he has completed the Wood Innovation and Design Centre at the University of Northern British Columbia which, at 29.25 metres (effectively eight storeys), is currently lauded as the tallest modern timber building in North America.
The era of cognitive systems is dawning, building on today’s era of computer programming. All machines, for now, require programming, and by definition programming does not allow for scenarios that have not been programmed. To allow for unprogrammed outcomes would require going up a level and creating a self-learning Artificial Intelligence (AI) system. Via biomimicry and neuroscience, cognitive computing does exactly this, taking computing concepts to a whole new level.
Fast forward to 2011 when IBM’s Watson won Jeopardy! Google recently made a $500 million acquisition of DeepMind. Facebook recently hired NYU professor Yann LeCun, a respected pioneer in AI. Microsoft has more than 65 PhD-level researchers working on deep learning. China’s Baidu search company hired Stanford University’s AI Professor Andrew Ng. All this has a lot of people talking about deep learning. While artificial intelligence has been around for years (John McCarthy coined the term in 1955), “deep learning” is now considered cutting-edge AI that represents an evolution over primitive neural networks.
Taking a step back to set the foundation for this discussion, let me review a few of these terms.
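To put some flesh on the terms: a "neural network" is layers of simple weighted units trained by backpropagation, and "deep" learning stacks many such layers. A minimal, self-contained sketch (everything here is a toy illustration, not any company's actual system) trains a one-hidden-layer network on XOR, a function no single-layer perceptron can represent:

```python
import math
import random

random.seed(1)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: learnable with one hidden layer, impossible for a single-layer perceptron.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H, lr = 8, 0.5  # hidden units and learning rate (arbitrary toy choices)
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sig(wi[0] * x[0] + wi[1] * x[1] + bi) for wi, bi in zip(w1, b1)]
    return h, sig(sum(wj * hj for wj, hj in zip(w2, h)) + b2)

for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)                 # output error
        d_h = [d_out * wj * hj * (1 - hj) for wj, hj in zip(w2, h)]
        w2 = [wj - lr * d_out * hj for wj, hj in zip(w2, h)]
        b2 -= lr * d_out
        w1 = [[wij - lr * dh * xj for wij, xj in zip(wi, x)]
              for wi, dh in zip(w1, d_h)]
        b1 = [bi - lr * dh for bi, dh in zip(b1, d_h)]

mse = sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)
print("rounded outputs:", [round(forward(x)[1]) for x, _ in data])
print(f"mean squared error: {mse:.3f}")
```

Modern deep learning is this same gradient machinery scaled up: many more layers, vastly more data, and specialized hardware — which is what the labs named above are racing to build.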
Researchers have mimicked the way the human brain processes information by developing an electronic long-term memory cell. A team at the MicroNano Research Facility (MNRF) has built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information. The development brings them closer to imitating key electronic aspects of the human brain — a vital step toward creating a bionic brain — which could help unlock successful treatments for common neurological conditions, such as Alzheimer’s and Parkinson’s diseases.
The discovery was recently published in the materials science journal Advanced Functional Materials.
Project leader Dr. Sharath Sriram, co-leader of the RMIT Functional Materials and Microsystems Research Group, said the ground-breaking development imitates the way the brain uses long-term memory.
“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Sharath said.
“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences and, up until now, this functionality has not been able to be adequately reproduced with digital technology.”
The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.
The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film — 10,000 times thinner than a human hair.
As a species, we don’t seem to be very good at dealing with nonlinearity. We cope moderately well with situations and environments that are changing gradually. But sudden, major discontinuities – what some people call “tipping points” – leave us spooked. That’s why we are so perversely relaxed about climate change, for example: things are changing slowly, imperceptibly almost, but so far there hasn’t been the kind of sharp, catastrophic change that would lead us seriously to recalibrate our behaviour and attitudes.
So it is with information technology. We know – indeed, it has become a cliche – that computing power has been doubling at least every two years since records of these things began. We know that the amount of data now generated by our digital existence is expanding annually at an astonishing rate. We know that our capacity to store digital information has been increasing exponentially. And so on. What we apparently have not sussed, however, is that these various strands of technological progress are not unconnected. Quite the contrary, and therein lies our problem.
The thinker who has done most to explain the consequences of connectedness is a Belfast man named W Brian Arthur, an economist who was the youngest person ever to occupy an endowed chair at Stanford University and who in later years has been associated with the Santa Fe Institute, one of the world’s leading interdisciplinary research institutes. In 2009, he published a remarkable book, The Nature of Technology, in which he formulated a coherent theory of what technology is, how it evolves and how it spurs innovation and industry. Technology, he argued, “builds itself organically from itself” in ways that resemble chemistry or even organic life. And implicit in Arthur’s conception of technology is the idea that innovation is not linear but what mathematicians call “combinatorial”, ie driven by new combinations of existing components. And the significant point about combinatorial innovation is that it brings about radical discontinuities that nobody could have anticipated.
Here is the thing: whether you agree with Zoltan Istvan or not is irrelevant. What is relevant, and deeply so, is that here is a human who walks his talk, and promotes relentlessly and irreverently that which he believes in. And that which he believes in is best summed up in his own words, as an answer to my question:
Who are you, Zoltan Istvan?
“I am a human being who loves life, and I don’t want that life to end. But I believe that life will end for myself and others if people don’t do anything. So I’m doing all I can to try and preserve my life and the lives of others via science and technology.”
Dutch professor Mark Post, the scientist who made the world’s first laboratory-grown beef burger, believes so-called "cultured meat" could spell the end of traditional cattle farming within just a few decades.
A year and a half ago the professor of vascular physiology gave the world its first taste of a beef burger he'd grown from stem cells taken from cow muscle.
It passed the food critics' taste test, but at more than a quarter of a million dollars, the lab quarter-pounder was no threat to the real deal. Now, after further development, Dr Post estimates it's possible to produce lab-beef for $80 a kilo - and that within years it will be a price-competitive alternative.
From a small piece of muscle, you can produce 10,000 kilos of meat.
In 2013, it cost $325,000 to make a burger from lab-grown cultured muscle tissue cells. Now the cost is $11 for a quarter-pound lab-grown patty.
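The two price points invite a quick sanity check (the unit conversions below are mine, not the article's): $11 for a quarter-pound patty works out to roughly $97 per kilogram, consistent with Post's $80-a-kilo estimate, and a nearly 30,000-fold cost drop from 2013.

```python
# Back-of-envelope check on the quoted prices (conversion factors are mine).
patty_kg = 0.25 * 0.4536            # a quarter pound in kilograms (~0.113 kg)
cost_2013 = 325_000                 # dollars for the 2013 demonstration burger
cost_now = 11                       # dollars for a patty today, per the article

per_kilo_now = cost_now / patty_kg  # implied price per kilogram
reduction = cost_2013 / cost_now    # cost drop factor since 2013

print(f"implied price today: ~${per_kilo_now:.0f} per kg")
print(f"cost reduction since 2013: ~{reduction:,.0f}x")
```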
Artificial Intelligence has always been hyped by its often-charismatic enthusiasts; lately it seems that the hype may be coming true. Media pundits, technologists, and increasingly the broader public see the rise of artificial intelligence as inevitable. Companies that feed massive data into AI systems—like Google, Facebook, Yahoo, and others—make headlines with technological successes that were science fiction even a decade ago: we can talk to our phones, get recommendations personalized to our interests, and may soon even ride around in cars driven by computers. The world has changed, and AI is a big part of why.
As one might expect, pundits and technologists talk about the “AI revolution” in the most glowing terms, equating advances in computer tech with advances in humanity: standards of living, access to knowledge, and a spate of emerging systems and applications ranging from improved hearing and vision aids for the impaired, to the cheaper manufacture of goods, to better recommendations from Amazon, Netflix, Pandora, and others. Artificial Intelligence is a measuring rod for progress, scientifically, technologically, and even socially.
Dr Hannah Critchlow strips down the brain. Using radio, TV and festival platforms she designs, produces and presents brainy interactive experiences for the public. She has featured on BBC, Sky and ITV channels and presented live events to over 30,000 people across the globe. She is a neuroscience public engagement professor at Cambridge University.
In 2014 Hannah was named as a Top 100 UK scientist by the Science Council for her work in science communication.
Speaking to the UK’s Telegraph, Dr Hannah Critchlow said that if a computer could be built to recreate the 100 trillion connections in the brain, then it would be possible to exist inside a programme.
Dr Critchlow, who spoke at the Hay Festival on ‘busting brain myths’, said that although the brain was enormously complex, it worked like a large circuit board and scientists were beginning to understand the function of each part.
Asked if it would be possible one day to download consciousness onto a machine, she said: “If you had a computer that could make those 100 trillion circuit connections then that circuit is what makes us us, and so, yes, it would be possible.
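The scale of that number is worth pausing on (the arithmetic below is mine, not Critchlow's): merely listing 100 trillion connections, before simulating any of their dynamics, is a storage problem measured in hundreds of terabytes.

```python
# Rough storage estimate for mapping 100 trillion connections
# (my own back-of-envelope figures, not Critchlow's).
connections = 100e12
for bytes_per_connection in (1, 4):  # 1 byte = bare link, 4 bytes = weighted
    total_tb = connections * bytes_per_connection / 1e12
    print(f"{bytes_per_connection} byte(s)/connection -> ~{total_tb:,.0f} TB")
```

And that is only the wiring diagram; recreating "what makes us us" would also mean capturing how each connection changes over time.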
Considering machines that think is a nice step forward in the AI debate as it departs from our own human-based concerns, and accords machines otherness in a productive way. It causes us to consider the other entity’s frame of reference. However, even more importantly this questioning suggests a large future possibility space for intelligence.
There could be “classic” unenhanced humans, enhanced humans (with nootropics, wearables, brain-computer interfaces), neocortical simulations, uploaded mind files, corporations as digital abstractions, and many forms of generated AI: deep learning meshes, neural networks, machine learning clusters, blockchain-based distributed autonomous organizations, and empathic compassionate machines. We should consider the future world as one of multi-species intelligence.
What we call the human function of “thinking” could be quite different in the variety of possible future implementations of intelligence. The derivation of different species of machine intelligence will necessarily be different from that of humans.
In humans, embodiment and emotion as a short-cut heuristic for the fight-or-flight response and beyond have been important elements influencing human thinking. Machines will not have the evolutionary biology legacy of being driven by resource acquisition, status garnering, mate selection, and group acceptance, at least in the same way.
The smallest and rarest marine dolphin in the world could be extinct within 15 years if protection is not stepped up, new research suggests.
Conservationists say the remaining population of Maui's dolphins has dropped below 50.
The critically endangered species is found only in waters off New Zealand.
Measures to prevent dolphins dying in fishing nets must be extended, according to the German conservation organisation Nabu.
Fishing should be banned across the dolphin's entire habitat rather than only limited areas, they say.
According to new estimates just 43-47 individuals, including about 10 mature females, are left.
The study is being presented at a meeting of the scientific committee of the International Whaling Commission (IWC) in San Diego, US.
More than 200 experts are attending the annual event.
"These new figures are a loud wakeup call: New Zealand has to abandon its current stance, which places the interests of the fishing industry above biodiversity conservation, and finally protect the dolphins' habitat from harmful fishing nets, seismic airgun blasts and oil and gas extraction," said Dr Barbara Maas, Nabu's head of endangered species conservation.
A host of supersmart, self-healing technologies could someday make “normal wear and tear” a bygone expense. By definition, these materials—either intrinsically or with the aid of an outside agent—can mend broken molecular bonds without human intervention.
When scientists created the first thermoset elastomer that could repair itself at room temperature without a catalyst, in 2013, they made history. Dubbed “The Terminator”, after the famous Arnold Schwarzenegger films, this advanced polymer and other self-healing materials provide a glimpse into the future of industrial design.
SAN FRANCISCO—Personal computers, cellphones, self-driving cars—Gordon Moore predicted the invention of all these technologies half a century ago in a 1965 article for Electronics magazine. The enabling force behind those inventions would be computing power, and Moore laid out how he thought computing power would evolve over the coming decade. Last week the tech world celebrated his prediction here because it has held true with uncanny accuracy—for the past 50 years.
It is now called Moore’s law, although Moore (who co-founded the chip maker Intel) doesn’t much like the name. “For the first 20 years I couldn’t utter the term Moore’s law. It was embarrassing,” the 86-year-old visionary said in an interview with New York Times columnist Thomas Friedman at the gala event, held at the Exploratorium science museum. “Finally, I got accustomed to it where now I could say it with a straight face.” He and Friedman chatted in front of a rapt audience, with Moore cracking jokes the whole time and doling out advice, like how once you’ve made one successful prediction, you should avoid making another. In the background Intel’s latest gadgets whirred quietly: collision-avoidance drones, dancing spider robots, a braille printer—technologies all made possible via advances in processing power anticipated by Moore’s law.
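Moore's observation reduces to a simple doubling rule. Here is a sketch under the commonly cited doubling period of two years (Moore's original 1965 paper projected a doubling every year; he later revised the period):

```python
# Exponential growth under Moore's law (illustrative parameters).
def moores_law(start_count, years, doubling_period=2.0):
    """Projected transistor count after `years` of repeated doubling."""
    return start_count * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors (the Intel 4004, 1971), 50 years out:
projected = moores_law(2_300, 50)
print(f"~{projected:,.0f} transistors")  # roughly 77 billion
```

That projection lands in the tens of billions, the scale of today's largest chips — which is why the 50-year accuracy celebrated at the gala is so striking.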
It's popular to talk about how wonderful, beautiful and rare a treasure our planet is; I certainly say such things all the time, and many satellite, Instagram and Pinterest photos testify to this truism. But let's be real for a minute, my fellow humans and A.I. beings -- we don't really have firsthand experience with an adequate sample size of habitable planets to say this for sure.
In fact, a pair of scientists have been looking into the possibility that there might be a distant planet (or a couple of them or maybe 3 billion) out there more suitable to supporting life as we know it. They even describe what such a "superhabitable" planet might look like -- a super-Earth with a mass double or triple that of our planet, orbiting in the habitable zone around a K-type dwarf star several billion years older than our sun.
The basic explanation for why such a planet would make a "better Earth" is that it might have a long-lasting magnetic field, which protects the planet from the abundant radiation of space and stars, and plate tectonics activity, which keeps some of the key life-supporting elements in balance. Also, a planet with double or triple the mass of Earth would mean more surface gravity, likely forming more shallow lakes and oceans, more archipelago-like land masses and fewer deserts. More shallow waters might mean more biodiversity, as they typically do here on our planet.
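The surface-gravity point follows from g = G·M/R² once you assume how radius scales with mass. The sketch below uses R ∝ M^0.27, a commonly quoted empirical rule of thumb for rocky planets; that exponent is my assumption, not a figure from the paper.

```python
# Surface gravity of a rocky super-Earth relative to Earth, from g = G*M/R^2.
# Assumes radius scales as M**0.27 (an empirical rule of thumb for rocky
# planets -- this exponent is my assumption, not the paper's).
def relative_surface_gravity(mass_in_earths, radius_exponent=0.27):
    radius = mass_in_earths ** radius_exponent   # in Earth radii
    return mass_in_earths / radius ** 2          # in Earth gravities

for m in (2, 3):
    print(f"{m}x Earth mass -> {relative_surface_gravity(m):.2f}x surface gravity")
```

So doubling or tripling the mass raises surface gravity only about 40 to 65 percent — enough to flatten topography toward the shallow seas and archipelagos the authors describe, without crushing life.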
Last week, scientists from University College London released a paper presenting evidence that men and women in early society lived in relative equality. The paper challenges much of our understanding of human history, a fact not lost on the scientists. Mark Dyble, the study’s lead author, stated “sexual equality is one of the important changes that distinguishes humans. It hasn’t really been highlighted before.”
Despite Dyble’s comments, however, this paper isn’t the first foray into the issue. In fact, it represents another shot fired in a debate between scientific and anthropological communities that has been raging for centuries. It’s a debate that asks some fundamental questions: who are we, and how did we become the society we are today?
Our modern picture of prehistoric societies, or what we can call the “standard narrative of prehistory” looks a lot like The Flintstones. The narrative goes that we have always lived in nuclear families. Men have always gone out to work or hunt, while women stayed at home to look after the house and the children. The nuclear family and the patriarchy are as old as society itself.
The narrative is multifaceted, but has strong roots in biological science, which can probably be traced back to Charles Darwin’s theory of sexual selection. Darwin’s premise was that, due to their need to carry and nurture a child, women have a greater investment in offspring than men. Women are therefore significantly more hesitant to participate in sexual activity, creating conflicting sexual agendas between the two genders.
The UN estimates that 4.3 billion people do not use the Internet, mostly because the cost is prohibitive or their area lacks the infrastructure. Outernet’s free broadcast could give many of those people a way to access useful online information relatively quickly, says Karim. The World Bank has agreed to help roll out Pillar devices in South Sudan as a way to distribute educational material to schools. Teachers and pupils will still need to have devices or printers to make use of that information, though.
The designs and software for Outernet’s Pillar devices are freely available so people or companies can make their own versions. They currently cost around $150 to make, but that should fall below $100 once they are being made in larger numbers, says Karim.
Outernet is also working on a portable solar-powered receiver called Lantern. It can be hooked up to a dish to pick up Outernet’s existing signal and also has a built-in antenna designed to pick up a different kind of satellite signal that Outernet aims to switch on this summer. The company has taken orders for more than 5,000 Lantern devices. It has a grant from the U.K. Space Agency to have three small satellites made dedicated to broadcasting the Lantern signal. The first satellites and portable Lantern receiver devices are expected to be ready late this year.
Speaking at the 2015 TED conference in Vancouver, Canada, MIT professor Neri Oxman displayed what is claimed to be the world’s first 3D-printed photosynthetic wearable prototype embedded with living matter. Dubbed "Mushtari," the wearable is constructed from 58 meters (190 ft) of 3D-printed tubes coiled into a mass that emulates the construction of the human gastrointestinal tract. Filled with living bacteria designed to fluoresce and produce sugars or bio-fuel when exposed to light, Mushtari is a vision of a possible future where symbiotic human/microorganism relationships may help us explore other worlds in space.
By their second birthday, children are learning the names of things. What’s this? Cat. And this? Whale. Very good. What’s this color? Red! That’s right. You love red. The human brain is good at making some cognitive tasks look easy—when they aren’t easy at all. Teaching software to recognize objects, for example, has been a challenge in computer science. And up until a few years ago, computers were pretty terrible at it.
However, like many things once the sole domain of humans, machines are rapidly improving their ability to slice and dice, sort and name objects like we do.
Earlier this year, Microsoft revealed its image recognition software was wrong just 4.94% of the time—it was the first to beat an expert human error rate of 5.1%. A month later, Google reported it had achieved a rate of 4.8%.
Now, Chinese search engine giant Baidu says its specialized supercomputer, Minwa, has bested Google with an error rate of 4.58%. Put another way, these programs can correctly recognize everyday stuff over 95% of the time. That’s amazing.
And how AI researchers got to this point is equally impressive.
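For context, those percentages are top-5 error rates on the ImageNet benchmark: an answer counts as correct if the true label appears among the model's five best guesses. A minimal sketch of the metric (the labels and guesses below are invented):

```python
# Top-5 error: an image is wrong only if the true label is absent
# from the model's five highest-ranked guesses.
def top5_error(true_labels, ranked_predictions):
    misses = sum(1 for truth, ranked in zip(true_labels, ranked_predictions)
                 if truth not in ranked[:5])
    return misses / len(true_labels)

# Invented example: 4 images, each with model guesses ranked best-first.
truths = ["cat", "whale", "red wine", "burrito"]
guesses = [
    ["dog", "cat", "fox", "wolf", "lynx"],           # hit (rank 2)
    ["whale", "shark", "dolphin", "seal", "orca"],   # hit (rank 1)
    ["cola", "grape juice", "beer", "cider", "tea"], # miss
    ["taco", "wrap", "burrito", "kebab", "gyro"],    # hit (rank 3)
]
print(f"top-5 error: {top5_error(truths, guesses):.2%}")  # 25.00%
```

The reported 4.94%, 4.8%, and 4.58% figures are this same calculation run over tens of thousands of validation images.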
Fabrication size limits — one of the barriers to using graphene on a commercial scale — could be overcome using a new method developed by researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL).
Graphene, a one-atom-thick material that is about 100 times stronger than steel by weight, has enormous commercial potential but has been impractical to employ on a large scale, mainly because of size limits and expense.
Now, using chemical vapor deposition, a team led by ORNL’s Ivan Vlassiouk has fabricated polymer laminate (layered) composites containing 2-inch-by-2-inch graphene sheets created from large continuous sheets of single-layer graphene. They were also able to produce graphene-based fibers.
The idea that teachers should teach and students should listen presumes that teachers know more than their students.
While this was generally true back when textbooks were a rarity, and may have been partly true since the invention of the public library, it is most likely untrue for at least many students in this era of the “active learner” (AKA “digital natives”).
After all, with a smartphone in every student’s pocket and Google only a tap away, how can the humble sage expect to compete as the font of all online knowledge?