SAN FRANCISCO—Personal computers, cellphones, self-driving cars—Gordon Moore predicted the invention of all these technologies half a century ago in a 1965 article for Electronics magazine. The enabling force behind those inventions would be computing power, and Moore laid out how he thought computing power would evolve over the coming decade. Last week the tech world celebrated his prediction here because it has held true with uncanny accuracy—for the past 50 years.
It is now called Moore’s law, although Moore (who co-founded the chip maker Intel) doesn’t much like the name. “For the first 20 years I couldn’t utter the term Moore’s law. It was embarrassing,” the 86-year-old visionary said in an interview with New York Times columnist Thomas Friedman at the gala event, held at the Exploratorium science museum. “Finally, I got accustomed to it where now I could say it with a straight face.” He and Friedman chatted in front of a rapt audience, with Moore cracking jokes the whole time and doling out advice, like how once you’ve made one successful prediction, you should avoid making another. In the background Intel’s latest gadgets whirred quietly: collision-avoidance drones, dancing spider robots, a braille printer—technologies all made possible via advances in processing power anticipated by Moore’s law.
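Moore's law is commonly quoted today as transistor counts doubling roughly every two years. A minimal sketch of what that compounding implies, using the roughly 2,300 transistors of the 1971 Intel 4004 as an illustrative baseline (the two-year doubling period and the baseline are stated assumptions, not figures from this article):

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor count under an assumed fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Fifty years of doubling every two years is 2**25, a ~33-million-fold increase.
growth = transistors(2021) / transistors(1971)
print(round(growth))  # prints 33554432
```

The point of the sketch is the compounding itself: the same modest-sounding rule, applied for five decades, spans the gap between early microprocessors and today's chips.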
It's popular to talk about how wonderful, beautiful and rare a treasure our planet is; I certainly say such things all the time, and many satellite, Instagram and Pinterest photos testify to this truism. But let's be real for a minute, my fellow humans and A.I. beings -- we don't really have firsthand experience with an adequate sample size of habitable planets to say this for sure.
In fact, a pair of scientists have been looking into the possibility that there might be a distant planet (or a couple of them or maybe 3 billion) out there more suitable to supporting life as we know it. They even describe what such a "superhabitable" planet might look like -- a super-Earth with a mass double or triple that of our planet, orbiting in the habitable zone around a K-type dwarf star several billion years older than our sun.
The basic explanation for why such a planet would make a "better Earth" is that it might have a long-lasting magnetic field, which protects the planet from the abundant radiation of space and stars, and plate tectonic activity, which keeps some of the key life-supporting elements in balance. Also, a planet with double or triple the mass of Earth would have more surface gravity, likely making for shallower lakes and oceans, more archipelago-like land masses and fewer deserts. Shallower waters might mean more biodiversity, as they typically do here on our planet.
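The surface-gravity claim can be roughed out from g ∝ M/R², given a mass-radius scaling for rocky super-Earths. The exponent used here (R ∝ M^0.27) is a commonly cited fit for rocky planets, not a figure from the article, so treat the numbers as a sketch:

```python
def surface_gravity_rel(mass_rel, radius_exp=0.27):
    """Surface gravity relative to Earth: g scales as M / R**2, with the
    rocky-planet radius assumed to scale as R ~ M**radius_exp."""
    radius_rel = mass_rel ** radius_exp
    return mass_rel / radius_rel ** 2

for m in (2, 3):
    print(f"{m}x Earth mass -> {surface_gravity_rel(m):.2f}x Earth surface gravity")
```

Because radius grows slowly with mass for rocky planets, gravity rises noticeably (roughly 1.4x Earth at double the mass under this assumption), which is what drives the flatter terrain and shallower seas the scientists describe.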
Last week, scientists from University College London released a paper presenting evidence that men and women in early society lived in relative equality. The paper challenges much of our understanding of human history, a fact not lost on the scientists. Mark Dyble, the study’s lead author, stated “sexual equality is one of the important changes that distinguishes humans. It hasn’t really been highlighted before.”
Despite Dyble’s comments, however, this paper isn’t the first foray into the issue. In fact, it represents another shot fired in a debate between scientific and anthropological communities that has been raging for centuries. It’s a debate that asks some fundamental questions: who are we, and how did we become the society we are today?
Our modern picture of prehistoric societies, or what we can call the “standard narrative of prehistory” looks a lot like The Flintstones. The narrative goes that we have always lived in nuclear families. Men have always gone out to work or hunt, while women stayed at home to look after the house and the children. The nuclear family and the patriarchy are as old as society itself.
The narrative is multifaceted, but has strong roots in biological science, which can probably be traced back to Charles Darwin’s theory of sexual selection. Darwin’s premise was that, due to their need to carry and nurture a child, women have a greater investment in offspring than men. Women are therefore significantly more hesitant to participate in sexual activity, creating conflicting sexual agendas between the two genders.
The UN estimates that 4.3 billion people do not use the Internet, mostly because the cost is prohibitive or their area lacks the infrastructure. Outernet’s free broadcast could give many of those people a way to access useful online information relatively quickly, says Karim. The World Bank has agreed to help roll out Pillar devices in South Sudan as a way to distribute educational material to schools. Teachers and pupils will still need to have devices or printers to make use of that information, though.
The designs and software for Outernet’s Pillar devices are freely available so people or companies can make their own versions. They currently cost around $150 to make, but that should fall below $100 once they are being made in larger numbers, says Karim.
Outernet is also working on a portable solar-powered receiver called Lantern. It can be hooked up to a dish to pick up Outernet’s existing signal and also has a built-in antenna designed to pick up a different kind of satellite signal that Outernet aims to switch on this summer. The company has taken orders for more than 5,000 Lantern devices. It has a grant from the U.K. Space Agency to have three small satellites made dedicated to broadcasting the Lantern signal. The first satellites and portable Lantern receiver devices are expected to be ready late this year.
Speaking at the 2015 TED conference in Vancouver, Canada, MIT professor Neri Oxman displayed what is claimed to be the world’s first 3D-printed photosynthetic wearable prototype embedded with living matter. Dubbed "Mushtari," the wearable is constructed from 58 meters (190 ft) of 3D-printed tubes coiled into a mass that emulates the construction of the human gastrointestinal tract. Filled with living bacteria designed to fluoresce and produce sugars or bio-fuel when exposed to light, Mushtari is a vision of a possible future where symbiotic human/microorganism relationships may help us explore other worlds in space.
By their second birthday, children are learning the names of things. What’s this? Cat. And this? Whale. Very good. What’s this color? Red! That’s right. You love red. The human brain is good at making some cognitive tasks look easy—when they aren’t easy at all. Teaching software to recognize objects, for example, has been a challenge in computer science. And up until a few years ago, computers were pretty terrible at it.
However, like many things once the sole domain of humans, machines are rapidly improving their ability to slice and dice, sort and name objects like we do.
Earlier this year, Microsoft revealed its image recognition software was wrong just 4.94% of the time—it was the first to beat an expert human error rate of 5.1%. A month later, Google reported it had achieved a rate of 4.8%.
Now, Chinese search engine giant Baidu says its specialized supercomputer, Minwa, has bested Google with an error rate of 4.58%. Put another way, these programs can correctly recognize everyday stuff over 95% of the time. That’s amazing.
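The error figures convert directly into the "percent correct" framing. A quick sketch using the numbers from the text (the underlying benchmark, commonly the ImageNet challenge, is an assumption of context, not named in the article):

```python
# Reported recognition error rates (%); lower is better.
error_rates = {
    "human expert": 5.1,
    "Microsoft": 4.94,
    "Google": 4.8,
    "Baidu Minwa": 4.58,
}

# Rank from best to worst and report accuracy instead of error.
for name, err in sorted(error_rates.items(), key=lambda kv: kv[1]):
    print(f"{name}: {100 - err:.2f}% correct")
```

Framed as accuracy, the gaps look tiny, which is the point: each fraction of a percent below the human baseline is hard-won.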
And how AI researchers got to this point is equally impressive.
Fabrication size limits — one of the barriers to using graphene on a commercial scale — could be overcome using a new method developed by researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL).
Graphene, a one-atom-thick material that is about 100 times stronger than steel by weight, has enormous commercial potential but has been impractical to employ on a large scale, mainly because of size limits and expense.
Now, using chemical vapor deposition, a team led by ORNL’s Ivan Vlassiouk has fabricated polymer laminate (layered) composites containing 2-inch-by-2-inch graphene sheets created from large continuous sheets of single-layer graphene. They were also able to produce graphene-based fibers.
The idea that teachers should teach and students should listen presumes that teachers know more than their students.
While this was generally true back when textbooks were a rarity, and may have been partly true since the invention of the public library, it is most likely untrue for at least many students in this era of the “active learner” (AKA the “digital native”).
After all, with a smartphone in every student’s pocket and Google only a tap away, how can the humble sage expect to compete as the font of all online knowledge?
“We attract weird people,” Andrew Snyder-Beattie said. “I get crazy emails in my inbox all the time.” What kinds of people? “People who have their own theories of physics.” Snyder-Beattie is the project manager at the Future of Humanity Institute. Headed up by Nick Bostrom, the Swedish philosopher famous for popularizing the risks of artificial intelligence, the FHI is part of the Oxford Martin School, created when a computer billionaire gave the largest donation in Oxford University’s 900-year history to set up a place to solve some of the world’s biggest problems. One of Bostrom’s research papers (pdf, p. 26) noted that more academic research has been done on dung beetles and Star Trek than on human extinction. The FHI is trying to change that.
An international research team has reconstructed anatomically realistic 3D models of cortical columns of the rat brain, providing unprecedented insight into how neurons in the elementary functional units of the sensory cortex called cortical columns are interconnected.
The models suggest that cortical circuitry interconnects most neurons across cortical columns rather than within them, and that these “trans-columnar” networks are not uniformly structured: they are highly specialized and integrate signals from multiple sensory receptors.
For example, rodents are nocturnal animals that use facial whiskers as their primary sensory receptors to orient themselves in their environment. To determine the position, size and texture of objects, they rhythmically move the whiskers back and forth, thereby exploring and touching objects within their immediate surroundings. Such tactile sensory information is then relayed from the periphery to the sensory cortex via whisker-specific neuronal pathways, where each individual whisker activates neurons located within a dedicated cortical column. This one-to-one correspondence between a facial whisker and a cortical column makes the rodent vibrissal system an ideal model for investigating the structural and functional organization of cortical columns.
“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking warns. Elon Musk fears that the development of artificial intelligence, or AI, may be the biggest existential threat humanity faces. Bill Gates urges people to beware of it.
Dread that the abominations people create will become their masters, or their executioners, is hardly new. But voiced by a renowned cosmologist, a Silicon Valley entrepreneur and the founder of Microsoft—hardly Luddites—and set against the vast investment in AI by big firms like Google and Microsoft, such fears have taken on new weight. With supercomputers in every pocket and robots looking down on every battlefield, just dismissing them as science fiction seems like self-deception. The question is how to worry wisely.
You taught me language and...
The first step is to understand what computers can now do and what they are likely to be able to do in the future. Thanks to the rise in processing power and the growing abundance of digitally available data, AI is enjoying a boom in its capabilities (see article). Today’s “deep learning” systems, by mimicking the layers of neurons in a human brain and crunching vast amounts of data, can teach themselves to perform some tasks, from pattern recognition to translation, almost as well as humans can. As a result, things that once called for a mind—from interpreting pictures to playing the video game “Frogger”—are now within the scope of computer programs. DeepFace, an algorithm unveiled by Facebook in 2014, can recognise individual human faces in images 97% of the time.
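The "layers of neurons" idea can be made concrete with a deliberately tiny, untrained sketch of a forward pass through a two-layer network. Nothing here is drawn from the systems the article names (DeepFace and the like use far larger networks and learned weights); the random weights are purely illustrative:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One dense layer: each neuron takes a weighted sum of all inputs,
    adds a bias, and passes the result through a nonlinearity (tanh)."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A toy 3-input -> 4-hidden -> 2-output network with random, untrained weights.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = layer([0.5, -0.2, 0.1], w1, [0.0] * 4)
output = layer(hidden, w2, [0.0] * 2)
print(output)  # two values in (-1, 1); training would tune w1/w2 to fit data
```

"Deep" learning simply stacks many such layers, and the "teach themselves" part is an optimization loop that nudges the weights until the outputs match labeled examples.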
This four-second time-lapse photo of a Los Angeles freeway illustrates the complexities of decision-making, as one driver appears to have made a late change of mind while most drivers decided in advance whether to stay on the main road or take an exit ramp. (Photo: Susanica Tam)
Mornings on the International Space Station (ISS) got a bit brighter as the first cup of espresso coffee in space was brewed and drunk on the station by Italian astronaut Samantha Cristoforetti. To celebrate, Cristoforetti tweeted back to Earth a photo of her imbibing the brew, saying, "'Coffee: the finest organic suspension ever devised.' Fresh espresso in the new Zero-G cup! To boldly brew…"
It may not have been the first Moon landing, but history was made on May 3 at 12:44 GMT as the ISSpresso capsule espresso machine hissed and bubbled out its first cuppa. Designed and built as a joint effort between aerospace engineering firm Argotec and coffee company Lavazza with the help of the Italian Space Agency (ASI), the ISSpresso uses standard Lavazza coffee capsules, but the makers say that it is otherwise a pretty serious piece of space engineering, which Cristoforetti installed in one of the station's modules.
ISSpresso isn't like a terrestrial espresso machine. It had to be redesigned to take into account the peculiar fluid dynamics of zero gravity, as well as engineered to meet space station safety requirements, such as using steel instead of plastic tubing. According to Lavazza, this makes the ISSpresso more than a publicity stunt and it is officially listed as one of the nine experiments selected by ASI for Cristoforetti’s Futura Mission to the station.
Vancouver-based architect Michael Green was unequivocal at a conference at which I heard him speak a while ago: “We grow trees in British Columbia that are 35 storeys tall, so why do our building codes restrict timber buildings to only five storeys?”
True, regulations in that part of Canada have changed relatively recently to permit an additional storey, but the point still stands. This can hardly be said to keep pace with the new manufacturing technologies and developments in engineered wood products that are causing architects and engineers to think very differently about the opportunities wood offers in the structure and construction of tall buildings.
Green himself produced a book in 2012 called Tall Wood, which explored in detail the design of 20-storey commercial buildings using engineered timber products throughout. Since then he has completed the Wood Innovation and Design Centre at the University of Northern British Columbia which, at 29.25 metres (effectively eight storeys), is currently lauded as the tallest modern timber building in North America.
The era of cognitive systems is dawning, building on today’s computer programming era. All machines, for now, require programming, and by definition programming does not allow for scenarios that have not been programmed. Allowing for alternative outcomes would require going up a level and creating a self-learning artificial intelligence (AI) system. Via biomimicry and neuroscience, cognitive computing does exactly this, taking computing concepts to a whole new level.
Fast forward to 2011 when IBM’s Watson won Jeopardy! Google recently made a $500 million acquisition of DeepMind. Facebook recently hired NYU professor Yann LeCun, a respected pioneer in AI. Microsoft has more than 65 PhD-level researchers working on deep learning. China’s Baidu search company hired Stanford University’s AI Professor Andrew Ng. All this has a lot of people talking about deep learning. While artificial intelligence has been around for years (John McCarthy coined the term in 1955), “deep learning” is now considered cutting-edge AI that represents an evolution over primitive neural networks.
Taking a step back to set the foundation for this discussion, let me review a few of these terms.
Researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell. Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information. The development brings them closer to imitating key electronic aspects of the human brain — a vital step toward creating a bionic brain — which could help unlock successful treatments for common neurological conditions, such as Alzheimer’s and Parkinson’s diseases.
The discovery was recently published in the materials science journal Advanced Functional Materials.
Project leader Dr. Sharath Sriram, co-leader of the RMIT Functional Materials and Microsystems Research Group, said the ground-breaking development imitates the way the brain uses long-term memory.
“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Sharath said.
“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences and, up until now, this functionality has not been able to be adequately reproduced with digital technology.”
The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.
The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film — 10,000 times thinner than a human hair.
As a species, we don’t seem to be very good at dealing with nonlinearity. We cope moderately well with situations and environments that are changing gradually. But sudden, major discontinuities – what some people call “tipping points” – leave us spooked. That’s why we are so perversely relaxed about climate change, for example: things are changing slowly, imperceptibly almost, but so far there hasn’t been the kind of sharp, catastrophic change that would lead us seriously to recalibrate our behaviour and attitudes.
So it is with information technology. We know – indeed, it has become a cliche – that computing power has been doubling at least every two years since records of these things began. We know that the amount of data now generated by our digital existence is expanding annually at an astonishing rate. We know that our capacity to store digital information has been increasing exponentially. And so on. What we apparently have not sussed, however, is that these various strands of technological progress are not unconnected. Quite the contrary, and therein lies our problem.
The thinker who has done most to explain the consequences of connectedness is a Belfast man named W Brian Arthur, an economist who was the youngest person ever to occupy an endowed chair at Stanford University and who in later years has been associated with the Santa Fe Institute, one of the world’s leading interdisciplinary research institutes. In 2009, he published a remarkable book, The Nature of Technology, in which he formulated a coherent theory of what technology is, how it evolves and how it spurs innovation and industry. Technology, he argued, “builds itself organically from itself” in ways that resemble chemistry or even organic life. And implicit in Arthur’s conception of technology is the idea that innovation is not linear, but what mathematicians call “combinatorial”, i.e. driven by the combination of a whole bunch of existing things. And the significant point about combinatorial innovation is that it brings about radical discontinuities that nobody could have anticipated.
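The gap between linear and combinatorial growth can be made concrete with a toy count (this is an illustration of the general principle, not Arthur's own model): with n building-block technologies, the number of possible two-part combinations grows quadratically, and the number of multi-part combinations grows exponentially.

```python
from math import comb

# pairs: combinations of exactly two technologies
# multi: subsets of size two or more (all ways to combine existing pieces)
for n in (10, 20, 40):
    pairs = comb(n, 2)
    multi = 2 ** n - n - 1
    print(f"{n} technologies -> {pairs:,} pairs, {multi:,} multi-part combinations")
```

Doubling the number of available components does far more than double the space of possible inventions, which is one way to see why combinatorial innovation keeps producing discontinuities that catch forecasters out.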
Here is the thing: whether you agree with Zoltan Istvan or not is irrelevant. What is relevant, and deeply so, is that here is a human who walks his talk, and promotes relentlessly and irreverently that which he believes in. And that which he believes in is best summed up in his own words, as an answer to my question:
Who are you Zoltan Istvan?
“I am a human being who loves life, and I don't want that life to end. But I believe that life will end for myself and others if people don't do anything. So I'm doing all I can do to try and preserve my life and the lives of others via science and technology.”
Dutch professor Mark Post, the scientist who made the world's first laboratory-grown beef burger, believes so-called "cultured meat" could spell the end of traditional cattle farming within just a few decades.
A year and a half ago the professor of vascular physiology gave the world its first taste of a beef burger he'd grown from stem cells taken from cow muscle.
It passed the food critics' taste test, but at more than a quarter of a million dollars, the lab quarter-pounder was no threat to the real deal. Now, after further development, Dr Post estimates it's possible to produce lab-beef for $80 a kilo - and that within years it will be a price-competitive alternative.
From a small piece of muscle you can produce 10,000 kilos of meat.
In 2013, it cost $325,000 to make lab grown meat for a burger made from cultured muscle tissue cells. Now the cost is $11 for a quarter pound lab grown patty.
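Taken at face value, those two price points imply a remarkably steep cost curve. A back-of-envelope sketch (the two-year interval between the figures is an assumption based on the article's dates, and the per-patty prices are as quoted):

```python
import math

cost_2013 = 325_000.0  # dollars for the 2013 demonstration burger
cost_now = 11.0        # dollars for a quarter-pound lab-grown patty now
years = 2              # assumed interval between the two figures

fold = cost_2013 / cost_now
halvings = math.log2(fold)
print(f"~{fold:,.0f}-fold cheaper: cost halved about {halvings:.0f} times, "
      f"roughly every {years * 12 / halvings:.1f} months")
```

A roughly 30,000-fold drop is faster than almost any manufacturing learning curve, which suggests the 2013 figure reflected one-off research costs rather than production economics; even so, the trend supports Post's claim that price-competitive lab beef is plausible within years.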
Artificial Intelligence has always been hyped by its often-charismatic enthusiasts; lately it seems that the hype may be coming true. Media pundits, technologists, and increasingly the broader public see the rise of artificial intelligence as inevitable. Companies with mass data fed into AI systems—like Google, Facebook, Yahoo, and others—make headlines with technological successes that were science fiction even a decade ago: We can talk to our phones, get recommendations that are personalized to our interests, and may even ride around in cars driven by computers soon. The world has changed, and AI is a big part of why.
As one might expect, pundits and technologists talk about the “AI revolution” in the most glowing terms, equating advances in computer tech with advances in humanity: standards of living, access to knowledge, and a spate of emerging systems and applications ranging from improved hearing and vision aids for the impaired, to the cheaper manufacture of goods, to better recommendations from Amazon, Netflix, Pandora, and others. Artificial Intelligence is a measuring rod for progress, scientifically, technologically, and even socially.
As mentioned in Part 1, movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn’t something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future. Due to something called cognitive biases, we have a hard time believing something is real until we see proof. I’m sure computer scientists in 1988 were regularly talking about how big a deal the internet was likely to be, but people probably didn’t really think it was going to change their lives until it actually changed their lives. This is partially because computers just couldn’t do stuff like that in 1988, so people would look at their computer and think, “Really? That’s gonna be a life changing thing?” Their imaginations were limited to what their personal experience had taught them about what a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. We hear that it’s gonna be a big deal, but because it hasn’t happened yet, and because of our experience with the relatively impotent AI in our current world, we have a hard time really believing this is going to change our lives dramatically. And those biases are what experts are up against as they frantically try to get our attention through the noise of collective daily self-absorption. Even if we did believe it—how many times today have you thought about the fact that you’ll spend most of the rest of eternity not existing? Not many, right? Even though it’s a far more intense fact than anything else you’re doing today? This is because our brains are normally focused on the little things in day-to-day life, no matter how crazy a long-term situation we’re a part of. It’s just how we’re wired.
Argentina-based designer Santiago Muros Cortés just unveiled plans for a gigantic solar energy generating hourglass that could produce enough electricity for 1,000 Danish homes while serving as a visual reminder that there's still time to stop climate change - if we act now. Planned for an industrial brownfield site across the harbor from Copenhagen's iconic Little Mermaid statue, the artwork showcases concentrated solar power technology while engaging the public. Cortés' Solar Hourglass just won the 2014 Land Art Generator Initiative design competition, and if constructed, it will be an amazing new tourist attraction for the Danish capital.
It is curious that, although the modern theory of evolution has its source in Charles Darwin’s great book On the Origin of Species (1859), the word evolution does not appear in the original text at all. In fact, Darwin seems deliberately to have avoided using the word evolution, preferring to refer to the process of biological change as ‘transmutation’. Some of the reasons for this, and for continuing confusion about the word evolution in the succeeding century and a half, can be unpacked from the word’s entry in the Oxford English Dictionary (OED).

Evolution before Darwin
The word evolution first arrived in English (and in several other European languages) from an influential treatise on military tactics and drill, written in Greek by the second-century writer Aelian (Aelianus Tacticus). In translations of his work, the Latin word evolutio and its offspring, the French word évolution, were used to refer to a military manoeuvre or change of formation, and hence the earliest known English example of evolution traced by the OED comes from a translation of Aelian, published in 1616. As well as being applied in this military context to the present day, it is also still used with reference to movements of various kinds, especially in dance or gymnastics, often with a sense of twisting or turning.
In classical Latin, though, evolutio had first denoted the unrolling of a scroll, and by the early 17th century, the English word evolution was often applied to ‘the process of unrolling, opening out, or revealing’. It is this aspect of its application which may have been behind Darwin’s reluctance to use the term. Despite its association with ‘development’, which might have seemed apt enough, he would not have wanted to associate his theory with the notion that the history of life was the simple chronological unrolling of a predetermined creative plan. Nor would he have wanted to promote the similar concept of embryonic development, which saw the growth of an organism as a kind of unfolding or opening out of structures already present in miniature in the earliest embryo (the ‘preformation’ theory of the 18th century). The use of the word evolution in such a way, radically opposed to Darwin’s theory, appears in the writings of his uncle:
The world…might have been gradually produced from very small beginnings…rather than by a sudden evolution of the whole by the Almighty fiat.
Vladimir Voevodsky had no sooner sat himself down at the sparkling table, set for a dinner party at the illustrious Institute for Advanced Study in Princeton, New Jersey, than he overturned his empty wine glass, flipping bowl over stem and standing the glass on its rim—a signal to waiters that he would not be imbibing. He is not always so abstemious, but Voevodsky, that fall of 2013, was in the midst of some serious work.
Founded in 1930, the Institute has been called “the earthly temple of mathematical and theoretical physics,” and is a hub for all manner of rigorous intellectual inquiry. Einstein’s old house is around the corner. In the parking lot a car sports a nerdy bumper sticker reading, “Don’t Believe Everything You Think”—which might very well be aimed directly at Voevodsky. Because during the course of some professional soul-searching over the last decade or so, he’d come to the realization that a mathematician’s work is 5 percent creative insight and 95 percent self-verification. And this was only reinforced by a recent discovery around the time of the dinner party: He’d made a big mistake.