Knowmads, Infocology of the future
Exploring the possible, the probable, the plausible
Curated by Wildcat2030

Slime Molds: Ancient, Alien and Sophisticated. Can Answers to Evolution Be Found in Slime?

In a study of slime molds, scientists are learning more about how they cooperate, which ties into some of the deepest questions in evolution.

Why I Don't Fear Artificial Intelligence

Why I don't fear AI (at least, not for now)

First of all, we (humans) consistently overreact to new technologies. Our default, evolutionary response to new things that we don't understand is to fear the worst.

Nowadays, the fear is promulgated by a flood of dystopian Hollywood movies and negative news that keeps us in fear of the future.

In the 1970s, when DNA restriction enzymes were discovered, making genetic engineering possible, the fear-mongers warned the world of devastating killer engineered viruses and mutated life forms.

What we got was miracle drugs and extraordinary increases in food production.

Rather than extensive government regulations, a group of biologists, physicians and even lawyers came together at the Asilomar Conference on Recombinant DNA to discuss the potential biohazards and regulation of biotechnology and to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.

The guidelines they came up with allowed the researchers to move forward safely and continue to innovate, and we've been using them ever since.

The cloning of Dolly the sheep in 1997 led to prophecies that in just a few years we would have armies of cloned super-soldiers, parents implanting Einstein genes in their unborn children and warehouses of zombies being harvested for spare organs.

To my knowledge, none of this has come true.

Neuroscientists Are Making an Artificial Brain for Everyone | WIRED

An artificially intelligent algorithm told me I’d enjoy both these things. I’d like the restaurant, the machine told me, because I prefer Mexican food and wine bars “with a casual atmosphere,” and the movie because “drama movies are in my digital DNA.” Besides, the title shows up around the web next to Boyhood, another film I like.

Nara Logics, the company behind this algorithm, is the brainchild (pun intended) of its CTO and cofounder, Nathan Wilson, a former research scientist at MIT who holds a doctorate in brain and cognitive science. Wilson spent his academic career and early professional life immersed in studying neural networks—software that mimics how a human mind thinks and makes connections. Nara Logics’ brain-like platform, under development for the past five years, is the product of all that thinking.

The skyscrapers of the future will be made of wood

Vancouver-based architect Michael Green was unequivocal at a conference at which I heard him speak a while ago: “We grow trees in British Columbia that are 35 storeys tall, so why do our building codes restrict timber buildings to only five storeys?”

True, regulations in that part of Canada have changed relatively recently to permit an additional storey, but the point still stands. This can hardly be said to keep pace with the new manufacturing technologies and developments in engineered wood products that are causing architects and engineers to think very differently about the opportunities wood offers in the structure and construction of tall buildings.

Green himself produced a book in 2012 called Tall Wood, which explored in detail the design of 20-storey commercial buildings using engineered timber products throughout. Since then he has completed the Wood Innovation and Design Centre at the University of Northern British Columbia which, at 29.25 metres (effectively eight storeys), is currently lauded as the tallest modern timber building in North America.

Peter Fingar: The Cognitive Computing Era is Upon Us

The era of cognitive systems is dawning, building on today’s computer programming era. All machines, for now, require programming, and by definition programming does not allow for alternate scenarios that have not been programmed. To allow for alternate outcomes would require going up a level, creating a self-learning Artificial Intelligence (AI) system. Via biomimicry and neuroscience, cognitive computing does this, taking computing concepts to a whole new level.

Fast forward to 2011, when IBM’s Watson won Jeopardy! Google recently made a $500 million acquisition of DeepMind. Facebook recently hired NYU professor Yann LeCun, a respected pioneer in AI. Microsoft has more than 65 PhD-level researchers working on deep learning. China’s Baidu search company hired Stanford University’s AI Professor Andrew Ng. All this has a lot of people talking about deep learning. While artificial intelligence has been around for years (John McCarthy coined the term in 1955), “deep learning” is now considered cutting-edge AI that represents an evolution over primitive neural networks.

Taking a step back to set the foundation for this discussion, let me review a few of these terms.

Researchers take Vital Step toward Creating Bionic Brain

Researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell. Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information. The development brings them closer to imitating key electronic aspects of the human brain — a vital step toward creating a bionic brain — which could help unlock successful treatments for common neurological conditions, such as Alzheimer’s and Parkinson’s diseases.

The discovery was recently published in the materials science journal Advanced Functional Materials.

Project leader Dr. Sharath Sriram, co-leader of the RMIT Functional Materials and Microsystems Research Group, said the ground-breaking development imitates the way the brain uses long-term memory.

“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Sharath said.

“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences and, up until now, this functionality has not been able to be adequately reproduced with digital technology.”

The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.

The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film — 10,000 times thinner than a human hair.

We are ignoring the new machine age at our peril

As a species, we don’t seem to be very good at dealing with nonlinearity. We cope moderately well with situations and environments that are changing gradually. But sudden, major discontinuities – what some people call “tipping points” – leave us spooked. That’s why we are so perversely relaxed about climate change, for example: things are changing slowly, imperceptibly almost, but so far there hasn’t been the kind of sharp, catastrophic change that would lead us seriously to recalibrate our behaviour and attitudes.

So it is with information technology. We know – indeed, it has become a cliche – that computing power has been doubling at least every two years since records of these things began. We know that the amount of data now generated by our digital existence is expanding annually at an astonishing rate. We know that our capacity to store digital information has been increasing exponentially. And so on. What we apparently have not sussed, however, is that these various strands of technological progress are not unconnected. Quite the contrary, and therein lies our problem.

The thinker who has done most to explain the consequences of connectedness is a Belfast man named W Brian Arthur, an economist who was the youngest person ever to occupy an endowed chair at Stanford University and who in later years has been associated with the Santa Fe Institute, one of the world’s leading interdisciplinary research institutes. In 2009, he published a remarkable book, The Nature of Technology, in which he formulated a coherent theory of what technology is, how it evolves and how it spurs innovation and industry. Technology, he argued, “builds itself organically from itself” in ways that resemble chemistry or even organic life. And implicit in Arthur’s conception of technology is the idea that innovation is not linear but what mathematicians call “combinatorial”, i.e. driven by new combinations of many existing things. And the significant point about combinatorial innovation is that it brings about radical discontinuities that nobody could have anticipated.
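
To make the combinatorial point concrete, here is a toy calculation (an illustration of the arithmetic only, not anything from Arthur's book): if any nonempty subset of n existing technologies can in principle be recombined into something new, the number of candidate combinations is 2^n - 1, which outruns linear growth almost immediately.

```python
# Toy illustration of combinatorial growth: if any nonempty subset
# of n existing technologies can be recombined into a new one, the
# number of candidate combinations is 2**n - 1. Linear growth in
# components yields exponential growth in possible recombinations.
for n in (10, 20, 40, 80):
    print(f"{n:>2} components -> {2**n - 1:,} possible combinations")
```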

Wildcat: Tilting at Windmills or Enlightened Vision? An original interview and essay with Zoltan Istvan for SC

Here is the thing: whether you agree with Zoltan Istvan or not is irrelevant. What is relevant, and deeply so, is that here is a human who walks his talk, and promotes relentlessly and irreverently that which he believes in.
And that which he believes in is best summed up in his own words, as an answer to my question:

Who are you Zoltan Istvan?

“I am a human being who loves life, and I don't want that life to end. But I believe that life will end for myself and others if people don't do anything. So I'm doing all I can do to try and preserve my life and the lives of others via science and technology.”

Wildcat2030's insight: My interview and essay with Zoltan Istvan.


Next Big Future: Lab-grown meat thirty thousand times cheaper than 18 months ago

Dutch professor Mark Post, the scientist who made the world's first laboratory-grown beef burger, believes so-called "cultured meat" could spell the end of traditional cattle farming within just a few decades.

A year and a half ago the professor of vascular physiology gave the world its first taste of a beef burger he'd grown from stem cells taken from cow muscle.

It passed the food critics' taste test, but at more than a quarter of a million dollars, the lab quarter-pounder was no threat to the real deal. Now, after further development, Dr Post estimates it's possible to produce lab-beef for $80 a kilo - and that within years it will be a price-competitive alternative.

From a small piece of muscle, you can produce 10,000 kilos of meat.

In 2013, it cost $325,000 to make lab-grown meat for a burger made from cultured muscle tissue cells. Now the cost is $11 for a quarter-pound lab-grown patty.
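
A quick sanity check on the headline, using only the two figures quoted above (and glossing over the difference between a whole burger and a single patty):

```python
# Sanity check of the "thirty thousand times cheaper" claim, using
# the figures quoted above. Burger-vs-patty differences are ignored.
cost_2013 = 325_000  # dollars, the 2013 cultured-beef burger
cost_now = 11        # dollars, a quarter-pound lab-grown patty today
print(f"reduction factor: ~{cost_2013 / cost_now:,.0f}x")  # ~29,545x
```

That ratio lands within rounding distance of the thirty-thousand-fold figure in the headline.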

Questioning the Hype About Artificial Intelligence

Artificial Intelligence has always been hyped by its often-charismatic enthusiasts; lately it seems that the hype may be coming true. Media pundits, technologists, and increasingly the broader public see the rise of artificial intelligence as inevitable. Companies with mass data fed into AI systems—like Google, Facebook, Yahoo, and others—make headlines with technological successes that were science fiction even a decade ago: We can talk to our phones, get recommendations that are personalized to our interests, and may even ride around in cars driven by computers soon. The world has changed, and AI is a big part of why.

As one might expect, pundits and technologists talk about the “AI revolution” in the most glowing terms, equating advances in computer tech with advances in humanity: standards of living, access to knowledge, and a spate of emerging systems and applications ranging from improved hearing and vision aids for the impaired, to the cheaper manufacture of goods, to better recommendations from Amazon, Netflix, Pandora, and others. Artificial Intelligence is a measuring rod for progress, scientifically, technologically, and even socially.

The AI Revolution: Our Immortality or Extinction | Wait But Why

As mentioned in Part 1, movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn’t something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future.
Due to something called cognitive biases, we have a hard time believing something is real until we see proof. I’m sure computer scientists in 1988 were regularly talking about how big a deal the internet was likely to be, but people probably didn’t really think it was going to change their lives until it actually changed their lives. This is partially because computers just couldn’t do stuff like that in 1988, so people would look at their computer and think, “Really? That’s gonna be a life changing thing?” Their imaginations were limited to what their personal experience had taught them about what a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. We hear that it’s gonna be a big deal, but because it hasn’t happened yet, and because of our experience with the relatively impotent AI in our current world, we have a hard time really believing this is going to change our lives dramatically. And those biases are what experts are up against as they frantically try to get our attention through the noise of collective daily self-absorption.
Even if we did believe it—how many times today have you thought about the fact that you’ll spend most of the rest of eternity not existing? Not many, right? Even though it’s a far more intense fact than anything else you’re doing today? This is because our brains are normally focused on the little things in day-to-day life, no matter how crazy a long-term situation we’re a part of. It’s just how we’re wired.

This Gigantic Solar Hourglass Could Power 1,000 Danish Homes

Argentina-based designer Santiago Muros Cortés just unveiled plans for a gigantic solar energy generating hourglass that could produce enough electricity for 1,000 Danish homes while serving as a visual reminder that there's still time to stop climate change - if we act now. Planned for an industrial brownfield site across the harbor from Copenhagen's iconic Little Mermaid statue, the artwork showcases concentrated solar power technology while engaging the public. Cortés' Solar Hourglass just won the 2014 Land Art Generator Initiative design competition, and if constructed, it will be an amazing new tourist attraction for the Danish capital.

The evolution of the word ‘evolution’ | OxfordWords blog

It is curious that, although the modern theory of evolution has its source in Charles Darwin’s great book On the Origin of Species (1859), the word evolution does not appear in the original text at all. In fact, Darwin seems deliberately to have avoided using the word evolution, preferring to refer to the process of biological change as ‘transmutation’. Some of the reasons for this, and for continuing confusion about the word evolution in the succeeding century and a half, can be unpacked from the word’s entry in the Oxford English Dictionary (OED).

Evolution before Darwin

The word evolution first arrived in English (and in several other European languages) from an influential treatise on military tactics and drill, written in Greek by the second-century writer Aelian (Aelianus Tacticus). In translations of his work, the Latin word evolutio and its offspring, the French word évolution, were used to refer to a military manoeuvre or change of formation, and hence the earliest known English example of evolution traced by the OED comes from a translation of Aelian, published in 1616. As well as being applied in this military context to the present day, it is also still used with reference to movements of various kinds, especially in dance or gymnastics, often with a sense of twisting or turning.

In classical Latin, though, evolutio had first denoted the unrolling of a scroll, and by the early 17th century, the English word evolution was often applied to ‘the process of unrolling, opening out, or revealing’. It is this aspect of its application which may have been behind Darwin’s reluctance to use the term. Despite its association with ‘development’, which might have seemed apt enough, he would not have wanted to associate his theory with the notion that the history of life was the simple chronological unrolling of a predetermined creative plan. Nor would he have wanted to promote the similar concept of embryonic development, which saw the growth of an organism as a kind of unfolding or opening out of structures already present in miniature in the earliest embryo (the ‘preformation’ theory of the 18th century). The use of the word evolution in such a way, radically opposed to Darwin’s theory, appears in the writings of his uncle:

The world…might have been gradually produced from very small beginnings…rather than by a sudden evolution of the whole by the Almighty fiat.

Erasmus Darwin, Zoonomia (1801)

Time heals all wounds. Polymers heal them faster. | GE Look Ahead | The Economist

A host of supersmart, self-healing technologies could someday make “normal wear and tear” a bygone expense. By definition, these materials—either intrinsically or with the aid of an outside agent—can mend broken molecular bonds without human intervention.

When scientists created the first thermoset elastomer that could repair itself at room temperature without a catalyst, in 2013, they made history. Dubbed “The Terminator”, after the famous Arnold Schwarzenegger films, this advanced polymer and other self-healing materials provide a glimpse into the future of industrial design.

Moore's Law Keeps Going, Defying Expectations

SAN FRANCISCO—Personal computers, cellphones, self-driving cars—Gordon Moore predicted the invention of all these technologies half a century ago in a 1965 article for Electronics magazine. The enabling force behind those inventions would be computing power, and Moore laid out how he thought computing power would evolve over the coming decade. Last week the tech world celebrated his prediction here because it has held true with uncanny accuracy—for the past 50 years.

It is now called Moore’s law, although Moore (who co-founded the chip maker Intel) doesn’t much like the name. “For the first 20 years I couldn’t utter the term Moore’s law. It was embarrassing,” the 86-year-old visionary said in an interview with New York Times columnist Thomas Friedman at the gala event, held at the Exploratorium science museum. “Finally, I got accustomed to it where now I could say it with a straight face.” He and Friedman chatted in front of a rapt audience, with Moore cracking jokes the whole time and doling out advice, like how once you’ve made one successful prediction, you should avoid making another. In the background Intel’s latest gadgets whirred quietly: collision-avoidance drones, dancing spider robots, a braille printer—technologies all made possible via advances in processing power anticipated by Moore’s law.
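
The arithmetic behind that 50-year run is easy to reproduce (a toy calculation that assumes a clean doubling every two years, the simplified form of the law):

```python
# Toy check: cumulative growth implied by a doubling of computing
# power every two years over the 50 years since Moore's 1965 article.
years = 50
doublings = years // 2   # 25 doublings
factor = 2 ** doublings
print(f"{doublings} doublings -> roughly {factor:,}x more computing power")
# -> 25 doublings -> roughly 33,554,432x more computing power
```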

Could a 'super-Earth' be even more habitable than our own planet? - CNET

It's popular to talk about how wonderful, beautiful and rare a treasure our planet is; I certainly say such things all the time, and many satellite, Instagram and Pinterest photos testify to this truism. But let's be real for a minute, my fellow humans and A.I. beings -- we don't really have firsthand experience with an adequate sample size of habitable planets to say this for sure.

In fact, a pair of scientists have been looking into the possibility that there might be a distant planet (or a couple of them or maybe 3 billion) out there more suitable to supporting life as we know it. They even describe what such a "superhabitable" planet might look like -- a super-Earth with a mass double or triple that of our planet, orbiting in the habitable zone around a K-type dwarf star several billion years older than our sun.

The basic explanation for why such a planet would make a "better Earth" is that it might have a long-lasting magnetic field, which protects the planet from the abundant radiation of space and stars, and plate tectonics activity, which keeps some of the key life-supporting elements in balance. Also, a planet with double or triple the mass of Earth would mean more surface gravity, likely forming more shallow lakes and oceans, more archipelago-like land masses and fewer deserts. More shallow waters might mean more biodiversity, as they typically do here on our planet.
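
The gravity claim is easy to sanity-check with a back-of-envelope sketch. Assuming (purely as an illustration) that such a planet has roughly Earth-like density, radius scales as the cube root of mass, so surface gravity g = GM/R^2 scales as mass^(1/3):

```python
# Back-of-envelope check of the surface-gravity claim, assuming an
# Earth-like density so that radius ~ mass**(1/3) in Earth units.
# Then g = GM/R**2 scales as mass**(1/3). Real super-Earths can be
# denser or puffier than this, so treat the numbers as rough.
for mass in (2.0, 3.0):            # planet mass in Earth masses
    g = mass ** (1 / 3)            # surface gravity in Earth g
    print(f"{mass:.0f} Earth masses -> ~{g:.2f} g at the surface")
# -> ~1.26 g and ~1.44 g: noticeably stronger gravity, but far from crushing.
```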

Equality and polyamory: why early humans weren't The Flintstones

Last week, scientists from University College London released a paper presenting evidence that men and women in early society lived in relative equality. The paper challenges much of our understanding of human history, a fact not lost on the scientists. Mark Dyble, the study’s lead author, stated “sexual equality is one of the important changes that distinguishes humans. It hasn’t really been highlighted before.”

Despite Dyble’s comments, however, this paper isn’t the first foray into the issue. In fact, it represents another shot fired in a debate between scientific and anthropological communities that has been raging for centuries. It’s a debate that asks some fundamental questions: who are we, and how did we become the society we are today?

Our modern picture of prehistoric societies, or what we can call the “standard narrative of prehistory”, looks a lot like The Flintstones. The narrative goes that we have always lived in nuclear families. Men have always gone out to work or hunt, while women stayed at home to look after the house and the children. The nuclear family and the patriarchy are as old as society itself.

The narrative is multifaceted, but has strong roots in biological science, which can probably be traced back to Charles Darwin’s theory of sexual selection. Darwin’s premise was that, due to their need to carry and nurture a child, women have a greater investment in offspring than men. Women are therefore significantly more hesitant to participate in sexual activity, creating conflicting sexual agendas between the two genders.

Cheap Satellite Receiver Offers a Free Way to Access Wikipedia, International News, and Other Vital Websites | MIT Technology Review

The UN estimates that 4.3 billion people do not use the Internet, mostly because the cost is prohibitive or their area lacks the infrastructure. Outernet’s free broadcast could give many of those people a way to access useful online information relatively quickly, says Outernet founder Syed Karim. The World Bank has agreed to help roll out Pillar devices in South Sudan as a way to distribute educational material to schools. Teachers and pupils will still need to have devices or printers to make use of that information, though.

The designs and software for Outernet’s Pillar devices are freely available so people or companies can make their own versions. They currently cost around $150 to make, but that should fall below $100 once they are being made in larger numbers, says Karim.

Outernet is also working on a portable solar-powered receiver called Lantern. It can be hooked up to a dish to pick up Outernet’s existing signal and also has a built-in antenna designed to pick up a different kind of satellite signal that Outernet aims to switch on this summer. The company has taken orders for more than 5,000 Lantern devices. It has a grant from the U.K. Space Agency to have three small satellites made dedicated to broadcasting the Lantern signal. The first satellites and portable Lantern receiver devices are expected to be ready late this year.

World’s first photosynthetic living matter-infused 3D-printed wearable

Speaking at the 2015 TED conference in Vancouver, Canada, MIT professor Neri Oxman has displayed what is claimed to be the world’s first 3D-printed photosynthetic wearable prototype embedded with living matter. Dubbed "Mushtari," the wearable is constructed from 58 meters (190 ft) of 3D-printed tubes coiled into a mass that emulates the construction of the human gastrointestinal tract. Filled with living bacteria designed to fluoresce and produce sugars or bio-fuel when exposed to light, Mushtari is a vision of a possible future where symbiotic human/microorganism relationships may help us explore other worlds in space.

How Artificial Intelligence Is Primed to Beat You at Where's Waldo

By their second birthday, children are learning the names of things. What’s this? Cat. And this? Whale. Very good. What’s this color? Red! That’s right. You love red. The human brain is good at making some cognitive tasks look easy—when they aren’t easy at all. Teaching software to recognize objects, for example, has been a challenge in computer science. And up until a few years ago, computers were pretty terrible at it.

However, like many things once the sole domain of humans, machines are rapidly improving their ability to slice and dice, sort and name objects like we do.

Earlier this year, Microsoft revealed its image recognition software was wrong just 4.94% of the time—it was the first to beat an expert human error rate of 5.1%. A month later, Google reported it had achieved a rate of 4.8%.

Now, Chinese search engine giant Baidu says its specialized supercomputer, Minwa, has bested Google with an error rate of 4.58%. Put another way, these programs can correctly recognize everyday stuff over 95% of the time. That’s amazing.

And how AI researchers got to this point is equally impressive.
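
For context, the percentages above are top-5 error rates on the ImageNet benchmark: a prediction counts as correct if the true label appears among the model's five highest-ranked guesses. A minimal sketch of how that metric is computed (the labels and predictions below are made-up stand-ins, not Baidu's or Google's data):

```python
# Minimal sketch of the top-5 error metric behind the figures above.
# The labels and ranked predictions here are hypothetical stand-ins.
def top5_error(true_labels, ranked_predictions):
    misses = sum(
        1 for truth, guesses in zip(true_labels, ranked_predictions)
        if truth not in guesses[:5]
    )
    return misses / len(true_labels)

truths = ["cat", "whale", "red", "dog"]
preds = [
    ["cat", "lynx", "fox", "tiger", "dog"],
    ["dolphin", "whale", "shark", "seal", "orca"],
    ["red", "maroon", "pink", "orange", "rust"],
    ["wolf", "fox", "coyote", "jackal", "dingo"],  # true label missed
]
print(f"top-5 error: {top5_error(truths, preds):.2%}")  # 25.00%
```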

First large-scale graphene fabrication | KurzweilAI

Fabrication size limits — one of the barriers to using graphene on a commercial scale — could be overcome using a new method developed by researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL).

Graphene, a one-atom-thick material that is about 100 times stronger than steel by weight, has enormous commercial potential but has been impractical to employ on a large scale, mainly because of size limits and expense.

Now, using chemical vapor deposition, a team led by ORNL’s Ivan Vlassiouk has fabricated polymer laminate (layered) composites containing 2-inch-by-2-inch graphene sheets created from large continuous sheets of single-layer graphene. They were also able to produce graphene-based fibers.

How 'digital natives' are killing the 'sage on the stage'

The idea that teachers should teach and students should listen presumes that teachers know more than their students.

While this was generally true back when textbooks were a rarity, and may have been partly true since the invention of the public library, it is most likely untrue for many students in this era of the “active learner” (AKA the “digital native”).

After all, with a smartphone in every student’s pocket and Google only a tap away, how can the humble sage expect to compete as the font of all knowledge?

Meet the people out to stop humanity from destroying itself

“We attract weird people,” Andrew Snyder-Beattie said. “I get crazy emails in my inbox all the time.” What kinds of people? “People who have their own theories of physics.”

Snyder-Beattie is the project manager at the Future of Humanity Institute. Headed up by Nick Bostrom, the Swedish philosopher famous for popularizing the risks of artificial intelligence, the FHI is part of the Oxford Martin School, created when a computer billionaire gave the largest donation in Oxford University’s 900-year history to set up a place to solve some of the world’s biggest problems. One of Bostrom’s research papers (pdf, p. 26) noted that more academic research has been done on dung beetles and Star Trek than on human extinction. The FHI is trying to change that.

3D reconstruction of neuronal networks uncovers hidden organizational principles of sensory cortex | KurzweilAI

An international research team has reconstructed anatomically realistic 3D models of cortical columns of the rat brain, providing unprecedented insight into how neurons in the elementary functional units of the sensory cortex called cortical columns are interconnected.

The models suggest that cortical circuitry interconnects most neurons across cortical columns rather than within them, and that these “trans-columnar” networks are not uniformly structured: they are highly specialized and integrate signals from multiple sensory receptors.

Rodents, for example, are nocturnal animals that use facial whiskers as their primary sensory receptors to orient themselves in their environment. To determine the position, size and texture of objects, they rhythmically move the whiskers back and forth, thereby exploring and touching objects within their immediate surroundings. Such tactile sensory information is then relayed from the periphery to the sensory cortex via whisker-specific neuronal pathways, where each individual whisker activates neurons located within a dedicated cortical column. This one-to-one correspondence between a facial whisker and a cortical column renders the rodent vibrissal system an ideal model for investigating the structural and functional organization of cortical columns.

The dawn of artificial intelligence

“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking warns. Elon Musk fears that the development of artificial intelligence, or AI, may be the biggest existential threat humanity faces. Bill Gates urges people to beware of it.

Dread that the abominations people create will become their masters, or their executioners, is hardly new. But voiced by a renowned cosmologist, a Silicon Valley entrepreneur and the founder of Microsoft—hardly Luddites—and set against the vast investment in AI by big firms like Google and Microsoft, such fears have taken on new weight. With supercomputers in every pocket and robots looking down on every battlefield, just dismissing them as science fiction seems like self-deception. The question is how to worry wisely.

You taught me language and...

The first step is to understand what computers can now do and what they are likely to be able to do in the future. Thanks to the rise in processing power and the growing abundance of digitally available data, AI is enjoying a boom in its capabilities (see article). Today’s “deep learning” systems, by mimicking the layers of neurons in a human brain and crunching vast amounts of data, can teach themselves to perform some tasks, from pattern recognition to translation, almost as well as humans can. As a result, things that once called for a mind—from interpreting pictures to playing the video game “Frogger”—are now within the scope of computer programs. DeepFace, an algorithm unveiled by Facebook in 2014, can recognise individual human faces in images 97% of the time.
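
To make the “layers of neurons” idea concrete, here is a toy sketch (plain numpy, with assumed details throughout; no relation to DeepFace or any production system): a two-layer network that teaches itself the classic XOR pattern by nudging its weights against examples, which is deep learning's core mechanism in miniature.

```python
import numpy as np

# Toy illustration of a layered learner: a tiny two-layer network
# trained on the XOR pattern by gradient descent. Real deep-learning
# systems stack many more layers and train on vast datasets; this
# only shows the mechanism of learning from examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```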