
Why mapping the human brain matters: Today’s technology landscape would be completely altered.

Entirely new companies may emerge, offering futuristic services such as cosmetic brain surgery, “total recall” or “the eternal sunshine of a spotless mind.”


It turns out that President Barack Obama’s head-scratching mention of a project to map the human brain in his most recent State of the Union speech was more than just a casual comment. John Markoff of the New York Times reported this week that the White House will soon unveil a massive, multi-billion-dollar research project to map the entire human brain that will likely involve scores of scientists, foundations and government agencies. When completed, a detailed map of how the human brain works would be a staggering development in innovation — one that could lead to cures for brain-related illnesses as well as unimagined breakthroughs in artificial intelligence.

Today’s technology landscape would be completely altered.

Similar to the Human Genome Project, to which it has already been compared, the Brain Activity Map Project would open up the mysterious workings of what makes us human. We already know something of how the brain processes information, but thus far we have only isolated bits and pieces. A full map would let researchers know how every neuron fires, how every thought comes into being, and how the human brain learns over time. A map would let us understand the link between thought, memory and emotion. And this is where things get really interesting (and possibly scary): understanding how the brain works could, perhaps, help us solve some of the greatest mysteries of our age. What is the essence of genius? Do we have a soul?


Knowmads, Infocology of the future
Exploring the possible, the probable, the plausible
Curated by Wildcat2030
Scooped by Wildcat2030!

A Radical Way of Paying for College … From 18th-Century Scotland

The saddest part of the debate over how to rein in the cost of college is that rising prices have not been tied to any real improvement in the quality of education. Skyrocketing tuition, it’s generally agreed, has been brought on by the expansion of student services. There are nothing but bad choices, it seems: Allow the status quo to persist and saddle students with debt that will hamper their ability to buy houses, start families, or even get the jobs they need to pay off their debt. Or make college (and graduate school, argues Samuel Garner, a bioethicist who chronicled his personal student-debt crisis in Slate) taxpayer-funded, and risk a larger and more catastrophic version of the cost escalation that can come with a pot of free money.

While extravagances such as hot tubs, movie theaters, and climbing walls may seem to make this discussion distinctively modern, parts of today’s college-cost dilemma are recognizable, in fact, in an 18th-century debate about how best to finance a university’s operations. It was so important that Adam Smith took time out of analyzing more traditional economic subjects like the corn laws to devote a long section of The Wealth of Nations to it. And with cause: The Scottish universities of the 18th century, much like America’s today, had been quickly becoming the universally acknowledged ticket to social advancement.

Smith, despite accusations of Connery-esque misplaced nationalism, was justly proud of the Scottish system of universities, which ran on a radical (by today’s standards, at least) system in which students paid their professors directly. Scotland had begun the 18th century with the humiliating Act of Union, which rendered it subject to the British Empire and shuttered its parliament. The country then boasted only three universities, all of which taught an obsolete, traditional medieval curriculum (in Latin, no less), and carried a reputation as a backwater of subsistence farmers and awkward, only recently semi-civilized rubes.
Scooped by Wildcat2030!

What species would become dominant on Earth if humans died out?

In a post-apocalyptic future, what might happen to life if humans left the scene? After all, humans are very likely to disappear long before the sun expands into a red giant and exterminates all living things from the Earth.

Assuming that we don’t extinguish all other life as we disappear (an unlikely feat in spite of our unique propensity for driving extinction), history tells us to expect some pretty fundamental changes when humans are no longer the planet’s dominant animal species.

So if we were given the chance to peek forward in time at the Earth some 50m years after our disappearance, what would we find? Which animal or group of animals would “take over” as the dominant species? Would we have a Planet of the Apes, as imagined in popular fiction? Or would the Earth come to be dominated by dolphins, or rats, or water bears, or cockroaches or pigs, or ants?

The question has inspired a lot of popular speculation and many writers have offered lists of candidate species. Before offering any guesses, however, we need to carefully explain what we mean by a dominant species.
Let’s stick to the animal kingdom

One could argue that the current era is an age of flowering plants. But most people aren’t imagining Audrey Two in Little Shop of Horrors when they envision life in the future (even the fictional triffids had characteristically animal features – predatory behaviour and the ability to move).
Scooped by Wildcat2030!

The Many Minds of Marvin Minsky (R.I.P.)

Marvin Minsky, a pioneer of artificial intelligence, died on Sunday, January 24, in Boston, according to The New York Times. He was 88. Minsky contributed two important articles to Scientific American: Artificial Intelligence, on his theories of multiple minds, and Will Robots Inherit The Earth?, on the future of AI. I profiled Minsky for Scientific American in 1993, after spending an afternoon with him at MIT’s Artificial Intelligence Laboratory, and again in The End of Science. Below is an edited version of the latter profile. -–John Horgan

Before I visited Marvin Minsky at MIT, colleagues warned me that he might be defensive, even hostile. If I did not want the interview cut short, I should not ask him too bluntly about the falling fortunes of artificial intelligence or of his own particular theories of the mind. A former associate pleaded with me not to take advantage of Minsky's penchant for outrageous utterances. "Ask him if he means it, and if he doesn't say it three times you shouldn't use it."

When I met Minsky, he was rather edgy, but the condition seemed congenital rather than acquired. He fidgeted ceaselessly, blinking, waggling his foot, pushing things about his desk. Unlike most scientific celebrities, he gave the impression of conceiving ideas and tropes from scratch rather than retrieving them whole from memory. He was often but not always incisive. "I'm rambling here," he muttered after a riff on verifying mind-models collapsed in a heap of sentence fragments.

Even his physical appearance had an improvisational air. His large, round head seemed entirely bald but was actually fringed by hairs as transparent as optical fibers. He wore a braided belt that supported, in addition to his pants, a belly pack and a tiny holster containing pliers with retractable jaws. With his paunch and vaguely Asian features, he resembled Buddha--Buddha reincarnated as a hyperactive hacker.
Scooped by Wildcat2030!

First GM human embryos 'could be created in Britain within weeks'

The first genetically modified human embryos could be created in Britain within weeks, according to the scientists who are about to learn whether their research proposal has been approved by the fertility watchdog.

Although it will be illegal to allow the embryos to live beyond 14 days, or to implant them into the womb, the researchers accepted that the research could one day lead to the birth of the first GM babies should the existing ban be lifted for medical reasons.

A licence application to edit the genes of “spare” IVF embryos for research purposes only is to be discussed on 14 January by the Human Fertilisation and Embryology Authority (HFEA), with final approval likely to be given this month.

Scientists at the Francis Crick Institute in London said that if they are given the go-ahead they could begin work straight away, leading to the first transgenic human embryos created in Britain within the coming weeks or months.

The researchers emphasised that the research concerns the fundamental causes of infertility and involves editing of the genes of day-old IVF embryos that will not be allowed to develop beyond the seven-day “blastocyst” stage – it will be illegal to implant the modified embryos into the womb to create GM babies.

However, they accepted that if the research leads to a discovery of a genetic mutation that could improve the chances of successful pregnancies in women undergoing IVF treatment, it could lead to pressure to change the existing law to allow so-called “germ-line” editing of embryos and the birth of GM children.
Scooped by Wildcat2030!

Can Another Body Be Seen as an Extension of Your Own?

Among dance forms, tango holds a unique and potent allure. It showcases two individuals—each with a separate mind, body, and bundle of goals and intentions, moving at times in close embrace, at times stepping away from each other, improvising moves and flourishes while responding to the imaginative overtures of the other—who somehow manage to give the impression of two bodies answering to a single mind. For performers and viewers alike, much of tango’s appeal comes from this apparent psychic fusion into a super-individual unit. Michael Kimmel, a social and cultural anthropologist who has researched the interpersonal dynamics of tango, writes that dancers “speak in awe of the way that individuality dissolves into a meditative unity for the three minutes that the dance lasts. Time and space give way to a unique moment of presence, of flow within and between partners.”

Tango offers more than aesthetic bliss; like all artistic practices that demand great skill, it also presents a seductive scientific puzzle, highlighting the mind’s potential to learn and re-shape itself in dramatic ways. But it’s only very recently that scientists have started building a systematic framework to explain how a person might achieve the sort of fusion that is needed for activities like social dancing, and what the impact of such an interpersonal entanglement might be.

At the heart of the puzzle is the notion of a body schema—a mental representation of the physical self that allows us to navigate through space without smashing into things, to scratch our nose without inadvertently smacking it, and to know how far and how quickly to reach for a cup of coffee without knocking it over. We can do all these things because our brains have learned to identify the edges of our bodies using information from multiple senses and devote exquisite attention to stimuli near our bodily boundaries.
Scooped by Wildcat2030!

Genetic ‘intelligence networks’ discovered in the brain | KurzweilAI

Scientists from Imperial College London have identified two clusters (“gene networks”) of genes that are linked to human intelligence. Called M1 and M3, these gene networks appear to influence cognitive function, which includes memory, attention, processing speed and reasoning.

Importantly, the scientists have discovered that these two networks are likely to be under the control of master regulator switches. The researchers want to identify those switches and see if they can manipulate them, and ultimately find out whether this knowledge of gene networks could allow for boosting cognitive function.

“We know that genetics plays a major role in intelligence but until now, haven’t known which genes are relevant,” said Michael Johnson, lead author of the study from the Imperial College London Department of Medicine. Johnson says the genes they have found so far are likely to share a common regulation, which means it may be possible to manipulate a whole set of genes linked to human intelligence.

Combining data from brain samples, genomic information, and IQ tests

In the study, published in the journal Nature Neuroscience, the international team of researchers looked at samples of human brain from patients who had undergone neurosurgery for epilepsy. The investigators analyzed thousands of genes expressed in the human brain, and then combined these results with genetic information from healthy people who had undergone IQ tests and from people with neurological disorders such as autism spectrum disorder and intellectual disability.

Then they conducted various computational analyses and comparisons to identify the gene networks influencing healthy human cognitive abilities. Remarkably, they found that some of the same genes that influence human intelligence in healthy people cause impaired cognitive ability and epilepsy when mutated. They also found that genes involved in making new memories, or in making sensible decisions when faced with lots of complex information, overlap with those that cause severe childhood-onset epilepsy or intellectual disability.
Farid Mheir's curator insight, January 15, 5:57 PM

Brain research and work to understand the architecture of our brain and reverse engineer it into machines continues to bring wonderful insights into who we are and what will soon be possible with digital machines.

Scooped by Wildcat2030!

Your Algorithmic Self Meets Super-Intelligent AI

As humanity debates the threats and opportunities of advanced artificial intelligence, we are simultaneously enabling that technology through the growing use of personalization systems that understand and anticipate our needs via sophisticated machine learning.

In effect, while using personalization technologies in our everyday lives, we are contributing in a real way to the development of the intelligent systems we purport to fear.

Perhaps uncovering the currently inaccessible personalization systems is crucial for creating a sustainable relationship between humans and super–intelligent machines?
Scooped by Wildcat2030!

Elon Musk Snags Top Google Researcher for New AI Non-Profit | WIRED

Tesla founder Elon Musk, big-name venture capitalist Peter Thiel, LinkedIn co-founder Reid Hoffman, and several other notable tech names have launched a new artificial intelligence startup called OpenAI, assembling a particularly impressive array of AI talent that includes a top researcher from Google. But the idea, ostensibly, isn’t to make money.

Overseen by ex-Googler Ilya Sutskever along with Greg Brockman, the former CTO of high-profile payments startup Stripe, OpenAI has the talent to compete with the industry’s top artificial intelligence outfits, including Google and Facebook—but the company has been set up as a non-profit. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” Brockman said in a blog post.
Scooped by Wildcat2030!

Why are there no little green men? (Clue: it's something to do with photosynthesis)

Little green men feature heavily in movies about aliens. But why aren’t humans green? And why can’t we photosynthesise like plants? It would, after all, save us a whole lot of bother.

The most familiar green organisms are, of course, plants. Plants are green because their cells are packed with internal organelles – organs within cells – called chloroplasts, which are the centres of photosynthesis. These chloroplasts have a rather interesting evolutionary history, as they were once free-living cyanobacteria, independent of plants.

Cyanobacteria are famous for inventing photosynthesis, a process that harnesses the energy in sunlight to make sugar from water and carbon dioxide. But as any inventor can tell you, if you have a great idea then pretty soon everyone else will want a piece of it.

In an astonishing piece of insight, Lynn Margulis realised that the chloroplasts inside plants were domesticated cyanobacteria that had been captured early in the evolution of the plant lineage. It appears that a unicellular ancestor of the land plants engulfed a cyanobacterium but, rather than digesting it, retained it as a useful acquisition.

But even more fundamental to the functioning of all higher organisms is a second organelle called the mitochondrion. Margulis realised that this too was once a free-living bacterium – in this case one that could harness the chemical energy locked up in sugary substrates such as glucose. So the cells of plants are really chimeras – a single organism made up of the original host plus two captured bacteria. This theory is known as the endosymbiotic theory.
Scooped by Wildcat2030!

Why Robots Should Be More Like Babies

Babies are such little copycats. They learn by watching other humans. This is how small, dependent people turn into larger, independent people.

The thing is, babies aren’t just imitators. By the time they’re around 18 months old, they’re pretty adept at figuring out what a person is trying to do—say, stack some blocks, or toss a ball into a toy bin—even if that person doesn’t succeed. In other words, they can infer intent, and even develop their own alternate strategies of achieving a goal.

“Humans are the most imitative creature on the planet and young kids seamlessly intertwine imitation and innovation,” said Andrew Meltzoff, a psychology professor at the University of Washington and a co-director of the school’s Institute for Learning & Brain Sciences. “They pick up essential skills, mannerisms, customs, and ways of being from watching others, and then combine these building blocks in novel ways to invent new solutions.”

Meltzoff recently worked with a team of roboticists and machine-learning experts to explore a strange and compelling question: What if robots could learn this way, too? A paper detailing their findings was published in the journal PLOS ONE last month.

“The secret sauce of babies is that they are born immature with a great gift to learn flexibly from observation and imitation. They see another person and register that the person is ‘Like Me.’ They devote great attention to the ‘Like Me’ entities in the world,” Meltzoff told me. “Roboticists have a lot to learn from babies.”
Scooped by Wildcat2030!

Biotech tattoos: the reality and the future in 2016 (Wired UK)

From fitness trackers like FitBit and Jawbone to the numerous available smartwatches, wearables have become an important part of many people's daily routines. It's easy to see why -- they're functional, convenient and often aesthetically pleasing. But how can you bring wearables closer to the people that wear them?

Chaotic Moon, a technology start-up, think they have a (rather literal) answer. Tattoos.
What is a biotech tattoo?

Unusual tech tattoos are nothing new -- WIRED reported on LED tattoos back in 2009 and Gadi Amit presented his concept for embedded wearables at this year's WIRED Health. But combining the pure aesthetics of traditional tattoos with the functionality of wearables is a fresh idea. And Chaotic Moon's 'Tech Tat' does just that.

Sticking to the skin just like temporary tattoos do, Tech Tats are made of electronic components and are able to monitor your vital statistics -- your heart rate, your blood pressure, your body temperature and more. They have a similar function to wearables like FitBit, but they're easier to wear -- being attached to your skin, they're far less cumbersome.

Chaotic Moon’s tattoos use electroconductive paint to pick up vital signs from the body. “We use a conductive material to connect the microcontroller with a variety of sensors held within a flexible temporary tattoo format,” Ben Lamm, CEO of Chaotic Moon, explained to WIRED.
Scooped by Wildcat2030!

Robots Are Learning How to Say 'No' to Humans

Science fiction is filled with examples of robots declining the requests of their human companions. “I’m sorry Dave, I’m afraid I can’t do that,” says the HAL 9000. “It's against my programming to impersonate a deity,” insists C3PO. The Terminator, ever a robot of few words, simply goes with “negative.” And let’s not even get into all the ways Bender rejects authority.

As it turns out, robots in the real world are taking a hint from their fictional counterparts and learning the power of the word “no.” Researchers at the Human-Robot Interaction Lab at Tufts University have been teaching robots how to disobey direct orders. See for yourself in the video below, starring a Nao robot named Shafer.
Scooped by Wildcat2030!

Identity Is an Inside Joke - Issue 30: Identity - Nautilus

“The judgment of whether something is funny or not is spontaneous, automatic, almost a reflex,” writes Dutch sociologist Giselinde Kuipers. “Sense of humor thus lies very close to self-image.” Humor also takes on the shape of the teller’s surroundings: age, gender, class, and clan. Shared humor implies shared identity, shared ways of confronting reality. When we don’t get a joke, we feel left out; when we get a joke, or better yet, tell the joke that everyone roars at, we belong. “If you’re both laughing, it means you see the world in the same way,” says Peter McGraw, psychologist and co-author of The Humor Code: A Global Search for What Makes Things Funny.
Wildcat2030's insight:

a very worthwhile read

Scooped by Wildcat2030!

In praise of artificial food — Rachel Laudan — Aeon Opinions

Artificial food. That’s what humans eat. I say this to anyone who will listen. ‘Oh yes,’ comes the reply. ‘The more’s the pity. Cheap, nasty, imitation food-like substances. It’s high time to return to natural food.’ But, no, I mean artificial in its original sense of man-made, produced by humans, artfully created.

Our distant ancestors found little good in the food that nature provided. Greens had too few calories to sustain life, chewy meat came tightly wrapped in awkward-sized packages known as living animals, nuts were bitter or oily, roots tended to be poisonous, and grains were tiny and so hard that they passed undigested through the system. Acquiring and digesting food was a constant struggle.

So sometime in the distant past, at least 20,000 years ago and probably much more, members of our species decided they could improve on nature. They discovered how to process raw foods by using fire to cook them, or stones to chop and grind them, or coopting microorganisms to ferment them. They began creating niches for the more edible species, breeding sweeter fruits, less toxic roots, and bigger grains. In short, they created the art of cookery to transform the natural.

The art of cookery, they believed – and modern science has confirmed – produced more food that, on balance, was nutritious, easier to digest, safer, longer-lasting and better-tasting than raw plants and meat. With the benefit of hindsight, anthropologists such as Richard Wrangham at Harvard have argued that bodies changed as the energy formerly spent on digesting was diverted to brains that increased in size, and society evolved as a response to cooked, and hence communal, meals.
Scooped by Wildcat2030!

Google achieves AI 'breakthrough' by beating Go champion - BBC News

A Google artificial intelligence program has beaten the European champion of the board game Go.

The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.
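
The scale difference can be made concrete with back-of-the-envelope arithmetic. Chess is commonly estimated at roughly 35 legal moves per position over a game of about 80 plies, while Go offers roughly 250 moves over about 150 plies; these branching figures are illustrative assumptions, not numbers from the article. A minimal sketch:

```python
import math

def game_tree_exponent(branching_factor, plies):
    """Base-10 exponent of the rough game-tree size branching_factor ** plies."""
    return plies * math.log10(branching_factor)

# Commonly cited rough figures (illustrative estimates, not exact values):
chess_exp = game_tree_exponent(35, 80)    # chess: ~10^124 possible games
go_exp = game_tree_exponent(250, 150)     # Go:    ~10^360 possible games

print(f"chess ~10^{chess_exp:.0f} games, Go ~10^{go_exp:.0f} games")
```

Even at these crude estimates, the Go game tree is hundreds of orders of magnitude larger than the chess tree, which is why brute-force search of the kind that powers chess engines does not carry over to Go.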

The tech company's DeepMind division said its software had beaten its human rival five games to nil.

One independent expert called it a breakthrough for AI with potentially far-reaching consequences.

The achievement was announced to coincide with the publication of a paper, in the scientific journal Nature, detailing the techniques used.

Earlier on Wednesday, Facebook's chief executive had said its own AI project had been "getting close" to beating humans at Go.

But the research he referred to indicated its software was ranked only as an "advanced amateur" and not a "professional level" player.
Scooped by Wildcat2030!

You Don't Know as Much as You Think: False Expertise

It is only logical to trust our instincts if we think we know a lot about a subject, right? New research suggests the opposite: self-proclaimed experts are more likely to fall victim to a phenomenon known as overclaiming, professing to know things they really do not.

People overclaim for a host of reasons, including a desire to influence others' opinions—when people think they are being judged, they will try to appear smarter. Yet sometimes overclaiming is not deliberate; rather it is an honest overestimation of knowledge.

In a series of experiments published in July in Psychological Science, researchers at Cornell University tested people's likelihood to overclaim in a variety of scenarios. In the first two experiments, participants rated how knowledgeable they believed themselves to be about a variety of topics, then rated how well they knew each of 15 terms, three of which were fake. The more knowledgeable people rated themselves to be on a particular topic, the more likely they were to claim knowledge of the fake terms in that field. In a third experiment, additional participants took the same tests, but half were warned that some terms would be fake. The warning reduced overclaiming in general but did not change the positive correlation between self-perceived knowledge and overclaiming.

In a final experiment, the researchers manipulated participants' self-perceived knowledge by giving one group a difficult geography quiz, one group an easy quiz and one group no quiz. Participants who took the easy quiz then rated themselves as knowing more about geography than did participants in the other groups and consequently were more likely to overclaim knowledge of fake terms on a subsequent test.
Scooped by Wildcat2030!

In a driverless future, what happens to today's drivers?

Self-driving cars are becoming a very real technology. The latest Tesla car has an autopilot feature. The CEO of Uber has stated that he will buy every self-driving car Tesla can produce for a year (about 500,000). The Google self-driving car occasionally overtakes me as I cycle to work in Austin. Other manufacturers are also developing their own self-driving systems. There’s even talk of driverless car racing.

So let us imagine a not too distant (and quite likely) future where the majority of vehicles on the roads drive themselves. Those cars are networked together – they communicate information about their position, speed, traffic and hazards around them. You don’t need to stop at junctions if you know there’s no traffic approaching. A large traffic management system keeps cars moving, finding the best route.

An entire system of transportation that manages itself, reduces traffic, accidents, emissions – and all for a lower cost. How long before people not only accept this but even prefer it? Though many are quick to claim no one will want it, how many people still ride a horse-drawn carriage?

The implications of driverless cars are huge because the transportation industry is huge, employing almost five million people in the U.S. alone. Suddenly you don’t need drivers for taxis, buses, garbage trucks, deliveries, you name it. Not just cars either – boats, planes, anything that moves could be completely automated. Once this process begins, it’s likely to happen quickly, because there’s an incredible amount of money to be saved this way. What happened to the horses when we didn’t need them to pull carts?
Scooped by Wildcat2030!

Could Evolution Ever Yield a 'Perfect' Organism? - D-brief

It’s a crazy world out there, and if an organism wants to survive, it had better have the right tools for the job.

From anteaters to chameleons, animals have gained some pretty useful features over time, helping them to adapt to their environments and beat out the competition. Driving all of this diversity is natural selection, or the process by which beneficial mutations in genomes are identified and promoted, enabling organisms of all stripes to live longer, mate more often or perhaps just look weirder.

What’s the end game of evolution, though? Is there a “perfect” form that all organisms are working and evolving toward? The notion of evolutionary perfection, while enticing, is likely a myth, say researchers at Michigan State University. Led by Richard Lenski, a team of scientists has been observing a long lineage of E. coli bacteria for almost 28 years.

Longest-running Experiment

Encompassing over 60,000 generations of bacteria, the Long-Term Evolution Experiment is the longest continuous look at how a population of organisms evolves in a fixed environment. Put in human generational time scales, this would be about 1.8 million years, or roughly the length of time Homo sapiens has existed as a species. But for bacteria, even 60,000 generations is a small amount of time when considered against the billions of years they’ve been around.

The Long-Term Evolution Experiment is quite simple in design: Twelve separate populations of identical bacteria in identical growth mixtures were allowed to multiply and grow. Every day since 1988, one percent of each population has been transferred to a fresh flask of growth medium, allowing them to proliferate unabated at a rate of about 6.6 generations per day. A sample is frozen every 500 generations, or roughly every 75 days, to preserve a historical record of the bacteria. The bacteria are also periodically tested to determine how their level of fitness, measured by their rate of reproduction, has changed.
Scooped by Wildcat2030!

What Happens When Artificial Intelligence Makes MAGIC: THE GATHERING Cards | Nerdist

There are over 13,000 Magic: The Gathering cards, each of which fits uniquely into an incredibly rich, decades-old world of lore, rules, tokens, and tournaments.

When machines learn like humans | Probabilistic programs pass the "visual Turing test" | KurzweilAI

A team of scientists has developed an algorithm that captures human learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans.

The work by researchers at MIT, New York University, and the University of Toronto, which appears in the latest issue of the journal Science, marks a significant advance in the field — one that dramatically shortens the time it takes computers to “learn” new concepts and broadens their application to more creative tasks, according to the researchers.

“Our results show that by reverse-engineering how people think about a problem, we can develop better algorithms,” explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper’s lead author. “Moreover, this work points to promising methods to narrow the gap for other machine-learning tasks.”

The paper’s other authors are Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

When humans are exposed to a new concept — such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet — they often need only a few examples to understand its make-up and recognize new instances. But machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” observes Salakhutdinov. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”

Salakhutdinov helped to launch recent interest in learning with “deep neural networks,” in a paper published in Science almost 10 years ago with his doctoral advisor Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts — the digits 0-9 — from 6,000 examples each, or a total of 60,000 training examples.
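The contrast the researchers draw is between learning from tens of thousands of examples and learning from one. The toy sketch below illustrates only the one-shot *setting* — classifying a new instance against a single stored example per class — using a plain nearest-neighbor rule on made-up 2-D feature vectors. It is emphatically not the authors' Bayesian program learning method; the classes, vectors, and `nearest_class` helper are all hypothetical.

```python
# Toy one-shot classification: one stored example per class, plain
# nearest-neighbor rule. Illustrates the setting only, not the paper's method.
import math

def nearest_class(example, prototypes):
    """Return the label whose single stored example is closest to `example`."""
    return min(prototypes, key=lambda label: math.dist(example, prototypes[label]))

# One training example per class (hypothetical 2-D feature vectors).
prototypes = {
    "A": (0.0, 1.0),
    "B": (1.0, 0.0),
}

# A new instance lying close to class "A"'s single example.
print(nearest_class((0.1, 0.9), prototypes))  # A
```

With only one example per class, any method must generalize from almost nothing; the paper's contribution is doing so with human-like accuracy on handwritten characters, which a naive distance rule like this cannot match.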

Bret Easton Ellis on Living in the Cult of Likability

On a recent episode of the television series “South Park,” the character Cartman and other townspeople who are enthralled with Yelp, the app that lets customers rate and review restaurants, remind maître d’s and waiters that they will be posting reviews of their meals. These “Yelpers” threaten to give the eateries only one star out of five if they don’t please them and do exactly as they say. The restaurants feel that they have no choice but to comply with the Yelpers, who take advantage of their power by asking for free dishes and making suggestions on improving the lighting. The restaurant employees tolerate all this with increasing frustration and anger — at one point Yelp reviewers are even compared to the Islamic State group — before both parties finally arrive at a truce. Yet unknown to the Yelpers, the restaurants decide to get their revenge by contaminating the Yelpers’ plates with every bodily fluid imaginable.

The point of the episode is that today everyone thinks that they’re a professional critic (“Everyone relies on my Yelp reviews!”), even if they have no idea what they’re talking about. But it’s also a bleak commentary on what has become known as the “reputation economy.” In depicting the restaurants getting their revenge on the Yelpers, the episode touches on the fact that services today are also rating us, which raises a question: How will we deal with the way we present ourselves online and in social media, and how do individuals brand themselves in what is a widening corporate culture?

Chomsky was right, researchers find: We do have a 'grammar' in our head

A team of neuroscientists has found new support for MIT linguist Noam Chomsky's decades-old theory that we possess an "internal grammar" that allows us to comprehend even nonsensical phrases.

"One of the foundational elements of Chomsky's work is that we have a grammar in our head, which underlies our processing of language," explains David Poeppel, the study's senior researcher and a professor in New York University's Department of Psychology. "Our neurophysiological findings support this theory: we make sense of strings of words because our brains combine words into constituents in a hierarchical manner—a process that reflects an 'internal grammar' mechanism."

The research, which appears in the latest issue of the journal Nature Neuroscience, builds on Chomsky's 1957 work, Syntactic Structures, which posited that we can recognize a phrase such as "Colorless green ideas sleep furiously" as both nonsensical and grammatically correct because we have an abstract knowledge base that allows us to make such distinctions even though the statistical relations between the words are non-existent.

Neuroscientists and psychologists predominantly reject this viewpoint, contending that our comprehension does not result from an internal grammar; rather, it is based on both statistical calculations between words and sound cues to structure. That is, we know from experience how sentences should be properly constructed—a reservoir of information we employ upon hearing words and phrases. Many linguists, in contrast, argue that hierarchical structure building is a central feature of language processing.

In an effort to illuminate this debate, the researchers explored whether and how linguistic units are represented in the brain during speech comprehension.

Google and Facebook Race to Solve the Ancient Game of Go With AI

Rémi Coulom spent the last decade building software that can play the ancient game of Go better than practically any other machine on earth. He calls his creation Crazy Stone. Early last year, at the climax of a tournament in Tokyo, it challenged the Go grandmaster Norimoto Yoda, one of the world’s top human players, and it performed remarkably well. In what’s known as the Electric Sage Battle, Crazy Stone beat the grandmaster. But the win came with a caveat.

Over the last 20 years, machines have topped the best humans at so many games of intellectual skill that we now assume computers can beat us at just about anything. But Go — the Eastern version of chess, in which two players compete with polished stones on a 19-by-19 grid — remains the exception. Yes, Crazy Stone beat Yoda. But it started with a four-stone advantage. That was the only way to ensure a fair fight.

Beyond 'he' and 'she': The rise of non-binary pronouns - BBC News

In the English language, the word "he" is used to refer to males and "she" to refer to females. But some people identify as neither gender, or both - which is why an increasing number of US universities are making it easier for people to choose to be referred to by other pronouns.

Kit Wilson's introduction when meeting other people is: "Hi, I'm Kit. I use they/them pronouns." That means that when people refer to Kit in conversation, the first-year student at the University of Wisconsin-Milwaukee would prefer them to use "they" rather than "she" or "he".

As a child, Wilson never felt entirely female or entirely male. They figured they were a "tomboy" until the age of 16, but later began to identify as "genderqueer".

"Neither end of the [male/female] spectrum is a suitable way of expressing the gender I am," Wilson says. "Sometimes I feel 'feminine' and 'masculine' at the same time, and other times I reject the two terms entirely."

Earlier this year, Wilson asked friends to call them "Kit," instead of the name they (Wilson) had grown up with, and to use the pronoun "they" when talking about them.

Transgender: Applies to a person whose gender is different from their "assigned" sex at birth

Cisgender: Applies to someone whose gender matches their "assigned" sex at birth (i.e. someone who is not transgender)

Non-binary: Applies to a person who does not identify as "male" or "female"

Genderqueer: Similar to "non-binary" - some people regard "queer" as offensive, others embrace it

Genderfluid: Applies to a person whose gender identity changes over time

See also: A guide to transgender terms

Sharing one's pronouns and asking for others' pronouns when making introductions is a growing trend in US colleges.

For example, when new students attended orientation sessions at American University in Washington DC a few months ago, they were asked to introduce themselves with their name, hometown, and preferred gender pronoun (sometimes abbreviated to PGP).

Is Farmed Salmon Really Salmon? - Issue 30: Identity - Nautilus

The fish market has become the site of an ontological crisis. Detailed labels inform us where each fillet is from, how it was caught, and whether it was farmed or wild-caught. Although we can now tell the farmed salmon from the wild, the degree of difference or similarity between the two defies straightforward labels. When a fish — or any animal — is removed from its wild habitat and domesticated over generations for human consumption, it changes — both the fish and our perception of it. The farmed and wild both say “salmon” on their labels, but are they both equally “salmon”? When does the label no longer apply?

This crisis of identity is ours to sort out; not the fish’s. For us, the salmon is an icon of the wild, braving thousand-mile treks through rivers and oceans, leaping up waterfalls to spawn or be caught in the clutches of a grizzly bear. The name “salmon” is likely derived from the Latin word, “salire,” to leap. But it’s a long way from a leaping wild salmon to schools of fish swimming in circles in dockside pens. Most of the salmon we eat today don’t leap and don’t migrate.