anti dogmanti
discoveries based on the scientific method
Curated by Sue Tamani
Scooped by Sue Tamani!

Google simulates brain networks to recognize speech and images | KurzweilAI


We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? (Credit: Google Research)

This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see “Self-Taught Software”).

That technology, modeled on how brain cells operate, is now being put to work making Google’s products smarter, with speech recognition being the first service to benefit, Technology Review reports.

Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it’s called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something.
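Google's learning rule is vastly more elaborate, but the core idea, connection weights that change as the network is exposed to data, can be sketched with a single artificial neuron. A minimal toy illustration (all names and numbers invented here, not Google's system):

```python
# Toy sketch: one artificial "neuron" whose connection weights are
# nudged whenever it reacts wrongly to incoming data (perceptron rule).
def train_neuron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when the reaction was right
            w[0] += lr * err * x1       # strengthen or weaken connections
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# After repeated exposure, the network has "learned" logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the weights encode the pattern: only the input (1, 1) pushes the neuron past its threshold.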

Neural networks have been used for decades in areas where machine learning is applied, such as chess-playing software or face detection. Google’s engineers have found ways to put more computing power behind the approach than was previously possible, creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations.

The company’s neural networks decide for themselves which features of data to pay attention to, and which patterns matter, rather than having humans decide that, say, colors and particular shapes are of interest to software trying to identify objects.

Google is now using these neural networks to recognize speech more accurately, a technology increasingly important to Google’s smartphone operating system, Android, as well as the search app it makes available for Apple devices (see “Google’s Answer to Siri Thinks Ahead“). “We got between 20 and 25 percent improvement in terms of words that are wrong,” says Vincent Vanhoucke, a leader of Google’s speech-recognition efforts. “That means that many more people will have a perfect experience without errors.” The neural net is so far only working on U.S. English, and Vanhoucke says similar improvements should be possible when it is introduced for other dialects and languages.
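The improvement Vanhoucke cites, "words that are wrong", is conventionally measured as word error rate: the word-level edit distance between the reference transcript and the recognizer's output, divided by the length of the reference. A minimal sketch of the metric (the example sentences are invented):

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance (substitutions, insertions, deletions)
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(ref)

# One wrong word out of seven: WER of 1/7.
wer = word_error_rate("i am going to eat a lychee",
                      "i am going to eat a leechee")
```

A "20 to 25 percent improvement" means this ratio dropped by a fifth to a quarter relative to the previous system.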

Other Google products will likely improve over time with help from the new learning software. The company’s image search tools, for example, could become better able to understand what’s in a photo without relying on surrounding text. And Google’s self-driving cars (see “Look, No Hands“) and mobile computer built into a pair of glasses (see “You Will Want Google’s Goggles“) could benefit from software better able to make sense of more real-world data.

The new technology grabbed headlines back in June of this year, when Google engineers published results of an experiment that threw 10 million images grabbed from YouTube videos at their simulated brain cells, running 16,000 processors across a thousand computers for 10 days without pause.

“Most people keep their model in a single machine, but we wanted to experiment with very large neural networks,” says Jeff Dean, an engineer helping lead the research at Google. “If you scale up both the size of the model and the amount of data you train it with, you can learn finer distinctions or more complex features.”

The neural networks that come out of that process are more flexible. “These models can typically take a lot more context,” says Dean, giving an example from the world of speech recognition. If, for example, Google’s system thought it heard someone say “I’m going to eat a lychee,” but the last word was slightly muffled, it could confirm its hunch based on past experience of phrases because “lychee” is a fruit and is used in the same context as “apple” or “orange.”
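The disambiguation Dean describes can be caricatured with context counts: score each acoustically plausible candidate by how often it has been seen following the preceding words. A toy sketch (the counts and candidates are invented for illustration):

```python
# Invented counts standing in for "past experience of phrases":
# how often each word followed "eat" in previously seen text.
context_counts = {
    ("eat", "lychee"): 40,  # plausible in this context, like "apple"
    ("eat", "lycra"): 1,    # acoustically similar, contextually unlikely
}

def best_candidate(prev_word, candidates):
    """Pick the muffled word whose pairing with the context is most common."""
    return max(candidates, key=lambda w: context_counts.get((prev_word, w), 0))

choice = best_candidate("eat", ["lycra", "lychee"])
```

Real systems use far richer context than a single preceding word, but the principle of confirming a hunch against prior experience is the same.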

Dean says his team is also testing models that understand both images and text together. “You give it ‘porpoise’ and it gives you pictures of porpoises,” he says. “If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word.”

A next step could be to have the same model learn the sounds of words as well. Being able to relate different forms of data like that could lead to speech recognition that gathers extra clues from video, for example, and it could boost the capabilities of Google’s self-driving cars by helping them understand their surroundings by combining the many streams of data they collect, from laser scans of nearby obstacles to information from the car’s engine.

Google’s work on making neural networks brings us a small step closer to one of the ultimate goals of AI — creating software that can match animal or perhaps even human intelligence, says Yoshua Bengio, a professor at the University of Montreal who works on similar machine-learning techniques. “This is the route toward making more general artificial intelligence — there’s no way you will get an intelligent machine if it can’t take in a large volume of knowledge about the world,” he says.

In fact, Google’s neural networks operate in ways similar to what neuroscientists know about the visual cortex in mammals, the part of the brain that processes visual information, says Bengio. “It turns out that the feature learning networks being used [by Google] are similar to the methods used by the brain that are able to discover objects that exist.”

However, he is quick to add that even Google’s neural networks are much smaller than the brain, and that they can’t perform many things necessary to intelligence, such as reasoning with information collected from the outside world.

Dean is also careful not to imply that the limited intelligences he’s building are close to matching any biological brain. But he can’t resist pointing out that if you pick the right contest, Google’s neural networks have humans beat.

“We are seeing better than human-level performance in some visual tasks,” he says, giving the example of labeling where house numbers appear in photos taken by Google’s Street View cars, a job that used to be farmed out to many humans.

“They’re starting to use neural nets to decide whether a patch [in an image] is a house number or not,” says Dean, and they turn out to perform better than humans. It’s a small victory — but one that highlights how far artificial neural nets are behind the ones in your head. “It’s probably that the task is not very exciting, and a computer never gets tired,” says Dean. It takes real intelligence to get bored.

Using large-scale brain simulations for machine learning and AI
Google’s answer to Siri thinks ahead
Topics: AI/Robotics | Innovation/Entrepreneurship | Internet/Telecom

Rescooped by Sue Tamani from Amazing Science!

Imagining Web 3.0: The Internet - With 100 Billion Clicks Per Day The Greatest Machine Humanity Ever Built


The internet, at its current rate of growth and development, stands to be the greatest machine ever built in the history of humanity. It also happens to be the most reliable machine human beings have ever constructed: it has never crashed and has always run uninterrupted. Consider, too, how heavily it is used.


There are over 100 billion clicks per day online, approximately five trillion links between all the internet pages in the world, and over two million emails sent per second from all around the planet. Keeping the internet running continuously also accounts for five percent of all electricity used on the planet.


In size and complexity, the internet already resembles a functioning human brain, and it continues to grow, roughly doubling in size and complexity every two years. At this rate of evolution, it is projected that by the year 2040 the internet will store more knowledge and information, and operate at a higher level of cognisance, than the whole of humanity combined.

Via Dr. Stefan Gruenwald
Scooped by Sue Tamani!

African neighbours divided by their genes

Geographically close human populations in southern Africa have been genetically isolated for thousands of years.
Rescooped by Sue Tamani from Content Curation World!

Curated Collections of Video Documentaries:


Robin Good: If you are looking for a good curated resource for video documentaries, and you are not looking just for mainstream stuff, Chockadoc provides over 32 different collections and over 2000 free video documentaries, immediately viewable online.

Categories include everything from "health" to "war". See all the categories listed here:

The service is free, it requires no registration and it is ad-supported.


Try it out now:

Via Robin Good
Deanna Dahlsad's comment, September 18, 2012 5:00 AM
I'm rather surprised you consider this curation... It's not that it's product-centered, but that it's a limited pool from which any sort of curation can be done.
VideobeuZ's comment, January 11, 2013 5:53 AM
I think it's curation when they curate all their content from YouTube
Scooped by Sue Tamani!

Quantum evolution › Science Features (ABC Science)


Australian researchers report they've made a breakthrough in quantum computing. So how does their discovery fit in the race to build a supercomputer?

By Stephen Pincock

Artist's impression of a phosphorus atom (red sphere surrounded by electron cloud, with arrow showing the spin direction) coupled to a silicon single-electron transistor. (University of New South Wales: Tony Melov)

In Andrea Morello's hands, the future of computing looks beautiful.

Standing in a light-filled laboratory at the University of New South Wales, Morello grips a small gold-plated circuit board in his fingertips. Slightly larger than a matchbox, its surface is punctured by a constellation of tiny holes and overlaid with white shapes like raindrops running down a window.

The tails of the raindrops converge near the middle of the golden board, where a tiny opening is waiting for Morello to insert a small piece of specially manipulated silicon.

This sliver of silicon is the same material used to build normal digital computers, but it has been altered at the atomic level.

By replacing selected silicon atoms with atoms of phosphorus, Morello and his colleague Andrew Dzurak have taken a step forward in a global race to build a computer using the weird laws that govern the physical world at the tiniest, quantum, scale.

The dream of building vastly more powerful computers by harnessing quantum properties has been around since the 1970s.

For decades, theorists have filled books and journals with discussion about what such computers could do. But turning those dreams into physical reality has proven a slow process.

Now, using a variety of different technologies, Morello's team and many other scientists around the world are getting closer to building a functioning quantum computer.

"There have really been some big strides toward the development of a quantum computer in the last two decades," says Dr Michael Biercuk, a quantum computing researcher at the University of Sydney and a chief investigator in the ARC Centre for Engineered Quantum Systems.

"We keep checking things off that people have said are impossible. Researchers have moved from proof-of-principle demonstrations to working on the engineering challenges we need to overcome to build something useful."

How do quantum computers work?

The computer on your desk works by manipulating pieces of information known as binary digits, bits for short. Bits can only have two possible values - either 1 or 0 - normally represented by means of changes in electric current in a circuit.

However, in the second half of the 20th century, scientists realised that they could add a twist to this scenario by using special properties of matter that apply when you get down to the sub-atomic scale.

The first of these properties is known as superposition. Put simply, superposition means that physical systems, such as electrons, exist in all their theoretically possible states at once. It's only when you measure them that you get a result that corresponds to just one of the possible states.

Scientists realised that if they could harness this kind of quantum system, each "quantum bit" of information, or qubit, could actually be both a 0 and 1 simultaneously. As you add more and more bits together, this superposition would allow you to exponentially increase the power of your quantum computer.
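The exponential growth is easy to see in simulation: describing an n-qubit register takes 2**n complex amplitudes, and a uniform superposition spreads equal measurement probability over all of them. A minimal classical sketch (which is exactly why classical machines cannot keep up for large n):

```python
import math

# An n-qubit register is described by 2**n complex amplitudes; the chance
# of measuring each basis state is the squared magnitude of its amplitude.
def uniform_superposition(n):
    dim = 2 ** n                     # doubles with every added qubit
    return [1 / math.sqrt(dim)] * dim

def measurement_probabilities(state):
    return [abs(a) ** 2 for a in state]

state = uniform_superposition(3)     # 3 qubits: all 8 basis states at once
probs = measurement_probabilities(state)
```

Three qubits already hold eight states simultaneously; 300 qubits would require more amplitudes than there are atoms in the observable universe, which is the scale Biercuk's comparison below alludes to.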

The whole idea of quantum computing hinges on the concept of 'entanglement', explains Dr Michelle Simmons, director of the Centre for Quantum Computation and Communication Technology at the University of New South Wales. "It means that if you change the state of one qubit it affects the other qubits that it is entangled with in the system," she says. "It's where the power of quantum computing comes from."

"Entanglement in quantum computing plays the same role as heat in an engine or electricity in a light-bulb," adds Andrew White, a quantum physicist from the University of Queensland. "It's the underlying phenomenon you need to understand to build a quantum computer."
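Entanglement can be sketched the same way. In the Bell state, the amplitudes on the mixed outcomes |01> and |10> are zero, so the two qubits always measure the same value: observing one fixes the other. A toy simulation of that correlation:

```python
import math
import random

# Bell state (|00> + |11>) / sqrt(2): amplitude only on "00" and "11".
amps = {"00": 1 / math.sqrt(2), "01": 0.0,
        "10": 0.0, "11": 1 / math.sqrt(2)}

def measure(amplitudes):
    """Sample one joint measurement outcome, weighted by |amplitude|**2."""
    outcomes = list(amplitudes)
    weights = [abs(a) ** 2 for a in amplitudes.values()]
    return random.choices(outcomes, weights=weights)[0]

samples = [measure(amps) for _ in range(1000)]
# The two qubits' outcomes are perfectly correlated: never "01" or "10".
all_agree = all(s[0] == s[1] for s in samples)
```

Each individual outcome is random, but changing the state of one qubit is inseparable from changing the other, which is the dependence Simmons describes.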

Cracking codes, designing drugs

As theorists thought more about quantum computers, they came up with specific problems that such machines would be ideal for solving.

"The thing that really got the field moving was in 1994, when a computer scientist called Peter Shor came up with a theoretical algorithm that could be run on a quantum computer if it existed," says Dzurak.

Shor's algorithm had the potential to solve a problem at the heart of the systems we use to keep our data secure, called public key encryption. This encryption system relies on the fact that conventional computers struggle to figure out the two large prime numbers that have been multiplied together to form another even more enormous number.
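The asymmetry is easy to demonstrate: multiplying two primes is instant, but recovering them by classical trial division takes time that grows exponentially with the number of digits. A toy factoriser (real public-key moduli run to hundreds of digits, far beyond this approach):

```python
# Multiplying p * q is trivial; undoing it classically means trial
# division up to sqrt(n), which becomes hopeless as n grows.
def factor_semiprime(n):
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return None  # n itself is prime

p, q = factor_semiprime(2021)  # 2021 = 43 * 47
```

Shor's algorithm attacks exactly this step, factoring in time that grows only polynomially with the number of digits, but it needs a quantum computer to run on.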

Shor figured out that a quantum computer would be great at solving this problem quickly, explains Dzurak. "All of a sudden one could see an application for quantum computers that was something that a conventional computer simply couldn't do in any useful time."

Not surprisingly, Shor's realisation that quantum computers were ideal code-cracking machines generated interest from governments and the military. Their funding support provided an enormous boost to the field.

Since then, other potential uses have emerged. One example is the designing of the chemical molecules that are at the heart of drugs. Currently, this is a process of trial and error, which is one reason the development of new medicines is currently so costly and time-consuming.

"Ideally what you would like is to design them on a computer," says Morello. "Some of these molecules aren't that big, they may be only 20, 30 atoms, maybe 50 atoms. But this is completely impossible on a conventional computer."

Beyond this, it is fair to say that the number of known situations where a quantum computer will out-perform your iPod is currently small, says Dzurak. "It's a handful at the moment. But a couple of them happen to be quite important applications."

Yet it seems likely that once we have quantum computers to play around with, the number of potential applications will increase dramatically. "In 10 or 20 years I don't think the big impacts from quantum computers will be in cracking codes," says White. "We're finding category after category of problems that will be vastly easier if you had a working quantum computer."

Ion traps and beyond

Ion trap: Under the right experimental conditions, the ion crystal spontaneously forms this nearly perfect triangular lattice structure. (Source: Britton/NIST)
So much for the theory: when it comes to building an actual quantum computer, scientists have had to overcome some major practical hurdles. Crucially, they have needed to find things to use as qubits that can be isolated from the world around them. Then they needed to develop means of having their qubits interact with each other.

The first experimental demonstrations of qubits emerged in the 1990s using a technology that had been developed for atomic clocks, called ion traps. An ion is an atom (or molecule) that has a positive or negative charge. Scientists found that if they suspended such charged atoms in a vacuum using an electromagnetic field, they could use laser beams to control their internal energy levels and to measure them, allowing them to perform operations on basic quantum states.

Ion traps have led the field of quantum computing since the 1990s, partly because ions in a vacuum are well separated from their environment.

Earlier this year, Dr Michael Biercuk and colleagues from the US and South Africa used an ion trap to build a quantum computer with a layer of 300 beryllium ions with interacting spins acting as qubits.

"The system we have developed has the potential to perform calculations that would require a classical machine larger than the size of the known universe - and it does it all in a diameter of less than a millimetre," says Biercuk.

The computer Biercuk's group built is of a type known as a 'quantum simulator', which uses a well-controlled quantum device to mimic another system that is not understood.

"In our case, we are studying the interactions of spins in the field of quantum magnetism - a key problem that underlies new discoveries in materials science for energy, biology, and medicine," says Biercuk.

Quantum dots and silicon

Silicon: Andrea Morello holds a circuit board that can contain a small piece of specially manipulated silicon (Source: Stephen Pincock)
Ion traps are not the only systems people are exploring for quantum computers. Other approaches developed or conceived since the late 1990s include using superconductors, photons, diamonds, nuclear magnetic resonance on molecules in solutions and many more.

Rather than trapping an ion in a vacuum, Morello and Dzurak's team are inserting phosphorus atoms into chips of silicon.

"In a sense the silicon is like a vacuum because we make the silicon very pure so that the only kind of thing that's active are the atoms that we deliberately put there," says Dzurak. The spin of an electron orbiting the phosphorus serves as the qubit.

In 2010, the same group of researchers showed that they could measure the spin of that single phosphorus electron through its effect on the flow of electrons in a nearby circuit.

Now, in research reported in today's issue of Nature, they have demonstrated the ability to both read and write information on a single electron bound to one phosphorus atom embedded in silicon.

For Dzurak and Morello, the beauty of a silicon system lies in the fact that it's a technology conventional computer manufacturers are comfortable with.

"That's the thing about silicon quantum computing and why we've had so much interest and funding, because we're using the technology that is the platform of a trillion dollar industry today. It will look exactly the same. You'll look at it and it'll look just like a computer chip."

Biercuk says the latest research by the UNSW researchers is a major advance towards realising silicon-based quantum processing and takes us closer to the ideal of an integrated quantum computer.

"A major goal for the research community has been to realise the same quantum-coherent functionality afforded by atomic systems in a scalable, integrated platform. Silicon is a natural choice from this perspective, based on decades of research on large-scale integrated circuits for microprocessors and advanced digital electronics."

"Nonetheless the whole community has a long way to go before a practically useful quantum computer is available," he says.

The future

As things stand, quantum computing is roughly where conventional computing was in the 1950s, says Michelle Simmons, whose team at the University of New South Wales created the world's first functioning single-atom transistor.

"The first transistor was built in 1947 and the first integrated circuit came in 1960," she says. "Lots was happening in the 13 years in between, and that's where we are at now with quantum computing. We're trying to integrate all the components in the one chip, so to speak."

For a quantum computer to do some calculations that are beyond a conventional computer, researchers estimate it would need somewhere in the region of 30 qubits operating together. For more powerful operations, hundreds of thousands of qubits are needed. By this standard, Dzurak and Morello's team, and most other research groups, have some way to go.

For now, it's too early to say whether any of the various models of quantum computer might eventually win the race. Many researchers think some combination of different approaches might be most useful.

"We're not quite at the point of selecting between different systems," says University of Queensland physicist Andrew White. "We really don't know what a real quantum computer will look like in the future."

Scooped by Sue Tamani!

Researchers identify biochemical functions for most of the human genome | KurzweilAI

(Credit: ENCODE Project) Only about 1 percent of the human genome contains gene regions that code for proteins, raising the question of what the rest of the...
Scooped by Sue Tamani!

Emotional intelligence: fact or fad? › Opinion (ABC Science)


Emotional intelligence is not the cure-all elixir for spotting who will succeed in work and life, but it is more than a useless fad, says Carolyn MacCann.


Personality jigsaw: emotional intelligence refers to the capacity to perceive emotions, assimilate emotion-related feelings, understand the information of those emotions, and manage them


Popular interest in emotional intelligence began with a 1995 self-help book called Emotional Intelligence: Why It Can Matter More than IQ, written by psychologist Daniel Goleman.


Goleman proposed that IQ is not the only road to success, and that emotional skills are more important in many areas of life. He packed the book with references to research by high-calibre academics, supporting the credibility of his ideas.


The book sold like hot cakes, and the concept took off. Suddenly emotional intelligence was everywhere: on the Oprah Winfrey show, on the cover of TIME Magazine, voted the most useful new word by the American Dialect Society, and enthusiastically used by business and HR professionals for selection, training and evaluation.


The suddenness and the extent of this popularity lent emotional intelligence an air of faddishness. The cartoon Dilbert lampooned emotional intelligence as a meaningless executive buzzword with one character telling another: "You have to consider my 'emotional intelligence', which is defined in a book I haven't read".

Scooped by Sue Tamani!

Arctic ice low heralds end of 3-million-year cover - environment - 29 August 2012 - New Scientist


IT IS smaller, patchier and thinner than ever - and rotten in parts. The extent of the Arctic ice cap has hit a record low, and the consequences of what is arguably the greatest environmental change in human history will extend far beyond the North Pole.

For at least 3 million years, and most likely 13 million, says Louis Fortier of the University of Laval in Quebec City, Canada, the Arctic Ocean has been covered by a thick, floating ice cap, the breadth of which fluctuates with the seasons and currents. Each summer, the cap shrinks to an annual minimum in mid-September before growing out again, fuelled by plummeting winter temperatures and long nights.

Scooped by Sue Tamani!

Scanning your home with kinect could improve 3D robot vision | KurzweilAI

Seeking a way to crowdsource better computer vision, roboticists have launched a website that allows users to record pieces of their environments in 3-D with a Kinect camera, Wired Science reports.

Called Kinect@Home, the open-source and browser-based effort remains in its infancy. Users have uploaded only a few dozen models of their living room couches, kitchen countertops and themselves.

Should the project catch on, however, researchers may be able to amass 3-D data to improve navigation and object-recognition algorithms that allow robots to cruise and manipulate indoor environments.

“For robots to work in everyday space and homes, we need lots of 3-D data,” said roboticist Alper Aydemir of the Royal Institute of Technology in Sweden. With the advent of Microsoft’s low-cost yet highly effective 3-D camera system, called Kinect, and sanctioned ways to hack the device, computer vision research is experiencing a revolution.

“I think we’ve developed a win-win situation,” said Aydemir, who leads the Kinect@Home effort. “Users get access to 3-D models they can embed anywhere on the internet, and we use this data to create better computer vision algorithms.”

What’s more, helper robots are only useful if they can recognize and interact with a dizzying variety of objects. Some crowdsourced schemes use Amazon Mechanical Turk to categorize objects in 2-D images acquired by robots, but these images convey nothing about an item’s 3-D shape or behavior.

In hopes of gathering these and other data that define human environments, Aydemir created Kinect@Home. Users install a plugin, attach their Kinect to a computer, and start recording whatever they please.

Scooped by Sue Tamani!

Did ants invent the Internet? | KurzweilAI


Two Stanford researchers have discovered that harvester ants determine how many foragers to send out of the nest in much the same way that Internet protocols discover how much bandwidth is available for the transfer of data.

There are 11,000 species of ants, living in every habitat and dealing with every type of ecological problem, Gordon said. “Ants have evolved ways of doing things that we haven’t thought up, but could apply in computer systems. Computationally speaking, each ant has limited capabilities, but the collective can perform complex tasks.”
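The Internet protocol behaviour the ants were compared to is TCP's additive-increase/multiplicative-decrease rule: send more while acknowledgments (or, for the colony, returning foragers) come back promptly, and cut back sharply when they stop. A hedged sketch of that rule (the numbers are illustrative only):

```python
# Additive-increase/multiplicative-decrease (AIMD), the bandwidth-probing
# rule at the heart of TCP congestion control.
def aimd(window, ack_received):
    if ack_received:
        return window + 1.0        # success: probe for more capacity
    return max(1.0, window / 2)    # timeout: back off sharply

# Available capacity (or food supply) is never known directly; the
# sender discovers it by ramping up until feedback worsens.
w = 10.0
w = aimd(w, True)    # acknowledgment arrived: increase additively
w = aimd(w, False)   # loss detected: halve the sending window
```

In the ant analogy, the outbound forager rate plays the role of the congestion window, and the rate at which foragers return with food plays the role of acknowledgments.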

Scooped by Sue Tamani!

Tributes pour in for 'man on the moon' › News in Science (ABC Science)

Tributes have been pouring in following the death of Neil Armstrong, the humble US astronaut whose "small step" on the moon captivated the world and came to embody the wonder of space exploration.

Neil Armstrong belonged to humanity. I still vividly remember watching the moon walk as a young teacher with my class of first graders.

What a brave man!

Vale Neil Armstrong.

Scooped by Sue Tamani!

Forget electric cars, this one runs on compressed air | KurzweilAI


Airpod car (credit: Tata Motors) India's Tata Motors is pushing compressed-air technology forward with its project to build Airpods — zero-pollution, cute-as-a-bug smart cars that zip along at 40 m.p.h. via the magic of squeezed air.

Scooped by Sue Tamani!

Universe was 'born in a big chill' › News in Science (ABC Science)

The early universe can be likened to water that froze into ice and cracked as it cooled, say a team of Australian scientists.
Rescooped by Sue Tamani from Science News!

How artificial intelligence is changing our lives


In a sense, AI has become almost mundanely ubiquitous, from the intelligent sensors that set the aperture and shutter speed in digital cameras, to the heat and humidity probes in dryers, to the automatic parking feature in cars. And more applications are tumbling out of labs and laptops by the hour.

“It’s an exciting world,” says Colin Angle, chairman and cofounder of iRobot, which has brought a number of smart products, including the Roomba vacuum cleaner, to consumers in the past decade.

What may be most surprising about AI today, in fact, is how little amazement it creates. Perhaps science-fiction stories with humanlike androids, from the charming Data (“Star Trek“) to the obsequious C-3PO (“Star Wars”) to the sinister Terminator, have raised unrealistic expectations. Or maybe human nature just doesn’t stay amazed for long.

“Today’s mind-popping, eye-popping technology in 18 months will be as blasé and old as a 1980 pair of double-knit trousers,” says Paul Saffo, a futurist and managing director of foresight at Discern Analytics in San Francisco. “Our expectations are a moving target.”


The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.


Entrepreneurs like iRobot’s Mr. Angle aren’t fussing over whether today’s clever gadgets represent “true” AI, or worrying about when, or if, their robots will ever be self-aware. Starting with Roomba, which marks its 10th birthday this month, his company has produced a stream of practical robots that do “dull, dirty, or dangerous” jobs in the home or on the battlefield. These range from smart machines that clean floors and gutters to the thousands of PackBots and other robot models used by the US military for reconnaissance and bomb disposal.

While robots in particular seem to fascinate humans, especially if they are designed to look like us, they represent only one visible form of AI. Two other developments are poised to fundamentally change the way we use the technology: voice recognition and self-driving cars.

Via Dr. Stefan Gruenwald, Sakis Koukouvis
oliviersc's comment, October 3, 2012 11:19 AM
A quick tour through my private Circles on Google+. Thanks for this article!
Scooped by Sue Tamani!

What our civilization needs is a billion-year plan | KurzweilAI

Artist’s concept of a Kardashev Type 2 civilization (credit: Chris Cold)

Lt Col Garretson — one of the USAF’s most farsighted and original thinkers — has been at the forefront of USAF strategy on the long-term future in projects such as Blue Horizons (on KurzweilAI — see video), Energy Horizons, Space Solar Power, the AF Futures Game, the USAF Strategic Environmental Assessment, and the USAF RPA Flight Plan. Now in this exclusive to KurzweilAI, he pushes the boundary of long-term thinking about humanity’s survival out to the edge … and beyond. — Ed.


The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. government.


It isn’t enough just to plan for two or 20, or even the fabled Chinese 100-year periods. We need to be thinking and planning on the order of billions of years. Our civilization needs inter-generational plans and goals that span as far out as we can forecast significant events.


For this discussion, I define a “significant event” as an event about which we have foreknowledge and which will fundamentally change our planning assumptions.

Scooped by Sue Tamani!

Google spans entire planet with GPS-powered database | KurzweilAI

Google spans entire planet with GPS-powered database | KurzweilAI | anti dogmanti |
(Credit: NASA)

Wired Enterprise reports that Google has published a research paper (open access) detailing Spanner, which Google says is the first database that can quickly store and retrieve information across a worldwide network of data centers while keeping that information “consistent” — meaning all users see the same collection of information at all times.

Spanner borrows techniques from some of the other massive software platforms Google built for its data centers, but at its heart is something completely new. Spanner plugs into a network of servers equipped with super-precise atomic clocks and GPS antennas, using these time keepers to more accurately synchronize the distribution of data across such a vast network.

“If you want to know what the large-scale, high-performance data processing infrastructure of the future looks like, my advice would be to read the Google research papers that are coming out right now,” Mike Olson, the CEO of Hadoop specialist Cloudera, said at a recent event in Silicon Valley.

Facebook is already building a system that’s somewhat similar to Spanner, in that it aims to juggle information across multiple data centers. Judging from our discussions with Facebook about this system — known as Prism — it’s quite different from Google’s creation.

The genius of the platform lies in something Google calls the TrueTime API. API is short for application programming interface, but in this case, Google is referring to a central data feed that its servers plug into. Basically, TrueTime uses those GPS antennas and atomic clocks to get Google’s entire network running in lock step.

To understand TrueTime, you have to understand the limits of existing databases. Today, there are many databases designed to store data across thousands of servers. Most were inspired either by Google’s BigTable database or a similar storage system built by Amazon known as Dynamo. They work well enough, but they aren’t designed to juggle information across multiple data centers — at least not in a way that keeps the information consistent at all times.

According to Andy Gross — the principal architect at Basho, whose Riak database is based on Amazon Dynamo — the problem is that servers must constantly communicate to ensure they correctly store and retrieve data, and all this back-and-forth ends up bogging down the system if you spread it across multiple geographic locations. “You have to do a whole lot of communication to decide the correct order for all the transactions,” Gross says, “and the latencies you get are typically prohibitive for a fast database.”

James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Yasushi Saito, Michal Szymaniak, Christopher Taylor, Ruth Wang, Dale Woodford, Spanner: Google's Globally-Distributed Database, to appear in: OSDI'12: Tenth Symposium on Operating System Design and Implementation, Hollywood, CA, October, 2012 (open access)

Topics: Computers/Infotech/UI | Innovation/Entrepreneurship | Internet/Telecom

Rescooped by Sue Tamani from Geography Education!

The True Size Of Africa

The True Size Of Africa | anti dogmanti |

This is another old classic image that I might have shared earlier but it merits repeating. As Salvatore Natoli (a leader in geography education) once said, "In our society we unconsciously equate size with importance and even power." This is one reason why many people have underestimated the true size of Africa relative to places that they view as more important or more powerful.

Tags: mapping, Africa, perspective, images

Via Seth Dixon
Afrikasources's curator insight, January 15, 2014 10:10 AM

Just a reminder

Edelin Espino's curator insight, December 5, 2014 11:01 AM

It is incredibly big, but unfortunately most of the northern area is covered by the vast Sahara, and much of the land is infertile.

Jason Schneider's curator insight, March 9, 2015 4:29 PM

As we can see, there's a little overlapping here and some empty spots but it's pretty accurate. The United States and China are in the top 5 largest countries of the world list and they still fit in the 2nd largest continent of the world, Africa. I'd like to see the size comparison between Africa and Russia. I did some research on that and it turns out that Russia is a little over half the size of Africa, maybe the size of the combination of the United States and China.

Scooped by Sue Tamani!

New benchtop sequencers shipping; sequence genome in under a day | KurzweilAI

New benchtop sequencers shipping; sequence genome in under a day | KurzweilAI | anti dogmanti |

Life Technologies began shipments of its new Ion Proton benchtop sequencing instrument on Thursday, but the sequencing race is still on.

Illumina and Oxford Nanopore have also promised new machines by the end of the year, each capable of sequencing a human genome in less than a day, Nature News Blog reports.

The Ion Proton machine costs $150,000 and performs 4-hour sequencing runs using $1,000 disposable chips. A chip can sequence 60–80 million filtered DNA fragments, with lengths of up to 200 bases, enough to provide several-fold coverage on a human exome (the protein-coding genes). A second-generation chip capable of sequencing a full human genome is scheduled to be released next year.

The machines from each company read DNA bases in different ways. Illumina reads different colors of light depending on whether A, C, T, or G is incorporated. Life Technologies’ Ion Proton detects tiny changes in pH as different bases are added, and Oxford Nanopore detects disruptions in an electrical current as a single DNA molecule slides through a nano-sized hole.

Competition in the clinical setting is set to be fierce. Both Life and Illumina have released products for use with clinical samples, and plan to file for FDA approval of their instruments next year.

Photo: The Nanopores: direct, electronic analysis of single molecules (credit: Oxford Nanopore Technologies)


Remember when it took 13 years to sequence the Human Genome? This increase in speed really demonstrates the exponential nature of expanding human knowledge and research.

Til next time

Sue Tamani

Scooped by Sue Tamani!

Brainy beverage: study reveals how green tea boosts brain cell production to aid memory | KurzweilAI

Brainy beverage: study reveals how green tea boosts brain cell production to aid memory | KurzweilAI | anti dogmanti |
Green tea leaves steeping in a gaiwan 盖碗 (credit: Wikimol/Wikimedia Commons)

It has long been believed that drinking green tea is good for the memory.

Now Chinese researchers have discovered how the chemical properties of China’s favorite drink affect the generation of brain cells, providing benefits for memory and spatial learning.

The researchers, led by Professor Yun Bai from the Third Military Medical University, Chongqing, China, focused on the organic chemical EGCG (epigallocatechin-3 gallate), the major polyphenol in green tea.

EGCG can easily pass through the blood-brain barrier and reach the functional parts of the brain. While EGCG is a known antioxidant, the team believes it could also have a beneficial effect against age-related degenerative diseases.

Green tea increases neurogenesis

“We proposed that EGCG can improve cognitive function by impacting the generation of neuron cells, a process known as neurogenesis,” said Bai. “We focused our research on the hippocampus, the part of the brain that processes information from short-term to long-term memory.”

In humans, hippocampal neurogenesis declines with age, and this decline is involved in various neurological disorders, many of which are associated with cognitive deficits.

The team found that EGCG boosts the production of neural progenitor cells, which, like stem cells, can adapt, or differentiate, into various types of cells. The team then used laboratory mice to discover whether this increased cell production gave an advantage in memory or spatial learning.

Smarter mice

Treatment with EGCG increased the expression of “sonic hedgehog” (Shh) signaling pathway components in the adult mouse hippocampus. Left: control; Right: with EGCG. (Credit: Yanyan Wang et al./Molecular Nutrition & Food Research)

“We ran tests on two groups of mice, one which had imbibed EGCG and a control group,” said Bai. “First the mice were trained for three days to find a visible platform in their maze. Then they were trained for seven days to find a hidden platform.”

The team found that the EGCG-treated mice required less time to find the hidden platform. Overall, the results revealed that EGCG enhances learning and memory by improving object recognition and spatial memory.

“We have shown that the organic chemical EGCG acts directly to increase the production of neural progenitor cells, in both in vitro tests and in mice,” concluded Bai. “This helps us to understand the potential for EGCG, and green tea which contains it, to help combat degenerative diseases and memory loss.”

“These findings warrant a general recommendation to consume green tea regularly for disease prevention and provide support that EGCG may have therapeutic uses for treating neurodegenerative disorders,” the researchers conclude.

Human epidemiological data show that green tea consumption is inversely correlated with the incidence of dementia, Alzheimer’s disease, and Parkinson’s disease.

The research is published as an open-access article in Molecular Nutrition & Food Research. It is part of a collection of articles bringing together high quality research on the theme of food science and technology with particular relevance to China.

Browse free articles from Wiley’s food science and technology publications including the Journal of Food Science, Journal of the Science of Food and Agriculture and Molecular Nutrition & Food Research.

UPDATE Sept. 9, 2012: Prof. Yun Bai responds to this question: “The paper mentions that ‘it is likely that a daily 1500–1600 mg bolus of EGCG in humans would achieve physiological levels.’ How many cups of tea does that correspond to?”

“I see someone said in the comments ‘it would take at least 50 cups.’ Of course, we need not do so. First, I have to clarify the difference between green tea and coffee. When you have a cup of coffee, you must add new coffee and water for the second cup. But for green tea, the tea is always there; you only need to add water to release the effective components. We know EGCG is about 10–15% of the weight of green tea, so you need 10–15 g [0.4–0.5 oz] of green tea — just add hot water, and the EGCG will be in the water.
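The arithmetic behind Prof. Bai's estimate can be checked directly. This is a sketch of the calculation only; the 1,500–1,600 mg daily bolus comes from the paper and the 10–15% EGCG content figure from his quote above.

```python
# Daily EGCG bolus suggested in the paper, in milligrams.
target_egcg_mg = (1500, 1600)
# EGCG as a fraction of dry leaf weight, per Prof. Bai's quote.
egcg_fraction = (0.10, 0.15)

# Least tea needed: richest leaves against the lower target.
min_grams = target_egcg_mg[0] / 1000 / egcg_fraction[1]  # ≈ 10 g
# Most tea needed: poorest leaves against the higher target.
max_grams = target_egcg_mg[1] / 1000 / egcg_fraction[0]  # ≈ 16 g

print(f"{min_grams:.0f}-{max_grams:.0f} g of leaves per day")
```

The result, roughly 10–16 g of leaves, matches the 10–15 g range in the quote, and explains why repeated infusions of the same leaves (rather than 50 separate cups) suffice.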

“We plan to do some research on EGCG and disease, such as Alzheimer’s disease and diabetes, maybe from new target cells and molecules.”

Thanks to

Til next time

Sue Tamani

Scooped by Sue Tamani!

Life Without Free Will : Sam Harris

Life Without Free Will : Sam Harris | anti dogmanti |
Sam Harris, neuroscientist and author of the New York Times bestsellers, The End of Faith, Letter to a Christian Nation, and The Moral Landscape.


One of the most common objections to my position on free will is that accepting it could have terrible consequences, psychologically or socially. This is a strange rejoinder, analogous to what many religious people allege against atheism: Without a belief in God, human beings will cease to be good to one another. Both responses abandon any pretense of caring about what is true and merely change the subject. But that does not mean we should never worry about the practical effects of holding specific beliefs.

I can well imagine that some people might use the nonexistence of free will as a pretext for doing whatever they want, assuming that it’s pointless to resist temptation or that there’s no difference between good and evil. This is a misunderstanding of the situation, but, I admit, a possible one. There is also the question of how we should raise children in light of what science tells us about the nature of the human mind. It seems doubtful that a lecture on the illusoriness of free will should be part of an elementary school curriculum.

In my view, the reality of good and evil does not depend upon the existence of free will, because with or without free will, we can distinguish between suffering and happiness. With or without free will, a psychopath who enjoys killing children is different from a pediatric surgeon who enjoys saving them. Whatever the truth about free will, these distinctions are unmistakable and well worth caring about.

Might free will somehow be required for goodness to be manifest? How, for instance, does one become a pediatric surgeon? Well, you must first be born, with an intact nervous system, and then provided with a proper education. No freedom there, I’m afraid. You must also have the physical talent for the job and avoid smashing your hands at rugby. Needless to say, it won’t do to be someone who faints at the sight of blood. Chalk these achievements up to good luck as well. At some point you must decide to become a surgeon—a result, presumably, of first wanting to become one. Will you be the conscious source of this wanting? Will you be responsible for its prevailing over all the other things you want but that are incompatible with a career in medicine? No. If you succeed at becoming a surgeon, you will simply find yourself standing one day, scalpel in hand, at the confluence of all the genetic and environmental causes that led you to develop along this line. None of these events requires that you, the conscious subject, be the ultimate cause of your aspirations, abilities, and resulting behavior. And, needless to say, you can take no credit for the fact that you weren’t born a psychopath.

Of course, I’m not saying that you can become a surgeon by accident—you must do many things, deliberately and well, and in the appropriate sequence, year after year. Becoming a surgeon requires effort. But can you take credit for your disposition to make that effort? To turn the matter around, am I responsible for the fact that it has never once occurred to me that I might like to be a surgeon? Who gets the blame for my lack of inspiration? And what if the desire to become a surgeon suddenly arises tomorrow and becomes so intense that I jettison my other professional goals and enroll in medical school? Would I—that is, the part of me that is actually experiencing my life—be the true cause of these developments? Every moment of conscious effort—every thought, intention, and decision—will have been caused by events of which I am not conscious. Where is the freedom in this?

If we cannot assign blame to the workings of the universe, how can evil people be held responsible for their actions? In the deepest sense, it seems, they can’t be. But in a practical sense, they must be. I see no contradiction in this. In fact, I think that keeping the deep causes of human behavior in view would only improve our practical response to evil. The feeling that people are deeply responsible for who they are does nothing but produce moral illusions and psychological suffering.

Imagine that you are enjoying your last nap of the summer, perhaps outside in a hammock somewhere, and are awakened by an unfamiliar sound. You open your eyes to the sight of a large bear charging at you across the lawn. It should be easy enough to understand that you have a problem. If we swap this bear for a large man holding a butcher knife, the problem changes in a few interesting ways, but the sudden appearance of free will in the brain of your attacker is not among them.
Should you survive this ordeal, your subsequent experience is liable to depend—far too much, in my view—on the species of your attacker. Imagine the difference between seeing the man who almost killed you on the witness stand and seeing the bear romping at the zoo. If you are like many victims, you might be overcome in the first instance by feelings of rage and hatred so intense as to constitute a further trauma. You might spend years fantasizing about the man’s death. But it seems certain that your experience at the zoo would be altogether different. You might even bring friends and family just for the fun of it: “That’s the beast that almost killed me!” Which state of mind would you prefer—seething hatred or triumphant feelings of good luck and amazement? The conviction that a human assailant could have done otherwise, while a bear could not, would seem to account for much of the difference.

A person’s conscious thoughts, intentions, and efforts at every moment are preceded by causes of which he is unaware. What is more, they are preceded by deep causes—genes, childhood experience, etc.—for which no one, however evil, can be held responsible. Our ignorance of both sets of facts gives rise to moral illusions. And yet many people worry that it is necessary to believe in free will, especially in the process of raising children.
This strikes me as a legitimate concern, though I would point out that the question of which truths to tell children (or childlike adults) haunts every room in the mansion of our understanding. For instance, my wife and I recently took our three-year-old daughter on an airplane for the first time. She loves to fly! As it happens, her joy was made possible in part because we neglected to tell her that airplanes occasionally malfunction and fall out of the sky, killing everyone on board. I don’t believe I’m the first person to observe that certain truths are best left unspoken, especially in the presence of young children. And I would no more think of telling my daughter at this age that free will is an illusion than I would teach her to drive a car or load a pistol.
Which is to say that there is a time and a place for everything—unless, of course, there isn’t. We all find ourselves in the position of a child from time to time, when specific information, however valid or necessary it may be in other contexts, will only produce confusion, despondency, or terror in the context of our life. It can be perfectly rational to avoid certain facts. For instance, if you must undergo a medical procedure for which there is no reasonable alternative, I recommend that you not conduct an Internet search designed to uncover all its possible complications. Similarly, if you are prone to nightmares or otherwise destabilized by contemplating human evil, I recommend that you not read Machete Season. Some forms of knowledge are not for everyone.

Generally speaking, however, I don’t think that the illusoriness of free will is an ugly truth. Nor is it one that must remain a philosophical abstraction. In fact, as I write this, it is absolutely clear to me that I do not have free will. This knowledge doesn’t seem to prevent me from getting things done. Recognizing that my conscious mind is always downstream from the underlying causes of my thoughts, intentions, and actions does not change the fact that thoughts, intentions, and actions of all kinds are necessary for living a happy life—or an unhappy one, for that matter.

I haven’t been noticeably harmed, and I believe I have benefited, from knowing that the next thought that unfurls in my mind will arise and become effective (or not) due to conditions that I cannot know and did not bring into being. The negative effects that people worry about—a lack of motivation, a plunge into nihilism—are simply not evident in my life. And the positive effects have been obvious. Seeing through the illusion of free will has lessened my feelings of hatred for bad people. I’m still capable of feeling hatred, of course, but when I think about the actual causes of a person’s behavior, the feeling falls away. It is a relief to put down this burden, and I think nothing would be lost if we all put it down together. On the contrary, much would be gained. We could forget about retribution and concentrate entirely on mitigating harm. (And if punishing people proved important for either deterrence or rehabilitation, we could make prison as unpleasant as required.)

Understanding the true causes of human behavior does not leave any room for the traditional notion of free will. But this shouldn’t depress us, or tempt us to go off our diets. Diligence and wisdom still yield better results than sloth and stupidity. And, in psychologically healthy adults, understanding the illusoriness of free will should make divisive feelings such as pride and hatred a little less compelling. While it’s conceivable that someone, somewhere, might be made worse off by dispensing with the illusion of free will, I think that on balance, it could only produce a more compassionate, equitable, and sane society.

Scooped by Sue Tamani!

your imagination...

your imagination... | anti dogmanti |
Species Discovered This Millennium, and Other Natural Wonders...

Positive Change via
Positive News
This site brings together just a few of the hundreds and hundreds of new species discovered since the year 2000.
Hopefully, it will inspire us to see the world as a place still being explored, and give us the courage to conserve and protect the fragile, shrinking areas of habitat left on Earth...
areas which, as we see here, contain creatures we haven't even yet Imagined...

Scooped by Sue Tamani!

Singularity University plans massive upgrade | KurzweilAI

Singularity University plans massive upgrade | KurzweilAI | anti dogmanti |
Singularity University is planning to exponentially advance itself, transforming from a provider of short supplemental classes into a sort of innovation pipeline, with a rich website and conference series on one end, an expanding array of classes in the middle, and at the other end incubation labs for startups and corporate skunkworks teams, as well as a strong global alumni network, Wired Business reports.

The ongoing expansion is meant not only to make the university a bigger player in the world of business, but also to influence elected leaders and other policymakers, to spread ideas and values from the university to dozens of foreign countries, and to change the way humans are educated at a time of rapid technological progress.

Scooped by Sue Tamani!

Humans can learn new information during sleep, researchers confirm | KurzweilAI

Humans can learn new information during sleep, researchers confirm | KurzweilAI | anti dogmanti |

A new Weizmann Institute study appearing today in Nature Neuroscience has found that if certain odors are presented after tones during sleep, people will start sniffing when they hear the tones alone — even when no odor is present — both during sleep and, later, when awake.

(Credit: Mark J.Sebastian/Wikimedia Commons)

Scooped by Sue Tamani!

Secrets of ‘SuperAger’ brains | KurzweilAI

Secrets of ‘SuperAger’ brains | KurzweilAI | anti dogmanti |
Memory performance on a word list, showing the SuperAgers performed significantly better than elderly controls.
Scooped by Sue Tamani!

Why do your fingertips go wrinkly in water? › Ask an Expert (ABC Science)

Why do your fingertips go wrinkly in water? › Ask an Expert (ABC Science) | anti dogmanti |

Why doesn't the skin on my fingers and under my feet go wrinkly as my grandchildren's skin does when in the swimming pool for a long time? It used to when I was younger. I am 66. Is there something that happens to our skin as we age so that it doesn't? — Lyndal
