Elon Musk believes Tesla cars will be fully autonomous by 2018 and will have an all-electric range of more than 1,000km, double today's figure. He also predicts that by 2035 no new car will require a driver.
A renowned futurist and CEO of Tesla and SpaceX, Musk predicts that the range of the Model S can be increased by between 5% and 10% every year as battery technology improves. He also claims the AutoPilot self-driving feature currently being beta tested by Tesla will be rolled out to all compatible Model S vehicles by the end of October. AutoPilot provides automatic steering, accelerating and braking on motorways, but only in countries which have updated their road laws to allow it.
In an interview on Dutch television, Musk said: "My guess is that we could probably break 1,000km within a year or two. I'd say 2017 for sure... in 2020 I guess we could probably make a car go 1,200km. I think maybe 5-10% a year [improvement], something like that." A Model S was recently driven 452 miles (723km) on a single charge, albeit at an average speed of just 24mph; Musk says his predictions account for driving at a more realistic speed. He added that AutoPilot will be switched on in a month's time: "My guess for when we'll have full autonomy is about three years, approximately three years." This is much sooner than 2020, when analysts had expected to see autonomous cars from Google - and possibly Apple - go on sale.
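Musk's 5-10% annual improvement figure implies a concrete doubling time, which a quick compounding calculation (an illustration of the arithmetic, not a claim from the article) makes explicit:

```python
import math

# Years needed to double range at a constant annual improvement rate.
# Illustrates the 5-10%/year battery-improvement figure quoted above.
def years_to_double(annual_rate):
    return math.log(2) / math.log(1 + annual_rate)

for rate in (0.05, 0.10):
    print(f"At {rate:.0%}/year, range doubles in ~{years_to_double(rate):.1f} years")
```

At 10% a year the range doubles in roughly seven years, broadly consistent with reaching 1,200km by about 2020; at 5% it takes about fourteen.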
But this is with a caveat. "Regulators will not allow full autonomy for one to two years – maybe one to three years – after that," Musk said. "It depends on the particular market; in some markets the regulators will be more forward leaning than others. But in terms of when [full autonomy] will be technologically possible, I think three years."
Looking even further ahead, Musk predicts that – providing "civilisation is still around" – by 2035 "we'll see a very large percentage of cars being electric [on the road] probably all cars being built will have full autonomy in 20 years." Again, however, a caveat exists, in that cars are not replaced as often as smartphones, so it will take a considerable amount of time for all vehicles on the world's roads (around 2.5 billion) to become electric and autonomous. Musk reckons it would take another 20 years to fully replace all cars and trucks being used in 2035 with electric vehicles.
"The largest earthquake ever recorded by instruments struck southern Chile on May 22, 1960. This 9.5 magnitude earthquake generated a tsunami that crossed the Pacific Ocean, killing as many as 2000 people in Chile and Peru, 61 people in Hilo, Hawaii, and 142 people in Japan as well as causing damage in the Marquesas Islands (Fr. Polynesia), Samoa, New Zealand, Australia, the Philippines, and in Alaska's Aleutian Islands. To see how this tsunami compares with two recent tsunamis from Chile, please watch http://youtu.be/qoxTC3vIF1U "
"Chess, after all, is special; it requires creativity and advanced reasoning. No computer could match humans at chess." That was a common argument before IBM surprised the world. In 1997, its Deep Blue computer defeated the reigning World Chess Champion, Garry Kasparov.
Matthew Lai records the rest: "In the ensuing two decades, both computer hardware and AI research advanced the state-of-art chess-playing computers to the point where even the best humans today have no realistic chance of defeating a modern chess engine running on a smartphone."
Now Lai has another surprise. His report on how a computer can teach itself chess—and not in the conventional way—is on arXiv. The title of the paper is "Giraffe: Using Deep Reinforcement Learning to Play Chess." Departing from the conventional method of teaching computers chess through hardcoded rules, the project set out to use machine learning to figure out the game itself. In his work, Lai applied deep learning to chess: "We use deep networks to evaluate positions, decide which branches to search, and order moves."
As for other chess engines, Lai wrote, "almost all chess engines in existence today (and all of the top contenders) implement largely the same algorithms. They are all based on the idea of the fixed-depth minimax algorithm first developed by John von Neumann in 1928, and adapted for the problem of chess by Claude E. Shannon in 1950."
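The fixed-depth minimax idea the quote refers to can be sketched in a few lines. The toy game tree below stands in for real chess positions, and the leaf values are arbitrary stand-ins for a static evaluation function:

```python
# Minimal sketch of fixed-depth minimax on a hand-built game tree.
# The maximizing player picks the branch with the best worst-case value.
def minimax(node, depth, maximizing):
    # At the depth limit or a leaf, return the static evaluation.
    if depth == 0 or not node.get("children"):
        return node["value"]
    scores = [minimax(c, depth - 1, not maximizing) for c in node["children"]]
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizer chooses a branch, then the minimizer replies.
tree = {"children": [
    {"children": [{"value": 3}, {"value": 5}]},   # minimizer would pick 3
    {"children": [{"value": 2}, {"value": 9}]},   # minimizer would pick 2
]}
print(minimax(tree, 2, True))  # maximizer prefers the branch worth 3
```

Real engines combine this search with pruning and a hand-tuned evaluation; Giraffe's departure is to learn the evaluation itself.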
Giraffe is a chess engine that uses self-play to discover all its domain-specific knowledge. "Minimal hand-crafted knowledge is given by the programmer," Lai said.
Results? Lai said, "The results showed that the learned system performs at least comparably to the best expert-designed counterparts in existence today, many of which have been fine tuned over the course of decades."
OK, not at super-Grandmaster levels, but impressive enough. "With all our enhancements, Giraffe is able to play at the level of an FIDE [Fédération Internationale des Échecs, or World Chess Federation] International Master on a modern mainstream PC," he stated. "While that is still a long way away from the top engines today that play at super-Grandmaster levels, it is able to defeat many lower-tier engines, most of which search an order of magnitude faster."
Addressing the value of Lai's work, MIT Technology Review stated: "In a world first, an artificial intelligence machine plays chess by evaluating the board rather than using brute force to work out every possible move." Giraffe, said the review, taught itself to play chess by evaluating positions much as humans do.
"The desperate men, women, and children flooding into Europe from the Middle East and Africa are not the only people moving along ever-shifting and dangerous migration routes. Last year saw the highest levels of global forced displacement on record—59.5 million individuals left their homes in 2014 due to 'persecution, conflict, generalized violence, or human rights violations' according to the United Nations. That's 8.3 million more people than the year before."
In constructor theory, physical laws are formulated only in terms of which tasks are possible (with arbitrarily high accuracy, reliability, and repeatability), and which are impossible, and why – as opposed to what happens, and what does not happen, given dynamical laws and initial conditions. A task is impossible if there is a law of physics that forbids it. Otherwise, it is possible – which means that a constructor for that task – an object that causes the task to occur and retains the ability to cause it again – can be approximated arbitrarily well in reality. Car factories, robots and living cells are all accurate approximations to constructors.
For constructors that survive for long, information in the recipe must be digital, to make reliable error correction possible after copying: if not, there would be a fundamental limit to how well an error can be detected, which would lead to a build up of errors and a limit to the accuracy and resiliency achievable. A self‑reproducing cell must do all this, too. The parent cell contains a recipe – DNA – with all the instructions to construct a new cell (recipe excluded). This means that accurate self‑reproduction can occur only in two steps. Using letter-by-letter replication and error-correction, the parent cell makes a high-fidelity copy of the recipe to be inserted in the new cell; then it constructs the copying mechanism plus the rest of the cell afresh, following the recipe. It was the Hungarian-born physicist John von Neumann who first discovered this logic in the 1940s. He was exploring cellular automata – discrete computational models used, for instance, in Conway’s Game of Life, which rely on unphysical dynamical laws. Constructor theory shows that this is the only possible logic for accurate self-reproduction given any no-design laws.
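The point about digital recipes enabling reliable error correction can be illustrated with a simple repetition code (a toy illustration of the general principle, not the cell's actual mechanism): storing each bit three times lets a majority vote undo any single copying error, which no analogue encoding can guarantee.

```python
from collections import Counter

# 3x repetition code: each bit of the "recipe" is stored three times,
# and a majority vote corrects any single flipped copy.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        # Majority vote over each group of three copies.
        out.append(Counter(coded[i:i + 3]).most_common(1)[0][0])
    return out

recipe = [1, 0, 1, 1]
copy = encode(recipe)
copy[4] ^= 1                    # introduce one copying error
assert decode(copy) == recipe   # the majority vote recovers the original
```

Because the symbols are discrete, any error small enough to leave the majority intact is detected and removed on every copy, so errors do not accumulate across generations.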
Constructor theory gives the ‘recipe’ an exact characterisation in fundamental physics. It is digitally coded information that can act as a constructor and has resiliency – the capacity, once it is instantiated in physical systems, to remain so instantiated. In constructor theory, that is called knowledge – a term used here without the usual connotation that it is known by someone: it merely denotes this particular kind of information with causal power and resiliency. And an essential part of the explanation of all distinctive properties of living things (and of accurate constructors in general) is that they contain knowledge in that sense.
Moreover, it is a fundamental idea of constructor theory that any transformation that is not forbidden by the laws of physics can be achieved given the requisite knowledge. There is no third possibility: either the laws of physics forbid it, or it is achievable. This accounts for another aspect of the evolutionary story. Ever better constructors can be produced, without limit, given the relevant knowledge, instantiated in digital recipes.
The early history of evolution is, in constructor-theoretic terms, a lengthy, highly inaccurate, non-purposive construction that eventually produced knowledge-bearing recipes out of elementary things containing none. These elementary things are simple chemicals such as short RNA strands, which can perform only low-fidelity replication, and so do not bear the appearance of design, and are therefore allowed to exist in a pre-biotic environment governed by no-design laws.
Thus the constructor theory of life shows explicitly that natural selection does not need to assume the existence of any initial recipe, containing knowledge, to get started. It shows that, whatever recipes we might find in living things, they do not require ad‑hoc, biocentric or mysterious laws of physics in order to come into existence from elementary initial components. They need only the laws of physics to permit the existence of digital information, plus sufficient time and energy, which are non-specific to life. This adds another deep reason why a unification in our understanding of the phenomena of life and physics is possible. Whatever the laws of physics do not forbid us, we can do. Whether or not we will, depends on how much knowledge we create. It is up to us.
This map points out the highly uneven spatial distribution of (geotagged) Wikipedia articles in 44 language versions of the encyclopaedia. Slightly more than half of the global total of 3,336,473 articles are about places, events and people inside the red circle on the map, occupying only about 2.5% of the world’s land area.
A team of scientists has successfully measured particles of light being “squeezed”, in an experiment that had been written off in physics textbooks as impossible to observe.
Squeezing is a strange phenomenon of quantum physics. It creates a very specific form of light which is “low-noise” and is potentially useful in technology designed to pick up faint signals, such as the detection of gravitational waves.
The standard approach to squeezing light involves firing an intense laser beam at a material, usually a non-linear crystal, which produces the desired effect.
For more than 30 years, however, a theory has existed about another possible technique. This involves exciting a single atom with just a tiny amount of light. The theory states that the light scattered by this atom should, similarly, be squeezed.
Unfortunately, although the mathematical basis for this method – known as squeezing of resonance fluorescence – was drawn up in 1981, the experiment to observe it was so difficult that one established quantum physics textbook despairingly concludes: “It seems hopeless to measure it”.
So it has proven – until now. In the journal Nature, a team of physicists report that they have successfully demonstrated the squeezing of individual light particles, or photons, using an artificially constructed atom, known as a semiconductor quantum dot. Thanks to the enhanced optical properties of this system and the technique used to make the measurements, they were able to observe the light as it was scattered, and proved that it had indeed been squeezed.
Professor Mete Atature, from the Cavendish Laboratory, Department of Physics, and a Fellow of St John’s College at the University of Cambridge, led the research. He said: “It’s one of those cases of a fundamental question that theorists came up with, but which, after years of trying, people basically concluded it is impossible to see for real – if it’s there at all.”
“We managed to do it because we now have artificial atoms with optical properties that are superior to natural atoms. That meant we were able to reach the necessary conditions to observe this fundamental property of photons and prove that this odd phenomenon of squeezing really exists at the level of a single photon. It’s a very bizarre effect that goes completely against our senses and expectations about what photons should do.”
Snails small enough to fit almost 10 times into the eye of a needle have been discovered in Guangxi province, Southern China. With their shells measuring 0.86mm in height, the researchers believe they are the smallest land snails ever found.
The Angustopila dominikae snail – named after the wife of one of the authors of the study published in the journal ZooKeys – is just visible to the naked eye but very difficult to spot. Barna Páll-Gergely, co-author and scientist from Shinshu University in Japan, said he was excited to find the "really really tiny" snails.
“These are very probably extreme endemic species. If we find them in more than one locality that is somewhat surprising,” he said. The seven species of record-breaking “microsnails” were discovered by the researchers while collecting soil samples from the base of limestone rocks in Guangxi province. They say it is likely they are indigenous to the area, with the most similar species living about 621 miles away in Thailand.
A new smart research system developed at Uppsala University accelerates research on cancer treatments by finding optimal treatment drug combinations. It was developed by a research group led by Mats Gustafsson, Professor of Medical Bioinformatics.
The “lab robot” system plans and conducts experiments with many substances, and draws its own conclusions from the results. The idea is to gradually refine combinations of substances so that they kill cancer cells without harming healthy cells.
Instead of just combining a couple of substances at a time, the new lab robot can handle about a dozen drugs simultaneously; the future aim is to handle many more, preferably hundreds. The method is an iterative search for anti-cancer drug combinations. The procedure starts by generating an initial generation (population) of drug combinations, either randomly or guided by biological prior knowledge and assumptions. Each iteration then proposes a new generation of drug combinations based on the results obtained so far, and the procedure runs through generations until a stop criterion for a predefined fitness function is satisfied.
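A minimal sketch of this generational search loop might look as follows. The drug set, mutation scheme, and fitness function here are entirely made up for illustration and are not the Uppsala group's actual criteria:

```python
import random

# Toy generational search: combinations are bit vectors over a dozen
# drugs; a made-up fitness rewards killing cancer cells and penalises
# toxicity (standing in for the real assay results).
random.seed(0)
N_DRUGS = 12

def fitness(combo):
    kill = sum(combo[:6])       # pretend the first six drugs are effective
    toxicity = sum(combo[6:])   # pretend the rest only add side effects
    return kill - 2 * toxicity

def mutate(combo):
    c = combo[:]
    c[random.randrange(N_DRUGS)] ^= 1   # toggle one drug in or out
    return c

population = [[random.randint(0, 1) for _ in range(N_DRUGS)]
              for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # keep the best half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]     # propose new combos

best = max(population, key=fitness)
print(best, fitness(best))
```

In the real system each "fitness evaluation" is a wet-lab experiment the robot plans and runs itself, which is why keeping the number of generations small matters.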
There are a few such laboratories in the world with this type of lab robot, but researchers “have only used the systems to look for combinations that kill the cancer cells, not taking the side effects into account,” says Gustafsson.
The next step: Make the robot system more automated and smarter. The scientists also want to build more knowledge into the guiding algorithm of the robot, such as prior knowledge about drug targets and disease pathways.
In patients whose cancer returns multiple times, the cancer cells sometimes develop resistance to the pharmacotherapy used. The new robot systems may also become important in the efforts to find new drug compounds that make these resistant cells sensitive again.
The research is described in an open-access article published Tuesday (Sept. 22, 2015) in Scientific Reports.
The first brain-to-brain telepathy-like communication between two participants via the Internet has been performed by University of Washington researchers. The experiment used a question-and-answer game. The goal is for the "inquirer" to determine which object the "respondent" is looking at from a list of possible objects. The inquirer sends a question (e.g., "Does it fly?") to the respondent, who answers "yes" or "no" by mentally focusing on one of two flashing LED lights attached to the monitor. The respondent is wearing an electroencephalography (EEG) helmet.
By focusing on the "yes" light, the respondent's EEG device generates a signal that is sent to the inquirer via the Internet, activating a magnetic coil positioned behind the inquirer's head; this stimulates the visual cortex and causes the inquirer to see a flash of light (known as a "phosphene"). A "no" answer works the same way, but the signal is not strong enough to activate the coil.
The experiment, detailed today in an open access paper in PLoS ONE, is the first to show that two brains can be directly linked to allow one person to guess what’s on another person’s mind. It is “the most complex brain-to-brain experiment, I think, that’s been done to date in humans,” said lead author Andrea Stocco, an assistant professor of psychology and researcher at UW’s Institute for Learning & Brain Sciences.
The experiment was carried out in dark rooms in two UW labs located almost a mile apart and involved five pairs of participants, who played 20 rounds of the question-and-answer game. Each game had eight objects and three questions. The sessions were a random mixture of 10 real games and 10 control games that were structured the same way.
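The game's design is information-theoretically tight: three yes/no answers carry exactly log2(8) = 3 bits, just enough to single out one of eight objects. A sketch with hypothetical objects and questions (not the ones the UW team used) shows how each answer halves the candidate set:

```python
import math

# Eight hypothetical objects; three binary answers index exactly one.
objects = ["plane", "kite", "dog", "cat", "car", "bus", "apple", "pear"]
assert math.log2(len(objects)) == 3

def identify(answers):
    # Treat the yes/no answers as bits of a binary index: each answer
    # halves the remaining candidates, like a binary search.
    index = 0
    for a in answers:
        index = index * 2 + (1 if a == "yes" else 0)
    return objects[index]

print(identify(["yes", "no", "yes"]))  # the object at binary index 101
```

This also explains the control-game baseline: with no information flowing, guessing among eight objects succeeds far less often than the 72 percent observed in real games.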
Participants were able to guess the correct object in 72 percent of the real games, compared with just 18 percent of the control rounds. Incorrect guesses in the real games could be caused by several factors, the most likely being uncertainty about whether a phosphene had appeared.
The study builds on the UW team’s initial experiment in 2013, which was the first to demonstrate a direct brain-to-brain connection between humans. Other scientists have connected the brains of rats and monkeys, and transmitted brain signals from a human to a rat, using electrodes inserted into animals’ brains. In the 2013 experiment, the UW team used noninvasive technology to send a person’s brain signals over the Internet to control the hand motions of another person.
The experiment evolved out of research by co-author Rajesh Rao, a UW professor of computer science and engineering, on brain-computer interfaces that enable people to activate devices with their minds. In 2011, Rao began collaborating with Stocco and Prat to determine how to link two human brains together.
In 2014, the researchers received a $1 million grant from the W.M. Keck Foundation that allowed them to broaden their experiments to decode more complex interactions and brain processes. They are now exploring the possibility of “brain tutoring,” transferring signals directly from healthy brains to ones that are developmentally impaired or impacted by external factors such as a stroke or accident, or simply to transfer knowledge from teacher to pupil. The team is also working on transmitting brain states — for example, sending signals from an alert person to a sleepy one, or from a focused student to one who has attention deficit hyperactivity disorder, or ADHD.
“Imagine having someone with ADHD and a neurotypical student,” Prat said. “When the non-ADHD student is paying attention, the ADHD student’s brain gets put into a state of greater attention automatically.”
“Evolution has spent a colossal amount of time to find ways for us and other animals to take information out of our brains and communicate it to other animals in the forms of behavior, speech and so on,” Stocco said. “But it requires a translation. We can only communicate part of whatever our brain processes. “What we are doing is kind of reversing the process a step at a time by opening up this box and taking signals from the brain and with minimal translation, putting them back in another person’s brain,” he said.
"Over one million people in sub-Saharan Africa will contract malaria this year because they live near a large dam, according to a new study which, for the first time, has correlated the location of large dams with the incidence of malaria and quantified impacts across the region. The study finds that construction of an expected 78 major new dams in sub-Saharan Africa over the next few years will lead to an additional 56,000 malaria cases annually."
"A new census shows that Earth is host to a staggering 3.02 trillion trees — more than scientists expected. The most recent estimate only counted 400 billion trees, reports Rachel Ehrenberg. Because prior studies used satellite technology alone instead of including data from on-the-ground tree density studies, writes Ehrenberg, they missed the mark. The researchers also estimate that since human civilization began, 45.8 percent of all trees have been lost."
New drone technologies and innovation are reaching new heights with the ever-increasing need to improve safety and maximize operational efficiency in underground mines. Given the risks and challenges usually faced during mining operations, new drone technology is on its way to replacing labour-intensive methods of surveying, inspection and mapping. This is emerging as a new trend in the mining industry: capturing 3D spatial data in hard-to-access underground areas with the aim of removing much of the risk and increasing safety on site. 3D mapping of large-scale sub-surface environments, such as stopes, drifts and ore passes, will now become easy and cost effective. This heralds a new era for future underground mining.
In terms of safety and operational efficiency, underground mining has unique challenges, some of which can be addressed with technologies, such as 3D laser mapping and unmanned aerial vehicles commonly known as drones. Therefore there is naturally a push toward building such systems that are robust enough for hard underground mining environments as well as cost effective. The future is promising and the baseline technologies are already there that can be used to build high efficiency products. For example, we at Clickmox Solutions have developed a 3D mapping system based on SLAM (Simultaneous Localization And Mapping) algorithm, which can be installed on drones and vehicles. This system is capable of building 3D maps in real time without the need for GPS signal for positioning.
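As a much-simplified illustration of the raw ingredient of such mapping (not Clickmox's actual SLAM system, which must also estimate the drone's pose rather than being given it), here is how a known pose plus lidar range readings convert into map points without any GPS:

```python
import math

# Project lidar range readings from a known drone pose into 2D map
# points. A full SLAM system would simultaneously estimate the pose.
def scan_to_points(pose, ranges, angle_step):
    x, y, heading = pose
    points = []
    for i, r in enumerate(ranges):
        theta = heading + i * angle_step   # beam direction in world frame
        points.append((x + r * math.cos(theta),
                       y + r * math.sin(theta)))
    return points

# Three beams at 90-degree spacing, each hitting a wall 2m away.
pts = scan_to_points(pose=(0.0, 0.0, 0.0), ranges=[2.0, 2.0, 2.0],
                     angle_step=math.pi / 2)
print(pts)  # three wall points around the drone
```

Accumulating such points over many scans, while correcting the pose by matching each new scan against the map so far, is the essence of the SLAM loop.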
All such technologies are tied to the so called Internet-of-Things or IoT, which envisions a highly connected and intelligent system where each individual component talks to other components and makes decisions based on the need at that time. 3D scanning and mapping that facilitate automated collision avoidance and positioning are important parts of such a system. It is envisioned that soon off the shelf technologies based on IoT will make it convenient for miners to use autonomous vehicles and drones to increase safety and productivity.