In this talk, author and artist Michael Carroll will explore the bizarre methane-filled seas and soaring dunes of Saturn's largest moon, Titan. Recent advances in our understanding of this planet-sized moon provide enough information for authors to paint a realistic picture of this truly alien world. Following his presentation, he will sign copies of his new science fiction adventure/mystery book, "On the Shores of Titan's Farthest Sea".
"Carroll's descriptions of oily seas and methane monsoons put you in that alien world, front and center…I can imagine future astronauts doing exactly the kinds of things Mike describes. I wish I could be one of them." Alan Bean, Apollo 12 astronaut.
Measurements of the demographics of exoplanets over a range of planet and host star properties provide fundamental empirical constraints on theories of planet formation and evolution. Because of its unique sensitivity to low-mass, long-period, and free-floating planets, microlensing is an essential complement to our arsenal of planet detection methods.
Dr. Gaudi will review the microlensing method and discuss results to date from ground-based microlensing surveys. He will also motivate a space-based microlensing survey with WFIRST-AFTA, which, when combined with the results from Kepler, will yield a nearly complete picture of the demographics of planetary systems throughout the Galaxy.
When diffraction is employed as the primary collector modality of a telescope, instead of reflection or refraction, a new set of performance capabilities emerges. A diffraction-based telescope forms a spectrogram first and an image only as secondary data. The results are startling. In multiple-object mode, a ground-based diffraction telescope can capture 2 million spectra at resolving powers R greater than 100,000 in a single night, better suited to a radial-velocity census of exoplanets than any prior instrument. In a space telescope operating in direct-observation mode, this type of diffraction primary objective could yield spectral analyses of individual exoplanets.
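For scale (an addition here, not from the abstract): the resolving power R of a spectrograph is the ratio of wavelength to the smallest resolvable wavelength difference, and each resolution element corresponds to a Doppler velocity span of c/R:

    R = \frac{\lambda}{\Delta\lambda}, \qquad
    R = 10^{5} \;\Rightarrow\; \Delta\lambda = \frac{500\,\mathrm{nm}}{10^{5}} = 5\,\mathrm{pm}, \qquad
    \Delta v = \frac{c}{R} \approx 3\ \mathrm{km\,s^{-1}}

At R > 100,000 a resolution element thus spans only a few kilometers per second, which is why such spectra are well suited to a radial-velocity census.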
Dr. Stephen Wolfram, founder and CEO of Wolfram Research and creator of Mathematica, Wolfram|Alpha, and the Wolfram Language, will come to the SETI Institute to discuss his latest thinking about the relation between searching for complex behavior in the computational universe of simple programs, using this in creating AI, and searching for intelligence elsewhere in our physical universe.
Andreas Dewes explains why quantum computing is interesting, how it works, and what you actually need to build a working quantum computer. He uses the superconducting two-qubit quantum processor that he built during his PhD as an example to explain its basic building blocks. He shows how this processor can be used to achieve so-called quantum speed-up for a search algorithm run on it. Finally, he gives a short overview of the current state of superconducting quantum computing and of Google's recently announced effort to build a working quantum computer in cooperation with one of the leading research groups in this field.
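As a minimal sketch of the speed-up at stake (the description does not name the algorithm; Grover's search is the standard example, and a two-qubit register is the smallest interesting case), here is a classical simulation of one Grover iteration in Python:

    import numpy as np

    # Grover search on a 2-qubit register (N = 4 basis states).
    # "marked" is the index the oracle recognizes; this toy value is ours.
    N = 4
    marked = 2

    # Uniform superposition |s>, as produced by Hadamards on |00>
    state = np.full(N, 1 / np.sqrt(N))

    # Oracle: flips the phase of the marked basis state
    oracle = np.eye(N)
    oracle[marked, marked] = -1

    # Diffusion operator: inversion about the mean, 2|s><s| - I
    s = np.full((N, 1), 1 / np.sqrt(N))
    diffusion = 2 * (s @ s.T) - np.eye(N)

    # For N = 4, a single Grover iteration suffices (~(pi/4) * sqrt(N) rounds)
    state = diffusion @ (oracle @ state)

    print(np.round(np.abs(state) ** 2, 3))  # -> [0. 0. 1. 0.]

Classically, finding the marked item among N entries takes on the order of N oracle queries; Grover's algorithm needs only about (pi/4) * sqrt(N), and for N = 4 a single iteration succeeds with certainty.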
Google recently announced that it is partnering with John Martinis, one of the leading researchers in superconducting quantum computing, to build a working quantum processor. This announcement has sparked renewed interest in a topic that was previously mainly of academic interest. So if Google thinks it is worth the trouble to build quantum computers, surely there must be something to them after all?
Andrei Linde of Stanford University is a scientist who made significant contributions to the development of the inflationary theory that describes our universe's exponential expansion within the tiniest fraction of a second after the Big Bang.
Early philosophers believed Earth was positioned at the center of the universe. Understanding changed as scientists determined that the sun is at the center of our solar system, and scientists now estimate that our sun is one of billions of stars in a single galaxy, with more than a hundred billion galaxies in the universe.
Today, many think of our universe as a single, uniformly shaped, and continuously expanding spherical balloon, a cosmological model supported by observations using the world's most powerful telescopes. According to the inflationary theory Linde helped develop, however, the cosmos may be less like what has been observed and more like a "multiverse" consisting of many different, exponentially large parts, each with its own laws of physics.
During his BSA Distinguished Lecture, Linde will discuss inflationary theory in light of recent observational data. He will also explain how a new cosmological theory of an inflationary multiverse, which is supported by developments in string theory, would change accepted views on the origin and global structure of our universe, as well as where and how we fit in it.
Linde has authored or coauthored more than 280 papers on topics including particle physics and cosmology, as well as two books, "Inflation and Quantum Cosmology" and "Particle Physics and Inflationary Cosmology." He has been honored with many prestigious awards and recognitions from institutions around the world, ranging from the Academy of Sciences of the Soviet Union's Lomonosov Prize in 1978 and the International Centre for Theoretical Physics' Dirac Medal in 2002 to the Breakthrough Prize in Fundamental Physics in 2012 and the Kavli Prize in 2014. He was elected a member of the National Academy of Sciences in the United States in 2008 and the American Academy of Arts and Sciences in 2011.
Linde earned a Ph.D. from Lebedev Physical Institute in Russia in 1975 and continued his work there until 1989. He became a staff member at CERN in Europe in 1989 and a professor of physics at Stanford University in 1990. He was honored with the Harald Trap Friis Professorship at Stanford in 2008.
This BSA Distinguished Lecture is part of the Brookhaven Forum 2015, a workshop being held at Brookhaven Lab devoted to new developments in high energy physics with an emphasis on the latest data from the Large Hadron Collider at CERN.
A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk the lives of a busload of civilians or lose a long-sought terrorist. How does a machine make an ethical decision? Can it “learn” to choose in situations that would strain human decision making? Can morality be programmed? These questions and more will be discussed as leading AI experts, roboticists, neuroscientists, and legal experts debate the ethics and morality of thinking machines.
In 1960 two seminal papers in SETI were published, offering two visions for the field. Giuseppe Cocconi and Philip Morrison proposed detecting deliberate radio signals ("communication SETI"), while Freeman Dyson proposed detecting the inevitable effects of massive energy supplies and artifacts on their surroundings ("artifact SETI"). While communication SETI has now had several career-long practitioners, artifact SETI has, until recently, not been a vibrant field of study.
The launches of the Kepler and WISE satellites have greatly renewed interest in the field, however, and the recent Breakthrough Listen Initiative has provided new motivation for finding good targets for communication SETI. Dr. Wright will discuss the progress of the Ĝ Search for Extraterrestrial Civilizations with Large Energy Supplies, including its justification and motivation, its waste-heat search strategy and first results, and a framework for a search for megastructures via transit light curves. The last of these led to the identification of KIC 8462852 (a.k.a. "Tabby's Star") as a candidate ETI host. This star, discovered by Boyajian and the Zooniverse Planet Hunters, exhibits several apparently unique and so-far unexplained photometric properties, and continues to confound natural explanation.
How should we balance the benefits of limiting or possibly eliminating a disease that kills 1,000 people a day against the possible disruption of an ecosystem? Valentino Gantz and Ethan Bier recently published a Science paper describing a new mechanism of "gene drive."
This is not just a matter of editing the genes of a single individual, but an opportunity to make a change that will drive that change into all descendants of the original individual. Their publication resulted in international interest because of the broad potential applications of this new technology, which could rapidly produce beneficial genetic changes.
Others have argued that, because of the risks and implications of such research, the work should not even have been published.
Machine learning is typically classified into three broad categories, depending on the nature of the learning "signal" or "feedback" available to a learning system. These are:
Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in which it must achieve a certain goal (such as driving a vehicle), without a teacher explicitly telling it whether it has come close to its goal. Another example is learning to play a game by playing against an opponent. (A minimal sketch follows this list.)
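As a minimal illustration of the reinforcement-learning setting, here is a tabular Q-learning agent on a five-cell corridor, rewarded only on reaching the rightmost cell; the toy environment and all constants are invented for illustration, and no teacher ever labels the correct action:

    import random

    N_STATES = 5            # cells 0..4; reaching cell 4 ends the episode
    ACTIONS = [-1, +1]      # step left or right
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit, occasionally explore
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # only the reward signal drives learning, never a "teacher"
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    # learned policy: move right (+1) from every interior cell
    print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})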
Between supervised and unsupervised learning is semi-supervised learning, where the teacher gives an incomplete training signal: a training set with some (often many) of the target outputs missing. Transduction is a special case of this principle where the entire set of problem instances is known at learning time, except that part of the targets are missing.
Among other categories of machine learning problems, learning to learn learns its own inductive bias based on previous experience. Developmental learning, elaborated for robot learning, generates its own sequences (also called a curriculum) of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with human teachers, using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
Another categorization of machine learning tasks arises when one considers the desired output of a machine-learned system (a short sketch of all three follows this list):
In classification, inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to one or more (multi-label classification) of these classes. This is typically tackled in a supervised way. Spam filtering is an example of classification, where the inputs are email (or other) messages and the classes are "spam" and "not spam".
In regression, also a supervised problem, the outputs are continuous rather than discrete.
In clustering, a set of inputs is to be divided into groups. Unlike in classification, the groups are not known beforehand, making this typically an unsupervised task.
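A short sketch of the three task types using scikit-learn, with tiny invented toy data (illustrative only, not from the original text):

    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression
    from sklearn.cluster import KMeans

    X = np.array([[0.0], [1.0], [2.0], [3.0]])

    # Classification: discrete labels are given for the training inputs
    y_class = np.array([0, 0, 1, 1])
    clf = LogisticRegression().fit(X, y_class)
    print(clf.predict([[2.5]]))          # -> [1]

    # Regression: the target is continuous rather than discrete
    y_reg = np.array([0.1, 0.9, 2.1, 2.9])
    reg = LinearRegression().fit(X, y_reg)
    print(reg.predict([[1.5]]))          # -> roughly [1.5]

    # Clustering: no labels at all; groups are discovered from the inputs
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    print(labels)                        # e.g. [0 0 1 1]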
Presented at TTI/Vanguard's Networks, Sensors, & Mobility, May 3–4, 2016, San Francisco, CA. Alex Kendall, Department of Engineering, University of Cambridge.
We can now teach machines to recognize objects. However, in order to teach a machine to “see” we need to understand geometry as well as semantics. Given an image of a road scene, for example, an autonomous vehicle needs to determine where it is, what's around it, and what's going to happen next. This requires not only object recognition, but depth, motion and spatial perception, and instance-level identification. A deep learning architecture can achieve all these tasks at once, even when given a single monocular input image. Surprisingly, jointly learning these different tasks results in superior performance, because it causes the deep network to uncover a better deep representation by explicitly supervising more information about the scene. This method outperforms other approaches on a number of benchmark datasets, such as SUN RGB-D indoor scene understanding and CityScapes road scene understanding. Besides cars, potential applications include factory robotics and systems to help the blind.
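A minimal PyTorch sketch of the joint multi-task idea: one shared encoder supervised by both a segmentation head and a depth head through a combined loss. The architecture and sizes here are invented for illustration and are not the model described in the talk:

    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, n_classes=19):
            super().__init__()
            # shared representation, shaped by supervision from both tasks
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.seg_head = nn.Conv2d(64, n_classes, 1)   # per-pixel class scores
            self.depth_head = nn.Conv2d(64, 1, 1)         # per-pixel depth

        def forward(self, x):
            h = self.encoder(x)
            return self.seg_head(h), self.depth_head(h)

    net = MultiTaskNet()
    img = torch.randn(2, 3, 64, 64)                # single monocular images
    seg_gt = torch.randint(0, 19, (2, 64, 64))     # class label per pixel
    depth_gt = torch.rand(2, 1, 64, 64)            # depth map per pixel

    seg_pred, depth_pred = net(img)
    # joint loss: both tasks supervise the one shared representation
    loss = nn.functional.cross_entropy(seg_pred, seg_gt) \
         + nn.functional.l1_loss(depth_pred, depth_gt)
    loss.backward()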
Viruses are by far the most abundant biological entities in the oceans, comprising approximately 94% of the nucleic-acid-containing particles. However, because of their small size they comprise only approximately 5% of the biomass. By contrast, even though prokaryotes represent less than 10% of the nucleic-acid-containing particles they represent more than 90% of the biomass.
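Taking the quoted percentages at face value, the implied average mass ratio follows directly: viruses carry about 5% of the biomass with 94% of the particles, while prokaryotes carry over 90% of the biomass with under 10% of the particles, so

    \frac{m_{\text{prok}}}{m_{\text{virus}}} \approx \frac{0.90 / 0.10}{0.05 / 0.94} \approx 1.7 \times 10^{2}

i.e., an average prokaryotic cell is roughly two orders of magnitude more massive than an average virus particle.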
Self-replicating robots are referred to as von Neumann machines after John von Neumann, who first proposed them. Since then, the potential of these machines and their ability to proliferate throughout known space has made galactic colonization seem not only possible but perhaps inevitable.
References: von Neumann, John. Theory of Self-Reproducing Automata. Edited by A. Burks. Urbana, IL: University of Illinois Press, 1966.
What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.
The thing we have to think about as we think about the future of these things is the goals. That's what humans contribute; that's what our civilization contributes. Execution of those goals is what we can increasingly automate; we've been automating it for thousands of years, and we will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.
There are many questions that come from this. For example, we've got these great AIs and they're able to execute goals, how do we tell them what to do?...
STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science.
Spider silk is a protein fiber spun by spiders. Spiders use their silk to make webs or other structures, which function as sticky nets to catch other animals, or as nests or cocoons to protect their offspring, or to wrap up prey. They can also use their silk to suspend themselves, to float through the air, or to glide away from predators. Most spiders vary the thickness and stickiness of their silk for different uses.
In some cases, spiders may even use silk as a source of food. While methods have been developed to collect silk from a spider by force, it is difficult to gather silk from many spiders in a small space, in contrast to silkworm 'farms'.
Replicating the complex conditions required to produce fibers comparable to spider silk has proven difficult in the laboratory. What follows is a list of notable attempts at this problem. However, in the absence of hard data accepted by the relevant scientific community, it is difficult to judge how successful or constructive these attempts have been.
One approach that does not involve farming spiders is to extract the spider silk gene and use other organisms to produce the spider silk. In 2000, the Canadian biotechnology company Nexia successfully produced spider silk protein in transgenic goats that carried the gene for it; the milk produced by the goats contained significant quantities of the protein, 1–2 grams of silk proteins per liter of milk. Attempts to spin the protein into a fiber similar to natural spider silk resulted in fibers with tenacities of 2–3 grams per denier (see BioSteel). Nexia used wet spinning and squeezed the silk protein solution through small extrusion holes in order to simulate the behavior of the spinneret, but this procedure has so far not been sufficient to replicate the properties of native spider silk.
Extrusion of protein fibers in an aqueous environment is known as "wet-spinning". This process has so far produced silk fibers of diameters ranging from 10 to 60 μm, compared to diameters of 2.5–4 μm for natural spider silk.
In March 2010, researchers from the Korea Advanced Institute of Science & Technology (KAIST) succeeded in making spider silk directly using the bacterium E. coli, modified with certain genes of the spider Nephila clavipes. This approach eliminates the need to milk spiders and allows spider silk to be manufactured in a more cost-effective manner.
The company Kraig Biocraft Laboratories has used research from the Universities of Wyoming and Notre Dame in a collaborative effort to create a silkworm that has been genetically altered to produce spider silk. In September 2010 it was announced at a press conference at the University of Notre Dame that the effort had been successful.
The company AMSilk has succeeded in producing spidroin in bacteria and making it into spider silk. It is now focusing on increasing the production rate of the silk.
The theory called Holographic Space-time is an attempt to generalize String Theory so that one can discuss local regions of space-time. Its key feature is a mapping between quantum concepts and the geometry of space-time. Causality conditions are imposed, as in quantum field theory, by insisting that quantities which cannot exhibit mutual quantum interference belong to causally separated regions. Geometrical sizes are encoded via the Holographic Principle: the number of quantum states in a region is determined by the area of a certain surface surrounding that region.
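In its standard Bekenstein-Hawking form (stated here for concreteness; the abstract itself gives no formula), the holographic count reads

    S = \frac{k_B\,A}{4\,\ell_P^{2}}, \qquad N \sim e^{S/k_B},

so the number of quantum states N available to a region grows exponentially with the area A of its bounding surface, measured in units of the Planck length \ell_P.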
In 1995, Jacobson showed that one could derive Einstein's equations by imposing this principle in every space-time region. Einstein's equations are the hydrodynamic equations of a system whose statistics obey the holographic connection between space-time and the number of quantum states. Dr. Banks will outline the application of these ideas to a new model of the early inflationary universe, as well as to a rough prediction of the masses of supersymmetric particles.