Alex Kipman wants to create a new reality — one that puts people, not devices, at the center of everything. With HoloLens, the first fully untethered holographic computer, Kipman brings 3D holograms into the real world, enhancing our perceptions so that we can touch and feel digital content. In this magical demo, explore a future without screens, where technology has the power to transport us to worlds beyond our own.
What are continued fractions? How can they tell us which number is the most irrational? What are they good for, and what unexpected properties do they possess? How did Ramanujan exploit their odd features to make striking discoveries? We will look at the role they have played in the study of numbers, chaos, gears and astronomical motions.
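As a taste of the mechanics (a minimal sketch, not drawn from the talk itself): the coefficients of a continued fraction come from repeatedly splitting off the integer part and taking the reciprocal of what remains. The golden ratio's expansion, [1; 1, 1, 1, ...], converges more slowly than any other, which is one precise sense in which it is "the most irrational number".

    def continued_fraction(x, n_terms=10):
        """First n_terms coefficients of the continued fraction of x."""
        coeffs = []
        for _ in range(n_terms):
            a = int(x)           # the integer part is the next coefficient
            coeffs.append(a)
            frac = x - a
            if frac < 1e-12:     # floating-point noise limits how deep we can go
                break
            x = 1.0 / frac       # continue with the reciprocal of the remainder
        return coeffs

    phi = (1 + 5 ** 0.5) / 2
    print(continued_fraction(phi))                # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
    print(continued_fraction(3.141592653589793))  # [3, 7, 15, 1, 292, ...]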
Supersymmetry, Grand Unification and String Theory - Revolutionary new concepts about elementary particles, space and time, and the structure of matter began to emerge in the mid-1970s.
Theory got far ahead of experiment with radical new ideas, but the concepts have never been experimentally tested. Now all that is about to change. The LHC — the Large Hadron Collider — has finally been built and is about to confront theory with experiment.
The final quarter of this ongoing physics series with Leonard Susskind is devoted to supersymmetry, grand unification, and string theory. This course was originally presented in Stanford's Continuing Studies program.
On March 21, 2013, the most detailed map of the infant universe to date was publicly released, showing relic radiation from the Big Bang, imprinted when the universe was just 380,000 years old. This was the first release of cosmological data from the Planck satellite, a mission of the European Space Agency that was initiated in 1996 and involved hundreds of scientists in over thirteen countries. In this lecture, Matias Zaldarriaga, Professor in the School of Natural Sciences, reviews the new results and explains where they fit in our broader understanding of the beginnings and evolution of the universe.
Though robotics seems the technology of the future, it actually has its roots in ancient history. The study and field of robotics can be traced back to approximately 300 B.C. in Ancient China. The Muslim inventor Al-Jazari is credited with inventing a humanoid robot in 1206. By the 20th century, robotics held great fascination and offered virtually limitless possibilities. As technology turned toward the development of computers, the study of robotics moved forward with it. The 1960s saw great advances in both computers and robotics. In 1961, the first industrial robot was used at a General Motors plant. By the 21st century, the automotive industry and manufacturing plants saw widespread use of robots for commercial production.
When it comes to pinpointing the first robot ever created, there is some confusion. In addition to the robot created in 1206 by Al-Jazari, Leonardo da Vinci is credited with building a mechanical man referred to as the "anthrobat." In 1921, Karel Capek used the word "robota" to describe slave-like labor in his play Rossum's Universal Robots, and the word became associated with humanoid prototypes. During the 1940s, Missouri native William Grey Walter developed robot "tortoises" named Elmer and Elsie. In 1961, while at MIT, Heinrich Ernst created MH-1, a computer-operated mechanical hand. That same year, General Motors installed the first industrial robot arm, which kept people safe by performing difficult tasks on the assembly line. Several other robots and computer programs helped advance the field of robotics. It was not until the 1950s that industrial robotics really took off: as technological advances were made in areas such as electronics and computers, robotics made vast strides, and in 1950 Alan Turing proposed the "Turing Test," which attempted to measure whether machines could think for themselves.
In 1961, General Motors put the first industrial robot to work. Named Unimate and created by George Devol and Joseph Engelberger, the robot performed welding and die-casting work at a New Jersey plant. As robotics made an impact on manufacturing and industry, its potential to assist humans was also being explored. In 1963, the "Rancho Arm" was created to assist handicapped people; today, robotics has revolutionized the way people can reclaim the use of lost limbs. More robotic arms followed, and in 1969 the Stanford Arm became the first electrically powered robotic arm controlled by a computer. By 1973, industrial robots were computer-controlled, and the T3, created by Richard Hohn, was available for commercial sale. In 1976, robotic arms were used by both Viking 1 and Viking 2 for space exploration. By 1980, the robotic age was officially underway. More robots were developed and perfected during the last part of the 20th century, with robots finding their way onto the silver screen, into hospital rooms, into space, and into industries such as automotive manufacturing. By the 21st century, humanoid robotics had matured, with companies such as Hanson Robotics developing the humanoid "Jules" and a realistic-looking "Albert Einstein" that walks and talks.
The 21st century sees robotics in everyday use. The automotive industry is full of robots that complete tasks often too difficult for humans to accomplish. Many assembly lines and manufacturing plants are staffed by robots instead of people. Television stations use robotics for video production and filming: where a person once stood behind a camera inside a studio, many of these tasks are now handled by robots. Robotics has also revolutionized the medical industry, as robotic surgery is now a staple of hospital operating rooms. Amputees are experiencing the power of robotics through newly designed limbs that can respond to sensation and pressure much as nerve-equipped human limbs would. The field of robotics continues to advance, bringing new technology to many sectors of society.
There are two types of gene cloning. In vivo cloning uses restriction enzymes and ligases to insert DNA fragments into vectors, which are then cloned in host cells. In vitro cloning uses the polymerase chain reaction (PCR) to create copies of DNA fragments.
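As a back-of-the-envelope illustration (not part of the passage above) of why PCR makes in vitro cloning practical: under ideal conditions, each thermal cycle doubles the number of copies of the target fragment.

    # Idealized PCR amplification: copies double every thermal cycle.
    # Real reactions run somewhat below 2x per cycle, so treat this as an upper bound.
    initial_copies = 10
    for cycles in (10, 20, 30):
        print(f"{cycles} cycles: {initial_copies * 2 ** cycles:,} copies")
    # 30 cycles turn 10 template molecules into over 10 billion copies.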
Legendary scientist David Deutsch puts theoretical physics on the back burner to discuss a more urgent matter: the survival of our species. The first step toward solving global warming, he says, is to admit that we have a problem.
Eric Ladizinsky visited the Quantum AI Lab at Google LA to give a talk "Evolving Scalable Quantum Computers." This talk took place on March 5, 2014.
"The nineteenth century was known as the machine age, the twentieth century will go down in history as the information age. I believe the twenty-first century will be the quantum age". Paul Davies
Quantum computation represents a fundamental paradigm shift in information processing. By harnessing strange, counterintuitive quantum phenomena, quantum computers promise computational capabilities far exceeding those of any conceivable classical computing system for certain applications. These applications may include the core hard problems in machine learning and artificial intelligence, complex optimization, and the simulation of molecular dynamics, the solutions of which could provide huge benefits to humanity.
Realizing this potential requires a concerted scientific and technological effort combining multiple disciplines and institutions, with quantum processor designs and algorithms evolving rapidly as learning evolves. D-Wave Systems has built such a mini-Manhattan-project-like effort and, in just under a decade, created the first special-purpose quantum computers in a scalable architecture that can begin to address real-world problems. D-Wave's first-generation quantum processors (now being explored in conjunction with Google/NASA as well as Lockheed and USC) are showing encouraging signs of being at a "tipping point," matching state-of-the-art solvers for some benchmark problems (and sometimes exceeding them), portending the exciting possibility that in a few years D-Wave processors could exceed the capabilities of any existing classical computing system for certain classes of important problems in machine learning and optimization.
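For context (a detail not spelled out above): D-Wave's processors are quantum annealers built to minimize quadratic unconstrained binary optimization (QUBO) objectives. The sketch below, with an invented three-variable instance, shows the shape of that problem class by brute force; the point of the hardware is to search such energy landscapes without enumerating every assignment.

    import itertools

    # QUBO objective: E(x) = sum over i <= j of Q[i][j] * x[i] * x[j], each x[i] in {0, 1}.
    def qubo_energy(x, Q):
        n = len(x)
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

    def brute_force_minimum(Q):
        n = len(Q)
        return min(itertools.product([0, 1], repeat=n), key=lambda x: qubo_energy(x, Q))

    Q = [[-1,  2,  0],      # invented instance: diagonal terms reward setting bits,
         [ 0, -1,  2],      # off-diagonal penalties discourage adjacent pairs
         [ 0,  0, -1]]
    print(brute_force_minimum(Q))   # (1, 0, 1), the lowest-energy assignment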
In this lecture, Eric Ladizinsky, Co-Founder and Chief Scientist at D-Wave, describes the basic ideas behind quantum computation, D-Wave's unique approach, and the current status and future development of D-Wave's processors. Included are answers to some frequently asked questions about the D-Wave processors, clarifying common misconceptions about quantum mechanics, quantum computing, and D-Wave quantum computers.
Speaker Info: Eric Ladizinsky is a physicist, Co-Founder, and Chief Scientist of D-Wave Systems. Prior to his involvement with D-Wave, Mr. Ladizinsky was a senior member of the technical staff at TRW's Superconducting Electronics Organization (SCEO), where he contributed to building the world's most advanced superconducting integrated circuit capability, intended to enable superconducting supercomputers that extend Moore's Law beyond CMOS. In 2000, with the idea of creating a mini-Manhattan-project-like effort in quantum computing, he conceived, proposed, won, and ran a multi-million-dollar, multi-institutional DARPA program to develop a prototype quantum computer using macroscopic quantum superconducting circuits. Frustrated with the pace of that effort, Mr. Ladizinsky teamed in 2004 with D-Wave's original founder, Geordie Rose, to transform the then primarily IP-based company into a technology development company modeled on his mini-Manhattan-project vision. He is also responsible for designing the superconducting (SC) IC process that underlies the D-Wave quantum processors, and for transferring that process to state-of-the-art semiconductor production facilities to create the most advanced SC IC process in the world.
The Global Brain can be defined as the self-organizing network formed by all people on this planet together with the information and communication technologies that connect and support them. As the Internet becomes faster, smarter, and more encompassing, it increasingly links its users into a single information processing system, which functions like a nervous system for the planet Earth.
The intelligence of this system is collective and distributed: it is not localized in any particular individual, organization, or computer system. Rather, it emerges from the interactions among all its components, a property characteristic of a complex adaptive system. Such a distributed intelligence may be able to tackle current and emerging global problems that have eluded more traditional approaches. Yet at the same time, it will create technological and social challenges that are still difficult to imagine, transforming our society in every aspect.
Dr. Gabor Forgacs is a theoretical physicist turned tissue-engineer turned entrepreneur. His companies are pioneering 3D bio-printing technologies that will produce tissues for medical and pharmaceutical uses, as well as for consumption, in the form of meat and leather.
Cyborgs, brain uploads and immortality - How far should science go in helping humans exceed their biological limitations? These ideas might sound like science fiction, but proponents of a movement known as transhumanism believe they are inevitable.
In this episode of The Stream, we talk to bioethicist George Dvorsky; Robin Hanson, a research associate with Oxford’s Future of Humanity Institute; and Ari N. Schulman, senior editor of The New Atlantis, about the ethical implications of transhumanism.
Geneticist Jennifer Doudna co-invented a groundbreaking new technology for editing genes, called CRISPR-Cas9. The tool allows scientists to make precise edits to DNA strands, which could lead to treatments for genetic diseases ... but could also be used to create so-called "designer babies." Doudna reviews how CRISPR-Cas9 works -- and asks the scientific community to pause and discuss the ethics of this new tool.
Should scientists edit the human genome, striking out undesirable traits like so many typos? “My own views are still forming,” says Jennifer Doudna, who with her research partner, Emmanuelle Charpentier, developed a powerful gene editing technique at her University of California, Berkeley lab several years ago (TED Talk: We can now edit our DNA. But let’s do it wisely). “I’m still trying to get a handle on how and when and why would we want to use this.”
“This” is a genetic editing process that uses an enzyme with the ungainly name of CRISPR-Cas9 to precisely slice into a strand of DNA, snipping out genetic material with the precision of a scalpel. Aside from offering an unexpectedly high level of precision at removing specific As, Ts, Gs and Cs, the CRISPR-Cas9 technique opens a new Pandora’s box: when used on embryos, the genetic changes can be inherited from parent to child.
Since its invention, the CRISPR-Cas9 technique has been used to put lab rats, monkeys, even non-viable human embryos under the genetic knife. But an ethical question hangs over whether the technique should be applied to living human embryos, where an edited gene can be inherited from one generation to the next. One fix could strike out a genetic illness from a family's bloodline; one mistake could irrevocably alter the human genome in ways we can't know.
That’s why Doudna, along with a panel of influential genetic scientists, has called for a worldwide pause on any experiment with the human genome. It’s also why she’s helping to convene a three-day summit this December at the National Academy of Sciences in Washington, D.C., where she and others will debate how far the world should take this technology. Doudna hopes the attendees will agree to some framework, any framework, for guiding responsible experimentation.
Gene editing is a polarizing issue, and her informal survey of the research community has turned up wildly divergent opinions. Some researchers favor a complete ban on edits to human embryos, preferring alternative treatments for genetic illnesses (in vitro screening, for instance, that identifies embryos with harmful mutations). Others believe that constraints on research could delay or prevent still-undiscovered cures. Doudna does not expect to solve these differences in three days, but she hopes that the opinions of scientific heavyweights can help shape the conversation. “Highly respected scientists do have a role to play in making a statement that invites people at least to consider their viewpoint,” she says. Bioethicists, lawyers, patient advocacy groups and government regulators will also be there to have their say. If that sounds like an unwieldy conversation, well, it will be. Fortunately, for Doudna, there’s a playbook for this sort of powwow.
The ability to create accurate disease models of human monogenic and complex genetic disorders is very important for the understanding of disease pathogenesis and the development of new therapeutics. Although proof of principle using adult stem cells for disease modeling has been established, induced pluripotent stem cells (iPSCs) have been demonstrated to have the greatest utility for modeling human diseases. Additionally, the latest advances in programmable nucleases have empowered researchers with genome editing tools, such as CRISPR/Cas9, that substantially improve their ability to make precise changes at a defined genomic locus in a broad array of cell types including stem cells. While the utility of these tools is improving, there are several key factors, including design and delivery that should be taken into account to ensure maximum editing efficiency and specificity. Already, these tools have allowed us to efficiently knock out genes and generate single nucleotide polymorphism (SNP) iPSCs. This ability to modify target genomic loci with high efficiency will facilitate the generation of novel genetically modified stem cells for research and therapeutic applications.
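To make the targeting precision described above concrete, here is an illustrative sketch (sequences invented, forward strand only, and the guide shortened from the real ~20 nucleotides): SpCas9 binds where the guide sequence sits immediately upstream of an "NGG" PAM motif and cuts about three base pairs inside the match.

    import re

    def find_target_sites(genome, guide):
        """Yield (site, cut) positions where guide + NGG PAM occur on the forward strand."""
        pattern = re.escape(guide) + r"[ACGT]GG"   # guide followed by the NGG PAM
        for m in re.finditer(pattern, genome):
            # SpCas9 cleaves roughly 3 bp upstream of the PAM, inside the match.
            yield m.start(), m.start() + len(guide) - 3

    genome = "TTACGATCCGGATTACAGGTT"   # invented sequence containing one target site
    guide = "GATCCGGATTAC"             # invented guide, shortened for readability
    for site, cut in find_target_sites(genome, guide):
        print(f"guide match at position {site}, predicted cut near position {cut}")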
UCSC has built the Cancer Genomics Hub (CGHub) for the US National Cancer Institute, designed to hold data for all major NCI projects. To date it has served more than 10 petabytes of data to more than 320 research labs. Cancer is exceedingly complex, with thousands of subtypes involving an immense number of different combinations of mutations. The only way we will understand it is to gather DNA data from many thousands of cancer genomes, so that we have the statistical power to distinguish recurring combinations of mutations that drive cancer progression from "passenger" mutations that occur by random chance. Currently, with the exception of a few international research projects, most cancer genomics research takes place in silos, with little opportunity for data sharing. If this trend continues, we lose an incredible opportunity. Soon cancer genome sequencing will be widespread in clinical practice, making it possible in principle to study as many as a million cancer genomes. For these data to have an impact on understanding cancer, we must soon begin to move them into a network of compatible global cloud storage and computing systems, and design mechanisms that allow genome and clinical data to be used in research with appropriate patient consent.
The Global Alliance for Genomics and Health was created to address this problem. Our Data Working Group is designing the future of large-scale genomics for cancer and other diseases. This is an opportunity we cannot turn away from, but it involves both social and technical challenges.
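The statistical idea behind separating drivers from passengers can be sketched with a toy binomial test (all numbers invented; real analyses model per-gene background mutation rates far more carefully): with thousands of genomes, a gene mutated well above the background rate stands out.

    from math import comb

    def binomial_pvalue(k, n, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n_tumors = 1000
    background_rate = 0.01                    # chance a random passenger hits this gene
    observed = {"GENE_A": 14, "GENE_B": 60}   # tumors with a mutation in each gene

    for gene, k in observed.items():
        pval = binomial_pvalue(k, n_tumors, background_rate)
        label = "likely driver" if pval < 1e-6 else "consistent with passenger"
        print(f"{gene}: {k}/{n_tumors} mutated, p = {pval:.2e} -> {label}")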
In the future, a woman with a spinal cord injury could make a full recovery; a baby with a weak heart could pump his own blood. How close are we today to the bold promise of bionics—and could this technology be used to improve normal human functions, as well as to repair us? Join Bill Blakemore, John Donoghue, Jennifer French, Joseph J. Fins, and P. Hunter Peckham at "Better, Stronger, Faster," part of the Big Ideas Series, as they explore the unfolding future of embedded technology.
Pattern survival in humans is currently driven by gene survival, even though the evolution of humans is itself merely a byproduct of the competition for gene survival (Dawkins, 1976). So how can one motivate pattern survival without gene survival? How can one separate the desire to propagate the thought characteristics that support specific memes from the desire to propagate genes in humans?
Deep Learning in Action | A talk by Juergen Schmidhuber, PhD, at the Deep Learning in Action talk series in October 2015. He is a professor of computer science at the Dalle Molle Institute for Artificial Intelligence Research, part of the University of Applied Sciences and Arts of Southern Switzerland.
Juergen Schmidhuber, PhD: "I review 3 decades of our research on both gradient-based and more general problem solvers that search the space of algorithms running on general-purpose computers with internal memory."
Architectures include traditional computers, Turing machines, recurrent neural networks, fast weight networks, stack machines, and others. Some of our algorithm searchers are based on algorithmic information theory and are optimal in asymptotic or other senses.
Most can learn to direct internal and external spotlights of attention. Some are self-referential and can even learn the learning algorithm itself (recursive self-improvement). Without a teacher, some can use reinforcement learning to solve very deep algorithmic problems, involving billions of steps, that are not feasible for more recent memory-based deep learners.
And the algorithms learned by our long short-term memory (LSTM) recurrent networks defined the state of the art in handwriting recognition, speech recognition, natural language processing, machine translation, image caption generation, and more. Google and other companies have made them available to over a billion users.
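For readers curious about the mechanism behind "long short-term memory": below is a bare-bones NumPy sketch of a single LSTM step (random weights, no training, and in no way the speaker's code). The gates decide what fraction of the cell's memory to keep, what new content to write, and how much to expose as output.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev, W, b):
        """One LSTM cell update on input x with previous hidden/cell states."""
        z = W @ np.concatenate([x, h_prev]) + b
        f, i, o, g = np.split(z, 4)
        f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # forget / input / output gates
        c = f * c_prev + i * np.tanh(g)                # updated cell ("memory") state
        h = o * np.tanh(c)                             # emitted hidden state
        return h, c

    n_in, n_hid = 3, 4
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4 * n_hid, n_in + n_hid))     # all four gates stacked in one matrix
    b = np.zeros(4 * n_hid)
    h = c = np.zeros(n_hid)
    for x in rng.normal(size=(5, n_in)):               # run the cell over a short sequence
        h, c = lstm_step(x, h, c, W, b)
    print(h)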
Matthew Zeiler, PhD, Founder and CEO of Clarifai Inc., speaks about large convolutional neural networks. These networks have recently demonstrated impressive object recognition performance, making real-world applications possible. However, there has been no clear understanding of why they perform so well or how they might be improved. In this talk, Matt covers a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the overall classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that perform exceedingly well.
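Zeiler's actual technique maps feature activations back to input pixel space through a "deconvnet", which is too much machinery for a snippet. As a much simpler illustration of what an intermediate feature layer is (filters and image invented, not the talk's method), the sketch below runs two hand-built edge filters over a toy image; each filter's map shows where its pattern fires.

    import numpy as np

    def conv2d(image, kernel):
        """Valid-mode 2-D convolution (cross-correlation, as convnets actually compute)."""
        kh, kw = kernel.shape
        H, W = image.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
        return out

    filters = {
        "vertical_edge":   np.array([[1.0, 0.0, -1.0]] * 3),     # bright-to-dark, left to right
        "horizontal_edge": np.array([[1.0, 0.0, -1.0]] * 3).T,   # bright-to-dark, top to bottom
    }

    image = np.zeros((8, 8))
    image[:, :4] = 1.0        # left half bright, right half dark: one vertical edge

    for name, kernel in filters.items():
        fmap = np.maximum(conv2d(image, kernel), 0.0)   # ReLU, as in a real convnet
        print(f"{name}: peak activation {fmap.max():.1f}")
    # The vertical-edge map lights up along the boundary; the horizontal map stays at zero.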
Understanding how the brain works: “The brain is 400 different computers.”
Modeling and building a brain: “You can’t do it from the bottom up.”
Backing up the brain: “There’s no ‘you.’ … I’m not exactly like I was five minutes ago….”
Is the brain “just” a machine? “There’s no person in here … identity is an illusion.”
Emotional intelligence: “‘Emotions are different from thinking’: that’s nonsense.”
Human-level artificial intelligence: “We will need AIs because longevity is increasing. … There will be no one to do the work. … We’ll need to find something else to do.”
Unfriendly AI: “Machines may re-compile themselves. … People say, ‘Scientists should be more responsible for what they do.’ The fact is, the scientist is no better and possibly worse than the average person at deciding what’s good and what’s bad, and if you ask scientists to spend a lot of time deciding what to invent or not, all you can get from that is that they won’t invent some things that might be wonderful.”
Is the Singularity near? “Yes, depending on what you mean by ‘near’ … It may well be, within our lifetimes.”