Concepts studied by cyberneticists include, but are not limited to: learning, cognition, adaptation, social control, emergence, communication, efficiency, efficacy, and connectivity. These concepts are also studied in other disciplines, such as engineering and biology, but in cybernetics they are abstracted from the context of the individual organism or device.
Could an internet-connected thing — a smart fridge, a thermostat or a home-help robot — become a millionaire? This is not as ridiculous a question as it may seem. If we do indeed move toward a world in which devices are connected to the internet and able to transact on their own, the question of whether a machine could accumulate wealth starts to deserve a serious answer.
Einstein was wrong about at least one thing: There are, in fact, "spooky actions at a distance," as now proven by researchers at the National Institute of Standards and Technology (NIST).
Einstein used that term to refer to quantum mechanics, which describes the curious behavior of the smallest particles of matter and light. He was referring, specifically, to entanglement, the idea that two physically separated particles can have correlated properties whose values are uncertain until they are measured. Einstein was dubious, and until now researchers had been unable to confirm the effect with near-total confidence.
As described in a paper posted online and submitted to Physical Review Letters (PRL), researchers from NIST and several other institutions created pairs of identical light particles, or photons, and sent them to two different locations to be measured. The researchers showed not only that the measured results were correlated, but also, by eliminating all other known options, that these correlations cannot be produced by the locally controlled, "realistic" universe Einstein thought we lived in. This points to a different explanation, such as quantum entanglement.
The NIST experiments are called Bell tests, so named because in 1964 Irish physicist John Bell showed there are limits to measurement correlations that can be ascribed to local, pre-existing (i.e. realistic) conditions. Additional correlations beyond those limits would require either sending signals faster than the speed of light, which scientists consider impossible, or another mechanism, such as quantum entanglement.
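To make Bell's limit concrete, here is a minimal sketch in Python of the CHSH form of a Bell test. This is an illustration, not the NIST team's own analysis: their experiment used a related inequality suited to photon detection, and the analyzer angles and singlet-state correlation formula below are standard textbook choices, not details from the NIST paper. Every local, pre-existing ("realistic") assignment of outcomes stays within |S| ≤ 2, while quantum mechanics predicts up to 2√2 ≈ 2.83.

```python
import numpy as np

# Singlet-state correlation between analyzer angles a and b: E = -cos(a - b).
def quantum_E(a, b):
    return -np.cos(a - b)

# Standard CHSH analyzer settings (radians) that maximize the violation.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S_quantum = (quantum_E(a0, b0) - quantum_E(a0, b1)
             + quantum_E(a1, b0) + quantum_E(a1, b1))

# Brute-force every deterministic local "realistic" strategy: each side
# pre-assigns an outcome (+1 or -1) to each of its two analyzer settings.
best_local = max(
    abs(A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1)
    for A0 in (-1, 1) for A1 in (-1, 1)
    for B0 in (-1, 1) for B1 in (-1, 1)
)

print(f"Quantum prediction |S| = {abs(S_quantum):.3f}")  # 2.828 = 2*sqrt(2)
print(f"Local-realist maximum  = {best_local}")          # 2 (Bell's bound)
```

The brute-force loop is the whole point of Bell's argument: no matter how the pre-existing outcomes are assigned, the combination S can never exceed 2, yet entangled photons are predicted, and observed, to beat that bound.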
The NIST results are more definitive than those reported recently by researchers at Delft University of Technology in the Netherlands.
In the NIST experiment, the photon source and the two detectors were located in three different, widely separated rooms on the same floor of a large laboratory building. The two detectors were 184 meters apart, and 126 and 132 meters, respectively, from the photon source.
The source creates a stream of photon pairs through a common process in which a laser beam stimulates a special type of crystal. This process is generally presumed to create pairs of photons that are entangled, so that the photons' polarizations are highly correlated with one another. Polarization refers to the specific orientation of the photon, like vertical or horizontal (polarizing sunglasses preferentially block horizontally polarized light), analogous to the two sides of a coin.
Photon pairs were then separated and sent by fiber-optic cable to separate detectors in the distant rooms. While the photons were in flight, a random number generator picked one of two polarization settings for each polarization analyzer. If the photon matched the analyzer setting, it was detected more than 90 percent of the time.
In the best experimental run, both detectors simultaneously identified photons a total of 6,378 times over a period of 30 minutes. Other outcomes (such as just one detector firing) accounted for only 5,749 of the 12,127 total relevant events. Researchers calculated that the maximum chance of local realism producing these results is just 0.0000000059, or about 1 in 170 million. This outcome exceeds the particle physics community's requirement for a "5 sigma" result needed to declare something a discovery. The results strongly rule out local realistic theories, suggesting that the quantum mechanical explanation of entanglement is indeed the correct explanation.
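As a quick sanity check on that "5 sigma" comparison, the reported probability can be converted to an equivalent sigma level. The snippet below assumes the one-sided Gaussian convention commonly used for such claims; the exact convention in the NIST analysis may differ.

```python
from scipy.stats import norm

p_reported = 5.9e-9            # chance of local realism, per the NIST analysis
p_5sigma   = norm.sf(5.0)      # one-sided tail probability at 5 sigma

print(f"Reported p-value:   {p_reported:.1e}  (~1 in {1 / p_reported:,.0f})")
print(f"5-sigma threshold:  {p_5sigma:.1e}")                   # ~2.9e-7
print(f"Equivalent level:   {norm.isf(p_reported):.2f} sigma")  # ~5.7 sigma
```

Under this convention the reported probability corresponds to roughly 5.7 sigma, comfortably past the 5-sigma discovery threshold mentioned above.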
The NIST experiment closed the three major loopholes as follows:
Fair sampling: Thanks to NIST's single-photon detectors, the experiment was efficient enough to ensure that the detected photons and measurement results were representative of the actual totals. The detectors, made of superconducting nanowires, were 90 percent efficient, and total system efficiency was about 75 percent.
No faster-than-light communication: The two detectors measured photons from the same pair a few hundred nanoseconds apart, finishing more than 40 nanoseconds before any light-speed communication could take place between the detectors. Information traveling at the speed of light would require 617 nanoseconds to travel between the detectors (a quick numerical check of this timing appears after this list).
Freedom of choice: Detector settings were chosen by random number generators operating outside the light cone (i.e., possible influence) of the photon source, and thus, were free from manipulation. In fact, the experiment demonstrated a "Bell violation machine" that NIST eventually plans to use to certify randomness.
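As a rough check on the timing argument above, dividing the quoted 184-meter detector separation by the speed of light gives roughly 614 nanoseconds, in line with the article's 617 ns figure (the small difference presumably reflects the exact geometry). A minimal sketch, using only numbers quoted in the article:

```python
C = 299_792_458            # speed of light in vacuum, m/s

separation_m = 184         # detector separation quoted above
margin_ns    = 40          # how early each measurement finishes

travel_ns = separation_m / C * 1e9
print(f"Light-travel time over {separation_m} m: {travel_ns:.0f} ns")  # ~614 ns
# Any signal would need ~614-617 ns to cross between the detectors, yet each
# measurement completes more than 40 ns before such a signal could arrive,
# ruling out light-speed coordination between the two sides.
```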
The history of Machine Learning stretches back decades, though for many industries and businesses it remains an untapped technology. As nearly every sector and company finds itself increasingly awash in growing volumes of data, however, the likelihood grows that more organizations will test the Machine Learning waters.
Norwegian researchers have recently found evidence of a generalized active network underlying the brain's cognitive functions.
“I experienced a kind of moment that may be more common for theoretical physicists: the idea that something just has to be there, even though you cannot see it,” says neuroscientist Kenneth Hugdahl from the Bergen fMRI Group in an interview with the University of Bergen’s newspaper På Høyden.
At first, Hugdahl thought he was simply mistaken. But while preparing a lecture, he sat with nine fMRI images in front of him and suddenly noticed that the active red and yellow regions on the brain maps appeared in almost the same places in every image. The neuroscientist had to ask himself: could there be a network in the brain that overlapped across all cognitive functions?
The article "On the existence of a generalized non-specific task-dependent network" was published in the online journal Frontiers in Human Neuroscience.
Although the idea has been mentioned before, no brain researcher had previously been able to prove empirically that there is a cognitive network "for everything". The idea of something that works as a sort of wiring diagram for the brain is therefore quite revolutionary.
Traditionally, this kind of brain research has focused on locating individual problem-solving functions in specific areas of the brain. The article by Hugdahl and his colleagues could be the first step in a new direction, toward something that might become the neuroscientific version of a "theory of everything": a single explanation for all active cognitive functions.
The uncertainty principle is based on how disruptive any act of measurement is. If, for instance, a photon, or particle of light, from a microscope is used to view an electron, the photon will bounce off that electron and disrupt its momentum, said study co-author Tom Purdy, a physicist at JILA, a joint institute of the University of Colorado, Boulder and the National Institute of Standards and Technology.
But the bigger the object, the less of an effect a bouncing photon will have on its momentum, making the uncertainty principle less and less relevant at larger scales.
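In symbols, this is the standard textbook statement of the principle (not taken from the study itself):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\qquad\Longrightarrow\qquad
\Delta v \;\ge\; \frac{\hbar}{2\, m\, \Delta x}
```

Since momentum is mass times velocity, the unavoidable velocity disturbance Δv for a given position precision Δx shrinks in proportion to the object's mass m, which is why the effect fades at everyday scales.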
In recent years, however, physicists have been pushing the limits of the scales at which the principle can be observed. To that end, Purdy and his colleagues created a 0.02-inch-wide (0.5-millimeter) drum made of silicon nitride, a ceramic used in spacecraft, drawn tight across a silicon frame.
They then set the drum between two mirrors and shone laser light on it. Essentially, the drum's position is read out from how photons bouncing between the mirrors are deflected, and increasing the number of photons boosts the measurement accuracy. But more photons also deliver greater and greater random kicks that shake the drum violently, limiting that accuracy. The extra shaking is the uncertainty principle in action.
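The trade-off reads naturally as a noise budget: readout imprecision falls as more photons are used, while photon-kick backaction grows. Below is a minimal toy model with made-up coefficients; it illustrates the shape of the trade-off, not the experiment's actual parameters.

```python
import numpy as np

# Toy noise budget (arbitrary units): imprecision noise power falls as 1/N
# probe photons, while radiation-pressure "shaking" (backaction) grows as N.
N = np.logspace(0, 4, 200)     # number of probe photons, 1 to 10,000
imprecision = 1.0 / N          # position-readout noise power
backaction  = 1e-4 * N         # photon-kick noise power
total = imprecision + backaction

n_best = N[np.argmin(total)]
print(f"Optimum near N ≈ {n_best:.0f} photons")  # ~100 = sqrt(1.0 / 1e-4)
print(f"Noise floor    ≈ {total.min():.3f}")     # the 'standard quantum limit'
```

The sum is minimized at an intermediate photon number: past that point, adding laser power makes the measurement worse, exactly the behavior the experiment demonstrated.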
The setup was kept ultra-cold to prevent thermal fluctuations from drowning out this quantum effect.

The findings could have implications for the hunt for gravitational waves predicted by Einstein's theory of general relativity. In the next few years, the Laser Interferometer Gravitational-Wave Observatory (LIGO), a pair of observatories in Louisiana and Washington, is set to measure gravitational waves, ripples in space-time, and the uncertainty principle could set limits on LIGO's measurement abilities.
In the current hyper-connected era, modern Information and Communication Technology systems form sophisticated networks where not only do people interact with other people, but machines also take an increasingly visible and participatory role. Such human-machine networks (HMNs) are embedded in the daily lives of people, for both personal and professional use. They can have a significant impact by producing synergy and innovations. The challenge in designing successful HMNs is that they cannot be developed and implemented in the same manner as networks of machine nodes alone, nor by following a wholly human-centric view of the network. The problem requires an interdisciplinary approach. Here, we review current research of relevance to HMNs across many disciplines. Extending the previous theoretical concepts of socio-technical systems, actor-network theory, and social machines, we concentrate on the interactions among humans and between humans and machines. We identify eight types of HMNs: public-resource computing, crowdsourcing, web search engines, crowdsensing, online markets, social media, multiplayer online games and virtual worlds, and mass collaboration. We systematically select literature on each of these types and review it with a focus on implications for designing HMNs. Moreover, we discuss risks associated with HMNs and identify emerging design and development trends.
"Understanding Human-Machine Networks: A Cross-Disciplinary Survey," by Milena Tsvetkova, Taha Yasseri, Eric T. Meyer, J. Brian Pickering, Vegard Engen, Paul Walland, Marika Lüders, Asbjørn Følstad, and George Bravos.