anti dogmanti
discoveries based on the scientific method
Curated by Sue Tamani
Scooped by Sue Tamani

Response to Controversy : Sam Harris
Sam Harris, neuroscientist and author of the New York Times bestsellers The End of Faith, Letter to a Christian Nation, and The Moral Landscape.
Sue Tamani's insight:

He analyses with such clarity.

Scooped by Sue Tamani

Big Tobacco losing ground on plain packs but homing in on world's poor
Plain packaging has been a reality for Big Tobacco in Australia for three months now; New Zealand announced last week it will follow suit; and at least four other nations, including India, are also considering…
Sue Tamani's insight:

bastards

Scooped by Sue Tamani

Degraded microcosms: loss of oral biodiversity will kill you
The more we look, the more we realise just how important intact ecosystems are for our own well-being – and it really doesn’t matter at which scale we are looking. When Alan Cooper, Director of the Australian…
Sue Tamani's insight:

Funny how we think our health is always better now than in times gone by.

Scooped by Sue Tamani

Saga of 'the Hobbit' highlights a science in crisis
To state the obvious: human evolution is not without its drama – and the latest salvo in the ongoing Hobbit, or Homo floresiensis, battle confirms this yet again. The 2004 announcement of Homo floresiensis…
Scooped by Sue Tamani

Nicholas Stern: 'I got it wrong on climate change – it's far, far worse'

Author of 2006 review speaks out on danger to economies as planet absorbs less carbon and is 'on track' for 4C rise.

 

Lord Stern, author of the government-commissioned review on climate change that became the reference work for politicians and green campaigners, now says he underestimated the risks, and should have been more "blunt" about the threat posed to the economy by rising temperatures.

 

In an interview at the World Economic Forum in Davos, Stern, who is now a crossbench peer, said: "Looking back, I underestimated the risks. The planet and the atmosphere seem to be absorbing less carbon than we expected, and emissions are rising pretty strongly. Some of the effects are coming through more quickly than we thought then."

 

The Stern review, published in 2006, pointed to a 75% chance that global temperatures would rise by between two and three degrees above the long-term average; he now believes we are "on track for something like four". Had he known the way the situation would evolve, he says, "I think I would have been a bit more blunt. I would have been much more strong about the risks of a four- or five-degree rise."

 

He said some countries, including China, had now started to grasp the seriousness of the risks, but governments should now act forcefully to shift their economies towards less energy-intensive, more environmentally sustainable technologies.

 

"This is potentially so dangerous that we have to act strongly. Do we want to play Russian roulette with two bullets or one? These risks for many people are existential."

 

Stern said he backed the UK's Climate Change Act, which commits the government to ambitious carbon reduction targets. But he called for increased investment in greening the economy, saying: "It's a very exciting growth story."

David Cameron made much of his environmental credentials before the 2010 election, travelling to the Arctic to highlight his commitment to tackling global warming. But the coalition's commitment to green policies has recently been questioned, amid scepticism among Tory backbenchers about the benefits of wind power, and the chancellor's enthusiasm for exploiting Britain's shale gas reserves.

 

Stern's comments came as Jim Yong Kim, the new president of the World Bank, also at Davos, gave a grave warning about the risk of conflicts over natural resources should the forecast of a four-degree global increase above the historical average prove accurate.

Sue Tamani's insight:

Nice to see someone admit they were wrong. Now let's hear from all those other climate change deniers. I used to be one too, several years ago. It's important for us all to be on the same page with this. As David Suzuki said, we need to stop worrying about where we're sitting on the bus and concentrate on avoiding the wall the bus is speeding towards.

Scooped by Sue Tamani

Arctic "death spiral" leaves climate scientists shocked and worried
A "global disaster" is unfolding rapidly in the Arctic as melting sea ice kick starts global warming feedback loops.
Sue Tamani's insight:

Please spread this everywhere.

Scooped by Sue Tamani

Study links ancient Indian visitors to Australia's first dingoes


The dingo appeared around the same time as new tool technology and Indian visitors, the researchers suggested. http://www.flickr.com/photos/ogwen

A new study of DNA has found that Indian people may have come to Australia around 4000 years ago, an event possibly linked to the first appearance of the dingo.

Australia was first populated around 40,000 years ago and it was once thought Aboriginal Australians had limited contact with the outside world until the arrival of Europeans.

However, an international research team examining genotyping data from Aboriginal Australians, New Guineans, island Southeast Asians and Indians found an ancient association between Australia, New Guinea, and the Mamanwa group from the Philippines.

“We also detect a signal indicative of substantial gene flow between the Indian populations and Australia well before European contact, contrary to the prevailing view that there was no contact between Australia and the rest of the world. We estimate this gene flow to have occurred during the Holocene, 4,230 years ago,” the researchers said in a paper titled ‘Genome-wide data substantiate Holocene gene flow from India to Australia’ and published in the journal PNAS.

“This is also approximately when changes in tool technology, food processing, and the dingo appear in the Australian archaeological record, suggesting that these may be related to the migration from India.”

The researchers said that around the time the Indian visitors arrived on Australia’s shores, stone tools called microliths began appearing for the first time and new plant processing techniques were used.

“It has been a matter of controversy as to whether these changes occurred in situ or reflect contact with people from outside Australia or some combination of both factors. However, the dingo also first appears in the fossil record at this time and must have come from outside Australia. Although dingo mtDNA appears to have a Southeast Asian origin, morphologically, the dingo most closely resembles Indian dogs,” the researchers said.

Lead author Dr Irina Pugach, from Germany’s Max Planck Institute for Evolutionary Anthropology, said it was not clear how many people came from India to Australia.

“It could have been just a very small group,” she said.

“We don’t claim the dingo and changes in stone tool technologies came with these migrants. We suggest that maybe they accompanied the people.”

Professor Alan Cooper, director of the Australian Centre for Ancient DNA at the University of Adelaide, said the research team’s discovery of “a previously unsuspected episode of gene flow with populations from mainland India, estimated to take place around 4,200 years ago… coincides with significant changes in the Aboriginal archaeological record, around 4000 to 5000 years ago.”

“It does not necessarily indicate direct contact with mainland India. For example it could be via populations elsewhere whose original source was mainland India,” said Professor Cooper, who was not involved in the research.

“It would be interesting to compare this theory with the Aboriginal language patterns, where one group (Pama-Nyungan) is thought to have recently spread around all parts of Australia apart from the Top End, where amazing diversity remains. The timing of this language movement is thought to also be around 4000 – 5000 years ago, so this is starting to be a very important time in Australian history.”

Professor Maciej Henneberg, Wood Jones Professor of Anthropological and Comparative Anatomy at the University of Adelaide, said the research team’s findings were “logical, though based on a limited sample of genetic material.”

“There are some indications of similarities to the Indian Subcontinent in marital customs of Aboriginal Australians as well as in their morphology,” said Professor Henneberg, who was not involved in the original paper.

“It is quite wrong to simplistically assume that Australia was discovered by people only once some 40,000 years ago. Such an assumption denies intelligent behaviour to people in the South and the East of Asia and its fringes as well as to Aboriginal Australians. People were capable of sea travel and exploration for many hundreds of thousands of years and thus should have come to Australia many times during the last 50,000 years or so.”

It is equally simplistic to assume that everyone came out of Africa once some 100,000 years ago, he said.

“Australian people were, for tens of thousands of years, a part of the human population of the world exchanging both genes and cultural information with their neighbours.”

Sue Tamani's insight:

Mind blowing!! Frank and I were saying our Tamil-speaking asylum seekers sounded Aboriginal when they spoke!!!!!

Scooped by Sue Tamani

A paper-thin flexible tablet computer | KurzweilAI

PaperTab (credit: Plastic Logic/Queen’s University)

A flexible paper computer developed at Queen’s University in collaboration with Plastic Logic and Intel Labs could one day revolutionize the way people work with tablets and computers.

The PaperTab tablet looks and feels just like a sheet of paper. However, it is fully interactive, with a flexible, high-resolution 10.7” plastic display developed by Plastic Logic and a flexible touchscreen, and it is powered by a second-generation Intel Core i5 processor.

Instead of using several apps or windows on a single display, users have ten or more interactive displays or “PaperTabs”: one per app in use.

“Using several PaperTabs makes it much easier to work with multiple documents,” says Roel Vertegaal, Director of Queen’s University’s Human Media Lab. “Within five to ten years, most computers, from ultra-notebooks to tablets, will look and feel just like these sheets of printed color paper.”

For example, PaperTab’s intuitive interface allows a user to send a photo simply by tapping one PaperTab showing a draft email with another PaperTab showing the photo. The photo is then automatically attached to the draft email. The email is sent either by placing the PaperTab in an out tray, or by bending the top corner of the display.

Similarly, a larger drawing or display surface is created simply by placing two or more PaperTabs side by side. PaperTab thus emulates the natural handling of multiple sheets of paper by combining thin-film display, thin-film input and computing technologies through intuitive interaction design.

PaperTab can file and display thousands of paper documents, replacing the need for a computer monitor and stacks of papers or printouts. Unlike traditional tablets, PaperTabs keep track of their location relative to each other, and the user, providing a seamless experience across all apps, as if they were physical computer windows.

For example, when a PaperTab is placed outside of reaching distance it reverts to a thumbnail overview of a document, just like icons on a computer desktop. When picked up or touched a PaperTab switches back to a full screen page view, just like opening a window on a computer.
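
The thumbnail/full-screen behaviour described above amounts to a small state machine keyed on each tablet's tracked distance from the user. Here is a minimal sketch of that idea; all names and the reach threshold are hypothetical, not Plastic Logic's or the Human Media Lab's actual API.

```python
# Sketch of the proximity behaviour described above: a PaperTab-like display
# shows a thumbnail when out of reach and the full page when picked up or
# touched. All names and the threshold below are hypothetical.

REACH_DISTANCE_M = 0.6  # assumed arm's-reach threshold, in metres

class PaperTabView:
    def __init__(self, document):
        self.document = document
        self.mode = "full"            # "full" or "thumbnail"

    def update(self, distance_m, touched=False):
        """Called whenever the tablet's tracked position changes or it is touched."""
        if touched or distance_m <= REACH_DISTANCE_M:
            self.mode = "full"        # within reach: open like a desktop window
        else:
            self.mode = "thumbnail"   # out of reach: shrink to an icon-like overview

# Example: placed beyond reach, then picked up again.
tab = PaperTabView("draft-email")
tab.update(distance_m=1.2)
print(tab.mode)   # thumbnail
tab.update(distance_m=0.3)
print(tab.mode)   # full
```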

PaperTabs are lightweight and robust, so they can easily be tossed around on a desk while providing a magazine-like reading experience. By bending one side of the display, users can also navigate through pages like a magazine, without needing to press a button.

Plastic Logic and Queen’s University’s Human Media Lab unveiled PaperTab at the International Consumer Electronics Show (CES 2013) in Las Vegas on January 8.


Sue Tamani's insight:

I am blown away by the paper-like feature of being able to place them side by side and have material linked across them!

Scooped by Sue Tamani

Google Glass update | KurzweilAI

Summary of an IEEE Spectrum report

In the next few weeks, Google will start shipping its Google Glass to developers. More-polished consumer models are expected in 2014.

Details about Glass are still sketchy but here’s what we know:

The lightweight browband, which looks like an ordinary pair of reading glasses minus the lenses, connects to an earpiece that has much the same electronics you’d find in an Android phone: a microprocessor, a memory chip, a battery, a speaker, two microphones, a video camera, a Wi-Fi antenna, Bluetooth, an accelerometer, a gyroscope, and a compass. The microdisplay is positioned over one eye.

That hardware lets Glass record its wearer’s conversations and surroundings and store those recordings in the cloud; respond to voice commands, finger taps, and swipes on an earpiece that doubles as a touch pad; and automatically take pictures every 10 seconds. Prototypes connect to the Internet through Wi-Fi or through Bluetooth and a smartphone. Future versions will likely include a cellular antenna.

Glass will run apps like Google+ and Google Search, but it’s designed to feel more natural and immersive than a PC or a smartphone. Ideally, Babak Parviz, the leader of Project Glass, told developers at the company’s Google I/O conference in June, it will let you access information “so fast that you feel you know it.”

Start-ups Atheer, First Person Vision, Lumus, and Vergence Labs all have Glass-like prototypes in the works. Specialty manufacturer Recon Instruments makes MOD Live, a head-up display for skiers that analyzes their jumps. Established firms like Apple, Microsoft, Olympus, and Sony have been conducting research into smart glasses and head-up displays for years.

Full disclosure: KurzweilAI.net CEO Ray Kurzweil is now a Google executive. 

Sue Tamani's insight:

More info coming out from Google now that Ray Kurzweil is involved.

Scooped by Sue Tamani

IBM reveals five innovations that will change our lives within five years | KurzweilAI

IBM announced today the seventh annual “IBM 5 in 5” — a list of innovations that have the potential to change the way people work, live and interact during the next five years, based on market and societal trends as well as emerging technologies from IBM’s R&D labs. This one is focused on cognitive systems.

Touch

In the next five years, industries such as retail will be transformed by the ability to “touch” a product through your mobile device, using haptic, infrared and pressure-sensitive technologies to simulate touch — such as the texture and weave of a fabric as a shopper brushes their finger over the image of the item on a device screen.

Each object will have a unique set of vibration patterns that represents the touch experience: short fast patterns, or longer and stronger sets of vibrations. The vibration pattern will differentiate silk from linen or cotton, helping simulate the physical sensation of actually touching the material.

Current uses of haptic and graphic technology in the gaming industry, for example, will take the end user into a simulated environment.

Sight

We take 500 billion photos a year[1]. 72 hours of video is uploaded to YouTube every minute[2]. The global medical diagnostic imaging market is expected to grow to $26.6 billion by 2016[3].

In the next five years, “brain-like” capabilities will let computers analyze features in visual media such as color, texture patterns, or edge information and extract insights. This will have a profound impact for industries such as healthcare, retail and agriculture.

These capabilities will be put to work in healthcare by making sense out of massive volumes of medical information, such as MRIs, CT scans, X-Rays and ultrasound images, to capture information tailored to particular anatomy or pathologies. By being trained to discriminate what to look for in images — such as differentiating healthy from diseased tissue — and correlating that with patient records and scientific literature, systems that can “see” will help doctors detect medical problems with far greater speed and accuracy. 

Hearing

Within five years, a distributed system of clever sensors will detect elements of sound such as sound pressure, vibrations and sound waves at different frequencies. It will interpret these inputs to predict when trees will fall in a forest or when a landslide is imminent. Such a system will “listen” to our surroundings and measure movements, or the stress in a material, to warn us if danger lies ahead.

Raw sounds will be detected by sensors, much like the human brain. A system that receives this data will take into account other modalities, such as visual or tactile information, and classify and interpret the sounds based on what it has learned. When new sounds are detected, the system will form conclusions based on previous knowledge and the ability to recognize patterns.

For example, “baby talk” will be understood as a language, telling parents or doctors what infants are trying to communicate. Sounds can be a trigger for interpreting a baby’s behavior or needs. By being taught what baby sounds mean — whether fussing indicates a baby is hungry, hot, tired or in pain — a sophisticated speech recognition system would correlate sounds and babbles with other sensory or physiological information such as heart rate, pulse and temperature.

In the next five years, by learning about emotion and being able to sense mood, systems will pinpoint aspects of a conversation and analyze pitch, tone and hesitancy to help us have more productive dialogues that could improve customer call center interactions, or allow us to seamlessly interact with different cultures.

For example, today, IBM scientists are beginning to capture underwater noise levels in Galway Bay, Ireland to understand the sounds and vibrations of wave energy conversion machines, and the impact on sea life, by using underwater sensors that capture sound waves and transmit them to a receiving system to be analyzed.

Taste

What if we could make healthy foods taste delicious using a different kind of computing system that is built for creativity?

IBM researchers are developing a computing system that detects flavor, to be used with chefs to create the most tasty and novel recipes. It will break down ingredients to their molecular level and blend the chemistry of food compounds with the psychology behind what flavors and smells humans prefer. By comparing this with millions of recipes, the system will be able to create new flavor combinations that pair, for example, roasted chestnuts with other foods such as cooked beetroot, fresh caviar, and dry-cured ham.

A system like this can also be used to help us eat healthier, creating novel flavor combinations that will make us crave a vegetable casserole instead of potato chips.

The computer will be able to use algorithms to determine the precise chemical structure of food and why people like certain tastes. These algorithms will examine how chemicals interact with each other, the molecular complexity of flavor compounds and their bonding structure, and use that information, together with models of perception to predict the taste appeal of flavors.
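
As a rough illustration of the kind of pairing logic being described, the shared-flavour-compound idea can be sketched in a few lines: score candidate ingredient pairs by how many flavour compounds they have in common. The compound sets below are made-up placeholders, not IBM data or real food chemistry.

```python
# Toy "food pairing" sketch: score ingredient pairs by shared flavour compounds.
# The compound sets below are invented placeholders, not real chemistry data.

FLAVOUR_COMPOUNDS = {
    "roasted chestnut": {"furaneol", "vanillin", "pyrazine"},
    "cooked beetroot":  {"geosmin", "furaneol", "pyrazine"},
    "fresh caviar":     {"trimethylamine", "hexanal"},
    "dry-cured ham":    {"hexanal", "furaneol"},
}

def pairing_score(a: str, b: str) -> int:
    """On the food-pairing hypothesis, more shared compounds suggests a better match."""
    return len(FLAVOUR_COMPOUNDS[a] & FLAVOUR_COMPOUNDS[b])

base = "roasted chestnut"
ranked = sorted((other for other in FLAVOUR_COMPOUNDS if other != base),
                key=lambda other: pairing_score(base, other), reverse=True)
for other in ranked:
    print(f"{base} + {other}: {pairing_score(base, other)} shared compounds")
```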

Not only will it make healthy foods more palatable — it will also surprise us with unusual pairings of foods actually designed to maximize our experience of taste and flavor. In the case of people with special dietary needs, such as individuals with diabetes, it would develop flavors and recipes to keep their blood sugar regulated, but satisfy their sweet tooth.   

Smell

During the next five years, tiny sensors embedded in your computer or cell phone will detect if you’re coming down with a cold or other illness. By analyzing odors, biomarkers and thousands of molecules in someone’s breath, doctors will have help diagnosing and monitoring the onset of ailments such as liver and kidney disorders, asthma, diabetes and epilepsy by detecting which odors are normal and which are not.

In the next five years, IBM technology will “smell” surfaces for disinfectants to determine whether rooms have been sanitized. Using novel wireless “mesh” networks, data on various chemicals will be gathered and measured by sensors, and continuously learn and adapt to new smells over time.

Due to advances in sensor and communication technologies in combination with deep learning systems, sensors can measure data in places never thought possible. For example, computer systems can be used in agriculture to “smell” or analyze the soil condition of crops. In urban environments, this technology will be used to monitor issues with refuse, sanitation and pollution — helping city agencies spot potential problems before they get out of hand.

Today IBM scientists are already sensing environmental conditions and gases to preserve works of art. This innovation is beginning to be applied to tackle clinical hygiene, one of the biggest challenges in healthcare today.

Antibiotic-resistant bacteria such as Methicillin-resistant Staphylococcus aureus (MRSA), which in 2005 was associated with almost 19,000 hospital stay-related deaths in the United States, are commonly found on the skin and can be easily transmitted wherever people are in close contact. One way of fighting MRSA exposure in healthcare institutions is by ensuring medical staff follow clinical hygiene guidelines.


Sue Tamani's insight:

When you look at the comments below this article, you see many naysayers. I think these 5 will be the least of what is coming!

Scooped by Sue Tamani

Should We Live to 1,000? by Peter Singer - Project Syndicate
In developed countries, aging is the ultimate cause of 90% of all human deaths; thus, treating aging is a form of preventive medicine for all of the diseases of old age.
Sue Tamani's insight:

Most interesting quote - "Aubrey de Grey, Chief Science Officer of SENS Foundation and the world’s most prominent advocate of anti-aging research, argues that it makes no sense to spend the vast majority of our medical resources on trying to combat the diseases of aging without tackling aging itself. "
Read more at http://www.project-syndicate.org/commentary/the-ethics-of-anti-aging-by-peter-singer#FJYO1CK1XhQfwtZk.99

Rescooped by Sue Tamani from 21st Century Concepts- Educational Neuroscience

How Technology is Changing the Way Children Think and Focus | Psychology Today

 

By Jim Taylor, Ph.D.

"There is...a growing body of research that technology can be both beneficial and harmful to different ways in which children think. Moreover, this influence isn’t just affecting children on the surface of their thinking. Rather, because their brains are still developing and malleable, frequent exposure by so-called digital natives to technology is actually wiring the brain in ways very different than in previous generations. What is clear is that, as with advances throughout history, the technology that is available determines how our brains develop. For example, as the technology writer Nicholas Carr has observed, the emergence of reading encouraged our brains to be focused and imaginative. In contrast, the rise of the Internet is strengthening our ability to scan information rapidly and efficiently.

"The effects of technology on children are complicated, with both benefits and costs. Whether technology helps or hurts in the development of your children’s thinking depends on what specific technology is used, and how and how frequently it is used. At least early in their lives, you have the power to dictate your children’s relationship with technology and, as a result, its influence on them, from synaptic activity to conscious thought.

 

"Over the next several weeks, I’m going to focus on the areas in which the latest thinking and research has shown technology to have the greatest influence on how children think: attention, information overload, decision making, and memory/learning. Importantly, all of these areas are ones in which you can have a counteracting influence on how technology affects your children."


Via Deborah McNelis, Terry Doherty, Meryl Jaffe, PhD, Jim Lerman, Lynnette Van Dyke, Gust MEES, Tom Perran
Sue Tamani's insight:

I can feel online technology changing the way I think!


It must surely be doing something to developing brains.

Linda Buckmaster's comment, December 17, 2012 5:44 PM
Thanks for the rescoop.
Jim Siders's curator insight, March 20, 2013 12:06 PM

to tech or not to tech........that is the question. Not just a casual question if this report is accurate.

sarah's curator insight, May 31, 2013 2:04 AM

Very interesting.

Scooped by Sue Tamani

Alert: you may be living in a simulated universe

Everything we see around us could be little more than bits in a giant supercomputer. 
As a cosmologist, I often carry around a universe or two in my pocket. Not entire, infinitely large universes, but maybe a few billion light years or so across. Enough to be interesting.

Of course, these are not “real” universes; rather they are universes I have simulated on a computer.

The basic idea of simulating a universe is quite simple. You need “initial conditions” which, for me, is the state of the universe just after the Big Bang.

To this, you add the laws of physics, such as: how gravity pulls on mass, how gas flows into galaxies, and how stars are born, live and die.

You press “go”, and then sit back as the computer calculates all of the complex interactions, and evolves the universe over cosmic time. The video below gives a good introduction:

A wonderful description by Andrew Pontzen on how astronomers synthesize and study their very own galaxies and universes.
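
To give a rough sense of how small that core recipe is, here is a deliberately tiny sketch in Python: some initial conditions, a law of gravity, and a loop that evolves the system forward in time. It is a toy direct-summation example for illustration only, nothing like the production cosmological codes described in this article.

```python
# Toy "universe in a loop": initial conditions + gravity + time stepping.
# A direct-summation N-body sketch in arbitrary units, for illustration only.
import random

G, DT, SOFTENING = 1.0, 0.01, 0.05   # toy gravitational constant, time step, softening

random.seed(42)
# "Initial conditions": random positions, zero velocities, unit masses.
bodies = [{"pos": [random.uniform(-1, 1), random.uniform(-1, 1)],
           "vel": [0.0, 0.0],
           "mass": 1.0} for _ in range(20)]

def step(bodies):
    """Advance every body by one time step under mutual Newtonian gravity."""
    for b in bodies:                          # kick: update velocities
        ax = ay = 0.0
        for other in bodies:
            if other is b:
                continue
            dx = other["pos"][0] - b["pos"][0]
            dy = other["pos"][1] - b["pos"][1]
            r2 = dx * dx + dy * dy + SOFTENING ** 2
            inv_r3 = r2 ** -1.5
            ax += G * other["mass"] * dx * inv_r3
            ay += G * other["mass"] * dy * inv_r3
        b["vel"][0] += ax * DT
        b["vel"][1] += ay * DT
    for b in bodies:                          # drift: update positions
        b["pos"][0] += b["vel"][0] * DT
        b["pos"][1] += b["vel"][1] * DT

for _ in range(1000):                         # press "go" and evolve over time
    step(bodies)
print(bodies[0]["pos"])                       # where the first body ended up
```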
What’s more fun is playing “Master of the Universe”, and messing about with the laws of physics, such as changing the properties of gravity, or how black holes swallow matter. Waiting to see the outcome of these mutated universes is always interesting.

I know in my heart that these universes are nothing more than ones and zeros buried within my computer, but in the movies I make of my evolving galaxies and clusters, and the one embedded further down in this article, I can see the mass moving around. It looks real!

Computer simulations of complex phenomena are everywhere in science, and cosmologists aren’t the only ones that marvel at synthetic chunks of the real universe.

It is equally inspiring to watch the flow of air around a newly-designed wing (see video below), or how individual molecules make their way through a biological membrane, and such simulations have revolutionised science.

Of course, these advances have only occurred with the growth of computer power over the last few decades, and the push is always towards the inclusion of more complex physics over an immense range of scales, from the cosmological to the quantum.

We are always limited by the power of computing, but as computers get bigger and faster, so does the detail within our synthetic universes.

“Cosmologists aren’t the only ones that marvel at synthetic chunks of the real universe.”
But let’s imagine a time in the future, a time when computers are powerful enough to fully simulate a human brain, with its vast array of interconnected neurons.

These neurons obey the laws of physics, and fire as their chemical balances change. Thoughts would echo around this synthetic brain, with electrical signals coursing backwards and forwards.

Not being a philosopher, I will ignore the (seemingly endless) debates about free will and consciousness, but if you take a purely mechanical view of the human brain, the synthetic brain will be as “alive” as the organic brain that made it.

Fed with the stimulus from a synthetic body interacting with a synthetic universe, it will experience pain and fear, happiness and love, even boredom and drowsiness.

There are, in fact, some that believe we will all be reborn in a glorious future, where computers are powerful enough to recreate everyone who has ever lived, and then sustain them for eternity.

While this vision of heaven is touted as the Final Anthropic Principle, some have more bluntly labelled it the “Completely Ridiculous Anthropic Principle”, or C.R.A.P. for short.

But we may not have to wait until the distant future!

“In simulations, I can see the mass moving around. It looks real!”
To quote the late, great Douglas Adams, “there is another theory which states that this has already happened”.

Not that someone on Earth, or even within our universe, has created a truly synthetic universe, complete with beings that are clueless to the fact they are nothing but part of a computer experiment.

No, the startling realisation is that we, our very existence, every thing we have seen, have experienced, or will ever experience, could be nothing but the chugging of bits in an unimaginable supercomputer.

As I type this on a laptop, and stare out the train window at the station rolling past, at the people, the trees, the dirt on the ground, surely I would know if I was part of a computer program?

But then again, my brain is simply processing inputs, and if the simulated inputs fed into my simulated brain are good enough, how would I know?

It is important to remember that this picture is different to the “Brain-in-a-vat” presented in the Matrix movies. There, an organic brain is fed information, recreating the synthetic world in which the characters find themselves.

Instead, our picture is that there is no organic brain. We are part of the matrix itself.

So, how can we know if we are part of a computer simulation?

It is important to remember our earthly computers are limited in the way they can represent real numbers, holding only a finite number of digits for typical calculations.

What this means is that my simulated universes are quantised in some sense, with the limited resolution imprinted in the details of the structure that is produced.
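
That finite precision is easy to demonstrate on any desktop machine. A minimal illustration using standard double-precision floating-point numbers:

```python
# Finite precision in practice: a double-precision float carries only about
# 16 significant decimal digits, so detail below that scale is rounded away.
import sys

eps = sys.float_info.epsilon
print(eps)                    # ~2.22e-16, the spacing between floats near 1.0
print(1.0 + eps == 1.0)       # False: eps is just barely resolvable next to 1.0
print(1.0 + eps / 2 == 1.0)   # True: anything finer is rounded away
print(0.1 + 0.2 == 0.3)       # False: 0.3 is not exactly representable in binary
```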

If we are living in a computer simulation, then maybe such resolution effects are apparent to us. Our world doesn’t look like the Minecraft universe, and so we expect the resolution scale to be smaller than the scale of individual atoms, rather than large, foot-cubed blocks.

Just last month, researchers from the University of Bonn, Germany, suggested we can detect such “chunkiness” of the small scale by looking at how high-energy particles, known as cosmic rays, traverse huge distances in the universe. As these rays bounce through this space, their energy properties get modified, and by looking at what arrives on Earth, we can work out the size of the chunks.

But there are problems with this idea.

Firstly, we are working under the assumption that the computer we live in operates like an everyday computer. But these everyday computers are governed by the laws of physics of the synthetic universe in which we reside.

The unimaginably powerful computer that hosts our universe may operate in ways we cannot even think about.

The resolution scale of our universe is considerably smaller than in the “chunky” Minecraft universe.
Another problem is that those trying to understand the nature of the very small have already proposed a quantised backdrop of space and time in which we live.

Is the existence of such a space-time simply a property of a real universe, or the tell-tale sign of a synthetic one? How can we ever tell them apart? Do we even want to?

One way of potentially detecting the real nature of the universe is to search for the extraordinary – or, in the words of my children, who play videogames, glitches – where the program doesn’t do as expected.

Perhaps some of the things we cannot yet explain are simply glitches in the program (although I am a fan of illusionist Derren Brown and think the human mind can be easily tricked).

The other alternative is more drastic.

When my synthetic universes are running, they can abruptly come to a halt for a variety of reasons, such as disk-space filling up, errors in the memory, or something as simple as the cleaner unplugging the computer to vacuum the floor.

If my synthetic universe is running when the power goes out, it simply ceases to exist.

I do hope the cleaners of our potential-hyperdimensional-universe-simulating overlords are more careful.

Go here >> http://theconversation.edu.au/alert-you-may-be-living-in-a-simulated-universe-10671

to watch the five videos in this article.

Scooped by Sue Tamani

China’s next-generation Internet is a world-beater | KurzweilAI
Artist rendering of city-sized cloud computing and office complex being built in China (IBM)

An open-access report published in the Philosophical…
Sue Tamani's insight:

can't wait for it!

Scooped by Sue Tamani

The self: The one and only you - 20 February 2013 - New Scientist
20 February 2013, by Jan Westerhoff. Magazine issue 2905.

Video: Flashing creates illusion of motion

There are flaws in our intuitive beliefs about what makes us who we are. Who are we really?

Read more: "The great illusion of the self"

THERE appear to be few things more certain to us than the existence of our selves. We might be sceptical about the existence of the world around us, but how could we be in doubt about the existence of us? Isn't doubt made impossible by the fact that there is somebody who is doubting something? Who, if not us, would this somebody be?

While it seems irrefutable that we must exist in some sense, things get a lot more puzzling once we try to get a better grip of what having a self actually amounts to.

Three beliefs about the self are absolutely fundamental to our sense of who we are. First, we regard ourselves as unchanging and continuous. This is not to say that we remain forever the same, but that among all this change there is something that remains constant and that makes the "me" today the same person I was five years ago and will be five years in the future.

Second, we see our self as the unifier that brings it all together. The world presents itself to us as a cacophony of sights, sounds, smells, mental images, recollections and so forth. In the self, these are all integrated and an image of a single, unified world emerges.

Finally, the self is an agent. It is the thinker of our thoughts and the doer of our deeds. It is where the representation of the world, unified into one coherent whole, is used so we can act on this world.

All of these beliefs appear to be blindingly obvious and as certain as can be. But as we look at them more closely, they become less and less self-evident.

It would seem obvious that we exist continuously from our first moments in our mother's womb up to our death. Yet during the time that our self exists, it undergoes substantial changes in beliefs, abilities, desires and moods. The happy self of yesterday cannot be exactly the same as the grief-stricken self of today, for example. But we surely still have the same self today that we had yesterday.

There are two different models of the self we can use to explore this issue: a string of pearls and a rope. According to the first model, our self is something constant that has all the changing properties but remains itself unchanged. Like a thread running through every pearl on a string, our self runs through every single moment of our lives, providing a core and a unity for them. The difficulty with this view of the self is that it cannot be most of the things we usually think define us. Being happy or sad, being able to speak Chinese, preferring cherries to strawberries, even being conscious – all these are changeable states, the disappearance of which should not affect the self, as a disappearance of individual pearls should not affect the thread. But it then becomes unclear why such a minimal self should have the central status in our lives that we usually accord to it.

The second model is based on the fact that a rope holds together even though there is no single fibre running through the entire rope, just a sequence of overlapping shorter fibres. Similarly, our self might just be the continuity of overlapping mental events. While this view has a certain plausibility, it has problems of its own. We usually assume that when we think of something or make a decision, it is the whole of us doing it, not just some specific part. Yet, according to the rope view, our self is never completely present at any point, just like a rope's threads do not run its entire length.

It seems then as if we are left with the unattractive choice between a continuous self so far removed from everything constituting us that its absence would scarcely be noticeable, and a self that actually consists of components of our mental life, but contains no constant part we could identify with. The empirical evidence we have so far points towards the rope view, but it is by no means settled.

Even more important, and just as troublesome, is our second core belief about the self: that it is where it all comes together.

It is easy to overlook the significance of this fact, but the brain accomplishes an extremely complex task in bringing about the appearance of a unified world. Consider, for example, that light travels much faster than sound yet visual stimuli take longer to process than noises. Putting together these different speeds means that sights and sounds from an event usually become available to our consciousness at different times (only sights and sounds from events about 10 metres away are available at the same time). That means the apparent simultaneity of hearing a voice and seeing the speaker's lips move, for example, has to be constructed by the brain.
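
A quick back-of-envelope check of that "about 10 metres" figure: sound covers roughly 10 metres in about 29 milliseconds, which is on the order of the extra time the brain needs to process a visual stimulus compared with a sound. The 30-millisecond processing gap used below is an assumed round number for illustration, not a figure from the article.

```python
# Back-of-envelope check of the "about 10 metres" horizon mentioned above.
# Assumptions (not from the article): sound travels at 343 m/s in air, and
# visual processing lags auditory processing by roughly 30 milliseconds.
SPEED_OF_SOUND_M_S = 343.0
VISUAL_EXTRA_LATENCY_S = 0.030

distance_m = SPEED_OF_SOUND_M_S * VISUAL_EXTRA_LATENCY_S
print(f"sight and sound line up for events roughly {distance_m:.1f} m away")  # ~10.3 m
```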

Our intuitive view of the result of this process resembles a theatre. Like a spectator seated in front of a stage, the self perceives a unified world put together from a diverse range of sensory data. It would get confusing if these had not been unified in advance, just as a theatregoer would be confused if they heard an actor's lines before he was on stage. While this view is persuasive, it faces many difficulties.

Consider a simple case, the "beta phenomenon" (see diagram and video above). If a bright spot is flashed onto the corner of a screen and is immediately followed by a similar spot in the opposite corner, it can appear as if there was a dot moving diagonally across the screen. This is easily explained: the brain often fills in elements of a scene using guesswork. But a tweak to this experiment produces a curious effect.

If the spots are different colours – for example a red spot followed by a green spot – observers see a moving spot that changes colour abruptly around the mid-point of the diagonal (see "Spotted trick"). This is very peculiar. If the brain is filling in the missing positions along the diagonal for the benefit of the self in the theatre, how does it know before the green spot has been observed that the colour will switch?

One way of explaining the beta phenomenon is by assuming that our experience is played out in the theatre with a small time delay. The brain doesn't pass on the information about the spots as soon as it can, but holds it back for a little while. Once the green spot has been processed, both spots are put together into a perceptual narrative that involves one moving spot changing colour. This edited version is then screened in the theatre of consciousness.

Unfortunately, this explanation does not fit in well with evidence of how perception works. Conscious responses to visual stimuli can occur at a speed very close to the minimum time physically possible. If we add up the time it takes for information to reach the brain and then be processed, there is not enough time left for a delay of sufficient length to explain the beta phenomenon.

Perhaps there is something wrong with the notion of a self perceiving a unified stream of sensory information. Perhaps there are just various neurological processes taking place in the brain and various mental processes taking place in our mind, without some central agency where it all comes together at a particular moment, the perceptual "now" (see "The self: You think you live in the present?"). It is much easier to make sense of the beta phenomenon if there is no specific time when perceptual content appears in the theatre of the self – because there is no such theatre.

The perception of a red spot turning green arises in the brain only after the perception of the green spot. Our mistaken perception of the real flow of events is akin to the way we interpret the following sentence: "The man ran out of the house, after he had kissed his wife". The sequence in which the information comes in on the page is "running–kissing", but the sequence of events you construct and understand is "kissing–running". For us to experience events as happening in a specific order, it is not necessary that information about these events enters our brain in that same order.

The final core belief is that the self is the locus of control. Yet cognitive science has shown in numerous cases that our mind can conjure, post hoc, an intention for an action that was not brought about by us.

In one experiment, a volunteer was asked to move a cursor slowly around a screen on which 50 small objects were displayed, and asked to stop the cursor on an object every 30 seconds or so.

Self-delusion

The computer mouse controlling the cursor was shared, ouija-board style, with another volunteer. Via headphones, the first volunteer would hear words, some of which related to the objects on screen. What this volunteer did not know was that their partner was one of the researchers who would occasionally force the cursor towards a picture without the volunteer noticing.

If the cursor was forced to the image of a rose, and the volunteer had heard the word "rose" a few seconds before, they reported feeling that they had intentionally moved the mouse there. The reasons why these cues combined to produce this effect is not what is interesting here: more important is that it reveals one way that the brain does not always display its actual operations to us. Instead, it produces a post-hoc "I did this" narrative despite lacking any factual basis for it (American Psychologist, vol 54, p 480).

So, many of our core beliefs about ourselves do not withstand scrutiny. This presents a tremendous challenge for our everyday view of ourselves, as it suggests that in a very fundamental sense we are not real. Instead, our self is comparable to an illusion – but without anybody there that experiences the illusion.

Yet we may have no choice but to endorse these mistaken beliefs. Our whole way of living relies on the notion that we are unchanging, coherent and autonomous individuals. The self is not only a useful illusion, it may also be a necessary one.

This article appeared in print under the headline "What are you?"

I am the one and only

Think back to your earliest memory. Now project forward to the day of your death. It is impossible to know when this will come, but it will.

What you have just surveyed might be called your "self-span", or the time when this entity you call your self exists. Either side of that, zilch.

Which is very mysterious, and a little unsettling. Modern humans have existed for perhaps 100,000 years, and more than 100 billion have already lived and died. We assume that they all experienced a sense of self similar to yours. None of these selves has made a comeback, and as far as we know, neither will you.

What is it about a mere arrangement of matter and energy that gives rise to a subjective sense of self? It must be a collective property of the neurons in your brain, which have mostly stayed with you throughout life, and which will cease to exist after you die. But why a given bundle of neurons can give rise to a given sense of selfhood, and whether that subjective sense can ever reside in a different bundle of neurons, may forever remain a mystery.

Graham Lawton

Jan Westerhoff is a philosopher at the University of Durham, UK, and the University of London's School of Oriental and African Studies, and author of Reality: A very short introduction (Oxford University Press, 2011)

Sue Tamani's insight:

Mind blowing, especially the last paragraph.

Scooped by Sue Tamani

Lunch and dinner with Julian Assange, in prison
Everybody warned this would be no ordinary invitation, and they were right.

Drawing strength from distress, disgusted by the hypocrisy of governments, willing to take on the mighty, he’s reminded the world of a universal political truth: arbitrary power thrives on secrets.

Sue Tamani's insight:

Such an interesting interview. And the discussion after is interesting too. Don't wait for the movie - read it now.

Scooped by Sue Tamani

Ask Ray | How to Create a Mind thought experiment | KurzweilAI

Ray,

I just finished reading How to Create a Mind. I found it both interesting and informative. At the end, I believe that there is an inherent difference between a human brain and an AI system, a difference that can’t be overcome by any amount of added speed and capacity. To illustrate this difference I have included a thought experiment:

Take the most powerful artificial brain in existence. Include all programs necessary to make it function as an independent, self-conscious entity. Let it read everything in existence up to, but not beyond, the birth of Albert Einstein.

With no further human intervention of any kind, how long do you think it will take this artificial brain to develop the theory of relativity?

Feel free to use the artificial intelligence capability you think will exist in 2029; but, again, limiting the knowledge input to that which was available to Einstein.

It is my belief that the actual human brain is sufficiently different from an artificial intelligence system that without any human intervention this theory would never be forthcoming. If you believe otherwise, I would be interested in seeing the process modeled.

Again, since the artificial intelligence system is a self-conscious entity, presumably capable of self-direction, I would expect no human intervention whatsoever in this process.

— Bob Caine

Bob,

Interesting point but keep in mind that all — biological — human brains at the time (except for Einstein’s) did not come up with relativity either.

Einstein’s brain was ahead of the curve, but nonbiological intelligence will continue to improve both in hardware and software (algorithmically) past 2029.

So perhaps it is the AI of 2035 or 2040 who would be able to come up with relativity in your thoughtful thought experiment.

— Ray Kurzweil

My point in using Einstein’s Theory of Relativity in my thought experiment on AI equivalence to the human brain was not related to whether or not Einstein had the support of others or how exceptional his mind was.

Rather, it had to do with the ability of an AI system to have a “sense of purpose” of its own without human intervention. My question had to do with how an AI system would decide, without human assistance, that there is any reason to want to know the exact relationship between matter and energy; the relationship between the speed of light and the relative motion of those observing that light; or, for that matter, the relationship between the cosmic microwave background and the Big Bang.

Given the task, I can readily see the role an AI system could play in deriving a solution. But how would it decide on its own that studies such as these should even be undertaken and then design, execute, and assess the related research to arrive at a verifiable theory?

— Bob Caine

 

Sue Tamani's insight:

The comments are really fascinating.

Scooped by Sue Tamani

Earth on brink of 'irreversible' collapse of global ecosystem, new SFU study warns
Add excess consumption, over-population and extinctions to the climate crisis, and an SFU prof says the entire world teeters on the brink – and could tip in the blink of an eye – according to a ground-breaking study.
Sue Tamani's insight:

What do you think? Are you still a climate change denier?

Scooped by Sue Tamani

DNA data storage: 100 million hours of HD video in every cup

Biological systems have been using DNA as an information storage molecule for billions of years. Vast amounts of data can thus be encoded within microscopic volumes, and we carry the proof of this concept in the cells of our own bodies.

Could this ultimate storage solution meet the ever-growing needs of archivists in this age of digital information?

This dream has come a step closer to reality with the publication of a new technique in this week’s edition of the scientific journal Nature.

Stored in DNA

A team of researchers headed by Nick Goldman and Ewan Birney at the European Bioinformatics Institute of the European Molecular Biology Laboratory (EMBL-EBI) has dramatically demonstrated the potential of the technique to store and transport human-made data.

Their data included some well-chosen iconic elements: Shakespeare’s 154 sonnets, an audio excerpt from Martin Luther King’s “I have a dream” speech, Watson and Crick’s classic paper on the structure of DNA, and a colour photograph of the European Bioinformatics Institute.

These files, in common digital formats found on almost every desktop computer, were encoded byte-by-byte as DNA molecules, shipped from the USA to Germany without specialised packaging, and finally decoded back into their original electronic formats.

Although the study involved less than a megabyte of data in total, this is already orders of magnitude more than has previously been encoded as synthesised DNA.

The authors argue convincingly that the technique could eventually be scaled up to create a storage capacity far beyond all the digital information stored globally today (somewhere in the vicinity of 1 zettabyte, or 10¹⁵ megabytes).

Perfect for data storage

DNA molecules are natural vehicles for digital information. They consist of four chemicals connected end-to-end like characters of an alphabet to form long strings similar to a line of text. DNA molecules are even more similar to the sequences of zeroes and ones that digital computers use to represent information.
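
To make the analogy concrete, here is a toy sketch of turning bytes into DNA bases at two bits per base. The actual scheme in the Nature paper is more sophisticated (it is designed to avoid error-prone repeated bases), so this is purely illustrative of the idea, not the published method.

```python
# Toy illustration of storing bytes as DNA bases, two bits per base.
# This is NOT the encoding used in the Nature paper (which is designed to
# avoid error-prone repeated bases); it only shows the basic idea.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"Shall I compare thee to a summer's day?"
dna = encode(message)
print(len(message), "bytes ->", len(dna), "bases:", dna[:24], "...")
assert decode(dna) == message   # round-trips losslessly
```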

DNA has substantial advantages over both printed text and electronic media. For one thing, it can remain stable for long periods of time with a minimum of care. Intact DNA has been extracted from bones (and other organic matter) tens of thousands of years old, and its sequence reconstructed with as much detail as if it had come directly from a living organism.

Another advantage of DNA over electronic media is that it requires no power supply to maintain its integrity, which makes it easy to transport and store, and potentially less vulnerable to technological failure.

Dr Nick Goldman of EMBL-EBI, looking at synthesised DNA in a vial. European Molecular Biology Laboratory

Perhaps the greatest advantage of DNA as a storage medium is its minuteness. For example, EMBL-EBI’s official press release claims that more than 100 million hours of high-definition video could be stored in roughly a cup of DNA.

We’re getting there

DNA storage devices won’t be available in the supermarket any time soon. The major drawback is the current cost of synthesising DNA in the quantities required, estimated at around US$12,400 per megabyte of data stored.

This is cost-effective only for archives intended to last hundreds or even thousands of years – something few of us contemplate.

The main cost of maintaining electronic archives over such a long period of time is that the media have to be periodically replaced and the data copied, whereas DNA has merely to be stored somewhere cool, dry and dark.

But if the cost of synthesising DNA can be reduced by one or two orders of magnitude – which, judging by current trends could occur within a decade – DNA archives intended to last less than 50 years would become feasible.

Another issue is the cost of decoding the information stored in DNA, estimated at about US$220 per megabyte. At that price, DNA archives would only be rarely accessed. Yet this too could change in the near future, given the rapid pace of innovation in DNA-related technologies.

We shouldn’t let these practical issues distract from the significance of this exciting innovation.

As the inventors point out, the technique may already be economically viable and attractive for certain long-term, infrequently accessed archives, including some government and historical records, or science projects that generate massive amounts of data.

Examples of the latter include important large-scale experiments in particle physics, astronomy and medicine.

But perhaps the most exciting aspect of this proof-of-concept study is the impetus it provides to further innovation and the unexplored doors of possibility it opens.

Sue Tamani's insight:

More information storage from nature! 

Scooped by Sue Tamani

PC makers bet on gaze, gesture, voice, and touch | KurzweilAI

Products that could make it common to control a computer, TV, or something else using eye gaze, gesture, voice, and even facial expression were launched at the Consumer Electronics Show in Las Vegas this week, MIT Technology Review reports.

The technology promises to make computers and other devices easier to use, let devices do new things, and perhaps boost the prospects of companies reliant on PC sales. Industry figures suggest that interest in laptop and desktop computers is waning as consumers’ heads are turned by smartphones and tablets.

Intel announced a new webcam-like device and supporting software intended to bring gesture, voice control, and facial expression recognition to PCs. “This will be available as a low-cost peripheral this year,” said Kirk Skaugen, vice president for Intel’s PC client group.

Intel also announced that, before the end of the year, it would release software that adds a voice-activated assistant to PCs, powered by technology from voice-recognition company Nuance.

Intel’s new gesture-sensing hardware device, made in partnership with the software company SoftKinetic and webcam maker Creative, has a combination of conventional and infrared cameras, and several microphones. The supporting software enables applications on a computer to track each of a person’s 10 fingers, recognize faces, and interpret words spoken in nine languages.

Software developers can already download Intel’s enabling software and ask the company to send one of the prototype devices, a move intended to encourage the development of applications that support new forms of interaction.

Gaze control was touted as a crucial feature of future PCs and other gadgets by two companies at CES.

Tobii, a Swedish company, introduced a standalone USB device called the REX that allows any Windows 8 PC to track eye movement. The small black box is initially being made available to software developers, but will go on general sale late in 2013. Tobii’s eye-tracking technology shines infrared light at a PC user and tracks its reflection in the user’s pupils.

EyeTech, a smaller company based in the U.S. that has previously focused on users unable to operate mice and keyboards, showed similar technology, touting a new sensor that can be integrated into PC peripherals, large desktop computers, and TVs.

Sue Tamani's insight:

We have a lot to thank Ray Kurzweil for.

more...
No comment yet.
Scooped by Sue Tamani
Scoop.it!

The future of medicine is now | KurzweilAI

The future of medicine is now | KurzweilAI | anti dogmanti | Scoop.it

Six medical innovations are poised to transform the way we fight disease, The Wall Street Journal reports.

Surgeons at Boston Children’s Hospital have developed a way to help children born with half a heart to essentially grow a whole one — by marshaling the body’s natural capacity to heal and develop.

Oxford Nanopore Technologies has unveiled the first of a generation of tiny DNA sequencing devices that many predict will eventually be as ubiquitous as cellphones — it’s already the size of one.

A test developed by Foundation Medicine Inc. enables doctors to test a tumor sample for 280 different genetic mutations suspected of driving tumor growth.

MK-3475, being developed by Merck & Co., is among a new category of drugs that unleash an army of immune cells to hunt down a cancer. — Ron Winslow

Last month, the FDA cleared a new iPhone add-on that lets doctors take an electrocardiogram just about anywhere. Other smartphone apps help radiologists read medical images and allow patients to track moles for signs of skin cancer.

Gene therapy is poised to become a viable option for a variety of often life-threatening medical conditions, especially those resulting from a single defective gene.


Sue Tamani's insight:

Can't happen fast enough for me!!!

more...
No comment yet.
Scooped by Sue Tamani
Scoop.it!

Flutter App How to control music and video on the Web with a wave of your hand | KurzweilAI

Flutter App How to control music and video on the Web with a wave of your hand | KurzweilAI | anti dogmanti | Scoop.it
“We are so excited and pleased to release a new version that allows you to control music & videos in Google Chrome using gestures — just in time for the holiday season. Flutter now supports YouTube, Pandora, Grooveshark & Netflix.”

Yeah, like all the time — especially because my calls come in via Skype so callers often get a blast of a YouTube video or Spotify song, so I …

Uh, OK, but how does it work?

Cool, so how do I get it?

“We are so excited and pleased to release a new version [of Flutter] that allows you to control music & videos in Google Chrome using gestures — just in time for the holiday season. Flutter now supports YouTube, Pandora, Grooveshark & Netflix. We will be updating AppStore version in early 2013. For now direct download the new version.”

Nice, I thank you, my callers will thank you!

Amara D. Angelica is editor of KurzweilAI.


Comments (4)

December 28, 2012
by Editor

Just installed it on my Mac and tried it with YT and Spotify. Takes some experimentation but works great. Doesn’t work close to the screen.


December 28, 2012 
by chandra Citta

Trying it with Pandora, right now fun to install and to use….


December 28, 2012 
by Chandra Citta

hmmmm! auto correct me not…..” its crystal ……”


December 28, 2012 
by Chandra Citta

Thanks for the app, thanks also for it’s crystal clear presentation…fascinating….


Sue Tamani's insight:

Funny - when I tried to post the name to FB, it wouldn’t let me! Still a bit of a secret eh? I was number 1,166 to download it.

more...
No comment yet.
Scooped by Sue Tamani
Scoop.it!

No, you're not entitled to your opinion

No, you're not entitled to your opinion | anti dogmanti | Scoop.it

Every year, I try to do at least two things with my students at least once. First, I make a point of addressing them as “philosophers” – a bit cheesy, but hopefully it encourages active learning.

Secondly, I say something like this: “I’m sure you’ve heard the expression ‘everyone is entitled to their opinion.’ Perhaps you’ve even said it yourself, maybe to head off an argument or bring one to a close. Well, as soon as you walk into this room, it’s no longer true. You are not entitled to your opinion. You are only entitled to what you can argue for.”

A bit harsh? Perhaps, but philosophy teachers owe it to our students to teach them how to construct and defend an argument – and to recognize when a belief has become indefensible.

The problem with “I’m entitled to my opinion” is that, all too often, it’s used to shelter beliefs that should have been abandoned. It becomes shorthand for “I can say or think whatever I like” – and by extension, continuing to argue is somehow disrespectful. And this attitude feeds, I suggest, into the false equivalence between experts and non-experts that is an increasingly pernicious feature of our public discourse.

Firstly, what’s an opinion?

Plato distinguished between opinion or common belief (doxa) and certain knowledge, and that’s still a workable distinction today: unlike “1+1=2” or “there are no square circles,” an opinion has a degree of subjectivity and uncertainty to it. But “opinion” ranges from tastes or preferences, through views about questions that concern most people such as prudence or politics, to views grounded in technical expertise, such as legal or scientific opinions.

You can’t really argue about the first kind of opinion. I’d be silly to insist that you’re wrong to think strawberry ice cream is better than chocolate. The problem is that sometimes we implicitly seem to take opinions of the second and even the third sort to be unarguable in the way questions of taste are. Perhaps that’s one reason (no doubt there are others) why enthusiastic amateurs think they’re entitled to disagree with climate scientists and immunologists and have their views “respected.”

Meryl Dorey is the leader of the Australian Vaccination Network, which despite the name is vehemently anti-vaccine. Ms. Dorey has no medical qualifications, but argues that if Bob Brown is allowed to comment on nuclear power despite not being a scientist, she should be allowed to comment on vaccines. But no-one assumes Dr. Brown is an authority on the physics of nuclear fission; his job is to comment on the policy responses to the science, not the science itself.

So what does it mean to be “entitled” to an opinion?

If “Everyone’s entitled to their opinion” just means no-one has the right to stop people thinking and saying whatever they want, then the statement is true, but fairly trivial. No one can stop you saying that vaccines cause autism, no matter how many times that claim has been disproven.

But if ‘entitled to an opinion’ means ‘entitled to have your views treated as serious candidates for the truth’ then it’s pretty clearly false. And this too is a distinction that tends to get blurred.

On Monday, the ABC’s Mediawatch program took WIN-TV Wollongong to task for running a story on a measles outbreak which included comment from – you guessed it – Meryl Dorey. In a response to a viewer complaint, WIN said that the story was “accurate, fair and balanced and presented the views of the medical practitioners and of the choice groups.” But this implies an equal right to be heard on a matter in which only one of the two parties has the relevant expertise. Again, if this was about policy responses to science, this would be reasonable. But the so-called “debate” here is about the science itself, and the “choice groups” simply don’t have a claim on air time if that’s where the disagreement is supposed to lie.

Mediawatch host Jonathan Holmes was considerably more blunt: “there’s evidence, and there’s bulldust,” and it’s no part of a reporter’s job to give bulldust equal time with serious expertise.

The response from anti-vaccination voices was predictable. On the Mediawatch site, Ms. Dorey accused the ABC of “openly calling for censorship of a scientific debate.” This response confuses not having your views taken seriously with not being allowed to hold or express those views at all – or to borrow a phrase from Andrew Brown, it “confuses losing an argument with losing the right to argue.” Again, two senses of “entitlement” to an opinion are being conflated here.

So next time you hear someone declare they’re entitled to their opinion, ask them why they think that. Chances are, if nothing else, you’ll end up having a more enjoyable conversation that way.

Read more from Patrick Stokes: The ethics of bravery

Sue Tamani's insight:

Yes, must remember to compare apples with oranges, not rocks! And I just love the way Media Watch call a spade a spade!

more...
No comment yet.
Scooped by Sue Tamani
Scoop.it!

Voyager 1 is leaving the solar system, but the journey continues

Voyager 1 is leaving the solar system, but the journey continues | anti dogmanti | Scoop.it
At 18.5 billion kilometres from Earth, the Voyager 1 space probe is the most distant human-made object ever to leave our planet.And now the spacecraft, which was launched in September 1977, has discovered…...
Sue Tamani's insight:

Really awe-inspiring stuff! From the 1970s, no less!

more...
No comment yet.
Scooped by Sue Tamani
Scoop.it!

Explainer: what are fractals?

Explainer: what are fractals? | anti dogmanti | Scoop.it
Fractals are exquisite structures produced by nature, hiding in plain sight all around us.They are tricky to define precisely, though most are linked by a set of four common fractal features: infinite…...
Sue Tamani's insight:

Here is the rest of the article -

Fractals can be found everywhere in the world around you. 

Fractals are exquisite structures produced by nature, hiding in plain sight all around us.

They are tricky to define precisely, though most are linked by a set of four common fractal features: infinite intricacy, zoom symmetry, complexity from simplicity and fractional dimensions – all of which will be explained below.

The next fern you encounter will provide a great illustration of these features if you pause for a closer look. First, notice that the shape of the fern is intricately detailed. Remarkably, you can see that the leaves are shaped like little copies of the branches.

In fact, the entire fern is mostly built up from the same basic shape repeated over and over again at ever smaller scales. Most astonishing of all, fractal mathematics reveals that this humble fern leaf is neither a one- nor a two-dimensional shape, but hovers somewhere in-between.

 

A fern displaying its fractal features. The same shape is repeated in the branches, the fronds and the leaves – and even the veins inside each leaf. Wikimedia Commons

 

Exactly what shape does this fern have?

The classical Euclidean geometry taught in high-school leaves us at a loss to answer this simple question. Though cylinders and rectangles may be great for modelling the shapes of technology, there are precious few regular shapes to be found in the natural world.

 

The International Space Station, an engineering wonder whose shape can be modelled by classical Euclidean geometry. Such regular shapes are extremely rare in nature. Wikimedia Commons

 

How can we describe a fern as a precise mathematical shape? How can we build a mathematical model of this wonderful object? Enter a completely new world of beautiful shapes: a branch of mathematics known as fractal geometry.

1. Infinite Intricacy

Many patterns of nature are so irregular and fragmented that, compared with Euclid … Nature exhibits not simply a higher degree but an altogether different level of complexity. – Benoît Mandelbröt, The Fractal Geometry of Nature

In 1861, the discovery of the world’s first fractal sent shockwaves through the mathematical community.

If you pick up a pen and doodle a zig-zag, you should end up with a number of sharp corners connected by smooth lines. To show it could be done, the German mathematician Karl Weierstrass constructed a zig-zag that was so jagged, it was nothing but corners – the ultimate mathematical staccato.

No matter how many times the shape was magnified, any glimmer of a smooth line would invariably dissolve into a never-ending cascade of corners, packed ever-more tightly together. Weierstrass' shape had irregular details at every possible scale – the first key feature of a fractal shape.

 

The first fractal shape, discovered by Weierstrass. Zooming in reveals more and more corner points – and no smoothness at all. Wikimedia Commons

 

Mathematicians labelled Weierstrass' shape as “pathological,” as it stood in defiance of the tried-and-tested tools of calculus that had been so painstakingly assembled over the previous few hundred years. It remained just a tantalising glimpse of a completely new kind of shape until modern computing power gave mathematicians the keys to the promised land.
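For readers who want to poke at Weierstrass' construction themselves, here is a minimal sketch of a truncated Weierstrass function, W(x) = Σ aⁿ cos(bⁿπx). The parameter values below are simply chosen to satisfy the classical conditions (0 < a < 1, b an odd integer, ab > 1 + 3π/2); any finite truncation, of course, only approximates the "nothing but corners" behaviour of the infinite sum.

```python
import math

# A truncated Weierstrass function: continuous everywhere, yet (in the
# infinite-sum limit) differentiable nowhere. Parameters satisfy the
# classical conditions: 0 < a < 1, b an odd integer, a*b > 1 + 3*pi/2.
def weierstrass(x: float, a: float = 0.5, b: int = 13, n_terms: int = 30) -> float:
    return sum((a ** n) * math.cos((b ** n) * math.pi * x) for n in range(n_terms))

# Sampling on finer and finer grids never reveals a smooth stretch,
# only more corners (within floating-point limits, of course).
for step in (0.01, 0.001, 0.0001):
    print(step, [round(weierstrass(i * step), 3) for i in range(5)])
```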

2. Zoom Symmetry

I found myself, in other words, constructing a geometry … of things which had no geometry. – Benoît Mandelbröt, 1924-2010

The blossoming of fractal geometry into a new branch of mathematics is largely thanks to the Polish-born mathematician Benoît Mandelbröt and his seminal 1977 essay The Fractal Geometry of Nature.

 

Mandelbröt’s talk, Fractals and the Art of Roughness at TED2010.

 

Mandelbröt worked for IBM in New York in the 1960s. With the company’s immense computing power at his disposal, he was able to explore the strange new world of fractals for the first time.

Perhaps the most famous fractal today is the Mandelbröt set (as shown below), named after its discoverer. To draw it exactly is impossible, but it can be approximated by painstakingly colouring each point in the plane separately.

 

The Mandelbröt set, a famous fractal that can only be drawn by computers. Note how smaller pieces of the set closely resemble the whole. Wikimedia Commons

 

To choose the right colour for a specific point, we apply a simple movement rule to the point over and over again and watch how long it takes for the point to “escape” off the page. Though the set is practically impossible to draw by hand, modern interactive applets (such as this one, created by British designer Paul Neave) allow you to create and explore these sets in real time.
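The “movement rule” mentioned above is the standard escape-time iteration: start from z = 0 and repeatedly apply z → z² + c for the point c being coloured. The sketch below is a minimal version of that rule, rendered as crude ASCII art rather than a proper image; the resolution and iteration limit are arbitrary choices for illustration.

```python
# Escape-time colouring: iterate z -> z*z + c and count the steps until |z| > 2.
def escape_time(c: complex, max_iter: int = 100) -> int:
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # once |z| exceeds 2 the orbit is guaranteed to escape
            return n        # this count would pick the point's colour
    return max_iter         # never escaped: treat the point as inside the set

# Crude ASCII rendering of the classic view of the set.
for row in range(21):
    y = 1.2 - row * 0.12
    print("".join(
        "#" if escape_time(complex(-2.1 + col * 0.05, y)) == 100 else " "
        for col in range(64)
    ))
```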

These computer programs allow you to spot a new kind of symmetry associated with fractals. To mathematicians, a symmetry is an action that when applied to a shape will leave it looking (more or less) the same.

For instance, we say that a square has rotational symmetry because there’s no way to tell if a square has been spun around by 90 degrees when you weren’t looking.

The infinite intricacy of fractals permits them a completely new type of symmetry that isn’t found in ordinary shapes. Incredibly, zooming in on a small region of a fractal leaves you looking at the same shape you started with. Tiny bits of the fractal can look exactly the same as the whole.

Far from being a mathematical curiosity, this zoom symmetry can be found everywhere in nature – once you know to look for it.

 

A bolt of lightning reveals its zoom symmetry for a split second – each branch resembles a small copy of the whole shape. Wikimedia Commons

 

3. Complexity from simplicity

Bottomless wonders spring from simple rules which are repeated without end. – Benoît Mandelbröt, 1924-2010

As Mandelbröt was putting fractals under the microscope, the British mathematician Michael Barnsley (currently of the Australian National University) was approaching the same objects from a different angle.

Though the geometry of fractal shapes is infinitely complex, a third trait of fractals is that their complexity arises from very simple core definitions. The shape of a fractal can be completely captured by a small list of mathematical mappings that describe exactly how the smaller copies are arranged to form the whole fractal.

Barnsley’s influential 1988 book Fractals Everywhere contained an algorithm, known as the Chaos Game, that allowed computers to quickly generate any fractal shape from its known mappings.

The Chaos Game took a starting point in space and tracked its motion as it hopped around. Each hop was determined by selecting one of the mappings at random.

Remarkably, no matter the starting point and the order in which the mappings were traversed, the point would quickly be sucked onto a “strange attractor” – the fractal shape – and once there, it would dance around on it forever.

These fractal attractors lie at the heart of Chaos Theory. Since the behaviour of a chaotic system also dances around a fractal attractor, the infinite intricacy of fractal shapes means the slightest nudge to the system can send it to an entirely different part of the attractor.

Crucially, Barnsley found a way to take any desired shape and calculate its list of fractal mappings. Since the complex shape could be completely reconstructed from the simple maps, Barnsley’s algorithms were instrumental in the new field of image compression – allowing the original edition of Microsoft Encarta to pack tens of thousands of images onto a single CD.

 

The Barnsley Fern. No, it’s not a real fern – it’s a mathematical image generated by playing the Chaos Game with four particular maps. Using fractal geometry, complex natural shapes can be encoded with simple mathematical rules. Michael Rose
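To make the "four particular maps" in the caption concrete, here is a minimal Chaos Game sketch using the affine coefficients commonly quoted for the Barnsley fern; treat the exact numbers, probabilities and point count as illustrative rather than canonical.

```python
import random

# Each map is (a, b, c, d, e, f, p): x' = a*x + b*y + e, y' = c*x + d*y + f,
# chosen with probability p. These are the commonly quoted fern coefficients.
MAPS = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),   # the stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),   # ever-smaller copies of the body
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),   # one of the two lowest leaflets
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),   # the other lowest leaflet
]

def chaos_game(n_points: int = 50_000):
    """Hop a single point around; it is quickly drawn onto the fern attractor."""
    x, y, points = 0.0, 0.0, []
    weights = [m[6] for m in MAPS]
    for _ in range(n_points):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = chaos_game()
print(len(pts), "points generated; plot them (e.g. with matplotlib) to see the fern")
```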

 

4. Fractional dimensions

Nature has played a joke on the mathematicians. The 19th-Century mathematicians may have been lacking in imagination, but Nature was not. – F J Dyson, as quoted by Benoît Mandelbröt, The Fractal Geometry of Nature

The last and most striking feature of fractals is that they are not one-, two- or three-dimensional, but somewhere in-between. Nature seems perfectly happy to use fractional dimensions, so we should be too. In order to do so, we must first clarify what we mean by “dimension”.

The idea of “dimension” has many different (but consistent) mathematical definitions. Intuitively, we can think of a shape’s dimension as a measure of how rough the shape is, or a score that reflects how well the shape fills up its surrounding space.

These intuitive ideas can be made mathematically precise. To illustrate a fractional dimension, think about a piece of paper, which is (practically) 2-dimensional. A solid sphere is 3-dimensional and fills up more space than the piece of paper.

Now crumple the paper into a ball. You now have a fractal-like shape that fills up more space than the paper, but not as much space as the solid sphere. It scores approximately 2.5 for its dimension.

Similarly, your lungs are about 2.97 dimensional – their fractal geometry allows them to pack lots of surface area (a few tennis courts) into a small volume (a few tennis balls). Packing such a huge surface area into your body provides you with the ability to extract enough oxygen to keep you alive.
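The "score" described above can be estimated numerically by box counting: cover the shape with boxes of shrinking size and watch how fast the number of occupied boxes grows. The sketch below is a minimal version of that idea, sanity-checked on a shape whose dimension is known exactly (the Sierpinski triangle, dimension log 3 / log 2 ≈ 1.585) rather than on anything as messy as a lung; the box sizes and point counts are arbitrary illustrative choices.

```python
import math, random

# Box-counting dimension: the slope of log N(eps) against log(1/eps),
# where N(eps) is how many boxes of side eps the shape touches.
def box_count(points, eps):
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

def box_dimension(points, sizes=(0.1, 0.05, 0.025, 0.0125)):
    xy = [(math.log(1 / eps), math.log(box_count(points, eps))) for eps in sizes]
    mx = sum(x for x, _ in xy) / len(xy)
    my = sum(y for _, y in xy) / len(xy)
    return sum((x - mx) * (y - my) for x, y in xy) / sum((x - mx) ** 2 for x, _ in xy)

# Sanity check on the Sierpinski triangle (generated by its own chaos game):
# the estimate should land near log 3 / log 2, roughly 1.585.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y, pts = 0.0, 0.0, []
for _ in range(200_000):
    cx, cy = random.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2
    pts.append((x, y))

print(round(box_dimension(pts), 2))   # typically prints something close to 1.6
```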

 

The rings of Saturn arranged in a fractal structure, as seen by the Cassini spacecraft. (Earth is the small dot at the upper-left of the rings.) Wikimedia Commons

 

Fractals can be found everywhere in the world around you, from a humble fern to the structure of the universe on the largest of scales.

Even certain parts of your anatomy are fractal, including your brain. If you are mindful of fractals, you will be struck by the sheer variety of places you can find them as you go about your daily routine – from clouds, plants and the landscape to church windows and laboratories …

Fractal mathematics not only allows us to begin modelling the shapes of nature, it can also reawaken our childlike wonder at the world around us.


With thanks to Jon Borwein.

more...
No comment yet.