Knowmads, Infocology of the future
Exploring the possible, the probable, the plausible
Curated by Wildcat2030

If the face fits: science of attraction is based on personal experience – study

If your partner has a face that could curdle milk, you only have yourself to blame. Scientists have found that the faces we fancy are shaped more by our personal experiences than genetics or other influences.

Their study into facial attraction showed that when it came to rating people as hot or not, even identical twins who grew up together disagreed. In fact, genetics turned out to explain only a fifth of the variation in people’s tastes, meaning very little was inherited.

The greatest influence on people’s preferences was their own life experiences - a mass of factors that could include the friends they make, the odd chance encounter, and even the face of their first love.

“If you think about your first romantic relationship, that person’s face, or someone who looks like them, might be attractive to you for years to come,” said Laura Germine, a psychologist who co-led the study at Massachusetts General Hospital in Boston.

“On the one hand, it’s common sense that our individual experiences will be important for who we find attractive, but on the other hand, we know that people’s ability to recognise faces is almost entirely down to differences in genes,” she said.

Some aspects of beauty are widely agreed on. For example, most people find symmetrical faces more attractive than wonkier ones. Facial symmetry is thought to reflect healthy development, and finding it attractive might be written in our genes.


Humanity+ » THE END OF THE BEGINNING
The process of creating this book began by collecting formal essays, some of which are quite general and abstract, others of which focus on particular technological innovations. As we read the essays, we found that we had questions for the authors, some of which led to very interesting email exchanges. We decided to edit some of these email discussions into Question and Answer Dialogues. These dialogues follow the chapters, which are organized into the following sections.

Part One: Where are we going? When will we get there?

Following the introduction you are now reading, the first section of the book presents three chapters giving broad (speculative, yet rationally considered) overviews of the potential developments during the next century.

Chapter One: Predicting the Age of Post-Human Intelligences by Ted Goertzel and Ben Goertzel.

Scientific futurism has had some significant successes as well as some well-known bloopers. In this chapter, five traditions are examined for insight into the coming of the age of post-human intelligence: (1) environmental futurism, (2) Kondratiev long-wave analysis, (3) generational cycle analysis, (4) geopolitical futurism and (5) the study of technological revolutions. Three of these traditions offer similar predictions, leading us to anticipate a period of intense technological innovation in the 2040s and another in the 2100s. If these theories are correct, Artificial General Intelligence and The Singularity are likely to come in one of these periods, depending on the success of engineering models currently being completed and on the availability of funding to implement them.

The chapter is followed by a dialogue between Weaver (David Weinbaum), Ted Goertzel, Ben Goertzel and Viktoras Veitas.

Chapter Two: A Tale of Two Transitions by Robin Hanson

This chapter compares and contrasts two quite different scenarios for a future transition to a world dominated by machine intelligence. One scenario is based on continuing on our current path of accumulating better software, while the other scenario is based on the future arrival of an ability to fully emulate human brains. The first scenario is gradual and anticipated. In it, society has plenty of warnings about forthcoming changes, and time to adapt to those changes. The second scenario, in contrast, is more sudden, and potentially disruptive.

Chapter Three: Longer Lives on the Brink of Global Population Contraction: A Report from 2040 by Max More.

Written from the perspective of an analyst in 2040, this essay explains why the world’s population is shrinking due to birth rates declining more than many experts had predicted. The population would have shrunk even more if the average life span had not increased. Lower birth rates have meant that less expenditure is needed for education, and a declining population places less stress on the environment. But economic growth is slower with a declining population, especially if the proportion of retired people increases. Fortunately, the health of elderly people has improved, and older people continue to be economically active later in life. Life extension research is now valued as one means of slowing the rate of population decline.

The Martian got me cheering, but why go to Mars?

The penultimate object in the spectacular Cosmonauts exhibition, just opened at the Science Museum in London, is a spacesuit for a mission to Mars. It is lightweight, almost fragile and the pink-brown colour of the Martian sky. It suggests that after the fraught Cold War dynamics of the old space race, the inevitable next destination is the red planet.

Folks at NASA will be cheering Ridley Scott’s new film, The Martian, because its central message is that humanity will take this second giant leap, and, individually and collectively, we have the ingenuity to overcome the immense risks entailed. Inspirationally, it answers the question of how the voyage will be made, but it also raises the deeper question of why.

The story (trying to avoid major spoilers) is simple: astronaut Mark Watney (Matt Damon) is left behind when a sudden storm forces the abandonment of a scientific expedition to Mars. Unexpectedly alive, Watney has to improvise the means of survival until a rescue mission arrives.

The Martian is the latest, and I think brightest, of a meteor shower of recent space films, including the Oscar-winning Gravity, the black hole time travel yarn, Interstellar, the fun Guardians of the Galaxy and lesser blockbusters, such as Oblivion (with Tom Cruise) and Scott’s own Prometheus. We are in the midst of another wave of sci-fi movie enthusiasm, comparable to the cycle from 2001: a Space Odyssey (1968) to Star Wars (1977).

But, especially in terms of science and technology, the films are very different. The Martian is no fantasy monster flick or comedy caper. It takes the technological near future, and science’s contribution to it, to its heart. The science in its closest sibling, Gravity, was in fact positively Aristotelian: Sandra Bullock’s stranded astronaut is in peril because of the alien, dangerous, circular motion of space debris. She is then challenged by each of the Aristotelian elements – water (ice), fire, air (lack of) – before falling to earth. In contrast, the science in The Martian is modern: problem-solving, interdisciplinary creativity, a blend of individual insight and teamwork.

A 21st-century higher education: training for jobs of the future

Only the brave or foolhardy would claim knowledge about the shape of jobs for the next decade, let alone the rest of the 21st century. We know that the end of local car manufacturing alone will involve the loss of up to 200,000 jobs directly or indirectly, and there will be no large-scale manufacturing to replace them.

We also cannot assume that employment in health and human services will continue to expand in their place. Globally, millions of dollars are being invested in robotic monitors, nurses and companions for the elderly. The driverless car is almost with us, meaning that even Uber’s moment in the sun may be brief.

So if we’re not sure what the jobs of the future will look like, what kind of tertiary education can prepare students for the world of work? Various forces will be at play including economic (such as continued globalisation and intensification of competition), social (such as the ageing of Australia’s population), and technological (automation, digitalisation). There are also powerful environmental constraints.
What kind of education can prepare us for the future?

Hygge: A heart-warming lesson from Denmark - BBC News

A UK college has started teaching students the Danish concept of hygge - said to make homes nicer and people happier. But what exactly is it and is it exportable?

Sitting by the fire on a cold night, wearing a woolly jumper, while drinking mulled wine and stroking a dog - probably surrounded by candles. That's definitely "hygge".

Eating home-made cinnamon pastries. Watching TV under a duvet. Tea served in a china set. Family get-togethers at Christmas. They're all hygge too.

The Danish word, pronounced "hoo-ga", is usually translated into English as "cosiness". But it's much more than that, say its aficionados - an entire attitude to life that helps Denmark to vie with Switzerland and Iceland to be the world's happiest country.

Morley College, in central London, is teaching students how to achieve hygge as part of its Danish language course. "We have long, cold winters in Denmark," says lecturer Susanne Nilsson. "That influences things. Hygge doesn't have to be a winter-only thing, but the weather isn't that good for much of the year."

With as little as four sunshine hours a day in the depths of winter, and average temperatures hovering around 0C, people spend more time indoors as a result, says Nilsson, meaning there's greater focus on home entertaining.

"Hygge could be families and friends getting together for a meal, with the lighting dimmed, or it could be time spent on your own reading a good book," she says. "It works best when there's not too large an empty space around the person or people." The idea is to relax and feel as at-home as possible, forgetting life's worries.

The recent growth in Scandinavian-themed restaurants, cafes and bars in the UK is helping to export hygge, she adds, with their intimate settings, lack of uniformity in decor and concentration on comforting food. Most customers won't have heard of the term, but they might get a sense of it.

In the US, the wallpaper and fabric firm Hygge West explicitly aims to channel the concept through its cheery designs, as does a Los Angeles bakery, called Hygge, which sells traditional Danish pastries and treats.

"Designless" brain-like chips created through artificial evolution

Scientists at the University of Twente in the Netherlands have devised a new type of electronic chip that takes after the human brain. Their device is highly power-conscious, massively parallel, and can manipulate data in arbitrary ways – even though it doesn't need to be explicitly designed to perform any task. The advance could pave the way for computers that think more like we do.
(When the) chips are down

Electronic chips as they are currently designed come with plenty of drawbacks. Even the simplest operations, like adding or subtracting, need large numbers of transistors arranged in a very specific, well thought-out pattern. These transistors quickly add up and drain power even when idle (unless specific measures are taken). Moreover, most circuits can't effectively process information in parallel, leading to a further waste of time and energy.
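To get a feel for why even addition needs so many coordinated transistors, here is an illustrative ripple-carry adder simulated at the logic-gate level in Python. This is a teaching sketch, not a circuit description: in silicon, each single-bit full adder below would itself typically require a couple of dozen transistors.

```python
def full_adder(a, b, cin):
    """One-bit addition from basic gates: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, width=8):
    """Add two integers bit by bit, chaining the carry (overflow wraps)."""
    carry, total = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total
```

Each full adder must wait for the carry from the bit below it, which is one reason simple serial designs waste time as well as power.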

All of these factors make it especially hard for today's computers to perform many crucial tasks quickly and on little power – particularly the kinds of tasks that the human brain can tackle with ease, like recognizing a visual pattern or understanding human language. In fact, when it comes to simulating brain-like functionality, many researchers have opted to abandon traditional computer architectures altogether.

Alternative chip designs that try to mimic the prowess and efficiency of the brain usually do so by either resorting to massive parallelism or by using neuron-like structures as their basic building blocks. But these approaches retain one big drawback: they still rely on fallible human minds to design their hardware and software.

Zuckerberg to the UN: The Internet Belongs to Everyone

A reputation is a hard thing to shake. Like when a UN moderator introduces Mark Zuckerberg by commenting that he is almost unrecognizable without his hoodie. Zuckerberg hasn’t worn a hoodie in nearly three years. In fact, he was looking remarkably comfortable in his suit at the UN last weekend as he joined a group of speakers from global NGOs.

Zuckerberg had come to the United Nations to advocate for universal Internet access. Speaking to a body of heads of state and UN delegates, he made an impassioned plea that the Internet is a key enabler of human rights. “Ensuring access is essential to achieving global justice and opportunity,” he said.

He made the speech on the day he partnered with Bono, the rockstar founder of the advocacy group ONE, to publish a connectivity declaration, which calls on global leaders to prioritize Internet access. The pair penned an op-ed for the New York Times in which they announced their intentions to start a global movement. Dozens of people have signed it already, including Richard Branson and Bill and Melinda Gates.

Zuckerberg wants the world to understand that Internet access should be a basic human right, like access to healthcare or water. Secondarily, he wants people to understand that Facebook’s role in this effort is driven primarily by his deep social conviction that such connectivity is the best way to alleviate poverty. “Research shows that when you give people access to the Internet, one in ten people is lifted from poverty,” he said.

Google search chief Amit Singhal looks to the future - BBC News

For more than a decade, boxes have helped Google dominate search - both the rectangle on its home page and the web browser address bars that send users to its results.

But smartphones now account for more internet use than PCs, and that changes things.

"When you have a 5in-diagonally-across screen - it's not designed to type," acknowledges Google's search chief Amit Singhal.

"So, on mobile you have to fundamentally give users new ways to interact."

To address the problem, Mr Singhal's team has developed Now on Tap.

The facility - which is being released as part of the latest Android mobile operating system - lets users get related information about whatever is on their handset's screen with a single button press.

As an example, Mr Singhal describes a text chat with his wife, in which he suggests a restaurant.

He explains his spouse could bring up driving directions and the place's opening hours simply by holding down the home button when the restaurant's name was displayed.

The Call for a Storytelling Computer

If you were to stand on a street corner and ask 100 people to define Artificial Intelligence, odds are, you’d get 100 different answers. And, according to AI visionary and educational innovator Dr. Roger Schank, it’s likely every one of those answers would be wrong.

“Early attempts at AI were a lot about making computers smart by teaching them to play chess. It was a horrific mistake to define it that way,” he said. “What we got was artificial intelligence which is so confusing, people in the field are still confused as to what it might be about.”

Instead of making a computer look intelligent by teaching it to play chess, think 10,000 moves ahead and do things people could never do, Schank believes those computers should have been taught to play chess like a Grand Master chess player would play.

“I always saw AI as a field that could tell us more about people by getting us to figure out how to imitate people by doing the kinds of things people do,” Schank said. “Why does that matter? It matters because people talk to each other. That’s a sub-section of AI called ‘Natural Language Processing’ and it’s phenomenally hard.”

To illustrate his point, Schank cited the example of the typical intelligent, talking robot in the movies and pointed out that, while that robot may talk, it rarely asks questions.

“Does it have a point of view that’s new? Can it make an interesting argument with you? Does it have something it can teach you or you can teach it? These are the right questions for AI,” he explained. “When a computer can stop and think, I’ll be very impressed and say, ‘Wow! We have AI!’”

For Schank, another concern is that many people see big data as representative of the growth of artificial intelligence. While he certainly appreciates the utility of big data, it’s not truly AI in his opinion.

Study adds to evidence that viruses are alive

A new analysis supports the hypothesis that viruses are living entities that share a long evolutionary history with cells, researchers report. The study offers the first reliable method for tracing viral evolution back to a time when neither viruses nor cells existed in the forms recognized today, the researchers say.

The new findings appear in the journal Science Advances.

Until now, viruses have been difficult to classify, said University of Illinois crop sciences and Carl R. Woese Institute for Genomic Biology professor Gustavo Caetano-Anollés, who led the new analysis with graduate student Arshan Nasir. In its latest report, the International Committee on the Taxonomy of Viruses recognized seven orders of viruses, based on their shapes and sizes, genetic structure and means of reproducing.

"Under this classification, viral families belonging to the same order have likely diverged from a common ancestral virus," the authors wrote. "However, only 26 (of 104) viral families have been assigned to an order, and the evolutionary relationships of most of them remain unclear."

Part of the confusion stems from the abundance and diversity of viruses. Fewer than 4,900 viruses have been identified and sequenced so far, even though scientists estimate there are more than a million viral species. Many viruses are tiny - significantly smaller than bacteria or other microbes - and contain only a handful of genes. Others, like the recently discovered mimiviruses, are huge, with genomes bigger than those of some bacteria.

The new study focused on the vast repertoire of protein structures, called "folds," that are encoded in the genomes of all cells and viruses. Folds are the structural building blocks of proteins, giving them their complex, three-dimensional shapes. By comparing fold structures across different branches of the tree of life, researchers can reconstruct the evolutionary histories of the folds and of the organisms whose genomes code for them.
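As a toy illustration of the underlying idea (not the authors' actual method, which reconstructs full phylogenetic trees from fold census data), genomes can be compared by the overlap of their fold repertoires. The organisms and fold names below are invented for the example:

```python
# Hypothetical presence/absence of protein folds in three genomes
folds = {
    "bacterium": {"f1", "f2", "f3", "f5"},
    "archaeon":  {"f1", "f2", "f4", "f5"},
    "virus":     {"f1", "f5"},
}

def jaccard_distance(a, b):
    """1 - |intersection| / |union|: 0 for identical repertoires, 1 for disjoint."""
    return 1 - len(a & b) / len(a | b)

# Pairwise distance for every unordered pair of genomes
pairs = {(x, y): jaccard_distance(folds[x], folds[y])
         for x in folds for y in folds if x < y}
```

A distance matrix like this is the kind of raw material from which evolutionary relationships can then be inferred.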

New ‘stealth dark matter’ theory may explain mystery of the universe’s missing mass | KurzweilAI

A new theory that may explain why dark matter has evaded direct detection in Earth-based experiments has been developed by a team of Lawrence Livermore National Laboratory (LLNL) particle physicists known as the Lattice Strong Dynamics Collaboration.

The group has combined theoretical and computational physics techniques and used the Laboratory’s massively parallel 2-petaflop Vulcan supercomputer to devise a new model of dark matter. The model identifies today’s dark matter as naturally “stealthy.” But in the extremely high-temperature plasma conditions that pervaded the early universe, it would have been easy to see dark matter via interactions with ordinary matter, the model shows.

A balancing act in the early universe

“These interactions in the early universe are important because ordinary and dark matter abundances today are strikingly similar in size, suggesting this occurred because of a balancing act performed between the two before the universe cooled,” said Pavlos Vranas of LLNL, one of the authors of a paper in an upcoming edition of the journal Physical Review Letters.

Dark matter makes up 83 percent of all matter in the universe and does not interact directly with electromagnetic or strong and weak nuclear forces. Light does not bounce off of it, and ordinary matter goes through it with only the feeblest of interactions. It is essentially invisible, yet its interactions with gravity produce striking effects on the movement of galaxies and galactic clusters, leaving little doubt of its existence.

The key to stealth dark matter’s split personality is its compositeness and the miracle of confinement. Like quarks in a neutron, at high temperatures these electrically charged constituents interact with nearly everything. But at lower temperatures, they bind together to form an electrically neutral composite particle. Unlike a neutron, which is bound by the ordinary strong interaction of quantum chromodynamics (QCD), the stealthy neutron would have to be bound by a new and yet-unobserved strong interaction, a dark form of QCD.

Orange is the new black gold: how peel could replace crude oil in plastics

Orange is the new black gold: how peel could replace crude oil in plastics | Knowmads, Infocology of the future |
Orange juice, both delicious and nutritious, is enjoyed by millions of people across the world every day. However, new research indicates that it could have potential far beyond the breakfast table. The chemicals in orange peel could be used as new building blocks in products ranging from plastics to paracetamol – helping to break our reliance on crude oil.

Today’s society is totally reliant on the chemicals and materials that are obtained from our diminishing supply of fossil fuels. As such, there is an increasing global focus on the development of renewable chemical feedstocks from a variety of sustainable sources such as sugarcane and fatty acids in the production of biofuels. And the chemically rich essential oils contained within waste citrus peels are another such source that is being investigated with real zest.

This is promising, as the orange juice industry uses highly inefficient and wasteful juicing processes, with almost 50% of the fruit thrown away. This gives a real opportunity, then, to develop a sustainable supply of chemicals from the diverse and plentiful molecules locked within the peels.
Limonene – a versatile building block

Recent figures estimate around 20m tonnes of citrus is wasted each year. As some 95% of the oil extracted from these waste rinds is made up of limonene, a colourless liquid hydrocarbon with the chemical formula C10H16, this waste could yield around 125,000 tonnes of limonene a year.
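The arithmetic behind these figures can be made explicit. Note that the total peel-oil tonnage below is implied by the article's numbers rather than stated in it:

```python
citrus_waste_tonnes = 20_000_000   # annual wasted citrus (article figure)
limonene_share_of_oil = 0.95       # 95% of extracted peel oil is limonene
limonene_yield_tonnes = 125_000    # projected annual limonene yield

# Implied total peel-oil recovery, and oil as a share of the waste mass
oil_tonnes = limonene_yield_tonnes / limonene_share_of_oil
oil_fraction = oil_tonnes / citrus_waste_tonnes
print(f"{oil_tonnes:,.0f} tonnes of oil, about {oil_fraction:.2%} of the waste")
```

In other words, the projection assumes recovering oil equal to well under 1% of the wasted fruit's mass, which is why the extraction efficiency discussed below matters so much.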

Current extraction methods rely on distillation, passing steam through the waste solids and simply collecting the resulting oil. But recently, researchers at the University of York began investigating microwave extraction techniques as a greener alternative. The team simply placed orange peel and an organic solvent into a microwave and heated for 30 minutes. Within the peel, the water molecules start to boil, rupturing the cells and allowing the limonene to leach out. The results are favourable; the process is much faster, less energy intensive, produces a higher quality of limonene and in a yield twice as good as conventional methods.

Is Personalized Discovery A Feature, Category Or New Paradigm by @ilparone

What do you do when you don’t know what you want to read, watch, listen to or do next? What do you do if you don’t know what to search for? Or can’t describe clearly what you’d be interested in next?

There are so many great choices available in the digital realm, and new stuff is pouring in every second. Many times we feel helpless in front of such an abundance of endless possibilities.

Nevertheless, so far no one has created a solution that would automatically bring all the interesting options to your fingertips without you asking for it. A universal personalized Discovery solution doesn’t exist yet. Why?
Personalized Discovery Today

There have been various attempts and approaches to crack personalized Discovery — at least partially.

StumbleUpon has been around for a while. The app provides content based on selected categories and other “Stumblers” you follow. Flipboard’s personalized magazine has transformed into a social news platform. You personalize your own experience by curating content sources and following people. Pinterest, too, has a follow model for people, their content and topics. Its Guided Search with combinable keywords works as an additional interface alongside the curated feed.

Pocket recently released its Recommended section that provides content based on the things that you saved for later. Google Now delivers useful information based on your previous actions and historical data. And Facebook is just entering the game with its M that supposedly recommends actions and content.
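A bare-bones sketch of the save-based approach, in the spirit of Pocket's Recommended section though its real system is certainly far more sophisticated: score each catalogue item by cosine similarity between its topic tags and a profile accumulated from saved items. All item names and tags here are invented:

```python
from collections import Counter
import math

def cosine(u, v):
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Invented catalogue: each item described by topic-tag counts
catalog = {
    "space-doc":  Counter(space=3, science=2),
    "cooking":    Counter(food=4),
    "mars-novel": Counter(space=2, fiction=3),
}
# Invented profile, accumulated from items the user saved earlier
profile = Counter(space=5, science=1)

# Items the user never asked for, ranked by affinity to past saves
ranked = sorted(catalog, key=lambda k: cosine(profile, catalog[k]), reverse=True)
```

The hard part of universal discovery is not this scoring step but building a single profile that spans reading, video, music and activities, which no one service currently sees.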

A New Map Traces the Limits of Computation | Quanta Magazine

At first glance, the big news coming out of this summer’s conference on the theory of computing appeared to be something of a letdown. For more than 40 years, researchers had been trying to find a better way to compare two arbitrary strings of characters, such as the long strings of chemical letters within DNA molecules. The most widely used algorithm is slow and not all that clever: It proceeds step-by-step down the two lists, comparing values at each step. If a better method to calculate this “edit distance” could be found, researchers would be able to quickly compare full genomes or large data sets, and computer scientists would have a powerful new tool with which they could attempt to solve additional problems in the field.
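The "slow and not all that clever" method is the classic dynamic-programming edit-distance algorithm, which compares every prefix of one string against every prefix of the other, taking time proportional to the product of the two lengths. A minimal Python sketch:

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum insertions, deletions and substitutions turning a into b.

    Classic dynamic programming: O(len(a) * len(b)) time, one row of memory.
    """
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # deleting i chars of a reaches ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute, or free match
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))   # -> 3
```

On two full genomes of billions of letters, this quadratic cost is exactly what makes the algorithm impractical, which is why a faster method was so sought after.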

Yet in a paper presented at the ACM Symposium on Theory of Computing, two researchers from the Massachusetts Institute of Technology put forth a mathematical proof that the current best algorithm was “optimal” — in other words, that finding a more efficient way to compute edit distance was mathematically impossible. The Boston Globe celebrated the hometown researchers’ achievement with a headline that read “For 40 Years, Computer Scientists Looked for a Solution That Doesn’t Exist.”

But researchers aren’t quite ready to record the time of death. One significant loophole remains. The impossibility result is only true if another, famously unproven statement called the strong exponential time hypothesis (SETH) is also true. Most computational complexity researchers assume that this is the case — including Piotr Indyk and Artūrs Bačkurs of MIT, who published the edit-distance finding — but SETH’s validity is still an open question. This makes the article about the edit-distance problem seem like a mathematical version of the legendary report of Mark Twain’s death: greatly exaggerated.

Industrial farming is one of the worst crimes in history

Animals are the main victims of history, and the treatment of domesticated animals in industrial farms is perhaps the worst crime in history. The march of human progress is strewn with dead animals. Even tens of thousands of years ago, our stone age ancestors were already responsible for a series of ecological disasters. When the first humans reached Australia about 45,000 years ago, they quickly drove to extinction 90% of its large animals. This was the first significant impact that Homo sapiens had on the planet’s ecosystem. It was not the last.

About 15,000 years ago, humans colonised America, wiping out in the process about 75% of its large mammals. Numerous other species disappeared from Africa, from Eurasia and from the myriad islands around their coasts. The archaeological record of country after country tells the same sad story. The tragedy opens with a scene showing a rich and varied population of large animals, without any trace of Homo sapiens. In scene two, humans appear, evidenced by a fossilised bone, a spear point, or perhaps a campfire. Scene three quickly follows, in which men and women occupy centre-stage and most large animals, along with many smaller ones, have gone. Altogether, sapiens drove to extinction about 50% of all the large terrestrial mammals of the planet before they planted the first wheat field, shaped the first metal tool, wrote the first text or struck the first coin.

UN battle looms over finance as nations submit climate plans - BBC News

Divisions over money between rich and poor countries re-emerged as nations submitted their plans for tackling climate change to the UN.

India, the last big emitter to publish its contribution, said it would need $2.5 trillion to meet its targets.

The Philippines said that without adequate climate compensation, its cuts in emissions wouldn't happen.

The UN says the plans increase the likelihood of a strong global treaty.

148 countries, out of a total of 196, have met a UN deadline for submitting a plan, termed an Intended Nationally Determined Contribution (INDC).

These INDCs cover close to 90% of global emissions of carbon dioxide. The commitments will form the centrepiece of a new global agreement on climate change that nations hope to agree in Paris in December.

Independent analysts at the Climate Action Tracker said that the plans, when added up, meant the world was on track for temperature rises of 2.7 degrees C above pre-industrial levels.

This is above the 2 degree target generally accepted as the threshold for dangerous climate change. But it is a significant improvement on a previous assessment of 3.1 degrees, made when fewer plans had been submitted.

India's contribution, which promised to reduce the carbon intensity of its emissions but didn't commit to peaking its CO2, drew praise from around the world.

"It's highly significant that India is joining the ranks of so many other developed and developing countries in putting serious commitments on the table ahead of the Paris climate talks," said former UK environment minister Richard Benyon MP.

Peeple app for rating human beings causes uproar - BBC News

Peeple app for rating human beings causes uproar - BBC News | Knowmads, Infocology of the future |
A new app that promises to let users review individuals has caused controversy before it has even launched.

Peeple will allow members to give star ratings to people they know via the app, much as restaurants and hotels are rated on sites such as Yelp.

The app has caused uproar online, with web users describing it as "creepy" and "terrifying".

Peeple's founders say they will pre-screen posts for abuse.

However, users will not be able to delete comments made about them. Nor will they be able to remove themselves from the site once on it.

Among those raising concern was University of East Anglia law lecturer and privacy advocate Paul Bernal.

"The bottom line is this is extremely creepy," he told the BBC. "It is an ideal trolling tool."

Mr Bernal added that he was sceptical that the app could ensure users knew the person they were rating.

"How are you determining whether somebody knows somebody?" he asked.

"If you're using Facebook friends, do people really know all their Facebook friends? Absolutely not."
Legal 'headaches'

There may be legal difficulties too, according to Steven Heffer, a partner at the law firm Collyer-Bristow.

"I can only see a lot of headaches," he told the BBC. "It looks to me like potentially a recipe for a legal disaster."

Mr Heffer said the app was different from existing social media in that it specifically encouraged users to assess others and that negative comments on individuals would be difficult to police.

"They can't be judge and jury, can they?" he said.

"They might have some kind of safety net, but it's not going to stop people being defamed and suffering damage as well."

The website for Peeple says that negative reviews will be stalled for 48 hours before being published, so that they can be checked by the person being rated.

However, if they are not able to resolve the comment with the person making it within that time, it will go live anyway.
No comment yet.
Scooped by Wildcat2030!

Yes, You’re Irrational, and Yes, That’s OK - Issue 21: Information - Nautilus

Yes, You’re Irrational, and Yes, That’s OK - Issue 21: Information - Nautilus | Knowmads, Infocology of the future |
Imagine that (for some reason involving cultural tradition, family pressure, or a shotgun) you suddenly have to get married. Fortunately, there are two candidates. One is charming and a lion in bed but an idiot about money. The other has a reliable income and fantastic financial sense but is, on the other fronts, kind of meh. Which would you choose?

Sound like six of one, half-dozen of the other? Many would say so. But that can change when a third person is added to the mix. Suppose candidate number three has a meager income and isn’t as financially astute as choice number two. For many people, what was once a hard choice becomes easy: They’ll pick the better moneybags, forgetting about the candidate with sex appeal. On the other hand, if the third wheel is a schlumpier version of attractive number one, then it’s the sexier choice that wins in a landslide. This is known as the “decoy effect”—whoever gets an inferior competitor becomes more highly valued.

The decoy effect is just one example of people being swayed by what mainstream economists have traditionally considered irrelevant noise. After all, their community has, for a century or so, taught that the value you place on a thing arises from its intrinsic properties combined with your needs and desires. It is only recently that economics has reconciled with human psychology. The result is the booming field of behavioral economics, pioneered by Daniel Kahneman, a psychologist at Princeton University, and his longtime research partner, the late Amos Tversky, who was at Stanford University.
No comment yet.
Scooped by Wildcat2030!

The Technological Singularity | KurzweilAI

The Technological Singularity | KurzweilAI | Knowmads, Infocology of the future |
The idea that human history is approaching a “singularity” — that ordinary humans will someday be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence, or both — has moved from the realm of science fiction to serious debate. Some singularity theorists predict that if the field of artificial intelligence (AI) continues to develop at its current dizzying rate, the singularity could come about in the middle of the present century. Murray Shanahan offers an introduction to the idea of the singularity and considers the ramifications of such a potentially seismic event.

Shanahan’s aim is not to make predictions but rather to investigate a range of scenarios. Whether we believe that singularity is near or far, likely or impossible, apocalypse or utopia, the very idea raises crucial philosophical and pragmatic questions, forcing us to think seriously about what we want as a species.

Shanahan describes technological advances in AI, both biologically inspired and engineered from scratch. Once human-level AI — theoretically possible, but difficult to accomplish — has been achieved, he explains, the transition to superintelligent AI could be very rapid. Shanahan considers what the existence of superintelligent machines could mean for such matters as personhood, responsibility, rights, and identity. Some superhuman AI agents might be created to benefit humankind; some might go rogue. (Is Siri the template, or HAL?) The singularity presents both an existential threat to humanity and an existential opportunity for humanity to transcend its limitations. Shanahan makes it clear that we need to imagine both possibilities if we want to bring about the better outcome.
No comment yet.
Scooped by Wildcat2030!

Google’s Quantum Computer Just Got a Big Upgrade

Google’s Quantum Computer Just Got a Big Upgrade | Knowmads, Infocology of the future |
Google is upgrading its quantum computer. Known as the D-Wave, Google’s machine is making the leap from 512 qubits—the fundamental building block of a quantum computer—to more than 1,000 qubits. And according to the company that built the system, this leap doesn’t require a significant increase in power, something that could augur well for the progress of quantum machines.

Together with NASA and the Universities Space Research Association, or USRA, Google operates its quantum machine at the NASA Ames Research center not far from its Mountain View, California headquarters. Today, D-Wave Systems, the Canadian company that built the machine, said it has agreed to provide regular upgrades to the system—keeping it “state-of-the-art”—for the next seven years. Colin Williams, director of business development and strategic partnerships for D-Wave, calls this “the biggest deal in the company’s history.” The system is also used by defense giant Lockheed Martin, among others.

Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1,000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California have published research suggesting that the D-Wave exhibits behavior beyond classical physics.
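To see why adding qubits is described as an exponential improvement, note that an n-qubit system is described by 2**n complex amplitudes, so each extra qubit doubles the state space. A rough sketch of that arithmetic (illustrative only; the D-Wave is a quantum annealer, and raw qubit count is not a direct measure of its computational power):

```python
# Each qubit doubles the number of basis states a quantum system can
# represent, so state-space size grows as 2**n with qubit count n.
def state_space_size(n_qubits: int) -> int:
    """Number of basis states (complex amplitudes) for an n-qubit system."""
    return 2 ** n_qubits

# Going from 512 to 1000 qubits multiplies the state space by 2**488.
growth = state_space_size(1000) // state_space_size(512)
print(f"State-space growth factor: 2**488 = {growth}")
```

This doubling is why a jump of a few hundred qubits is a qualitatively different machine, not a few-hundred-fold improvement.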
No comment yet.
Scooped by Wildcat2030!

In future, the internet could come through your lightbulb

In future, the internet could come through your lightbulb | Knowmads, Infocology of the future |
The tungsten lightbulb has served well over the century or so since it was introduced, but its days are numbered now with the arrival of LED lighting, which consumes a tenth of the power of incandescent bulbs and lasts 30 times longer. Potential uses of LEDs are not limited to illumination: smart lighting products are emerging that can offer various additional features, including linking your laptop or smartphone to the internet. Move over Wi-Fi, Li-Fi is here.

Wireless communication with visible light is, in fact, not a new idea. Everyone knows about using smoke signals on a desert island to try to capture attention. Perhaps less well known is that in the time of Napoleon much of Europe was covered with optical telegraphs, otherwise known as the semaphore.

[Image: The photophone, with speech carried over reflected light. Illustration: Amédée Guillemin]

Alexander Graham Bell, inventor of the telephone, actually regarded the photophone as his most important invention, a device that used a mirror to relay the vibrations caused by speech over a beam of light.

In the same way that interrupting (modulating) a plume of smoke can break it into parts that form an SOS message in Morse code, so visible light communications – Li-Fi – rapidly modulates the intensity of a light to encode data as binary zeros and ones. But this doesn’t mean that Li-Fi transceivers will flicker; the modulation will be too fast for the eye to see.
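The modulation scheme described above can be sketched in code. This toy example uses simple on-off keying (light on = 1, light off = 0), one basic way to encode binary data in light intensity; real Li-Fi systems use far faster and more sophisticated modulation:

```python
# Toy on-off keying (OOK) sketch: text -> stream of light intensity
# levels (1 = LED on, 0 = LED off), and back again.
def encode(message: str) -> list[int]:
    """Turn ASCII text into a bit stream, most significant bit first."""
    bits = []
    for byte in message.encode("ascii"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def decode(bits: list[int]) -> str:
    """Recover ASCII text from the received intensity stream."""
    bytes_out = [
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    ]
    return bytes(bytes_out).decode("ascii")

signal = encode("SOS")
print(decode(signal))  # -> SOS
```

At the rates Li-Fi operates, these on-off transitions happen millions of times per second, which is why the light appears steady to the eye.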
No comment yet.
Scooped by Wildcat2030!

As Privacy Fades, Your Identity Is the New Money (Op-Ed)

As Privacy Fades, Your Identity Is the New Money (Op-Ed) | Knowmads, Infocology of the future |
Rob Leslie is chief executive officer of Sedicii, which provides technology for eliminating transmission and storage of private identity data during authentication or identity verification, and reducing identity theft, impersonation and fraud. Leslie is an electronics engineer with more than 25 years of experience in information technology and business. This Op-Ed is part of a series provided by the World Economic Forum Technology Pioneers, class of 2015. Leslie contributed this article to Live Science's Expert Voices: Op-Ed & Insights.

You may have heard the phrase, "If the product is free, then you are the product." It was coined at a time in the not-too-distant past when social networks were in their infancy and we were all mesmerized by the fantastic services we could consume to keep in touch and interact with each other — all for free!

Little did we realize at the time what that bargain actually meant. The vast majority of us had no idea social networks would be monitoring and recording all our interactions as they learn everything possible about us as people, our habits, our likes and dislikes, and in some cases, our innermost, private secrets. This information, containing the essence of who each of us is, has been used to target us with advertising and other services, making the companies collecting this information global giants that earn billions of dollars in revenue every year. Personal information is extremely valuable.

So how much are you really worth?
No comment yet.
Scooped by Wildcat2030!

Don’t Worry, Artificial Intelligence Has A Long Way To Go: Baidu Scientist

Don’t Worry, Artificial Intelligence Has A Long Way To Go: Baidu Scientist | Knowmads, Infocology of the future |
Ng: Despite all the hype, artificial intelligence has a long way to go before it is truly intelligent

Just days after participants at the World Economic Forum said sweeping changes in society should be expected from intelligent computer applications, Ng acknowledged the advances but put artificial intelligence into perspective, with the emphasis on "artificial."

“Computers are getting much better at soaking up data to make predictions,” he said in a recent Fortune Magazine interview, noting that computing capacity has caught up with the proliferation of data. “Despite all the hype, I think they are much further off than some people think.”

Early-stage applications of such “intelligence” include predicting which advertisement will best elicit a response, recognizing people in pictures, and predicting the web page most relevant to your search query.

Separate analysis suggests that in many cases the guts of the “brains” behind today’s artificial intelligence are, in part, an advanced version of if-then logic: a mathematical formulation of some kind is required to generate outputs. Such “artificial intelligence” requires human definition and programming. Even today’s high-frequency trading applications and algorithmic hedge funds are driven by humans who understand the market's underpinnings and can creatively connect nonlinear dots to develop and operate the formulas.
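A caricature of the if-then decision logic the passage describes, using hypothetical ad-selection rules (this is an illustration of rule-based programming, not how any real ad system or modern machine-learning model works):

```python
# Hypothetical hand-written if-then rules for picking an advertisement.
# Every branch below was defined by a human programmer in advance, which
# is the sense in which such "intelligence" is artificial.
def pick_advert(user: dict) -> str:
    if user.get("recent_search") == "running shoes":
        return "sports-gear ad"
    if user.get("age", 0) < 25 and "music" in user.get("interests", []):
        return "concert-tickets ad"
    return "generic ad"

print(pick_advert({"recent_search": "running shoes"}))  # -> sports-gear ad
print(pick_advert({}))                                  # -> generic ad
```

Any situation the programmer did not anticipate falls through to the default branch, which is one reason rule-based systems fall short of general intelligence.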
No comment yet.
Scooped by Wildcat2030!

Why the Human Brain Project Went Wrong--and How to Fix It

Why the Human Brain Project Went Wrong--and How to Fix It | Knowmads, Infocology of the future |
For decades Henry Markram has dreamed of reverse engineering the human brain. In 1994, as a postdoctoral researcher then at the Max Planck Institute for Medical Research in Heidelberg, Germany, he became the first scientist to “patch” two living neurons simultaneously—to apply microscopic pipettes to freshly harvested rat neurons to measure the electrical signals fired between them. The work demonstrated the process by which synapses are strengthened and weakened, making it possible to study and model how the brain learns. His work landed him a position as senior scientist at the prestigious Weizmann Institute of Science in Rehovot, Israel, and by the time he was promoted to professor in 1998, he was one of the most esteemed researchers in the field.
No comment yet.
Scooped by Wildcat2030!

Genomics is about to transform the world – Dawn Field – Aeon

Genomics is about to transform the world – Dawn Field – Aeon | Knowmads, Infocology of the future |
In case you weren’t paying attention, a lot has been happening in the science of genomics over the past few years. It is, for example, now possible to read one human genome and correct all known errors. Perhaps this sounds terrifying, but genomic science has a track record of making science fiction reality. ‘Everything that’s alive we want to rewrite,’ boasted Austen Heinz, the CEO of Cambrian Genomics, last year.

It was only in 2010 that Craig Venter’s team in Maryland led us into the era of synthetic genomics when they created Synthia, the first living organism to have a computer for a mother. A simple bacterium, she has a genome just over half a million letters of DNA long, but the potential for scaling up is vast; synthetic yeast and worm projects are underway.

Two years after the ‘birth’ of Synthia, sequencing was so powerful that it was used to extract the genome of a newly discovered, 80,000-year-old human species, the Denisovans, from a pinky bone found in a frozen cave in Siberia. In 2015, the United Kingdom became the first country to legalise the creation of ‘three-parent babies’ – that is, babies with a biological mother, father and a second woman who donates a healthy mitochondrial genome, the energy producer found in all human cells.
Commensurate with their power to change biology as we know it, the new technologies are driving renewed ethical debates. Uneasiness is being expressed, not only among the general public, but also in high-profile articles and interviews by scientists. When China announced it was modifying human embryos this April, the term ‘CRISPR/Cas9’ trended on the social media site Twitter. CRISPR/Cas9, by the way, is a protein-RNA combo that defends bacteria against marauding viruses. Properly adapted, it allows scientists to edit strings of DNA inside living cells with astonishing precision. It has, for example, been used to show that HIV can be ‘snipped’ out of the human genome, and that female mosquitoes can be turned male to stop the spread of malaria (only females bite).
No comment yet.