The World Wide Web turned 28 today. But rather than celebrate, its inventor, Tim Berners-Lee, used the occasion to lay out what he sees as its greatest challenges. Specifically, Berners-Lee points to three threats: the loss of control of personal data, the spread of misinformation, and lack of transparency in political advertising.
I recently found myself facing a vending machine in a quiet corridor at the Delft University of Technology in the Netherlands. I was due to speak at a conference called ‘Reinvent Money’ but, suffering from jetlag and exhaustion, I was on a search for Coca-Cola. The vending machine had a small digital interface built by a Dutch company called Payter. Printed on it was a sentence: ‘Contactless payment only.’ I touched down my bank card, but rather than dispensing Coke, it beeped a message: ‘Card invalid.’ Not all cards are created equal, even if you can get one – and not everyone can.
In the economist’s imagining of an idealised free market, rational individuals enter into monetary-exchange contracts with each other for their mutual benefit. One party – called the ‘buyer’ – passes money tokens to another party – called the ‘seller’ – who in turn gives real goods or services. So here I am, the tired individual rationally seeking sugar. The market is before me, fizzy drinks stacked on a shelf, presided over by a vending machine acting on behalf of the cola seller. It’s an obedient mechanical apparatus that is supposed to abide by a simple market contract: If you give money to my owner, I will give you a Coke. So why won’t this goddamn machine enter into this contract with me? This is market failure.
To understand this failure, we must first understand that we live with two modes of money. ‘Cash’ is the name given to our system of physical tokens that are manually passed on to complete transactions. This first mode of money is public. We might call it ‘state money’. Indeed, we experience cash like a public utility that is ‘just there’. Like other public utilities, it might feel grungy and unsexy – with inefficiencies and avenues for corruption – but it is in principle open-access. It can be passed directly by the richest of society to the poorest of society, or vice versa.
A new study adds fresh support to the evidence that has led scientists to propose the “transposon theory of aging.”
Transposons are rogue elements of DNA that break free in aging cells and rewrite themselves elsewhere in the genome, potentially creating lifespan-shortening chaos in the genetic makeups of tissues.
As cells get older, prior studies have shown, tightly wound heterochromatin wrapping that typically imprisons transposons becomes looser, allowing them to slip out of their positions in chromosomes and move to new ones, disrupting normal cell function. Meanwhile, scientists have shown that potentially related interventions, such as restricting calories or manipulating certain genes, can demonstrably lengthen lifespans in laboratory animals.
If you find yourself torn between cravings and ethical concerns every time you tuck into a chicken nugget, there might soon be a way you can have your meat and eat it too. Memphis Meats has just served up chicken and duck meat cultivated in a lab from poultry cells, meaning no animals were harmed in the making of the meal.
Along with the ethical issues of animal cruelty that surround a carnivorous diet, feeding, breeding and keeping livestock for food has an enormous environmental impact. The animals burp more greenhouse gases into the air than all modes of human transport, and require large swathes of land to be cleared, not to mention all the food, water, and care they need. Studies show that growing meat in a lab setting could go a long way towards solving those problems.
In 2013, the public got a taste of beef that had never actually been a cow, but as impressive as that achievement was, it was reportedly pretty bland and cost as much as a house. Companies like Impossible Burger are working on improving the look and taste, and in February 2016, Memphis Meats unveiled what it called a "clean" meatball.
In the year 2000, logging onto the Internet usually meant sitting down at a monitor connected to a dial-up modem, a bunch of beeps and clicks, and a "You've got mail!" notification. In those days, AOL Instant Messenger was the Internet's favorite pastime, and the king of AIM was SmarterChild, a chatbot that lived in your buddy list.
A chatbot is a computer program designed to simulate human conversation, and SmarterChild was one of the first chatbots the public ever saw. The idea was that you would ask SmarterChild a question — "Who won the Mets game last night?" or "Where did the Dow close today?" — then the program would scour the Internet and, within seconds, respond with the answer. The company that built SmarterChild, a startup called ActiveBuddy, thought it could make money by building custom bots for big companies and made SmarterChild as a test case.
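The question-and-answer loop described here can be sketched as a simple keyword-matching dispatcher. This is a hypothetical illustration of the general technique, not ActiveBuddy's actual implementation; the handler names and canned answers below are stand-ins for the live sports and stock feeds SmarterChild actually queried.

```python
# Minimal sketch of a SmarterChild-style chatbot loop (hypothetical data):
# match keywords in the user's message and dispatch to a handler that, in
# the real service, would scour the web for a live answer.

def sports_handler(message):
    return "The Mets won last night."       # placeholder for a live score lookup

def stocks_handler(message):
    return "The Dow closed higher today."   # placeholder for a live quote lookup

def small_talk_handler(message):
    # Most users just chatted, so the chitchat fallback mattered most.
    return "Not much! What's on your mind?"

HANDLERS = [
    ({"game", "score", "mets"}, sports_handler),
    ({"dow", "stock", "close"}, stocks_handler),
]

def reply(message):
    words = set(message.lower().replace("?", "").split())
    for keywords, handler in HANDLERS:
        if words & keywords:                # any keyword present -> dispatch
            return handler(message)
    return small_talk_handler(message)
```

Asking `reply("Who won the Mets game last night?")` routes to the sports handler, while anything unrecognized falls through to small talk, which, as it turned out, was where most of the traffic went.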
And people did use SmarterChild — a lot. At its height, SmarterChild chatted with 250,000 people a day.
Responding like a human
But most of those people weren't asking SmarterChild about sports or stocks. They were just chitchatting with it, about nothing in particular — like how you'd chat with a friend. "Our goal was to make a bot people would actually use, and to do that we had to make the best friend on the Internet," says Robert Hoffer, one of its creators.
The archerfish can spit water with remarkable accuracy at targets up to six feet away, giving it the evolutionarily advantageous ability to hunt prey on land from the water. Even more intriguing is the idea that archerfish can recognise faces and use water as a tool, making them part of an extremely small – but apparently growing – club of animals with a particular sort of intelligence. Informed by a study published in Nature in 2016, this short video from Deep Look probes what the archerfish can tell us about the increasingly dubious link between brain size and intelligence. Read more about the video at KQED Science.
Rebecca Saxe’s first son, Arthur, was just a month old when he first entered the bore of an MRI machine to have his brain scanned. Saxe, a cognitive scientist at the Massachusetts Institute of Technology, went headfirst with him: lying uncomfortably on her stomach, her face near his diaper, she stroked and soothed him as the three-tesla magnet whirred around them. Arthur, unfazed, promptly fell asleep.
All parents wonder what’s going on inside their baby’s mind; few have the means to find out. When Saxe got pregnant, she’d already been working with colleagues for years to devise a setup to image brain activity in babies. But her due date in September 2013 added urgency to getting everything ready.
Over the past couple of decades, researchers like Saxe have used functional MRI to study brain activity in adults and children. But fMRI, like a 19th-century daguerreotype, requires subjects to lie perfectly still lest the image become hopelessly blurred. Babies are jittering bundles of motion when not asleep, and they can’t be cajoled or bribed into stillness. The few fMRI studies done on babies to date mostly focused on playing sounds to them while they slept.
But Saxe wanted to understand how babies see the world when they’re awake; she wanted to image Arthur’s brain as he looked at video clips, the kind of thing that adult research subjects do easily. It was a way of approaching an even bigger question: Do babies’ brains work like miniature versions of adult brains, or are they completely different? “I had this fundamental question about how brains develop, and I had a baby with a developing brain,” she said. “Two of the things that were most important to me in life temporarily had this very intense convergence inside an MRI machine.”
The morning of the US presidential election, I was leading a graduate seminar on Friedrich Nietzsche’s critique of truth. It turned out to be all too apt.
Nietzsche, German counter-Enlightenment thinker of the late 19th century, seemed to suggest that objective truth – the concept of truth that most philosophers relied on at the time – doesn’t really exist. That idea, he wrote, is a relic of an age when God was the guarantor of what counted as the objective view of the world, but God is dead, meaning that objective, absolute truth is an impossibility. God’s point of view is no longer available to determine what is true.
Nietzsche fancied himself a prophet of things to come – and not long after Donald Trump won the presidency, the Oxford Dictionaries declared the international word of the year 2016 to be “post-truth”.
Indeed, one of the characteristics of Trump’s campaign was its scorn for facts and the truth. Trump himself unabashedly made any claim that seemed fit for his purpose of being elected: that crime levels are sky-high, that climate change is a Chinese hoax, that he’d never called it a Chinese hoax, and so on. But the exposure of his constant contradictions and untruths didn’t stop him. He won.
Nietzsche offers us a way of understanding how this happened. As he saw it, once we realise that the idea of an absolute, objective truth is a philosophical hoax, the only alternative is a position called “perspectivism” – the idea there is no one objective way the world is, only perspectives on what the world is like.
This might seem outlandish. After all, surely we all agree certain things are objectively true: Trump’s predecessor as president is Barack Obama, the capital of France is Paris, and so on. But according to perspectivism, we agree on those things not because these propositions are “objectively true”, but by virtue of sharing the same perspective.
When it comes to basic matters, sharing a perspective on the truth is easy – but when it comes to issues such as morality, religion and politics, agreement is much harder to achieve. People occupy different perspectives, seeing the world and themselves in radically different ways. These perspectives are each shaped by the biases, the desires and the interests of those who hold them; they can vary wildly, and therefore so can the way people see the world.
Your truth, my truth
A core tenet of Enlightenment thought was that our shared humanity, or a shared faculty called reason, could serve as an antidote to differences of opinion: a common ground that can function as the arbiter of different perspectives. Of course people disagree, but, the idea goes, through reason and argument they can come to see the truth. Nietzsche’s philosophy, however, claims such ideals are philosophical illusions, wishful thinking, or at worst a covert way of imposing one’s own view on everyone else under the pretence of rationality and truth.
That’s become the slogan of an increasing number of the global white-collar workforce. People are unleashing themselves from corporations and companies to plug wirelessly into the wider world. The tribe of this digital diaspora is described and named in various ways—among them, location independent—but I prefer digital nomad.
Full disclosure: I number myself among this constituency, breaking the tether to corporate ties last year. I’m writing to you from a somewhat disclosed corner of southwestern Turkey where sugar-cube-shaped homes tumble down rugged hills toward the Aegean Sea. I can literally see Greece—at least a few of her islands—from my window.
I’m certainly not alone, and the community seems to be growing exponentially, reaching what appears to be a tipping point that is ready to push the 9-to-5 workweek into the dustbin of history, along with pet rocks and pensions.
“[Digital nomads] don’t subscribe to the standards of previous generations for what defines happiness, what defines productivity, what defines success. I think they’re freeing themselves from the shackles of previous generations,” says Brian Solis, a self-described digital anthropologist and principal analyst at technology research firm Altimeter Group, which is part of the marketing firm Prophet Company. He is also the author of X: The Experience When Business Meets Design.
No one has written a complete anthropological history of digital nomadism, but I like the brief history that fellow nomad and now documentary filmmaker Christine Gilbert developed a few years ago, as one way to tell the story. It all started, according to Gilbert, around 1983, with a freelance writer named Steven Roberts.
Machine-to-machine communication forms a vast plexus of precise data on land, at sea, in the air, and in space. Homes and offices and cars: connected. Products and environments and people: smarter, safer, greener. Some of this is easy to imagine. Some of it still boggles the mind. A radical new type of interconnectivity has recently…
Say hello to the decentralized economy: the blockchain is about to change everything. In this lucid explainer of the complex (and confusing) technology, Bettina Warburg describes how the blockchain will eliminate the need for centralized institutions like banks or governments to facilitate trade, evolving age-old models of commerce and finance into something far more interesting: a distributed, transparent, autonomous system for exchanging value.
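The core data structure behind a blockchain can be sketched in a few lines: each block commits to the hash of the block before it, so tampering anywhere breaks every later link. This is a toy illustration of the hash-chain idea only, not the distributed consensus protocol Warburg describes; the transaction strings are invented examples.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    # Each new block records the hash of the previous block
    # (a zero hash for the very first block).
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})
    return chain

def is_valid(chain):
    # Every block must commit to the hash of the one before it.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert is_valid(chain)

# Rewriting an earlier block invalidates every link after it.
chain[0]["data"] = "Alice pays Bob 500"
assert not is_valid(chain)
```

The point of the design is that no single party needs to be trusted to keep the ledger honest: anyone holding a copy of the chain can re-run the hash checks themselves.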