"It was March 28, 1977, the first Saturday after the "Star Wars" Wednesday opening.
EARLIER this month, at a symposium at the University of Southern California film school, George Lucas and Steven Spielberg predicted the collapse of most megabudget movies, and with it the end of Hollywood as it now exists. This sounds like bad news for popcorn sellers. But Mr. Lucas and Mr. Spielberg had intriguing ideas about what might come next.
Mr. Lucas predicted that blockbusters would eventually become big-ticket events, like ballgames and Broadway plays, and that the rest of the movie business would migrate to online video — a trend that has already begun.
Mr. Spielberg offered a more radical vision. At a time of ubiquitous screens — video, movie and computer — he predicted an end to on-screen entertainment. Instead, he said he thought we’d have a kind of enveloping, wraparound entertainment.
“We’re never going to be totally immersive as long as we’re looking at a square, whether it’s a movie screen or whether it’s a computer screen,” Mr. Spielberg said. “We’ve got to get rid of that and put the player inside the experience, where no matter where you look you’re surrounded by a three-dimensional experience. That’s the future.”
Though most people treat screens as a window, Mr. Spielberg seems to understand them as a barrier, one that prevents viewers — now “players” — from being fully, actively engaged in their entertainment.
The idea of immersive entertainment — in which you can lose yourself and in which the line between fiction and reality blurs — isn’t new at all. And its impact can be disorienting.
The title character in Cervantes’s 17th-century satire, “Don Quixote,” went tilting at windmills, for example, because he had immersed himself in the practice of reading. “He read all night from sundown to dawn, and all day from sunup to dusk until with virtually no sleep and so much reading he dried out his brain and lost his sanity.” Quixote’s endearing madness suggests the degree to which art in general and reading in particular might literally derange one’s faculties.
Centuries later, Orson Welles showed that radio could have similarly disturbing powers. When Welles broadcast a live radio dramatization of “The War of the Worlds” in 1938, thousands of listeners believed Martians had actually invaded New Jersey. Despite repeated announcements that the radio play was fiction, panicked listeners phoned police stations, rushed into churches to pray, even volunteered to take up arms. Some listeners packed their belongings and prepared to evacuate.
Today, of course, losing oneself in a book or broadcast is familiar and feels safe and even old-fashioned. And immersive entertainment has moved into the realm of video games and beyond, making strides in the direction Mr. Spielberg envisions.
The most advanced immersive entertainment on the horizon now may be the Oculus Rift, a strap-on virtual-reality headset. Virtual reality, a catchall term for digital simulations that can be experienced with goggles, earphones and, in some cases, gloves, enjoyed a brief vogue in the late 1980s; now, with far more advanced computer capabilities, it seems on the verge of a comeback. To demonstrate the Oculus Rift’s capacity, developers created a “guillotine simulator” that, even in its primitive form, seems to be a big — frightening — hit with those who have tried it. Twist your neck and you see crowds of spectators; look down and you see the basket waiting for your head. But at the end of the day, you’re just lying there with a box strapped to your face, staring into a pair of screens.
No one has yet managed to invent a technology that dispenses with screens entirely, as Mr. Spielberg envisions.
But Gene Roddenberry, who created “Star Trek,” offered a blueprint for this kind of entertainment in “Star Trek: The Next Generation,” the 1987 follow-up to his original series. In the pilot episode, the first officer of the Starship Enterprise enters the ship’s “holodeck,” a chamber specially outfitted to project a holographic simulation of reality; the first officer, for example, walks into the room and experiences a verdant woodland.
The fictional holodeck in “Star Trek” didn’t depend entirely on holographic illusion; it also relied on fanciful “matter replicators” capable of transforming energy into a chair you could sit in or tea you could drink. Thanks to sophisticated programming, it provided an extraordinary range of entertainment possibilities for the Enterprise crew: they could, for example, enjoy a simulated ride on the Orient Express or fight a simulated Battle of the Alamo.
Life-size holograms and energy-to-matter converters are probably a ways off. For the moment, the closest we have come to a holodeck-like experience may be immersive theater.
“Sleep No More” and “Then She Fell” are two current theatrical productions that dispense with the traditional stage and dissolve the barrier between performer and viewer. Instead, the audience interacts directly with the characters in the play.
It is not surprising that such productions have been called “theater for the video game generation.” They combine the first-person immediacy of video games like “Grand Theft Auto” and “BioShock” with the warmth and emotional engagement of flesh-and-blood interactions.
“Sleep No More” is presented in a series of interconnected rooms, and it is viscerally engaging to stand a few feet away as Macbeth and Lady Macbeth hurl each other against the walls, or later as Macbeth cringes naked in a bloody bathtub. It feels personal and intimate in a way that conventional plays or movies cannot — and in a way that digital simulations can’t either.
In that sense, at least, the future Mr. Spielberg imagined is already here — though only in a limited way, available to the small group of people who can attend the performances. Now, if we just had some technology that would scale things up. Mr. Spielberg, we’re ready.
Frank Rose is the author of “The Art of Immersion: How the Digital Generation is Remaking Hollywood, Madison Avenue, and the Way We Tell Stories.”
Haynes Publishing's 'Dan Dare Pilot of the Future - Space Fleet Operations Manual' is an engrossing labour of love from a team clearly devoted to the legacy of a hero who inspired a generation – in the hope that he may do so anew. It includes work from some of the original artists, as well as updated cutaways of spacecraft, kit and weaponry that remain true to the original spirit.
For a generation of post-war children - largely though not exclusively schoolboys - Dan Dare’s interplanetary adventures were a vibrant and exotic escape from the grey austerity and rationing that persisted long after global hostilities ended.
The Russian multimillionaire Dmitry Itskov wants us all to live forever, our minds inside avatars. And he is spending a bundle to try to make his colossal dream happen.
GET right up close to Dmitry Itskov and sniff all you like — you will not pick up even the faintest hint of crazy. He is soft-spoken and a bit shy, but expansive once he gets talking, and endearingly mild-mannered. He never seems ruffled, no matter what question you ask. Even if you ask the obvious one, which he has encountered more than a few times since 2011, when he started “this project,” as he sometimes calls it.
Namely: Are you insane?
“I hear that often,” he said with a smile, over lunch one recent afternoon in Manhattan. “There are quotes from people like Arthur C. Clarke and Gandhi saying that when people come up with new ideas they’re called ‘nuts.’ Then everybody starts believing in the idea and nobody can remember a time when it seemed strange.”
It is hard to imagine a day when the ideas championed by Mr. Itskov, 32, a Russian multimillionaire and former online media magnate, will not seem strange, or at least far-fetched and unfeasible. His project, called the 2045 Initiative, named for the year he hopes to complete it, envisions the mass production of lifelike, low-cost avatars that can be uploaded with the contents of a human brain, complete with all the particulars of consciousness and personality.
What Mr. Itskov is striving for makes wearable computers, like Google Glass, seem about as futuristic as Lincoln Logs. This would be a digital copy of your mind in a nonbiological carrier, a version of a fully sentient person that could live for hundreds or thousands of years. Or longer. Mr. Itskov unabashedly drops the word “immortality” into conversation.
Yes, we have seen this movie and, yes, it always leads to evil robots enslaving humanity, the Earth reduced to smoldering ruins. And it’s quite possible that Mr. Itskov’s plans, in the fullness of time, will prove to be nothing more than sci-fi bunk.
But he has the attention, and in some cases the avid support, of august figures at Harvard, M.I.T. and Berkeley and leaders in fields like molecular genetics, neuroprosthetics and other realms that you’ve probably never heard of. Roughly 30 speakers from these and other disciplines will appear at the second annual 2045 Global Future Congress on June 15 and 16 at Alice Tully Hall, in Lincoln Center in Manhattan.
Though billed as a congress, the event is more like a showcase and conference that is open to the public, with general admission tickets starting at $750. (About 400 tickets, roughly half the total available, have been sold so far.) Attendees will hear people like Sir Roger Penrose, an emeritus professor of mathematical physics at Oxford, who appears on the 2045.com Web site with a video teaser about “the quantum nature of consciousness,” and George M. Church, a genetics professor at Harvard Medical School, whose video on the site concerns “brain healthspan extension.”
As these videos suggest, scientists are taking tiny, incremental steps toward melding humans and machines all the time. Ray Kurzweil, the futurist and now Google’s director of engineering, argued in “The Singularity Is Near,” a 2005 book, that technology is advancing exponentially and that “human life will be irreversibly transformed” to the point that there will be no difference between “human and machine or between physical and virtual reality.”
Mr. Kurzweil was projecting based on the scientific and intellectual ferment of the time. And technological achievements have continued their march since he wrote the book — from creating computers that can outplay humans (like Watson, the “Jeopardy” winner from I.B.M.) to technology that tracks a game player’s heartbeat and perhaps his excitement (like the new Kinect) to digital tools for those with disabilities (like brain implants that can help quadriplegics move robotic arms).
But most researchers do not aspire to upload our minds to cyborgs; even in this crowd, the concept is a little out there. Academics seem to regard Mr. Itskov as sincere and well-intentioned, and if he wants to play global cheerleader for fields that generally toil in obscurity, fine. Ask participants in the 2045 conference if Mr. Itskov’s dreams could ultimately be realized and you’ll hear everything from lukewarm versions of “maybe” to flat-out enthusiasm.
“I have a rule against saying something is impossible unless it violates laws of physics,” Professor Church says, adding about Mr. Itskov: “I just think that there’s a lot of dots that aren’t connected in his plans. It’s not a real road map.”
Martine A. Rothblatt, another speaker at the coming conference and founder of United Therapeutics, a biotech company that makes cardiovascular products, sounds more optimistic.
“This is no more wild than in the early ‘60s, when we saw the advent of liver and kidney transplants,” Ms. Rothblatt says. “People said at the time, ‘This is totally crazy.’ Now, about 400 people have organs transplanted every day.”
CANNES, France –
According to “Drive” director Nicolas Winding Refn (who’s also here this year with the ultra-violent “Only God Forgives”), the legendary unmade mid-‘70s film version of Frank Herbert’s “Dune” by Chilean-born mad genius Alejandro Jodorowsky actually exists – and he’s seen it. OK, even Refn hasn’t seen a version of it that can be projected on a screen or played on a high-def monitor, the version that was supposed to star David Carradine, Orson Welles, Mick Jagger and Salvador Dalí. That doesn’t exist. But Refn says he spent a long evening in Jodorowsky’s Paris apartment while the latter went through the storyboards for “Dune” with him page by page, talking through every shot and every line of dialogue. “I am the only spectator who has ever seen this movie,” Refn concludes. “And I have to tell you: It was awesome.”
I don’t expect to see a movie at this festival, or all year long, that’s as inspiring as Frank Pavich’s documentary “Jodorowsky’s Dune,” the story of an enormously influential film that was never made. That may sound strange on a number of levels: How does one of the most famous collapsed productions in cinema history, a failure so dire that it derailed its director’s career for many years, become a source of inspiration? Especially when the resulting documentary largely consists of a man in his 80s sitting around and talking? Well, when the old guy talking is as brilliant, passionate, ferocious and hilarious as Jodorowsky, and when the stories he tells convince you that his quixotic dream of making an enormous science-fiction spectacle that combined star power, cutting-edge technology, philosophical depth and spiritual prophecy nearly came true, it’s as if you glimpse his vision of a transformed world where everything is possible.
The rain-sodden crowd of movie buffs who packed into the Théâtre Croisette here on Saturday night for the premiere of “Jodorowsky’s Dune” (in the Directors’ Fortnight sidebar competition) rode with the film for every second; there were several outbreaks of spontaneous applause and a standing ovation for director Pavich when it was over. I gather that aficionados of Jodorowsky and his “Dune” project have seen a good deal of the storyboard art before, along with Chris Foss’ color paintings of sets and design elements. But for the more casual sci-fi fan, this movie delivers a treasure trove of half-familiar images and ideas and opens a window onto an unexplored world that almost was, a world that – as critic Devin Faraci observes in the film – altered the course of pop-culture history without ever existing on its own terms.
Now, it seems likely, in the cold light of hindsight, that Jodorowsky’s “Dune” would have collapsed for other reasons even if someone had funded it. (He and producer Michel Seydoux fell about $5 million short on a proposed $15 million budget, a vast sum for an art-house director in the 1970s.) Just because Dalí, Welles and Jagger had all said yes, albeit on ludicrous terms – Welles was promised a private chef; Dalí was to be paid $100,000 for every minute he appeared in the final film, and requested a “burning giraffe” – doesn’t mean they would actually have shown up. Furthermore, Jodorowsky’s films to that point, like the international cult hits “El Topo” and “The Holy Mountain,” were surrealistic, visionary LSD-style freakouts that followed no discernible rules of narrative. Could he really have made a densely plotted space opera with many interlocking sets of characters, especially combined with special effects that were barely possible at the time and deep-focus Wellesian cinematography?
It seems impossible. But what you come away from “Jodorowsky’s Dune” thinking is: Hell, just maybe. Jodorowsky had been trained in Europe and South America as a theater artist and circus performer and knew almost nothing about the film industry. But the team he gathered – largely through his own personal charisma and by instinct – was truly extraordinary. He walked out on a meeting with “2001: A Space Odyssey” effects wizard Douglas Trumbull, who was then the highest-paid special-effects whiz in the business, and instead hired the little-known Dan O’Bannon after seeing his work in John Carpenter’s “Dark Star.” French comic-book artist Moebius (aka Jean Giraud) drew the storyboards and character-costume sketches, while Foss, a British artist well known for his science-fiction book covers, supplied color paintings of spaceships and sets. To design the nightmarish sets and costumes for the sinister House of Harkonnen, Jodorowsky found a Swiss surrealist painter named H.R. Giger.
Is all of that starting to sound oddly familiar? Fans of Ridley Scott’s 1979 “Alien,” a movie that did get made and blew the minds of sci-fi devotees around the globe, don’t need me to tell them that O’Bannon co-wrote its screenplay after the collapse of “Dune” left him broke and homeless. Moebius and Foss both worked on the film as well, and of course Giger designed the scariest extraterrestrial monster anyone had ever seen. Remember that Jodorowsky’s “Dune” would presumably have come out before “Star Wars”; when we look at the designs for this unproduced film, we see the birth of a design aesthetic that has shaped both pop culture and the physical world for the last four decades. Oh, and if you’re wondering about David Lynch’s 1984 film of “Dune,” yes, Jodorowsky went to see it, with great trepidation. He was delighted it was awful; not the most noble reaction, he admits, but human.
Our visions of the future tend to be forged in the pages of science fiction. But for the past half-century, a number of prominent thinkers, activists, and scientists have made significant contributions to our understanding of what the future could look like. Here are 10 recent futurists you absolutely need to know about.
1. Robert Ettinger
Robert C.W. Ettinger, who famously said that death was for the unprepared and the unimaginative, vowed to live forever.
He’s known as the intellectual father of the cryonics movement. Physicist Robert Ettinger, who only died recently and is currently in cryonic stasis, was an early advocate of immortalism, or what we would today call radical life extension. In his 1964 book, The Prospect of Immortality, Ettinger argued that whole body or head-only freezing should be used to place the recently deceased into a state of suspended animation for later revival. To that end, he made the case that governments should immediately start a mass-freezing program. He also believed that the onset of immortality would endow humanity with a higher, nobler nature.
"Someday there will be some sort of psychological trigger that will move all these people to take the practical steps they have not yet taken,” he wrote, “When people realize that their children and grandchildren will enjoy indefinite life, that they may well be the last generation to die."
Today, organizations like Alcor and the Cryonics Institute (which he founded) have put his ideas into action.
Ettinger is also considered a pioneer in the transhumanist movement by virtue of his 1972 book, Man Into Superman.
2. Shulamith Firestone
Back in 1970, at the tender age of 25, Shulamith Firestone kickstarted the cyberfeminist movement by virtue of her book, The Dialectic of Sex: The Case for Feminist Revolution. To come up with her unique feminist philosophy, Firestone took 19th and 20th century socialist thinking and fused it with Freudian psychoanalysis and the existentialist perspectives of Simone de Beauvoir.
Firestone argued that gender inequality was the result of a patriarchal social structure that had been imposed upon women on account of their necessary role as incubators. She felt that pregnancy, childbirth, and child-rearing imposed physical, social, and psychological disadvantages upon women — and that the only way for women to free themselves from these biological impositions would be to seize control of reproduction. She advocated for the development of cybernetic and assistive reproductive technologies, including artificial wombs, gender selection, and in vitro fertilization. In addition, she advocated for the dissemination of contraception, abortion, and state support for child-rearing.
She would prove to be a major influence on later thinkers like Joanna Russ (author of "The Female Man"), sci-fi author Joan Slonczweski, and Donna Haraway (who we’ll get to in just a bit).
3. I. J. Good
British mathematician I. J. Good was one of the first thinkers — if not the first — to properly articulate the problem that is the pending Technological Singularity. Predating Hans Moravec, Ray Kurzweil, and Vernor Vinge by several decades, Good penned an article in 1965 warning about the dramatic potential for recursively improving artificial intelligence.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The phrase intelligence explosion has since been adopted by futurists critical of “soft” Singularity scenarios, like a slow takeoff event, or Kurzweilian notions of the steady, accelerating growth of all technologies (including intelligence). His work has influenced AI theorists like Eliezer Yudkowsky, Ben Goertzel, and of course, the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence).
Interestingly, Good served as a cryptologist at Bletchley Park with Alan Turing during World War II. He also worked as a consultant on supercomputers for Stanley Kubrick for the 1968 film, 2001: A Space Odyssey.
4. K. Eric Drexler
Back in 1959, the renowned physicist Richard Feynman delivered an extraordinary lecture titled “There’s Plenty of Room at the Bottom” in which he talked about the “experimental physics” of “manipulating and controlling things on a small scale.” This idea largely languished, probably because it was ahead of its time. It wouldn’t be until 1986 and the publication of K. Eric Drexler’s Engines of Creation: The Coming Era of Nanotechnology that the idea of molecular engineering would finally take root and take its modern form.
Drexler, by virtue of this book and his subsequent lectures and writings, was the first futurist to give coherency to the prospect of molecular nanotechnology. Given the potential for working at such a small scale, Drexler foresaw the rise of universal assemblers (also called molecular assemblers, or simply “fabs”) — machines that can build objects atom by atom (basically Star Trek replicators). He predicted that we’ll eventually use nanotech to clear the environment of toxins, grow rockets from a single seed, and create biocompatible robots that will be injected into our bodies. But unlike Robert Ettinger, Drexler actually came up with a viable technique for reanimating individuals in cryonic suspension; he envisioned fleets of molecular robots guided by sophisticated AI that would reconstruct a person thawed from liquid nitrogen.
But he also foresaw the negative consequences, such as weaponized nanotechnology and the potential for grey goo — an out-of-control scourge of self-replicating micro-machines.
As an aside, Drexler also predicted hypertext.
5. Timothy Leary
Timothy Leary is typically associated with drug culture and the phrase, "tune in, turn on, and drop out," but his contributions to futurism are just as significant — and surprisingly related. He developed his own futurist philosophy called S.M.I².L.E., which stands for Space Migration, Intelligence Increase, and Life Extension. These ideas developed out of Leary’s life-long interest in seeing humanity evolve beyond its outdated morality, which would prove to be highly influential within certain segments of the transhumanist community.
As a futurist, Leary is also important in that he was an early advocate for cognitive liberty and the potential for neurodiversity. Through his own brand of psychedelic futurism, he argued that we have the right to modify our minds and create our own psychological experiences. He believed that each psychological modality — no matter how bizarre or unconventional — could still be ascribed a certain value. What's more, given the extreme nature of certain psychedelic experiences, he also demonstrated the potential for human consciousness to function beyond what’s considered normal.
6. Donna Haraway
Donna Haraway made a name for herself after the publication of her 1985 essay, “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s.” At the time, it was seen as a reaction to the rise of anti-technological ecofeminism, but it has since been interpreted and reinterpreted by everyone from postmodernist lefties through to transhumanist postgenderists.
Referring to Haraway as a Cyborgian Socialist-Feminist, the futurist and sociologist James Hughes describes her legacy this way:
Haraway argued that it was precisely in the eroding boundary between human beings and machines, and between women and machines in particular, that we can find liberation from the old patriarchal dualisms. Haraway says she would rather be a cyborg than a goddess, and proposes that the cyborg could be the liberatory mythos for women. This essay, and Haraway’s subsequent writings, have inspired a new cultural studies sub-discipline of “cyborgology,” made up of feminist culture and science fiction critics, exploring cyborgs and the woman-machine interface in various permutations.
And as Wired’s Hari Kunzru noted, “Sociologists and academics from around the world have taken her lead and come to the same conclusion about themselves. In terms of the general shift from thinking of individuals as isolated from the "world" to thinking of them as nodes on networks, the 1990s may well be remembered as the beginning of the cyborg era.”
7. Peter Singer
He’s primarily regarded as a philosopher, ethicist, and animal rights advocate, but Princeton’s Peter Singer has also made a significant impact on futurist discourse — albeit through rather unconventional channels.
Singer, as a utilitarian, social progressive, and personhood-centered ethicist, has argued that the suffering of animals, especially apes and large mammals, should be put on par with the suffering of children and developmentally disabled adults. To that end, he founded the Great Ape Project, an initiative that seeks to confer basic legal rights to non-human great apes, namely chimpanzees, bonobos, gorillas, and orangutans. It’s a precursor to my own Rights of Non-Human Persons Program, which also includes dolphins, whales, elephants — and makes provisions for artificial intelligence. Singer has also suggested that chickens be genetically engineered so that they experience less suffering.
And in 2001, Singer’s A Darwinian Left: Politics, Evolution, and Cooperation argued that there is a biological basis for human selfishness and hierarchy — one that has thwarted our attempts at egalitarian reform. What’s needed, says Singer, is the application of new genetic and neurological sciences to identify and modify the aspects of human nature that cause conflict and competition — what today would be regarded as moral enhancement. He supports voluntary genetic improvement, but rejects coercive eugenic pseudo-science.
8. Freeman Dyson
Theoretical physicist and mathematician Freeman Dyson is one of the first thinkers to consider the potential for megascale engineering projects.
His 1959 paper, "Search for Artificial Stellar Sources of Infrared Radiation," outlined a way for an advanced civilization to utilize all of the energy radiated by their sun — an idea that has since inspired other technologists to speculate about similar projects, like Matrioshka and J-Brains.
9. Nick Bostrom
Swedish philosopher and neuroscientist Nick Bostrom is one of the finest futurists in the business, renowned for taking heady concepts to the next level. He has suggested, for example, that we may be living in a simulation, and that an artificial superintelligence may eventually take over the world — if not destroy us altogether. And indeed, one of his primary concerns is in assessing the potential for existential risks. An advocate of transhumanism and human enhancement, he co-founded the World Transhumanist Association in 1998 (now Humanity+), and currently runs the Future of Humanity Institute at Oxford.
10. Aubrey de Grey
Love him or hate him, gerontologist Aubrey de Grey has revolutionized the way we look at human aging.
He’s an advocate of radical life extension who believes that the application of advanced rejuvenation techniques may help many humans alive today live exceptionally long lives. What makes de Grey particularly unique is that he’s the first gerontologist to put together an actual action plan for combating aging; he’s one of the first thinkers to conceptualize aging as a disease unto itself. Rather than looking at the aging process as something that’s inexorable or overly complicated, his macro-approach (Strategies for Engineered Negligible Senescence) consists of a collection of proposed techniques that would work to not just rejuvenate the human body, but to stop aging altogether.
Back in 2006, MIT’s Technology Review offered $20,000 to any molecular biologist who could demonstrate that de Grey’s SENS is “so wrong that it was unworthy of learned debate.” No one was able to claim the prize. But a 2005 EMBO report concluded that none of his therapies "has ever been shown to extend the lifespan of any organism, let alone humans." Regardless of the efficacy of de Grey’s approach, he represents the first generation of gerontologists to dedicate their work to the problem that is human aging. Moreover, he’s given voice to the burgeoning radical life extension movement.
Science fiction is notoriously difficult to define, but Frederik Pohl’s definition is one of the lovelier ones.
1. Light thinks it travels faster than anything but it is wrong. No matter how fast light travels, it finds the darkness has always got there first, and is waiting for it.
2. The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.
3. It is often said that before you die your life passes before your eyes. It is in fact true. It’s called living.
4. He was the sort of person who stood on mountaintops during thunderstorms in wet copper armor shouting ‘All the Gods are bastards.’
5. It’s not worth doing something unless you were doing something that someone, somewhere, would much rather you weren’t doing.
6. In the beginning there was nothing, which exploded.
7. Some humans would do anything to see if it was possible to do it. If you put a large switch in some cave somewhere, with a sign on it saying ‘End-of-the-World Switch. PLEASE DO NOT TOUCH’, the paint wouldn’t even have time to dry.
8. I’ll be more enthusiastic about encouraging thinking outside the box when there’s evidence of any thinking going on inside it.
9. I’d rather be a rising ape than a falling angel.
10. Coming back to where you started is not the same as never leaving.
11. He’d been wrong, there was a light at the end of the tunnel, and it was a flamethrower.
12. Sometimes glass glitters more than diamonds because it has more to prove.
Washington: NASA is planning to send DNA of famed British science fiction writer Sir Arthur C. Clarke into space - five years after his death.
The author of the novel '2001: A Space Odyssey' died in 2008 in Sri Lanka, and now NASA scientists have announced plans to send his DNA into orbit around the Sun in 2014 aboard the Sunjammer, a solar sail spacecraft which gets its name from the writings of Clarke.
Called the Sunjammer Cosmic Archive (SCA), the flying time capsule is a first in the history of space travel, carrying digital files of human DNA, including Clarke's, aboard the spacecraft.
Will McIntosh's novel "Love Minus Eighty" started as a short story called "Bridesicle," which won the 2010 Hugo Award for Best Short Story.
In this future, questionable suitors revive legally dead women in cryogenic dating centers and offer to save them from death--if they're willing to get hitched. Suddenly, "'til death do us part" doesn't sound very effective. McIntosh has certainly pondered how today's already bizarre dating rituals can easily turn more alien in just a few centuries. Here, he shares seven futuristic dating rules from classic and new sci-fi novels.
So you’re single, and for some reason you suddenly find yourself in the future. Usually a) you were cryogenically frozen, b) you stumbled into a time machine or c) you have become immortal. How are you going to find your soul mate? You don't know the first thing about dating in the future.
Don't worry--science fiction writers are on the case, providing you with dating etiquette guidelines regardless of whether you've been cast forward a few decades or a few millennia. Let's begin in the near future.
1. Super Sad True Love Story by Gary Shteyngart
Rule #1: You won't have to guess whether people think you're hot or not
A few decades from now, when you walk into a bar, everyone will indicate how attractive (or unattractive) they find you on a virtual scoreboard--whether you want them to or not. At least, that's how Gary Shteyngart envisions dating in "Super Sad True Love Story." You'll be judged not only by your looks (hint: try to look young), but also on whether you're keeping up with the breakneck pace of fashion (if you're a woman, you'll need to pick up some transparent undergarments), and by your credit score, which hovers over your head in flashing red numbers. Being adept at navigating the Internet of the future--and using it to draw attention to yourself--won't hurt, either.
Love Minus Eighty By Will McIntosh
Rule #2: Find someone you can love until you die, or vice versa
Jump ahead a half-century, to my novel "Love Minus Eighty." If you got there via cryogenics and you're an attractive woman, you may find yourself in a cryogenic dating farm, where meeting the right (i.e., rich) man is your only chance to be unfrozen and be given a second chance at life. If you're there under less dire and depressing circumstances, you should be aware that not many people find love "in the wild" anymore. Now, you meet people through dating services. If you need help, you can employ a professional dating coach who will stay with you every step of the way, from creating your online profile to feeding you witty or provocative lines via a private link while you're on your date.
Time Enough for Love By Robert A. Heinlein
Rule #3: Why buy when you can lease?
In the future according to Robert A. Heinlein's "Time Enough for Love," you'll be able to, well, put a time limit on your love. The notion of limited-term marriage contracts is a robust one in science fiction, but no one explored it more thoroughly than Heinlein. You can get married for a year. At that point, if things aren't working out, no problem; you've fulfilled the terms of the contract. If you and your partner are still in love after the year is up, you can go down to the Justice of the Peace and renew your contract for another year--or, if you're feeling really confident, you can take the plunge and make a five-year commitment.
If you're going to find love in the future, you're going to have to open your mind to a broader range of possible partners. In "Time Enough for Love," that pertains not to the qualities of your partner, but the number. Heinlein foresees a future where polyamory--that is, having more than one intimate relationship at a time with everyone's consent--is the rule rather than the exception. Here, relationships are formed in groups of six, with three men and three women entering a committed partnership. And you thought romantic relationships were complicated now.
But we're just getting warmed up...
The Left Hand of Darkness By Ursula K. Le Guin
Rule #4: Pay close attention to your partner's cycles
In Ursula K. Le Guin's "The Left Hand of Darkness," there is a planet where people are neither men nor women… except once a month, when they can become either men or women. I did mention dating would get complicated in the future, didn't I?
Trouble on Triton By Samuel R. Delany
Rule #5: To find the right person, you may have to change
"But wait," you say, "this doesn't seem fair. Why can't I be one sex or the other?" You can, in Samuel R. Delany's version of the future in "Trouble on Triton." On Triton, all things are possible: You can be a man or a woman, straight or gay. While this may broaden your dating pool substantially, it may not lead you to a blissful relationship, because being anyone you want can get really complicated.
The Silver Metal Lover By Tanith Lee
Rule #6: If you can’t find someone to love, make someone to love
Maybe it would be simpler to just manufacture your perfect partner, thus dispensing with the whole messy problem of seeking out someone compatible. If you find yourself in Tanith Lee’s "The Silver Metal Lover," consider taking a chance on an android. In her future, artificial beings have become advanced enough that it's possible to love them and to be loved by them.
Lilith's Brood By Octavia E. Butler
Rule #7: Consider dating outside your species
Your final option is to find a nice alien. There are just a few dateable aliens to choose from in the annals of science fiction. I've selected the Oankali from Octavia E. Butler's trilogy "Lilith's Brood."
So there you have it--you're ready to date in the future. The rules and rituals will be complicated. You'll have many difficult decisions to make. With all of these disparate visions of future dating, one familiar touchstone remains constant: It's all about love. It may be love for a machine, or for five people at once, and you may need see-through underwear and a dating coach. But if sci-fi novels are any good at predicting the future, romantic love will still be the ultimate gauge of successful dating.
Will McIntosh is a Hugo Award winner and Nebula finalist whose short stories have appeared in Asimov's (where he won the 2010 Reader's Award for short story), Strange Horizons, Interzone and Science Fiction and Fantasy: Best of the Year, among others. His first novel, "Soft Apocalypse," was released in 2011 from Night Shade Books, and his second novel, "Hitchers," was released in February 2012.
THE concentration of carbon dioxide in the earth’s atmosphere recently surpassed 400 parts per million for the first time in three million years. If you are not frightened by this fact, then you are ignoring or denying science. Relentlessly rising greenhouse-gas emissions, and the fear that the earth might enter a climate emergency from which there would be no return, have prompted many climate scientists to conclude that we urgently need a Plan B: geoengineering.
Geoengineering — the deliberate, large-scale intervention in the climate system to counter global warming or offset some of its effects — may enable humanity to mobilize its technological power to seize control of the planet’s climate system, and regulate it in perpetuity.
But is it wise to try to play God with the climate? For all its allure, a geoengineered Plan B may lead us into an impossible morass.
While some proposals, like launching a cloud of mirrors into space to deflect some of the sun’s heat, sound like science fiction, the more serious schemes require no insurmountable technical feats. Two or three leading ones rely on technology that is readily available and could be quickly deployed.
Some approaches, like turning biomass into biochar, a charcoal whose carbon resists breakdown, and painting roofs white to increase their reflectivity and reduce air-conditioning demand, are relatively benign, but would have minimal effect on a global scale. Another prominent scheme, extracting carbon dioxide directly from the air, is harmless in itself, as long as we can find somewhere safe to bury enormous volumes of it for centuries.
But capturing from the air the amount of carbon dioxide emitted by, say, a 1,000-megawatt coal power plant would require air-sucking machinery about 30 feet in height and 18 miles in length, according to a study by the American Physical Society, as well as huge collection facilities and a network of equipment to transport and store the waste underground.
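To get a feel for the scale involved, here is a rough back-of-envelope estimate of how much CO2 such a plant emits in a year. The capacity factor and emission factor below are typical published values assumed for illustration, not figures from the article or the APS study.

```python
# Back-of-envelope: annual CO2 output of a 1,000 MW coal power plant.
# All factors below are typical published values, assumed for illustration.
capacity_mw = 1000        # nameplate capacity
capacity_factor = 0.8     # fraction of the year the plant runs at full power
hours_per_year = 8760
emission_factor = 1.0     # tonnes of CO2 per MWh of coal generation (~0.9-1.0)

mwh_per_year = capacity_mw * capacity_factor * hours_per_year
tonnes_co2 = mwh_per_year * emission_factor

print(f"{tonnes_co2 / 1e6:.1f} million tonnes of CO2 per year")  # ~7.0
```

Roughly seven million tonnes a year from a single plant, which is why the capture hardware needed to offset it runs to miles in length.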
The idea of building a vast industrial infrastructure to offset the effects of another vast industrial infrastructure (instead of shifting to renewable energy) only highlights our unwillingness to confront the deeper causes of global warming — the power of the fossil-fuel lobby and the reluctance of wealthy consumers to make even small sacrifices.
Even so, greater anxieties arise from those geoengineering technologies designed to intervene in the functioning of the earth system as a whole. They include ocean iron fertilization and sulfate aerosol spraying, each of which now has a scientific-commercial constituency.
How confident can we be, even after research and testing, that the chosen technology will work as planned? After all, ocean fertilization — spreading iron slurry across the seas to persuade them to soak up more carbon dioxide — means changing the chemical composition and biological functioning of the oceans. In the process it will interfere with marine ecosystems and affect cloud formation in ways we barely understand.
Enveloping the earth with a layer of sulfate particles would cool the planet by regulating the amount of solar radiation reaching the earth’s surface. One group of scientists is urging its deployment over the melting Arctic now.
Plant life, already trying to adapt to a changing climate, would have to deal with reduced sunlight, the basis of photosynthesis. A solar filter made of sulfate particles may be effective at cooling the globe, but its impact on weather systems, including the Indian monsoon on which a billion people depend for their sustenance, is unclear.
Some of these uncertainties can be reduced by research. Yet if there is one lesson we have learned from ecology, it is that the more closely we look at an ecosystem the more complex it becomes. Now we are contemplating technologies that would attempt to manipulate the grandest and most complex ecosystem of them all — the planet itself. Sulfate aerosol spraying would change not just the temperature but the ozone layer, global rainfall patterns and the biosphere, too.
Spraying sulfate particles, the method most likely to be implemented, is classified as a form of “solar radiation management,” an Orwellian term that some of its advocates have sought to reframe as “climate remediation.”
Yet if the “remedy” were fully deployed to reduce the earth’s temperature, then at least 10 years of global climate observations would be needed to separate out the effects of the solar filter from other causes of climatic variability, according to some scientists.
If after five years of filtered sunlight a disaster occurred — a drought in India and Pakistan, for example, a possible effect in one of the modeling studies — we would not know whether it was caused by global warming, the solar filter or natural variability. And if India suffered from the effects of global dimming while the United States enjoyed more clement weather, it would matter a great deal which country had its hand on the global thermostat.

So who would be turning the dial on the earth’s climate? Research is concentrated in the United States, Britain and Germany, though China recently added geoengineering to its research priorities.
Some geoengineering schemes are sufficiently cheap and uncomplicated to be deployed by any midsize nation, or even a billionaire with a messiah complex.
We can imagine a situation 30 years hence in which the Chinese Communist Party’s grip on power is threatened by chaotic protests ignited by a devastating drought and famine. If the alternative to losing power were attempting a rapid cooling of the planet through a sulfate aerosol shield, how would it play out? A United States president might publicly condemn the Chinese but privately commit to not shooting down their planes, or engage in “counter-geoengineering.”
Little wonder that military strategists are taking a close interest in geoengineering. Anxious about Western geopolitical hubris, developing nations have begun to argue for a moratorium on experiments until there is agreement on some kind of global governance system.
Engineering the climate is intuitively appealing to a powerful strand of Western technological thought that sees no ethical or other obstacle to total domination of nature. And that is why some conservative think tanks that have for years denied or downplayed the science of climate change suddenly support geoengineering, the solution to a problem they once said did not exist.
All of which points to perhaps the greatest risk of research into geoengineering — it will erode the incentive to curb emissions. Think about it: no need to take on powerful fossil-fuel companies, no need to tax gasoline or electricity, no need to change our lifestyles.
In the end, how we think about geoengineering depends on how we understand climate disruption. If our failure to cut emissions is a result of the power of corporate interests, the fetish for economic growth and the comfortable conservatism of a consumer society, then resorting to climate engineering allows us to avoid facing up to social dysfunction, at least for as long as it works.
So the battle lines are being drawn over the future of the planet. While the Pentagon “weaponeer” and geoengineering enthusiast Lowell Wood, an astrophysicist, has proclaimed, “We’ve engineered every other environment we live in — why not the planet?” a more humble climate scientist, Ronald G. Prinn of the Massachusetts Institute of Technology, has asked, “How can you engineer a system you don’t understand?”
Jack Glass is Adam Roberts's most fan-friendly novel to date, but will that be enough to win him a Hugo award?
The worst thing that ever happened to science fiction was getting confused with genre fiction. If any kind of literature relies on the new and the innovative to excite the reader it is SF. Genre fiction recycles, repeats and repackages the same old ideas. Space exploration, faster-than-light travel, cybernetic implants and virtual realities all stirred that fabled "sense of wonder" in the kids who grew up with them. But now those kids are running out of middle age and wonder has been replaced with nostalgia. The SF genre today is like your dad's prog rock LP collection, a last link to a lost youth.
Adam Roberts's Jack Glass is a science fiction novel about our nostalgia for science fiction novels, replete with the favourite devices of Golden Age SF. It's also a detective novel, a locked-room mystery in the style of Dorothy L Sayers or Ellery Queen. That Ellery Queen was a "house name" for many pulp writers, among them SF legend Jack Vance, underlines the fact that these stories have more in common than separates them. In an illuminating review of Jack Glass, critic Jonathan McCalmont cracks open Adam Roberts's love-hate relationship with SF's self-regarding nostalgia. Roberts is clearly a fan. But he is also a critic, and his fiction cannot help but reflect both.
Reading any of Roberts's 13 published science fiction novels, I often find myself thinking of their author as the last true science fiction writer. It's an exaggeration; there are other original voices in the field, but few as consistently and startlingly original. In a field where most writers can be relied upon to write the same book over and over again, Roberts insists on writing an entirely different book every time. Worse, far from writing a breathless homage to the giants upon whose shoulders he stands, Roberts is more often to be found affectionately taking the piss out of the genre science fiction has spawned.
(Roberts is, on his days off, also author of parodies The Soddit, Bored of the Rings and The Va Dinci Cod under the pseudonym ARRR Roberts.)
Swiftly: A Novel is an Enlightenment-era steampunk fantasy, spun from the what if? question of how the British Empire might have evolved had it enslaved the Lilliputian people of Jonathan Swift's Gulliver's Travels. Yellow Blue Tibia riffs on the heavily politicised history of Soviet science fiction to create the ultimate in paranoid conspiracy theories. New Model Army imagines a second English Civil War, and a decentralised army of hackers and tech-heads who wrest military power from the hands of the British establishment. In his novel By Light Alone, Roberts ups the political ante by taking on the new gilded age of our post-economic crash reality, depicting a world of fabulous wealth and extreme deprivation where the poor are genetically engineered to subsist, like plants, on mere daylight and oxygen.
The genuine "sense of wonder" that Adam Roberts's wonderfully original SF novels evoke is winning praise from many quarters. In April he joins China Miéville as one of the few SF authors to become the focus of a major academic conference, "New Genre Army", organised by Christos Callow and Dr Caroline Edwards of Lincoln University, to be followed by an anthology of critical writing on Adam Roberts's fiction from Gylphi. Roberts has also picked up nominations for the Kitschies and the British Science Fiction Association awards, although as the youngest of the six white, middle-aged and male candidates for the latter prize he might be considered too diverse to actually win.
Major awards within the genre of science fiction have, to date, eluded Adam Roberts. The Hugo awards, voted for by members of SF fandom attending the annual WorldCon, have demonstrated a wide range of tastes in recent years. Shortlisted titles for best novel range from the overtly nostalgic Leviathan Wakes by James SA Corey and Cryoburn by Lois McMaster Bujold to the exceptionally original Palimpsest by Catherynne M Valente and Embassytown by China Miéville. Jack Glass manages to be both nostalgic and original in equal measure, and may be the novel to win Roberts the genre's most coveted award.
Smart machines probably won't kill us all—but they'll definitely take our jobs, and sooner than you think.
THIS IS A STORY ABOUT THE FUTURE. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.
The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.
Maybe you think I'm pulling your leg here. Or being archly ironic. After all, this does have a bit of a rose-colored tint to it, doesn't it? Like something from The Jetsons or the cover ofWired. That would hardly be a surprising reaction. Computer scientists have been predicting the imminent rise of machine intelligence since at least 1956, when theDartmouth Summer Research Project on Artificial Intelligence gave the field its name, and there are only so many times you can cry wolf. Today, a full seven decades after the birth of the computer, all we have are iPhones, Microsoft Word, and in-dash navigation. You could be excused for thinking that computers that truly match the human brain are a ridiculous pipe dream.
But they're not. It's true that we've made far slower progress toward real artificial intelligence than we once thought, but that's for a very simple and very human reason: Early computer scientists grossly underestimated the power of the human brain and the difficulty of emulating one. It turns out that this is a very, very hard problem, sort of like filling up Lake Michigan one drop at a time. In fact, not just sort of like. It's exactly like filling up Lake Michigan one drop at a time. If you want to understand the future of computing, it's essential to understand this.
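One common way to unpack the lake metaphor, and the one the full essay develops, is steady Moore's-law doubling: start with one drop and double the number of drops each period, and nothing visible happens for a very long time before the lake fills all at once. A minimal sketch of that arithmetic, using approximate volumes assumed here rather than figures from the article:

```python
import math

# Why exponential progress looks like nothing happening, then everything
# at once. Volumes are rough assumptions for illustration.
lake_michigan_ml = 4.9e18   # ~4,900 cubic km of water, in millilitres
drop_ml = 0.05              # a typical water drop

drops_needed = lake_michigan_ml / drop_ml       # on the order of 1e20 drops

# Start with one drop and double the total each period.
doublings = math.ceil(math.log2(drops_needed))

print(doublings)  # 67
```

The punch line of this reading: after 66 doublings the lake is still only half full, and the final doubling does the rest, which is why decades of seemingly negligible progress are consistent with a sudden arrival.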
May the Fourth! Tomorrow's the day we celebrate all things Star Wars — which makes it the perfect day to recognize one of the great unsung contributors to the galaxy far, far away: Leigh Brackett wrote the first script draft of Star Wars: The Empire Strikes Back, and her contributions helped make the saga epic.
But before Brackett had a major hand in creating the best Star Wars movie, she was a science fiction novelist in the 1940s, writing a slew of space adventure novels with titles like The Starmen and Alpha Centauri or Die!. People called her the Queen of Space Opera — and it was not always a compliment.
At that time, space opera (like Star Wars) was looked down upon as less worthy of appreciation than other types of pulp fiction, including other types of science fiction. Brackett also wrote a lot of pulp crime fiction, and had co-written the screenplay for The Big Sleep with William Faulkner. But she chose to spend a lot of her time writing these despised novels. As her friend Michael Moorcock explains in an essay:
As Andrew Liptak quotes in his great piece on Brackett's planetary romances in Kirkus:
Of course, Brackett was a respected member of the L.A. science fiction writer community — and she was a mentor to Ray Bradbury, with whom she traded critiques and collaborated on some stories. But at the same time, her choice to write "science fantasy" or "space opera" wound up tarring her as a representative of a pulpy subgenre that many science fiction writers were embarrassed by, especially as science fiction tried to become more "mature" and sophisticated in the 1950s.
This 1976 interview with Brackett (and Edmond Hamilton) is a must-read, including the parts where she talks about the early hostility she received from some readers as a woman writing SF. She also says that many women became interested in SF after Sputnik was launched, because suddenly all of this stuff seemed real. Also in that interview, she talks about her love for Edgar Rice Burroughs and confesses, "I suppose most of my stuff would be called escape fiction. This is the type of stuff I love to read."
Also, in her introduction to The Best of Planet Stories #1 in 1976, Brackett describes "space opera" as "a pejorative term often applied to a story that has an element of adventure." And she offers a defense of space opera as "the folk-tale, the hero-tale, of our particular niche in history." Sputnik, she writes, startled the wits out of all the high-minded, important people who hadn't wanted to talk about space. But she adds:
(Also quoted in Kramer and Hartwell, Space Opera Renaissance.)
The irony is that, according to Michael Moorcock, Brackett's well-written stories, despite having larger-than-life heroes, actually helped to launch the movement to make the genre more adult, sophisticated and literary. Moorcock has called Brackett "one of the godmothers of the New Wave." She also stretched out in her later work, including one of the great post-apocalyptic novels, The Long Tomorrow.
(By the way, there's a great Leigh Brackett tribute site, run by Blue Tyson, over here.)
But if Brackett was feeling defensive about her contributions to space opera in 1976 (as the Planet Stories introduction shows she was), then she received some amazing vindication — even if some of it arrived after her death. Not only did Star Wars make the genre of space opera suddenly mainstream and huge, but Brackett was hired to write the screenplay for the sequel.
According to John Baxter's book Mythmaker (quoted here), a friend handed Lucas a copy of one of Brackett's books, and told Lucas: "Here is someone who did the Cantina scene better than you did." Baxter describes the phone conversation between Lucas and Brackett thusly:
After that, Lucas started out by having a week-long story conference with Brackett, according to The Secret History of Star Wars. During this time, he hashed out a lot of the story points that wound up in the final film, including the character of Yoda — and the notion that Luke has a twin sister, which isn't brought up until Return of the Jedi. After a Thanksgiving break, they resumed the story conference, which led to a 55-page transcript in which a lot of stuff was hashed out, according to J.W. Rinzler's The Making of The Empire Strikes Back.
It's fashionable to disparage Brackett's contributions to Empire — Lucas himself says that her script wasn't what he wanted at all, and she died of cancer before she could do any rewrites. Lucas is quoted in The Annotated Screenplays as saying, "During the story conferences I had with Leigh, my thoughts weren't fully formed and I felt that her script went in a completely different direction." (You can read the entire script draft here, and a list of differences from the final film here.)
But it's not true that none of Brackett's storyline winds up in the final movie — the basic story beats are the same. And there is at least one aspect of Brackett's draft that's way better than what Lucas eventually ended up with: the character of Luke's twin sister, named Nellis in Brackett's screenplay. From The Annotated Screenplays:
It's probably true, as Lawrence Kasdan says in Rinzler's book, that Brackett's screenplay doesn't quite get the feel of what George Lucas was going for, and that her work represents the sensibilities of an earlier era. Lucas was in the middle of revolutionizing space opera, for better or worse, and Brackett represented an earlier era that was closer to the Burroughs planetary romances.
And yet, a lot of what makes Empire great is still traceable to those early story conferences that she and Lucas had together. And in a lot of ways, her credit as screenwriter for one of the greatest space adventures of all time is vindication for someone who chose to write space opera at a time when that term was considered a put-down.