“Today, the Drone Racing League (DRL) announced its inaugural racing season. The league hopes to be the Formula 1, NASCAR and MotoGP of drone racing, and has secured backing from venture capital firms and celebrities to make that a reality.”
I recently watched the XCD drone racing event highlights and have to admit I was rather disappointed. It failed to deliver the excitement of pod racing the way a Star Wars extravaganza could.
There, the only excitement came from pods crashing into neon lights, delivering trippy psychedelic visuals. Fortunately, technical difficulties provided countless occasions to enjoy crashes.
And to me, that’s where the potential of broadcast drone racing lies: non-lethal crashes for all to enjoy!
Jokes apart, First Person View (FPV) videos of racing and freestyling drones are mesmerizing! Admittedly, as with VR, motion sickness would have to be addressed, but assuming image stabilization and overall definition will not cease to improve, it is only a matter of time before we can experience these races in Virtual Reality!
“Some of the biggest issues plaguing the fledgling sport right now are technical. The video feed between a drone and the video goggles needed to pilot is currently rather grainy. Higher quality HD videos cause the feeds to lag, which can lead to acute motion sickness for drone pilots and a sense that you’re watching a beat-up old VHS recording for the audience. The DRL’s solution to this problem is a bold one: Forget live viewing (at least for now).”
Live broadcasting is crucial to compete for the increasingly volatile audience’s attention, as well as for more mundane activities such as betting, nurturing local rivalries and crunching stats for fantasy sports competitions.
And live video game streaming on Twitch proved key to triggering viewers’ engagement. But there also seem to be plenty of people who enjoy eSports replays, walkthroughs and let’s-play videos on demand while maintaining community engagement.
And although “First-person drone racing, where racers fly drones using video goggles connected through radio to the drone, is a burgeoning sport,” the DRL structure borrows from Formula One and downhill ski racing. And the broadcasting of those fast-paced, action-packed races doesn’t rely on one camera angle; it’s a mix of views, so FPV is not the only thrilling option drone racing could consider.
Other sports broadcasts, such as rugby, provide 15+ different camera angles to choose from.
And yet, being at the stadium adds to the experience: the chants, the waves, live social interactions with perfect strangers, and the fact that wherever you are seated, you still have a fairly complete view of the action (with more or less granularity). However, for large racing events such as F1 and downhill skiing, being in one specific spot provides only a glimpse of the action for each competitor. This can be frustrating.
Drone racing could offer both the experience of a live venue event and the additional features of online (and possibly non-live) broadcasting.
What all of the AI researchers mentioned below agree on is that the future of open world game design is going to be less about lots of scripted stuff being fed to the player, and more about the player collaborating with the computer to create fresh, personal stories together.
'Michael Cook, a computational creativity researcher at Goldsmiths University, began development of ANGELINA, a computer program capable of designing its own games using assets drawn from image search engines. He thinks the future of this genre is about handing creative power to the machine itself.'
'Julian Togelius is a university professor who has spent the last five years researching the concept of procedurally generated content in games. In the Togelius infinite world, you'll be able to drive a car in one direction for several miles and find that the game has built a city at the end of the journey, just for you. What's more, that city will be populated with characters who act like real humans rather than bizarre automatons. It will be the player's interactions with these characters that creates the stories.'
'Jeff Orkin [thinks] "last year's Shadow of Mordor did a great job of producing memorable moments through their Nemesis system, which created randomised enemies with a memory of past encounters with the player that they could vocalise in their taunts. This is a step in the right direction. Contextual dialogue ties events together, and explains to the player what the NPCs are thinking, and how decisions made in previous encounters have led to the current situation. However, giving every NPC the ability to dynamically voice contextual dialogue is an intractable problem in an open-world game." [...]
Orkin explains, "by recording several gameplay sessions of each scenario, cross referencing them with other similar scenarios, and doing some tagging and clustering to associate semantic meta-data, we can generate a vast dialogue database to draw from depending on the context."
He also foresees a future of community contributed dialogue, with players able to record their own voice performances, which the GroupPlay system could add to the game.'
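Orkin’s pipeline — record gameplay sessions, tag lines with semantic metadata, then retrieve dialogue by context — can be sketched as a toy lookup. Everything below (data, tags, function names) is invented for illustration and is not GroupPlay’s actual code:

```python
# Toy sketch of context-driven dialogue retrieval: recorded lines carry
# semantic tags; at runtime we pick the line whose tags best overlap the
# current game context.
dialogue_db = [
    ({"defeat", "revenge", "orc"}, "You bested me once. Never again!"),
    ({"first_meeting", "orc"}, "Who dares enter my camp?"),
    ({"player_fled", "taunt"}, "Running away again, coward?"),
]

def pick_line(context_tags):
    """Return the recorded line sharing the most tags with the context."""
    score = lambda entry: len(entry[0] & context_tags)
    best = max(dialogue_db, key=score)
    return best[1] if score(best) > 0 else "..."

print(pick_line({"defeat", "revenge"}))  # matches the Nemesis-style taunt
```

At scale, the clustering Orkin describes would replace the hand-written tags here, but the retrieval idea is the same: context in, best-matching recorded line out.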
'Mark Riedl heads up the Entertainment Intelligence Lab at the Georgia Tech School of Interactive Computing. For the past three years he's been working on a project named Scheherazade, a crowd-powered automated story generation system, which creates its own interactive fiction by studying groups of plot lines developed by humans.'
'What both Orkin and Riedl are working toward then is a concept known as drama management, a program of interlocking AI systems that serve quests and narrative adventures to players - often via intelligent NPCs - within a dynamic open world. "This is a growing and exciting area of research," says Cook. "A lot of work is being done on making game worlds both rich and unpredictable, as well as making them more controllable by designers. At the Experimental AI for Games workshop this year we had a paper called 'Toward Characters Who Observe, Tell, Misremember, and Lie' by folks at UC Santa Cruz.
[...] For me this is a powerful new concept in AI systems - and it's a system, not a script. We're familiar with characters in games lying to us, but only when a writer or designer decides they should. This is a changing, malleable system that demands a new kind of thinking from players. Open world games are falling prey to over-scripting and over-design - they're more like really big closed worlds than truly open ones. We need these kinds of unpredictable and organic AI systems to add life back to our open world games."
" a deeper system, combining the Nemesis concept with the Talk of the Town technology would provide a world filled with dynamic Machiavellian plotters. And that means a very different set of games.
"If NPCs possessed social reasoning, then open-ended action games can be about power dynamics," says Riedl. "They may support strategies in which the player forges and betrays lasting relationships with NPCs instead of just shooting. Think of an interactive version of The Wire, House of Cards, or The Sopranos. These TV shows are about the weaving and unweaving of complex interpersonal relationships as a means to an end. This provides new, complex strategies that may complement the action-oriented aspects of the game. NPCs with innocuous beginnings could become regular companions, or grow into mortal enemies. An indication of success would be the player having a strong emotional response to the procedurally generated actions of an NPC, or to the betrayal or demise of a favourite character."
Across short play sessions, players retell the same myth with different characters the way that a storyteller might continually recast Pandora, The Monkey King, or other ancient heroes. It's a tradition dating back across many cultures, including Polynesian storytelling and classical myth making.
In searching for an opportunity to have players experience “the multiple truths of storytelling,” Short hit upon three fundamental lessons for investing players in a randomized quest.
Extensive dialogue trees and multiple truths according to context - deep, bro.
The newest bit of gimmicky fun from the folks over at Google takes the company's ubiquitous Street View cameras and miniaturizes them to give you an unprecedentedly close look at Hamburg's famed Miniatur Wunderland, the largest model train set in the world.
"The cameras used for pulling this off—seen above—were developed as a joint project between Google and Ubilabs. And with over 8,000 miles of track and more than 200,000 plastic denizens, Miniatur Wunderland has plenty for them to explore. So hop on over to the Google Maps page for the project and check out everything from the functioning airport to the mini Las Vegas strip."
This is a crazy set-up! All those computer-controlled vehicles are wonderful!
Intel CEO Brian Krzanich asked marketing director of perceptual computing Anil Nanduri what he would do with 100 flying drones. Nanduri put that challenge to a small group of artists and technology researchers at Ars Electronica Futurelab in Hamburg, Germany.
'To transform the idea into reality, Horst Hoertner, senior director of Futurelab, said he focused on the future.
“It’s the only thing that can be created,” he said. “Everything else is already created. Hope and curiosity is the drive that helps you get things done that have never been done before,” he said.'
The article doesn't simply praise the Intel PR stunt; it mentions regulatory issues surrounding disruptive technologies, and this quote from the Futurelab director is a great echo of Nikola Tesla's:
“Let the future tell the truth, and evaluate each one according to his work and accomplishments. The present is theirs; the future, for which I have really worked, is mine”
"You’re building the plane while you’re flying it," says Ted Schilowtiz, 20th Century Fox’s futurist—and per his business card its "consiglieri"—of the Hollywood studio’s production of rich VR experience based on its recent mega-hit film, The Martian. "We didn’t have all the answers when we started, and we still didn’t have all the answers when we finished, but we know we created something meaningful. In scope and scale; it feels like a significant entertainment product."
Read: A significant stand-alone entertainment product.
I previously referred to Fox's tech lead on the production of VR experiences.
But this article elaborates on the "business" aspect of the 20-minute interactive VR experience built around its movie The Martian.
It's not a marketing stunt - they want to charge for it :)
UPDATE: http://sco.lt/7A5sMj > 10 years till mainstream, now about $1,500 a setup... who's going to p(l)ay for it, and where? Can't wait to see how this unfolds logistically. Arcades of the 21st century with padded walls, 360° treadmills and "4D" cognitive suits?
The article also mentions the never-ending quest for VR storytelling.
On this front, I tend to side with the views of Pixar's Catmull.
But I'd love to be proved wrong. Let's say that the storytelling in The Martian VR experience is more of an interactive opportunity for the movie's watchers to go deeper - testing themselves in the hero's shoes. It's not storytelling to me... but an empathetic immersive trip.
"Nancy Bennett, the chief content officer at Two Bit Circus, a media production company in Los Angeles that often works in VR. [illustrate how Fox might transcend the "real" story telling issue]
"Fox Studios is [...] using VR for what VR is best for, an exploration of an environment that is established by a movie, but takes it to the next level, that’s the right thing to do, and it’s interesting."
I'm interested!!! But I'd like to see how this project performs business-wise, as it's one of the first Hollywood VR projects scheduled for market distribution - not a marketing VR experience funneling to a movie, more the other way around :)
Disney Research and ETH Zurich designed a prototype called the VertiGo robot. Using two tilting propellers, it can drive up walls, adjusting its thrust to stick to the surface even when travelling over uneven surfaces such as brick.
Can't help but wonder why Disney funds such research, but I find it as interesting as watching miniature off-roading like this
There is a great future in remotely operating cool gear - for fun or for more serious matters. And maybe the turnover of video gamers employed for search 'n' rescue wouldn't be as high as that of US Army drone pilots, who are quitting faster than the demand can be staffed.
This animated short — called Special Delivery and released today — is the latest project released for Spotlight Stories, a smartphone-based video platform run by Google’s Advanced Technology And Projects (ATAP) group. It’s an interactive YouTube video compatible with Android phones, with a non-interactive 360-degree video version for iOS or web users. A collaboration with UK studio Aardman Animations, the project is another small step towards turning a Google experiment into a new artistic medium.
"Aardman’s animation, though, had to look good from several different possible camera angles. It couldn’t be linear, and it had to progress at the viewer’s pace. In Special Delivery, some vignettes only begin when you look closely at them, and major story events will wait until you’re paying attention. Instead of a screen, the team had to imagine something more like a stage. They even built a circular cardboard "set" as a storyboard, blocking out the movement of their characters in physical space.
"You’re giving away the camera to the audience, which is a bit nerve-wracking," says director Tim Ruffle. "After a while, you kind of get the idea that you’re trying to create an experience for people, rather than creating a show."
That’s a sentiment that’s heard more and more often, from directors exploring the nascent field of virtual reality video."
The article mentions VR's limitations and its differences from casual 360° immersion. It also meditates on the line being blurred between motion pictures and video games.
The MUSE platform is a brand-new third-generation blockchain based on the Graphene Toolkit and specifically tailored to meet the music industry’s needs. It can be viewed as a membership organisation in the cloud that acts as a global database for copyright-related information. It is designed for all music-related payments – including royalties, music sales, merchandise and concert ticket sales.
"One [interesting] feature worth mentioning are the Tokens: limited and tradeable tokens that artists can use to give fans a VIP pass into their world, offering anything from discounts on merchandise to concert tickets and backstage passes, and even advertiser give-aways. Tokens are a tool that helps the undiscovered artist gain exposure while helping the already discovered artist engage with his or her fan base."
More on the future of blockchain technology (and its development, backed by big banks and Silicon Valley VCs) in this article referring to Aite reports.
That's what the city of Paris will now offer, through its first time-travelling VR kiosk. Its appearance will probably remind you of the binoculars found at the viewing points of many tourist spots, used to admire the landscape up close. Here, there is no magnification of the real environment but a dive into the depths of the past, through an immersive 3D reconstruction. The goggles mounted on the kiosk are akin to the VR headsets we cover daily on this blog. They let you watch a 360° film, developed by a Parisian studio from period illustrations and maps.
It seems promising, especially at a site like La Bastille, where many tourists are surprised not to find anything at the indicated spot :)
Sequenced is the first animated series in 360° for virtual reality headsets. The story adapts in real time to the user’s focus: the characters and environment react to the user’s presence in the scene. Set in a near future, the first episode of Sequenced will come out in 2016 for iOS, Android, Oculus Rift and HTC Vive.
"Gaze interactive, The proprietary technology developed in house since 2013 to create the dynamic narrative is under development and will become available for Unity developers and designers around the same time as the release of Sequenced. More information on gaze interactive.com"
"Gaze is a proprietary tool for designers, developers and storytellers of all kinds willing to investigate the creation of interactive experiences for mobile devices and VR helmets.
Gaze helps and simplifies the development of unique interactive scenes by enabling the whole virtual environment to react to the user’s position and orientation in space. Based on triggers and actions, the characters and props come to life, aware of the user’s presence, creating a truly magical experience."
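A gaze trigger of the kind described above can be sketched as a cone test on the user's view direction: an action fires once the angle between where the user is looking and a target drops below a threshold. This is a minimal illustration of the concept, not Gaze's actual API:

```python
import math

# Toy gaze trigger: returns True when the view direction lies within a
# cone of `cone_deg` degrees around the target direction.
def is_gazed_at(view_dir, target_dir, cone_deg=15.0):
    dot = sum(v * t for v, t in zip(view_dir, target_dir))
    norm = (math.sqrt(sum(v * v for v in view_dir))
            * math.sqrt(sum(t * t for t in target_dir)))
    # Clamp to avoid domain errors from floating-point rounding.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= cone_deg

# A prop "wakes up" only once the user actually looks at it.
if is_gazed_at(view_dir=(0.0, 0.0, 1.0), target_dir=(0.05, 0.0, 1.0)):
    print("character starts its animation")
```

In a real engine the trigger would also debounce (require the gaze to dwell for some time) before firing its action, so a glance doesn't set everything off.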
This seems like legit tech to fuel narrative perspectives according to the "player's" behaviour. I'm curious to see to what extent the storyline is impacted by various reactions - how does it arc?.. or not...
If you teach or have kids of your own, here’s a great way to get them into chemistry. On the homeschooling blog Teach Beside Me, Karyn Tripp shows how to create a Battleship-esque game with a periodic table.
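The Battleship-style mechanic is easy to picture in code: "ships" occupy element squares addressed by (period, group), and players call out coordinates. A minimal, hypothetical sketch (the element list is a tiny excerpt, not a full periodic table):

```python
# Periodic-table Battleship: squares are addressed by (period, group).
elements = {(1, 1): "H", (2, 1): "Li", (2, 2): "Be", (3, 1): "Na"}  # tiny excerpt

ship = {(2, 1), (2, 2)}  # a two-square "ship" placed on Li and Be

def call_shot(period, group):
    """Return 'hit' or 'miss' and name the element at that square."""
    square = (period, group)
    result = "hit" if square in ship else "miss"
    return f"{result} on {elements.get(square, 'empty square')}"

print(call_shot(2, 1))  # hit on Li
print(call_shot(3, 1))  # miss on Na
```

Kids end up looking up periods and groups every turn, which is the whole point.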
I would see it more as Guess Who?, but I like the idea, because learning this stuff is SO boring.
Also, you could have the Biological Pathways Maps as RPGs :)
Oculus tripped over the (non-virtual) rug by announcing the price of its virtual reality headset at $599. Reason enough to take a look at the open-source OSVR headset, developed by a consortium of manufacturers.
Oculus and the rest of the high-end market are probably going to take quite a blow from this initiative - which seems better than Cardboard & Gear but not as advanced as, yet way cheaper than, the Vive, PlayStation VR, and the recently announced (without a price tag) http://www.starvr.com/
Open Source + Virtual Reality = 2 pillars of the 21st century's disruptive blast coming our way
The article explores the differences between the two "mediums", stressing that "VR’s biggest strength is also its greatest weakness. The immersive nature of VR hinders users from interacting with their surroundings. [It] has already started to revolutionize the way we watch content, but will never be the technology we turn to in our everyday lives."
Whereas AR brings layers of info to our daily lives, the tech has not delivered yet due to the issue of visualization. But "Microsoft is working on HoloLens AR headset glasses. Developer kits are scheduled to hit the market in early 2016. Google invested in a company called Magic Leap, whose technology beams lasers into the viewer’s iris to activate AR. That future will become a reality in another year’s time."
UK-based Tesla Studios recently launched a Kickstarter for their Teslasuit. It uses EMS (Electro-Muscular Stimulation) fed to your nervous system via a skintight suit covering the user’s body. The latest prototype delivers a simulation of a sense of touch or pressure on your skin, so if someone pokes you in VR, you feel it at the corresponding point on your body.
> NOT Elon Musk's Tesla related <
"The team claim the system is compatible with all VR headsets, but we’re currently waiting on answers to precisely what this means in practice. According Tesla Studios, the suit itself is powered by a suit-mounted processing unit, packing a Quad Core 1Ghz processor and 1GB RAM and a 10000mAh battery. The unit runs a proprietary OS called Tesla OS. The processing units are designed to be modular, with Haptics, Motion Capture and Climate Control all receiving their own units, linked to the main processing unit – worn on the belt.
[...] Questions remain around VR headset integration and how software interfacing, which will presumably have to be specifically integrated to get the most out the the hardware, will work."
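How a virtual poke gets mapped to a point on the suit isn't detailed, but a plausible toy version is a nearest-electrode lookup: take the VR contact point and activate the closest haptic unit. All names and coordinates below are hypothetical, not Tesla Studios' software:

```python
# Speculative sketch: map a VR contact point (x, y, z in metres) to the
# nearest haptic electrode on the suit.
electrodes = {
    "left_shoulder": (-0.2, 1.4, 0.0),
    "right_shoulder": (0.2, 1.4, 0.0),
    "chest": (0.0, 1.2, 0.1),
}

def nearest_electrode(contact):
    """Return the name of the electrode closest to the contact point."""
    def dist2(pos):
        return sum((a - b) ** 2 for a, b in zip(pos, contact))
    return min(electrodes, key=lambda name: dist2(electrodes[name]))

# A poke near the right shoulder in VR pulses that electrode.
print(nearest_electrode((0.19, 1.38, 0.02)))  # right_shoulder
```

A real suit would track the wearer's pose so electrode positions move with the body, but the mapping step would look much like this.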
AI "cannot make sense of what we are trying to accomplish or why," says Associate Professor Mark Riedl. His Entertainment Intelligence Lab's research into computational narrative intelligence at the Georgia Institute of Technology is one potential way of overcoming that barrier, by opening the possibility for AI to “create rapport with humans by sharing virtual vignettes.”
In keeping with this, Scheherazade-IF was not designed with entertainment-oriented games in mind and may not be suitable for that role. Riedl says that this is the result of the median-based model of this system, which means that its stories would “largely avoid the dramatic twists that one would want in a strongly-story driven game.” The team has, however, discussed ways to make the system’s vignettes more dramatic, although practical application of such theories has not yet been attempted.
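Riedl's point about the median-based model can be illustrated with a toy generator (this is not Scheherazade's actual algorithm): if each next event is whatever happens most often across the crowdsourced example stories, rare dramatic turns simply never get picked.

```python
from collections import Counter

# Crowdsourced plot lines for a "bank visit" scenario: most are mundane,
# one contains a dramatic twist.
crowd_stories = [
    ["enter_bank", "wait_in_line", "withdraw_cash", "leave"],
    ["enter_bank", "wait_in_line", "withdraw_cash", "leave"],
    ["enter_bank", "pull_gun", "demand_cash", "flee"],  # the rare twist
]

def most_typical_next(event):
    """Pick the most common event following `event` across all stories."""
    nexts = Counter(
        story[i + 1]
        for story in crowd_stories
        for i, e in enumerate(story[:-1])
        if e == event
    )
    return nexts.most_common(1)[0][0]

print(most_typical_next("enter_bank"))  # always the mundane continuation
```

This is exactly why the team is discussing ways to deliberately sample away from the typical path when drama is wanted.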