There’s no denying that video is increasingly consumed in vertical rather than horizontal format — think smartphone apps like Snapchat — to the dismay of some purists. “There is a question about vertical video’s potential,” says Aubert. “People are consuming on their phones, so they could be looking for vertical content. But content is distributed on all screens now. Every kind of creativity is interesting; it’s a new form of storytelling.”
But as video players adapt to this new way of filming, just as they slowly but surely switched from 4:3 to 16:9, the letterboxing issue disappears, making vertical video a legitimate form of narration by framing. The IMPACT video perfectly illustrates this point.
ClickHole, the Buzzfeed-parodying offshoot of The Onion's satirical news enterprise, has, improbably, become a small haven for funny text-based games.
"But even within the awkward constraints of the slideshow, ClickHole’s writers have figured out how to do some complicated things. Due to the length and complexity of the ClickVenture—each one might run anywhere from one to a few hundred slides, and a new one drops every Thursday—each is assigned to a single writer, who carries it from conception to a finished product with little oversight."
There are a number of theories and frameworks circulating in the world of game design, but little progress has been made toward unifying them. By locating crossover territory between MDA/MMDA, PENS, Flow Theory, and Intensity/Engagement Curves, we’re able to see a new theoretical network:
- Rough ideas beget MDA/MMDA aesthetics
- Each MDA/MMDA aesthetic begets a PENS axis distribution
- Each PENS axis is developed through Flow, the mental state of optimal experience
- And each Flow channel is built through an Intensity/Engagement curve

Given the fractal nature of intensity/engagement curves, this progression then continues recursively until the mechanical workspace of the designer is too minute to effectively adjust. And while the exact execution of the intensity/engagement curve is still left up to the creative minds of designers, we can begin to construct a more consistent, procedural way to create sustained engagement from even the simplest game ideas.
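The recursive progression described above can be sketched as a simple tree of design layers. This is my own toy illustration of the idea, not an implementation from any of the cited frameworks; the `DesignNode` structure and layer names are invented for the example.

```python
from dataclasses import dataclass, field

# Invented illustration of the recursive framework progression:
# idea -> aesthetic -> PENS axis -> flow channel -> intensity curve.
LAYERS = ["idea", "aesthetic", "pens_axis", "flow_channel", "intensity_curve"]

@dataclass
class DesignNode:
    label: str                       # e.g. an MDA aesthetic or a PENS axis
    layer: str                       # which framework layer this node sits on
    children: list = field(default_factory=list)

def expand(node: DesignNode, depth_limit: int) -> None:
    """Recursively expand a design node one framework layer at a time,
    stopping when the designer's workspace becomes too fine-grained."""
    i = LAYERS.index(node.layer)
    if i + 1 >= len(LAYERS) or depth_limit == 0:
        return
    child = DesignNode(label=f"{node.label}/{LAYERS[i + 1]}", layer=LAYERS[i + 1])
    node.children.append(child)
    expand(child, depth_limit - 1)

root = DesignNode("stealth game", "idea")
expand(root, depth_limit=4)
```

In a real design session each node would branch into several children (one per aesthetic, per axis, and so on); the single-child chain here just makes the layer-by-layer recursion visible.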
I discovered this 2013 article and think the intent is pretty interesting. The author tries to unify the different theoretical game design frameworks that have stood out: Flow, MDA/MMDA, PENS...
Very interesting read.
Check out the massive (and mostly free) online resources from Immersyve, creators of the PENS model, which is grounded in Self-Determination Theory.
"In fact, our data indicates that there is a very important point to keep in mind when developing carrots – they are most motivating when they specifically enhance the player’s experience of competence, autonomy and relatedness."
Completely reengineered for virtual reality, Orion takes VR to the next level. Download the early access beta, reach out, and see what you’ve been missing.
Leap Motion, the company behind the sensitive hand-tracking hardware, has unveiled a new system in the form of a hand-tracking engine specifically designed for use in virtual reality applications.
Named Orion, the new virtual reality hand-tracking engine has been built from the ground up for VR and builds on the original hand-tracking device the company unveiled a few years ago.
It could lead the way to unobtrusive, non-clumsy interfaces for hand interaction in VR. Promising.
"There's a spatial processing unit inside," he said. "It actually maps the world by itself and is not sensitive to sunlight or any lighting changes [...] That's innate within the system. That's its heart."
This sounds great, as it would considerably enlarge the potential for mind-blowing VR/AR experiences as well as increase safety.
And on the other hand, it could well be yet another PR coup, because aiming for a late-spring release when you cannot show a working version of your gear at GDC seems ballsy - epic crunch ahead!
Announced tech specs:
2560 x 1440 pixel OLED display with a 110-degree field of view. AMD FX-8800P processor supported by Radeon R7 graphics together with 8GB RAM and 256GB solid state disk (SSD) storage.
Other features include support for connectivity via Bluetooth, Wi-Fi and the inclusion of 3D spatial audio.
The MUSE platform is a brand-new third-generation blockchain based on the Graphene Toolkit and specifically tailored to meet the music industry’s needs. It can be viewed as a membership organisation in the cloud that acts as a global database for copyright-related information. It is designed for all music-related payments – including royalties, music sales, merchandise and concert ticket sales.
"One [interesting] feature worth mentioning are the Tokens: limited and tradeable tokens that artists can use to give fans a VIP pass into their world, offering anything from discounts on merchandise to concert tickets and backstage passes, and even advertiser give-aways. Tokens are a tool that helps the undiscovered artist gain exposure while helping the already discovered artist engage with his or her fan base."
More on the future of blockchain technology (and its development, backed by big banks and Silicon Valley VCs) in this article referring to Aite reports.
This is what the city of Paris will now offer, through its first time-travelling VR viewing station. Its look will no doubt remind you of the binoculars found at the viewpoints of many tourist spots, used to admire the landscape up close. Here, there is no magnification of the real environment but a plunge into the depths of the past, through an immersive 3D reconstruction. The goggles mounted on the station resemble the VR headsets we discuss daily on this blog. They let you watch a 360° film, developed by a Parisian studio on the basis of period illustrations and maps.
This looks promising, especially on a site like the Bastille, which many tourists are surprised not to find at the indicated spot :)
Sequenced is the first 360° animated series for virtual reality headsets. The story adapts in real time to the user’s focus: the characters and environment react to the user’s presence in the scene. Set in the near future, the first episode of Sequenced will come out in 2016 for iOS, Android, Oculus Rift and HTC Vive.
"Gaze Interactive, the proprietary technology developed in-house since 2013 to create the dynamic narrative, is under development and will become available for Unity developers and designers around the same time as the release of Sequenced. More information on gaze interactive.com"
"Gaze is a proprietary tool for designers, developers and storytellers of all kinds willing to investigate the creation of interactive experiences for mobile devices and VR helmets.
Gaze helps and simplifies the development of unique interactive scenes by enabling the whole virtual environment to react to the user’s position and orientation in space. Based on triggers and actions, the characters and props come to life, aware of the user’s presence, creating a truly magical experience."
This seems like a legit tech to fuel narrative perspectives according to "players'" behaviours. I'm curious to see to what extent the story line is impacted by various reactions, how does it arc?.. or not...
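The trigger/action mechanism the Gaze quote describes (the environment reacting to where the user looks) could be sketched roughly like this. This is a toy illustration of the general concept, not the actual Gaze SDK API; all names here are invented.

```python
import math

def gaze_hits(gaze_dir, target_pos, user_pos, threshold_deg=10.0):
    """True when the user's gaze direction points at the target
    within a small angular threshold."""
    to_target = tuple(t - u for t, u in zip(target_pos, user_pos))
    norm = math.sqrt(sum(c * c for c in to_target))
    to_target = tuple(c / norm for c in to_target)
    dot = sum(g * t for g, t in zip(gaze_dir, to_target))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= threshold_deg

class GazeTrigger:
    """Fires its action the first time the user looks at the target."""
    def __init__(self, target_pos, action):
        self.target_pos = target_pos
        self.action = action
        self.fired = False

    def update(self, user_pos, gaze_dir):
        if not self.fired and gaze_hits(gaze_dir, self.target_pos, user_pos):
            self.fired = True
            self.action()

# A character waves the first time the user looks at it.
events = []
trigger = GazeTrigger(target_pos=(0.0, 0.0, 5.0),
                      action=lambda: events.append("wave"))
trigger.update(user_pos=(0.0, 0.0, 0.0), gaze_dir=(0.0, 0.0, 1.0))
```

A full system would run many such triggers every frame and chain actions into branching narrative states; the one-shot flag here is just the simplest version of "aware of the user's presence".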
If you teach or have kids of your own, here’s a great way to get them into chemistry. On the homeschooling blog Teach Beside Me, Karyn Tripp shows how to create a Battleship-esque game with a periodic table.
I would see it more as Guess Who?, but I like the idea, because learning this stuff is SO boring.
Also, you could turn the Biological Pathways Maps into RPGs :)
Oculus tripped over the (non-virtual) rug by announcing its virtual reality headset at $599. A good reason to take a look at the open-source OSVR headset, developed by a consortium of manufacturers.
Oculus and the rest of the high-end market will probably take quite a blow from this initiative - which seems better than Cardboard & Gear VR, and not as advanced as, yet way cheaper than, Vive, PlayStation VR, and the recently announced (without a price tag) http://www.starvr.com/
Open source + virtual reality = two pillars of the 21st century's disruptive blast coming our way
The film festival just ended with a custom show called The Bomb, which blew some minds with live music and an immersive screening.
Earlier this year at SXSW, I met Brazilian film director Ricardo Laganaro, who worked on the beautiful semi-spherical experience at Rio's Museum of Tomorrow. He was also convinced that VR headsets were killing the social bond and that other directions integrating a shared experience should be explored.
We just had a review pop up a few days ago, and the reviewer basically opened by saying that he can be pretty finicky about survival games because he’s a wilderness survival instructor. He said our game models actual wilderness survival the most closely and accurately of anything he’s played. That’s great praise for me. That made me so happy, because that was the intent. When we hit it, people who know this stuff are like, “Whoa, I recognize this. They did it correctly.”
The game's creators admit that they depart from real survival techniques at times when necessary for design/entertainment reasons - yet this seems like a legit way to learn a thing or two while having fun.
A complete and total gamechanger for the animation industry moving forward.
"With one announcement, the animation software game may have changed forever. Toonz, the software used by Studio Ghibli to produce films like The Tale of the Princess Kaguya, Howl’s Moving Castle, Ponyo and The Wind Rises, will be made free and open source to the animation community beginning March 26, 2016."
In another article pointing to the download link, the author recounts that
"OpenToonz has already generated tremendous interest from the animation community. A user forum launched in the past 24 hours already boasts hundreds of discussions, while a development forum is launching deeper discussions of how to build out the software. Animators have also launched unofficial resource guides like this one on Tumblr and have started posting basic video tutorials"
Introducing Amazon Lumberyard, a free AAA game engine deeply integrated with AWS and Twitch
Lumberyard is a free and cross-platform game engine which offers developers tight integration with the cloud based Amazon Web Services (AWS) platform and with Twitch (now owned by Amazon).
The engine is based on Crytek’s CRYENGINE and currently supports PC, PlayStation 4 and Xbox One platforms, with additional support for Mac/Linux and iOS/Android coming soon.
Amazon GameLift is another new service for deploying, operating, and scaling session-based multiplayer games. With Amazon GameLift, Amazon Lumberyard developers can quickly scale high-performance game servers up and down to meet player demand, without any additional engineering effort or upfront costs.
We're starting to see Amazon's strategy unfold after it got into entertainment and production: from AWS (cloud and work services) to game studios, TV/films and the recent acquisition of Twitch... They have ambitions beyond renting goats :)
In addition to their core distribution business, they work on both content production and infrastructure - which will likely make them a very strong actor in the entertainment industry in the years to come.
“Today, the Drone Racing League (DRL) announced its inaugural racing season. The league hopes to be the Formula 1, NASCAR and MotoGP of drone racing, and has secured backing from venture capital firms and celebrities to make that a reality.”
I recently watched the XCD drone racing event highlights and have to admit I was rather disappointed. It failed to deliver the excitement of pod racing the way a Star Wars extravaganza could.
There, the only excitement came from pods crashing into neon lights that delivered trippy psychedelic visuals. Fortunately, technical difficulties provided countless occasions to enjoy crashes.
And to me, that's where the potential of broadcast drone racing lies: non-lethal crashes for all to enjoy!
Joking aside, first-person view (FPV) videos of racing and freestyling drones are mesmerizing! Admittedly, as with VR, motion sickness would have to be addressed, but assuming image stabilization and overall definition keep improving, it is only a matter of time before we can experience these races in virtual reality!
“Some of the biggest issues plaguing the fledgling sport right now are technical. The video feed between a drone and the video goggles needed to pilot is currently rather grainy. Higher quality HD videos cause the feeds to lag, which can lead to acute motion sickness for drone pilots and a sense that you’re watching a beat-up old VHS recording for the audience. The DRL’s solution to this problem is a bold one: Forget live viewing (at least for now).”
Live broadcasting is crucial to compete for the increasingly volatile audience's attention, as well as for other mundane activities such as betting, nurturing local rivalries and crunching stats for the sake of fantasy-sports competition.
And live video game streaming on Twitch has proved key to triggering viewers' engagement. But there also seem to be plenty of people who enjoy esports replays, walkthroughs and let's-play videos on demand while maintaining community engagement.
And although “First-person drone racing, where racers fly drones using video goggles connected through radio to the drone, is a burgeoning sport,” the DRL structure borrows from Formula One and downhill ski racing. And broadcasts of those fast-paced, action-packed races don't rely on one camera angle; they mix views, so FPV is not the only thrilling option drone racing could consider.
Other sports broadcasts, such as rugby, provide 15+ different camera angles to choose from.
And yet, being at the stadium adds to the experience: the chants, the waves, live social interactions with perfect strangers, and the fact that wherever you are seated, you still have a rather complete view of the action (with more or less granularity). However, for large racing events such as F1 and downhill skiing, being in one specific place provides only a glimpse of the action for each competitor. This can be frustrating.
Drone racing would allow the experience of a live venue event while providing additional features for online (and possibly non-live) broadcasting.
What all of [the mentioned below] AI researchers agree on is that the future of open world game design is going to be less about lots of scripted stuff being fed to the player, and more about the player collaborating with the computer to create fresh, personal stories together.
'Michael Cook, a computational creativity researcher at Goldsmiths University, began development of ANGELINA, a computer program capable of designing its own games using assets drawn from image search engines. He thinks the future of this genre is about handing creative power to the machine itself.'
'Julian Togelius is a university professor who has spent the last five years researching the concept of procedurally generated content in games. In the Togelius infinite world, you'll be able to drive a car in one direction for several miles and find that the game has built a city at the end of the journey, just for you. What's more, that city will be populated with characters who act like real humans rather than bizarre automatons. It will be the player's interactions with these characters that creates the stories.'
'Jeff Orkin [thinks] "last year's Shadow of Mordor did a great job of producing memorable moments through their Nemesis system, which created randomised enemies with a memory of past encounters with the player that they could vocalise in their taunts. This is a step in the right direction. Contextual dialogue ties events together, and explains to the player what the NPCs are thinking, and how decisions made in previous encounters have led to the current situation. However, giving every NPC the ability to dynamically voice contextual dialogue is an intractable problem in an open-world game." [...]
Orkin explains, "by recording several gameplay sessions of each scenario, cross referencing them with other similar scenarios, and doing some tagging and clustering to associate semantic meta-data, we can generate a vast dialogue database to draw from depending on the context."
He also foresees a future of community contributed dialogue, with players able to record their own voice performances, which the GroupPlay system could add to the game.'
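Orkin's idea of tagging and clustering recorded gameplay lines so NPCs can pull contextual dialogue could be sketched like this. The data, tags and retrieval scheme below are invented for illustration; this is not GroupPlay or any real system.

```python
from collections import defaultdict

# Toy dialogue database: semantic tags -> candidate recorded lines.
dialogue_db = defaultdict(list)

def record(line, tags):
    """Store a line gathered from a recorded play session under its tags."""
    dialogue_db[frozenset(tags)].append(line)

def best_lines(context):
    """Return the stored lines whose tags best overlap the current context."""
    scored = [(len(tags & context), line)
              for tags, lines in dialogue_db.items() for line in lines]
    best = max(score for score, _ in scored)
    return [line for score, line in scored if score == best]

# Hypothetical lines from recorded play sessions:
record("You again? I remember that scar I gave you.", {"enemy", "rematch"})
record("Nice weather for an ambush.", {"enemy", "first_meeting"})

print(best_lines({"enemy", "rematch"}))
```

A real system would use clustering over thousands of sessions rather than hand-written tags, but the lookup principle (match semantic metadata against the current game context) is the same.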
'Mark Riedl heads up the Entertainment Intelligence Lab at the Georgia Tech School of Interactive Computing. For the past three years he's been working on a project named Scheherazade, a crowd-powered automated story generation system, which creates its own interactive fiction by studying groups of plot lines developed by humans.'
'What both Orkin and Riedl are working toward then is a concept known as drama management, a program of interlocking AI systems that serve quests and narrative adventures to players - often via intelligent NPCs - within a dynamic open world. "This is a growing and exciting area of research," says Cook. "A lot of work is being done on making game worlds both rich and unpredictable, as well as making them more controllable by designers. At the Experimental AI for Games workshop this year we had a paper called 'Toward Characters Who Observe, Tell, Misremember, and Lie' by folks at UC Santa Cruz.
[...] For me this is a powerful new concept in AI systems - and it's a system, not a script. We're familiar with characters in games lying to us, but only when a writer or designer decides they should. This is a changing, malleable system that demands a new kind of thinking from players. Open world games are falling prey to over-scripting and over-design - they're more like really big closed worlds than truly open ones. We need these kinds of unpredictable and organic AI systems to add life back to our open world games."
"A deeper system, combining the Nemesis concept with the Talk of the Town technology, would provide a world filled with dynamic Machiavellian plotters. And that means a very different set of games.
"If NPCs possessed social reasoning, then open-ended action games can be about power dynamics," says Riedl. "They may support strategies in which the player forges and betrays lasting relationships with NPCs instead of just shooting. Think of an interactive version of The Wire, House of Cards, or The Sopranos. These TV shows are about the weaving and unweaving of complex interpersonal relationships as a means to an end. This provides new, complex strategies that may complement the action-oriented aspects of the game. NPCs with innocuous beginnings could become regular companions, or grow into mortal enemies. An indication of success would be the player having a strong emotional response to the procedurally generated actions of an NPC, or to the betrayal or demise of a favourite character."
Across short play sessions, players retell the same myth with different characters the way that a storyteller might continually recast Pandora, The Monkey King, or other ancient heroes. It's a tradition dating back across many cultures, including Polynesian storytelling and classical myth making.
In searching for an opportunity to have players experience “the multiple truths of storytelling,” Short hit upon 3 fundamental lessons for investing players in a randomized quest.
Extensive dialogue trees and multiple truths according to context - deep bro
The newest bit of gimmicky fun from the folks over at Google takes the company's ubiquitous Street View cameras and miniaturizes them to give you an unprecedentedly close look at Hamburg's famed Miniatur Wunderland, the largest model train set in the world.
"The cameras used for pulling this off—seen above—were developed as a joint project between Google and Ubilabs. And with over 8,000 miles of track and more than 200,000 plastic denizens, Miniatur Wunderland has plenty for them to explore. So hop on over to the Google Maps page for the project and check out everything from the functioning airport to the mini Las Vegas strip."
This is a crazy set-up! All those computer-controlled vehicles are wonderful!