4K streaming, mobile broadcasting for the crowd, generalized delinearization, worldwide video events…
OTT delivery is only multiplying the challenges, as customers’ expectations rise every day in terms of video fast-start, instant channel switching, absence of buffering, and high frame size and rate, on all devices and in all network conditions. Today’s answer to those challenges is basically more unicast sessions, more servers, more peering, with less and less guarantee of a satisfying end-user experience as long as there is no specific end-to-end paid agreement to ensure the path is provisioned from the origin server all the way to the video device. Even in that ideal scheme, the device may still suffer from poor wireless conditions that jeopardize the experience. So how do we deal with this stack of potential problems: do we stick to the aging recipes, rely blindly on Moore’s law and perpetuate a hopeless CDN arms race? Or do we look for smarter ways to put OTT growth on a sustainable delivery model?
A live demonstration of a unique satellite and terrestrial hybrid 4K transmission system is in the works for the upcoming ABU DBS (Digital Broadcasting Symposium) 2014 Exhibition in Malaysia.
For the demonstration, Village Island and Aviindos worked with MEASAT on the idea of carrying the live transmission over a DVB-S2 transponder. This follows a request from the DVB group to demonstrate dual 4K services on a single DVB-T2 channel. The demonstration also responds to deployment trends in South-East Asia, where digital TV is being rolled out concurrently via satellite and terrestrial means.
As we reported previously, the Broadpeak solution introduces very lightweight clients into the home router/gateway and these intercept the unicast stream requests made by a tablet or smartphone to an origin server. The client looks for a multicast stream of the linear content instead. The platform operator works with a content owner to make the most popular channels available in multicast, perhaps during peak times or for popular shows or live sports, or even 24/7. The nanoCDN client receives this multicast stream instead, then converts it to unicast ABR (adaptive bit rate) inside the home so it can be watched on the multiscreen devices without any changes to their apps.
By replacing multiple unicast streams with a single multicast stream in the broadband network, nanoCDN reduces the bandwidth demands for linear/live video. nanoCDN is pioneering because of the way it harnesses multicast within a CDN environment and because it makes in-home devices an extension of the CDN.
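The bandwidth saving behind this approach is simple to quantify. A back-of-envelope sketch (the viewer count and bitrate below are illustrative assumptions, not Broadpeak figures):

```python
# Back-of-envelope comparison of unicast vs. multicast delivery of one
# live channel. Numbers are illustrative, not vendor measurements.

def unicast_bandwidth_mbps(viewers: int, bitrate_mbps: float) -> float:
    """Each viewer gets a dedicated stream, so network load grows linearly."""
    return viewers * bitrate_mbps

def multicast_bandwidth_mbps(viewers: int, bitrate_mbps: float) -> float:
    """A single shared stream serves every viewer on the network segment."""
    return bitrate_mbps if viewers > 0 else 0.0

viewers = 10_000
bitrate = 5.0  # Mbps, assumed top ABR profile for an HD channel

print(unicast_bandwidth_mbps(viewers, bitrate))    # 50000.0
print(multicast_bandwidth_mbps(viewers, bitrate))  # 5.0
```

The multicast cost is constant in the number of viewers, which is why offloading even just the most popular channels removes the bulk of the linear/live load.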
FFmpeg and its fork Libav have each added an H.265 / HEVC decoder today to their respective code bases.
Going back to the middle of last year, there has been the open-source x265 project for implementing the High Efficiency Video Coding (HEVC) video format that succeeds H.264/MPEG-4 AVC. We have also seen open-source HEVC / H.265 support come via libde265, a decoder for this video format that doubles the data compression ratio of H.264 at the same video quality.
NTV Plus pioneered the initiative to broadcast the Sochi 2014 Winter Games in 4K. Following an extensive period of testing during 2013, NTV Plus designed and deployed an end-to-end 4K HEVC workflow that features Sony 4K cameras, Elemental Live video encoders, NTV Plus satellite uplink and signal receiving systems, and Broadcom-enabled real-time decoders for playback on Panasonic 4K TVs. A single HEVC video stream encoded by Elemental enables NTV Plus to cut the roughly 100 Mbps of bandwidth that would otherwise have been required to deliver the 4K content.
The launch of the Joint Task Force on File Formats and Media Interoperability was announced today by its sponsors, the North American Broadcasters Association (NABA), Advanced Media Workflow Association (AMWA), Society of Motion Picture and Television Engineers (SMPTE), International Association of Broadcast Manufacturers (IABM), American Association of Advertising Agencies (4A’s), and Association of National Advertisers (ANA). The European Broadcasting Union (EBU) is participating as an observer.
Bringing together manufacturers, broadcasters, advertisers, ad agencies, and industry organizations (standards bodies and trade associations) serving the professional media market, the Task Force has the ultimate goal of creating greater efficiencies and cost savings in the exchange of file-based content. The group’s initial focus will be to gather and analyze requirements for a machine-generated, machine-readable file interchange and delivery specification — including standardized and common structured metadata — for the professional media industry. Use case examples include promo, spot, and program delivery from a provider to a broadcaster.
Live streaming every phase of every sporting event of the Olympics is a big task. This year NBC expanded their targeted devices to include iOS, Android and Windows smartphones, PCs, Macs, and tablets. The combination of the increased number of live feeds and the increased number of targeted devices creates a complex live video workflow requiring coordination between a number of technologies and partners.
NBC caught a lot of flak for its live video streams during the London Olympics, but for Sochi, the network promises to stream every event live. Just like during the London Olympics, NBC’s partner for making these streams possible (and authenticating cable subscribers) is Adobe – and Adobe itself is partnering with Microsoft to power the streams.
Adobe’s Primetime platform will power the video delivery and video ads on the NBC Sports website and the NBC Sports Live Extra App for iOS and Android. All events will be available live and on demand.
On the back end, Adobe will use Microsoft’s Windows Azure Media Services to power all of the encoding and streaming. This partnership, Helfand said, will continue even after the Olympics. As broadcasters move from experimentation to going live with their online video streams for big events, he argues, they also need to ensure that the streams live up to their audience’s expectations.
While the rest of the world is wondering about when we will see 4K broadcasts, NHK in Japan is conducting test transmissions of 8K Super Hi-Vision signals over the air. Its latest test demonstrated a signal with 16 times the resolution of high-definition sent over a single terrestrial television channel.
The standards for Ultra-High-Definition or UHDTV include both 8K and 4K profiles. An 8K Super Hi-Vision image has a resolution of 7680×4320 pixels, or over 33 megapixels, which is four times the resolution of a 4K or 3840×2160 image, or 16 times the resolution of a full-frame high-definition picture.
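The resolution ratios quoted above work out exactly, since each step doubles both dimensions:

```python
# Pixel counts for the UHDTV profiles discussed above.
uhd_8k  = 7680 * 4320   # 8K Super Hi-Vision
uhd_4k  = 3840 * 2160   # 4K UHD
full_hd = 1920 * 1080   # full-frame high definition

print(uhd_8k)             # 33177600 -> just over 33 megapixels
print(uhd_8k // uhd_4k)   # 4  -> 8K is four times 4K
print(uhd_8k // full_hd)  # 16 -> and sixteen times full HD
```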
In its latest test, at the research laboratories of NHK in Hitoyoshi in southern Japan, an 8K signal was transmitted over a standard 6MHz broadcast channel and received 27 kilometres away.
Just a few days after we covered NHK's tests in Japan of over-the-air 8K video transmission (successful over a distance of 27 km), we now learn that chipmaker Broadcom, in partnership with the Scandinavian operator Teracom, has managed to broadcast a 2160p video stream (presumably Ultra HD at 3840 x 2160) using the HEVC codec (also known as H.265) over DVB-T2 technology (an evolution of the DVB-T used for French digital terrestrial television).
DVB-T2 should, logically, be the standard chosen for the future second generation of French digital terrestrial television (TNT2), and despite Google's attempts to impose its VP9 codec, the industry is expected to turn massively to HEVC for compressing Ultra HD video streams in the future. HEVC, roughly speaking, halves the bandwidth needed to deliver a video stream of identical quality compared with the current H.264, making it a precious ally in keeping file sizes from exploding with the move to Ultra HD.
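The practical effect of that roughly 2x compression gain can be sketched numerically (the H.264 bitrates below are assumed, illustrative values, not measurements):

```python
# Rough arithmetic behind the "HEVC halves the bitrate" claim.
# The ~2x gain and the H.264 bitrates are illustrative assumptions.

def hevc_bitrate_mbps(h264_bitrate_mbps: float, gain: float = 2.0) -> float:
    """Bitrate HEVC needs for comparable quality, given a compression gain."""
    return h264_bitrate_mbps / gain

print(hevc_bitrate_mbps(8.0))   # 4.0  -> an HD channel
print(hevc_bitrate_mbps(30.0))  # 15.0 -> a hypothetical Ultra HD stream
```

Halving the bitrate is what makes fitting Ultra HD services into fixed-capacity channels such as a DVB-T2 multiplex plausible at all.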
InAiR turns your TV into a Minority Report-like experience, with layers of Web content, inline with the programs you're watching.
InAiR brings you the world's first Augmented TV experience
With InAiR plugged in, your TV becomes an Augmented Television. You can turn any ordinary television into a new and wonderful medium, filled with rich and dynamic information from the Web. InAiR uses your TV screen and layers in additional content from the Web and social media.
The text of ISO/IEC 23009-1 2nd edition PDAM1 has been approved; it may be referred to as MPEG-DASH v3 (once finalized and integrated into the second edition, possibly with further amendments and corrigenda, if applicable). This first amendment to MPEG-DASH v2 comprises accurate time synchronization between server and client for live services, as well as a new profile, the ISOBMFF High Profile, which basically combines the ISOBMFF Live and ISOBMFF On-demand profiles and adds the Xlink feature.
Additionally, a second amendment to MPEG-DASH v2 has been started featuring Spatial Relationship Description (SRD) and DASH Client Authentication and Content Access Authorization (DAA).
Other DASH-related aspects include the following:
The common encryption for ISOBMFF has been extended with a simple pattern-based encryption mode, i.e., a new method which should simplify content encryption.
The CD has been approved for the carriage of timed metadata metrics of media in ISOBMFF. This allows quality metrics to be signaled within the segments, enabling QoE-aware DASH clients.
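The server-client time synchronization introduced by the first amendment surfaces in the MPD as a UTCTiming element, which tells the client where to fetch a wall-clock reference. A hypothetical excerpt (the URL is an illustrative placeholder) might look like:

```xml
<!-- Illustrative excerpt of a live MPD; the UTCTiming element points the
     client at a wall-clock source (URL here is hypothetical). -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="dynamic"
     profiles="urn:mpeg:dash:profile:isoff-live:2011"
     availabilityStartTime="2014-02-01T00:00:00Z">
  <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-iso:2014"
             value="https://example.com/iso-time"/>
  <!-- Periods, AdaptationSets, Representations omitted -->
</MPD>
```

With a shared clock, a live client can compute the true live edge instead of guessing it from segment availability, which is what makes accurate low-latency joining possible.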
Today, MulticoreWare is announcing the availability of accelerated VP9 decoding solutions for mobile and embedded devices. VP9 is Google’s Open-Source video codec, available for free as part of the WebM project. VP9 will be used for YouTube and Google Hangouts as well as other web-based video applications. VP9 is supported today in Google’s Chrome browser, with support in v28 of the Mozilla Firefox browser scheduled to be released on March 18th.
As I begin my journey to this year’s Mobile World Congress in Barcelona, I am thinking through the major trends and inflections that will most impact mobile technology and usage models going forward. One of those I wrote about last week was quick charging. What I would like to talk about now is 4K capabilities on mobile devices and how mobile could be the primary driver for 4K. Let me start by explaining what 4K is and how it applies to mobile devices. I also want to touch on some interesting things Qualcomm is doing in this arena.
As before, I was very excited when Google released VP9 – for one, because I was one of the people involved in creating it back when I worked for Google (I no longer do). How good is it, and how much better can it be? To evaluate that question, Clément Bœsch and I set out to write a VP9 decoder from scratch for FFmpeg. The goals never changed from the original ffvp8 situation (community-developed, fast, free from the beginning). We also wanted to answer new questions: how does a well-written decoder compare, speed-wise, with a well-written decoder for other codecs?
Ace Thought Technologies, a leading provider of media processing software, today announced the release of a highly optimized, multi-threaded, power-efficient, real-time H.265/HEVC software video decoder for ARM Cortex-A series and x86 processors. Ace Thought’s HEVC video decoder software enables a rich entertainment experience across smartphones, tablets, set-top boxes, smart TVs and connected devices.
Ace Thought’s HEVC video decoder for ARM processors provides scalable performance by utilizing ARM NEON SIMD and multi-core processor technologies. The decoder scales up to 8 cores and enables 60 fps playback of 1080p HD content on a quad-core ARM Cortex-A15 processor. The software enables the creation of high-quality player applications on existing Android, iOS and Windows Phone 8 based tablets and smartphones.
Adobe, NBCU, Elemental, Deltatre, LiveU, and more are readying streaming platforms that will deliver coverage to desktops and mobile devices around the globe.
Four years ago there was, according to the IOC, a defining moment in Olympic broadcasting history. Vancouver was the first Winter Games to be fully embraced on digital media platforms, with digital coverage accounting for around half of the overall broadcast output.
Globally, on official rights-holding broadcasters’ internet and mobile platforms, there were more than 265 million video views and in excess of 1.2 billion page views during the games. There were also approximately 6,000 hours of 2010 coverage on mobile phone platforms.
Digital coverage from Sochi will surpass this, with many more broadcasters drawing on the clear consumer demand from London 2012 for any time, any device viewing.
The IOC places such draconian restrictions on how rights holders, and anyone working for them (including technology contractors), may report their involvement in the Olympics that it's tricky to unearth details on this story. With that caveat, here are some of the large-scale video streaming activities set to go live from Sochi at the end of this week.
The age of 4K may have officially begun, but the world still has some catching up to do. Case in point: You can't actually watch this weekend's Super Bowl in 4K (a.k.a. Ultra HD), even if you have a 4K TV, since there isn't yet a broadcast or cable standard for the ultra-high-def format. Even the live stream is "just" in 720p.
That doesn't mean 4K won't make a difference at the big game. Fox will have six 4K cameras at MetLife Stadium — two on the sidelines, two on the goal lines and two on the end lines — specifically for the network's "Super Zoom" feature. When the broadcast needs to get in tight on some action, the feed will crop a 720p "window" from the 4K picture captured by those cameras. That way, Fox can get tight, high-res images without needing to zoom in optically.
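The arithmetic behind that digital zoom is straightforward: a 1280x720 window is cropped out of the 3840x2160 frame, clamped so it never leaves the picture. A minimal sketch (frame and window sizes are the standard resolutions; the centering logic is an assumption about how such a feature might work, not Fox's actual implementation):

```python
# Sketch of a "Super Zoom"-style crop: cut a 720p window out of a 4K
# frame so zooming is done digitally rather than optically.

UHD_W, UHD_H = 3840, 2160   # 4K camera frame
WIN_W, WIN_H = 1280, 720    # 720p broadcast window

def crop_window(center_x: int, center_y: int) -> tuple[int, int]:
    """Top-left corner of a 720p crop centered on the action,
    clamped so the window stays inside the 4K frame."""
    x = min(max(center_x - WIN_W // 2, 0), UHD_W - WIN_W)
    y = min(max(center_y - WIN_H // 2, 0), UHD_H - WIN_H)
    return x, y

print(crop_window(1920, 1080))  # (1280, 720) -> centered on mid-frame
print(crop_window(0, 0))        # (0, 0)      -> clamped to the corner
```

Because the crop is one ninth of the 4K frame's area, the resulting 720p window is still a native-resolution image, which is exactly why no optical zoom is needed.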
Transparent caching specialist Qwilt has introduced Qwilt Live Stream Cache, a software solution that enhances the company's QB-Series Video Fabric Controllers. The solution replaces the existing unicast model, in which each viewer has a dedicated HTTP session, with a shared stream model. Qwilt Live Stream Cache is able to identify popular live streams on its own, then store those streams to the controller's FastCache, a dedicated storage area optimized to deliver live streams quickly. Next, Qwilt Live Stream Cache creates a local live video transmission point for each area, letting a large group of viewers share a single video stream.
The Google Cast SDK is simple to integrate because there’s no need to write a new app. Just incorporate the SDK into your existing mobile and web apps to bring your content to the TV. You are in control of how and when you develop and publish your cast-ready apps through the Google Cast developer console. The SDK is available on Android and iOS as well as on Chrome through the Google Cast browser extension.
For non-media applications, or for more flexibility and design options, you can build your own custom receiver application using standard web technologies. With a custom receiver you can build virtually any application while including support for many streaming protocols, including MPEG-DASH, HLS, and Microsoft Smooth Streaming, all of which are available in the Media Player Library.
A heavyweight panel that included Google, Microsoft Open Technologies, and Digital Primates guided Streaming Media West attendees through the creation of an open source DASH-AVC/264 player. Will Law, chief architect for Akamai, moderated and kicked off the discussion.
"We're trying to cook up something delicious here, and we have three ingredients to do it. We've got MPEG-DASH, we've got MSE/EME, and we've got dash.js," Law began. "You may not know what these are. I'm going to describe very briefly what our core ingredients are, and then we'll see how they're mixed by our panelists."
The EBU has published a new specification for the distribution of subtitles: EBU-TT-D (Tech 3380). The XML based EBU-TT-D format is a low-complexity way to combine subtitle text, styling, timing information, and positioning details to allow implementers to provide users with a subtitle experience at least as good as that on current TVs, regardless of the platform on which they are watching the content.
EBU-TT-D was developed in less than a year, by taking into account expertise from users, distribution parties, hybrid TV organizations and CE manufacturers. The work built on the EBU XML Subtitles group’s knowledge gained when creating the EBU-TT subtitle format for production interchange and archiving (EBU Tech 3350). The specification is derived from the base W3C TTML specification. It strongly constrains the feature set of TTML to make it easier for decoder/renderer implementers to add subtitle overlays to video without the complexity that is present in TTML to support other scenarios.
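Because EBU-TT-D is a constrained profile of TTML, a document is plain XML combining the text, timing, styling and positioning mentioned above. An illustrative TTML-style fragment in the spirit of EBU-TT-D (the exact mandatory attributes and EBU namespaces are specified in Tech 3380; this sketch shows only the general shape):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical minimal subtitle document in the TTML style
     that EBU-TT-D constrains; see EBU Tech 3380 for the real rules. -->
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:tts="http://www.w3.org/ns/ttml#styling"
    xml:lang="en">
  <head>
    <styling>
      <style xml:id="s1" tts:color="white" tts:backgroundColor="black"/>
    </styling>
    <layout>
      <region xml:id="bottom" tts:origin="10% 80%" tts:extent="80% 15%"/>
    </layout>
  </head>
  <body>
    <div>
      <p region="bottom" style="s1" begin="00:00:05.000" end="00:00:08.000">
        Subtitle text, timed, styled and positioned.
      </p>
    </div>
  </body>
</tt>
```

Constraining the feature set to roughly this shape is what lets a decoder/renderer implementer support subtitle overlays without handling the full generality of TTML.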