MPEG-DASH is slowly but surely becoming the main competitor to HLS, driven by adoption by major players and intrinsic strengths. Here's who's using it now, who's going to be soon, and what challenges still need to be addressed.
Last year at the European Broadcasting Union’s BroadThinking conference, the DASH Industry Forum (DASH-IF) conducted a survey of 13 major European broadcasters on MPEG-DASH adoption. At the time, about three-quarters of them projected to have DASH deployed by the end of the first half of 2014. The broadcasters’ primary concerns were the availability of DASH-enabled clients and packaging tools. One year later, we haven’t seen many broadcasters deploying DASH in production, but the traction seems to have shifted to over-the-top (OTT) content distributors and operators.
So, who are the actors already in production or close to production with DASH? What are the remaining roadblocks for its adoption? How will DASH be positioned against existing Adaptive Bitrate technologies in the coming months? What is the exact status of the DASH standard and its most promising evolutions? What are the upcoming initiatives aiming at fostering DASH adoption? Let’s get a handle on where DASH is today, and where it’s headed.
What would a live studio need if it worked directly on IP networks? That was the task BBC R&D set itself with a project that began in 2012 and which will hit a peak when it plays a central role in the world’s first live end-to-end IP production in Ultra HD to be conducted at the Commonwealth Games.
IP has been used by many broadcasters, BBC included, to link a studio to remote locations, but the missing piece has been an IP production experience using internet protocols to switch and mix the video.
The BBC R&D trials conducted during the CWG next month promise to do just that, while also testing the limits of network performance by shunting 4K data around the UK in a collaborative production workflow – live.
“The concept is to introduce software and IP into the overall chain so it can be used alongside existing technology like DTT,” said Matthew Postgate, controller, BBC R&D. “IP will enable us to be more flexible with services we already produce, and longer-term, to introduce new kinds of services.”
BBC R&D describes IP Studio as an open source software framework for handling video, audio and data content, composed of off-the-shelf IT components and adhering to standards like the IEEE 1588 packet-based time synchronisation protocol.
For broadcasters and high-volume producers, Amazon's Elastic Transcoder has too many limitations. For everyone else, it's an appealing, if flawed, solution.
Amazon’s Elastic Transcoder is a service that encodes files living in the Amazon cloud for delivery into the Amazon cloud. In this overview, I’ll walk you through the workflow for using the service and discuss the service’s performance, quality, and pricing. Just to set expectations, this isn’t an all-out, “bang it till it breaks” competitive review so much as a “Here’s how it works, and by the way we compared some aspects to other services and here’s what we found.”
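As a concrete sketch of that workflow, here is the shape of a transcoding job request. The pipeline ID and object keys below are placeholders, and the preset ID is offered only as an example of an Elastic Transcoder system preset; check the preset list in your own account before relying on it:

```python
# Sketch of an Elastic Transcoder job request. In a real deployment you
# would send this dict to the service with boto3:
#   boto3.client("elastictranscoder").create_job(**job_request)
job_request = {
    "PipelineId": "1111111111111-abcd11",       # hypothetical pipeline ID
    "Input": {
        "Key": "sources/mezzanine.mp4",          # object in the pipeline's input S3 bucket
        "Container": "auto",                     # let the service detect the container
    },
    "Outputs": [
        {
            "Key": "renditions/720p.mp4",        # object written to the output S3 bucket
            "PresetId": "1351620000001-000010",  # example system preset ID; verify in your account
        }
    ],
}

print(sorted(job_request))  # → ['Input', 'Outputs', 'PipelineId']
```

A pipeline (input bucket, output bucket, and IAM role) is configured once; each job then only names its input object and the presets to render.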
Building on a research trial during the World Cup, the BBC will use the Commonwealth Games in Glasgow to trial what it calls the world's first live Ultra HD production and transmission entirely in IP.
The BBC prides itself on pioneering media technology, and its renowned research and development team has come up trumps again with what it claims is a world-first live UHD production produced and transmitted entirely in the IP domain.
The trial is planned to take place during the Commonwealth Games, a quadrennial international multi-sport event hosted in Glasgow for two weeks from July 23.
It extends the BBC's trial of UHD live feeds from the World Cup in Brazil delivered simultaneously over conventional digital terrestrial and IP networks. The first of three matches delivered in the format for these tests airs on June 28.
From Glasgow, the BBC intends to replicate the Brazil trial by sending UHD signals across DTT and IP networks in partnership with telco BT and infrastructure provider Arqiva, with the feed compressed in HEVC. This will again test the quality of service to a home environment.
HEVC is among us. On January 25, 2013, the ITU announced the completion of the first-stage approval of the H.265 video codec standard, and over the past year several vendors and entities have started to work on the first implementations of H.265 encoders and decoders. Theoretically, HEVC is said to be 30 to 50% more efficient than H.264 (especially at higher resolutions), but is it really that simple? Is H.264 so close to retirement? This is what we will try to find out. First of all, let’s start with a technical analysis of H.265 compared to AVC; then, in the next blog post, we will take a look at the level of performance that is realistic to obtain in today’s H.265 encoders.
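To make the headline efficiency claim concrete, here is a quick back-of-the-envelope calculation; the bitrates are illustrative, not measured results:

```python
def hevc_bitrate(avc_kbps: float, savings: float) -> float:
    """Bit rate HEVC would need for the same quality, given a fractional saving."""
    return avc_kbps * (1.0 - savings)

# A 1080p stream that needs 4000 kbps in H.264 would need roughly:
print(hevc_bitrate(4000, 0.30))  # about 2800 kbps at the conservative 30% figure
print(hevc_bitrate(4000, 0.50))  # about 2000 kbps at the optimistic 50% figure
```

The spread between those two figures is exactly why real-world encoder benchmarks, rather than the theoretical claim, will decide how quickly H.264 retires.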
sffmpeg is a simple CMake-based, full-featured FFmpeg static build helper.
It currently works on Linux, OpenBSD, FreeBSD, and Mac OS X. It has been tested most heavily on Linux/x86_64 (Ubuntu 12.04). The helper will grab the latest versions of most FFmpeg dependencies, providing a way to effectively build, test and compare multiple static builds of FFmpeg on the same host.
New version of the HEVC/H.265 codec offers high quality for extremely low-bandwidth video streaming
The free video codec library libde265 is now available in version 0.7. libde265 is based on the HEVC video compression technology and allows a 50-percent reduction in bit rate at the same image quality. Because libde265 decodes H.265 in software, without special hardware, video streams can play easily on mobile devices (smartphones and tablets), which can greatly reduce bandwidth costs for streaming and network operators. In addition, Ultra HD involves much more than just four times the resolution of Full HD: the color space and color resolution are also significantly larger, which benefits presentation on a 4K TV regardless of pixel resolution, and together with better dynamic range is considered by companies in the consumer electronics industry as a driver for the next generation. With the standard-compliant version libde265 0.7, decoding speed has been improved by up to a factor of 2, and frame dropping plus flexible frame rates make high-resolution video playable even on slower processors. (...)
“Over the top” (OTT) is one of the most overused and ambiguous buzzwords in our industry.
In order to understand linear video delivery in OTT models, first, you have to look at what OTT means outside of video. To mobile operators, OTT is a scary proposition. Calls, text messaging, and image messaging had been entirely within operators’ control until now, and therefore presented an opportunity for revenue. For those operators, OTT services are an almost unavoidable symptom of smartphones requiring open internet access, and bring with them many services that compete with operators’ traditional revenue models. King of all these is Skype, and it provides a clear example of what “top” the service comes “over” to earn the moniker OTT: Namely the pay wall that is the per-minute billing system of the mobile operator.
In exactly the same way I often dogmatically emphasize that “cloud” is an economic term defining the move of CAPEX to OPEX when building IT infrastructure, OTT is also an economic term first and foremost. At best, it means that the operators are able to derive data transit and bandwidth-oriented revenues for the delivery of network service on behalf of providers who otherwise charge much higher premiums to end users or sponsors. At worst, operators are loss-leading that data transit to encourage subscribers to stay with them rather than take their business to other operators. All the while, OTT services are taking revenue from network operators’ subscribers and not (necessarily) sharing any of that revenue with the network operator.
However, with this economic common denominator noted, in any specific technical context the term OTT has a range of implementation models that ensure that the cost of this data transit and bandwidth delivery itself is as profitable as possible for the network operator, whether profit is measured in operator CDN revenues or in terms of subscriber retention.
With most competing browsers and the content industry embracing the W3C EME specification, Mozilla has little choice but to implement EME as well so our users can continue to access all content they want to enjoy. Read on for some background on how we got here, and details of our implementation.
The digital video industry seems to be gradually succumbing to the allure of the 50% bandwidth savings promised by HEVC, aka h.265. What if those savings can be achieved with the existing AVC, h.264, technology? Cinova thinks its post-processing product, Crunch, can do just that.
The benefits the video industry can realize from a reduction of 50% in the amount of bandwidth necessary to deliver video are enormous. From saving money in bandwidth charges to delivering higher quality video on mobile networks, there are few places in the chain of delivery that don’t benefit in some way.
However, the cost and time required to move to a new codec, like HEVC, are similarly enormous. Devices such as televisions, set-top boxes, and smartphones need to be replaced and video encoders upgraded, not to mention all the video that needs to be re-encoded in the new format. The change from MPEG-2 encoding to AVC-based MPEG-4 took the better part of a decade. Likely, HEVC adoption will take a similar length of time.
Sunil Sanghavi, COO of Cinova, believes there is a lot more efficiency to be wrung out of AVC, and that it will get us to 50% savings right now.
Vantrix today officially launched their open source HEVC project website, f265.org. The f265 project was previously announced as an open source version of the H.265 encoder, also known as High Efficiency Video Coding (HEVC). The project aims to accelerate the industry-wide development and adoption of H.265 through a collaborative and free open source model. The project website is now officially available for researchers and commercial entities to obtain the source code and contribute to the refinement and evolution of the code to accelerate the implementation of both software and hardware systems.
Vantrix’s f265 encoder will be licensed under the OSI BSD terms, enabling access to source code, free redistribution, and derived works. The project will target both high quality offline and real-time encoding.
“The f265.org site is maintained by and for developers to help accelerate the development and adoption of HEVC,” noted Francis Labonte, Vantrix research lab director. “We have a working baseline version available that we’ve been demonstrating for UHD/4k live streaming and now want to take the real-time performance and feature set to the next level. We’re hoping to contribute to accelerating the industry transition from H.264 to H.265 and solving network bandwidth issues for high definition video.”
A significant step in the road to Ultra High Definition TV services has been taken with the approval of the DVB-UHDTV Phase 1 specification at the 77th meeting of the DVB Steering Board. The specification includes an HEVC Profile for DVB broadcasting services that draws, from the options available with HEVC, those that will match the requirements for delivery of UHDTV Phase 1 and other formats. The specification updates ETSI TS 101 154 (Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream).
Another specification to gain approval from the Steering Board was the MPEG-DASH Profile for Transport of ISO BMFF Based DVB Services over IP Based Networks. This specification defines the delivery of TV content via HTTP adaptive streaming. MPEG-DASH covers a wide range of use cases and options. Transmission of audiovisual content is based on the ISOBMFF file specification. Video and audio codecs from the DVB toolbox that are technically appropriate with MPEG-DASH have been selected. Conditional Access is based on MPEG Common Encryption and delivery of subtitles will be XML based. The DVB Profile of MPEG-DASH reduces the number of options and also the complexity for implementers. The new specification will facilitate implementation and usage of MPEG-DASH in a DVB environment.
The Consumer Electronics Association has announced updated core characteristics for ultra high-definition TVs, monitors and projectors for the home. As devised and approved by CEA’s Video Division Board, these characteristics build on the first-generation UHD characteristics released by CEA in October 2012.
Under CEA’s expanded characteristics, a TV, monitor or projector may be referred to as Ultra High-Definition if it meets the following minimum performance attributes:
— Display Resolution – Has at least eight million active pixels, with at least 3,840 horizontally and at least 2,160 vertically.
— Aspect Ratio – Has a width to height ratio of the display’s native resolution of 16:9 or wider.
— Upconversion – Is capable of upscaling HD video and displaying it at ultra high-definition resolution.
— Digital Input – Has one or more HDMI inputs supporting at least 3840x2160 native content resolution at 24p, 30p and 60p frames per second. At least one of the 3840x2160 HDMI inputs shall support HDCP revision 2.2 or equivalent content protection.
— Colorimetry – Processes 2160p video inputs encoded according to ITU-R BT.709 color space and may support wider colorimetry standards.
— Bit Depth – Has a minimum color bit depth of eight bits.
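The list above amounts to a simple conformance checklist, which can be expressed as a predicate. This is an illustrative sketch only; the attribute names are my own shorthand, not CEA's:

```python
def qualifies_as_uhd(display: dict) -> bool:
    """Check a display against the CEA minimum UHD attributes listed above."""
    return (
        display["h_pixels"] >= 3840                       # horizontal resolution
        and display["v_pixels"] >= 2160                   # vertical resolution
        and display["h_pixels"] * display["v_pixels"] >= 8_000_000  # 8M active pixels
        and display["aspect_ratio"] >= 16 / 9             # 16:9 or wider
        and display["upscales_hd"]                        # HD upconversion
        and display["hdmi_2160p_inputs"] >= 1             # 3840x2160 at 24p/30p/60p
        and display["hdcp_2_2_inputs"] >= 1               # HDCP 2.2 or equivalent
        and display["bt709_2160p_input"]                  # BT.709 colorimetry
        and display["bit_depth"] >= 8                     # minimum color bit depth
    )

example = {
    "h_pixels": 3840, "v_pixels": 2160, "aspect_ratio": 16 / 9,
    "upscales_hd": True, "hdmi_2160p_inputs": 2, "hdcp_2_2_inputs": 1,
    "bt709_2160p_input": True, "bit_depth": 10,
}
print(qualifies_as_uhd(example))  # True
```

Note that every attribute is a floor, not a target: a set can exceed any of them (wider gamut, 10-bit panels) and still carry the Ultra HD label.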
Could MPEG-DASH be the one online video format to replace all others? In a Streaming Forum 2014 panel on the much-hyped format, heavyweights including Cisco, Akamai, the BBC, and Qualcomm offered a shared hope that the industry could standardize behind DASH.
“To me, it’s the young Turk,” said Kevin Murray, system architect for Cisco, comparing DASH to HLS. Broadcasters are slowly centralizing on both options, he noted. DASH, however, lacks maturity. The format still needs ubiquity (including the ability to play on iOS devices) and integration (DASH-IF needs to act as a gatekeeper). Keep it simple, Murray advised: a unified DASH is easier to deploy and test, and offers a better user experience.
The Hippo Media Server is a simple, standalone HTTP server designed to simplify the delivery of MPEG-DASH and Smooth Streaming media. MPEG-DASH and Smooth Streaming are both protocols for HTTP-based adaptive streaming. With adaptive streaming, a media presentation is served to streaming clients as a sequence of small media segments (each typically containing 2 to 10 seconds of audio or video). Each segment is accessed over HTTP with an individual URL. In order to serve an adaptive streaming presentation with a regular HTTP server like Apache, Nginx, or other popular HTTP servers, one needs to split the original media files into small individual files, one for each segment, so that they can be accessed through separate URLs. This can be very difficult to manage. The Hippo Media Server implements a simple URL virtualization scheme: instead of mapping each URL to a file in the server's filesystem, each URL consists of a pattern, which is parsed by the server when it handles a request and from which it can locate the appropriate portion of a file in the filesystem. This way, a single media file containing the media data for all the segments can be exposed as discrete URLs.
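The virtualization idea can be sketched in a few lines. The URL pattern and index format below are hypothetical, not Hippo's actual scheme, but they show how a virtual segment URL can be resolved to a byte range inside a single media file:

```python
import re

# Hypothetical URL pattern in the spirit of Hippo's virtualization scheme:
# /<stream>/<track>/segment-<n>.m4s  ->  (file on disk, byte offset, length)
SEGMENT_URL = re.compile(r"^/(?P<stream>[\w-]+)/(?P<track>[\w-]+)/segment-(?P<n>\d+)\.m4s$")

def resolve(url: str, index: dict) -> tuple:
    """Map a virtual segment URL to a (path, offset, length) triple.

    `index` maps (stream, track) to a list of (offset, length) pairs, one
    per segment, as would be read from the container's segment index.
    """
    m = SEGMENT_URL.match(url)
    if not m:
        raise ValueError("not a segment URL: " + url)
    stream, track, n = m.group("stream"), m.group("track"), int(m.group("n"))
    offset, length = index[(stream, track)][n]
    return (f"/media/{stream}/{track}.mp4", offset, length)

# Toy index: two ~1 MB segments stored back-to-back in one media file.
index = {("movie", "video"): [(0, 1_000_000), (1_000_000, 950_000)]}
print(resolve("/movie/video/segment-1.m4s", index))
# → ('/media/movie/video.mp4', 1000000, 950000)
```

The server would then answer the request by reading just that byte range from the single file, so no per-segment files ever exist on disk.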
One of the biggest challenges for the BBC Archive is how to open up our enormous collection of radio programmes. As we’ve been broadcasting since 1922 we’ve got an archive of almost 100 years of audio recordings, representing a unique cultural and historical resource.
But the big problem is how to make it searchable. Many of the programmes have little or no meta-data, and the whole collection is far too large to process through human efforts alone.
Help is at hand. Over the last five years or so, technologies such as automated speech recognition, speaker identification and automated tagging have reached a level of accuracy where we can start to get impressive results for the right type of audio. By automatically analysing sound files and making informed decisions about the content and speakers, these tools can effectively help to fill in the missing gaps in our archive’s meta-data.
BBC R&D decided to develop these automatic meta-data extraction technologies in a way that would allow large-scale audio processing. Building them into a cloud-based platform (more on this later) allows us to work through very large archives quickly, cheaply and many times over.
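The fan-out pattern behind such a platform is straightforward to sketch. Everything here is illustrative: `transcribe` is a stub standing in for a real speech-recognition and speaker-identification call, not BBC code:

```python
from concurrent.futures import ThreadPoolExecutor

def transcribe(programme: str) -> dict:
    """Stub for a real ASR + speaker-identification step (hypothetical)."""
    # A real implementation would analyse the audio and return the
    # recovered transcript, tags, and identified speakers.
    return {"programme": programme, "transcript": "", "speakers": []}

def process_archive(programmes):
    """Fan a batch of recordings out to workers and collect their meta-data."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(transcribe, programmes))

records = process_archive(["1948-olympics.wav", "1969-moon-landing.wav"])
print([r["programme"] for r in records])
# → ['1948-olympics.wav', '1969-moon-landing.wav']
```

The cloud platform's advantage is that the worker pool can be scaled elastically, so the same archive can be re-processed many times as the extraction models improve.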
H265 is a High Efficiency Video Coding player app for viewing HEVC/H.265 video files and network streams in the MKV video container format. H265 is based on the VLC library with added libde265 HEVC video decoding. In addition to HEVC/H.265 video, other formats are also supported. (...)
There are a few different layers and components required for a developer to take advantage of this: an ICE server (which wraps one or more STUN or TURN servers); a data connection to your webserver, whether it's Ajax, WebSockets, SSE, or something else; properly encoded video (exclusively using Google's VP8 codec); and properly encoded audio (e.g. Opus). All of this media is encrypted as SRTP (Secure RTP).
One of the world’s most famous performance institutions, the Vienna State Opera (VSO), is to stream what is claimed to be the world’s first live production in 4K encoded with High Efficiency Video Coding (HEVC).
Delivered via MPEG-DASH over the internet, the broadcast is set for 7pm CET on 7 May with a production of Verdi’s Nabucco starring Plácido Domingo in the title role. It will be streamed for global viewing on Samsung Ultra HD smart TVs and shown on special public display on a 65" Samsung Ultra HD TV at the opera house.
When it comes to video processing, the biggest story from NAB Show, and indeed the spring trade show season generally, was not the continued progress in HEVC or UHD. Though very important, these were largely anticipated. It was instead the growing interest in the virtualization of video processing, whether that is using on-premise or cloud-hosted compute resources. Most of the major compression vendors made significant announcements to outline their virtualization capabilities.
Ericsson unveiled its Ericsson Virtualized Encoding, a unified software solution that can be implemented on processing platforms containing a combination of dedicated programmable hardware, like Ericsson’s video processing chip, on customer premises, and software or GPU-based servers that are on premises or potentially deployed in the cloud. Using a software abstraction layer, the solution is completely task- and service-oriented, intelligently allocating encoding resources regardless of where they are, based on the task at hand and on operator priorities such as deployment speed, video quality and output.
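A toy sketch of the kind of decision such an abstraction layer makes; the resource names and scores below are invented for illustration, not Ericsson's actual scheduler:

```python
# Hypothetical resource pool: each backend scored against operator priorities.
RESOURCES = {
    "dedicated-chip": {"quality": 3, "speed": 1, "cost": 3},   # on-premises hardware
    "gpu-server":     {"quality": 2, "speed": 2, "cost": 2},   # on-premises software
    "cloud-instance": {"quality": 1, "speed": 3, "cost": 1},   # elastic capacity
}

def allocate(priority: str) -> str:
    """Pick the resource that scores highest on the operator's stated priority."""
    return max(RESOURCES, key=lambda name: RESOURCES[name][priority])

print(allocate("quality"))  # dedicated-chip
print(allocate("speed"))    # cloud-instance
```

The point of the abstraction layer is exactly this indirection: the operator states a priority per task, and the scheduler decides where the encode runs.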
The DASH Talks, a gathering of DASH supporters, compared notes on the standard's advancement, and highlighted the work being done.
“More important than anything else, DASH enables interoperability,” said Iraj Sodagar, a principal multimedia architect at Microsoft and the DASH Industry Forum (DASH-IF) president. “The whole idea of DASH was gathering the best deployed streaming technologies in the market, adding more to it, and creating a standard for interoperability."