It's all about DASH: Adoption is moving at a rapid pace, as industry insiders see a strong need to get DASH implemented in the field in the coming year.
The Pantos spec, as it is known in the industry, is a series of working drafts for HLS submitted by two Apple employees as an informational draft to the Internet Engineering Task Force (IETF). At the time of this article, the Pantos spec is at informational version 10.
Much has changed between the early versions and the most recent v10 draft, but one constant remains: HLS is based on the MPEG-2 transport stream (M2TS), a container that has been in use for almost two decades and is deployed widely in varied broadcast and physical media delivery solutions.
In that time frame, however, little has changed in basic M2TS transport stream capabilities. For instance, M2TS still lacks an integrated solution for digital rights management (DRM). As such, no HLS version can use "plain vanilla" M2TS, and even the modified M2TS that Apple uses lacks the timed-text and closed-captioning features found in more recent streaming formats based on fragmented elementary streams.
Yet Apple has been making strides in addressing the shortcomings of both M2TS and the early versions of HLS: recent drafts allow for the use of elementary streams that are segmented on demand rather than beforehand. This eliminates one Achilles' heel of HLS: the need to store thousands, tens of thousands, or even hundreds of thousands of small segments for long-form streaming content.
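For readers unfamiliar with the format, an HLS deployment is driven by plain-text playlists. A minimal variant (master) playlist, which points the player at the available bitrate renditions, might look like the sketch below; the bitrates, resolutions, and URIs are hypothetical examples, not values from any real deployment.

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
```

The player picks among these renditions segment by segment, which is what makes the storage question for pre-segmented long-form content so significant.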
Google, with its Android mobile operating system platform, has adopted HLS for Android OS 4. Some enterprising companies have even gone back and created HLS playback for earlier versions of Android OS-based devices.
The Netflix API optimization story is an interesting journey from a generic, one-size-fits-all static REST API architecture to a more dynamic architecture that gives client teams the power to define and deploy their own custom service endpoints.
The next generation of CDNs needs to give service providers much greater control over bandwidth priorities in the home, Alcatel-Lucent believes. If multiple devices are competing for bandwidth, you need to be able to say, for example, that a connected television gets priority for streaming and that a smartphone is given a lower-bitrate stream.
“Service providers need a way to control the quality that is delivered to every screen and that is not possible today with adaptive bitrate streaming,” Mestric explains. “So in the CDN we will include a session manager that makes the CDN aware of all the different devices and session requests so that if you start to watch video on an iPhone and there is no congestion in the access network you will get the highest bitrate possible, but if someone else in the family turns on a connected TV device the service provider can then limit the bitrate profiles that can be accessed by the iPhone, for instance.”
Mestric points out that session prioritization could be useful if there is contention on mobile networks. Then a content provider could prioritize a premium subscriber over a basic subscriber so they get the higher bitrate if there is no chance to provide them both with the best possible experience.
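The prioritization policy Mestric describes can be sketched in a few lines: under congestion, a premium subscriber keeps access to every bitrate profile while a basic subscriber is capped. The function name, tier labels, and bitrate values below are hypothetical illustrations, not Alcatel-Lucent's actual session manager.

```python
# Sketch of session prioritization under network contention.
# All names and policy values are illustrative assumptions.

def allowed_bitrates(profiles, tier, congested):
    """Return the bitrate profiles (bps) a session may use.

    profiles:  available bitrates, e.g. [800_000, 2_500_000, 5_000_000]
    tier:      'premium' or 'basic'
    congested: True if the access network is under contention
    """
    if not congested:
        return sorted(profiles)      # no contention: every profile allowed
    # Under contention, cap basic subscribers at the lowest profile.
    cap = max(profiles) if tier == 'premium' else min(profiles)
    return sorted(p for p in profiles if p <= cap)

profiles = [800_000, 2_500_000, 5_000_000]
print(allowed_bitrates(profiles, 'premium', congested=True))  # all profiles
print(allowed_bitrates(profiles, 'basic', congested=True))    # lowest only
```

A real session manager would draw the congestion signal from the access network and the tier from the subscriber database, but the decision logic reduces to a policy lookup of this shape.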
Because content delivery networks (CDNs) are at the heart of next-generation IP video infrastructure, they are the logical place to add personalization. With the right approach, service providers can take advantage of the CDN replication model while delivering personalized content. At the same time, they can ensure the network scales to achieve quality expectations.
There are many good reasons to personalize content, including:
- Targeted ads
- Emergency alerts
- Quality adjustments
To take control of the delivery mechanism, service providers must build a server-side version of the client-based adaptive streaming concept. Introducing two new components will bring more intelligence and processing power into the CDN to improve overall HTTP behavior and performance:
- A session manager retrieves the contextual information needed to customize the content and tells the cache in the CDN which changes to apply to the content.
- A video processor in the cache generates the new content to be sent to end users, based on the original content and the information delivered by the session manager.
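The interplay of the two components can be sketched as a manifest-rewriting step: the session manager supplies per-session rules, and the video processor in the cache filters the variant playlist before serving it. The device names, bandwidth caps, and manifest-parsing details below are illustrative assumptions, not any vendor's implementation.

```python
# Sketch of the two-component flow: a session manager supplies context,
# and a video processor rewrites the manifest per session.

def session_context(device):
    """Session manager: return hypothetical delivery rules for a device."""
    rules = {'tv': {'max_bw': 5_000_000}, 'phone': {'max_bw': 1_000_000}}
    return rules.get(device, {'max_bw': 2_500_000})

def process_manifest(manifest, ctx):
    """Video processor: drop variants above the session's bandwidth cap."""
    out, keep = [], True
    for line in manifest.splitlines():
        if line.startswith('#EXT-X-STREAM-INF:'):
            bw = int(line.split('BANDWIDTH=')[1].split(',')[0])
            keep = bw <= ctx['max_bw']      # filter this variant entry
        if keep:
            out.append(line)
        if not line.startswith('#'):
            keep = True                     # reset after the variant URI line
    return '\n'.join(out)

manifest = ("#EXTM3U\n"
            "#EXT-X-STREAM-INF:BANDWIDTH=800000\nlow.m3u8\n"
            "#EXT-X-STREAM-INF:BANDWIDTH=5000000\nhigh.m3u8")
print(process_manifest(manifest, session_context('phone')))  # low variant only
```

Because the rewrite happens in the cache, the CDN keeps its replication model: one cached original serves every session, with cheap per-session transformations applied at delivery time.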
AmberFin will also announce its strategy for iCR's support of IMF (Interoperable Master Format), a new SMPTE file format designed to create a single, standardized master file for business-to-business distribution of content into multiple territories.
"With IMF, you won't need to create a thousand copies of the same content to suit different audiences, formats and geographies. What the standard does is separate the content into various ingredients or components (namely, AS-02 MXF media files), a number of ‘recipes’ (Composition Playlists) and a selection of instructions (Output Profile Lists) appropriate for each of those audiences. IMF is designed to take the right mix of ingredients, the right recipe and a tailored set of instructions to create a dedicated version for each market, without having to duplicate files. From the end-user perspective, IMF will not be something they ever see, but it will bring significant efficiencies to their workflow; iCR will make it so boring that it will all just work," adds Bruce Devlin.
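Devlin's ingredients-and-recipes analogy can be made concrete with a small sketch. Real IMF packages use MXF track files and XML Composition Playlists; the dictionaries, file names, and territory codes below are purely illustrative stand-ins for that machinery.

```python
# Sketch of the IMF "ingredients and recipes" idea: media components are
# stored once, and each territory version is assembled by reference.

components = {                 # shared media "ingredients" (like MXF files)
    'video_main': 'video_main.mxf',
    'audio_en':   'audio_en.mxf',
    'audio_fr':   'audio_fr.mxf',
    'subs_fr':    'subs_fr.mxf',
}

recipes = {                    # per-territory "recipes" (like CPLs)
    'UK': ['video_main', 'audio_en'],
    'FR': ['video_main', 'audio_fr', 'subs_fr'],
}

def build_version(territory):
    """Assemble a territory version by reference, duplicating no media."""
    return [components[name] for name in recipes[territory]]

print(build_version('FR'))  # ['video_main.mxf', 'audio_fr.mxf', 'subs_fr.mxf']
```

The point of the sketch is the one Devlin makes: adding a new market means adding a recipe, not another full copy of the media.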
"Eureka is a REST-based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. We call this service the Eureka Server. Eureka also comes with a Java-based client component, the Eureka Client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing. At Netflix, a much more sophisticated load balancer wraps Eureka to provide weighted load balancing based on several factors, such as traffic, resource usage, and error conditions, to provide superior resiliency. We have previously referred to Eureka as the Netflix discovery service."
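The basic round-robin balancing the quote attributes to the Eureka Client is easy to picture in isolation. The standalone sketch below shows the rotation pattern only; it is not Netflix's implementation, and the class name and instance addresses are made up for illustration.

```python
# A minimal round-robin balancer: each call hands out the next instance
# in rotation, wrapping around when the list is exhausted.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = cycle(instances)   # endlessly rotate the instance list

    def choose(self):
        """Return the next instance in rotation."""
        return next(self._cycle)

lb = RoundRobinBalancer(['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.3:8080'])
print([lb.choose() for _ in range(4)])
# → ['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.3:8080', '10.0.0.1:8080']
```

A weighted balancer of the kind Netflix wraps around Eureka would replace the uniform rotation with a selection biased by traffic, resource usage, and error signals, but the client-side interface stays the same: ask for an instance, get one back.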
Hadoop has become the de facto standard for managing and processing hundreds of terabytes to petabytes of data. At Netflix, our Hadoop-based data warehouse is petabyte-scale, and growing rapidly. With the big data explosion in recent times, however, even this scale is no longer novel. What makes our architecture unique is that it enables us to build a data warehouse of practically infinite scale in the cloud, in terms of both data and computational power.
In this article, we discuss our cloud-based data warehouse, how it is different from a traditional data center-based Hadoop infrastructure, and how we leverage the elasticity of the cloud to build a system that is dynamically scalable. We also introduce Genie, which is our in-house Hadoop Platform as a Service (PaaS) that provides REST-ful APIs for job execution and resource management.