Akamai has been providing a number of clues about how the company intends to dramatically increase the performance of the Internet for video streaming to support Ultra HD and ultimately match the scale and quality of broadcast TV. One big clue could be found on the Qualcomm stand at International CES 2014, where that company’s subsidiary, Qualcomm Atheros, was demonstrating its IPQ home gateway. Among other things, this included proof-of-concept client software from Akamai that is designed to optimize the delivery and pre-positioning of content like video and software.
The client software is not a product yet but Akamai is certainly talking as if it will be. Kris Alexander, Chief Strategist, Connected Devices & Gaming at Akamai, says, “The demo offers a glimpse into the future of the Akamai platform as we explore ways to move beyond the edge and onto devices of many types – not only gateways but game consoles, set-top-boxes, Blu-ray players, connected TVs and more.”
At TV Connect, Edgeware is demoing the Orbit Unified Server Software running on two hardware platforms. In addition to Edgeware's existing Orbit 3020 hardware platform, this includes Edgeware's latest Orbit 3080 platform, providing up to 80Gbps aggregate throughput per rack unit to both networks from a single set of caching servers. Edgeware's Convoy Software provides all control and management for both delivery types from a single, virtualized application running in a shared data-center or private cloud. By creating a centralized, logical view of all network resources, this SDN architecture enables operators to dynamically provision capacity and to optimize video session routing for the highest-possible quality of experience (QoE) regardless of any rapid change in the mix of video services. A key example for prioritizing is live streaming to premium sports customers.
One infrastructure service that’s gotten a lot of coverage in the media lately is transit, with many using the term incorrectly or defining it as something it’s not. I thought it might be helpful to explain what transit is, the different types of transit services sold, which providers sell it, what it costs and why it is so important to the Internet. There are a lot of pieces that make up the Internet, including products like wholesale, transit, wavelengths and backhaul, all of which share the same underlying optical transport infrastructure, the foundation for all Internet and IP services. Many of these terms are used interchangeably, but they shouldn’t be, as each provides a very different function in the market.
In its simplest definition, transit is a “network that passes traffic between networks in addition to carrying traffic for its own hosts”. The Internet is made up of a collection of networks, and in order to get traffic from one end user to another, all service providers, hosting providers and ISP networks need to have an interconnection mechanism. These interconnections, which allow the sharing of traffic, can be either direct between two networks or indirect via one or more other networks that agree to take the traffic. Many of these network connections are indirect as most providers don’t have a global network footprint and as a result, the traffic will be sent through several different interconnections to reach the end user.
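The indirect-interconnection idea can be pictured with a tiny graph model: traffic from a small ISP reaches a distant host by crossing one or more transit networks that agree to carry it. This is an illustrative sketch with hypothetical network names, not a model of any real provider:

```python
from collections import deque

# Hypothetical interconnections: each network lists the networks it
# exchanges traffic with, whether via direct peering or a paid transit link.
links = {
    "small-isp":        ["regional-transit"],
    "regional-transit": ["small-isp", "global-transit"],
    "global-transit":   ["regional-transit", "hosting-provider"],
    "hosting-provider": ["global-transit"],
}

def path(src, dst):
    """Breadth-first search for the chain of networks traffic must cross."""
    seen, queue = {src}, deque([[src]])
    while queue:
        p = queue.popleft()
        if p[-1] == dst:
            return p
        for nxt in links.get(p[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(p + [nxt])
    return None

# The small ISP has no direct link to the hosting provider, so its
# traffic transits two intermediate networks to get there.
print(path("small-isp", "hosting-provider"))
```

Because most providers lack a global footprint, real paths routinely look like this: several interconnections deep, with each intermediate network acting as transit.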
Netflix is open sourcing a tool called Suro that the company uses to direct data from its source to its destination in real time. More than just serving a key role in the Netflix data pipeline, though, it’s also a great example of the impressive, if sometimes redundant, ecosystem of open source data-analysis tools making their way out of large web properties.
Netflix’s various applications generate tens of billions of events per day, and Suro collects them all before sending them on their way. Most head to Hadoop (via Amazon S3) for batch processing, while others head to Druid and ElasticSearch (via Apache Kafka) for real-time analysis. According to the Netflix blog post explaining Suro (which goes into much more depth), the company is also looking at how it might use real-time processing engines such as Storm or Samza to perform machine learning on event data.
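Suro’s collect-and-route role can be sketched in a few lines. This is illustrative Python, not Suro’s actual Java API: events carry a routing key, and each key fans out to one or more named sinks, batch or real-time:

```python
# Hypothetical routing table in the spirit of Suro: a routing key maps to
# the sinks that should receive matching events.
routes = {
    "playback": ["s3-hadoop"],                  # batch processing path
    "errors":   ["s3-hadoop", "kafka-druid"],   # batch plus real-time analysis
}

sinks = {"s3-hadoop": [], "kafka-druid": []}

def dispatch(event):
    """Send an event to every sink registered for its routing key."""
    for sink in routes.get(event["key"], []):
        sinks[sink].append(event)

dispatch({"key": "playback", "payload": "title=123 action=play"})
dispatch({"key": "errors",   "payload": "title=123 error=timeout"})

# The batch sink sees both events; the real-time sink sees only errors.
print(len(sinks["s3-hadoop"]), len(sinks["kafka-druid"]))
```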
At a time when the world’s best-known web servers are losing marketshare, Nginx is growing, fueled by a no-frills philosophy and its knack for handling myriad web connections at the same time. Apache is still the king of all web servers, but use of Nginx has nearly doubled over the past two years, according to internet research outfit Netcraft.
It now runs about 15 percent of all websites, including everyone from startups such as CloudFlare and Parse (bought by Facebook earlier this year) to web giants such as Automattic and Netflix. “We use it for everything,” says Automattic’s Abrahamson. “We run as much of our software stack as possible on top of Nginx.”
In many ways, it’s an unlikely success story, but one that underscores the global power of open source software, software that anyone can use and modify — for free.
As operator of one of the world’s largest and most important content delivery networks, Akamai has become one of the poster children for the age of the Cloud and for what may be wrong with it. Greenpeace included it on the list of high-profile tech companies representative of the Cloud’s environmental impact, thereby thrusting the company into the limelight of public scrutiny of the issue.
The CDN provider was one of only two companies to receive an ‘A’ on the environmentalist organization’s 2012 report card, which scores companies on the lengths they go to in reducing their impact on the environment. The other company with an ‘A’ was Google.
Dynamic content delivery has been available for a number of years and beyond just basic caching functionality, solutions in the market now focus on route, connection and content optimization. As online commerce sites have grown more complex, they rely on every feature and function of a CDN to quickly and optimally serve users. More importantly, these sites have to cater to higher user expectations across an ever-changing set of devices.
EdgeCast’s new solution, launched last month and called “Transact”, combines the concept of a purpose-built platform for commerce with PCI compliance, front-end optimization, more efficient caching and what the company says are faster SSL connections.
Netflix’s decision to build its own caching boxes for optimizing its video delivery was in part influenced by the work of online backup provider Backblaze.
OpenConnect boxes are being used to cache Netflix content within an ISP’s data center, bringing it closer to the end user and thus improving the quality of the service. Cockcroft said that Netflix opted to build its own systems for this kind of caching because it was much cheaper than buying it from a major hardware vendor. Part of the reason for that is that the legacy vendors build multipurpose systems that have to be tested and optimized for many different use cases. “We wanted a box that would do one thing,” Cockcroft said.
There are a lot of different types of content delivery services in the market, pushing content over various types of networks and across many different devices. While the market is crowded with CDN vendors focusing on the delivery of video, fewer solutions exist in the market for delivering web applications, especially over wireless networks. This morning, a new CDN named Instart Logic launched in the market looking to solve the problem of delivering web applications, with good performance, over mobile networks.
For many of you tracking the web performance world closely, you know it’s been some time since we have seen any genuinely new, innovative technology come along. Most of the “new” offerings have generally been improvements to existing established technologies such as content and application delivery networks (caching and network acceleration) or front-end optimization (rewriting web code to implement performance best practices).
Instart Logic has released a new type of web performance service promising to speed up delivery of websites beyond what traditional approaches like a CDN or FEO solution can provide.
We’re in the midst of a revolutionary shift in the enterprise data center that has not been seen in decades. At its core, this shift is being driven by the rise of “soft” infrastructure. Virtual machines and virtual networks and storage can be provisioned and reconfigured rapidly and in a highly automated way, rather than being limited by the constraints of hardware infrastructure that was built for a much less dynamic environment. The “software-defined data center,” as it is commonly known, has business repercussions that go well beyond transforming data center technology. It has shaken long-term alliances between technology giants. Vendors are scrambling to reposition themselves to best exploit this new era of soft IT.
Operating distributed computing systems at scale brings a variety of challenges. Minor issues like having the wrong version of a small software library can take a whole application offline. To create a smooth transition from development to operations with regard to dependencies, environment, and testing, OpenDNS has adopted the open-source Docker containerization technology. Interfacing Docker containers with existing dedicated infrastructure across 23 data centers presented a unique set of routing challenges, which we solved with a clever application of Generic Routing Encapsulation (GRE) and the Border Gateway Protocol (BGP).
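Generic Routing Encapsulation simply wraps one packet inside another behind a small header. A minimal sketch of the base GRE header from RFC 2784 (two bytes of flags/version followed by an EtherType-style protocol field, 0x0800 for an inner IPv4 packet) looks like this; the payload here is a placeholder, not a real packet:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType value for an encapsulated IPv4 packet

def gre_encapsulate(inner_packet: bytes) -> bytes:
    """Prepend a minimal RFC 2784 GRE header (no checksum, version 0)."""
    header = struct.pack("!HH", 0x0000, GRE_PROTO_IPV4)
    return header + inner_packet

inner = bytes(20)  # stand-in for a 20-byte IPv4 header
frame = gre_encapsulate(inner)
print(len(frame))  # 24: the base GRE header adds only 4 bytes of overhead
```

That small, fixed overhead is part of what makes GRE attractive for stitching container subnets across otherwise separate networks.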
As we reported previously, the Broadpeak solution introduces very lightweight clients into the home router/gateway and these intercept the unicast stream requests made by a tablet or smartphone to an origin server. The client looks for a multicast stream of the linear content instead. The platform operator works with a content owner to make the most popular channels available in multicast, perhaps during peak times or for popular shows or live sports, or even 24/7. The nanoCDN client receives this multicast stream instead, then converts it to unicast ABR (adaptive bit rate) inside the home so it can be watched on the multiscreen devices without any changes to their apps.
By replacing multiple unicast streams with a single multicast stream in the broadband network, nanoCDN reduces the bandwidth demands for linear/live video. nanoCDN is pioneering because of the way it harnesses multicast within a CDN environment and because it makes in-home devices an extension of the CDN.
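The client’s decision logic can be sketched roughly as follows (all names and data here are hypothetical; the real nanoCDN client is proprietary): if the requested channel is currently available as multicast, join that group and repackage it locally; otherwise fall through to an ordinary unicast CDN request.

```python
# Channels the operator has made available via multicast (hypothetical data).
multicast_channels = {"sports-1": "239.1.1.1:5000"}

def resolve(channel: str) -> str:
    """Pick a delivery path for a linear channel request from a home device."""
    group = multicast_channels.get(channel)
    if group:
        # Join the multicast group; the in-home client converts the stream
        # to unicast ABR so device apps need no changes.
        return f"multicast://{group}"
    # Otherwise make a normal unicast request to the CDN.
    return f"https://cdn.example.com/live/{channel}/index.m3u8"

print(resolve("sports-1"))  # multicast://239.1.1.1:5000
print(resolve("news-24"))   # https://cdn.example.com/live/news-24/index.m3u8
```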
Ericsson is establishing a partnership programme for global CDNs, essentially creating a plug-in ecosystem for operators based around its Ericsson Media Delivery Network solution. The CDN providers Limelight Networks, CDNetworks and ChinaCache are the first to announce they will integrate their systems with the Ericsson content delivery solution.
Ericsson’s Media Delivery Network solution unites caching, optimization and acceleration of content within operator networks. The company says the global CDN partnerships will enable operators to improve the efficiency of high-quality content delivery by creating a multi-service system that extends the global CDN deep into the operator’s network.
A content delivery network helps to host content such as images, media files, static content and style sheets of a large website by replicating copies of the data across the various nodes of a network. Content delivery software also helps to reduce the load on websites and to improve their download times. A CDN is not just software; it is a mix of hardware and software that solves the problems of web content delivery and management using synchronization, logging, authentication and load balancing.
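One of those load-balancing pieces, mapping a requested object to a cache node, is commonly done with hashing so the same object always lands on the same node and its cache stays warm. A minimal sketch with hypothetical node names:

```python
import hashlib

edge_nodes = ["edge-ams", "edge-nyc", "edge-sgp"]  # hypothetical cache nodes

def pick_node(url: str) -> str:
    """Deterministically map a content URL to one edge node."""
    digest = hashlib.sha256(url.encode()).hexdigest()
    return edge_nodes[int(digest, 16) % len(edge_nodes)]

# Repeated requests for the same object hit the same node's cache.
assert pick_node("/img/logo.png") == pick_node("/img/logo.png")
print(pick_node("/img/logo.png") in edge_nodes)  # True
```

Production CDNs typically use consistent hashing instead, so that adding or removing a node remaps only a small fraction of objects.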
As a software architect, I have personally preferred to pick a low cost CDN or to create my own CDN using open source tools along with freely available public CDNs just like for jQuery and other popular libraries.
Here is the list of open source projects which can be used to create your own CDN
Jet-Stream, a pioneer of federation, is a strong supporter of standards and has been involved in ETSI and IETF efforts to standardize interconnectivity between CDNs. In a utopian world, all CDNs would support common, open and free standards to interconnect, and those standards would support all advanced features between CDNs. Realistically, however, we can only conclude that CDN interconnection efforts have not produced any real-world implementations (let alone operational usage) after many years. The pilots that have run show only that the lowest possible common denominator has been standardized. The costs for CDN vendors and CDN operators to implement these standards can be very high: everyone has to rewrite large portions of their code and redesign their architectures. In addition, the pace of the industry is brutal. New technologies and business opportunities emerge every few months, and standards will simply keep lagging for years. The business case for CDN federation has already been shown to be weak, and the need for CDN interconnection has been undercut by the availability of software CDNs that can be extended to cloud and virtual footprints all over the world within hours, at low cost and with the full feature stack of your own CDN. Another serious threat is that vendors are trying to get specific innovations into standards, locking the standards into proprietary, expensive (and already outdated) patented technologies. We need a faster, more flexible, more open, more pragmatic approach.
Akamai Technologies said it has deployed FastTCP broadly across its network, the Akamai Intelligent Platform, improving the delivery of IP content on behalf of its clients, notably including over-the-top distributors of video.
FastTCP optimizes the throughput of video and other digital content across IP networks. Akamai picked up the technology with its 2012 acquisition of FastSoft.
The algorithms are designed to improve the Internet’s standard Transmission Control Protocol (TCP) performance for high packet loss or high latency network connections. The resulting improvements in streaming media bitrates and reduced file download time help provide an enhanced, higher-quality user experience to Akamai’s worldwide base of Sola Media customers.
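Standard TCP’s sensitivity to exactly those conditions is captured by the well-known Mathis approximation, throughput ≈ MSS / (RTT · √loss). A quick calculation shows why high-latency, lossy paths are the target for replacement congestion-control algorithms like FastTCP’s:

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation for steady-state standard TCP throughput."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

# 1460-byte segments, 100 ms RTT, 1% packet loss: roughly a 1.2 Mbps
# ceiling for a single connection, regardless of the link's raw bandwidth.
print(round(tcp_throughput_bps(1460, 0.100, 0.01) / 1e6, 2))
```

Halving the loss rate or the RTT raises that ceiling substantially, which is why loss- and latency-tolerant congestion control translates directly into higher streaming bitrates.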
Google today took the wraps off a new experimental protocol called Quick UDP Internet Connections (QUIC) and added it to Chrome Canary, the latest version of its browser. QUIC includes a variety of new features, but the main point is that it runs a stream multiplexing protocol on top of UDP instead of TCP.
Google says it has been working on both a QUIC client implementation and prototype server implementation for the past few months. While early tests of UDP connectivity have been promising, the company says it has learned from past experience “that real-world network conditions often differ considerably.”
As such, Google is looking to test the pros and cons of the QUIC design in the real world by experimenting with it for a small percentage of Chrome Canary and dev channel traffic to some Google servers. “Users shouldn’t notice any difference, except hopefully a faster load time,” the company says.
Here are the QUIC highlights Google wants to emphasize right now:
- High security similar to TLS.
- Fast (often 0-RTT) connectivity similar to TLS Snap Start combined with TCP Fast Open.
- Packet pacing to reduce packet loss.
- Packet error correction to reduce retransmission latency.
- UDP transport to avoid TCP head-of-line blocking.
- A connection identifier to reduce reconnections for mobile clients.
- A pluggable congestion control mechanism.
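The connection-identifier item is easy to illustrate: if a session is keyed by an ID carried in every datagram rather than by the IP/port 4-tuple, a client that changes address (say, moving from Wi-Fi to cellular) keeps its session. This is a toy sketch of the idea, not QUIC’s actual wire format:

```python
import struct

sessions = {}  # connection_id -> session state

def handle_datagram(datagram: bytes, client_addr: str) -> int:
    """Look up the session by an 8-byte connection ID, not by source address."""
    conn_id, = struct.unpack_from("!Q", datagram, 0)
    session = sessions.setdefault(conn_id, {})
    session["addr"] = client_addr  # roaming just updates the address
    return conn_id

packet = struct.pack("!Q", 42) + b"payload"
handle_datagram(packet, "192.0.2.10:5000")    # first seen on Wi-Fi
handle_datagram(packet, "198.51.100.7:5000")  # same connection after roaming
print(len(sessions))  # still one logical connection
```

With TCP, the second datagram would have been a brand-new connection, forcing a full reconnect and handshake.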
In other words, QUIC is yet another protocol that Google is building to help speed up the Web. It has already done so notably with its SPDY protocol, which is now the foundation of the upcoming HTTP 2.0 protocol.
Last week Netflix released ice, a publicly available tool that offers a more granular look at Amazon Web Services usage and the associated costs than Amazon itself provides. Netflix, which accounts for a third of all North American Internet traffic on any given night, relies on Amazon's cloud platform because it's the only company with the capacity to deliver all of that data to customers.
But when AWS runs into problems, like when its Elastic Load Balancer that routes network traffic went down, causing an embarrassing Christmas Eve outage of Netflix's streaming service, there is no recourse. As a result, Netflix is forced to create its own tools for dealing with problems it encounters in AWS.
Netflix clearly has learned from its Christmas Eve outage, which involved the failure of Amazon Web Services’ Elastic Load Balancing service, and has created a tool called Isthmus to solve the problem.
In a Friday blog post, Netflix’s Ruslan Meshenberg explained that Isthmus manages Elastic Load Balancing services in multiple regions in order to keep latency low for users in the event that ELB goes down in one region. On Christmas Eve, the issue was that state data got deleted, causing problems for the control plane tasked with managing load-balancer configuration and bringing down some ELB load balancers. Now that sort of error should be less likely to affect Netflix service.
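The core idea behind such a tool can be sketched simply (this is a hypothetical illustration, not Netflix’s actual implementation): keep ELB endpoints in several regions and route users to the first healthy one in priority order.

```python
# Hypothetical multi-region ELB endpoints with current health flags.
regions = [
    {"name": "us-east-1", "endpoint": "elb-east.example.com", "healthy": False},
    {"name": "us-west-2", "endpoint": "elb-west.example.com", "healthy": True},
]

def pick_endpoint():
    """Return the endpoint of the first region that is currently healthy."""
    for region in regions:
        if region["healthy"]:
            return region["endpoint"]
    raise RuntimeError("no healthy region available")

# With us-east-1 down (as in the Christmas Eve scenario), traffic
# shifts to the west-coast load balancers instead of failing outright.
print(pick_endpoint())  # elb-west.example.com
```

The hard parts in practice are health detection and keeping application state reachable from the surviving region, which is where most of the engineering effort goes.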
The next-generation of CDN needs to give service providers much greater control over bandwidth priorities in the home, Alcatel-Lucent believes. If there are multiple devices competing for bandwidth, you need to be able to say that a connected television needs priority for streaming and that a smartphone should be given a lower bitrate stream, for example.
“Service providers need a way to control the quality that is delivered to every screen and that is not possible today with adaptive bitrate streaming,” Mestric explains. “So in the CDN we will include a session manager that makes the CDN aware of all the different devices and session requests so that if you start to watch video on an iPhone and there is no congestion in the access network you will get the highest bitrate possible, but if someone else in the family turns on a connected TV device the service provider can then limit the bitrate profiles that can be accessed by the iPhone, for instance.”
Mestric points out that session prioritization could be useful if there is contention on mobile networks. Then a content provider could prioritize a premium subscriber over a basic subscriber so they get the higher bitrate if there is no chance to provide them both with the best possible experience.
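A session manager of that sort might allocate bitrates roughly like this (a hypothetical sketch, not Alcatel-Lucent’s product): rank the active sessions by priority, then give each one the best profile that still fits the remaining access-link capacity.

```python
# Available ABR profile ladder in kbps, plus hypothetical active sessions
# as (device, priority rank) pairs, lower rank = higher priority.
profiles = [1200, 2500, 5000]
sessions = [("connected-tv", 1), ("iphone", 2)]

def allocate(capacity_kbps):
    """Assign each session the best profile that fits what capacity remains."""
    plan, remaining = {}, capacity_kbps
    for device, _ in sorted(sessions, key=lambda s: s[1]):
        fitting = [p for p in profiles if p <= remaining]
        plan[device] = max(fitting) if fitting else 0
        remaining -= plan[device]
    return plan

# On an 8 Mbps line the TV takes the top profile and the iPhone is capped.
print(allocate(8000))  # {'connected-tv': 5000, 'iphone': 2500}
```

The point of doing this in the CDN rather than on the client is exactly the one Mestric makes: individual adaptive-bitrate players can’t see each other, so only a network-side session manager can enforce this kind of policy.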
As we continue to evolve our mobile treatments, we also monitor their effectiveness alongside other optimization solutions. Today I want to call out some interesting results we noted when, as a fun little in-house exercise, we took the O’Reilly website, de-optimized it, and then iterated through a handful of core performance best practices using our FEO service. The goal was to demonstrate the acceleration benefit (in terms of bytes in, start render time, document complete time, connections, and resources) of each practice for a typical 3G mobile user.
While we saw predictable results for step 1 — enabling keep-alives and compression — we were somewhat surprised by what we saw when we added a content delivery network.
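The compression half of step 1 is easy to quantify on its own. Gzipping a repetitive HTML payload (a crude stand-in for a real page) shows why it reliably cuts bytes in for a 3G user:

```python
import gzip

# A crude stand-in for an uncompressed HTML page: markup is highly repetitive.
html = ("<div class='product'><span>Item</span></div>\n" * 200).encode()

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)

# Repetitive markup routinely compresses to well under 10% of its original
# size, so far fewer bytes cross the slow mobile link.
print(len(html), ratio < 0.1)
```

Keep-alives attack the other half of the problem, connection setup cost, which on high-latency 3G links often dominates the transfer time of small resources.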