You have a terrible headache: how do you design a scalable architecture on Amazon Web Services quickly and effectively? How do you test an architectural approach to a given problem and share it easily with your colleagues? I have an answer for you: MadeiraCloud.
But what is MadeiraCloud, and how can it solve these problems?
It's a Visual, Integrated Management Environment for AWS, where you can design, deploy, manage, monitor, back up, recover, redeploy, schedule, and finally share your cloud infrastructure.
Netflix yesterday pushed version 1.1 of its Denominator DNS automation system onto its public GitHub account. This open-source tool allows developers and administrators alike to automate DNS migrations, changes and rules without having to edit DNS records by hand. With this release, the project now includes geographic-based controls for DNS routing, and thus is nearly ready to be deployed internally at Netflix.
When a developer is hired at Netflix, they learn a few different ways of doing things off the bat. For starters, there are very few managers, and all developers are senior level. Secondly, developers tend to find their own niches rather than have one assigned to them. So it went for Adrian Cole, who joined Netflix in December and found himself running a brand new open-source project by January.
That project, proposed by Adrian Cockcroft (Netflix’s cloud architect), came to be called Denominator. “Some of the outages we had last year could be solved by running multiple [cloud] regions,” he said. “We were looking for ways to direct our customers to more than one region. We do that by managing DNS, but we can't have a hand-managed configuration the way most people do DNS; we needed a RESTful API.”
Netflix chief cloud architect Adrian Cockcroft shares five money-saving maneuvers for big Amazon Web Services users.
Instead of trying to build out its own data centers for its rapidly expanding film and video distribution business, Netflix finds the better strategy is to use Amazon Web Services' cloud resources. At Cloud Connect 2013, the architect of that strategy disclosed some of his secrets for optimizing use of the Amazon cloud.
From the PeerCDN website:
“PeerCDN automatically serves a site’s static resources (images, videos, and file downloads) over a peer-to-peer network made up of the visitors currently on the site. Offloading part of the web hosting burden to site visitors reduces bandwidth costs.”
- Protect and store your cryptographic keys with industry standard, tamper-resistant HSM appliances. No one but you has access to your keys (including Amazon administrators who manage and maintain the appliance).
- Use your most sensitive and regulated data on Amazon EC2 without giving applications direct access to your data's encryption keys.
- Store and access data reliably from your applications that demand highly available and durable key storage and cryptographic operations.
- Use AWS CloudHSM in conjunction with your compatible on-premises HSMs to replicate keys between on-premises HSMs and CloudHSMs. This increases key durability and makes it easy to migrate cryptographic applications from your datacenter to AWS.
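The second bullet above — using sensitive data on EC2 without giving applications direct access to the encryption keys — is essentially envelope encryption. Below is a minimal Python sketch of that control flow under stated assumptions: the `ToyHSM` class and the XOR keystream cipher are stand-ins for illustration only, not the CloudHSM API (a real system would use AES and the appliance's PKCS#11 interface).

```python
import hashlib
import os

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher for illustration only -- a real system uses AES.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

class ToyHSM:
    """Stands in for a CloudHSM appliance: the master key never leaves it."""
    def __init__(self):
        self._master_key = os.urandom(32)  # not readable by the application

    def wrap(self, data_key: bytes) -> bytes:
        return xor_cipher(self._master_key, data_key)

    def unwrap(self, wrapped: bytes) -> bytes:
        return xor_cipher(self._master_key, wrapped)

# Application side: encrypt a record; only the *wrapped* key is persisted.
hsm = ToyHSM()
data_key = os.urandom(32)                 # short-lived, per-record key
record = b"sensitive payroll row"
ciphertext = xor_cipher(data_key, record)
wrapped_key = hsm.wrap(data_key)          # safe to store beside the ciphertext

# Later: the HSM unwraps the key so the record can be decrypted locally.
recovered = xor_cipher(hsm.unwrap(wrapped_key), ciphertext)
```

The point of the pattern is that the application handles a per-record data key only transiently, while the master key that protects all data keys stays inside the appliance.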
Nicolas Weil's insight:
A potentially disruptive service for ensuring highest security of video source contents in cloud processing workflows.
Amazon Web Services will now offer the option for everyone to have their own virtual private cloud (VPC), another sign of the company’s intent to push into the enterprise market. The service means that every customer using EC2 will see the option for a VPC as an instance type. Until now, the VPC was a separate service.
A VPC lets customers create what AWS calls a “virtual network of logically isolated EC2 instances and an optional VPN connection to your own datacenter.” That development has implications for customers who are weighing the benefits of a data-center centric approach that virtualizes a network of physical centers to create their own elastic infrastructure. The problem comes down to the cost of licensing, new hardware and the IT staff to manage it all. It’s a model promoted by companies like VMware, which are looking to extend the reach of their virtualization technology.
Broadpeak today launched umbrellaCDN™, an innovative solution that offers content providers and pay-TV operators complete control over the allocation of video content to multiple CDNs. Utilizing umbrellaCDN, users can select the best CDN for live or VOD content at all times — according to various criteria — enabling them to provide the best possible quality of experience for end-users at the best possible cost.
Content providers can create parameter-based rules according to a wide range of criteria, including end-user geo-location, end-user ISP, type of content (e.g., live or VOD, pay or free, premium or trailer), and time of day or day of the week. Rules are managed through three distinct modes: quality of service, where the chosen CDN is the one that offers the best quality in a given region; load balancing, whereby sessions are divided between several CDNs; and quotas, with a maximum number of sessions allocated to each CDN.
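The rule model described above can be sketched as an ordered, first-match rule table. This is a hypothetical illustration of the idea, not Broadpeak's implementation; the CDN names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class Session:
    geo: str      # end-user geolocation, e.g. "FR"
    isp: str      # end-user ISP
    content: str  # "live" or "vod"
    hour: int     # local hour of day, 0-23

# Ordered rules: the first predicate that matches picks the CDN.
RULES = [
    (lambda s: s.geo == "FR" and s.content == "live", "cdn-a"),  # best QoS in FR
    (lambda s: 18 <= s.hour <= 23,                    "cdn-b"),  # prime-time capacity
]
DEFAULT_CDN = "cdn-c"

def select_cdn(session: Session) -> str:
    for predicate, cdn in RULES:
        if predicate(session):
            return cdn
    return DEFAULT_CDN
```

Load-balancing and quota modes would replace the single CDN name on the right-hand side with a weighted or capacity-tracked choice, but the per-session decision point stays the same.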
Five years after the disclosure of a serious vulnerability in the Domain Name System dubbed the Kaminsky bug, only a handful of U.S. ISPs, financial institutions or e-commerce companies have deployed DNS Security Extensions (DNSSEC) to alleviate this threat.
In 2008, security researcher Dan Kaminsky described a major DNS flaw that made it possible for hackers to launch cache poisoning attacks, where traffic is redirected from a legitimate website to a fake one without the website operator or end user knowing.
While DNS software patches are available to help plug the Kaminsky hole, experts agree that the best long-term fix is DNSSEC, which uses digital signatures and public-key encryption to allow websites to verify their domain names and corresponding IP addresses and prevent man-in-the-middle attacks.
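DNSSEC proper attaches public-key signatures (RRSIG records validated against DNSKEY records up a chain of trust). The Python sketch below substitutes a shared-secret HMAC purely to show the validate-before-trust flow that defeats cache poisoning; the key and records are invented for illustration.

```python
import hashlib
import hmac

ZONE_KEY = b"example-zone-signing-key"  # stand-in for the zone's real key pair

def sign_record(name: str, rtype: str, rdata: str) -> str:
    """Zone operator signs each record it publishes."""
    message = f"{name}|{rtype}|{rdata}".encode()
    return hmac.new(ZONE_KEY, message, hashlib.sha256).hexdigest()

def verify_record(name: str, rtype: str, rdata: str, signature: str) -> bool:
    """Resolver validates an answer before caching or returning it."""
    return hmac.compare_digest(sign_record(name, rtype, rdata), signature)

# The zone publishes the record together with its signature.
sig = sign_record("www.example.com", "A", "93.184.216.34")

# A genuine answer validates; a poisoned answer that swaps in an
# attacker-controlled IP cannot produce a matching signature.
genuine = verify_record("www.example.com", "A", "93.184.216.34", sig)
poisoned = verify_record("www.example.com", "A", "203.0.113.66", sig)
```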
We announced Denominator at our February NetflixOSS meetup, and now we are ready to release the first code as open source. Denominator is a portable Java library for manipulating DNS clouds. Denominator has pluggable back-ends, initially including AWS Route53, Neustar UltraDNS, DynECT, and a mock for testing. We also ship a command-line version so it's easy for anyone to try it out.

The reason we built Denominator is that we are working on multi-region failover and traffic-sharing patterns to provide higher availability for the streaming service during regional outages caused by our own bugs and AWS issues. To do this we need to directly control the DNS configuration that routes users to each region and each zone. When we looked at the features and vendors in this space we found that we were already using AWS Route53, which has a nice API but is missing some advanced features; Neustar UltraDNS, which has a SOAP-based API; and DynECT, which has a REST API that uses a quite different pseudo-transactional model. We couldn't find a Java-based API that grouped together the common set of capabilities we are interested in, so we created one. The idea is that any feature supported by more than one vendor API is the highest common denominator, and that functionality can be switched between vendors as needed, or in the event of a DNS vendor outage.
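The "highest common denominator" idea can be sketched as a vendor-neutral front end over interchangeable backends. This is a hypothetical Python illustration of the pattern, not Denominator's actual Java API; the class names, zone and IPs are invented.

```python
class MockProvider:
    """In-memory DNS backend, analogous in spirit to a mock for testing."""
    def __init__(self):
        self._records = {}

    def put_record(self, zone: str, name: str, rtype: str, rdata: str) -> None:
        self._records[(zone, name, rtype)] = rdata

    def get_record(self, zone: str, name: str, rtype: str):
        return self._records.get((zone, name, rtype))

class DNSManager:
    """Vendor-neutral front end: any backend exposing the same two
    methods (Route53, UltraDNS, DynECT adapters...) plugs in here."""
    def __init__(self, provider):
        self.provider = provider

    def route_to_region(self, zone, name, region_ips, region) -> str:
        ip = region_ips[region]
        self.provider.put_record(zone, name, "A", ip)
        return ip

regions = {"us-east-1": "198.51.100.10", "eu-west-1": "198.51.100.20"}
manager = DNSManager(MockProvider())
manager.route_to_region("example.com.", "www", regions, "us-east-1")
# During a us-east-1 outage, fail over by repointing the record:
manager.route_to_region("example.com.", "www", regions, "eu-west-1")
```

Because the manager only depends on the shared method surface, the same failover call works against any vendor adapter, which is also what makes switching providers during a DNS vendor outage feasible.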
EVCache is a distributed in-memory caching solution based on memcached & spymemcached that is well integrated with Netflix OSS and AWS EC2 infrastructure. Today we are announcing the open sourcing of the EVCache client library on GitHub.
What is an EVCache App? An EVCache App is a logical grouping of one or more memcached instances (servers). Each instance can be:
- an EVCache Server (to be open sourced soon) running memcached and a Java sidecar app
- an EC2 instance running memcached
- an ElastiCache instance
- any instance that can talk the memcached protocol (e.g., Couchbase, MemcacheDB)
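A core trick in this kind of client is mapping each key to one instance per availability zone, so reads stay zone-local and a zone outage still leaves replicas elsewhere. The sketch below illustrates that idea with simple modulo hashing; the zone names and addresses are invented, and the real EVCache client's selection logic is more sophisticated.

```python
import hashlib

# Hypothetical deployment: one set of memcached instances per availability zone.
ZONES = {
    "us-east-1a": ["10.0.1.10:11211", "10.0.1.11:11211"],
    "us-east-1b": ["10.0.2.10:11211", "10.0.2.11:11211"],
}

def instance_for(key: str, instances: list) -> str:
    """Deterministically map a key to one instance in a zone."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return instances[digest % len(instances)]

def replicas_for(key: str) -> dict:
    # Zone replication: every write lands on one instance per zone,
    # so reads can stay zone-local and survive a zone outage.
    return {zone: instance_for(key, insts) for zone, insts in ZONES.items()}
```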
In a bid to re-architect and improve upon how online video ads are inserted, this morning Akamai is announcing Ad Integration Services. With Ad Integration Services, online video ads are dynamically inserted in the cloud and delivered using Akamai's Sola Sphere delivery network, eliminating the typical client-side process of the video player calling for the ads.
As Kurt Michel, director of product marketing for media solutions, explained to me last week, the traditional process often diminishes the user experience. That's because the content itself and the ad are competing for last-mile bandwidth as the video player transitions from the former to the latter. This contention can lead to delays, which in turn leads to viewer abandonment (if you've ever waited for an ad to play you know about this).
With Ad Integration Services, the player sees just one video stream, as video ads are dynamically inserted from ad decisioning platforms and ad networks in both linear and on-demand programming, across connected devices. Another benefit of Ad Integration Services is that it relieves online video platforms and other video players of the need to constantly be updated to handle changes in ad delivery. With Ad Integration Services, player providers simply integrate a lightweight, IAB-compliant SDK which supports measurement beaconing for the ads.
Since Facebook started open sourcing the datacenter in 2011 in the Open Compute Project (OCP), Facebook and its OCP partners have had some successes in making datacenter computing more open and affordable. Now the OCP takes on what may be its biggest challenge to date: creating open-source, high-end network switches.
As the largest Internet TV network, one of the most interesting challenges we face at Netflix is scaling services to the ever-increasing demands of over 36 million customers from over 40 countries.
Each movie or TV show on Netflix is described by a complex set of metadata. This includes the obvious information such as title, genre, synopsis, cast, maturity rating etc. It also includes links to images, trailers, encoded video files, subtitles and the individual episodes and seasons. Finally there are many tags that are used to create custom genres, such as “upbeat”, “cerebral”, “strong female lead”. These all have to be translated into many languages, so the actual text is tokenized and encoded.
This metadata must be made available for several different services, which each require a different facet of the data. Front-end services for display purposes need links to images, while algorithms that do discovery and recommendations use the tags extensively and search thousands of movies looking for the best few to show to a user. Powering this while utilizing resources extremely efficiently is one of the key goals of our Video Metadata Services (VMS) Platform.
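The faceting idea — each service sees only the slice of metadata it needs — can be sketched as a simple projection. The field names and facet definitions below are invented for illustration and are not VMS's actual schema.

```python
# Hypothetical facet definitions: which metadata fields each service consumes.
FACETS = {
    "frontend":        {"title", "synopsis", "image_links"},
    "recommendations": {"tags", "genre", "maturity_rating"},
}

def facet_view(metadata: dict, service: str) -> dict:
    """Project the full metadata record down to one service's facet."""
    wanted = FACETS[service]
    return {field: value for field, value in metadata.items() if field in wanted}

movie = {
    "title": "Example Show",
    "synopsis": "A worked example.",
    "genre": ["Drama"],
    "tags": ["cerebral", "strong female lead"],
    "maturity_rating": "TV-14",
    "image_links": ["https://example.com/boxart.jpg"],
    "encoded_video_files": ["stream-1080p.mp4"],
}
```

Serving each consumer a pre-cut facet, rather than the full record, is one way to keep memory and serialization costs low when algorithms scan thousands of titles at once.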
Storing, sharing and securing data in the cloud is constantly evolving as clients look for new ways to take advantage of the cloud to add flexibility to, and remove costs from, their IT infrastructures. The Amazon Web Services (AWS) Summit 2013, held this week in San Francisco, is bringing together several of Amazon's technology partners to showcase a variety of new ways to move and secure data in the cloud, as well as while it migrates to and from the cloud or between clouds.
Saffron Digital, which recently joined Akamai’s NetAlliance Partner Program, has leveraged its relationship with the leading cloud platform provider to launch a comprehensive end-to-end UltraViolet (UV) solution. The new solution will enable retailers, content owners, operators and device manufacturers to take advantage of the business opportunities presented by UV. Using a common file format (CFF), UV aims to engage consumers with the ultimate value proposition for premium video content: buy it anywhere and play it on any device you own.
Saffron Digital, which has been offering UV-like digital locker services since 2009, has integrated its secure online video platform with Akamai’s Sola Media Solutions and global CDN to create an “out-of-the-box” UV solution that simplifies and shortens time to market. As part of the relationship, Saffron Digital will develop UV storefronts that have its proprietary CFF player at their core.
I’ve had many conversations and read multiple articles in the past few months in which people try to predict what the future of technology infrastructure will look like. I’m personally excited about this as it brings to light the importance of a well-planned infrastructure, and it also raises infrastructure awareness amongst everyone from the product and marketing teams to the C-Level executives. However, I feel that the prediction of “hybrid,” as well as the overuse of buzzwords, has left us with a wide gap of missing details and context. Over the next few weeks I’d like to dig into the different types of major infrastructure categories and illustrate how you can find your Optimal Infrastructure Profile by using data and available tools.
Amazon’s cloud services are a great thing. They give you instant access to computing power over the web, letting you store data and run applications on demand. But if you want to set up your own cloud service, Amazon can’t help you. You need software like Riak CS.
Riak CS is a file system for storing data, and it’s designed to be fully compatible with Amazon’s popular cloud storage service, S3. It can be used to create private Amazon-style storage systems in your own data center, to build public services that compete with Amazon, or to power web applications of your own creation. And as of Wednesday, it’s open source.
Because content delivery networks (CDNs) are at the heart of next-generation IP video infrastructure, they are the logical place to add personalization. With the right approach, service providers can take advantage of the CDN replication model while delivering personalized content. At the same time, they can ensure the network scales to achieve quality expectations.
There are many good reasons to personalize content, including:
- Targeted ads
- Emergency alerts
- Quality adjustments
To take control of the delivery mechanism, service providers must build a server-side version of the client-based adaptive streaming concept. Introducing two new components will bring more intelligence and processing power into the CDN to improve overall HTTP behavior and performance:
- A session manager retrieves the contextual information needed to customize the content and tells the cache in the CDN which changes to apply.
- A video processor in the cache generates the new content to be sent to end users, based on the original content and the information delivered by the session manager.
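The division of labor between the two components can be sketched as follows. This is a hedged illustration of the concept, not any vendor's implementation; the ad segment name, bitrate thresholds and context fields are invented.

```python
def session_manager(context: dict) -> dict:
    """Turn a session's contextual information into instructions for the cache."""
    changes = {}
    if context.get("geo") == "US":
        changes["preroll_ad"] = "ad-us-001.ts"      # hypothetical ad segment
    if context.get("bandwidth_kbps", 10_000) < 1500:
        changes["max_bitrate_kbps"] = 1200          # quality adjustment
    return changes

def video_processor(segments: list, variants_kbps: list, changes: dict):
    """Inside the CDN cache: derive personalized content from the cached original."""
    out = list(segments)
    if "preroll_ad" in changes:
        out.insert(0, changes["preroll_ad"])        # splice the ad server-side
    cap = changes.get("max_bitrate_kbps", max(variants_kbps))
    return out, [v for v in variants_kbps if v <= cap]
```

The cache still serves one replicated original; only the cheap last step of assembling the per-user variant is personalized, which is what preserves the CDN's scaling model.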
Nine months ago, Amazon launched their new dynamic content delivery service in beta, and two weeks ago I posted details on how their product is coming along and outlined new features they have added since launch. Ever since Amazon announced their new service, people keep asking me if Amazon will disrupt Akamai’s DSA business and drive pricing down in the market as a whole. While Amazon still has a long way to go before that has the possibility of happening, make no mistake: they are gunning for Akamai, even though they won’t call out Akamai by name.
From purely a features standpoint, Akamai’s dynamic content delivery service still trumps Amazon’s by a wide margin. But that’s not going to last for too long, and little by little, Amazon is already starting to close the gap. Amazon is working on rolling out a lot more functionality, and while they will never be able to compete for 100% of Akamai’s DSA business, they will continue to get stronger and go after more of the business that doesn’t require a ton of professional services. Amazon will never compete for 100% of the market, but they don’t need to. All they have to do is continue to roll out features, add an SLA, lower pricing and prove their dynamic content acceleration service performs well and scales reliably. Of course, building all of that out to scale doesn’t happen overnight, but it’s not as hard as some vendors make it out to be, and Amazon has the resources to make it happen and, more importantly, multiple lines of business to generate revenue from. Amazon can be patient and grow the business over time.
Many applications require that each HTTP request from a particular client be directed to the same server or cloud instance. This is called session persistence, and it is common with web applications that do not share application “state” between back-end servers or data stores.
In the past, this has been solved using local load balancers such as those provided by F5 or Brocade, or more recently with the Elastic Load Balancer from Amazon. However, as application architects look to serve a global audience and deploy their applications to multiple data centers, it becomes necessary to ensure session persistence at both a global and a local level.
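At the local tier, sticky routing is often implemented by hashing a stable client identifier onto a ring of servers, so the mapping survives server additions with minimal churn. A minimal consistent-hashing sketch, assuming hypothetical server names and a client ID taken from, say, a cookie or source IP:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map each client to a stable server; adding a server remaps few clients."""
    def __init__(self, servers, vnodes=100):
        # Each server gets many virtual points on the ring for even spread.
        self._ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def server_for(self, client_id: str) -> str:
        # Walk clockwise from the client's point to the next server point.
        point = self._hash(client_id)
        index = bisect.bisect(self._ring, (point, "")) % len(self._ring)
        return self._ring[index][1]
```

At the global tier, the same stickiness has to come from DNS-level routing (e.g., GeoDNS answers that keep a client pinned to one data center), since the load balancer in one region never sees requests that DNS sent elsewhere.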
Included in the new version of Kona Site Defender are upgraded Web Application Firewall (WAF) capabilities and network layer controls, new user validation capabilities and improved configuration and automation tools that speed both initial deployment and response time to changing attacks. Further, Akamai has developed Application Programming Interfaces (APIs) and other modifications to Kona Site Defender. These are designed to make the technology easier to use by Managed Security Services Providers (MSSP) as well as to facilitate tighter integration with existing on-premises security technology.
Kona Site Defender is an always-on cloud-based web security solution designed to protect an enterprise's most critical online business functions against attacks that can result in millions of dollars in lost transactions and business productivity each year, and even greater harm to brand value and reputation. Using the Akamai Intelligent Platform™ as its foundation, the solution offers highly flexible and scalable protection – that does not negatively impact performance – to customers against a variety of attack vectors including DDoS, as well as web application attacks such as SQL injection, Cross Site Scripting and others.
An approach being researched by the content delivery network provider helps reduce the grid-connected load during peak demand periods, when electricity is most expensive.
The approach, researched by Akamai fellow and University of Massachusetts professor Ramesh Sitaraman, proposes using smart batteries within internet-scale distributed networks. The batteries would automatically begin supplying power when server loads hit specified peak levels, reducing the need for grid-connected power during those periods, when power usually costs more. They are recharged or replenished during the night, when server loads and electricity rates are usually at their lowest.
The batteries could be built into the servers themselves or into the rack. The research currently focuses on lead-acid technology, but it could be applied to lithium-ion models as well, according to Sitaraman.
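The dispatch policy described can be sketched as a simple hourly simulation: discharge to cover load above a threshold, recharge when load drops below it. The numbers and units here are illustrative, not taken from the research.

```python
def simulate_grid_draw(loads_kw, threshold_kw, capacity_kwh):
    """One step per hour: the battery covers load above the threshold
    and recharges (without pushing grid draw over the threshold) when
    load is below it. Returns the grid draw per hour."""
    charge = capacity_kwh  # start fully charged
    grid_kw = []
    for load in loads_kw:
        if load > threshold_kw:
            discharge = min(load - threshold_kw, charge)  # shave the peak
            charge -= discharge
            grid_kw.append(load - discharge)
        else:
            recharge = min(capacity_kwh - charge, threshold_kw - load)
            charge += recharge
            grid_kw.append(load + recharge)
    return grid_kw
```

With enough battery capacity for the peak, grid draw never exceeds the threshold, which is exactly the peak-demand charge the approach aims to avoid.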
Each quarter, Akamai publishes its "State of the Internet" report. The report includes data gathered across Akamai's global server network about attack traffic, average and maximum connection speeds, Internet penetration and broadband adoption, and mobile usage, as well as trends seen in this data over time.