SDN, NFV and cloud in mobile
A collection of articles and posts on how virtualization and software defined networking will affect core and RAN mobile networks.
Curated by Patrick Lopez
Scooped by Patrick Lopez!

OpenStack, CORD Remain Central to AT&T Virtualization Plans

AT&T may have missed its year-end 2016 goal for deploying its OpenStack-based AT&T Integrated Cloud (AIC) zones, but the carrier remains fully committed – for now – to the OpenStack platform. 

The AIC zones are physical locations where the carrier runs virtual network functions (VNFs). Andre Fuetsch, president of AT&T Labs and CTO at the carrier, described them as where it’s “bringing the network to the cloud.”

“OpenStack is still a critical component to our architecture,” Fuetsch said. “We’ve made a significant investment in development resources contributing to the community, either directly through us or through proxy of developers that we hire.”

AT&T had initially planned to have more than 100 AIC zones by the end of last year, but ended up with just more than 80 zones deployed. The carrier now plans to hit more than 100 AIC zones by the end of 2017.

The OpenStack project formed in 2010 as an open source effort to write software for a cloud operating system. The platform controls large pools of compute, storage, and networking resources in a data center, all managed through a dashboard. OpenStack started as a joint project of Rackspace and NASA. The OpenStack Foundation took over the project in 2012, and more than 500 companies have since joined.

OpenStack has carved out an important role in the telecom market’s migration to virtualized platforms, as carriers have realized the platform’s benefits in support of their cloud, network functions virtualization (NFV), and software-defined networking (SDN) efforts.

Despite some recent concerns over the scalability of OpenStack, AT&T indicated it remains committed to the community. “Certainly some have been concerned about OpenStack in terms of its scaling and complexity, but we’re far enough down the road with OpenStack that we are fully committed,” Fuetsch said. “We absolutely look at a very long horizon in terms of where things are going and are always evaluating alternatives and options, but for now we are all in.”
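
Those pools of compute, storage, and networking are reached through authenticated OpenStack API endpoints. As a purely illustrative sketch (every name and value below is a placeholder, not AT&T's configuration), a minimal `clouds.yaml` for the standard OpenStack client tools might look like this:

```yaml
# Illustrative clouds.yaml for the OpenStack client tools.
# All endpoints and credentials are placeholders.
clouds:
  aic-zone-example:
    auth:
      auth_url: https://keystone.example.net:5000/v3
      username: vnf-operator
      password: change-me
      project_name: vnf-hosting
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

With a file like this in place, a command such as `openstack server list --os-cloud aic-zone-example` would query that zone's compute pool.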

CORD in Support of 5G

AT&T is also fully behind efforts surrounding the Central Office Re-architected as a Data Center (CORD) program. The initiative is targeted at evolving data centers toward greater software control in support of advanced services like 5G.

“What we like about this for mobile operators with large infrastructure in place is that, with the move to 5G, it puts them into a position to support a more cloud-like architecture with the radio and mobile packet infrastructure,” Fuetsch explained. “It puts you into a position to offer these real-time services like AR [augmented reality], VR [virtual reality], and autonomous cars. It’s about how you can build out that distributed cloud, abstract it, and make it available for more use cases, for a larger development community.”

CORD combines NFV and SDN to bring data center economics and cloud agility to the telco central office. On.Lab‘s Open Network Operating System (ONOS) last year established CORD as a separate open source entity. AT&T was an early supporter of the CORD program, teaming with a number of companies to produce proofs of concept in late 2015.

IHS Markit late last year released a survey that found 70 percent of respondents plan to deploy CORD in their central offices. That included 30 percent planning those deployments for the end of 2017, and the remaining 40 percent planning for deployment by the end of 2018.


CenturyLink Surges Forward with Its Own Version of CORD

CenturyLink is on a tight deadline with its commitment to fully virtualize its IP core network by the end of 2019. As part of that, the service provider is emulating some of the open source communities’ work to create a Central Office Re-architected as a Data Center (CORD).

The company announced today that it is the first carrier to use its own virtualized Broadband Network Gateway (vBNG) to support broadband services over DSL to its residential and business customers.

Adam Dunstan, CenturyLink’s VP of SDN and NFV engineering, said the company developed the vBNG in-house. It’s built on an Intel white box server. Initially, CenturyLink has deployed the vBNG in some of its central offices and data centers in Minnesota.

Dunstan said a lot of CORD work has focused on fiber connectivity such as GPON for the access connection. But no work, outside of CenturyLink, has been done around DSL. “We need to provide DSL and GPON to customers,” he said. “We have a significant footprint for both.”

Ultimately, the company wants to use CORD to unify its access technologies across a common infrastructure. But it’s starting with DSL and will continue to roll out its vBNG for DSL throughout 2017.

“We built a CORD system using a set of components; some are similar and some are different than the [ONOS] working group,” said Dunstan. “We didn’t use software from ONOS.”

He added: “We used a number of components that are open source and generally available — OpenStack, OpenDaylight controllers, Intel software toolkits. Just because you’re doing CORD doesn’t mean you have to do ONOS CORD.”

CenturyLink built its SDN access controller using OpenDaylight software. As for ONOS, Dunstan said, “The ONOS controller is a piece of software that controls virtual routers. There’s a bunch of other good container managers already. And we’re not yet moving the CPE from the premises into the central office. We didn’t need to use any of those components.”

CenturyLink did a full-stack test of its vBNG in production. The test included activation, forwarding, talking to walled gardens, and accessing the company’s authorization and provisioning systems.

“We made no changes to provisioning and authorization to deploy this,” said Dunstan. “It’s been in and out of production. We’re moving it to another location at the moment.”

“Our CORD deployment is a significant milestone on our path to achieve full network virtualization,” said Aamir Hussain, CenturyLink’s CTO, in a statement.

In addition to virtualizing its infrastructure, CenturyLink continues to develop and implement virtualized services, including a virtual firewall, data center interconnection and software-defined wide area networking (SD-WAN) for enterprise customers.


Facebook Updates Its Server Designs, Open Compute Platforms

Facebook, one of the driving forces behind open computing, this week updated its designs for an open storage chassis, an open server, and several other open compute platforms.

They are freely available to anyone through the Open Compute Project (OCP). The announcement was one of the highlights capping the annual OCP conference in Santa Clara this week.

Demands on data centers are forever increasing. In Facebook’s case, the company is seeing an inexorable, rapid increase in the use of photos and videos that it has to keep up with. So, although Facebook wants open hardware, that hardware is getting progressively more complex. The upgrades announced this week are all either denser, faster, more powerful, more configurable, or some combination thereof.

Facebook said it plans to refresh its entire server hardware fleet with equipment based on the updated designs.

“As photos and videos become even more central to the way people connect and share with each other, the efficiencies we gain help us scale and improve the speed of our infrastructure,” a Facebook spokesman said in an email exchange.

Asked if Facebook is intimating a new wave of spending to get the new hardware installed on some accelerated schedule, a Facebook spokesman explained to SDxCentral that the company started work on the refresh more than a year ago. The company continuously updates its hardware and the update schedule hasn’t changed.

The Updates
Facebook’s new Bryce Canyon storage chassis will improve upon its predecessor, Open Vault, in several ways. Among them, it will support 20 percent more HDD (hard disk drive) capacity and four times the memory footprint, and it will have better thermal performance – a critical consideration.

Another update is Big Basin, the successor to the company’s Big Sur GPU server, a specialized bit of equipment that lacks general compute and networking capabilities and so must work in tandem with another server. Facebook uses this class of GPU servers to train neural networks. The company said Big Basin can train models that are 30 percent larger than Big Sur. It can also render impressive throughput increases. The practical result is the ability to tackle more complex problems and get results faster.

Tioga Pass is the successor to the company’s Leopard, a server Facebook said it uses for several tasks, including working with the Big Basin GPU server. The new spec allows for more options for setting up memory and compute resources, so that it can be configured for different applications.

Yosemite v2 is a refresh of Yosemite, the company’s first-generation multi-node compute platform. The design still holds four single-socket (1S) server cards, but previously, if one card needed to be swapped out, all four had to be powered off. A key advantage of the new design is that the other three cards can keep running while the fourth is swapped out.


OPNFV set to release fourth NFV platform under ‘Danube’ moniker

Open source-focused NFV organization OPNFV says its next release will drop in late March or early April, bolstering the telecom move toward software platforms.

The Linux Foundation’s Open Platform for NFV project is looking to unveil a new platform release over the next two months, which will be the fourth platform targeting open source deployments of network functions virtualization.

OPNFV said the pending release is set to be unveiled around the late March or early April timeframe and will be dubbed “Danube,” which keeps with its river-based naming theme. Previous releases included the initial “Arno” release in mid-2015, “Brahmaputra” in early 2016, and “Colorado” last September.

The Colorado release included updates targeted at accelerating the development of NFV applications and services by enhancing security, IPv6 support, service function chaining, testing VPN capabilities and support for multiple hardware architectures. The organization noted the updates followed collaboration with upstream communities and were integrated into the “automated install/deploy/testing framework.”
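
Service function chaining, one of the features the release targets, steers traffic through an ordered list of virtual network functions. A minimal sketch of the idea in Python (the two VNFs below are invented stand-ins for illustration, not OPNFV code):

```python
# Toy service function chain: a packet traverses each VNF in order.

def firewall(packet):
    """Drop packets aimed at a blocked destination port."""
    if packet["dst_port"] in {23, 445}:   # e.g. telnet, SMB
        return None                        # drop the packet
    return packet

def nat(packet):
    """Rewrite a private source address to a public one."""
    if packet["src"].startswith("10."):
        packet = dict(packet, src="203.0.113.1")
    return packet

def apply_chain(packet, chain):
    """Steer the packet through each function; stop if it is dropped."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

chain = [firewall, nat]
print(apply_chain({"src": "10.0.0.5", "dst_port": 80}, chain))  # src rewritten
print(apply_chain({"src": "10.0.0.5", "dst_port": 23}, chain))  # None (dropped)
```

In a real deployment the "functions" are separate virtual machines or containers and the steering is done in the data plane, but the ordering-and-drop semantics are the same.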

“We’re seeing a maturity of process with the Colorado release, reflected by things like achievement of the CII Best Practices badge for security and the growing maturity of our testing and [software development and information technology operations] methodology,” said Chris Price, open source manager for SDN, cloud and NFV at Ericsson, at the time of the Colorado release. “The creation of working groups across [management and orchestration], infrastructure, security and testing also help the project evolve towards a foundational and robust industry platform for advanced open source NFV.”

OPNFV was founded in late 2014, with founding members including the likes of AT&T, China Mobile, Cisco, NTT DoCoMo and Vodafone.

OPNFV late last year formed its End User Advisory Group tasked with providing technical guidance to the OPNFV developer community working to bring NFV platforms to the telecom space. The advisory group includes representation from AT&T, British Telecom, CableLabs, China Mobile, China Unicom, Cox Communications, Deutsche Telekom, Fidelity Investments, Liberty Global, KDDI, Orange, SK Telecom, Sprint, Telecom Italia, Telefónica, Telia Company and Vodafone Group.

In general, telecom operators seem to welcome the help from open source organizations, noting their ability to provide a level of stability assurance for platforms.

“Certainly we are the benefactors of the work that those organizations do,” said John Isch, director of network and voice practice, at Orange Business Services Network and Voice Center of Excellence. “In an ideal world any virtual network function works on any open source system and those organizations hopefully get us closer to that nirvana. In today’s world it’s anything but plug-and-play with VNFs. There is a great deal of testing that needs to be done to ensure a VNF will work with a given orchestration platform. We believe this will only improve from here through the work of these organizations and pressure from the carrier industry.”

OPNFV last June released results of a survey that found an increasingly small percentage of telecom operators had not yet planned for NFV. The survey, which was conducted for OPNFV by Heavy Reading and released at last year’s OPNFV Summit, noted 6% of the more than 90 telecom operators questioned did not have an NFV strategy planned at all, down from 14% in September 2015.

World's first ETSI NFV Plugfest

As everyone in the telecom industry knows, the transition from standard to implementation can be painful, as vendors and operators translate technical requirements and specifications into code. There is always room for interpretation, and the desire to innovate or differentiate can lead to integration issues. Open source initiatives have been able to provide viable source code for implementations of elements and interfaces, and they are a great starting point. Specific vendor and operator implementations still need to be validated, however, and it is necessary to test that integration needs are minimal.

Network Functions Virtualization (NFV) is an ETSI standard that is a crucial element of telecom network evolution, as operators look at the transformation necessary to accommodate the hyper-growth resulting from video services moving online and to mobile.

As a member of the organization’s steering committee, I am happy to announce that the 5G open lab 5Tonic will be hosting the world’s first ETSI NFV plugfest from January 23 to February 3, 2017 with the technical support of Telefonica and IMDEA Networks Institute.  

5Tonic is opening its doors to the NFV community, comprising network operators, vendors and open source collaboration initiatives, to test and compare their implementations of Virtual Network Functions (VNFs), NFV Infrastructure (NFVI) and the Virtual Infrastructure Manager (VIM). Additionally, implementations of Management and Orchestration (MANO) functions will also be available.

43 companies and organizations have registered, making this the largest NFV interoperability event in the world.

•           Telefonica
•           A10
•           Cisco
•           Canonical
•           EANTC
•           EHU
•           Ensemble
•           Ericsson
•           F5
•           Fortinet
•           Fraunhofer
•           HPE
•           Huawei
•           Anritsu
•           Intel
•           Italtel
•           Ixia
•           Keynetic
•           Lenovo
•           Mahindra
•           Openet
•           Palo Alto
•           Radware
•           Sandvine
•           Sonus
•           Spirent
•           Red Hat
•           VMware
•           WIND

Open source projects:
•           OSM (Open Source MANO)
•           Open Baton
•           Open-O
•           OPNFV

 OSM is delivering an open source MANO stack aligned with ETSI NFV Information Models. As an operator-led community, OSM is offering a production-quality open source MANO stack that meets the requirements of commercial NFV networks.

Testing will take place on site at the 5TONIC lab near Madrid, as well as virtually for remote participants.

Amdocs Will Contribute Modules to OpenECOMP Platform

Billing and OSS company Amdocs said it will work closely with The Linux Foundation to contribute modules to the Enhanced Control, Orchestration, Management and Policy (ECOMP) platform and accelerate its adoption.

ECOMP was developed by AT&T and is now hosted by the Linux Foundation as an open source project that is available to service providers and cloud developers. Amdocs says it will play a key role in the new Linux Foundation project, which it is now calling OpenECOMP.

Amdocs didn’t provide any specifics on the modules it has developed for OpenECOMP but said that it will help service providers deploy the technology. “Having co-developed important modules to be contributed to OpenECOMP, we are in a position to help customers achieve an early advantage in leveraging OpenECOMP for service innovation,” said Gary Miles, chief marketing officer at Amdocs, in a prepared statement.

Sarah Wallace, director, service enablement, AI & analytics with IHS Markit, said that Amdocs’ involvement in the platform denotes the importance of OSS and BSS in a virtualized environment. “Amdocs partnering with Linux Foundation’s OpenECOMP platform reinforces the market’s need for all functions of OSS and BSS to be exposed and able to work with network functions in a virtualized environment. This is what will enable new and innovative services,” she said.

Two operators besides AT&T have said they will trial ECOMP. In December Bell Canada, which provides communications services to 21 million customers in the provinces of Quebec and Ontario and in the Northwest Territories, said it was testing the platform. And last September Orange said it was also planning to trial ECOMP.

Last week at AT&T’s Developer Summit in Las Vegas, Victor Nilson, senior vice president of Big Data for AT&T, told SDxCentral there are a number of operators interested in using ECOMP. And he said there will be more announcements in the future about those operators. “Operators can either write their own [management and orchestration platform], or they can use ECOMP, which is something that is proven and can scale,” Nilson added.


IHS Markit: 70% of Carriers Will Deploy CORD in the Central Office

Seventy percent of respondents to an IHS Markit survey plan to deploy CORD in their central offices — 30 percent by the end of 2017 and an additional 40 percent in 2018 or later.

The findings come from IHS Markit’s 2016 Routing, NFV & Packet-Optical Strategies Service Provider Survey.

The Central Office Re-Architected as a Data Center (CORD) combines network functions virtualization (NFV) and software-defined networking (SDN) to bring data center economics and cloud agility to the telco central office. CORD garnered so much attention in 2016 that its originator — On.Lab‘s Open Network Operating System (ONOS) — established CORD as a separate open source entity. And non-telcos have joined the open source group, including Google and Comcast.

IHS Markit found that 95 percent of operators surveyed are using or planning to deploy servers and storage in selected central offices to create mini data centers to offer cloud services. And they will use them as the NFV infrastructure on which to run virtual network functions (VNFs).

The survey results are based on interviews with router purchase decision-makers at 20 global service providers that control 36 percent of worldwide telecom capex and a third of revenue.

In other survey findings, operator respondents indicated that 100-Gb/s Ethernet is the wave of the future. They said that it will make up 38 percent of their 10-, 40- and 100-Gb/s Ethernet port purchases during 2018, which is more than two times that of 2016.

In addition, 70 percent of operators surveyed are deploying packet-optical transport systems (P-OTS) or plan to do so by 2018. Between 2016 and 2018, the percentage of nodes with P-OTS is anticipated to grow six-fold in core/long haul and almost double in access, aggregation, metro core, and regional.

“We believe these plans will keep a damper on router sales,” writes IHS Markit senior research director Michael Howard in a summary of the report. “And despite much industry talk, respondents have little current demand for a multilayer data/transport control plane.”


5G Standard May Include Information-Centric Networking, MEC

The growth in network functions virtualization (NFV) and software-defined networking (SDN) is paving the way for new technologies like information-centric networking (ICN) and mobile edge computing (MEC).

In fact, ICN and MEC could become an integral part of the 5G standard. According to a new white paper from 5G Americas, growing interest in ICN and MEC is propelling standards bodies like the 3GPP and ETSI to consider those technologies for possible inclusion in the 5G standards.

“Both of these are making progress in the standards,” said Chris Pearson, president of the trade group 5G Americas.

ICN is considered a potential 5G technology because of its ability to couple network-layer functions with content awareness so that routing, forwarding, caching, and data transfer operations are performed on topology-independent content names rather than on IP addresses.
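
The contrast with IP is easiest to see in the forwarding step: the lookup is a longest-prefix match on hierarchical name components rather than on address bits. A toy sketch (the FIB entries and face names are invented for illustration):

```python
# Toy ICN forwarding table keyed by content-name prefixes, not IP addresses.
FIB = {
    "/video": "face-1",
    "/video/movies": "face-2",
    "/sensors/home": "face-3",
}

def forward(name):
    """Longest-prefix match on name components, analogous to IP LPM."""
    parts = name.strip("/").split("/")
    while parts:
        prefix = "/" + "/".join(parts)
        if prefix in FIB:
            return FIB[prefix]
        parts.pop()              # shorten the prefix and retry
    return None                  # no route for this content

print(forward("/video/movies/matrix/seg1"))   # face-2 (longest match wins)
print(forward("/video/news/clip7"))           # face-1
```

A real ICN router also keeps pending-interest and content-store tables so that popular content is answered from cache, which is exactly the content awareness the paragraph above describes.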

MEC, meanwhile, improves responsiveness by providing cloud computing capabilities closer to the end user than traditional cloud computing systems do. In a MEC environment, compute and storage resources are exposed via a set of application programming interfaces (APIs) so that operators and developers can use those capabilities.
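
One way to picture the MEC trade-off is as a placement decision: run a workload at the lowest-latency site that still has capacity, falling back toward the core when the edge is full. A toy sketch with invented site names and numbers:

```python
# Toy MEC placement: prefer the nearest site with enough free capacity.
sites = [
    {"name": "edge-cell-42",  "rtt_ms": 5,  "free_vcpus": 2},
    {"name": "regional-dc",   "rtt_ms": 25, "free_vcpus": 64},
    {"name": "central-cloud", "rtt_ms": 80, "free_vcpus": 512},
]

def place(vcpus_needed):
    """Pick the lowest-latency site that can host the workload."""
    candidates = [s for s in sites if s["free_vcpus"] >= vcpus_needed]
    if not candidates:
        return None              # nowhere to run it
    return min(candidates, key=lambda s: s["rtt_ms"])["name"]

print(place(1))    # edge-cell-42: nearest site with room
print(place(16))   # regional-dc: edge too small, next-lowest latency
```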

The white paper goes on to say that while network features and functionality in 4G networks are increasingly migrating to software, there is a general assumption that in 5G the network functions and applications will be designed as cloud-native applications that will run in virtualized environments on distributed architectures.

ETSI is already working on a MEC specification; in July the group released technical requirements, a framework and architecture, and a proof-of-concept framework. The MEC Industry Specification Group is currently working on 16 more specifications. ETSI is also changing the technology’s name from mobile edge computing to multi-access edge computing to better reflect the use of MEC in conjunction with WiFi and fixed access technologies.

ICN, meanwhile, is not being directly addressed by the 3GPP, but the group recently started a project to define the 5G system architecture that will likely touch on ICN.

In addition, ATIS recently formed an ad-hoc committee to look into the evolution of content-optimized networks. Although not directly focused on ICN, the work will probably overlap.


HPE Selling OpenStack & Cloud Foundry Assets to SUSE

Hewlett Packard Enterprise (HPE) announced today that it’s selling some OpenStack and Cloud Foundry assets to SUSE. The companies also note that SUSE is acquiring some “talent” — meaning, people.

SUSE’s parent company, Micro Focus, is in the process of merging with HPE’s Software division, an $8.8 billion deal that was announced in September.

SUSE already offers an OpenStack-based infrastructure-as-a-service (IaaS) cloud. HPE‘s OpenStack assets will be poured into that offering. On the Cloud Foundry side, SUSE says it’s going to use the acquired assets to launch its own enterprise-ready platform-as-a-service (PaaS).

HPE doesn’t appear to be giving up on cloud completely. Today’s announcement says that the company will continue to offer Helion OpenStack and the Stackato PaaS, both of which will continue to use the relevant technologies that are being handed off to SUSE.

In other words, HPE is selling some technology that it will then continue to use. In that sense, this deal resembles the “spin-mergers” HPE has arranged, such as the transaction with Micro Focus. HPE is spinning off the Software group, which will merge with Micro Focus and create a new company, which HPE shareholders will hold a 50.1 percent stake in.

Terms of the deal were not disclosed. The companies expect to close the deal in the first quarter of 2017.


Facebook is once again putting the $41 billion computer network industry to shame

Facebook has produced yet another computer network innovation that will once again floor the $41 billion network tech industry.

And Facebook will again share it with the world for free, putting commercial network tech vendors on notice. (We're looking at you, Cisco).

The new innovation, revealed on Tuesday, is called Backpack: a second-generation network switch and the successor to the one Facebook released last year, called the 6-Pack, which directly challenged gear made by market leader Cisco (and others, like Juniper).

The difference is, the Backpack is way, way faster.

The 6-Pack was a 40G switch, meaning it could stream 40 gigabits' worth of data around a data center network. The Backpack is a 100G optical switch, which makes it 2.5 times faster, and it uses fiber optics (that is, light) to move data around instead of the more limited traditional copper wires.

The Backpack is also a companion to the switch Facebook announced last spring, called the Wedge 100. The Wedge 100 is what's known as a "top of rack" switch, which connects a rack of servers to the network. The Backpack then connects all the Wedge 100 switches together. In network jargon, this arrangement is known as a "network fabric."
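
The fabric idea can be sketched as a leaf-spine mesh: every top-of-rack (leaf) switch links to every fabric (spine) switch, so traffic between any two racks crosses at most one spine. The counts below are illustrative, not Facebook's actual topology:

```python
# Toy leaf-spine fabric: Wedge-style leaves, Backpack-style spines.
def build_fabric(num_leaves, num_spines):
    """Return the full-mesh list of leaf-to-spine links."""
    return [(f"leaf-{l}", f"spine-{s}")
            for l in range(num_leaves)
            for s in range(num_spines)]

links = build_fabric(num_leaves=4, num_spines=2)
print(len(links))                        # 8 links: 4 leaves x 2 spines
# Any rack reaches any other rack in two hops: leaf -> spine -> leaf.
print(("leaf-0", "spine-1") in links)    # True
```

The practical payoff of the full mesh is that capacity scales by adding spines, and losing one spine only removes a fraction of the paths rather than partitioning the network.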

Facebook is attempting to build itself a fully 100G data center and these two pieces get it much of the way there, along with the network equipment it announced last week that put the telecom equipment industry on notice.

Going on sale in 2017
There are two key things about this new switch. First, Facebook is turning it over to its game-changing Open Compute Project, which has gained cult-like status in the few years since Facebook launched it.

OCP creates open source hardware, where engineers can freely take hardware designs and work on them together.

OCP offers designs for racks, servers, storage drives, and other hardware. Contract manufacturers stand by to build them. OCP has even inspired other internet players, such as LinkedIn, to build their own hardware completely from scratch.

In the case of Facebook's switches, Facebook went the extra step of arranging for its contract manufacturer, Accton, to mass produce the devices so anyone can buy them.

And Facebook also open-sources the software that runs the switch, and it has worked with network startups to get their software running on its switches.

Facebook plans to do all of this for the Backpack, too, Omar Baldonado, a member of Facebook's network engineering team, tells us.

"We anticipate it will follow the same path. Later in 2017, people will be able to get a Backpack. We are working with the software ecosystem, too. That's why we are contributing to OCP," he said.

Mind-blowing technology
In order to create the Backpack, Facebook had to work with chip makers and optical-materials vendors to do what had never been done before: create special chips and special optical fiber that bring the cost of such switches down.

The optical switches on the market today are not typically used in the data center to connect servers. They are typically used in the backbone, the part of a network that stretches between data centers or across cities.

And because they've been targeted for metro-scale networks and beyond, such switches tend to use a lot of power, throw off a lot of heat, and are very expensive.

Facebook helped design a switch that uses less power, generates less heat, and can operate at around 55 degrees Celsius, Baldonado says, something that has never been done before. Folks in the network industry have told us Facebook's 100G work is "mind-blowing."

To bring costs down, this switch, like the other OCP switches, is modular, meaning you can pull it apart and swap out parts, using different chips, different network cards and different software. 

At one point, a former Facebook OCP engineer named Yuval Bachar (who is now working at LinkedIn) declared a goal that networks should cost as little as $1 per gigabyte. This goal has not been achieved, and Baldonado is the first to admit it. But with this switch and all the other hardware, Facebook is bringing costs down, he says. In this case, even if the switch is still pricey to buy, it will cost less to operate, he says.

Facebook is leading this charge into faster, cheaper, mind-blowing networks and data centers because one day we will all be using the social network to hang out in virtual reality, in addition to live-streaming more video.

"We are now creating more immersive, social, interactive 360 video sorts of experiences and that demands a much more scalable and efficient and quick network," he says.
Facebook's 100G modular optical "Backpack" switch

M-CORD paves way to 5G with open source virtualized EPC 

AT&T, Verizon, Google and SK Telecom are among the industry heavyweights throwing their support behind the CORD Project, and specifically, M-CORD.

The CORD Project, which refers to Central Office Re-architected as a Data Center, is an open source endeavor that combines SDN, NFV and elastic cloud services, and it just announced availability of the first open source disaggregated and virtualized Evolved Packet Core (EPC). The project specifically identifies system integrator and CORD partner Radisys as contributing its EPC framework, a foundation for many EPC products, to M-CORD.

M-CORD is designed to help pave the way for service-driven 5G architecture through capabilities such as programmable Radio Access Network (RAN), disaggregated and virtualized EPC, mobile edge computing and end-to-end slicing from RAN to EPC.
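
End-to-end slicing implies that a slice is admitted only if every domain it crosses, from RAN to EPC, can carry it. A toy admission check (all capacities and slice sizes are invented for illustration):

```python
# Toy end-to-end slice admission across RAN and EPC capacity pools.
capacity = {"ran_mbps": 1000, "epc_mbps": 1000}

def admit(slices, request):
    """Admit the slice only if both domains can carry it end to end."""
    used_ran = sum(s["ran_mbps"] for s in slices)
    used_epc = sum(s["epc_mbps"] for s in slices)
    if (used_ran + request["ran_mbps"] <= capacity["ran_mbps"]
            and used_epc + request["epc_mbps"] <= capacity["epc_mbps"]):
        slices.append(request)
        return True
    return False

slices = []
print(admit(slices, {"name": "video", "ran_mbps": 600, "epc_mbps": 400}))  # True
print(admit(slices, {"name": "iot",   "ran_mbps": 500, "epc_mbps": 100}))  # False: RAN exhausted
```

The point of the sketch is the "and": a slice with plenty of core capacity is still rejected if the radio side cannot honor it, which is why slicing has to span RAN and EPC together.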

“The open source driven model is a fundamental element for the 5G roadmap as it enables extraordinary agility for identifying and responding to subscribers’ needs far more quickly than traditional mobility standards,” said Joseph Sulistyo, senior director of open networking solutions and strategy at Radisys Corporation, in a press release.

Radisys has been a sustained contributor and prominent integrator of M-CORD since the first demonstration at Open Networking Summit 2016, and “we are thrilled with our continued collaboration with ON.Lab and partners and the extended community to embrace open source and enrich SDN and NFV in M-CORD for optimizing 5G infrastructure,” Sulistyo said.

Asked about the significance of being first with this EPC, Sulistyo told FierceWirelessTech that Radisys’ job is to provide a carrier-grade platform and accelerate service providers’ path to SDN, NFV and the cloud in a pragmatic way that allows operators to save money.

“We acknowledge that being first is only good enough when you can execute,” he said. As part of moving to this new open source, software-driven architecture, big changes are in store. “We understand the business model has changed,” and Radisys wants to make it accessible to customers in a fast manner so they don’t have to wait, he said.

RELATED: Verizon hooks up with ONOS project, joining AT&T, SK Telecom

According to Sulistyo, EPC is a fundamental element in providing high quality, fast and sustainable mobile services.

“To effectively and productively achieve these objectives for less, we need to empower and integrate EPC with these key technologies and architecture – NFV (for virtualization), SDN (for disaggregation), and Cloud (distributed ‘anytime, anywhere’ services and continuous DevOps model),” he said. “Through virtualized EPC approach, wireless operators can reduce capital expenses of delivering mobile services with standard commercial off-the-shelf hardware. Virtualized network infrastructure can also reduce operating expenses due to unified management and orchestration (MANO).”

Radisys is well aware it needs partners in this transformation. “When we do this, we know that we’re not going to be able to do this alone,” and other vendors and organizations will need to help make it become a reality, he said. “This allows us to really partner and incentivize other solution providers out there to come to the table. We cannot achieve this just with us.”

According to the CORD project, Radisys’ EPC has served as a foundation for many EPC solutions currently in use in production networks of service providers around the globe. Designed to be modular for ease-of-use, Radisys’ open source EPC contribution to M-CORD will allow the community to explore and demonstrate innovative architectures that can take advantage of the modularity of EPC.  

The CORD community includes service provider partners AT&T, China Unicom, Google, NTT Communications, SK Telecom and Verizon, as well as vendors Ciena, Cisco, Fujitsu, Intel, NEC, Nokia, Radisys and Samsung, and more than 30 collaborating organizations. CORD is hosted by The Linux Foundation.

“We are very pleased with M-CORD progress towards creating an open platform for LTE and 5G built with merchant silicon, white boxes and open source,” Andre Fuetsch, CTO and president of AT&T Labs, said in the press release. “As we move towards an increasingly open sourced mobile core, we can innovate LTE and 5G solutions faster and create services such as IoT, safety, mobile health and others with improved QoE and agility.”

RELATED: Radisys names Verizon as customer for its FlowEngine system

“Verizon recognizes M-CORD as an important open platform to help enable future innovative mobile services,” said Srini Kalapala, VP of Technology and Supplier Strategy at Verizon, in the release. “We continue to contribute to M-CORD while working with the larger ecosystem to realize M-CORD’s full potential toward accelerating next generation networks."

And Google is excited to see the community come together and embrace an open platform for innovation with M-CORD, according to Ankur Jain, principal engineer at Google. "This provides a framework on which the community can experiment with building next generation networks that are easier to manage, and where the network layer exchanges information with the application layer, thereby bringing efficiency to both,” he said in the release.


Deutsche Telekom, AT&T, SK team on ‘xRAN’ to bring SDN and NFV to the radio access network

Deutsche Telekom, AT&T and SK Telecom joined forces to promote radio access networks (RANs) that are based on software and thus move away from what the operators described as today’s “highly proprietary RAN infrastructure."

The three operators used the NGMN Industry Conference and Exhibition in Frankfurt am Main to launch xRAN, which aims to demonstrate how software-based, extensible RANs can enable operators to make better use of spectrum assets, reduce opex and capex, and bring services to market faster than before based on emerging use cases.

Petr Ledl, head of the 5G:haus research group at Deutsche Telekom, explained that xRAN is based on the principles of network functions virtualisation (NFV) and software-defined networks (SDN).

“This is the first effort on the RAN,” he said, noting that most NFV and SDN work has so far focused on core networks.

He added that the alliance was to a certain extent born out of ongoing frustrations with vendors and the lack of flexibility that operators currently have with RANs.

The goal is to decouple the control plane from the base station to better serve users by balancing load across the different base stations. The alliance also hopes to invite more operators to join its ranks and to support the entry of smaller software providers into the RAN environment. Intel, Texas Instruments, Aricent, Radisys and Professor Sachin Katti of Stanford University have already lent their support.

Bruno Jacobfeuerborn, CTO of Deutsche Telekom, claimed that xRAN has the potential to transform the way mobile access networks are built and managed.

“This supports our vision for RAN infrastructure evolution, which is an important component of our software defined operator model. With the adoption of a software-based multi-service delivery platform, we will have the flexibility to better respond to our customers changing user, application and business needs in the 5G era,” Jacobfeuerborn said in a statement.

At NGMN, members are demonstrating an initial reference implementation in a multi-vendor demonstration to highlight the flexibility of xRAN’s open interfaces, decoupled control plane and evolved base stations (eNodeBs) running on commodity hardware, as an alternative to existing closed, distributed control implementations on proprietary hardware.

HPE, Samsung partner on NFV, VNF platform targeting mobile networks

Vendor giants HPE and Samsung are set to work through the HPE OpenNFV Partner Program on open and pretested NFV and VNF products

Telecom operators looking to deploy network functions virtualization technologies can now opt for a package deal from vendor giants Samsung and Hewlett Packard Enterprise.

The agreement sees Samsung joining HPE’s OpenNFV Partner Program as a “carrier-grade network equipment provider.” The firms said they will partner on providing carriers with integrated NFV infrastructure and virtual network functions solutions pretested for multivendor environments and based on an open architecture.

Specifically, Samsung said it will provide VNFs for mobile networks, including virtualized evolved packet core, virtualized internet protocol multimedia subsystem and VNF managers, with HPE supplying its OpenNFV platform and NFV management and orchestration solutions. The companies will jointly run go-to-market strategies with third-party solutions already verified by the HPE program.

“Together, the solutions will help carriers accelerate their transformation from networks built on monolithic, proprietary appliances to more agile cloud-based networks enabled by NFV,” the companies noted in a statement.

HPE earlier this year unveiled its Service Director platform, which the company said taps automation to foster interoperability for managing services in NFV deployments and existing physical environments, and builds on its NFV Director’s Management and Orchestration capabilities.

Samsung recently joined the Open Network Operating System project’s central office re-architected as a data center platform, which also includes members AT&T, Verizon Communications, China Unicom, NTT Communications and SK Telecom, as well as vendors like Ciena, Cisco, Fujitsu, Intel, NEC, Google, Radisys and Nokia. The CORD initiative, formed in early 2015, focused on accelerating the adoption of open-source software-defined networking and NFV solutions for service providers using open-source platforms like ONOS, OpenStack, Docker and XOS.

A recent report from IHS Markit found 81% of service providers surveyed said they plan to deploy NFV by the end of next year, with 100% of those questioned indicating they will deploy NFV “at some point.” Showing the market’s desire to roll out platforms, the survey also found that 58% of operators have deployed or will deploy NFV this year.

“Many carriers in 2016 are moving from their NFV proof-of-concept tests and lab investigations and evaluations to working with vendors that are developing and productizing the software, which is being deployed commercially,” the firm noted.

These early deployments are expected to focus on virtualized enterprise customer premise equipment, which has seen increased importance over the past several years. IHS noted these deployments are favored due to the ability to “assist with revenue generation because it allows operators to replace physical [Customer Premise Equipment] with software so they can quickly innovate and launch new services.”

In terms of continuing challenges, IHS found integrating NFV into existing networks remains an issue for a majority of service providers surveyed, with many citing a lack of “carrier grade” products. That concern is similar to what was expressed in an IHS survey from last year, which supplanted previous concerns over operations and business support systems.

AT&T Sees SDN & NFV Investments Beginning to Self-Fund

AT&T sees its ongoing network virtualization investments in software-defined networking (SDN) and network functions virtualization (NFV) beginning to generate cost savings across the carrier, and those savings are in turn funding further virtualization work. Speaking at today’s MoffettNathanson Media and Communications Summit, AT&T CFO John Stephens said the carrier’s virtualization of 34 percent of its network functions by the end of last year is now helping AT&T hit its year-end 2017 goal of 55 percent virtualization.

“As a factoid, 34 percent of our networks at the end of the year were virtualized, and we will be at 55 percent by the end of this year,” Stephens said. “Think about what cost savings that brings. And the savings from the prior work you do starts paying for this investment. Last year we were paying for the investment to get up to 34 percent. Now that 34 percent is fully generating huge amounts of savings and is a sub-funder for things going forward.”

AT&T expects to have 75 percent of its network operations controlled by virtualized platforms by 2020. As an example of how investments into virtualization and digitization are bringing cost out of the organization, Stephens said AT&T has been able to flip the ratio of billing inquiries through increased use of automation. “We used to do 20 percent automated and 80 percent manual, and now it’s flipped,” Stephens said. “Now millions of calls are going away or are opportunities to automate things. You have seen us continue to maintain margins, and that’s going to continue.”

AT&T last year said that when the company has met its goal of virtualizing 75 percent of its network by 2020, it will see savings in operational expenses of up to 50 percent. That cost savings is expected from manual operations being replaced by automated scripts and procedures.

Verizon recently told the investment community it expects to see cost savings from its network virtualization plans. In laying out areas of planned network streamlining, John Stratton, president of customer and product operations at Verizon, cited SDN and virtualization as cornerstones of those efforts to drive down the cost of carrying data across its network and opening up new markets.

SD-WAN Impact
The financial implications of software-defined wide area networking (SD-WAN) were also questioned, with Stephens acknowledging the move toward SD-WAN could clip revenues from part of AT&T’s business, but that it was not going to turn away from the need to advance services.
“On an embedded base there is some risk, but quite frankly that risk through the legacy services might have been there anyway,” Stephens said. “So from our perspective, protecting that base by looking to the future, we will always look to the future.” Stephens added that in looking ahead, AT&T sees SD-WAN as increasing the “stickiness” of customers to the company and potentially leading to the uptake of additional services. “We have made some investments in SD-WAN and are looking forward to making it a part of our product offering, understanding that giving customers a better experience internally leads to them buying more products and services from you and then that changes the total revenue picture,” Stephens said.


OneWeb breaks ground on new satellite facility, gears up for 5G

OneWeb Satellites officially broke ground on a new $85 million satellite manufacturing facility in Exploration Park, Florida, on Thursday, but it’s what OneWeb plans to do with those satellites that should pique the interest of wireless operators.

The company's goal of launching hundreds of satellites is part of its vision to connect the 4 billion people around the world who are under-served or unconnected. Along the way, it will also be in a position to supply backhaul for the millions of small cells in the cellular industry's densification plans leading up to and into 5G.

It could also become cellular operators’ go-to solution in rural areas in the U.S. and elsewhere where the economics just haven’t penciled out in terms of providing coverage, according to OneWeb Founder and Executive Chairman Greg Wyler.

OneWeb Satellites is the joint venture between OneWeb, the satellite-based internet provider, and Airbus, the world’s second largest space company. The first order at the Florida facility will include the production of 900 communications satellites for OneWeb’s low Earth orbit (LEO) constellation.

OneWeb could have modified an existing structure, but building a new factory gives it a chance to design it from the ground up specifically for what it’s doing. The satellites built in the plant will be used primarily by OneWeb for its global internet services, but satellites also will be available for other commercial satellite operators and government customers globally as early as 2018. 

“We actually allow vendors to work within the facility and have their own residency,” giving the company a much tighter relationship with its vendors, which are able to test and validate pre-production components, according to Wyler.

The company, whose board includes the Virgin Group’s Sir Richard Branson and Qualcomm Executive Chairman Paul E. Jacobs, is very much grounded in the business and technology of cellular—and Wyler, who founded O3b Networks in 2007, described the company’s DNA as a mix of talent from the microprocessor, computer, cellular, battery and even solar panel industries. Satellites just happen to be the delivery mechanism.

During Mobile World Congress 2017, Intelsat and OneWeb announced their merger and a new cash infusion from SoftBank. The combined entity promises to deliver a robust technology roadmap for customers in wireless, mobility and government sectors, as well as media and enterprise segments.

Of course, telecom operators are one of OneWeb’s primary customer bases, and OneWeb has designed its systems to be fully compatible and integrated with the cellular networks, including as they move to 5G.  

“We designed our system so our usage of the new Ka band is totally compatible with the mobile industry’s needs,” Wyler told FierceWirelessTech. “We’re not a satellite company. We are a communications company,” which happens to have satellites in its system.

As 5G standards get hammered out, one of the first questions is: Can a satellite be part of a 5G network?

“The answer is, for OneWeb, yes,” he said. “We have tested and validated our latency” and signaling path to make sure it will operate seamlessly between the core network and the eNodeB, an element of the LTE RAN. “The core network doesn’t know it’s going over OneWeb, a microwave on the ground, fiber on the ground” or a cable. “It’s all the same to the core network. We’re just a microwave repeater that happens to be a little bit higher in elevation.”

The second question is whether the system can be easily deployed, which will be crucial in 5G.

“We will have a very simple deployment for backhaul for 5G,” he said. That is key because if an operator wants to roll out a million small cells, for instance, it’s going to be a challenge currently to feed capacity to all those cells.

OneWeb is also an option for wireless operators that want to fill in dead spots in coverage, which, despite massive infrastructure rollouts, still happen, especially in rural areas.

“There are many dark spots in the U.S.,” and OneWeb will enable coverage at an extremely low cost, which makes rollout easier and less expensive for mobile operators, he said, noting that the U.S. is certainly a market where OneWeb’s coverage will be relevant.  

The unique way in which OneWeb is designing its satellites, each of which will weigh only about 150 kilograms, means the system will actually be affordable, something that has been elusive for much of the satellite industry. As for latency, which is a big deal in 5G, OneWeb’s satellites will offer very high throughput, and because they orbit roughly 30 times closer to Earth than GEO satellites, they will have dramatically lower latency, according to Wyler.
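
Wyler's latency claim is easy to sanity-check with a back-of-envelope propagation calculation. The altitudes below are public figures (geostationary orbit at about 35,786 km, OneWeb's planned LEO constellation at about 1,200 km); the sketch assumes a satellite directly overhead and ignores processing, queuing and slant-path geometry.

```python
# Back-of-envelope one-way propagation delay for one ground-satellite hop.
C_KM_PER_S = 299_792        # speed of light in vacuum, km/s

GEO_ALTITUDE_KM = 35_786    # geostationary orbit
LEO_ALTITUDE_KM = 1_200     # OneWeb's planned low Earth orbit

def one_way_delay_ms(altitude_km: float) -> float:
    """Propagation delay in milliseconds, straight-line path, no overhead."""
    return altitude_km / C_KM_PER_S * 1000

print(f"GEO: {one_way_delay_ms(GEO_ALTITUDE_KM):.1f} ms")  # ~119.4 ms
print(f"LEO: {one_way_delay_ms(LEO_ALTITUDE_KM):.1f} ms")  # ~4.0 ms
```

The roughly 30x altitude ratio translates directly into a 30x reduction in propagation delay, which is why a LEO constellation can plausibly sit inside a 5G latency budget where GEO cannot.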
OneWeb’s investor base includes Qualcomm, SoftBank, EchoStar and Intelsat—all of which have a good understanding of spectrum. But Wyler said OneWeb is so far removed from the traditional satellite industry that it’s not even part of the spectrum debates going on between the terrestrial mobile and satellite industries as part of the FCC’s Spectrum Frontiers proceeding.  

While Qualcomm is an investor, it’s also been working closely with OneWeb, supplying what Wyler considers a chipset that’s truly revolutionary for the satellite industry and very much a part of how it’s going to be able to make the whole system work affordably. Wyler has said previously that the hardest part of OneWeb’s system is done by Qualcomm on the chipset side.


Vendors Beware: Google Is Lending SDN, NFV Expertise to Mobile Op

Google is partnering with mobile operators to share its networking expertise and build a platform for carriers to run their network services. Details on this platform are vague, but for vendors like Cisco, Ericsson, Nokia, Juniper Networks and others, Google’s latest move may set off alarm bells.

Google Principal Engineer Ankur Jain explained Google’s plans in a blog post noting that the company has built a backbone network to link its servers to its data centers and its edge nodes. And that it has relied upon software-defined networking (SDN), network functions virtualization (NFV) and site reliability engineering to help deliver its services.

“Our SDN framework enables networks to adapt to new services and traffic patterns,” Jain said in the post. “Fast user space packet processing on commodity hardware increases the ability to deliver new features quickly while reducing costs.”

Now the company says it is going to work with mobile operators (SK Telecom and Bharti Airtel will be the first), to build some sort of platform that will incorporate the work the company has done on application program interfaces (APIs) to provide better network performance.

For example, Google has been working with some of its mobile partners on a way to identify a user’s data plan limits while still protecting the user’s identity. Through an API called the Carrier Plan Identifier (CPID), Google can request information about a user’s data plan from the mobile network operator. The API encodes that plan information so that applications can deliver better performance. This functionality could also enable new operational models for carriers.
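
The article does not publish the CPID API itself, but the flow it describes, an opaque token standing in for the subscriber's identity, can be sketched roughly as follows. All function names and fields here are illustrative assumptions, not Google's or any operator's actual interface.

```python
# Hypothetical sketch of a CPID-style flow: the operator issues an
# opaque token for a subscriber; the content provider looks up plan
# metadata by token only and never sees the subscriber's identity.
import hashlib

PLAN_DB = {}  # cpid -> plan info, maintained on the operator side

def issue_cpid(imsi: str, salt: str) -> str:
    """Derive an opaque token; the content provider never sees the IMSI."""
    return hashlib.sha256((salt + imsi).encode()).hexdigest()[:16]

def register_plan(imsi: str, salt: str, plan: dict) -> str:
    cpid = issue_cpid(imsi, salt)
    PLAN_DB[cpid] = plan
    return cpid

def lookup_plan(cpid: str) -> dict:
    """Content-provider side: query plan metadata by token only."""
    return PLAN_DB.get(cpid, {"tier": "unknown"})

cpid = register_plan("001010123456789", "operator-secret",
                     {"tier": "unlimited", "throttle_mbps": None})
print(lookup_plan(cpid))  # application adapts, e.g. video bitrate, to the tier
```

The design choice worth noting is that the token is derived with an operator-held salt, so the same IMSI cannot be correlated across providers.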

Alex Choi, CTO of SK Telecom said in the post that his company hopes by working with Google, it can accelerate the transition to 5G and enable new use cases such as machine learning to optimize network operations.

Jain also reiterated that Google is working with the Central Office Re-Architected as a Data Center (CORD) open source project. CORD combines NFV and SDN to bring data center economics and cloud agility to the telco central office. A recent survey by analyst firm IHS found that 70 percent of operators plan to deploy CORD in their central offices.

Vendors on Alert
Google’s main incentive for partnering with operators is to make networks perform better so that users have better performance for their Google applications like Google Maps, YouTube, and Gmail.

But this latest announcement suggests an even closer alliance. According to The Street’s Real Money, the more operators align with Google’s networking strategy, which is to rely more on commodity hardware and less on proprietary data center switches and carrier networking appliances, the more difficult it will be for network equipment suppliers that are already feeling pressure.


AT&T Flings Doors Wide Open on ECOMP Platform

AT&T finally made it official and today announced its ECOMP platform as an open source project hosted by the Linux Foundation.

AT&T developed the Enhanced Control, Orchestration, Management, and Policy (ECOMP) software in-house. It provides a framework for real-time, policy-driven software automation of virtual network functions (VNFs). The carrier says it is already production-ready, having been in use for two years, internally.

The business goal of the project is to allow service providers to rapidly create new services in an automated fashion the way webscale titans are able to do.

The transition of the ECOMP code and collateral to the Linux Foundation begins today. The foundation will establish a governance and membership structure, and the project will be covered by the Apache 2.0 license.

The organizations committed to ECOMP to date include Amdocs, AT&T, Bell Canada, Brocade, Ericsson, Huawei, IBM, Intel, Metaswitch, and Orange.

At an AT&T event at its retail store in downtown San Francisco today, Chris Rice, SVP at AT&T Labs, said ECOMP is divided into 11 modules. Each of those modules can fit into a virtual machine (with containers inside those VMs). And a module can be installed in an OpenStack cloud in 15 minutes.

Amdocs recently said it was working with the Linux Foundation and has contributed two modules to the project.

Projects can fail, tech evolves, but transformation is sustainable - Think Big - The Innovation Blog

When I first met David Del Val, CEO of Telefonica Research and Development, he asked what I thought of the direction the industry was taking. I have not been shy on this blog and other public forums about my opinion on operators’ lack of innovation and transformation. My comments went something like this:

“I think that in a time very soon – I don’t know if it’s going to be in 3 years, 5 or 10 – voice will be free, texts will be free, data will be free or as close to a monthly utility price as you can think. Already, countries are writing access to broadband into their citizens’ fundamental rights.” 

“Most operators are talking about innovation and new services, but let’s face it, they’ve had a pretty poor track record. MMS was to be the killer app for GPRS/EDGE, push to talk for 3G, video calling for HSPA, VoLTE for 4G… There is no shame in being an operator of a very good, solid, inexpensive connectivity service. Some companies are very successful doing that and there will be more in the future. But you don’t need hundreds of thousands of people for that. If operators’ ambition is to “monetize”, “launch new services”, “open new revenue streams”, “innovate”, they have to transform first. And it’s gonna hurt.”

At that point, I wasn’t sure I had made the best first impression, but as it turned out, that discussion ended up turning into a full time collaboration. 

The industry is undergoing changes that will accelerate and break companies that are not adaptable or capable of rethinking their approach. 

4G wasn’t designed as a video network that could also handle browsing and voice; the telecoms industry designed 4G as a multipurpose mobile broadband network, capable of carrying VoIP, browsing, messaging and more. But really, it wasn’t so hard to see that video would become the dominant, and still growing, share of traffic and cost. I don’t have a crystal ball, but I publicly identified the problem more than 7 years ago.

The industry’s failure to realize this has led us to a situation where we have not engaged video providers early enough to create a mutually profitable business model. The result is traffic is increasing dramatically across all networks, while revenues are stagnating or decreasing because video services are mostly encrypted. At the same time, our traditional revenues from voice and messaging are eroded by other providers.  

As the industry is gearing up towards 5G and we start swimming in massive MIMO, beam-forming, edge computing, millimeter wave, IoT, drone and autonomous vehicles, I think it is wise to understand what it will take to really deliver on these promises.

Agile, lean, smart, open, software-defined, self-organizing, auto-scalable, virtualized, deep learning, DevOps, orchestrated, open source… my head hurts from all the trappings of 2016's trendy telco hipster lingo.

This is not going to get better in 2017.

The pressure on operators to generate new revenues and to decrease costs drastically will increase dramatically. There are opportunities to create new revenue streams (fintech, premium video, IoT…) and to reduce costs (SDN, NFV, DevOps, open source…), but they require up-front investments whose business case is uncertain because the technologies are unproven. We are only now starting to see the operators who made these investments over the last 3 years announce results. These investments are hard for any operator to make, because they don't follow the traditional model. For the last 20 years, operators have been conditioned to work in standards bodies to invent the future collectively and then buy technology solutions from large vendors. The key to that model was not innovation; it was sustainability and interoperability.

The internet has broken that model.

I think that operators who want to be more than bit-pipe providers need to create unique experiences for consumers, enterprises, verticals and things. Unique experiences can only be generated from context (understanding the customer: their desires, intent, capacity, limitations…), adaptation (we don't need slices, we need strands) and control (end-to-end performance, QoS and QoE per strand). Micro-segmentation has technical, but more importantly operational and organizational, impacts.

Operators can’t hope to control, adapt, contextualize and innovate if they can’t control their network. Today, many have progressively vacated the field of engineering to become network administrators, writing RFPs to select vendors or, better yet, mandating integrators to select and deploy solutions. The result is networks that are largely undifferentiated, where a potential “innovation” from one operator can be rolled out by another with a purchase order, and where a tariff change, a new enterprise customer on-boarding or a new service takes years, hundreds of people and millions of euros to deploy. Most operators can’t launch a service with an addressable market of fewer than 10 million people, or it won’t make the business case right off the bat.

There are solutions, but they are tough medicine. You can’t really reap the rewards of SDN or NFV if you don’t control their implementation. It’s useless to have a programmable network if you can’t program. Large integrators and vendors have made the effort to retool, hire and train. Operators must do the same unless they want to be MVNOs on their own networks.

Innovation is trying. Projects can fail, technology evolves, but transformation is sustainable.

Mobile Edge Computing Creates ‘Tiny Data Centers’ at the Edge

One key element of 5G is likely to be Mobile Edge Computing (MEC), an emerging standard that extends virtualized infrastructure into the radio access network (RAN).

ETSI has created a separate working group for it — the ETSI MEC ISG — with about 80 companies involved.

“MEC uses a lot of NFV infrastructure to create a small cloud at the edge,” says Saguna CEO Lior Fite. Saguna has created its own product, the Open-RAN MEC, and is involved with the ETSI MEC ISG. Fite says the ETSI group is creating a set of APIs to define “a tiny data center at the edge.”

Saguna’s own MEC technology comprises two main components. The first is a multi-access compute element, and the second is a management element.

“Usually access networks include all kinds of encryption and tunneling protocols,” says Fite. “It’s not a standard, native-IP environment.” Saguna’s platform creates a bridge between the access network and a small OpenStack cloud, which works in a standard IP environment. It provides APIs for things such as location, registration for services, traffic direction, radio network services and available bandwidth.
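
A minimal sketch of how an edge application might consume the kinds of APIs Fite describes, radio conditions and available bandwidth in particular. The client class and method names below are assumptions for illustration, not the actual Saguna or ETSI MEC interfaces.

```python
# Illustrative MEC consumer: a video application asks the edge platform
# for per-cell radio capacity and adapts its bitrate, instead of probing
# blindly from a distant cloud. All names here are hypothetical.
class MecClient:
    """Stand-in for a MEC platform client; a real deployment would make
    REST calls to the platform's service endpoints."""
    def __init__(self, radio_info: dict):
        self._radio_info = radio_info  # cell_id -> available bandwidth, Mbps

    def available_bandwidth_mbps(self, cell_id: str) -> float:
        return self._radio_info.get(cell_id, 0.0)

def pick_video_bitrate(mec: MecClient, cell_id: str) -> int:
    """Choose a video bitrate (kbps) from the reported radio capacity."""
    bw = mec.available_bandwidth_mbps(cell_id)
    if bw >= 10:
        return 4000   # HD
    if bw >= 3:
        return 1500   # SD
    return 500        # low-bitrate fallback

mec = MecClient({"cell-42": 6.5})
print(pick_video_bitrate(mec, "cell-42"))  # 1500
```

The value of the "tiny data center at the edge" is precisely that this decision runs next to the eNodeB, where the radio information is fresh.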

Saguna’s Open-RAN MEC also includes a management element that’s similar to an NFV manager, but it’s a MEC manager. It runs on a small OpenStack cloud. In fact, Saguna is collaborating with Wind River to validate the Saguna Open-RAN MEC on the Wind River Titanium Server as the carrier-grade OpenStack platform.

Eight-year-old Saguna is based in Israel and has raised about $16 million across three funding rounds. Fite says the MEC startups it competes with include Quortus, Vasona and MECsware. And of course, big vendors such as Huawei, Nokia and ZTE offer MEC technology as well.

“I think MEC is getting a lot of attention from many operators and vendors,” says Fite. “There are a lot more companies in the ecosystem.”

Fog Computing
In addition to being an important part of the emerging 5G network topology, MEC will be useful for the Internet of Things (IoT) as part of fog computing, says Fite.

The term “fog computing” was coined about three years ago. According to the OpenFog Consortium, it “is a system-level horizontal architecture that distributes resources and services of computing, storage, control, and networking anywhere along the continuum from Cloud to Things.”

Conceptually, it takes some, or all, of these resources down to the device level. And while MEC is intended for mobile networks, fog can include wireline networks.

Fite says that IoT devices are producing tons of data, and it’s desirable for that data to be ingested and processed at the edge of the network as close to the devices as possible. So there will be an integral connection between MEC and fog computing.
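
That edge-ingestion idea can be sketched in a few lines: filter and aggregate high-volume readings near the devices so only a compact summary crosses the backhaul. This is a minimal illustration of the pattern, not any particular fog framework.

```python
# Edge-side summarization: forward a compact digest plus any
# out-of-range readings instead of the full raw sensor stream.
from statistics import mean

def edge_summarize(readings: list, alert_threshold: float) -> dict:
    """Runs near the devices; returns what travels upstream."""
    alerts = [r for r in readings if r > alert_threshold]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "alerts": alerts,          # only anomalies cross the WAN in full
    }

raw = [21.0, 21.2, 20.9, 35.7, 21.1]   # e.g. temperature samples
print(edge_summarize(raw, alert_threshold=30))
# {'count': 5, 'mean': 23.98, 'alerts': [35.7]}
```

Five readings collapse to one summary dict; at IoT scale, that ratio is the bandwidth and latency argument for processing at the edge.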


Huawei, Cisco, Nokia & Ericsson Join for NFV Testing

Nokia, Ericsson, Cisco, and Huawei today announced they have signed an MoU to create the NFV Interoperability Testing Initiative (NFV-ITI).

Ericsson and Cisco already have a broad-reaching partnership, but Nokia and Huawei are fierce competitors with each other as well as with Cisco and Ericsson. Apparently, though, service providers need these top telecom vendors to work together to test the interoperability of virtual network functions (VNFs) in multi-vendor environments.

The NFV-ITI members will test the interoperability of NFV elements in specific customer situations to accelerate commercial implementations in service providers’ networks.

The companies also want to create alignment on generic principles for NFV interoperability testing.

In their announcement today, the companies acknowledge there are already several forums for NFV interoperability testing. Those include the European Telecommunications Standards Institute (ETSI) NFV Testing working group, OPNFV testing projects, and the Network Vendor Interoperability Testing (NVIOT) group.

The NFV-ITI members plan to use best practices from the existing interop groups. And, according to today’s announcement, “NFV-ITI shall be well-aligned with the ETSI NFV Industry Specification Group and the OPNFV project.”

The four companies say all “relevant NFV vendors are welcome to join this initiative by ratifying the NFV-ITI MoU.”

Most large telecom equipment vendors have created end-to-end NFV offerings to help de-risk NFV for service providers. But today’s announcement indicates that at least some large customers want a multi-vendor environment, and they want it all to work together.


Bell Canada joins AT&T ECOMP collaboration targeting SDN

AT&T ECOMP platform now counts Bell Canada and Orange as collaborators working towards carrier-grade SDN deployments, future 5G support.

AT&T continues to garner carrier interest for its software-focused enhanced control, orchestration, management and policy platform, with Bell Canada now on board to use the tool for the creation and management of software-defined networks.

Bell, which is one of Canada’s three dominant telecom operators along with Telus and Rogers Communications, is set to join France’s Orange in collaborating with AT&T on its ECOMP platform.

“Bell Canada is committed to leading broadband network and service innovation in Canada,” said Petri Lyytikainen, VP of network strategy, services and management at Bell Canada, in a statement. “We believe software-defined networks will advance the future of both wireless and wireline connectivity by adapting to customer needs quickly, and enabling a seamless user experience. We are pleased to collaborate with AT&T and other leading communications companies to evaluate the promising capabilities of the open source ECOMP platform.”

AT&T unveiled the ECOMP initiative earlier this year, saying it was designed to automate network services and infrastructure running in a cloud environment. The carrier said it had been working on ECOMP for nearly two years, tackling the project due to a lack of guidance for network functions virtualization and software-defined networking deployments in a wide area network environment.

ECOMP is said to provide automation support for service delivery, service assurance, performance management, fault management and SDN tasks. The platform is also designed to work with OpenStack, though the carrier noted it was extensible to other cloud and compute environments.

In touting work with fellow telecom operators, AT&T said the ECOMP platform is expected to be a key component in meeting the growing demand for data services as well as bolstering “5G” network deployments.

AT&T also again reiterated plans to release the ECOMP platform into the open source software community through the Linux Foundation, stating those efforts remain on track for the first quarter of next year. The carrier had initially made the commitment in July.

“It’s exciting to see the communications industry coalescing around ECOMP,” said Jim Zemlin, executive director at the Linux Foundation. “ECOMP is the most comprehensive and complete architecture for [virtual network function]/SDN automation we have seen. AT&T has had this platform in production for over two years now. This technology is unique in that it’s both disruptive and battle tested. We can’t wait to host it at the Linux Foundation and open it up to the broader developer community.”

YouTube, Twitter, Facebook: more live video but no plans to commission content 

YouTube and Twitter plan to bring more live video to their respective platforms, but neither has plans to commission its own content on the model of traditional media companies, executives from both companies told attendees at a Royal Television Society event in London last night.

Twitter is likely to do more deals to bring live video to its platform following its live deal with the US National Football League, but has no plans to begin commissioning content, according to its UK chief. Speaking at the RTS event, Social Media Muscles in on TV, Twitter’s UK managing director, Dara Nasr, said that Twitter worked with the NFL on ‘as live’ video before moving to live video. “This was the next iteration of our relationship with them,” he said. “Live is big for us. We want to partner with people and they want money.”

While live content is a key part of Twitter’s plans, Nasr said the social media platform had no intention of starting to commission programming itself. “We celebrate content producers and we’d love it for them to use our platform,” he said.

Speaking at the same event, Stephen Nuttall, senior director of YouTube, EMEA, said that the online video platform is a “longstanding believer” in live video. “We are seeing many creators experiment with live. Sky News is 24/7 live on YouTube and many other channels are also there.”

Nuttall said that BT had made the Champions and Europa League finals available free-to-air live on channels including YouTube, which led to a double-digit increase in viewership of the matches on top of the live TV audience. “If you take TV [broadcast], we added a double-digit increase to that audience,” he said. The airing of the matches gave BT a database of a large audience interested in live sports, which is now likely to be targeted with sports package offers.

He said that 60% of YouTube viewing is now on mobile screens, but people had still watched those football matches for as long on YouTube as they did on TV, with 40 minutes of each match viewed on average.

Nuttall said YouTube would not have a significant role in creating its own content. He said it was a platform for distribution that can work with all sorts of content creators and help them reach the audience they want to reach. “There is unlimited choice available online,” he said. Nuttall drew a distinction between fostering original content created and published by others and commissioning content itself. While it will not produce its own content, YouTube is “working with some established creators to allow them to create shows they would not otherwise create”, he said.

YouTube recently hired Susanne Daniels from MTV to be head of original content, in Nuttall’s words, “to help people broaden out the range of content” they are creating. “Digital influencers often work with established production companies like MCNs,” he said, adding that YouTube had some skills to bring to the party but noting that it would be a mistake to draw a big distinction between online and TV in terms of the types of content created for each.

Premium content, including ‘originals’, is now available on the YouTube Red service in the US and some international markets, which makes content available on an advertising-free basis for a subscription. “[Red] has rolled out outside the US and I expect it will be in more international markets,” he said, without being specific. YouTube Red distributes content created by partners.

Nuttall said there had been an enormous expansion in the range of channels that exist. “You don’t necessarily have to be commissioned to put the content out there. You [creators] can do it yourself. You can get funding from brands or make money on your own. We are seeing profound change. As the proportion of ad spend on online platforms increases you will see more innovation and not less,” he said.

Nuttall said that YouTube reinvests more than half of the money it makes from video in content. “We share more than half of our gross revenue with the people who make the content,” he said, arguing that this rate was similar to the BBC’s and superior to some commercial broadcasters’.

Also speaking at the event, Patrick Walker, Facebook’s director of media partnerships in EMEA, said that live video has “really taken off” on Facebook. Professional news journalists are now creating significant amounts of content on the platform, he said. “About two thirds of views of live videos are actually ‘after live’,” he said. Use of the live API helped media companies “lean in to user comments” to help them know how best to market their content, he added.

AT&T joins T-Mobile and Sprint with video optimization platform

AT&T said the planned launch of its Stream Saver service early next year will allow customer control over streaming video quality. AT&T Mobility is following smaller rivals T-Mobile US and Sprint in offering a video optimization plan for customers, set to launch early next year.

AT&T said the service, dubbed Stream Saver, will be offered for free to “customers on our most popular plans with data,” including its prepaid GoPhone service. The carrier claims the platform will dial back the video quality of most high-definition content to standard-definition quality, or about 480p. The description seems to indicate the service will be automatically enabled on accounts, with customers having the ability to disable the quality limitation. “You control Stream Saver and can turn it off or back on for any qualified line at any time at myAT&T or Premier for business customers,” the carrier noted. “There is no charge to disable or enable Stream Saver.”

In addition to saving data for consumers, the platform looks set to allow AT&T Mobility to save on valuable spectrum resources. The carrier has repeatedly cited the explosive growth of data traffic moving across its mobile network, with video singled out as a significant driver of that increase. During this year’s AT&T Developer Summit & Hackathon, Tom Keathley, SVP for wireless network architecture and design for AT&T Technology and Operations, noted video generated between 40% and 50% of data traffic on the carrier’s mobile network and more than 50% of traffic on its wired network. Those numbers are only expected to increase, with Keathley citing the often-cited Cisco Systems forecast that video will generate 70% of mobile data traffic by 2018.

In looking to tackle the video bear, Keathley named a number of options, including limiting video stream quality to match a device screen’s capabilities. Keathley said that option would see video streaming quality limited to 480p resolution for smartphones, which he noted most consumers would be hard pressed to distinguish from higher resolutions.

Another option outlined by Keathley was video pacing, which, instead of attempting to download a video clip in its entirety as fast as possible, keeps the downloaded material just ahead of what is being watched so as not to waste network resources should the viewer decide to ditch the action before the video ends.

Keathley also touched on the LTE Broadcast standard, noting AT&T has been experimenting with the technology and it is showing promise. LTE Broadcast uses multicast streaming technology to deliver video content from a single site to multiple users covered by that site, instead of the current unicast approach where each user on a single site receives their own dedicated stream. One disadvantage of multicast systems is that they require a dedicated chunk of spectrum, which, if a site is only supporting a single user, results in spectrum being wasted. However, Keathley noted multicast technology like LTE Broadcast would be a good solution for larger events where most consumers served by a single site are likely to be watching the same video stream.

Video optimization now across 3 nationwide carriers

The Stream Saver platform in operation appears similar to T-Mobile US’ Binge On and Sprint’s video optimization services, though with the caveat that those two services, when enabled, do not cut into a customer’s data allotment. Of course, both T-Mobile US and Sprint are also aggressively pushing “unlimited” data buckets that, for an extra monthly fee, include the ability to stream unlimited HD video content.

T-Mobile US’ Binge On service, which launched last year, initially drew net neutrality concerns and a letter from the Federal Communications Commission. The carrier looked to skirt those concerns by allowing customers to control the video optimization service.

Keathley earlier this year said AT&T did not internally transcode or transrate video content, but that the carrier had been looking at the possibility. He noted a “competitor” had been the first to implement such a program, but cited the controversy the move generated. “We will watch how that plays out and follow a course of action,” Keathley said.

AT&T earlier this year took a step toward streamlining its video delivery pricing, moving to zero-rate streaming video content from its DirecTV and U-verse video platforms for its wireless customers. The move was part of an update to the DirecTV application that includes a new “data free TV” option allowing video content to be viewed without counting against an AT&T Mobility data package. Verizon Wireless also allows users of its Go90 video platform to stream content without impacting their data allotments.


TIPping point

For those of you familiar with this blog, you know that I have been advocating for more collaboration between content providers and network operators for a long time (here and here for instance). 

In my new role at Telefonica, I support a number of teams of talented intra-preneurs, tasked with inventing Telefonica's next generation networks, to serve the evolving needs of our consumers, enterprises and things at a global level. Additionally, connecting the unconnected and fostering sustainable, valuable connectivity services is a key mandate for our organization.

Very quickly, much emphasis has been put on delivering specific, valuable use cases through a process of hypothesis validation via prototyping, testing and commercial trials in compressed time frames. I will tell you more about Telefonica's innovation process in a future blog.

What has been clear is that open source projects and SDN have been a huge contributing factor to our teams' early successes. It is quite impossible to have weekly releases, innovation sprints and rapid prototyping without the flexibility afforded by software-defined networking. What has become increasingly important, as well, is the necessity, as projects grow and get transitioned to our live networks, to prepare people and processes for this more organic and rapid development. There are certainly many methodologies and concepts to enhance team and development agility, but we have been looking for a hands-on approach best suited to our environment as a network operator.

As you might have seen, Telefonica joined Facebook's Telecom Infra Project earlier this year, and we have found this collaboration helpful. We are renewing our commitment and increasing our areas of interest beyond the Media Friendly Group and the Open Cellular Project with the announcement of our involvement with the People and Processes group. Realizing that - beyond technology - agility, adaptability, predictability and accountability are necessary traits of our teams, we are committing ourselves to sustainably improving our methods in recruitment, training, development, operations and human capital.

We are joining other network operators that have started - or will start - this journey, and we look forward to sharing with the community the results of our efforts and the path we are taking to transform our capabilities and skills.

OpenStack Newton Addresses Containers and Bare Metal

OpenStack Newton, the 14th major release of the open source cloud framework, debuts today with a bit of a spotlight on networking.

“In this release specifically, there were a lot of things on the networking side that do a good job tying together enterprise technology and containers and bare metal,” says Jonathan Bryce, executive director of the OpenStack Foundation.

Of course, the release includes the usual trove of incremental improvements that help with things like scalability and overall usefulness. But it’s been interesting to watch the progression of some projects that are helping OpenStack extend beyond the world of virtual machines.

In a conversation with SDxCentral, Bryce described a few of the new features related to Neutron (OpenStack's networking project), explaining how they make the platform more useful with containers and bare metal.

Ironic is the OpenStack project for provisioning workloads onto bare metal — that is, servers that don’t come with a particular operating system pre-loaded. With Newton, Ironic has tighter ties to Neutron, which was created as a way to network virtual machines and is now being spruced up for bare metal and containers. For instance, when Neutron is deployed onto bare metal, it will now know what security and access control policies should be applied to each port.
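The tighter Ironic-Neutron integration surfaces in the port objects Neutron manages for bare-metal nodes. A hedged sketch of what such a port-create request body looks like; the field names follow the Neutron port API, while the UUIDs and helper name are placeholders of my own:

```python
def baremetal_port_request(network_id, node_id, security_group_ids):
    # Sketch of a Neutron port-create body for a bare-metal machine.
    # "binding:vnic_type": "baremetal" signals that the port backs a
    # physical NIC, so Neutron can apply security and access-control
    # policy per port, as described above.
    return {
        "port": {
            "network_id": network_id,
            "binding:vnic_type": "baremetal",
            "binding:host_id": node_id,            # the Ironic node
            "security_groups": security_group_ids,  # policy per port
        }
    }

body = baremetal_port_request("net-uuid", "ironic-node-uuid", ["sg-uuid"])
```

The same request shape is what a VM port uses; only the `binding` hints differ, which is why Neutron could be "spruced up" for bare metal rather than replaced.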

On the container side, there’s Kuryr, a networking project that’s like Neutron for containers.

“Containers are kind of the opposite side of bare metal,” Bryce says. “When you run containers, they don’t have their own networking stack, because those containers are little slices inside the host operating system.”

In other words, containers don’t have an OS of their own; they share one. That’s one reason why the networking of containers takes some work, and why managing them in bulk can be challenging, he says.

Kuryr features introduced in Newton include the project’s first integrations with Docker Swarm and Kubernetes, two popular options for container orchestration.
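Kuryr's job is translation: it implements the container orchestrator's network-plugin interface and turns those calls into Neutron resources, so each container endpoint gets a Neutron-managed address and policy. A purely illustrative sketch of that mapping (the function and field names are mine, not Kuryr's actual code):

```python
def container_endpoint_to_neutron(endpoint):
    # Illustrative only: the real Kuryr driver receives endpoint-create
    # calls from Docker libnetwork or Kubernetes and answers them by
    # creating Neutron ports like this one.
    return {
        "port": {
            "network_id": endpoint["neutron_network_id"],
            "name": "kuryr-" + endpoint["endpoint_id"],
            "admin_state_up": True,
        }
    }

port = container_endpoint_to_neutron(
    {"neutron_network_id": "net-uuid", "endpoint_id": "ep1"}
)
```

Because the container has no networking stack of its own, the host wires the resulting port into the container's network namespace on its behalf.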

Magnum is a container orchestration manager, also described as containers-as-a-service. Launched in late 2014, the project gives operators a way to call up containers much in the way they call up compute instances. It’s an aid to the usual container orchestration tools: Docker Swarm, Kubernetes, and Mesos.

Some of the new features in Magnum include support for Kubernetes clusters on bare metal (which relates back to the Kuryr and Ironic projects) and asynchronous cluster creation.
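Calling up a cluster through Magnum amounts to a single API request against a pre-defined cluster template. A hedged sketch of the Newton-era cluster-create body (Magnum renamed "bays" to "clusters" around this release; the template UUID is a placeholder):

```python
def magnum_cluster_request(name, template_id, nodes=3, masters=1):
    # Sketch of a Magnum cluster-create body. The heavy lifting
    # (Heat stacks, Kubernetes/Swarm/Mesos bootstrap) happens
    # server-side, and with Newton's asynchronous cluster creation
    # the call returns before the cluster is fully built.
    return {
        "name": name,
        "cluster_template_id": template_id,  # a real UUID in practice
        "node_count": nodes,
        "master_count": masters,
    }

body = magnum_cluster_request("k8s-demo", "template-uuid")
```

The template is what carries the orchestration-engine choice, so the same request shape works whether the cluster lands on VMs or, as Newton now allows for Kubernetes, on bare metal.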

Get Me a Network — yes, that’s its actual name — is a new feature for Nova, the OpenStack project related to computing software.

It’s the networking option you would use “if you were just getting started and you didn’t know what you wanted your network to look like,” Bryce says. In many cases, Neutron requires every tenant to have networking configurations set, which is a complication for those who don’t know networking.

Get Me a Network simplifies the process for those folks. A typical use case would be someone who wants to deploy just a couple of virtual machines and doesn’t need any complicated networking.

Mutable configuration settings are a new convenience factor added to Nova. When new configuration options become available, it’s now possible to add them into an OpenStack node without having to reboot.
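The pattern behind mutable configuration is to re-read option values on demand, typically on a signal, instead of restarting the service. A generic Python sketch of that idea (this is not Nova's actual implementation, which builds on oslo.config):

```python
import configparser
import signal

class MutableConfig:
    """Re-read an INI file on demand so edited option values take
    effect without restarting the process (generic sketch only)."""

    def __init__(self, path):
        self.path = path
        self.reload()

    def reload(self, *_signal_args):
        # The extra *_signal_args lets reload() double as a signal handler.
        parser = configparser.ConfigParser()
        parser.read(self.path)
        self.options = {s: dict(parser[s]) for s in parser.sections()}

# In a long-running service, wire the reload to SIGHUP so an operator
# can apply an edited config file with `kill -HUP <pid>`:
#   cfg = MutableConfig("/etc/myservice.conf")
#   signal.signal(signal.SIGHUP, cfg.reload)
```

Only options the service reads through such a mutable holder pick up new values; anything cached at startup still requires a restart, which is why projects mark specific options as mutable.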

VLAN-aware virtual machines are a networking feature with implications for network functions virtualization (NFV). OpenStack can be used as the virtual infrastructure manager (VIM) in an NFV deployment, and that use case has helped attract telecom developers to the framework.

The new feature lets an operator use a virtual LAN to move traffic toward a particular virtual network function (VNF) housed inside a virtual machine. This can be useful in cases where an application needs the dynamic nature of a VLAN, or when a pre-OpenStack application was written in a way that was meant to take advantage of VLANs.
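Neutron delivers VLAN-aware VMs through "trunk" ports: a parent port carries untagged traffic into the VM, and subports map 802.1Q VLAN tags to other Neutron networks, which is how tagged traffic can be steered toward a VNF over a single vNIC. A sketch of the trunk-create request body (the port UUIDs are placeholders):

```python
def trunk_request(parent_port_id, subports):
    # Each subport pairs an existing Neutron port with a VLAN tag;
    # traffic arriving in the VM with that tag belongs to that
    # port's network, so one vNIC can front several networks.
    return {
        "trunk": {
            "port_id": parent_port_id,  # untagged/parent port
            "sub_ports": [
                {
                    "port_id": pid,
                    "segmentation_type": "vlan",
                    "segmentation_id": vlan,
                }
                for pid, vlan in subports
            ],
        }
    }

body = trunk_request("parent-port-uuid", [("subport-uuid", 101)])
```

Inside the guest, the VNF simply configures VLAN sub-interfaces as it would on physical hardware, which is what makes this attractive for pre-OpenStack, VLAN-centric applications.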

Non-High Tech Users
One OpenStack trend not directly reflected in Newton is the increasing commercial worthiness of the framework. OpenStack is an archipelago of projects, but several vendors offer their own commercial versions of it, the appeal being that the customer wouldn’t have to puzzle over how to assemble everything.

“We’ve gotten to that point where it’s mature enough and there are enough commercial options. We see non-high-tech companies adopting it,” Bryce says.

One example he holds up is JFE Steel in Japan, which has moved OpenStack into a production cloud.

“The thing that’s interesting about it is that they’re a steel company. This isn’t Paypal or eBay. They don’t have thousands of developers,” Bryce says. (Paypal and eBay do happen to be OpenStack users, by the way.)

OpenStack Newton is available now. The next OpenStack release is named Ocata, after a beach in Spain, and is due for release on Feb. 23.