
Five big mistakes carriers make when planning a next-gen network

The decision is made: your company, be it wireline, mobile, or broadband, will be implementing a next generation IP-centric network! Congratulations: you are now preparing to provide a host of digital services desired by your customers and demanded by your shareholders. You've selected at least some of your vendors and roughed out the plans for the new services and for the migration (or replacement) of the legacy network to the next generation architecture. Finally, you have energized your staff for the years of hard work necessary to realize the dream.

We’re going to explore the five critical mistakes that are easiest to make in planning and rolling out that next generation network - mistakes that mean the difference between a successful rollout and becoming a cautionary tale for your competitors. We'll discuss these five mistakes and outline how leading service providers, already deeply engaged in implementing their next generation networks, are avoiding them.

First, let’s consider the situation that the carriers find themselves in as they prepare to implement their next generation networks. The carriers are adopting one of two major strategies in their network rollouts, well-characterized by the strategies of two of the leading next generation carriers today: Telstra and British Telecom (BT).

Telstra, in its network transformation project, is adding new technologies to its network to evolve it into an IP-enabled, multi-layer network that can provide a host of new digital services. During the initial transition, existing services will continue to run on the legacy infrastructure.

BT, in its 21CN project, is building an entirely new infrastructure alongside the legacy network and moving existing services (including ordinary telephone service) onto it, while also offering new digital services during the initial rollout.

Regardless of strategy, both carriers face the same challenges with their new IP-enabled networks:

  • New technologies are being introduced into the networks in unprecedented volumes, particularly a variety of new access technologies, Metropolitan Ethernet (including the emerging Provider Backbone Transport technology), Internet Protocol (IP), Multiprotocol Label Switching (MPLS), and Automatically Switched Optical Networks (ASON),
  • The technologies of next generation networks are layered, providing a single, simpler infrastructure for the services but introducing planning complexity because of the way those layers interconnect,
  • The market adoption rates of the new services being offered are unknown, leading to significant uncertainty in the timing and placement of the network resources required to support those services with a high Quality of Service (QoS). This is especially true for bandwidth-hungry video services, where small differences in market adoption rates mean large changes in the network resources required to carry them (see the sketch following this list).
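
To make the video point concrete, here is a back-of-the-envelope sketch in Python. The household count, per-stream bit rate, and busy-hour concurrency are purely illustrative assumptions, not figures from any carrier's plan; the point is only how sharply the required bandwidth moves with the take rate.

```python
# Back-of-the-envelope sensitivity of aggregation bandwidth to video take rate.
# All inputs are illustrative assumptions, not figures from any carrier's plan.

HOUSEHOLDS = 50_000      # households served by one metro aggregation node
STREAM_MBPS = 8.0        # assumed bit rate of one video stream
CONCURRENCY = 0.6        # fraction of video subscribers watching at the busy hour

def required_gbps(take_rate: float) -> float:
    """Busy-hour bandwidth (Gbps) needed to carry video at a given take rate."""
    subscribers = HOUSEHOLDS * take_rate
    return subscribers * CONCURRENCY * STREAM_MBPS / 1_000.0

for take_rate in (0.05, 0.10, 0.15, 0.20):
    print(f"take rate {take_rate:4.0%} -> {required_gbps(take_rate):5.1f} Gbps")
```

Under these assumed numbers, moving from 10% to 15% adoption raises the busy-hour requirement from 24 Gbps to 36 Gbps, the kind of swing that means new line cards and links rather than minor tuning.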

And both strategies are subject to the same five mistakes:

  • Letting engineering dictate the network roll-out (or "how to under-provision")
  • Letting marketing dictate the network roll-out (or "how to over-provision")
  • Planning the network domains in silos (or “how to keep everyone guessing”)
  • Planning the network with techniques that "have always worked before"
  • Not being agile/responsive to shifting customer and business demands

Letting Engineering Dictate the Network Roll-Out – or, “How to under-provision”

Network engineering generally provides a view six, twelve, and twenty-four months ahead of what the network will be and what equipment will be needed. It also provides guidance on which network addition and rearrangement projects should be initiated, and hands a first-cut design of those projects to the operations people. The capital budgets engineering works with are generally highly constrained, and are often reduced as the fiscal year progresses, so engineering must decide where network capacity is most needed. Where to put the scarce resources? Usually not where the demand is uncertain, that is, the places that need resources because of the introduction of new services (assuming engineering's tools and techniques even allow it to determine that correctly). Since those demands are forecast by the marketing organization, which is not historically known for its accurate predictions, engineering is even less inclined to commit precious resources to them. The result is usually under-provisioning, or over-provisioning in one place and under-provisioning in another.

Letting Marketing Dictate the Network Roll-Out – or, “How to over-provision”

Marketing's main concern is the success of the services it is introducing. And the best way to ensure a new service fails? Be unable to properly explain and provision the service, followed by a poor early customer experience. Proper network planning is critical to the success of new services: it ensures that the right network resources are available for speedy provisioning and that the service is delivered at the right QoS.

In a next generation network, the first of these, ensuring provisioning-ready resources, applies primarily in the access portion of the network, where network equipment must be dedicated to the customer, be it fiber to the home, twisted wire pairs connected to a dedicated Digital Subscriber Line terminal, or bandwidth on a hybrid fiber-coax facility. The job of network planning is to determine the best technology for the service mix expected from a geographic/demographic area, and the right amount of network resources (e.g., DSL Access Multiplexer ports) to provision the service within an acceptable time period. That time period has shortened dramatically from the traditional 30- to 90-day provisioning cycles of the last century, as customers have come to expect near real-time provisioning.
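
As a rough illustration of that sizing exercise, the sketch below estimates how many spare access ports (and DSLAM shelves) a serving area needs to stay ahead of demand when new equipment takes weeks to order and turn up. The order rate, lead time, shelf size, and safety margin are assumed values for the example, not recommendations.

```python
# Sizing spare access ports so new orders can be provisioned without waiting
# for equipment. Demand figures and lead times are illustrative assumptions.
import math

ORDERS_PER_WEEK = 120     # forecast new DSL orders in one serving area
LEAD_TIME_WEEKS = 10      # time to order, deliver, and turn up another DSLAM
SAFETY_MARGIN = 0.25      # buffer for forecast error and demand spikes
PORTS_PER_SHELF = 192     # ports on one DSLAM shelf (assumed)

# Ports that will be consumed before newly ordered equipment can be in service:
ports_needed = ORDERS_PER_WEEK * LEAD_TIME_WEEKS * (1 + SAFETY_MARGIN)
shelves = math.ceil(ports_needed / PORTS_PER_SHELF)

print(f"Keep at least {ports_needed:.0f} spare ports ({shelves} shelves) deployed.")
```

The same arithmetic applies to fiber drops or HFC bandwidth; only the unit being counted changes.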

What is the best way to ensure that a service can be provided to any customer who requests it? Put in the most capable technology available and keep plenty of spare equipment on hand, even if that is not the most economical thing to do.

Ensuring the appropriate Quality of Service is especially important in the metropolitan (aggregation) portions of the network, where Ethernet and/or ATM technology is usually employed, and in the core network, where IP technology with Multiprotocol Label Switching (MPLS) is employed to create high-quality routes through the otherwise best-effort IP network. These technologies can be oversubscribed, but an oversubscribed network does not deliver the same quality of service. Thus, the surest way to guarantee an acceptable customer experience is to over-provision the network (and the servers). And marketing would just as soon have that happen.
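
A simple way to see the tension is to relate the oversubscription ratio to the utilization ceiling that the QoS targets allow, as in the sketch below. The per-subscriber rates and the 70% ceiling are illustrative assumptions; real planning would model traffic per service class.

```python
# How an oversubscription ratio interacts with a QoS utilization ceiling.
# The figures are illustrative; real planning would model traffic per class.

ACCESS_PORTS = 2_000         # subscriber ports feeding one metro uplink
PEAK_PER_PORT_MBPS = 20.0    # assumed peak rate sold to each subscriber
MEAN_PER_PORT_MBPS = 1.5     # assumed busy-hour average per subscriber
UTILIZATION_CEILING = 0.7    # keep the uplink under 70% to protect QoS

sold_capacity = ACCESS_PORTS * PEAK_PER_PORT_MBPS      # what was promised
expected_load = ACCESS_PORTS * MEAN_PER_PORT_MBPS      # what actually shows up
uplink_needed = expected_load / UTILIZATION_CEILING    # capacity to provision

print(f"Oversubscription ratio: {sold_capacity / uplink_needed:.1f}:1")
print(f"Uplink capacity to provision: {uplink_needed / 1000:.2f} Gbps")
```

Lowering the ceiling buys a better customer experience at the cost of a lower oversubscription ratio, which is exactly the over-provisioning marketing would just as soon see.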

Planning the Network in Separate Silos – or, “How to keep everyone guessing”

Even if, somehow, capital scarcity is balanced against the need to ensure proper quality of service, there is still an issue with the multi-technology nature of the next generation network. The traffic presented by the access network must be properly carried by the metropolitan Ethernet aggregation network, which keeps some traffic within the metro area and hands the long-haul traffic to the IP/MPLS core network. All of these, in turn, require the underlying optical transport network to carry the traffic (in reality it is even more complex, with legacy technologies and services, as well as VoIP and signaling traffic, sharing the network fabric). Traditionally, each of these “domains” (whether defined by technology, geography, or even time period, with separate planners for different planning horizons) was planned separately, with little information interchange among the planners. The IP network planners, for instance, usually assumed infinite capacity from the underlying optical transport network, and the eastern region planners did their capacity planning independently of the western region planners. The net effect is a set of plans that have little hope of meshing into a single, coherent, cost-effective master plan for the next generation network, leading to inefficiencies and stranded resources.
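
The toy roll-up below shows why the domain plans have to mesh: the demand offered by each access area flows up through the metro aggregation layer onto the IP/MPLS core, and everything ultimately lands on optical transport. The metro names, local-traffic fraction, and wavelength size are assumptions made only for illustration.

```python
# Toy roll-up of demand across planning domains: access -> metro aggregation
# -> IP/MPLS core -> optical transport. All figures are illustrative.
import math

access_demand_gbps = {"metro_east": 40.0, "metro_west": 55.0}
LOCAL_FRACTION = 0.35      # share of traffic that stays inside the metro
WAVELENGTH_GBPS = 10.0     # capacity of one optical wavelength (assumed)

core_demand = 0.0
for metro, offered in access_demand_gbps.items():
    long_haul = offered * (1 - LOCAL_FRACTION)   # traffic handed to the core
    core_demand += long_haul
    print(f"{metro}: {offered:.0f} Gbps offered, {long_haul:.1f} Gbps to the core")

wavelengths = math.ceil(core_demand / WAVELENGTH_GBPS)
print(f"Core demand {core_demand:.1f} Gbps -> {wavelengths} wavelengths of transport")
```

A core planner who assumes infinite transport never produces that last number, and the transport planner never sees the demand behind it.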

Planning with Techniques That Have Always Worked Before – or, “Just keep trending”

For most network service providers, who overbuilt their networks during the exuberant latter days of the twentieth century, network planning has for the last few years meant essentially trending: looking at "almost full" links or "hot" routers and adding more capacity there. Theoretically, if the operations response time were fast enough, this could continue to work even as new bandwidth-hungry services are introduced. However, with equipment ordering times measured in months and network builds requiring thousands of staff hours and weeks of planning, the trending technique no longer works. In addition, new technology and new service introductions still have to start somewhere, which requires a forward-looking plan. So a good estimate of the demand that the new services will put on the network is needed in time to plan the network additions, and those estimates must be based not on tactical operational data about network usage but on more strategic, market-driven data.
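
A trivial calculation, with assumed numbers, shows why trending alone fails once lead times stretch out: by the time the trend rule fires, the augment can no longer arrive before the link fills.

```python
# Why "just keep trending" breaks down: by the time a trended link looks hot,
# the augment may not arrive before the link fills. Figures are illustrative.

LINK_CAPACITY_GBPS = 10.0
CURRENT_LOAD_GBPS = 6.5
GROWTH_GBPS_PER_MONTH = 0.8      # includes the ramp-up of a new video service
AUGMENT_LEAD_TIME_MONTHS = 5     # ordering, build, and turn-up time
ALARM_THRESHOLD = 0.8            # trending rule: act when the link hits 80%

headroom_to_alarm = LINK_CAPACITY_GBPS * ALARM_THRESHOLD - CURRENT_LOAD_GBPS
months_to_alarm = headroom_to_alarm / GROWTH_GBPS_PER_MONTH
months_to_full = (LINK_CAPACITY_GBPS - CURRENT_LOAD_GBPS) / GROWTH_GBPS_PER_MONTH

print(f"Trending rule fires in {months_to_alarm:.1f} months; link is full in {months_to_full:.1f}.")
if months_to_full - months_to_alarm < AUGMENT_LEAD_TIME_MONTHS:
    print("The augment cannot arrive in time; the plan has to be demand-driven.")
```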

Not Being Agile to Shifting Demands – or, “That's their problem, not mine”

Network planning was traditionally a complex, laborious process, often requiring sixty to ninety days per "planning cycle," even when done with minimal information interchange among the planners. That meant there was little capability to run enough "what-if" analyses for different market adoption rates and demographics, and no ability to respond rapidly to new information that changes the patterns and volume of traffic on the network. Leading service providers are implementing technologies and processes that take what used to be yearly plans and do them quarterly, with a target of cycling through the planning process (essentially replanning the network) every two weeks.

Avoiding the Five Mistakes

So how does a service provider thread the needle between over- and under-provisioning and plan an entire network in a consistent fashion, all while staying agile enough to respond quickly to shifting demands? I've already hinted at some of the answers. The answer is certainly not to keep using the rules of thumb, spreadsheets, and separate domain work processes of the past, but to take a different approach: a holistic, cross-domain set of OSSs and processes that brings together all of the right information and produces a market-driven, comprehensive "master plan" for the network.

This next generation planning approach is being pioneered at carriers like Telstra, whose OSS Transformation Project includes a focus on re-engineering the entire network planning process. Telstra is currently implementing these planning changes for the access and underlying transport networks, and will move to the IP/MPLS, Ethernet, and switched networks in a phased fashion. At each phase, the carrier will be able to create market-driven plans for each domain that not only mesh with one another but can also be produced quickly, delivering the target quality-of-service experience to its customers at the lowest possible cost.

Dr. Mark H. Mortensen is the Senior VP of Marketing of VPIsystems.

© 2014 Penton Media Inc.
