by Sinclair Vass, JDSU
January 12, 2012
If the 2011 calendar year could be described in three words, they would be “explosive bandwidth growth.” Both wired and wireless networks were stretched as consumers continued to adopt network-intensive technologies at a fast pace.
The main driver of bandwidth demand continues to be real-time entertainment, mainly video. According to Sandvine, in 2009, 29 percent of all Internet traffic during peak hours in the U.S. was real-time entertainment. By 2011, that number jumped to 49 percent, with Netflix streaming alone accounting for 30 percent of overall Internet usage during peak times.
To put network demand in perspective, every 60 seconds consumers downloaded 13,000 hours' worth of music from Pandora, posted more than 600 videos to YouTube, and used 370,000 minutes of voice time on Skype.
In the mobile space, 4G LTE became mainstream as carriers began deploying the new, faster networks. Almost all major carriers are now in some stage of such deployments. As more consumers use smartphones and the applications they offer, upgrading backhaul networks also became vital, with as many as 16 backhaul networks being upgraded at a time.
But as fast as the wireless carriers can upgrade their networks, users immediately push those networks back to their new limits. The root cause is data usage: in 2011, for example, consumers made 695,000 mobile Facebook updates and downloaded more than 13,000 iPhone applications every 60 seconds.
In other areas of the tech industry, developers and technologists explored new ways to make our interaction with technology more seamless through applications such as voice and gesture recognition. While some gaming and computing offerings already use the first generation of these applications, next-generation versions will let us interact with our electronics and even our cars by using voice commands or a wave of the hand. The race is on to develop more compact, higher-performance, and cost-effective optical hardware to support these emerging applications.
Let’s consider how these trends will affect the optical communications industry in 2012.
Optical supply chain for telecommunications becomes more on-demand
The main challenge for carriers and network equipment manufacturers (NEMs) has not been the recent growth in bandwidth demand itself, but the fact that the growth has come in fits and starts, making forecasting unreliable at best. As a result, many in the optics industry have begun to adopt an on-demand supply chain model, with widespread adoption of vendor-managed inventory (VMI) methods or other demand-pull systems.
This trend will continue into 2012, to the point that the vast majority of product shipped to equipment manufacturers will be VMI-driven. This practice will enable vendors to react faster and more flexibly to demand changes.
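To make the demand-pull mechanism behind VMI concrete, here is a minimal sketch in Python. The reorder point, order-up-to level, and quantities are hypothetical, chosen purely for illustration; the article does not describe any particular implementation.

# Minimal sketch of the demand-pull logic behind a vendor-managed inventory (VMI)
# arrangement: the component vendor monitors the equipment manufacturer's on-hand
# stock and ships replenishment only when consumption pulls it below a reorder point.
# All names and quantities are illustrative assumptions, not figures from the article.

REORDER_POINT = 500   # units of a given transceiver held at the NEM's site
ORDER_UP_TO = 2000    # target stock level after replenishment

def replenishment_quantity(on_hand_units: int) -> int:
    """Return how many units the vendor should ship this cycle."""
    if on_hand_units < REORDER_POINT:
        return ORDER_UP_TO - on_hand_units
    return 0

# Example: demand ran ahead of forecast, so stock dropped and the vendor reacts
# without waiting for a long-range forecast to be revised.
print(replenishment_quantity(on_hand_units=350))   # ships 1,650 units
print(replenishment_quantity(on_hand_units=1200))  # demand covered; ships nothing

The point of the pull model is that replenishment is triggered by actual consumption rather than by a forecast, which is what lets vendors absorb demand that arrives in fits and starts.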
Carriers become aware of self-aware networks
Many consumers are not willing to pay more for services, even though they use more bandwidth every year. As a result, carriers must operate their networks more efficiently. One of the best ways to accomplish this goal is to more proactively manage bandwidth provisioning in the optical domain.
This is why “self-aware” networks gained the attention of most operators in 2011 and why they represent a major evolution in transport network design. In these new networks, optical wavelength connections will be dynamically created, re-routed, or removed according to local network bandwidth needs. Self-aware capabilities will drastically reduce overall network operating costs for the carrier.
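As a rough illustration of the idea, the following Python sketch shows one way such a control loop could decide when to light or retire a wavelength on a link. The thresholds and channel size are assumptions made for illustration only, not details of any operator's design.

# Illustrative sketch of the control-loop idea behind a self-aware transport
# network: wavelengths on a link are added or removed as measured utilization
# crosses thresholds. The threshold values and 40G channel size are assumed
# here for illustration; they are not taken from the article.

ADD_THRESHOLD = 0.80     # light another wavelength above 80% utilization
REMOVE_THRESHOLD = 0.30  # retire a wavelength below 30% utilization
WAVELENGTH_GBPS = 40     # capacity of one wavelength (e.g., a 40G channel)

def adjust_wavelengths(active_wavelengths: int, offered_gbps: float) -> int:
    """Return the new wavelength count for a link given its offered load."""
    utilization = offered_gbps / (active_wavelengths * WAVELENGTH_GBPS)
    if utilization > ADD_THRESHOLD:
        return active_wavelengths + 1
    if utilization < REMOVE_THRESHOLD and active_wavelengths > 1:
        return active_wavelengths - 1
    return active_wavelengths

# Example: the evening video peak pushes a three-wavelength link past its
# threshold, so a fourth wavelength is provisioned dynamically.
print(adjust_wavelengths(active_wavelengths=3, offered_gbps=110))  # -> 4

Because capacity follows measured demand rather than worst-case provisioning, this is where the operating-cost savings of a self-aware network come from.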
While self-aware networks were still in development in 2011, first deployments could start in late 2012 or early 2013. Operators now know what a self-aware network looks like, what it can do, and what it will cost. This coming year will see operators deciding how to best integrate the technology into their next-generation networks and selecting their preferred equipment for full commercial deployment in 2013.
40G reaches mainstream, with 100G close behind
In 2011, 40G deployments continued at a rapid pace with strong demand from China and EMEA. The main driver in China was the need for greater overall Internet speeds, while demand in EMEA was driven by rising sales of tablets and smartphones. Many NEMs are talking about 100G, but 40G is now being deployed all over the world and will continue to play an important role in networks even once 100G is readily available.
Hardware for 100G networks began shipping in 2011, but not all parts of the technology are ready. Long-haul transponders for 100G are still in development as the industry decides on the best configuration. Some 100G components still need to be reduced in size and power consumption before the technology can reach full-scale deployment stages.
The short-distance market has largely been settled and could see initial deployments in late 2012, but the long-haul picture remains unclear.
Tunable networks now the norm
The entire industry is seeking components that are smaller, consume less power, and provide improved functionality, while supporting the continued aggressive price reductions in the telecommunications equipment market. With this in mind, it is no wonder that the tunable XFP transceiver saw rapid deployment in 2011. At this point, the tunable XFP has all but replaced the 300-pin transponder, and we will see continued growth for the tunable XFP throughout 2012.
Also in 2012, the tunable SFP+ transceiver will enter production. The SFP+ offers an even smaller form factor, higher density, and lower power consumption. In the short term, it will be used to push tunability further toward the edge of the network; in the long term, it could replace the XFP form factor for all tunable applications.
Sinclair Vass is senior director of marketing within the Communications and Commercial Optical Products Business Segment at JDSU.