Aimed predominantly at short-reach, single-span fiber optic links for Data Center Interconnect (DCI), 400ZR is an interoperable networking Implementation Agreement (IA) under development by the OIF. It defines a footprint-optimized solution for transporting 400 Gigabit Ethernet over DCI links targeting a minimum of 80 km. Enabled by advanced coherent optical technology designed for small, pluggable form factor modules such as QSFP-DD and OSFP, 400ZR proposes a technology-driven solution for high-capacity data transport, matched to the market introduction of 400GE switch ports.
(Sponsored by Ciena)
The OIF and the Ethernet Alliance hosted interop and technology demonstrations on the show floor this week at OFC 2019 in San Diego. The activities at the two booths were linked by a fiber connection across the show floor, over which a 400 Gigabit Ethernet (400GbE) transmission, generated via Flex Ethernet based on OIF’s FlexE 2.0 Implementation Agreement, became part of the Ethernet Alliance’s 400GbE interop.
Microsoft, which not too long ago was facing irrelevance, is successfully riding the cloud to reestablish itself and brighten its prospects.
The company’s research and development efforts, the Playbook says, focus on AI, quantum computing, productivity tools, streaming cloud-based gaming, and networking. “Scaling the cloud requires not only more data centers, but also faster intra- and inter-data center connectivity links,” Menon wrote. “For instance, Microsoft is very active with the OIF in pushing 400ZR, a 400 Gbps link specification for data center interconnect.”
Heavy Reading’s optical expert Sterling Perrin discusses some of the major trends in transport network technology at this year’s OFC event in San Diego, including 5G Transport, 800G and 400ZR.
112G serial links have moved out of the lab and onto the exhibit floor. 56G ramps up while 28G goes mainstream.
By Martin Rowe, EE Web, Tuesday, February 12, 2019
Last year at DesignCon 2018, we witnessed high-speed digital designs that moved past 56 Gbits/s (56G) and onto 112 Gbits/s (112G). This year, DesignCon 2019 brought numerous demonstrations of 112G as the connectors and cables caught up with the silicon. While still appearing in technical papers and panels, 112G has certainly moved into the exhibit hall. Meanwhile, 56G has matured and is now a complete ecosystem.
When it comes to signal integrity and high-speed signals, transmission lengths certainly matter, especially with electrical signals over copper connections. Yes, optical transmission is an option, but nobody wants to pay for it. At a panel session on Jan. 31, OIF board president Nathan Tracy presented the table shown in Figure 1 that describes five OIF standards for different electrical transmission lengths.
Figure 1: The Optical Internetworking Forum has created standards for 112-Gbit/s copper connections. (Source: Optical Internetworking Forum and DesignCon)
Figure 2: Cable assemblies jump over PCBs to reduce insertion loss. (Source: Optical Internetworking Forum, Broadcom, and DesignCon)
This article originally appeared on Gazettabyte by editor Roy Rubenstein: http://www.gazettabyte.com/home/2018/8/20/t-api-taps-into-the-transport-layer.html
The Optical Internetworking Forum (OIF), in collaboration with the Open Networking Foundation (ONF) and the Metro Ethernet Forum (MEF), has tested the second-generation transport application programming interface (T-API 2.0).
T-API 2.0 is a standardised interface, released in late 2017 by the ONF, that enables the dynamic allocation of transport resources using software-defined networking (SDN) technology.
The interface has been created so that when a service provider, or one of its customers, requests a service, the required resources including the underlying transport are configured promptly.
The OIF-led interoperability demonstration tested T-API 2.0 in dynamic use cases involving equipment from several systems vendors. Four service providers – CenturyLink, Telefonica, China Telecom and SK Telecom – provided their networking labs, located in three continents, for the testing.
Packets and transport
SDN technology is generally associated with the packet layer but there is also a need for transport links, from fibre and wavelength-division multiplexing technology at Layer 0 through to Layer 2 Ethernet.
Transport SDN differs from packet-based SDN in several ways. Transport SDN sets up dedicated pipes, whereas in packet SDN a path is only established when packets flow. “When you order a 100-gigabit connection in the transport network, you get 100 gigabits,” says Jonathan Sadler, the OIF’s vice president and Networking Interoperability Working Group chair. “You are not sharing it with anyone else.”
Another difference is that the packet layer, with its manipulation of packet headers, is a digital domain, whereas the photonic layer is analogue. “A lot of the details of how a signal interacts with a fibre, with the wavelength-selective switches, and with the different componentry that is used at Layer 0, are important in order to characterise whether the signal makes it through the network,” says Sadler.
Prior to SDN, control functions resided on a platform as part of a network’s distributed control plane. Each vendor had their own interface between the control and the optical domain embedded within their platforms. T-API has been created to expose and standardise that interface such that applications can request transport resources independent of the underlying vendor equipment.
To fulfil a connection across an operator’s network involves a hierarchy of SDN controllers. An application’s request is first handled by a multi-domain SDN controller that decomposes the request for the various domain controllers associated with the vendor-specific platforms. T-API 2.0’s role is to link the multi-domain controller to the application layer’s orchestrator and also connect the individual domain controllers to the multi-domain SDN controller (see diagram above). T-API is an example of a northbound interface.
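To make the controller hierarchy concrete: T-API is defined as a set of YANG models and is commonly carried over RESTCONF. The sketch below builds the kind of JSON body an orchestrator might POST to a multi-domain controller to request a point-to-point connectivity service. The resource path, attribute names and identifiers follow the TAPI naming style but are illustrative approximations, not text lifted from the Implementation Agreement.

```python
import json

# Illustrative RESTCONF path in the TAPI naming style (not verbatim from the IA).
TAPI_BASE = "/restconf/data/tapi-common:context/tapi-connectivity:connectivity-context"

def build_connectivity_request(service_uuid, src_sip, dst_sip, layer="DSR"):
    """Build the JSON body for a point-to-point connectivity-service request.

    The service-interface-point UUIDs identify the endpoints the controller
    already exposes in its topology; the IDs here are made up for illustration.
    """
    return {
        "tapi-connectivity:connectivity-service": [{
            "uuid": service_uuid,
            "layer-protocol-name": layer,
            "end-point": [
                {"local-id": "src",
                 "service-interface-point": {"service-interface-point-uuid": src_sip}},
                {"local-id": "dst",
                 "service-interface-point": {"service-interface-point-uuid": dst_sip}},
            ],
        }]
    }

body = build_connectivity_request("svc-0001", "sip-node-a", "sip-node-z")
print("POST", TAPI_BASE)
print(json.dumps(body, indent=2))
```

The same request shape works at both levels of the hierarchy; only the abstraction of the endpoints and topology behind it differs.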
The same T-API 2.0 interface is used at both SDN controller levels; what differs is the information each handles. Sadler compares the upper T-API 2.0 interface to a high-level map, whereas the individual T-API 2.0 domain interfaces can be seen as maps with detailed ‘local’ data. “Both [interfaces] work on topology information and both direct the setting-up of connections,” says Sadler. “But the way they are doing it is with different abstractions of the information.”
The ONF developed the first T-API interface as part of its Common Information Model (CIM) work. The interface was tested in 2016 as part of a previous interoperability demonstration involving the OIF and the ONF.
One important shortfall revealed during the 2016 demonstrations, and one which has slowed deployment, is that the T-API 1.0 interface didn’t fully define how to notify an upper controller of events in the lower domains. For example, if a link became congested or, worse, was lost, the interface couldn’t inform the upper controller to re-route traffic. This has been put right with T-API 2.0.
“T-API 1.0 is a configure and step-away deployment, T-API 2.0 is where the dynamic reactions to things happening in the network become possible,” says Sadler.
In addition to the four service providers, six systems vendors took part in the recent interoperability demonstration: ADVA Optical Networking, Coriant, Infinera, NEC/Netcracker, Nokia and SM Optics.
The recent tests focussed on the performance of the T-API 2.0 interface under dynamic network conditions. Another change since the 2016 tests was the involvement of the MEF. The MEF has adopted and extended T-API as part of its Network Resource Modeling (NRM) and Network Resource Provisioning (NRP) projects, elements of the MEF’s Lifecycle Service Orchestration (LSO) architecture. The LSO allows for service provisioning using T-API extensions that support the MEF’s Carrier Ethernet services.
Three aspects of the T-API 2.0 interface were tested as part of the use cases: connectivity, topology and notification.
Setting up a service requires both connectivity and topology. Topology refers to how a service is represented in terms of the node edge points and the links. Notification refers to the northbound aspect of the interface, pushing information upwards to the orchestrator at the application layer. This allows the orchestrator in a multi-domain network to re-route connectivity services across domains.
The four use cases tested included multi-layer network connections whereby topology information is retrieved from a multi-domain network with services provisioned across domains.
T-API 2.0 was also used to show the successful re-routing of traffic when network situations change such as a fault, congestion, or to accommodate maintenance work. Re-routing can be performed across the same layer such as the IP, Ethernet or optical layer, or, more optimally, across two or more layers. Such a capability promises operators the ability to automate re-routing using SDN technology.
The two other use cases tested during the recent demonstration were the orchestrator performing network restoration across two or more domains, and the linking of data centres’ network functions virtualisation infrastructure (NFVI). Such NFVI interconnect is a complex use case involving SDN controllers using T-API to create a set of wide area networks connecting the NFV sites. The use-case setup is shown in the diagram below.
SK Telecom, one of the operators that participated in the interoperability demonstration, welcomes the advent of T-API 2.0 and says such APIs will allow operators to enable services more promptly.
“It has been difficult to provide services such as bandwidth-on-demand and networking services for enterprise customers enabled using a portal,” says Park Jin-hyo, executive vice president of the ICT R&D Centre at SK Telecom. “These services will be provided within minutes, according to the needs, using the graphical user interface of SK Telecom’s network-as-service platform.”
SK Telecom stresses the importance of open APIs in general as part of its network transformation plans. As well as implementing a 5G Standalone (SA) Core, SK Telecom aims to provide NFV and SDN-based services across its network infrastructure including optical transport, IP, data centres, wired access as well as networks for enterprise customers.
“Our final goal is to open the network itself to enterprise customers via an open API,” says Park. “Our mission is to create 5G-enabled network-slicing-based business models and services for vertical markets.”
The OIF says the use cases have shown that T-API 2.0 enables real-time orchestration and that the main shortcomings identified with the first T-API interface have been addressed with T-API 2.0.
The OIF recognises that while T-API may not be the sole approach available for the industry – the IETF has a separate activity – the successful tests and the broad involvement of organisations such as the ONF and MEF make a strong case for T-API 2.0 as the approach for operators as they seek to automate their networks.
“When it comes to the orchestrator tying into the transport network, we do believe T-API will be one of the main approaches for these APIs,“ says Sadler.
SK Telecom said participating in the interop demonstrations enabled it to test and verify, at a global level, APIs that the operators and equipment manufacturers have been working on. And from a business perspective, the demonstration work confirmed to SK Telecom the potential of the ‘global network-as-a-service’ concept.
Editor note: Added input from SK Telecom on September 1st.
Roy Rubenstein, Gazettabyte
June 23, 2017
The Optical Internetworking Forum’s (OIF) group tasked with developing two styles of 400-gigabit coherent interface is now concentrating its efforts on one of the two.
When first announced last November, the 400ZR project planned to define a dense wavelength-division multiplexing (DWDM) 400-gigabit interface and a single wavelength one. Now the work is concentrating on the DWDM interface, with the single-channel interface deemed secondary.
“It [the single channel] appears to be a very small percentage of what the fielded units would be,” says Karl Gass of Qorvo, optical vice chair of the OIF Physical and Link Layer Working Group, the group responsible for the 400ZR work.
The likelihood is that the resulting optical module will serve both applications. “Realistically, probably both [interfaces] will use a tunable laser because the goal is to have the same hardware,” says Gass.
The resulting module may also only have a reach of 80km, shorter than the original goal of up to 120km, due to the challenging optical link budget.
Origins and status
The 400ZR project began after Microsoft and other large-scale data centre players such as Google and Facebook approached the OIF to develop an interoperable 400-gigabit coherent interface they could then buy from multiple optical module makers.
The internet content providers’ interest in an 80km-plus link is to connect premises across the metro. “Eighty kilometres is the magic number from a latency standpoint so that multiple buildings can look like a single mega data centre,” says Nathan Tracy of TE Connectivity and the OIF’s vice president of marketing.
Since then, traditional service providers have shown an interest in 400ZR for their metro needs. The telcos’ requirements differ from those of the data centre players, causing the group to tweak the channel requirements. This is the current focus of the work, with the OIF collaborating with the ITU.
“The ITU does a lot of work on channels and they have a channel measurement methodology,” says Gass. “They are working with us as we try to do some division of labour.”
The group will choose a forward error correction (FEC) scheme once there is common agreement on the channel. “Imagine all those [coherent] DSP makers in the same room, each one recommending a different FEC,” says Gass. “We are all trying to figure out how to compare the FEC schemes on a level playing field.”
Meeting the link budget is challenging, says Gass, which is why the link might end up being 80km only. “The catch is how much can we strip everything down and still meet a large percentage of the use cases.”
400ZR form factors
Once the FEC is chosen, the power envelope will be fine-tuned and then the discussion will move to form factors. The OIF says it is still too early to discuss whether the project will select a particular form factor. Potential candidates include the OSFP MSA and the CFP8.
The industry assumption is that the 80km-plus 400ZR digital coherent optics module will consume around 15W, requiring a very low-power coherent DSP that will be made using 7nm CMOS.
“There is strong support across the industry for this project, evidenced by the fact that project calls are happening more frequently to make the progress happen,” says Tracy.
Why the urgency?
“The cloud is the biggest voice in the universe,” says Tracy. To support the move of data and applications to the cloud, the infrastructure has to evolve, leading to the data centre players linking smaller locations spread across the metro.
“At the same time, the next-gen speed that is going to be used in these data centres – and therefore outside the data centres – is 400 gigabit,” says Tracy.
The goal of networking standards groups is to establish multivendor/multicarrier standards that interoperate. To achieve that goal, the OIF (Optical Internetworking Forum) is conducting a global interoperability demo to test software-defined networking (SDN) Transport Application Programming Interfaces (TAPI) among 5 global carriers and 11 system and software vendors. The tests cover services across optical, IP, and virtual appliance layers to see how services interact across the different vendors. In 1Q17, all issues identified will be presented to users and standard organizations to improve the performance and adoption of SDN. In doing this work, the OIF is performing a necessary step to drive adoption of multilayer and multivendor SDN.
OIF’s demonstration will lead to faster adoption of multilayer SDN
The OIF has been working to accelerate the deployment of new optical technology since its founding in 1998. In its 2014 demonstration of SDN transport architecture, the OIF established the need for common APIs for end-to-end orchestration for a multidomain network. Now, it is testing the Open Networking Foundation’s (ONF’s) SDN TAPI across 11 system participants: Adva, Ciena, Coriant, FiberHome, Huawei Technologies, Infinera, Juniper Networks, NEC Corporation, Sedona, SM Optics, and ZTE, with the support of five carriers – China Telecom, China Unicom, SK Telecom, Telefonica, and Verizon. The goal is to identify gaps in the current standards and work with standard bodies to address those gaps.
The demonstrations will abstract the topology for each carrier, including a virtual network in another carrier, setup of dynamic VNFs (virtual network functions), dynamic-connect IP services, and restore/setup of intra-lab optical connections. Each of these demonstrations will help vendors, carriers, and standard bodies see how complete the solutions are, and what needs to be done to progress multilayer interoperability between vendors and carriers.
Overall, the OIF is helping to move SDN and NFV (network functions virtualization) forward with the test of SDN TAPI. Vendors will benefit from knowing what applications need more work, and carriers will benefit from knowing how adoption of SDN/NFV can work across vendors and carriers so they can speed investment.
Donald Frey, Principal Analyst, Intelligent Networks
Roy Rubenstein, Gazettabyte
July 26, 2016
The Optical Internetworking Forum (OIF) has started a new analogue coherent optics (ACO) specification based on the CFP8 pluggable module.
The CFP8 is the latest in a series of optical modules specified by the CFP Multi-Source Agreement and will support the emerging 400 Gigabit Ethernet standard.
An ACO module used for optical transport integrates the optics and driver electronics, while the accompanying coherent DSP-ASIC resides on the line card.
Systems vendors can thus use their own DSP-ASIC, or a merchant one if they don’t have an in-house design, while choosing the coherent optics from various module makers. The optics and the DSP-ASIC communicate via a high-speed electrical connector on the line card.
Current CFP2-ACO modules support single-wavelength transmission rates from 100 gigabit to 250 gigabit depending on the baud rate and modulation scheme used. The goal of the CFP8-ACO is to support up to four wavelengths, each capable of up to 400 gigabit-per-second transmissions.
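The dependence of the carrier rate on baud rate and modulation follows from simple arithmetic: a coherent carrier delivers baud rate times bits per symbol times two polarisations, before FEC and framing overhead. A back-of-envelope sketch (the client-rate notes in the comments are approximate, since overhead varies by implementation):

```python
def coherent_line_rate_gbps(baud_gbaud, bits_per_symbol, polarizations=2):
    """Raw line rate of a single coherent carrier, before FEC/framing overhead."""
    return baud_gbaud * bits_per_symbol * polarizations

# DP-QPSK at 32 Gbaud: 2 bits/symbol x 2 polarisations.
assert coherent_line_rate_gbps(32, 2) == 128   # enough for 100G plus overhead
# DP-16QAM at 32 Gbaud: 4 bits/symbol x 2 polarisations.
assert coherent_line_rate_gbps(32, 4) == 256   # roughly a 200G-class carrier
```

Raising the baud rate or the constellation order (or both) is how a single carrier reaches the 400-gigabit target.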
“This isn’t something there is a dire need for now but the projection is that this will be needed in two years’ time,” says Karl Gass of Qorvo and the OIF Physical and Link Layer Working Group optical vice chair.
OIF members considered several candidate optical modules for the next-generation ACO before choosing the CFP8, including the existing CFP2 and the CFP4. There were some proponents for the QSFP, but its limited size and power envelope are problematic for long-haul applications, says Gass.
One difference between the CFP2 and CFP8 modules is that the electrical connector of the CFP8 supports 16 differential pairs while the CFP2 connector supports 10 pairs.
“Both connectors have similar RF performance and therefore can handle similar baud rates,” says Ian Betty of Ciena, an OIF board member and editor of the CFP2-ACO Implementation Agreement. To achieve 400 gigabit on a wavelength for the CFP8-ACO, the electrical connector will need to support 64 gigabaud.
Betty points out that for coherent signalling, four differential pairs per optical carrier are needed. “This is independent of the baud rate and the modulation format,” says Betty.
So while it is not part of the existing Implementation Agreement, the CFP2-ACO could support two optical carriers while the CFP8 will support up to four carriers.
“This is only the electrical connector interface capacity,” says Betty. “It does not imply it is possible to fit this amount of optics and electronics in the size and power budget.” The CFP8 supports a power envelope of 20W compared to 12W of the CFP2.
The CFP2-ACO showing the optical building blocks and the electrical connector linking the module to the DSP-ASIC. Source: OIF
The CFP8 occupies approximately the same area as the CFP2 but is not as tall, such that modules can be double-stacked on a line card for a total of 16 CFP8-ACOs per card.
Given that the CFP8 will support up to four carriers per module – each up to 400 gigabit – a future line card could support 25.6 terabits of capacity. This is comparable to the total transport capacity of current leading dense WDM optical transport systems.
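The 25.6-terabit figure is simply the product of the numbers above; a one-line check:

```python
# Aggregate capacity of a double-stacked CFP8-ACO line card, per the article.
modules_per_card = 16       # double-stacked CFP8 modules
carriers_per_module = 4     # up to four optical carriers per module
gbps_per_carrier = 400      # 400 Gb/s per carrier

total_gbps = modules_per_card * carriers_per_module * gbps_per_carrier
print(total_gbps / 1000, "Tb/s")  # -> 25.6 Tb/s
```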
Rafik Ward, vice president of marketing at Finisar, says such a belly-to-belly configuration of the modules provides future-proofing for next-generation lineside interfaces. “Having said that, it is not clear when, or how, we will be able to technically support a four-carrier coherent solution in a CFP8 form factor,” says Ward.
Oclaro stresses that such a high total capacity assumes sufficient coherent DSP silicon can fit on the line card. Otherwise, the smaller-height CFP8 module may not deliver the full expected card density if the DSP chips are too large or too power-hungry.
Besides resulting in a higher density module, a key OIF goal of the work is to garner as much industry support as possible to back the CFP8-ACO. “How to create the quantity of scale so that deployment becomes less expensive and therefore quicker to implement,” says Gass.
The OIF expects the work to be similar to the development of the CFP2-ACO Implementation Agreement. But one desired difference is to limit the classes associated with the module. The CFP2-ACO has three class categories based on whether the module has a limiting or a linear output. “The goal of the CFP8-ACO is to limit the designs to single classes per wavelength count,” says Gass.
Gass is looking forward to the CFP8-ACO specification work. Certain standards efforts largely involve making sure components fit into a box whereas the CFP8-ACO will be more engaging. “This project is going to drive innovation and that will drive some technical work,” says Gass.
Gazettabyte – Roy Rubenstein
May 21, 2015
The Optical Internetworking Forum (OIF) has started modulator and receiver specification work to enhance coherent optical transmission performance. The OIF initiative aims to optimise modulator and receiver photonics operating at a higher baud rate than the current 32 Gigabaud (Gbaud).

“We want the two projects to look at those trade-offs and look at how we could build the particular components that could support higher individual channel rates,” says Karl Gass of Qorvo, optical vice chair of the OIF Physical and Link Layer Working Group.
The OIF members, which include operators, internet content providers, equipment makers, and optical component and chip players, want components that work over a wide bandwidth, says Gass. This will allow the modulator and receiver to be optimised for the new higher baud rate.
“Perhaps I tune it [the modulator] for 40 Gbaud and it works very linearly there, but because of the trade-off I make, it doesn’t work very well anywhere else,” says Gass. “But I’m willing to make the trade-off to get to that speed.” Gass uses 40 Gbaud as an example only, stressing that much work is required before the OIF members choose the next baud rate.
The modulator and receiver optimisations will also be chosen independent of technology since lithium niobate, indium phosphide and silicon photonics are all used for coherent modulation.
The OIF has not detailed timescales but Gass says projects usually take 18 months to two years.
Meanwhile, the OIF has completed two projects, the specification outputs of which are referred to as implementation agreements (IAs).
One is for integrated dual polarisation micro-intradyne coherent receivers (micro-ICR) for the CFP2. At OFC 2015, several companies detailed first designs for coherent line side optics using the CFP2 module.
The micro-ICR IA also defines a low-speed SPI bus interface to control the trans-impedance amplifiers in the coherent receiver. The digital bus interface enables circuit settings to be changed with operating temperature. With the first-generation coherent receiver design, analogue signalling was used for such control, says Gass. The smaller micro-ICR has a reduced pin count and so uses a narrower digital bus to control the circuits.

The second completed IA is the 4×5-inch second-generation 100 Gig long-haul DWDM transmission module.
“This [module] is considered an intermediate step with the almost immediate goal being to go to a CFP module,” says Gass.
Martin Rowe – March 30, 2015
Last week at OFC 2015, the OIF (Optical Internetworking Forum) demonstrated two 50 Gbps transmissions using both PAM4 and NRZ formats. Over the past year, PAM4 has emerged as what appears to be the modulation format of choice for many systems, although NRZ will still have its place.
PAM4, the topic of the Jitter Panel at DesignCon 2015, looks to become the modulation format for LR (long reach) and MR (medium reach) optical links. In particular, PAM4 looks to take over from NRZ for electrical links that lead up to an optical module and across backplanes. For XSR (extra-short reach) applications, NRZ is likely to live on in applications where signal-to-noise ratio is important such as within an optical module or in memory buses. The two videos below show demonstrations from the OIF booth.
In the first video, Scott Sommers of Molex shows a 50 Gbps PAM4 signal traveling over a 0.54 m Molex backplane. The signal is generated by an arbitrary waveform generator (AWG) from Keysight Technologies. The demo uses the AWG because silicon to generate the PAM4 signal isn’t yet available.
Silicon that can generate a 56 Gbps data stream using NRZ is available, and Jeff Twombly of Credo Semiconductor demonstrated it in the OIF booth. In this demonstration, a Credo 56G NRZ SerDes drove three demonstrations: a CEI-56G-VSR-NRZ channel, a CEI-56G-MR/LR-NRZ backplane and a CEI-56G-MR-NRZ passive copper cable. The video shows the signal passing through a 1 m length of copper cable.
Gazettabyte, Roy Rubenstein
Wednesday, March 25, 2015
The Optical Internetworking Forum (OIF) is using the OFC exhibition taking place in Los Angeles this week to showcase the first electrical interfaces running at 56 Gigabit. Coherent optics in a CFP2 pluggable module is also being demonstrated.
The OIF – an industry organisation comprising communications service providers, internet content providers, system vendors and component companies – is developing the next common electrical interface (CEI) specifications. The OIF is also continuing to advance fixed and pluggable optical module specifications for coherent transmission including the pluggable CFP2 (CFP2-ACO).
“These are major milestones that the [demonstration] efforts are even taking place,” says Nathan Tracy, a technologist at TE Connectivity and the OIF technical committee chair.
Tracy stresses that the CEI-56G specifications and the CFP2-ACO remain works in progress. “They are not completed documents, and what the demonstrations are not showing are compliance and interoperability,” he says.
Five CEI-56G specifications are under development, covering applications such as platform backplanes and links between a chip and an optical engine on a line card (see Table below).
Moving from the current 28 Gig electrical interface specifications to 56 Gig promises to double the interface capacity and cut electrical interface widths by half. “If we were going to do 400 Gigabit with 25 Gig channels, we would need 16 channels,” says Tracy. “If we can do 50 Gig, we can get it down to eight channels.” Such a development will enable chassis to carry more traffic and help address the continual demand for more bandwidth, he says.
But doubling the data rate is challenging. “As we double the rate, the electrical loss or attenuation of the signal travelling across a printed circuit board is significantly impacted,” says Tracy. “So now our reaches have to get a lot shorter, or the silicon that sends and receives has to improve to significant higher levels.”
Moreover, chip designers must ensure that the power consumption of their silicon does not rise. “We have to be careful as to what the market will tolerate, as one of the biggest challenges in system design is thermal management,” says Tracy. “We can’t just do what it takes to get to 56 Gigabit.”
To this end, the OIF is pursuing two parallel tracks: 56 Gigabit non-return-to-zero (NRZ) signalling, and 4-level pulse amplitude modulation (PAM-4), which encodes two bits per symbol so that a 28 Gbaud signalling rate can be used. The 56 Gig NRZ track uses simpler signalling but must deal with the higher associated loss, while PAM-4 avoids that loss, since its channels resemble the CEI-28G channels used today, but requires a more complex design.
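The trade-off is easy to see in a toy model: NRZ sends one bit per symbol, while PAM-4 maps bit pairs onto four amplitude levels, halving the symbol rate for the same bit rate. Gray-coded level mapping is typical in practice, but the exact coding below is illustrative:

```python
# Illustrative Gray-coded mapping of bit pairs onto four amplitude levels.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_modulate(bits):
    """Map an even-length bit sequence onto PAM-4 amplitude levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return [PAM4_LEVELS[p] for p in pairs]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_modulate(bits)
assert len(symbols) == len(bits) // 2  # half the symbols for the same bits
print(symbols)  # -> [3, -1, 1, -3]
```

So a 28 Gbaud PAM-4 link carries 28e9 × 2 = 56 Gb/s, matching 56G NRZ at half the signalling rate; the cost is reduced spacing between levels, which is why the format is more sensitive to noise.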
“Some [of the five CEI-56G specifications] use NRZ, some PAM-4 and some both,” says Tracy. The OIF will not say when it will complete the CEI-56G specifications. However, the projects are making similar progress while the OIF is increasing its interactions with other industry standards groups to shorten the overall timeline.
Source: OIF, Gazettabyte
Two of the CEI-56G specifications cover much shorter distances: the Extra Short Reach (XSR) and Ultra Short Reach (USR). According to the OIF, in the past it was unclear that the industry would benefit from interoperability for such short reaches.
“What is different at 56 Gig is that architectures are fundamentally being changed: higher data rates, industry demand for higher levels of performance, and changing fabrication technologies,” says Tracy. Such fabrication technologies include 3D packaging and multi-chip modules (MCMs) where silicon dies from different chip vendors may be connected within the module.
The XSR interface is designed to enable higher aggregate bandwidth on a line card which is becoming limited by the number of pluggable modules that can be fitted on the platform’s face plate. Density can be increased by using mid-board optics (an optical engine) placed closer to a chip. Here, fibre from the optical engine is fed to the front plate increasing the overall interface capacity.
The USR interface is to support stackable ICs and MCMs.
“The most important thing for everyone is power consumption on the line card,” says Tracy. “If you define these very short reach interfaces in such a way that these chips do not need as much power, then we have helped to enable the next generation of line card.”
The live demonstrations at OFC include a CEI-56G-VSR-NRZ channel, a CEI-56G-VSR-PAM QSFP compliance board, CEI-56G-MR/LR-PAM and CEI-56G-MR/LR-NRZ backplanes, and a CEI-56G-MR-NRZ passive copper cable.
The demonstrations reflect what OIF members are willing to show, as some companies prefer to keep their work private. “All are coming together in this pre-competitive stage to define the specifications, yet, at the same time, we are all fierce competitors,” says Tracy.
Also on display are working CFP2 analogue coherent optics (CFP2-ACO) modules. The significance of coherent optics in a pluggable CFP2 is the promise of higher-density line cards: the CFP is a much bigger module and at most four can be fitted on a line card, while the smaller, lower-power CFP2 allows up to eight modules.
With the CFP2-ACO, the coherent DSP-ASIC sits outside the CFP2 module. Much work has been done to ensure that the electrical interface can support the analogue signalling between the CFP2 optics and the on-board DSP-ASIC, says Tracy.
At OFC, several companies have unveiled their CFP2-ACO products, including Finisar, Fujitsu Optical Components, Oclaro and NEC, while ClariPhy has announced a single-board reference design that includes its CL20010 DSP-ASIC and a CFP2-ACO slot.
News Analysis, Light Reading
Carol Wilson, Editor-at-large
February 12, 2015
Internet content providers and other network operators are looking for much fatter connections between their data centers than the current Ethernet service definitions can provide. So the Optical Internetworking Forum is stepping up with a new project to define more flexible Ethernet options for using the entire capacity of a given optical link. (See OIF Aims to Enable More Flexible Ethernet.)
Known as FlexEthernet, the project will establish a way for Ethernet equipment to use a variety of different tools such as channelization, bonding and sub-rate functionality to create those faster connections in a standard way, says Nathan Tracy, chairman of the Optical Internetworking Forum (OIF) Technical Committee and manager of industry standards for TE Connectivity (NYSE: TEL).
The idea is to supplement the Ethernet standard definitions developed by the IEEE with a common approach that can be brought to market more quickly, in time to meet the booming demand for faster connections between data centers, Tracy says.
“This uses the IEEE’s Ethernet in more flexible ways,” he notes.
Large Internet content providers are among those clamoring for the new flexibility, Tracy admits. While he doesn’t name specific companies, it’s apparent that Google (Nasdaq: GOOG), Apple Inc. (Nasdaq: AAPL) and Facebook are driving their own networking agendas and would benefit from this kind of connectivity.
One common feature set within FlexEthernet would allow a given link between two points to consume the full bandwidth of that link, beyond the published data rates that are typically 10 Gbit/s or 100 Gbit/s. “What FlexEthernet will enable a user to do is to start running data at the maximum rate of the link and then dial that down until it reaches an error rate that is acceptable,” Tracy says. “The traffic will go beyond the defined Ethernet service, but it will still look like Ethernet as the data goes on and off the link. That is one of the first apps of FlexEthernet and it is the one that drove this conversation a year ago.”
Another possible FlexEthernet option is to enable creation of custom data rates by using bonding of multiple rates — offering a 200Gbit/s service by bonding together two 100Gbit/s lanes, for example. The traditional way of doing this involves link aggregation, Tracy says, but that wouldn’t deliver the full 200 Gbit/s.
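The sub-rate and bonding ideas described above can be sketched in code. The snippet below is an illustrative model only, assuming a calendar of 5 Gbit/s slots distributed round-robin across a group of bonded PHYs — the general approach the OIF later standardized in FlexE — and is not taken from any OIF implementation agreement:

```python
# Illustrative sketch (not the OIF specification): a 200G client bonded
# across two 100G PHYs, or a sub-rate client that uses only part of the
# group, modeled as a round-robin calendar of 5G slots.

SLOT_GBPS = 5  # slot granularity assumed for this illustration

def build_calendar(client_gbps, phy_gbps_list):
    """Assign the client's 5G calendar slots across the bonded PHYs round-robin."""
    total = sum(phy_gbps_list)
    if client_gbps > total:
        raise ValueError("client rate exceeds bonded group capacity")
    n_slots = client_gbps // SLOT_GBPS
    phy_capacity = [g // SLOT_GBPS for g in phy_gbps_list]
    calendar = {i: [] for i in range(len(phy_gbps_list))}
    phy = 0
    for slot in range(n_slots):
        # Skip any PHY whose slot budget is already exhausted.
        while len(calendar[phy]) >= phy_capacity[phy]:
            phy = (phy + 1) % len(phy_gbps_list)
        calendar[phy].append(slot)
        phy = (phy + 1) % len(phy_gbps_list)
    return calendar

# Bonding: a 200G client over two 100G PHYs, 20 slots on each.
bonded = build_calendar(200, [100, 100])
# Sub-rate: a 150G client on the same group leaves capacity unused.
sub_rate = build_calendar(150, [100, 100])
```

Dialing the client rate down, as Tracy describes, then simply means rebuilding the calendar with fewer occupied slots.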
The IEEE would be expected to ultimately develop a standard approach to 200 Gbit/s, but that could be a couple of years away and, in the meantime, a standard approach to offering that kind of connection can be defined by the OIF.
“This would allow interim data rates for niches or specific needs until it is determined there is a broad market potential and broadly available technology” to do it via IEEE standards, Tracy says. The standards process isn’t forced ahead of what could be more efficient options in the long run.
FlexEthernet, using an extension of the OIF's multi-link gearbox, will also allow the bundling of 10Gbit/s lines together so they can be supported by one 50-gig pin on an ASIC. This makes more efficient use of the limited number of pins on a given ASIC, Tracy notes.
The new project was launched at the OIF’s quarterly meeting earlier this year, along with the preparation of the OIF’s SDN Framework, a technical white paper which lays out the components and interfaces that will need to be standardized for SDN. That work is focused on establishing an applications development framework. (See SDN Tests Go Swimmingly, Says OIF and OIF Launches SDN Implementation Project.)
— Carol Wilson, Editor-at-Large, Light Reading
By Stephen Hardy, Editorial Director and Associate Publisher, Lightwave
February 12, 2015
Optical Internetworking Forum (OIF) members decided last month at their first quarter 2015 meeting to launch a project that would enable systems designers and their customers to tune the transmission speeds of their Ethernet equipment to rates not specified in existing Ethernet standards.
The FlexEthernet project will build on the OIF’s previous development of the multi-link gearbox (MLG), a device that mitigates differences in the number of lanes between chip interfaces. For example, the MLG can translate between a chip that sends a 100-Gbps signal across 10 lanes of 10 Gbps and another that operates at 4×25 Gbps and allow the signal to be recovered in its original 10-lane form (see “OIF launches new interconnect, 100G projects” and “AppliedMicro demos OIF-compliant 100G/10G multi-link gearbox chip”). The new project will create ways of using channelization, bonding, and sub-rate functionality to enable data rates to be adjusted either above or below current Ethernet standards.
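The gearbox idea — spreading a fixed set of logical lanes across a different number of physical lanes and recovering the original form at the far end — can be illustrated with a toy model. The code below is a simplification invented for this sketch (the real MLG operates on bit streams with alignment markers, not tagged words):

```python
# Toy model of the multi-link gearbox idea: words from ten logical 10G
# lanes are tagged with (lane id, sequence number), multiplexed onto four
# physical 25G lanes, then demultiplexed back into the original ten-lane
# form. The tagging scheme is invented for illustration.

def mux(logical_lanes, n_phys=4):
    """Spread tagged words from all logical lanes round-robin over the physical lanes."""
    tagged = [(lane_id, seq, word)
              for lane_id, lane in enumerate(logical_lanes)
              for seq, word in enumerate(lane)]
    phys = [[] for _ in range(n_phys)]
    for i, item in enumerate(tagged):
        phys[i % n_phys].append(item)
    return phys

def demux(phys, n_logical=10):
    """Recover the original lanes using the embedded lane and sequence tags."""
    logical = [[] for _ in range(n_logical)]
    for p in phys:
        for lane_id, seq, word in p:
            logical[lane_id].append((seq, word))
    return [[word for _, word in sorted(lane)] for lane in logical]

lanes = [[f"L{i}W{j}" for j in range(5)] for i in range(10)]
assert demux(mux(lanes)) == lanes  # round-trip recovers the 10-lane form
```

The round-trip assertion mirrors what the article describes: the signal crosses a 4-lane interface but is recoverable in its original 10-lane form.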
The effort responds to requests from data center operators for a way to maximize the capacity of existing network infrastructures, according to Nathan Tracy of TE Connectivity and the OIF’s Technical Committee chair. For example, if a particular link couldn’t support a 40 Gigabit Ethernet connection due to reach or interference factors, FlexEthernet would enable the user to reduce the transmission rate to the highest the link would support without having to drop all the way to 10 Gigabit Ethernet, Tracy said. Conversely, if an operator wanted to transmit at a higher rate than 100 Gigabit Ethernet, FlexEthernet would enable that as well.
Transmission rates of 200 Gbps would be possible today if the FlexEthernet project were complete, Tracy added.
However, it’s not complete – and, as is customary among OIF spokesmen, Tracy declined to predict when the FlexEthernet development project would finish.
In other action at the quarterly meeting, the OIF membership moved closer to approving its SDN Framework document and finishing development of implementation agreements for software-defined networking (SDN) APIs addressing topology, service request, connection request, and path computation (see “OIF to explore Transport SDN, CFP2” and “OIF looks to solidify Transport SDN APIs”). The Physical and Link Layer Working group also met to discuss the application of PAM-4 and NRZ modulation formats for various CEI-56G projects.
The groups will meet again in April.
Carol Wilson, Editor-at-large
After successfully demonstrating Global Transport SDN, the Optical Internetworking Forum is starting an effort to develop implementation agreements for the interfaces used in that demo to link applications to an SDN controller. The move will address issues revealed in the demo about gaps in definitions for how user applications interact with the underlying transport network resources. (See OIF Launches SDN Implementation Project.)
The Optical Internetworking Forum (OIF), which did the demo jointly with the Open Networking Foundation, is planning to develop these agreements for the two application programming interfaces (APIs) that were used in the demo — for service request and topology — as well as for path computation and link resource manager interfaces that the group has already identified in its SDN Framework. (See SDN Tests Go Swimmingly, Says OIF and OIF, ONF List Vendors in Transport SDN Demo.)
The implementation agreements are essentially agreements among multiple industry players on how something is done, in advance of standards development, says Jonathan Sadler, the Coriant exec who is OIF Technical Committee vice chair. The OIF's SDN Framework has been in process since 2013, and that work has identified a number of APIs that need to be addressed. The two that were part of the demo — service request and topology — were given early importance, but others will also be needed as Global Transport SDN is pushed toward commercial availability, he notes. In particular, implementation agreements will enable a common Service API for deployment across both OpenFlow and non-OpenFlow-based networking environments.
“The implementation agreement includes how to use REST and JSON — two specific technologies in Web 2.0 space — to convey the info needed for SDN in the transport environment” to set up services, Sadler says.
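A service request in the REST/JSON style Sadler describes might look like the sketch below. The endpoint structure and field names here are hypothetical, invented purely to illustrate the idea of a common JSON body for asking a controller for transport resources; they are not taken from the OIF implementation agreement:

```python
import json

# Hypothetical illustration of a REST/JSON service request to an SDN
# controller. Every field name below is an assumption made for this
# sketch, not part of the OIF-defined Service API.

def make_service_request(src, dst, rate_gbps):
    """Build a JSON body asking the controller for a transport connection."""
    body = {
        "service": {
            "endpoints": [src, dst],
            "bandwidth-gbps": rate_gbps,
            "directionality": "bidirectional",
        }
    }
    return json.dumps(body)

# An application would POST this body to the controller's service-request URL.
payload = make_service_request("dc-west/port7", "dc-east/port3", 100)
```

Because the body is plain JSON carried over REST, the same request can be issued unchanged against any controller that implements the agreed API — which is exactly the "write once, use across multiple networks" goal described below.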
The goal is to have one approach and one programming language for the way applications talk to the network and request resources, he adds. Today, applications have multiple ways of talking to the network and requesting resources. A common approach will simplify the communications between applications, an SDN controller and the underlying network resources.
Ultimately, that will allow application developers to write one version and use it across multiple networks and different types of vendor equipment and controllers, which in turn will help drive broader application development.
— Carol Wilson, Editor-at-Large, Light Reading