– TCP Proxy Application Level Benchmark IA (March 2006)
The TCP Proxy Application Level Benchmark specifies performance metrics and testing methodologies for measuring TCP Proxy applications on a network processor subsystem. TCP Proxies fully terminate TCP connections, enabling full data scanning and modification; they serve as the foundation for applications such as L7 firewalls, load balancers, SSL accelerators, and Intrusion Detection Systems. The benchmark defines standard testing procedures for TCP Proxy goodput on existing connections, goodput with connection setup/teardown, connection setup rate, connection setup and teardown rate, SYN/ACK latency, and connection setup latency. Standard workloads are defined, including TCP object sizes, number of concurrent connections, traffic patterns, and round-trip times. Tester performance calibration is handled through the specification of both DUT and loopback tests. This is the NPF’s first layer 4 benchmark.
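As an illustration of two of these metrics, the sketch below computes goodput and connection setup rate from raw test counters. The function names are hypothetical helpers, not part of the IA:

```python
def goodput_bps(payload_bytes: int, duration_s: float) -> float:
    """Goodput counts only application payload delivered end to end,
    excluding TCP/IP header overhead and retransmitted data."""
    return payload_bytes * 8 / duration_s

def setup_rate_cps(connections_established: int, duration_s: float) -> float:
    """Connections fully established (three-way handshake completed)
    per second over the measurement interval."""
    return connections_established / duration_s

# 1 GB of payload in 8 seconds is 1 Gbit/s of goodput.
print(goodput_bps(1_000_000_000, 8.0))   # 1000000000.0
print(setup_rate_cps(50_000, 2.0))       # 25000.0
```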
– IPSec Forwarding Application Level Benchmark IA (July 2004)
The IPSec Forwarding Application Level Benchmark Implementation Agreement (IA) enables the objective measurement and reporting of the IPSec performance of any Network Processing Unit (NPU)-based device or set of components under test. The benchmark provides both vendors and system OEMs with a quantitative analysis of how the system or part(s) will perform when providing IPSec-based functions. The IA includes specific instructions for setup and configuration, tests and test parameters, and result reporting formats. The specification supports a number of different configurations, including systems with multiple NPUs, PC boards and/or daughter cards, and a backplane or switch fabric. Whichever configuration is used, all of the system’s components must be documented. The IA includes three specific tests, each with its own procedure, test setup, frame sizes, and reporting format. The three tests measure IPSec forwarding rate, IPSec throughput, and IPSec latency.
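A minimal sketch of how per-frame latency measurements might be summarized for reporting; the function and field names are illustrative, not the IA's actual reporting format:

```python
def latency_report_us(tx_us, rx_us):
    """Summarize one-way latency over matched transmit/receive
    timestamp pairs (microseconds) as min/avg/max."""
    deltas = [rx - tx for tx, rx in zip(tx_us, rx_us)]
    return {
        "min": min(deltas),
        "avg": sum(deltas) / len(deltas),
        "max": max(deltas),
    }

# Three frames transmitted at t = 0, 10, 20 us and received 5-7 us later.
print(latency_report_us([0, 10, 20], [5, 17, 26]))
# {'min': 5, 'avg': 6.0, 'max': 7}
```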
– Switch Fabric Benchmark (July 2003)
The Switch Fabric Benchmark is a compilation of five fabric Implementation Agreements released beginning in October 2002: the Switch Fabric Benchmarking Framework, Fabric Traffic Models, Fabric Performance Metrics, Performance Testing Methodology for Fabric Benchmarking, and Switch Fabric Benchmark Test Suites. Together, these agreements provide all of the background, methodology, and testing parameters needed for vendor-independent switch fabric performance measurement. The tests are divided into three suites: hardware benchmarks, arbitration benchmarks, and multicast benchmarks. Each suite includes multiple tests with their own test objective, arrival pattern, test procedure, and result presentation instructions. The three main performance metrics are latency, accepted vs. offered bandwidth, and jitter. This benchmark enables system design engineers to assess and compare different switch fabrics in an open and objective manner.
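The accepted-vs.-offered-bandwidth metric and one common definition of latency jitter (each frame's deviation from the mean latency) can be sketched as follows; the exact definitions used by the IAs may differ:

```python
from statistics import mean

def accepted_vs_offered(accepted_frames: int, offered_frames: int) -> float:
    """Fraction of the offered load the fabric actually delivered;
    1.0 means nothing was dropped at the offered rate."""
    return accepted_frames / offered_frames

def jitter_us(latencies_us):
    """Per-frame jitter as absolute deviation from the mean latency."""
    avg = mean(latencies_us)
    return [abs(l - avg) for l in latencies_us]

print(accepted_vs_offered(950, 1000))   # 0.95
print(jitter_us([10.0, 12.0, 14.0]))    # [2.0, 0.0, 2.0]
```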
– IP Forwarding Benchmark IA (June 2003)
This Implementation Agreement extends the scope of the IPv4 Forwarding Benchmark (July 2002) to include the IPv6 protocol and new IPv4 and IPv6 routing tables. The specification provides industry-standard measures of the forwarding performance of network processing systems with native IPv4, native IPv6, and mixed IPv4/IPv6 traffic. IPv4 routing tables featuring 10k, 120k, and 1M routes are included, as are IPv6 routing tables of 400 and 1.2k routes. The IA details the terminology, test configurations, benchmark tests, routing tables, and reporting formats needed to measure and publish the forwarding performance of network-processing-based systems.
The tests are grouped into three categories: data plane tests, control plane tests, and concurrent data plane and control plane tests. The data plane tests include measures of the aggregate forwarding rate, throughput, latency, loss ratio, overload forwarding rate, and system power consumption. Different traffic combinations are used, including 100 percent native IPv4, 50 percent IPv4/50 percent IPv6, and 100 percent IPv6. The control plane tests include measures of forwarding table update rates. Lastly, the concurrent data plane and control plane tests measure the effect of concurrent forwarding table updates on the forwarding rate.
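The mixed IPv4/IPv6 workloads above could be generated along these lines; this is an illustrative sketch, not the IA's prescribed stream definition:

```python
import random

def traffic_mix(n_frames: int, ipv6_fraction: float, seed: int = 0):
    """Label each generated frame "v4" or "v6" according to the desired
    mix: 0.0 gives pure IPv4, 0.5 a 50/50 mix, 1.0 pure IPv6."""
    rng = random.Random(seed)  # fixed seed for a reproducible test stream
    return ["v6" if rng.random() < ipv6_fraction else "v4"
            for _ in range(n_frames)]

mix = traffic_mix(10_000, 0.5)
print(mix.count("v6") / len(mix))  # close to 0.5
```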
– MPLS Application Level Benchmark IA (January 2003)
The MPLS IA defines a methodology for obtaining network processor MPLS application level benchmarks and describes the tests used to obtain MPLS performance metrics in Ingress, Egress, and Transit configurations. It includes an Annex that describes a reference implementation of the benchmark, outlines the traffic streams required to run the benchmark tests, and provides the references and descriptions associated with the benchmark routing tables. An associated MPLS reporting template presents a sample report. This IA establishes consistent and objective measurement criteria that accurately assess the MPLS performance of network processing products.
– IPv4 Forwarding Benchmark IA (July 2002)
The IPv4 Forwarding Benchmark IA details the interfaces, configuration parameters, test setup, and execution details, including traffic mix and routing table contents, needed to measure the IPv4 forwarding performance of NPU-based systems. The IA takes the methodology defined for network equipment by the IETF (RFC 2544) and adapts it in a new framework that targets the NPU subsystem. By using a consistent methodology, it enables easy comparison between NP-based systems with widely varying architectures. Design engineers can now assess and compare, in an open, objective, and reproducible way, the IPv4 forwarding performance of NP-based devices. The features of this benchmark have been incorporated into the more general IP Forwarding Benchmark (June 2003).
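In the RFC 2544 spirit, throughput is the highest offered rate at which the device forwards every frame without loss, typically found by a binary search over the offered rate. The sketch below assumes a hypothetical `trial` callback that runs one fixed-duration trial and reports the loss ratio; it is not the IA's exact procedure:

```python
def throughput_bps(trial, line_rate_bps: float, resolution: float = 0.001) -> float:
    """Binary-search the highest fraction of line rate with zero loss.

    `trial(rate_fraction)` runs one fixed-duration trial at that fraction
    of line rate and returns the observed loss ratio (0.0 = no loss)."""
    lo, hi = 0.0, 1.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if trial(mid) == 0.0:
            lo = mid   # no loss: the DUT can go faster
        else:
            hi = mid   # loss observed: back off
    return lo * line_rate_bps

# A toy DUT that starts dropping frames above 60% of a 10 Gbit/s link.
fake_dut = lambda rate: 0.0 if rate <= 0.6 else 0.01
print(round(throughput_bps(fake_dut, 10e9) / 1e9, 2))  # 6.0
```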
NPF Related Agreements
From the Common Switch Interface Consortium – CSIX-L1 (August 2001)