Network Testing

Publications

[Grif00] Nancy Griffeth, Ruibing Hao, David Lee, and Rakesh Sinha. Integrated system interoperability testing with applications to VOIP. In Proceedings of FORTE/PSTV 2000, Pisa, Italy, October 2000; also in IEEE/ACM Trans. Netw., 12(5):823–836, 2004.

[Djou05a] Constantinos Djouvas and Nancy Griffeth. Experimental method for testing networks. In Proceedings of SERP’05 - The 2005 International Conference on Software Engineering Research and Practice, June 2005.

[Djou05b] Constantinos Djouvas, Nancy Griffeth, and Nancy Lynch. Using self-similarity to increase network testing effectiveness. September 2005.

[Grif06a] Nancy Griffeth, Yuri Cantor, and Constantinos Djouvas. Testing a network by inferring its state machine from network traces. In Proceedings of the International Conference on Software Engineering Advances (ICSEA 2006).

[Grif06b] Constantinos Djouvas, Nancy D. Griffeth, and Nancy A. Lynch. Testing self-similar networks. Electronic Notes in Theoretical Computer Science, 2006.



Software: The AGATE Tool Suite

Version 0.3 of the tool suite incorporates all the functionality in a single easy-to-use GUI:

Download Agate 0.3

Objectives

The goal of this project is to develop the theoretical basis for network testing and to define a methodology and build tools that support cost-effective testing.  This work addresses the problem of verifying that a network built from individually tested components works correctly, with the components acting in concert to provide the required services.  Initial research in this area was undertaken at the Next Generation Networks Interoperability Test Laboratory at Lucent Technologies.  This work and its theoretical basis are described in [Grif00].

The network testing problem is important because networks are hard to build correctly, and even networks that appear to work most of the time may have subtle bugs that require intermittent intervention, such as restarting network elements.  Sometimes, the bugs prevent all communication.  Sometimes, the bugs interfere with only one application.  Sometimes, the bugs prevent the network from carrying the required load.  Sometimes, the bugs expose the network to security violations.  The goal of testing is to find faults in the network so that they can be corrected, but even knowing the limitations of a given kind of network, without correcting its faults, can save endless pain: knowing what loads it can carry, how frequently devices must be rebooted, how large it can scale, and what security vulnerabilities it has.

The network testing problem is especially hard because networks are dynamic.  The component network elements change.  The configuration of a given network element may also change.  The connectivity of the network may change because components enter and leave; it may also change because of failures.  In this context, network testing must address how to determine the correctness of a collection of tested network components, combined in any of a range of configurations.   In this project, we assume that the individual components of the network have already been tested, and the question to be determined is whether the network as designed and configured will support the desired services.

Approach

Effective testing has a number of prerequisites:

  • Testers must know what the network is required to do.

  • Testers must develop a plan for determining whether the network does what it is required to do.

  • Testers must be able to mimic end-user activities, and sometimes must be able to mimic some of the network elements (if they are not available during testing).

  • Testers must be able to determine what the system has done in response to their activities.

  • Testers must be able to measure the effectiveness of the testing done so far (and in toto).

In addition, testers need models, tools, and processes to help them do the testing efficiently.

What the network is required to do

Modeling addresses the need for a tester to know what the network is required to do.  The first step in testing is for the tester to understand what services the network should support and to plan the testing.   Formal models of the network may be available, but usually are not.  However, it is usually clear that the network must have certain high-level properties. For example, DHCP must allocate IP addresses when possible and not assign any IP address to more than one host. TCP must deliver messages reliably, in-order, and without duplication.
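High-level properties like these can be checked mechanically against observed behavior even when no formal model of the network exists. The sketch below, a hypothetical illustration rather than any tool from this project, checks the DHCP property above (no IP address leased to two hosts at once) against a trace of lease events; the event-tuple format is an assumption chosen for the example.

```python
# Sketch: check the high-level DHCP property "no IP address is assigned
# to more than one host at a time" against an event trace.
# The (event, ip, host) trace format is a hypothetical illustration.

def check_no_duplicate_leases(trace):
    """trace: iterable of (event, ip, host) tuples, where event is
    'lease' or 'release'.  Returns the first violating tuple, or None."""
    holder = {}  # ip -> host currently holding the lease
    for event, ip, host in trace:
        if event == "lease":
            if ip in holder and holder[ip] != host:
                return (event, ip, host)  # ip already leased elsewhere
            holder[ip] = host
        elif event == "release":
            if holder.get(ip) == host:
                del holder[ip]
    return None

ok = [
    ("lease",   "10.0.0.5", "hostA"),
    ("release", "10.0.0.5", "hostA"),
    ("lease",   "10.0.0.5", "hostB"),  # legal: reuse after release
]
assert check_no_duplicate_leases(ok) is None

bad = [
    ("lease", "10.0.0.5", "hostA"),
    ("lease", "10.0.0.5", "hostB"),  # violation: double allocation
]
assert check_no_duplicate_leases(bad) == ("lease", "10.0.0.5", "hostB")
```

A property checker of this shape needs only the trace and the invariant, not a full model of the protocol, which is what makes such high-level properties useful when formal models are unavailable.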

In [Grif00], we used a state machine model to model a network supporting Voice over IP.  The model captured the end-to-end behavior of the VoIP service.  This model enabled us to formalize the definition of test coverage and to generate a minimum adequate set of tests according to this definition of test coverage.  Subsequently, we attempted to model the behavior of an H.248 VoIP network in more detail, using SDL.  We concluded that detailed modeling does not scale well to large protocols with complex data structures. 
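To make the idea of generating a minimum adequate test set from a state machine concrete, here is a small sketch of transition coverage: drive the machine to each transition's source state along a shortest path, then fire the transition. The tiny call-state machine is illustrative only, not the VoIP model of [Grif00].

```python
# Sketch: generate input sequences that together cover every transition
# of a state-machine model at least once (transition coverage).
from collections import deque

def cover_transitions(fsm, start):
    """fsm: dict mapping state -> {input: next_state}.
    Returns a list of input sequences covering every transition."""
    # Shortest input sequence reaching each state (BFS from start).
    reach = {start: []}
    q = deque([start])
    while q:
        s = q.popleft()
        for inp, t in fsm[s].items():
            if t not in reach:
                reach[t] = reach[s] + [inp]
                q.append(t)
    # One test per transition: drive to its source state, then fire it.
    return [reach[s] + [inp] for s in fsm for inp in fsm[s]]

# Illustrative call model, not the H.248/VoIP model from the paper.
call = {
    "idle":    {"offhook": "dialing"},
    "dialing": {"dial": "ringing", "onhook": "idle"},
    "ringing": {"answer": "talking", "onhook": "idle"},
    "talking": {"onhook": "idle"},
}
tests = cover_transitions(call, "idle")
assert len(tests) == 6  # six transitions -> six tests
```

Stronger coverage criteria (covering transition pairs, or distinguishing states after each transition) follow the same pattern but generate larger test sets, which is one reason detailed models scale poorly.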

We are experimenting with building models from the network itself, using tools that automatically generate the models from network traces. Then we test the models to see if they have the required properties [Djou05a].
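The raw extraction step of such trace-based model building can be sketched as follows: record every observed successor of every observed state, yielding a transition relation that can then be checked against the required properties. Real inference (as in [Grif06a]) also merges equivalent states; this minimal version, with its illustrative TCP-like state labels, only shows the extraction.

```python
# Sketch: build a state-machine model from observed network traces,
# then check a required property on the inferred model.
# State labels and the property are illustrative assumptions.

def infer_machine(traces):
    """traces: list of state-label sequences observed on the network.
    Returns dict state -> set of observed successor states."""
    machine = {}
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            machine.setdefault(cur, set()).add(nxt)
            machine.setdefault(nxt, set())
    return machine

traces = [
    ["closed", "syn_sent", "established", "closed"],
    ["closed", "syn_sent", "closed"],
]
m = infer_machine(traces)
assert m["syn_sent"] == {"established", "closed"}

# Required property (illustrative): 'established' is entered
# only from 'syn_sent'.
preds = {s for s in m if "established" in m[s]}
assert preds == {"syn_sent"}
```

A model inferred this way is only as complete as the traces it was built from, so property checks on it are evidence rather than proof; more traces refine the model.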

Potential alternatives to using SDL are I/O Automata [LYNC96] and process algebras.  These would provide a rigorous formal basis for defining test coverage and for proving properties of a collection of test cases.  They also include some capabilities for modularizing the specification that may contribute to better scaling.  The disadvantages of rigorous formal models are that they are rarely available to testers before a test effort, testers are not usually qualified to create them, and they are time-consuming and expensive to create.

Planning and executing tests

One difficult problem for test planning and execution is that only a small number of networks, at best, can actually be tested, even when the goal is to test a class of networks. For example, when vendors test their network equipment, they are trying to verify that the equipment works in an entire range of network topologies and configurations.

Networks vary in other contexts as well. An ISP network changes continuously. Even small organizations add new hosts regularly. They also add or swap in new network equipment as new technologies or higher bandwidths become available, as for example adding new wireless access points. The remaining equipment must continue working as expected.

This problem motivates the question of how to choose a network for testing, when the real goal is to verify that an entire class of networks works. The central goal of this work is to find a single representative of a class of networks, whose correctness implies the correctness of the class. We propose using a subnetwork that is common to all of the networks in the class and whose behavior looks like the behavior of any of the networks. When a subnetwork has this property, we call the networks "self-similar" because each is similar to a substructure of itself [Djou05b].
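One way to make "behavior looks like the behavior of any of the networks" precise is a simulation check between labeled transition systems: the subnetwork's model must be able to match every labeled step the full network can take. The sketch below, with two toy systems, illustrates that check; [Djou05b] develops the actual formal definition, which this simplification should not be taken to reproduce.

```python
# Sketch: check that a candidate subnetwork model can mimic every
# behavior of the full network model (a simulation check over
# labeled transition systems).  The toy systems are illustrative.

def simulates(small, big, s0, b0):
    """True if state s0 of `small` simulates state b0 of `big`:
    every labeled step of `big` can be matched by `small`.
    Each system is a dict: state -> {label: next_state}."""
    seen = set()
    stack = [(s0, b0)]
    while stack:
        s, b = stack.pop()
        if (s, b) in seen:
            continue
        seen.add((s, b))
        for label, b2 in big[b].items():
            if label not in small[s]:
                return False  # big takes a step small cannot match
            stack.append((small[s][label], b2))
    return True

# A one-node model matching a two-node ring, step for step.
small = {"up": {"ping": "up"}}
big = {"u0": {"ping": "u1"}, "u1": {"ping": "u0"}}
assert simulates(small, big, "up", "u0")

# A model missing a behavior fails the check.
assert not simulates({"s": {}}, {"b": {"ping": "b"}}, "s", "b")
```

If the common subnetwork passes such a check against every network in the class, testing the subnetwork stands in for testing the class, which is the economy self-similarity is meant to buy.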

[LYNC96]  N. Lynch, Distributed Algorithms, Prentice-Hall, 1996.