Floodlight-Test
Floodlight-Test is a test execution framework released with floodlight that lets developers conduct integration tests for floodlight and any developer-added extensions. Floodlight-Test allows developers to:
- Instantiate one or more VM(s) with mininet
- Run floodlight on developer's host (e.g., in eclipse) or in a VM
- Run a provided set of basic integration tests
- Add new integration tests for any newly developed extensions
Floodlight-Test is released to assure the high quality of floodlight and any of its extensions. It encourages and helps developers to follow proper test methodologies in their design process, while also serving as a quality standard for community contributions to the floodlight repository.
By design, Floodlight is meant to be an open source controller on which OpenFlow applications and/or controller features can be built. Over time, Floodlight will grow, through the open source community's contributions, into a sound and stable controller platform. OpenBench will provide the test tools and process to assure the sound growth of floodlight.
System requirements
- VirtualBox v4.1.14 or later (earlier versions may work but have not been tested)
- Internet connectivity for initial installation
- Floodlight vmdk
Installation procedure
1. On host, obtain floodlight-vm.zip from http://www.projectfloodlight.org/download/; unzip it in your favorite working directory, say ~/work
2. On host, obtain VM create/setup scripts by:
- git clone https://github.com/floodlight/floodlight-test
- scripts are under floodlight-test/scripts
3. Either a) rename the unzipped VM folder to the default name given in onetime-create-vm.sh (i.e., floodlightcontroller-test), or b) edit the folder name in onetime-create-vm.sh to match the unzipped VM folder (i.e., floodlightcontroller-<release date>).
4. On host, run onetime-create-vm.sh. In the VirtualBox GUI, click the "Network" tab and "OK" for each VM (default three), then click to start them. Log in to each (username: floodlight, no password) and run 'ifconfig' to confirm and note down the eth0 IP address.
5. On host, edit onetime-setup-vm.sh and setup-bench.sh with the found VM IP addresses; then run onetime-setup-vm.sh, which will log into one VM (console-vm) and install Floodlight-Test from github. A quick reachability check you can run first is sketched below.
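Before running onetime-setup-vm.sh, it can save time to confirm that the addresses you noted are actually reachable. Below is a minimal sketch, not part of floodlight-test, that probes each VM's SSH port; the three addresses are placeholders for the ones you recorded in step 4.

#!/usr/bin/env python
# Probe the ssh port on each test VM. Illustrative only -- not part of
# floodlight-test. Replace the placeholder addresses with the eth0
# addresses you noted down in step 4.
import socket

VM_IPS = ["192.168.56.101", "192.168.56.102", "192.168.56.103"]

for ip in VM_IPS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((ip, 22))
        print "%s: ssh reachable" % ip
    except socket.error, e:
        print "%s: NOT reachable (%s) -- re-check ifconfig on that VM" % (ip, e)
    s.close()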
Running tests
Each time you want to start running tests, start all VMs from the VirtualBox GUI and do the following:
1. Update floodlight.jar (and floodlight.properties) if needed:
- If you have not changed your floodlight code (i.e., floodlight.jar is up-to-date on your test VMs), you can simply start the three VMs (one console, two testers)
- If you do need to update your floodlight.jar, a convenience script is provided. On host, confirm/update the path to your floodlight source root directory and the VM IP addresses in update-floodlight.sh. Run update-floodlight.sh, which builds (with ant), uploads, and runs the latest floodlight.jar on the VM(s)
2. On "console" VM, 'cd floodlight-test' and then 'source setup-python-env.sh'
3. On "console" VM, 'bm clean' which cleans up any old VM states from previous runs.
4. Edit build/Makefile.workspace to confirm/edit VM IP addresses under make target 'register-vms-floodlight'
5. On "console" VM, 'bm register-vms-floodlight'
6. On "console" VM, 'bm check-vms-floodlight'; see failed-check-vms-floodlight file for failed tests, if any
7. On "console" VM, 'bm check-tests-floodlight'; see failed-check-tests file for failed tests, if any
8. On "console" VM, 'bigtest/test-dir/test.py' to run individual failed tests directly to diagnose cause of failure
Useful tips:
1. Create a snapshot for the two "tester" VMs in VirtualBox immediately after the initial install. Click a VM, click the Snapshots tab on the upper right, and click the plus button to create a snapshot. It is recommended practice to check the "Restore current snapshot" check box and power off from time to time to restore the initial clean image and avoid false alarms caused by stale image state. For example, you would certainly want to revert the image after running check-tests-floodlight (or, manually do 'rm /opt/floodlight/floodlight/feature/quantum', then 'sudo service floodlight stop', then 'sudo service floodlight start') to return floodlight to default mode.
2. Use terminals to ssh into VMs to be able to see longer scroll history
3. Most failures of setup or test scripts are due to incorrect/incomplete network setup. Check the following for typical network problems:
- If your VMs are in bridged mode as the startup scripts configured: run ifconfig on each VM to assure it has received a valid address. If not, confirm that you are on a network with a DHCP server and that you have followed the instructions above to "click" your VirtualBox VM's GUI Network tab. If all else fails, you can still assign a static address to each VM with, e.g., 'ifconfig eth0 xx.xx.xx.xx netmask 255.255.255.0'
- At any time after your initial setup, you are free to change your VirtualBox VMs to use a host-only network. If you have not done this before, go to VirtualBox menu bar > VirtualBox > Preferences > Network > Add a host-only network if you do not already have one (vboxnet0). Then, click each VM's Network tab to switch to host-only Adapter/vboxnet0. The host-only network automatically runs a DHCP server, such that your host is, e.g., 192.168.56.1, and your VMs will get addresses like 192.168.56.101, 192.168.56.102, 192.168.56.103.
Base test suite
Each test suite is simply a batch file listing a number of tests. You can open each one to see what tests are being run. At the end, each suite produces a failed_(suite name) file showing any individual tests that failed. All tests are based on mininet. A test can have either one or two floodlight controllers running.
1. check-vms-floodlight: Currently consists of only one test, SmokeTest1, essentially a simple stress test for floodlight.
2. check-tests-floodlight: Currently contains ten tests grouped into five categories:
- floodlight: tests the switch-controller connection under switch restart and the handling of switches with the same (conflicting) DPID (keeping the last connection).
- forwarding: testing forwarding among OF and non-OF islands, moving hosts, and no path scenarios
- rest: Floodlight REST API test (see the sketch after this list)
- staticflowpusher: Static Flow Entry Pusher test
- openstack: quantum plugin + virtual network filter test. This test restarts floodlight with a different configuration property file (quantum.properties). After running this test, make sure you either revert the VM to the initial snapshot, or manually do 'rm /opt/floodlight/floodlight/feature/quantum', then 'sudo service floodlight stop', then 'sudo service floodlight start' to return floodlight to default mode.
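To get a feel for the style of check the rest category performs, here is a minimal sketch of a REST query against a running controller. The endpoint and the expected fields are assumptions based on Floodlight's REST API, not copied from the actual suite test:

#!/usr/bin/env python
# Minimal REST-style check against a running floodlight controller.
# The endpoint (/wm/core/controller/switches/json) and the presence of a
# "dpid" field in each entry are assumptions for illustration, not a
# copy of the actual rest suite test.
import json
import urllib

controllerIp = "192.168.56.101"  # placeholder: one of your tester VM addresses

x = urllib.urlopen("http://%s:8080/wm/core/controller/switches/json" % controllerIp).read()
switches = json.loads(x)

# with mininet connected to the controller, at least one switch should show up
assert len(switches) > 0
assert "dpid" in switches[0]
print "%d switch(es) connected" % len(switches)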
Requirements for merging floodlight extensions
Floodlight enforces strict software engineering practices for quality assurance. All modules within floodlight must be accompanied by unit tests and integration tests provided by the developer(s).
1. JUnit unit tests (meeting the code coverage threshold; runnable in eclipse or via bm check)
2. OpenBench integration tests
3. Floodlight committer tests and code review
Adding new integration tests
With the Floodlight test utility, adding an integration test in python is straightforward. See any test under the bigtest directory to learn how a test environment is set up and how you can quickly add your own test commands; a minimal skeleton is sketched below, followed by two real examples.
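As a starting point, here is a minimal skeleton distilled from the examples that follow; the tree,2 topology and the pingall assertion are placeholders for your own topology and test commands:

#!/usr/bin/env python
# Minimal bigtest skeleton distilled from the examples below.
# The tree,2 topology and the pingall assertion are placeholders for
# your own topology and test commands.
import bigtest.controller
import bigtest

# connect to the two tester VMs (requires 'bm register-vms-floodlight' first)
env = bigtest.controller.TwoNodeTest()
log = bigtest.log.info

# the first tester VM runs the floodlight controller
controllerNode = env.node1()
controllerIp = controllerNode.ipAddress()

# the second tester VM runs mininet, pointed at the controller
mininetNode = env.node2()
mininetCli = mininetNode.cli()
mininetCli.gotoMininetMode("--controller=remote --ip=%s --mac --topo=tree,2" % controllerIp)

# your test commands go here; assert on command output
x = mininetCli.runCmd("pingall")
bigtest.Assert("Results: 0%" in x)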
Consider the following two examples:
1. bigtest/firewall/FloodlightFirewallTest.py
#!/usr/bin/env python
## Creates a tree,4 topology to test different firewall rules
## with ping and iperf (TCP/UDP, different ports)
## @author KC Wang

# import a number of basic bigtest libraries
import bigtest.controller
import bigtest

# import a number of useful python utilities.
# This particular example does REST API based testing, hence urllib is useful
# for sending REST commands and json is used for parsing responses
import json
import urllib
import time
from util import *
import httplib

# bigtest function to connect to two active tester VMs
# make sure you already started the VMs and have done bm register-vms-floodlight
# (with the correct two nodes indicated in build/Makefile.workspace)
env = bigtest.controller.TwoNodeTest()
log = bigtest.log.info

# use the first tester VM's floodlight controller
# since it's a linux node, we use its bash mode as command line interface
controllerNode = env.node1()
controllerCli = controllerNode.cli()
controllerIp = controllerNode.ipAddress()
controllerCli.gotoBashMode()
controllerCli.runCmd("uptime")

# use the second tester VM to run mininet
mininetNode = env.node2()
mininetCli = mininetNode.cli()

# this starts mininet from the linux console and enters mininet's command line interface
mininetCli.gotoMininetMode("--controller=remote --ip=%s --mac --topo=tree,4" % controllerIp)

# this function uses the REST interface to keep querying floodlight until the
# specified switches are all connected to the controller correctly and see
# each other in the same connected cluster
switches = ["00:00:00:00:00:00:00:1%c" % x for x in
            ['1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f']]
controllerNode.waitForSwitchCluster(switches)
Now, you are ready to add some test commands to test a number of different cases to make sure floodlight works correctly.
....

# issuing a mininet command
# pingall should succeed since firewall is disabled
x = mininetCli.runCmd("pingall")
# return is stored in x and the bigtest.Assert method can check for a
# specific string in the response
bigtest.Assert("Results: 0%" in x)

# you can use python's sleep to time out previous flows in switches
time.sleep(5)

# Sending a REST API command
command = "http://%s:8080/wm/firewall/module/enable/json" % controllerIp
x = urllib.urlopen(command).read()
bigtest.Assert("running" in x)

...

# clean up all rules - testing delete rule
# first, retrieve all rule ids from GET rules
command = "http://%s:8080/wm/firewall/rules/json" % controllerIp
x = urllib.urlopen(command).read()
parsedResult = json.loads(x)
for i in range(len(parsedResult)):
    # example sending a REST DELETE command. POST can be used as well.
    params = "{\"ruleid\":\"%s\"}" % parsedResult[i]['ruleid']
    command = "/wm/firewall/rules/json"
    url = "%s:8080" % controllerIp
    connection = httplib.HTTPConnection(url)
    connection.request("DELETE", command, params)
    x = connection.getresponse().read()
    bigtest.Assert("Rule deleted" in x)

...

# iperf TCP works, UDP doesn't
mininetCli.runCmd("h3 iperf -s &")
x = mininetCli.runCmd("h7 iperf -c h3 -t 2")
# bigtest.Assert can also test for a "not" case
bigtest.Assert(not "connect failed" in x)
2. bigtest/forwarding/IslandTest1.py
This example shows yet another style of test. The similarities are easy to see in the way the two-node environment is set up. What's useful in this example is how you can define an arbitrary topology of switches, with hosts connected to each, where each switch can listen to a different controller of your choice. This is useful for simulating an OF island connected to a non-OF island, since an island controlled by controller B appears as a non-OF network to controller A.
import bigtest
from mininet.net import Mininet
from mininet.node import UserSwitch, RemoteController
from mininet.cli import CLI
from mininet.log import setLogLevel
import bigtest.controller
from bigtest.util.context import NetContext, EnvContext
from time import sleep  # needed for the sleep(20) below

def addHost(net, N):
    name = 'h%d' % N
    ip = '10.0.0.%d' % N
    return net.addHost(name, ip=ip)

def MultiControllerNet(c1ip, c2ip):
    "Create a network with multiple controllers."
    net = Mininet(controller=RemoteController, switch=UserSwitch)

    print "Creating controllers"
    c1 = net.addController(name='RemoteFloodlight1', controller=RemoteController, defaultIP=c1ip)
    c2 = net.addController(name='RemoteFloodlight2', controller=RemoteController, defaultIP=c2ip)

    print "*** Creating switches"
    s1 = net.addSwitch('s1')
    s2 = net.addSwitch('s2')
    s3 = net.addSwitch('s3')
    s4 = net.addSwitch('s4')

    print "*** Creating hosts"
    hosts1 = [addHost(net, n) for n in 3, 4]
    hosts2 = [addHost(net, n) for n in 5, 6]
    hosts3 = [addHost(net, n) for n in 7, 8]
    hosts4 = [addHost(net, n) for n in 9, 10]

    print "*** Creating links"
    for h in hosts1:
        s1.linkTo(h)
    for h in hosts2:
        s2.linkTo(h)
    for h in hosts3:
        s3.linkTo(h)
    for h in hosts4:
        s4.linkTo(h)
    s1.linkTo(s2)
    s2.linkTo(s3)
    s4.linkTo(s2)

    print "*** Building network"
    net.build()

    # In theory this doesn't do anything
    c1.start()
    c2.start()

    #print "*** Starting Switches"
    s1.start([c1])
    s2.start([c2])
    s3.start([c1])
    s4.start([c1])
    return net

with EnvContext(bigtest.controller.TwoNodeTest()) as env:
    log = bigtest.log.info
    controller1 = env.node1()
    cli1 = controller1.cli()
    controller2 = env.node2()
    cli2 = controller2.cli()
    print "ip1:%s ip2:%s" % (controller1.ipAddress(), controller2.ipAddress())

    with NetContext(MultiControllerNet(controller1.ipAddress(), controller2.ipAddress())) as net:
        sleep(20)
        # net.pingAll() returns the percentage dropped, so the
        # bigtest.Assert is to make sure 0% were dropped
        o = net.pingAll()
        bigtest.Assert(o == 0)