
Floodlight-Test

Floodlight-Test is a test execution framework released with Floodlight for developers to conduct integration tests of Floodlight and any developer-added extensions. Floodlight-Test allows developers to:

  • Instantiate one or more VM(s) with mininet
  • Run Floodlight on the developer's host (e.g., in Eclipse) or in a VM
  • Run a provided set of basic integration tests
  • Add new integration tests for any newly developed extensions

Floodlight-Test is released to assure the high quality of Floodlight and its extensions. Floodlight-Test encourages and helps developers to follow proper test methodologies in their design process, while it also serves as a quality standard for community contributions to the Floodlight repository.

By design, Floodlight is meant to be an open source controller on which OpenFlow applications and/or controller features can be built. Over time, Floodlight will grow with the open source community's contributions into a sound and stable controller platform. Floodlight-Test provides the test tools and process to assure the sound growth of Floodlight.

System requirements

  1. VirtualBox v4.1.14 or later (earlier versions may work but have not been tested)
  2. Internet connectivity for initial installation
  3. Floodlight vmdk

Installation procedure

1. On host, obtain floodlight-vm.zip from http://www.projectfloodlight.org/download/; unzip it in your favorite working directory, say ~/work

2. On host, obtain the VM create/setup scripts:

3. Either a) rename the unzipped VM folder to the default name given in onetime-create-vm.sh (i.e., floodlightcontroller-test), or b) edit the folder name in onetime-create-vm.sh to match the VM folder (i.e., floodlightcontroller-<release date>).

...

5. On host, edit onetime-setup-vm.sh and setup-bench.sh with the found VM IP addresses; run onetime-setup-vm.sh, which will log into one VM (the "console" VM) and install Floodlight-Test from github.

Running tests

Each time you want to start running tests, start all VM(s) from the VirtualBox GUI and do the following:

...

    • If you have not changed your floodlight code (i.e., floodlight.jar is up-to-date on your test VMs), you can simply start the three VMs (one console, two testers)
    • If you do need to update your floodlight.jar, a convenience script is provided. On host, confirm/update the path to your floodlight source root directory in update-floodlight.sh, and confirm/update the VM IP addresses in the same file. Then run update-floodlight.sh, which builds (with ant), uploads, and runs the latest floodlight.jar on the VM(s)

2. On "console" VM, 'cd floodlighttestfloodlight-test' and then 'source setup-python-env.sh'

...

8. On "console" VM, 'bigtest/test-dir/test.py' to run individual failed tests directly to diagnose cause of failure

Useful tips:

1. Create a snapshot for the two "tester" VMs in VirtualBox immediately after the initial install. Click a VM, click the Snapshots tab on the upper right, and click the plus button to create a snapshot. It is recommended practice to check the "Restore current snapshot" check box when powering off from time to time, to restore the initial clean image and avoid false alarms caused by stale image state. For example, you would certainly want to revert the image after running check-tests-floodlight (or, manually do 'rm /opt/floodlight/floodlight/feature/quantum', then 'sudo service floodlight stop', then 'sudo service floodlight start') to return floodlight to default mode.

...

  • If your VMs are in bridged mode as the startup scripts configured: run ifconfig on each VM to verify it has received a valid address. If not, confirm whether you are on a network with a DHCP server, and whether you have followed the instructions above to "click" your VirtualBox VM's GUI Network tab. If none of these work, you can still assign a static address to each VM with, e.g., 'ifconfig eth0 xx.xx.xx.xx netmask 255.255.255.0'
  • At any time after your initial setup, you are free to change your VirtualBox VMs to use a host-only network. If you have not done this before, go to the VirtualBox menu bar > VirtualBox > Preferences > Network, and add a host-only network if you do not already have one (vboxnet0). Then, click each VM's Network tab to switch to Host-only Adapter/vboxnet0. The host-only network automatically runs a DHCP server, such that your host is, e.g., 192.168.56.1, and your VMs will have 192.168.56.101, .102, and .103.

Base test suite

Each test suite is simply a batch file listing a number of tests. You can open each one to see which tests are run. At the end, each suite produces a failed_(suite name) file showing any individual tests that failed. All tests are based on mininet. A test can have either one or two floodlight controllers running.

...

  • floodlight: testing the switch-controller connection under switch restart, and handling of switches with the same (conflicting) DPID (keeping the last connection).
  • forwarding: testing forwarding among OF and non-OF islands, moving hosts, and no-path scenarios
  • rest: Floodlight REST API test (see the sketch after this list)
  • staticflowpusher: Static Flow Entry Pusher test
  • openstack: quantum plugin + virtual network filter test.  This test restarts floodlight with a different configuration property file (quantum.properties).  After running this test, make sure you either revert the VM to the initial snapshot, or manually do 'rm /opt/floodlight/floodlight/feature/quantum', then 'sudo service floodlight stop', then 'sudo service floodlight start' to return floodlight to default mode.
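
For a flavor of what these suites check, here is a minimal sketch of a REST-style check, written against the same helpers used in the examples later on this page. The /wm/core/controller/switches/json endpoint is a standard Floodlight REST resource; the expectation that no switches are connected before mininet starts is an assumption made for illustration.

Code Block

import json
import urllib

import bigtest
import bigtest.controller
from bigtest.util.context import EnvContext

# bring up the standard test environment; only the first controller is used
with EnvContext(bigtest.controller.TwoNodeTest()) as env:
    controllerIp = env.node1().ipAddress()

    # ask the controller which switches are currently connected
    command = "http://%s:8080/wm/core/controller/switches/json" % controllerIp
    x = urllib.urlopen(command).read()
    switches = json.loads(x)

    # no mininet network has been started, so no switches should be connected
    bigtest.Assert(len(switches) == 0)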

Requirements for merging floodlight extensions

Floodlight enforces strict software engineering practices for quality assurance. All modules within floodlight must be accompanied by unit tests and integration tests provided by the developer(s).

1. JUnit unit tests (code coverage threshold, eclipse, bm check)
2. Floodlight-Test integration tests
3. Floodlight committer tests and code review

Adding new integration tests

With the Floodlight test utility, adding an integration test in python is straightforward. Look at any test under the bigtest directory to see how a test environment is set up and how you can quickly add your own test commands.
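
To make the structure concrete before the full examples below, here is a minimal skeleton assembled from the APIs those examples use (EnvContext, NetContext, TwoNodeTest, bigtest.Assert, and the mininet classes). Treat it as a sketch: TwoNodeTest is the two-controller environment helper shown later on this page, and only its first controller is used here.

Code Block

import bigtest
import bigtest.controller
from time import sleep
from mininet.net import Mininet
from mininet.node import UserSwitch, RemoteController
from bigtest.util.context import NetContext, EnvContext

with EnvContext(bigtest.controller.TwoNodeTest()) as env:
    controller1 = env.node1()

    # a one-switch, two-host topology attached to controller 1
    net = Mininet(controller=RemoteController, switch=UserSwitch)
    c1 = net.addController(name='c1', controller=RemoteController,
                           defaultIP=controller1.ipAddress())
    s1 = net.addSwitch('s1')
    h1 = net.addHost('h1', ip='10.0.0.1')
    h2 = net.addHost('h2', ip='10.0.0.2')
    s1.linkTo(h1)
    s1.linkTo(h2)
    net.build()
    s1.start([c1])

    with NetContext(net) as net:
        sleep(10)  # give the switch time to connect to the controller
        # pingAll() returns the percentage of dropped packets
        bigtest.Assert(net.pingAll() == 0)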

...

Code Block
...

# issuing a mininet command
# pingall should succeed since firewall disabled
x = mininetCli.runCmd("pingall")

# return is stored in x and the bigtest.Assert method can check for a specific string in the response
bigtest.Assert("Results: 0%" in x)

# use python's sleep to let previously installed flows in the switches time out
time.sleep(5)

# Sending a REST API command
command = "http://%s:8080/wm/firewall/module/enable/json" % controllerIp
x = urllib.urlopen(command).read()
bigtest.Assert("running" in x)

...

# clean up all rules - testing delete rule                                                                                                            
# first, retrieve all rule ids from GET rules                                                                                                       
command = "http://%s:8080/wm/firewall/rules/json" % controllerIp
x = urllib.urlopen(command).read()
parsedResult = json.loads(x)

for i in range(len(parsedResult)):
    # example sending a REST DELETE command.  Post can be used as well.
    params = "{\"ruleid\":\"%s\"}" % parsedResult[i]['ruleid']
    command = "/wm/firewall/rules/json"
    url = "%s:8080" % controllerIp
    connection =  httplib.HTTPConnection(url)
    connection.request("DELETE", command, params)
    x = connection.getresponse().read()
    bigtest.Assert("Rule deleted" in x)

...

# iperf TCP works, UDP doesn't
mininetCli.runCmd("h3 iperf -s &")
x = mininetCli.runCmd("h7 iperf -c h3 -t 2")
# bigtest.Assert can also test for a "not" case
bigtest.Assert(not "connect failed" in x)


2. bigtest/forwarding/IslandTest1.py

This example shows yet a different style of test. Similarities can easily be seen in the way the two-node environment is set up.  What is useful to see in this example is how you define an arbitrary topology of switches, with hosts connected to each, where each switch can listen to a different controller of your choice.  This is useful for simulating an OF island connected to a non-OF island, since an island controlled by controller B appears as a non-OF network to controller A.

Code Block

import bigtest
import bigtest.controller
from time import sleep          # used by the test body below
from mininet.net import Mininet
from mininet.node import UserSwitch, RemoteController
from mininet.cli import CLI
from mininet.log import setLogLevel
from bigtest.util.context import NetContext, EnvContext

def addHost(net, N):
    name = 'h%d' % N
    ip = '10.0.0.%d' % N
    return net.addHost(name, ip=ip)

def MultiControllerNet(c1ip, c2ip):
    "Create a network with multiple controllers."

    net = Mininet(controller=RemoteController, switch=UserSwitch)

    print "Creating controllers"
    c1 = net.addController(name = 'RemoteFloodlight1', controller = RemoteController, defaultIP=c1ip)
    c2 = net.addController(name = 'RemoteFloodlight2', controller = RemoteController, defaultIP=c2ip)

    print "*** Creating switches"
    s1 = net.addSwitch( 's1' )
    s2 = net.addSwitch( 's2' )
    s3 = net.addSwitch( 's3' )
    s4 = net.addSwitch( 's4' )

    print "*** Creating hosts"
    hosts1 = [ addHost( net, n ) for n in [ 3, 4 ] ]
    hosts2 = [ addHost( net, n ) for n in [ 5, 6 ] ]
    hosts3 = [ addHost( net, n ) for n in [ 7, 8 ] ]
    hosts4 = [ addHost( net, n ) for n in [ 9, 10 ] ]

    print "*** Creating links"
    for h in hosts1:
        s1.linkTo( h )
    for h in hosts2:
        s2.linkTo( h )
    for h in hosts3:
        s3.linkTo( h )
    for h in hosts4:
        s4.linkTo( h )

    s1.linkTo( s2 )
    s2.linkTo( s3 )
    s4.linkTo( s2 )

    print "*** Building network"
    net.build()

    # RemoteController.start() is effectively a no-op here; the controllers run externally
    c1.start()
    c2.start()

    #print "*** Starting Switches"
    s1.start( [c1] )
    s2.start( [c2] )
    s3.start( [c1] )
    s4.start( [c1] )

    return net


with EnvContext(bigtest.controller.TwoNodeTest()) as env:
  log = bigtest.log.info

  controller1 = env.node1()
  cli1 = controller1.cli()

  controller2 = env.node2()
  cli2 = controller2.cli()

  print "ip1:%s ip2:%s" % (controller1.ipAddress(), controller2.ipAddress())

  with NetContext(MultiControllerNet(controller1.ipAddress(), controller2.ipAddress())) as net:
    sleep(20)
    # net.pingAll() returns the percentage of dropped packets, so assert 0% dropped
    o = net.pingAll()
    bigtest.Assert(o == 0)

Adding new unit tests

...