Chaitanya Kumar
2017-07-13 05:56:32 UTC
Hi
We are working on a research project that involves an HP OpenFlow-enabled
switch (HP 3500 yl). We are facing some performance issues, particularly
when operating the switch in "OpenFlow" mode. The switch is controlled via
a desktop running the Ryu controller.
The rules on the switch match packets based on the fields supported by
OpenFlow. In addition, the switch modifies a certain IP header field (in
this case the ToS bits) of packets that match the rules and are hence
forwarded. More precisely, the rules match on the ToS bits of a packet and
change them to a different value before forwarding the packet to a chosen
host.
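For reference, here is a minimal sketch of the kind of flow rule we install
from Ryu (assuming OpenFlow 1.3; the datapath handle, output port, and the
exact DSCP values are illustrative placeholders, not our precise setup):

    # Minimal sketch: match one DSCP value, rewrite it, and forward.
    # ip_dscp is the upper 6 bits of the ToS byte, so ToS 0x28 -> DSCP 0x0A
    # and ToS 0x40 -> DSCP 0x10.
    def install_tos_rewrite(datapath, out_port, match_dscp=0x0A, new_dscp=0x10):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        match = parser.OFPMatch(eth_type=0x0800, ip_dscp=match_dscp)
        actions = [parser.OFPActionSetField(ip_dscp=new_dscp),
                   parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=10,
                                match=match, instructions=inst)
        datapath.send_msg(mod)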
However, in the process, the forwarded packets achieve a throughput of no
more than 700 kbit/s, while the source and destination hosts have 100 Mbit/s
Ethernet ports. If we disable "OpenFlow" mode and use the device as a
regular switch, we achieve the full throughput of 100 Mbit/s (the Ethernet
link speed of the client and server hosts). The end-to-end throughput was
measured using *iperf*.
Could someone shed some light on the reason for this drastic performance
degradation? (All the switch does is match packets whose ToS value is, say,
0x28, replace it with 0x40, and forward them to the right destination.)
Also, is there an alternative switch that someone has used successfully for
similar things?
A figure showing our experiment scenario is given below, for reference.
Thanks,
Chaitanya