[Click] Poor forwarding performance with click in a paravirtualized Xen environment

Richard Neumann mail at richard-neumann.de
Mon Mar 25 11:54:38 EDT 2013


Hello again everybody,

I ran some further tests and also tried generating packets at lower
rates, using ping at up to 10 kpps and capturing them with tcpdump.
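
(Roughly, the low-rate test looked like the following; a sketch, the
exact flags are illustrative rather than copied from my shell history:)

    # aim for ~10 kpps: flood ping with a 100 us interval (requires root)
    ping -f -i 0.0001 10.0.0.2
    # receiving side: capture without name resolution, write to a file
    tcpdump -i eth2 -n -w lowrate.pcap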

Up to this threshold, click seems to work fine and there appears to be
no livelock.
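
(For context, the click side is just a plain two-port forwarder; a
minimal sketch of such a userlevel config, using the interface names
from this thread rather than my exact configuration:)

    # write a minimal forwarding config and run userlevel click on it
    cat > fwd.click <<'EOF'
    // pass everything between the two 10G ports, in both directions
    FromDevice(eth1, PROMISC true) -> Queue -> ToDevice(eth2);
    FromDevice(eth2, PROMISC true) -> Queue -> ToDevice(eth1);
    EOF
    click fwd.click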

As the operating system I am using openSUSE 12.1 with kernel
3.4.33-2.24-xen.

Our hardware consists of Supermicro X8DTG-D servers with Intel Xeon X5650
CPUs @ 2.67 GHz and 3 x 2 GB of DDR3-1333 RAM.

Best regards,

Richard


On Thursday, 21 March 2013 at 18:39 +0000, Adam Greenhalgh wrote:
> We published further work in which we achieved better performance; it is linked from the vrouter page at UCL.
> 
> Adam
> 
> On 21 Mar 2013, at 12:34, Giorgio Calarco <giorgio.calarco at unibo.it> wrote:
> 
> > And, sorry, I forgot to mention one thing: have you already read this paper by
> > N. Egi?
> > 
> > Evaluating Xen for Router Virtualization
> > http://nrg.cs.ucl.ac.uk/vrouter/publications/vrouter_pmect07.pdf
> > 
> > Perhaps you can find some inspiration.
> > 
> > Cheers, Giorgio
> > 
> > 
> > On Thu, Mar 21, 2013 at 12:31 PM, Richard Neumann
> > <mail at richard-neumann.de> wrote:
> > 
> >> Hello again,
> >> 
> >> @Giorgio Calarco:
> >>        Hi Richard, what happens if you decrease the input rate to lower
> >>        values? For instance, to 40000 pps? (Maybe you have to change your
> >>        packet generator to perform this experiment.) And where is
> >>        your brctl-based bridge created? Within dom0?
> >>        Let me know,
> >> 
> >> I'll try to run a test with a slower packet generator and will let you
> >> know how it performs. The problem, however, is that we actually do want
> >> to do high-speed packet processing here.
> >> The bridge is created inside the domU environment, connecting the two 10G
> >> interfaces that have been passed through from the dom0.
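> >> 
> >> (For reference, the bridge setup is essentially the standard one; a
> >> sketch assuming bridge-utils and the interface names from above:)
> >> 
> >>     brctl addbr br0        # create the bridge inside the domU
> >>     brctl addif br0 eth1   # attach the two passed-through 10G ports
> >>     brctl addif br0 eth2
> >>     ip link set br0 up     # bring the bridge up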
> >> 
> >> @Luigi Rizzo:
> >>> At first sight it seems to be a case of receive livelock, which does not
> >>> hit the linux bridge as badly, since it is (probably) helped by NAPI.
> >>> 
> >>> It is not clear whether your userspace click is also using netmap (which
> >>> may have its own bugs) and/or whether the two interfaces eth1 and
> >>> eth2 are using pci-passthrough or are emulated/paravirtualized.
> >> 
> >> In the respective setup, click is not using netmap and was even
> >> compiled without the --with-netmap option.
> >> The 10G cards are using pci-passthrough, because I expected better
> >> performance from this than from emulation or paravirtualization (I have
> >> not actually compared them yet).
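> >> 
> >> (The passthrough itself is set up in the usual Xen way; a sketch with
> >> placeholder PCI addresses, assuming a recent xl toolstack; with older
> >> tools one binds the devices to pciback manually:)
> >> 
> >>     # in dom0: make the NICs assignable via pciback
> >>     modprobe xen-pciback
> >>     xl pci-assignable-add 0000:04:00.0
> >>     xl pci-assignable-add 0000:04:00.1
> >>     # and in the domU config file:
> >>     # pci = [ '0000:04:00.0', '0000:04:00.1' ]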
> >> 
> >>> In any case, the 530 Kpps you get with the linux bridge is probably close
> >>> to the peak performance you can get, unless (a) Xen gives you access
> >>> to the real hw (via virtual functions and/or pci-passthrough), and
> >> 
> >> So, since I actually am using PCI passthrough, the performance should
> >> be better even when using a linux bridge.
> >> 
> >>> (b) you are using netmap or some very fast OS stack bypass to talk
> >>> to the network interfaces within the virtual machine.
> >>> 
> >>> I managed to go quite fast within qemu+kvm; see below:
> >>> 
> >>>    http://info.iet.unipi.it/~luigi/netmap/talk-google-2013.html
> >>>    http://info.iet.unipi.it/~luigi/papers/20130206-qemu.pdf
> >>> 
> >>> but that needed a little bit of tweaking here and there
> >>> (not that I doubt that the Xen folks are smart enough to do
> >>> something similar).
> >> 
> >> Thank you both for your help so far.
> >> 
> >> Best regards,
> >> 
> >> Richard
> >> 
> >> 



