[Click] Packet loss even at low sending rate

Beyers Cronje bcronje at gmail.com
Mon Dec 5 03:01:27 EST 2011


For interest's sake, can you post the config you are using?

On Mon, Dec 5, 2011 at 4:37 AM, Bingyang LIU <bjornliu at gmail.com> wrote:

> Hi all,
>
> I need some help with PollDevice. I found that PollDevice causes some
> packet loss (less than 1%) even at a low input rate (50 kpps).
>
> To be precise, the switch's port statistics showed that the number of
> packets sent out of the switch port to the machine's interface was
> 20000000, while the "count" handler of the PollDevice element reported
> 19960660.
>
> I enlarged the driver's RX ring with "ethtool -G eth0 rx 2096" (the
> default is 256), but nothing improved.
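>
> The relevant part of the receive path is essentially this sketch (the
> interface and element names are examples, not my full config):
>
>   pd :: PollDevice(eth0) -> c :: Counter -> Discard;
>
> and from user level I compare the switch's TX counter against
>
>   cat /click/pd/count
>   cat /click/c/count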
>
> Could anyone help me with this?
>
> Thanks very much.
> Bingyang
>
> On Sun, Dec 4, 2011 at 3:38 PM, Bingyang LIU <bjornliu at gmail.com> wrote:
>
> > Hi~
> >
> > I used CPUQueue and found that it no longer drops packets. So the only
> > remaining problem is that PollDevice drops packets; actually, I think
> > it cannot poll all the ready packets from the devices.
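> >
> > The only change was swapping the queue element, roughly like this
> > (a sketch with placeholder names and capacity):
> >
> >   pd :: PollDevice(eth0);
> >   td :: ToDevice(eth1);
> >   // before: pd -> Queue(1000) -> td;
> >   pd -> CPUQueue(1000) -> td;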
> >
> > Does anyone have a similar problem with PollDevice, and is there any
> > solution or best practice?
> >
> > best
> > Bingyang
> >
> >
> > On Sun, Dec 4, 2011 at 2:10 PM, Bingyang LIU <bjornliu at gmail.com> wrote:
> >
> >> Hi Cliff,
> >>
> >> I couldn't use multi-threading with FromDevice: when I used
> >> multi-threading and FromDevice together, the system crashed. So I had
> >> to use a single thread with FromDevice, and 4 threads with PollDevice.
> >>
> >> The router I tested has four gigabit interfaces, each connected to a
> >> host. All hosts sent packets to each other at the given rate.
> >> At a sending rate of 50 kpps per host (200 kpps in total), FromDevice
> >> gave an output ratio of 99.74%, while PollDevice gave 99.94%.
> >> At 200 kpps per host (800 kpps in total), FromDevice gave an output
> >> ratio of only 62.63%, while PollDevice gave 99.29%.
> >>
> >> That's why I think PollDevice works much better than FromDevice.
> >> Actually, both of them cause some packet loss even at a low input rate.
> >>
> >> And I think Click 1.8 is also mainline source code. But you are right,
> >> I should try 2.0. However, I'm not sure whether the same thing will
> >> happen with 2.0.
> >>
> >> best
> >> Bingyang
> >>
> >>
> >>
> >>
> >> On Sun, Dec 4, 2011 at 2:29 AM, Cliff Frey <cliff at meraki.com> wrote:
> >>
> >>> What performance numbers did you see when using FromDevice instead of
> >>> PollDevice?
> >>>
> >>> Have you tried mainline click?
> >>>
> >>>
> >>> On Sat, Dec 3, 2011 at 10:57 PM, Bingyang Liu <bjornliu at gmail.com> wrote:
> >>>
> >>>> Thanks Cliff. Yes, I have tried FromDevice, and it gave worse
> >>>> performance.
> >>>>
> >>>> I think Queue should be a very mature element, and there should not
> >>>> be a bug in it. But the experiment results tell me that something
> >>>> went wrong. Should I use a thread-safe queue instead of Queue when I
> >>>> use multiple threads?
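> >>>>
> >>>> That is, would the right fix be something like this (just a sketch
> >>>> of what I mean, with a placeholder capacity)?
> >>>>
> >>>>   // replace q :: Queue(1000) with the thread-safe variant:
> >>>>   q :: ThreadSafeQueue(1000);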
> >>>>
> >>>> Thanks
> >>>> Bingyang
> >>>>
> >>>> Sent from my iPhone
> >>>>
> >>>> On Dec 4, 2011, at 12:31 AM, Cliff Frey <cliff at meraki.com> wrote:
> >>>>
> >>>> You could try FromDevice instead of PollDevice. I'd expect that it
> >>>> would work fine. If it is not high-performance enough, it would be
> >>>> great if you could share your performance numbers, just to have
> >>>> another data point.
> >>>>
> >>>> I doubt that Queue has a bug; you could try the latest Click sources,
> >>>> though, just in case. As for finding/fixing any PollDevice issues, I
> >>>> don't have anything to help you with there...
> >>>>
> >>>> Cliff
> >>>>
> >>>> On Sat, Dec 3, 2011 at 8:49 PM, Bingyang LIU <bjornliu at gmail.com> wrote:
> >>>>
> >>>>> Hi Cliff,
> >>>>>
> >>>>> Thank you very much for your help. I followed your suggestion and got
> >>>>> some results.
> >>>>>
> >>>>> 1. It turned out that PollDevice failed to get all the packets from
> >>>>> the NIC, even though the packet sending rate was only 200 kpps with
> >>>>> a packet size of 64 B.
> >>>>> 2. I ran "grep . /click/.e/*/drops"; all of the handlers reported 0
> >>>>> drops.
> >>>>> 3. I put a counter between every two connected elements to determine
> >>>>> which element dropped packets. Finally I found a queue that dropped
> >>>>> packets, because the downstream counter reported a smaller "count"
> >>>>> than the upstream one. However, strangely, this queue still reported
> >>>>> 0 drops. I think there might be a bug in the element, or I misused
> >>>>> it.
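> >>>>>
> >>>>> (Schematically, the instrumentation around each queue looked like
> >>>>> this, with placeholder names:
> >>>>>
> >>>>>   ... -> up :: Counter -> q :: Queue(1000) -> down :: Counter -> ...
> >>>>>
> >>>>> and from user level I compared
> >>>>>
> >>>>>   cat /click/up/count
> >>>>>   cat /click/q/drops
> >>>>>   cat /click/down/count
> >>>>>
> >>>>> where up's count exceeded down's count while q's drops stayed 0.)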
> >>>>>
> >>>>> So I have two questions. First, how can I make PollDevice work
> >>>>> better, so that it won't drop packets at a low rate? (Should I use
> >>>>> the stride scheduler?) Second, is there a bug in Queue in Click
> >>>>> 1.8.0 that makes it drop packets without reporting the drops?
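> >>>>>
> >>>>> (For the first question, what I have in mind is something like
> >>>>> ScheduleInfo to give the poll tasks a larger share of the task
> >>>>> scheduler, e.g. this sketch with my own element names:
> >>>>>
> >>>>>   ScheduleInfo(pd0 2, pd1 2, pd2 2, pd3 2);
> >>>>>
> >>>>> but I don't know whether that is the right approach.)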
> >>>>>
> >>>>> My experiment environment and configuration:
> >>>>> * Hardware: Intel Xeon X3210 CPU (quad core at 2.13 GHz), 4 GB RAM
> >>>>> (a server on DETERlab)
> >>>>> * Software: Ubuntu 8.04 + Click 1.8, with PollDevice and
> >>>>> multi-threading enabled
> >>>>> * Configuration: ./configure --with-linux=/usr/src/linux-2.6.24.7
> >>>>> --enable-ipsec --enable-warp9 --enable-multithread=4
> >>>>> * Installation: sudo click-install --thread=4 site7_router1.click
> >>>>>
> >>>>> Thanks!
> >>>>> best
> >>>>> Bingyang
> >>>>>
> >>>>> On Sat, Dec 3, 2011 at 12:42 PM, Cliff Frey <cliff at meraki.com> wrote:
> >>>>>
> >>>>
> >>>
> >>
> >>
> >> --
> >> Bingyang Liu
> >> Network Architecture Lab, Network Center, Tsinghua Univ.
> >> Beijing, China
> >> Home Page: http://netarchlab.tsinghua.edu.cn/~liuby
> >>
> >
> >
> >
> > --
> > Bingyang Liu
> > Network Architecture Lab, Network Center, Tsinghua Univ.
> > Beijing, China
> > Home Page: http://netarchlab.tsinghua.edu.cn/~liuby
> >
>
>
>
> --
> Bingyang Liu
> Network Architecture Lab, Network Center, Tsinghua Univ.
> Beijing, China
> Home Page: http://netarchlab.tsinghua.edu.cn/~liuby
> _______________________________________________
> click mailing list
> click at amsterdam.lcs.mit.edu
> https://amsterdam.lcs.mit.edu/mailman/listinfo/click
>

