[Click] SMP Click - Question: Task scheduling

Ashok Ambati ashok.ambati at gmail.com
Sat Feb 9 04:21:01 EST 2008


Hi Adam,

When running the Linux TCP stack over the e1000 driver, I pump 194 Mbps into
the gateway and am able to get 185 Mbps out. Using Click with the polling
driver, I pump 194 Mbps in and get only 70-80 Mbps out.

I am running the gateway on an IBM x366 dual-processor system with 3.66 GHz
Intel Xeon MP processors and 4 GB RAM (DDR2 SDRAM, PC2-3200, 400 MHz).

The network card is a dual-port Gigabit Ethernet PCI card based on the Intel
82546GB chip.

Thanks.

Ashok.


On 2/9/08, Adam Greenhalgh <a.greenhalgh at cs.ucl.ac.uk> wrote:
>
> How many packets can you forward with a single thread ?
>
> What type of system do you have ? What spec of memory do you have in it ?
>
> Adam
>
> On 09/02/2008, Ashok Ambati <ashok.ambati at gmail.com> wrote:
> > Hi Beyers,
> >
> > Thanks very much for your suggestions. I increased TxDescriptors to 4096
> > but still saw queue overflows. Then I changed the queue size to 2000. The
> > queue stopped overflowing, but not all of the packets were getting through.
> > A check on the Ethernet driver showed high counts for rx_no_buffer_count
> > and rx_missed_errors, so I increased RxDescriptors to 4096 as well, but the
> > behavior is the same.
> >
> > I also played with the BURST settings on PollDevice and ToDevice. Setting
> > BURST to 32 caused queue overflows again.
> >
> > Not sure what else I can try. One thought is that the packet processing
> > done by the thread is consuming so much CPU that it cannot poll the
> > adapter often enough.
> >
> > Ashok.
> >
> >
> > On 2/7/08, Beyers Cronje <bcronje at gmail.com> wrote:
> > >
> > > Hi Ashok,
> > >
> > > As far as I can tell, when you run PollDevice, ToDevice always gets
> > > scheduled as well. Once scheduled, PollDevice will push at most BURST
> > > packets per run, where BURST is 8 by default, and ToDevice will try to
> > > transmit BURST packets, where BURST is 16 by default.
> > >
> > > It could happen that ToDevice cannot enqueue a packet to the adapter's
> > > TX ring because the ring is full, which can happen when PollDevice
> > > pushes packets faster than ToDevice can transmit them. Check ToDevice's
> > > 'drops' handler for this. In that case it might help to increase your
> > > adapter's TX ring size; the default with e1000 is 256, and I typically
> > > increase it to the maximum of 4096. You can also experiment with
> > > different settings for BURST in PollDevice and ToDevice, and play with
> > > Queue's capacity value.
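> > >
> > > As a rough sketch of where those knobs live (untested here, and the
> > > numbers are only illustrative starting points, not recommendations), a
> > > minimal polling forwarder with explicit BURST and capacity values would
> > > look like:
> > >
> > >   PollDevice(eth1, BURST 8)
> > >     -> Queue(4000)
> > >     -> ToDevice(eth0, BURST 32);
> > >
> > > In your configuration the corresponding settings are the BURST keyword
> > > of each PollDevice, the second argument of each ToDevice, and the
> > > capacity argument of each Queue.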
> > >
> > > Click also provides the ScheduleInfo element, which lets you manually
> > > tune the scheduling parameters of each element. I am NOT sure whether
> > > ScheduleInfo is compatible with SMP Click; I have never tried it.
> > > See http://read.cs.ucla.edu/click/elements/scheduleinfo
> > > With it you could configure ToDevice to be scheduled more often than
> > > PollDevice. This can obviously lead to the receiving adapter's RX ring
> > > filling up if PollDevice is not scheduled often enough to enqueue
> > > packets into Click. But dropping at the RX adapter is a lot less
> > > expensive than dropping at Click's Queue or at the TX adapter, so this
> > > might be a "good" thing.
> > >
> > > Hope this helps a bit.
> > >
> > > Beyers
> > >
> > >  On Feb 7, 2008 5:44 PM, Ashok Ambati <ashok.ambati at gmail.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > I am a new user. I am configuring the kernel module to route between
> > > > 2 interfaces. On a 2-CPU server, I am creating 2 threads: one polling
> > > > eth0 and transmitting to eth1, and the other polling eth1 and
> > > > transmitting to eth0. I do not understand how each thread switches
> > > > between its 2 tasks so as to maintain a balance between the incoming
> > > > and outgoing packets. For the eth1 -> eth0 path, where there is heavy
> > > > traffic, the queue fronting ToDevice(eth0) is overflowing. Am I doing
> > > > something wrong?
> > > >
> > > > Here is my config -
> > > >
> > > >
> > > > // Generated by make-ip-conf.pl
> > > > // eth0 192.168.200.136 00:04:23:C5:5D:CA
> > > > // eth1 9.47.83.136 00:04:23:C5:5D:CB
> > > >
> > > > // Shared IP input path and routing table
> > > > ip :: Strip(14)
> > > >    -> CheckIPHeader(INTERFACES 192.168.200.136/255.255.255.0
> > > >                                9.47.83.136/255.255.255.0)
> > > >    -> rt :: StaticIPLookup(
> > > >         192.168.200.136/32 0,
> > > >         192.168.200.255/32 0,
> > > >         192.168.200.0/32 0,
> > > >         9.47.83.136/32 0,
> > > >         9.47.83.255/32 0,
> > > >         9.47.83.0/32 0,
> > > >         192.168.200.0/255.255.255.0 1,
> > > >         9.47.83.0/255.255.255.0 2,
> > > >         255.255.255.255/32 0.0.0.0 0,
> > > >         0.0.0.0/32 0,
> > > >         0.0.0.0/0.0.0.0 9.47.83.1 2);
> > > >
> > > > // ARP responses are copied to each ARPQuerier and the host.
> > > > arpt :: Tee(3);
> > > >
> > > > // Input and output paths for eth0
> > > > c0 :: Classifier(12/0806 20/0001, 12/0806 20/0002, 12/0800, -);
> > > > pd0 :: PollDevice(eth0);
> > > > todevice0 :: ToDevice(eth0, 8);
> > > > pd0 -> c0;
> > > > out0 :: Queue(2000) -> todevice0;
> > > > c0[0] -> ar0 :: ARPResponder(192.168.200.136/24 00:04:23:C5:5D:CA) -> out0;
> > > > arpq0 :: ARPQuerier(192.168.200.136, 00:04:23:C5:5D:CA) -> out0;
> > > > c0[1] -> arpt;
> > > > arpt[0] -> [1]arpq0;
> > > > c0[2] -> Paint(1) -> ip;
> > > > c0[3] -> Print("eth0 non-IP") -> Discard;
> > > >
> > > > // Input and output paths for eth1
> > > > c1 :: Classifier(12/0806 20/0001, 12/0806 20/0002, 12/0800, -);
> > > > pd1 :: PollDevice(eth1);
> > > > pd1 -> c1;
> > > > todevice1 :: ToDevice(eth1, 8);
> > > > out1 :: Queue(2000) -> todevice1;
> > > > c1[0] -> ar1 :: ARPResponder(9.47.83.136/24 00:04:23:C5:5D:CB) -> out1;
> > > > arpq1 :: ARPQuerier(9.47.83.136, 00:04:23:C5:5D:CB) -> out1;
> > > > c1[1] -> arpt;
> > > > arpt[1] -> [1]arpq1;
> > > > c1[2] -> Paint(2) -> ip;
> > > > c1[3] -> Print("eth1 non-IP") -> Discard;
> > > >
> > > > // Local delivery
> > > > toh :: ToHost;
> > > > arpt[2] -> toh;
> > > > rt[0] -> EtherEncap(0x0800, 1:1:1:1:1:1, 2:2:2:2:2:2) -> toh;
> > > >
> > > > // Forwarding path for eth0
> > > > rt[1] -> DropBroadcasts
> > > >    -> cp0 :: PaintTee(1)
> > > >    -> gio0 :: IPGWOptions(192.168.200.136)
> > > >    -> FixIPSrc(192.168.200.136)
> > > >    -> dt0 :: DecIPTTL
> > > >    -> fr0 :: IPFragmenter(1500)
> > > >    -> [0]arpq0;
> > > > dt0[1] -> ICMPError(192.168.200.136, timeexceeded) -> rt;
> > > > fr0[1] -> ICMPError(192.168.200.136, unreachable, needfrag) -> rt;
> > > > gio0[1] -> ICMPError(192.168.200.136, parameterproblem) -> rt;
> > > > cp0[1] -> ICMPError(192.168.200.136, redirect, host) -> rt;
> > > >
> > > > // Forwarding path for eth1
> > > > rt[2] -> DropBroadcasts
> > > >    -> cp1 :: PaintTee(2)
> > > >    -> gio1 :: IPGWOptions(9.47.83.136)
> > > >    -> FixIPSrc(9.47.83.136)
> > > >    -> dt1 :: DecIPTTL
> > > >    -> fr1 :: IPFragmenter(1500)
> > > >    -> [0]arpq1;
> > > > dt1[1] -> ICMPError(9.47.83.136, timeexceeded) -> rt;
> > > > fr1[1] -> ICMPError(9.47.83.136, unreachable, needfrag) -> rt;
> > > > gio1[1] -> ICMPError(9.47.83.136, parameterproblem) -> rt;
> > > > cp1[1] -> ICMPError(9.47.83.136, redirect, host) -> rt;
> > > >
> > > > // CPU assignment for Polling and Target devices
> > > > StaticThreadSched(pd0 0);
> > > > StaticThreadSched(todevice0 1);
> > > > StaticThreadSched(pd1 1);
> > > > StaticThreadSched(todevice1 0);
> > > > // ThreadMonitor();
> > > >
> > > > Thanks.
> > > >
> > > >
> > > >
> > > > Ashok.
> > >
> > >
>

