click performance

Robert Morris rtm at amsterdam.lcs.mit.edu
Wed Jul 4 08:44:16 EDT 2001


Amit,

Are you using our patched e1000 driver as well as the patched kernel?
We tuned the e1000 driver in interrupting mode (as well as adding
polling), with the result that it can send and receive about twice as
fast as the original (v2.5.11) Intel driver.

Robert
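
[Editor's note: the polling-versus-interrupting distinction above can be illustrated with a minimal receive-only element graph. This is a hypothetical sketch, not a configuration from this thread; FromDevice and PollDevice are standard Click elements.]

```
// Interrupt-driven receive path, usable with the stock (or tuned)
// interrupting driver:
// FromDevice(eth1) -> Discard;

// Polling receive path; requires the Click-patched e1000 driver:
PollDevice(eth1) -> Discard;
```

Only one of the two paths should be active for a given device, so the interrupting variant is commented out here.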

> From: "Amit Dror" <amit at checkpoint.com>
> To: "'Robert Morris'" <rtm at amsterdam.lcs.mit.edu>
> Cc: <click at amsterdam.lcs.mit.edu>,
>    "Oded Gonda (E-mail)" <ogonda at checkpoint.com>
> Subject: RE: click performance 
> Date: Wed, 4 Jul 2001 14:24:54 +0200
> Importance: Normal
> 
> Robert,
> 
> After reinstalling our system because of a HD failure, the result we got
> with FastUDPSource was 880,000 (compared to the 524,193 we had before).
> As this is still quite far from the 1.3 million number, we wanted to know
> if you have any other ideas about what could cause this.
> 
> Another issue we had is that we saw a notable improvement when comparing
> a bare 2.2.18 kernel with a Click-patched 2.2.18 kernel.
> With the bare 2.2.18 we were able to forward 130,000 64-byte packets per
> second. With the Click-patched kernel we got 260,000 64-byte packets per
> second.
> This was somewhat surprising, since the Click module was not loaded and
> we disabled the CPU cache changes in skbuff.c (because of binary
> compatibility issues).
> Examining the changes in the 2.2.18 patch (excluding the skbuff.c
> changes), we couldn't find any hint of what causes this major difference.
> Do you know what may have caused this difference?
> 
> Thanks,
> 
> Amit
> 
> > -----Original Message-----
> > From: Robert Morris [mailto:rtm at amsterdam.lcs.mit.edu]
> > Sent: Sun, June 17, 2001 3:51 PM
> > To: Amit Dror
> > Cc: click at amsterdam.lcs.mit.edu
> > Subject: Re: click performance
> >
> >
> > Amit,
> >
> > I think that 1.3 million number came from an 800 MHz Pentium III with
> > a ServerWorks LE chipset and 64-bit/66 MHz PCI, on a Supermicro 370DLE
> > motherboard.
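
[Editor's note: for a sense of scale, a back-of-envelope sketch of the per-packet CPU budget implied by the numbers quoted above (800 MHz CPU, 1.3 million packets per second); this is an editor's illustration, not a figure from the thread.]

```python
# Back-of-envelope: CPU cycles available per packet when an 800 MHz
# Pentium III sources or sinks 1.3 million packets per second.
cpu_hz = 800_000_000      # 800 MHz Pentium III (from the message above)
pps = 1_300_000           # 1.3 million 64-byte packets per second

cycles_per_packet = cpu_hz / pps
print(round(cycles_per_packet))  # ~615 cycles per packet
```

At roughly 600 cycles per packet, even small per-packet costs (an interrupt, a cache miss) dominate, which is why the polling driver matters at these rates.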
> >
> > You should be able to forward IP faster than 325,000 p/s.
> >
> > How fast can you send with a suitably modified version of this?
> >
> > ctr :: FastUDPSource(1300000, 13000000, 60, SRCETH, SRCIP, 1001,
> >                      DSTETH, DSTIP, 1002, 1)
> >   -> ToDevice(eth1);
> > PollDevice(eth1) -> Discard;
> >
> > Robert
> >
> > > From: "Amit Dror" <amit at checkpoint.com>
> > > To: "'Robert Morris'" <rtm at amsterdam.lcs.mit.edu>
> > > Cc: <click at amsterdam.lcs.mit.edu>
> > > Subject: RE: click performance
> > > Date: Sun, 17 Jun 2001 16:22:39 +0200
> > > Importance: Normal
> > >
> > > Robert,
> > >
> > > The cards we use are also Pro/1000 F (although connected to a
> > > 33 MHz PCI bus).
> > >
> > > Attached is the configuration file.
> > >
> > > What is the CPU speed on the machine you are using ?
> > >
> > > Thanks,
> > >
> > >
> > > Amit
> > >
> > > > -----Original Message-----
> > > > From: Robert Morris [mailto:rtm at amsterdam.lcs.mit.edu]
> > > > Sent: Fri, June 15, 2001 3:46 PM
> > > > To: Amit Dror
> > > > Cc: click at amsterdam.lcs.mit.edu
> > > > Subject: Re: click performance
> > > >
> > > >
> > > > Amit,
> > > >
> > > > While Click can receive or send that fast, it can't do both. I
> > > > think it can only forward at about half that rate.
> > > >
> > > > We use a machine with a 64-bit/66 MHz PCI bus.
> > > >
> > > > We also use the 66 MHz PCI version of the e1000, called the
> > > > "Pro/1000 F Server Adapter" (note the "F"), model number
> > > > PWLA8490SX.
> > > >
> > > > If you have model PWLA8490, you have a card with a 33 MHz PCI
> > > > interface.  It sends a lot slower than the PWLA8490SX, but I
> > > > think it receives at about the same rate.
> > > >
> > > > Robert
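
[Editor's note: to see how much the bus width and clock matter for 64-byte packets, a rough ceiling calculation. This is an editor's sketch under the stated assumptions, not a measurement from this thread; it ignores descriptor DMA, bus arbitration, and burst-size limits, all of which cut real throughput substantially.]

```python
# Rough theoretical ceilings for small-packet forwarding across one
# shared PCI bus.

def pci_peak_bytes_per_sec(width_bits: int, clock_hz: int) -> int:
    """Theoretical peak transfer rate of a PCI bus."""
    return (width_bits // 8) * clock_hz

def forwarding_pps_ceiling(bus_bytes_per_sec: int, packet_bytes: int = 64) -> float:
    # A forwarded packet crosses the bus twice: NIC -> RAM, then RAM -> NIC.
    return bus_bytes_per_sec / (2 * packet_bytes)

bus_33 = pci_peak_bytes_per_sec(64, 33_000_000)  # 264,000,000 B/s
bus_66 = pci_peak_bytes_per_sec(64, 66_000_000)  # 528,000,000 B/s

print(forwarding_pps_ceiling(bus_33))  # 2,062,500 pps ceiling at 64-bit/33 MHz
print(forwarding_pps_ceiling(bus_66))  # 4,125,000 pps ceiling at 64-bit/66 MHz
```

Even the slower 64-bit/33 MHz bus has a raw-bandwidth ceiling above 2 Mpps, which suggests (as an inference, not a measurement) that the observed gap comes largely from per-transaction overhead rather than raw bandwidth alone.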
> > > >
> > > > > From: "Amit Dror" <amit at checkpoint.com>
> > > > > To: "'Robert Morris'" <rtm at amsterdam.lcs.mit.edu>
> > > > > Cc: <click at amsterdam.lcs.mit.edu>
> > > > > Subject: RE: click performance
> > > > > Date: Fri, 15 Jun 2001 16:24:18 +0200
> > > > > Importance: Normal
> > > > >
> > > > > Robert,
> > > > >
> > > > > The following line is taken from the Click Change Log/Software
> > > > > History, Version 1.2.0:
> > > > > "Added a polling Intel EEPro 1000 gigabit driver, which can
> > > > > receive or send up to 1.3 million 64 byte packets per second."
> > > > >
> > > > > We are using a SmartBits system to generate traffic and measure
> > > > > the forwarding rate.  SmartBits should be able to perform up to
> > > > > full gigabit capacity.
> > > > >
> > > > > I don't have the configuration file accessible at the moment,
> > > > > but it was generated by the make-ip-conf.pl script.
> > > > >
> > > > > I'm starting to suspect that the difference may be caused by the
> > > > > PCI bus.  We are using a single 64-bit/33 MHz PCI bus.
> > > > >
> > > > > Thanks,
> > > > >
> > > > >
> > > > > Amit
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Robert Morris [mailto:rtm at amsterdam.lcs.mit.edu]
> > > > > > Sent: Thu, June 14, 2001 6:14 PM
> > > > > > To: Amit Dror
> > > > > > Cc: click at amsterdam.lcs.mit.edu
> > > > > > Subject: Re: click performance
> > > > > >
> > > > > >
> > > > > > Amit,
> > > > > >
> > > > > > Where did you see a report of Click forwarding 1.3 million
> > > > > > packets per second?
> > > > > >
> > > > > > What are you using to generate the packets? What are you using
> > > > > > to measure the forwarding rate?
> > > > > >
> > > > > > Can you send us the exact Click configuration you are using?
> > > > > >
> > > > > > Thanks,
> > > > > > Robert
> > > > > >
> > > > > > > From: "Amit Dror" <amit at checkpoint.com>
> > > > > > > To: <click at amsterdam.lcs.mit.edu>
> > > > > > > Subject: click performance
> > > > > > > Date: Thu, 14 Jun 2001 19:05:42 +0200
> > > > > > > Importance: Normal
> > > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > we have set up an environment with a Click router and Intel
> > > > > > > EEPro 1000 gigabit cards.
> > > > > > >
> > > > > > > Running a test with a 64-byte packet stream, we reached only
> > > > > > > 325,000 packets per second with the following hardware
> > > > > > > configuration:
> > > > > > >
> > > > > > > IBM Netfinity 4500R
> > > > > > > Dual Intel PIII 1000 MHz (booted UP kernel), 256 MB RAM
> > > > > > > 2 x Intel PRO/1000 NICs
> > > > > > >
> > > > > > > The Click router is running a simple IP router configuration
> > > > > > > generated by make-ip-conf.pl
> > > > > > >
> > > > > > > As this is quite far from the reported 1.3 million 64-byte
> > > > > > > packets per second result, we are trying to understand what
> > > > > > > causes the difference.
> > > > > > >
> > > > > > > Is it possible to get the hardware configuration in which
> > > > > > > the 1.3 million 64-byte packets per second figure was
> > > > > > > measured?
> > > > > > >
> > > > > > > Do you have any idea about what may cause this difference?
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > >
> > > > > > > Amit Dror, Software Developer,
> > > > > > > Check Point Software Technologies Ltd.
> > > > > > > http://www.checkpoint.com
> > > > > > > Phone: +972-3-7534532, Fax: +972-3-7534893
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > >
> > > [Attachment: router.click]
> > >
> > > // Generated by make-ip-conf.pl
> > > // eth1 1.0.0.100 00:90:27:E2:67:21
> > > // eth2 2.0.0.100 00:D0:B7:6F:2F:09
> > >
> > > tol :: ToLinux;
> > > t :: Tee(3);
> > > t[2] -> tol;
> > >
> > > c0 :: Classifier(12/0806 20/0001,
> > >                   12/0806 20/0002,
> > >                   12/0800,
> > >                   -);
> > > PollDevice(eth1) -> [0]c0;
> > > out0 :: Queue(200) -> todevice0 :: ToDevice(eth1);
> > > arpq0 :: ARPQuerier(1.0.0.100, 00:90:27:E2:67:21);
> > > c0 [1] -> t;
> > > t[0] -> [1]arpq0;
> > > arpq0 -> out0;
> > > ar0 :: ARPResponder(1.0.0.100 00:90:27:E2:67:21);
> > > c0 [0] -> ar0 -> out0;
> > >
> > > c1 :: Classifier(12/0806 20/0001,
> > >                   12/0806 20/0002,
> > >                   12/0800,
> > >                   -);
> > > PollDevice(eth2) -> [0]c1;
> > > out1 :: Queue(200) -> todevice1 :: ToDevice(eth2);
> > > arpq1 :: ARPQuerier(2.0.0.100, 00:D0:B7:6F:2F:09);
> > > c1 [1] -> t;
> > > t[1] -> [1]arpq1;
> > > arpq1 -> out1;
> > > ar1 :: ARPResponder(2.0.0.100 00:D0:B7:6F:2F:09);
> > > c1 [0] -> ar1 -> out1;
> > >
> > > rt :: LookupIPRoute(
> > >  1.0.0.100/32 0,
> > >  1.0.0.255/32 0,
> > >  1.0.0.0/32 0,
> > >  2.0.0.100/32 0,
> > >  2.0.0.255/32 0,
> > >  2.0.0.0/32 0,
> > >  1.0.0.0/255.255.255.0 1,
> > >  2.0.0.0/255.255.255.0 2,
> > >  255.255.255.255/32 0.0.0.0 0,
> > >  0.0.0.0/32 0,
> > >  0.0.0.0/0 18.26.4.1 1);
> > >
> > > rt[0] -> EtherEncap(0x0800, 1:1:1:1:1:1, 2:2:2:2:2:2) -> tol;
> > > ip ::  Strip(14)
> > >     -> CheckIPHeader(1.0.0.255 2.0.0.255 )
> > >     -> GetIPAddress(16)
> > >     -> [0]rt;
> > > c0 [2] -> Paint(1) -> ip;
> > > c1 [2] -> Paint(2) -> ip;
> > >
> > > rt[1] -> DropBroadcasts
> > >         -> cp0 :: PaintTee(1)
> > >         -> gio0 :: IPGWOptions(1.0.0.100)
> > >         -> FixIPSrc(1.0.0.100)
> > >         -> dt0 :: DecIPTTL
> > >         -> fr0 :: IPFragmenter(1500)
> > >         -> [0]arpq0;
> > > dt0 [1] -> ICMPError(1.0.0.100, 11, 0) -> [0]rt;
> > > fr0 [1] -> ICMPError(1.0.0.100, 3, 4) -> [0]rt;
> > > gio0 [1] -> ICMPError(1.0.0.100, 12, 1) -> [0]rt;
> > > cp0 [1] -> ICMPError(1.0.0.100, 5, 1) -> [0]rt;
> > > c0 [3] -> Print(xx0) -> Discard;
> > > rt[2] -> DropBroadcasts
> > >         -> cp1 :: PaintTee(2)
> > >         -> gio1 :: IPGWOptions(2.0.0.100)
> > >         -> FixIPSrc(2.0.0.100)
> > >         -> dt1 :: DecIPTTL
> > >         -> fr1 :: IPFragmenter(1500)
> > >         -> [0]arpq1;
> > > dt1 [1] -> ICMPError(2.0.0.100, 11, 0) -> [0]rt;
> > > fr1 [1] -> ICMPError(2.0.0.100, 3, 4) -> [0]rt;
> > > gio1 [1] -> ICMPError(2.0.0.100, 12, 1) -> [0]rt;
> > > cp1 [1] -> ICMPError(2.0.0.100, 5, 1) -> [0]rt;
> > > c1 [3] -> Print(xx1) -> Discard;
> > >
> > >
> >
> >
> 



