[Fwd: Re: [Click] generating artificial delay]

Simon Schuetz simon.schuetz at netlab.nec.de
Fri Mar 26 14:31:06 EST 2004


Hi,
I solved those performance problems by increasing the TCP buffer and
window sizes. Unfortunately, I am now seeing some strange results when
using IPerf to measure TCP throughput: most runs produce nearly the
same throughput, but a few are far off. For example, I measured the
time it takes to send 300 MByte of data. The results, in seconds:
21, 21, 21, 37, 21, 21, 21, 21, ...
The 37 is far off the usual 21. Over many tests, this happened on
average every tenth run. My test network is not connected to any other
machines; I disabled cron, ran at Linux runlevel 3, and shut down
unnecessary services. Without the Click router in the path, the
problem does not occur.
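The measurement itself looks roughly like this (iperf 2 syntax; the
window size is only an example, not my exact invocation):

  # on the IPerf server
  iperf -s -V

  # on the IPerf client: send 300 MByte over IPv6 and note the reported time
  iperf -c <server-address> -V -n 300M -w 1M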

The machines are set up as follows, using Gigabit Ethernet cards and
running statically configured IPv6 (no radvd):

IPerfServer(eth0)-----(eth2)ClickRouter(eth0)-----(eth0)IPerfClient

The Click configuration is:

myqueue1 :: Queue(10000)
myqueue2 :: Queue(10000)

// emit each packet 50 ms late, then hand it to the kernel for forwarding
FromDevice(eth0) -> myqueue1 -> DelayUnqueue(0.05) -> ToHost(eth0)
FromDevice(eth2) -> myqueue2 -> DelayUnqueue(0.05) -> ToHost(eth2)
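A rough sanity check on what the delay alone should do to TCP
(assuming the flow is window-limited; the 1 MB window is only an
example):

  added RTT       ~ 2 * 50 ms = 100 ms   (data and ACKs each cross one DelayUnqueue)
  max throughput  ~ window / RTT ~ 1 MB / 0.1 s = 10 MB/s
  transfer time   ~ 300 MB / 10 MB/s = 30 s

This is the same ballpark as the 21-second runs; it is the occasional
outlier that I cannot explain.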

Some important sysctl settings:

net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_adv_win_scale = 2
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_rmem = 4096        87380   1048576
net.ipv4.tcp_wmem = 4096        65536   1048576
net.ipv4.tcp_mem = 48128        48640   49152
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_fack = 1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_max_orphans = 8192
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_timestamps = 1
net.core.rmem_default = 65535
net.core.rmem_max = 2097152
net.core.wmem_default = 65535
net.core.wmem_max = 2097152

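(For completeness: these ceilings can be set at runtime with
sysctl -w; the lines below just reproduce the values shown above:)

  sysctl -w net.core.rmem_max=2097152
  sysctl -w net.core.wmem_max=2097152
  sysctl -w net.ipv4.tcp_rmem="4096 87380 1048576"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 1048576"
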
I looked at the information in /proc/click (read roughly as shown
below): the queues did not drop any packets, and their capacity was
never fully used. Between test runs I reload the Click router to get
back to a clean initial state. I also tested with different window and
buffer sizes and different queue capacities.
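(Assuming the stock Queue element's handlers under the kernel module,
the checks look like this:)

  cat /proc/click/myqueue1/drops              # packets dropped by the queue
  cat /proc/click/myqueue1/highwater_length   # highest occupancy seen (vs. capacity 10000)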

Any ideas?

Thanks 
Simon


On Mon, 2004-03-22 at 17:56, Eddie Kohler wrote:
> Simon Schuetz wrote:
> > Hi,
> > I think I have to revise my recent postings.
> > It seems that the Click router is not the absolute bottleneck, whether
> > using
> > FromDevice(eth0) -> Queue -> DelayUnqueue(0.05) -> ToHost(eth0)
> > or
> > FromDevice(eth0) -> Queue -> DelayShaper(0.05) -> ToDevice(eth2)
> > 
> > I tested with several TCP streams at a time and found that the total
> > throughput of those streams is far higher than with one stream. So the
> > limiting factor is probably the TCP configuration.
> > 
> > Sorry about that
> > Simon
> 
> Ah!!!  Of course.  Somehow I thought you were testing throughput with UDP. 
> Adding 100 milliseconds to a TCP flow's RTT will *seriously* impact its 
> throughput (for a fixed window, throughput is inversely proportional to 
> RTT).  I'd be interested to see whether your throughput also goes down a 
> lot when measured with UDP.
> 
> Eddie
> 
