click perform effects with respect to RAM
Will Stockwell
bigwill at mit.edu
Fri Feb 21 15:25:32 EST 2003
Response to Brecht's mail comes first, then to Gordon's. Thanks for the
input!
On Thu, 20 Feb 2003, Brecht Vermeulen wrote:
> can you maybe try with a full-blown router (make-ip-conf.pl) to see what
> it says ? (you seem to split on subnets so this should work also I
> think)
I gave this a quick and dirty shot, but it seems to be complaining.
This is less than ideal because it really shouldn't be thinking about
ARP responses and things like that. I just want a dumb box that
passes packets quickly. I'll experiment more later and get back to you if
nothing else works.
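For reference, the kind of minimal "dumb forwarder" I have in mind looks
roughly like this (a sketch only; the interface names and queue size are
placeholders, not my actual config):

```
// Sketch: poll packets from eth1, buffer them, and push them
// out eth2 untouched -- no ARP, no routing, no rewriting.
PollDevice(eth1) -> Queue(1024) -> ToDevice(eth2);

// Keep the reverse device serviced so its DMA ring doesn't fill;
// drop anything that comes back in, and give eth1's transmit side
// an idle source so ToDevice has an input.
PollDevice(eth2) -> Discard;
Idle -> ToDevice(eth1);
```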
>
> How big are the packets you are generating ? and what do you use to
> generate packets ? (sorry if it was already mentioned in the thread)
> Have you tried to generate less packets and see when the output matches
> the input ?
>
This is a live data stream so packets are unpredictable in every way you
can imagine. It's not really possible for us to modulate the size of the
stream.
> Maybe you can add a 'ThreadMonitor' element in your configuration to
> have a look at the scheduling ?
>
Doesn't seem to be something I can do. click-install gives me this:
click.conf:5: While configuring `ThreadMonitor@2 :: ThreadMonitor':
ThreadMonitor requires multithreading
Router could not be initialized!
Perhaps the fact that I can't even use ThreadMonitor points to the problem?
Could my lack of multithreading be the issue? Hrmm
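If my Click build lacks SMP support, that would explain the ThreadMonitor
error. As I understand it (an assumption on my part, not something I've
verified), multithread-only elements require Click to be configured with
multithreading at build time, something like:

```
# Assumption: rebuild Click with multithreading enabled so that
# SMP-only elements such as ThreadMonitor will initialize.
./configure --enable-multithread
make && make install
```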
> Which kernel do you use ?
>
I'm using 2.4.18. It's got SMP compiled in even though my box is a single
processor, dunno if that matters.
> Have you tried something like:
> PollDevice(eth1, true) -> Counter -> Queue -> ToDevice(eth2);
> Idle -> ToDevice(eth1);
> PollDevice(eth2)->Discard;
>
Yep, as reported in my last e-mail. This helped out a bit, but it's not
nearly good enough.
> and then try with eth1 and eth2 switched (maybe there is a problem with
> a card or so)
>
This gives me about the same result using any of the 3 interfaces as
the incoming one.
> Which type of e1000 cards are you using ?
Intel Pro/1000 T Desktop Adapters
>
> regards,
> Brecht
>
To Gordon:
On Thu, 20 Feb 2003, Gordon Lee wrote:
> 33Mhz slots are typically only 32 bits wide.
> I believe that most if not all e1000 cards are 64 bits wide.
> Do you have a 64 bit wide card sitting in a 32 bit slot ?
Nope, they are definitely 32-bit.
> It would be pretty obvious because the back half of the card
> edge connector would be suspended outside of the PCI slot.
> Some makers say they support that, but it can produce undefined
> behaviour. Frequent Tx hang/restarts might explain the low rate.
>
> Do you mean that the two ports are simply connected to each other ?
> If not, what exactly are they connected to ?
>
Yes, they are connected to one another. There is no device between. Any
packet transmitted on one will then be received by the other via the
PollDevice elements that are in place. Could this be a problem? This
effectively makes Click see every packet twice. I might try to find some
hardware I can hook up downstream so that this won't happen.
> Another thing that can slow down rates (again not to this extent alone)
> is the speed/duplex negotiated on the link. To get more raw information
> on what is happening at the hardware level, try this:
>
> bash$ cat /proc/net/PRO_LAN_Adapters/eth2.info
> ...
The dump is below if you want to peruse it. All the interfaces are
identical except that the statistics differ between the incoming and
outgoing ones, of course. The cards are 32-bit wide, on a 33 MHz bus, full
duplex at gigabit. Interestingly, there is a huge number of transmit
drops. Perhaps related to the crossover issue? Might the outgoing
interfaces be fighting over media access on my crossover between them?
Will
# cat /proc/net/PRO_LAN_Adapters/eth2.info
Description Intel(R) PRO/1000 Network Connection
Part_Number a62947-007
Driver_Name e1000
Driver_Version 4.3.15
PCI_Vendor 0x8086
PCI_Device_ID 0x100c
PCI_Subsystem_Vendor 0x8086
PCI_Subsystem_ID 0x1112
PCI_Revision_ID 0x02
PCI_Bus 2
PCI_Slot 9
PCI_Bus_Type PCI
PCI_Bus_Speed 33MHz
PCI_Bus_Width 32-bit
IRQ 4
System_Device_Name eth2
Current_HWaddr 00:02:B3:C3:2F:10
Permanent_HWaddr 00:02:B3:C3:2F:10
Link up
Speed 1000
Duplex Full
State up
Rx_Packets 72888249
Tx_Packets 2726754363
Rx_Bytes 2714499860
Tx_Bytes 917541367
Rx_Errors 1845297
Tx_Errors 0
Rx_Dropped 9971
Tx_Dropped 177186736
Multicast 0
Collisions 0
Rx_Length_Errors 0
Rx_Over_Errors 0
Rx_CRC_Errors 0
Rx_Frame_Errors 0
Rx_FIFO_Errors 1835326
Rx_Missed_Errors 1835326
Tx_Aborted_Errors 0
Tx_Carrier_Errors 0
Tx_FIFO_Errors 0
Tx_Heartbeat_Errors 0
Tx_Window_Errors 0
Tx_Abort_Late_Coll 0
Tx_Deferred_Ok 0
Tx_Single_Coll_Ok 0
Tx_Multi_Coll_Ok 0
Rx_Long_Length_Errors 0
Rx_Short_Length_Errors 0
Rx_Align_Errors 0
Rx_Flow_Control_XON 0
Rx_Flow_Control_XOFF 0
Tx_Flow_Control_XON 0
Tx_Flow_Control_XOFF 0
Rx_CSum_Offload_Good 71768606
Rx_CSum_Offload_Errors 3491
PHY_Media_Type Copper
PHY_Cable_Length 0-50 Meters (+/- 20 Meters)
PHY_Extended_10Base_T_Distance Disabled
PHY_Cable_Polarity Normal
PHY_Disable_Polarity_Correction Disabled
PHY_Idle_Errors 510
PHY_Receive_Errors 31
PHY_MDI_X_Enabled MDI
PHY_Local_Receiver_Status OK
PHY_Remote_Receiver_Stat