[Click] Unable to handle kernel paging request at virtual address
Paine, Thomas Asa
PAINETA at uwec.edu
Thu Oct 26 13:13:09 EDT 2006
Beyers,
No, my maps are small. I had set up some very tight testing
conditions, and the map sizes were very small and appear to have no
bearing on the crashing. It only appears to tank when SMP is
involved. It runs fine without SMP, or with SMP under zero packet load.
You remember my post back in September
(https://amsterdam.lcs.mit.edu/pipermail/click/2006-September/005176.html)?
Well, I don't think I'm out of the woods with that issue. I don't
think SMP, P4 Hyperthreading, and Spinlock (click/include/click/sync.hh)
are working right together. My system can run like a rock star; however,
it is only when I call the write handlers from /proc (which then triggers
thousands of handler calls between elements) that I see the issues. My
hypothesis is that my sched_setaffinity() hack job handled the race
condition with Click's main thread (and only one thread, per my prior
post), but the system call into Click still leaves the locks vulnerable,
and thus the locks are not working.
I compiled Click and my package with the -g compiler option (via
CXXFLAGS), produced the oops, located the exact line in the source code
(via gdb / info line / disassemble), and was able to verify that the
assembly and registers are correct at the time of the crashes. It would
also explain why it always seems to pick on a couple of my elements:
they are the ones handling the packets and have the most lock exposure.
I must admit, the whole oops/gdb experience is worth its weight
in gold; however, I'd much rather be improving my software than
wrestling with it. These are painful times...
Can anyone confirm or deny any issues with Spinlock and P4
Hyperthreading w/ SMP?
Thanks,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Thomas Paine (paineta at uwec.edu)
University of Wisconsin - Eau Claire
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
________________________________
From: Beyers Cronje [mailto:bcronje at gmail.com]
Sent: Wednesday, October 25, 2006 5:50 PM
To: Paine, Thomas Asa
Cc: click at pdos.csail.mit.edu
Subject: Re: [Click] Unable to handle kernel paging request at virtual
address
Hi Thomas,
How big is your HashMap at the point of the crash? HashMap only supports
a maximum of 32k entries due to a kmalloc limitation. If this is your
issue, you can work around it by creating multiple HashMaps and hashing
inserts across them.
Maybe hashmap should also be a CLICK_LALLOC candidate.
Cheers
Beyers
On 10/25/06, Paine, Thomas Asa <PAINETA at uwec.edu> wrote:
Ok, I started a long email but soon realized it would be confusing as
hell, so let me try the quick and dirty post first :)
I'm getting oopses, with EIP always pointing to HashMap.insert.
However, the call trace can differ (somewhat), and the top of the call
trace is not always in a method using HashMap.insert (.find, though).
Eliminating calls seen in the trace seems to have no effect on
preventing an oops. Are there any open issues/changes with HashMap? Is
this a memory allocation error? A possibly incorrect kernel setting?
Anyone have troubleshooting recommendations for this error? I've
attached my current kernel config for 2.6.16.13. I'm not 100% sure this
has anything to do specifically with HashMap, or if it is just HashMap
manifesting the error, for whatever the real reason might be.
Looking for some guidance, thanks.
Unable to handle kernel paging request at virtual address f9291c89
printing eip:
e08eea67
*pde = 00000000
Oops: 0000 [#1]
SMP
Modules linked in: tms click proclikefs e1000
CPU: 1
EIP: 0060:[<e08eea67>] Tainted: P VLI
EFLAGS: 00010282 (2.6.16.13-tms #3)
EIP is at _ZN7HashMapIjjE6insertERKjS2_+0x57/0xe2 [tms]
eax: 00003fff ebx: 00007fff ecx: dff152c0 edx: f9291c89
esi: dfa23520 edi: 000066de ebp: df703c88 esp: df703c48
ds: 007b es: 007b ss: 0068
Process bash (pid: 1030, threadinfo=df702000 task=c1b76a50)
Stack: <0>e08e135f dfa23520 dfa93a00 dfb67180 dfa939c0 df703c88 e08e1706 dfa23520
       df703c88 df703cac dfb67180 00000001 00000004 00000000 00000000 df703cb0
       652a1c89 dfa939c0 00000000 df703cb0 dfa939c0 dfe69760 e08e181e dfa939c0
Call Trace:
[<e08e135f>] _ZN16BlacklistElement14is_whitelistedEm+0x5f/0xc6 [tms]
[<e08e1706>] _ZN16BlacklistElement9addToListER6Stringj+0x172/0x1c8 [tms]
[<e08e181e>] _ZN16BlacklistElement13write_handlerERK6StringP7ElementPvP12ErrorHandler+0xc2/0x144 [tms]
[<e0c5d50a>] _ZNK7Handler10call_writeERK6StringP7ElementbP12ErrorHandler+0x144/0x1ca [click]
[<e0c67497>] _ZN11HandlerCall6assignEP7ElementRK6StringS4_iP12ErrorHandler+0x27/0x1e8 [click]
[<e0c68177>] _ZN11HandlerCall10call_writeEP7ElementRK6StringS4_P12ErrorHandler+0x93/0xcc [click]
[<e0c68177>] _ZN11HandlerCall10call_writeEP7ElementRK6StringS4_P12ErrorHandler+0x93/0xcc [click]
[<e08e9c48>] _ZN13TMSController4listEm6String+0x6a/0x1de [tms]
[<e0c438e4>] click_lalloc+0x3c/0x3e [click]
[<e0c3c4a2>] _ZN6String4MemoC1Eii+0x24/0x2c [click]
[<e08ec238>] _ZN13TMSController14whitelist_cidrE6String+0xa8/0xde [tms]
[<e08ecf40>] _ZN13TMSController13write_handlerERK6StringP7ElementPvP12ErrorHandler+0x21a/0x2e2 [tms]
[<e0c438e4>] click_lalloc+0x3c/0x3e [click]
[<e0c3c4a2>] _ZN6String4MemoC1Eii+0x24/0x2c [click]
[<e0c3c57a>] _ZN6String14append_garbageEi+0xa4/0x150 [click]
[<e0c5d50a>] _ZNK7Handler10call_writeERK6StringP7ElementbP12ErrorHandler+0x144/0x1ca [click]
[<e0c438e4>] click_lalloc+0x3c/0x3e [click]
[<e0c3ce8d>] _ZN6String6appendEPKci+0x2f/0x76 [click]
[<e0cc26e7>] handler_flush+0x577/0x57e [click]
[<c0159c49>] filp_close+0x28/0x6d
[<c016d1c6>] sys_dup2+0xc5/0xee
[<c0102cb9>] syscall_call+0x7/0xb
Code: d2 74 17 3b 0a 75 f5 8b 4c 24 24 8b 01 89 42 04 31 c0 83 c4 08 5b 5e 5f 5d c3 8b 46 0c 3b 46 10 73 41 8b 4e 14 8b 11 85 d2 74 5a <8b> 02 89 01 89 d3 8b 45 00 89 03 8b 4c 24 24 8b 01 89 43 04 8d
Thanks,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Thomas Paine (paineta at uwec.edu)
University of Wisconsin - Eau Claire
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_______________________________________________
click mailing list
click at amsterdam.lcs.mit.edu
https://amsterdam.lcs.mit.edu/mailman/listinfo/click