[Click] Conversion of threads
Cliff Frey
cliff at meraki.com
Thu Feb 24 13:45:22 EST 2011
This config ran for ~20 seconds for me before it got stuck forever; now exactly the following prints out over and over:
f0/is.scheduled:
false
f0/is.count:
1060904
f5/is.scheduled:
false
f5/is.count:
1149700
uq2.scheduled:
false
master_q.length:
0
master_q.drops:
3499073
c.count:
3140379
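
For reference, readings like these can be polled from inside the router itself. If I remember Script's syntax right, something along these lines (element names assumed to match the config quoted below) would print the same handlers once a second:

sc :: Script(TYPE ACTIVE,
             print "f0/is.scheduled: $(f0/is.scheduled)",
             print "f0/is.count: $(f0/is.count)",
             print "master_q.length: $(master_q.length)",
             print "master_q.drops: $(master_q.drops)",
             print "c.count: $(c.count)",
             wait 1s,
             loop);

If c.count keeps climbing while the sources read unscheduled, the tasks are at least still being run occasionally; if the numbers freeze entirely, as above, the tasks have genuinely wedged.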
On Thu, Feb 24, 2011 at 10:30 AM, Eddie Kohler <kohler at cs.ucla.edu> wrote:
> Hey
>
> I thought I found a bug in ThreadSafeQueue, but convinced myself it is OK.
> There could be problems in the threading core (which has been changed for
> coreperformance work). However, I do not observe Click getting stuck.
> Rather, I observe it performing radically slowly. The attached config
> prints counter information, etc. every second. The counters keep going up.
> Although the InfiniteSources often look unscheduled, this is because they
> *are* unscheduled much of the time; the fast_reschedule() happens at the end
> of InfiniteSource::run_task(). I don't know what's going on. Commenting
> out all notification does not help.
>
> Eddie
>
>
>
> On 2/24/11 8:12 AM, Cliff Frey wrote:
>
>> I wanted to benchmark the difference between ThreadSafeQueue and
>> one-queue-per-thread + RoundRobinScheduler + Unqueue + another-Queue.
>> However, I ended up finding a bug.
>>
>> This config:
>>
>> elementclass Foo {
>> is :: InfiniteSource -> [0] output;
>> };
>> f0::Foo,f1::Foo,f2::Foo,f3::Foo,f4::Foo,f5::Foo
>> -> master_q :: ThreadSafeQueue
>> -> uq2 :: Unqueue
>> -> c :: Counter(COUNT_CALL 4000000 stop)
>> -> Discard;
>> StaticThreadSched(
>> f0/is 0,
>> f1/is 1,
>> f2/is 2,
>> f3/is 3,
>> f4/is 4,
>> f5/is 5,
>> uq2 7);
>>
>> When run with 'click --threads=8 foo.click', it ends up getting stuck,
>> with an empty master_q and none of the InfiniteSource elements scheduled.
>> I'm running on an i7 CPU with 4 cores + hyperthreading, giving 8 CPUs
>> from linux's perspective.
>>
>> Also interesting is how many drops are seen at the queue in this case
>> (not too surprising, really):
>>
>> read f0/is.scheduled
>> false
>> read master_q.length
>> 0
>> read master_q.drops
>> 163540
>> read c.count
>> 231797
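>>
>> For comparison, the per-thread-queue variant I originally wanted to
>> benchmark would look roughly like this (just a sketch; RoundRobinSched
>> is the stock element I meant by "RoundRobinScheduler", and the second
>> Unqueue is there only to drain the final Queue):
>>
>> elementclass Bar {
>> is :: InfiniteSource -> q :: Queue -> output;
>> };
>> b0 :: Bar -> [0] rr :: RoundRobinSched;
>> b1 :: Bar -> [1] rr;
>> rr -> uq :: Unqueue -> q2 :: Queue -> uq2 :: Unqueue
>> -> c :: Counter(COUNT_CALL 4000000 stop) -> Discard;
>> StaticThreadSched(b0/is 0, b1/is 1, uq 2, uq2 3);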
>>
>> Cliff
>>
>> On Thu, Feb 24, 2011 at 7:27 AM, Eddie Kohler <kohler at cs.ucla.edu> wrote:
>>
>> Try ThreadSafeQueue at the converge point, which as Bobby points out
>> should be correct even when multithreaded, but doesn't require locks.
>> However, it will still be slowish, as would be anything that tried to
>> enforce strict order, since the queue pointers will bounce among cores.
>>
>> Eddie
>>
>>
>> On 2/23/11 9:36 PM, Philip Prindeville wrote:
>> > I was wondering... if I'm running multiple duplicate paths, each on
>> > its own core, and I eventually need to collect them up so I can
>> > resequence packets and pass them up into user-space... how feasible is
>> > it to go from threaded and lockless to converging into a single locked
>> > queue (fan-in)?
>> >
>> > I figure it would be the best of both worlds, because most of the
>> > operations could happen threaded and without locking, and the locking
>> > only needs to be enforced in the final stage...
>> >
>> > Thanks.
>> >
>> > -Philip
>> >
>> > _______________________________________________
>> > click mailing list
>> > click at amsterdam.lcs.mit.edu
>> > https://amsterdam.lcs.mit.edu/mailman/listinfo/click