
The Grid Project: Implementation


This page provides more detail about how the Grid protocol implementations work in Click.

Userlevel

At userlevel, Grid runs in the Click userlevel process. This process sends packets to and receives packets from the operating system's network stack through the tunnel device, which looks like a file to the Click process (e.g. /dev/tun0) and like a network interface to the OS (e.g. interface tun0). Under some versions of Linux, the tun device is known as tap. The Click process sends packets to and receives packets from the wireless device via BPF (the Berkeley Packet Filter) under BSD-compatible OSes, and via raw sockets under Linux (although the Linux equivalent of BPF could be used instead). The architecture is shown in the diagram below.
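
For concreteness, here is a minimal userlevel configuration in the Click language that simply bridges the tun device and a wireless interface, with no Grid elements at all. The interface name eth1, the address 18.26.7.1/24, and the Ethernet addresses are placeholders chosen for this sketch; in the real Grid configuration the routing elements would sit between these endpoints.

    // Packets the OS routes to tun0 appear on KernelTun's output;
    // packets pushed into KernelTun are handed to the OS stack.
    tun :: KernelTun(18.26.7.1/24);

    // Receive side: Ethernet frames from the radio are stripped down
    // to IP packets before being passed to the kernel via the tun device.
    FromDevice(eth1) -> Strip(14) -> CheckIPHeader -> tun;

    // Send side: IP packets from the kernel are re-encapsulated in
    // Ethernet (placeholder addresses) and queued for transmission.
    tun -> EtherEncap(0x0800, 00:00:c0:ff:ee:01, ff:ff:ff:ff:ff:ff)
        -> Queue(200) -> ToDevice(eth1);

Such a configuration is started with the userlevel click program (run as root so it can open the tun device and the raw socket or BPF descriptor).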

Although running Grid at userlevel has the benefit of being highly portable and easy to debug (fatal protocol bugs don't crash the kernel, and protocol state is easier to inspect), it incurs several kinds of overhead. For example, packets handled by the userlevel Click process pass through buffers or queues in the tun device and in the BPF device. These queues are largely out of Grid's control, which prevents Grid from, for example, giving higher priority to route control packets. The many user-kernel boundary crossings also reduce routing performance.

Because running at userlevel is convenient, we typically test and debug protocol code at userlevel before trying it in the kernel. We can even run "virtual networks" inside a single userlevel Click process by instantiating the Grid protocol elements multiple times and connecting the instances together appropriately. UC Boulder's ns-click takes this a step further by integrating Click with the ns simulator, so the exact same code can be used in simulation and in a deployed protocol.
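
The sketch below illustrates the virtual-network idea with a trivial stand-in for the real Grid elements: each Node compound element just queues and forwards packets, and two instances are wired together inside one userlevel Click process. In an actual virtual-network setup, the body of Node would contain the Grid protocol elements and the connections between nodes would model the radio channel.

    // "Node" is a compound element; in a real virtual network its body
    // would hold the Grid protocol elements rather than a simple queue.
    elementclass Node {
      input -> Queue(20) -> Unqueue -> output;
    }

    n1 :: Node;
    n2 :: Node;

    // Generate a packet every half second and push it through both
    // virtual nodes before discarding it.
    TimedSource(0.5) -> n1 -> n2 -> Discard;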

Linux Kernel Module

At kernel level in Linux, the Click module runs as a separate kernel thread. Click is relatively greedy and will take as much CPU time as you give it, although Click's adaptive scheduler lets you specify what share of the CPU Click may use. Click code sits between the kernel's network stack and the device drivers. The kernel never sees packets from a device that Click is using unless Click explicitly hands them up. Click presents a pseudo-device to the kernel for exchanging packets with the kernel's network stack. This architecture is shown below:
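
A minimal kernel-module configuration along the same lines (again with no Grid elements, and with eth1 and 18.26.7.1/24 as placeholders) might look like the sketch below. FromHost and ToHost are the elements that implement the pseudo-device in current Click distributions, while FromDevice and ToDevice talk to the driver directly.

    // fake0 is the pseudo-device the kernel's network stack sees;
    // packets the kernel routes to 18.26.7.0/24 emerge from FromHost.
    host :: FromHost(fake0, 18.26.7.1/24);

    // Everything arriving on the radio is handed straight to the kernel,
    // as if it had been received on fake0. The kernel never sees eth1
    // directly while Click holds it.
    FromDevice(eth1) -> ToHost(fake0);

    // Packets from the kernel go out over the radio.
    host -> Queue(200) -> ToDevice(eth1);

A configuration like this is loaded with click-install, which inserts the kernel module and installs the configuration.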

Although harder to debug and sometimes resource-hungry, running Click in the kernel gives us the most flexibility. Essentially all packet queuing is controlled by Click, which lets us implement functionality such as priority queuing for protocol packets.
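
As an illustration, here is a hedged sketch of a transmit path that gives protocol packets strict priority over data, reusing the placeholder names from the kernel configuration above. A TimedSource stands in for the Grid protocol elements that would really generate control traffic.

    // Data packets from the kernel arrive via the pseudo-device.
    FromHost(fake0, 18.26.7.1/24) -> dataq :: Queue(200);

    // Placeholder for the Grid protocol elements; in a real configuration
    // their control packets would be pushed into this queue.
    TimedSource(1.0) -> ctlq :: Queue(50);

    // PrioSched always tries input 0 first, so queued control packets
    // are transmitted ahead of any queued data.
    ctlq  -> [0] prio :: PrioSched;
    dataq -> [1] prio;
    prio -> ToDevice(eth1);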


Last modified: 2003/08/18 15:28:47