% ls /sfs/owner:/smith
bio.txt  paper.tex  prop.tex

In this example the user has requested all documents owned by user smith. This is done by listing the virtual directory owner:/smith. The system creates this directory "on the fly" and populates it with the files that match the search criterion (i.e., are owned by smith). In this lab you will implement a scaled-down version of the semantic file system using an SFS user-level server similar to the toolkit described in the Mazieres paper you read for class. In this document "SFS" will refer to the self-certifying file system; we will refer to the semantic file system by its full name. The final product of this lab will be a user-level file server that enables semantic file system-like access to a local directory. A working daemon might function as follows:
blood% ./sfsusrv -s -p 0 -f ./cfg
sfsusrv: version 0.5, pid 82575
sfsusrv: No sfssd detected, running in standalone mode.
sfsusrv: Now exporting directory:/tmp/export-fdabek
sfsusrv: serving /firstname.lastname@example.org:ueu2nc7xix8uh63he852ymv8tmzaujix
blood% ls /tmp/export-fdabek
test.c  test.o  a.out
sweat% cd /firstname.lastname@example.org:ueu2nc7xix8uh63he852ymv8tmzaujix
sweat% ls
test.c  test.o  a.out
sweat% cd :extension=c
sweat% ls
test.c
sweat% cat test.c
...

In this example, I have listed the files stored in /tmp/export-fdabek (a directory on blood's local disk) with a .c extension by accessing a specially named semantic directory (:extension=c) under the SFS mount point for that file system on a remote machine. Note that the ls is done from sweat: because of a risk of deadlock, it is not possible to access files on the local machine via SFS.
Your job will be to extend sfsusrv to answer semantic file system queries as well as ordinary file accesses.
% cd
% tar xzvf /home/to2/labs/sfsusrv.tar.gz
% cd sfsusrv
% gmake
% sfskey gen -KP sfs_host_key
Creating new key for sfs_host_key.
Key Name: firstname.lastname@example.org
Press return

You'll want to run sfsusrv with these arguments:
export /tmp/export-fdabek keyfile sfs_host_key
blood% mkdir /tmp/export-$USER
blood% echo hello > /tmp/export-$USER/test.file
blood% echo export /tmp/export-$USER > cfg
blood% echo keyfile sfs_host_key >> cfg
blood% ./sfsusrv -s -p 0 -f ./cfg
sfsusrv: version 0.5, pid 82575
sfsusrv: No sfssd detected, running in standalone mode.
sfsusrv: Now exporting directory:/tmp/export-fdabek
sfsusrv: serving /firstname.lastname@example.org:ueu2nc7xix8uh63he852ymv8tmzaujix

The last line that sfsusrv prints is the path name under which your exported directory (/tmp/export-$USER on blood) will appear on SFS client machines. Your path will be different from the one printed above. The 2102 is the port number that sfsusrv is listening on. You can log into sweat and look at your exported files (but change the /sfs/... pathname to the one that your sfsusrv printed):
sweat% ls /firstname.lastname@example.org:ueu2nc7xix8uh63he852ymv8tmzaujix/
test.file

You can learn more about how SFS works, and about why /sfs/... pathnames look the way they do, by reading the SFS paper.
blood% env ASRV_TRACE=10 ./sfsusrv -s -p 0 -f ./cfg
Now, on sweat, browse the exported file system. You'll have to use the /sfs/... pathname printed out by the current instance of sfsusrv, since the port number will change each time. As you browse, the server on blood will print out a complete trace of all NFS requests it receives. (Large structures may be truncated; if this is ever a problem, try higher values than 10.)
Now capture the output in a file:
blood% env ASRV_TRACE=10 ./sfsusrv -s -p 0 -f ./cfg |& tee nfs.trace

If you're using a Bourne-like shell, try this instead:
blood% env ASRV_TRACE=10 ./sfsusrv -s -p 0 -f ./cfg 2>&1 | tee nfs.trace
After setting up sfsusrv to trace NFS traffic, run the following commands (substituting the correct self-certifying pathname):
sweat% cd /sfs/...
sweat% rm junk
rm: junk: No such file or directory
sweat% echo hello > junk
sweat% cat junk
hello
sweat% cat junk
hello

Now stop sfsusrv and look at the RPCs in the nfs.trace file. Answering the following questions will help you understand how NFS and sfsusrv interact.
You are not required to implement the semantic file system as described in Gifford's paper. You only have to serve queries of the following form:
:extension=ext

When you see a reference to a file whose name looks like this, you should produce a virtual directory containing all files in the original directory whose names end with ext. Note that because your program is an NFS server, unlike the implementation described in Gifford's paper, you do not need to create symbolic links to the "real" files.
All of the files that are eligible to satisfy queries will be located in the current working directory; you are not responsible for indexing an entire file system. For example,
ls /sfs/.../foo-dir/:extension=.c/

should list all of the files that end in .c in directory foo-dir.
You should use sfsusrv as a starting point for implementing your daemon. You'll "override" the way sfsusrv handles a subset of the NFS3 RPC calls to provide for the semantic queries. You'll need to modify the handling of at least the following RPCs:
Each RPC is implemented in a method named "nfs3_RPCNAME" in the file client.C. client.C contains the code that handles NFS calls; filesrv.C contains some utility functions and a cache of file descriptors associated with recently accessed files.
You may also find that you need to keep some additional state about the status of a query. The fh_entry structure defined in filesrv.h is a good place to add this additional state.
Your server is free to access the local file system via standard POSIX system calls (read, write, readdir, etc). Because your program does disk I/O, you must not access it via SFS on the local machine, due to the danger of deadlock.
It is important to remember when designing your server that NFS filehandles are opaque data structures. You are free to form the filehandles you return in any way you see fit, as long as they are 64 bytes or shorter.
We have provided a test script in /home/to2/labs/. The script takes a single argument: the self-certifying pathname of your server. Since the tester accesses files via SFS, it must be run on a machine other than the one your server is running on. For example, if you ran your server on blood, you might run the following on sweat:
sweat% /home/to2/labs/sfs-test.pl /firstname.lastname@example.org:gej8jaf3ky53jipevweaham84iw4xpr2/
Setting up test files firstname.lastname@example.org:gej8jaf3ky53jipevweaham84iw4xpr2/
Testing basic sfsusrv behavior...
Testing ls...passed
Testing write/read...passed
Testing semantic behavior...
Testing ls...passed
Testing write/read...passed
Testing create...passed
Testing remove...passed
Testing rename...passed
Testing semantic behavior in a subdirectory...
Testing ls...passed
Testing write/read...passed
Testing create...passed
Testing remove...passed
Testing rename...passed

The tests are in three phases:
Hand in a gzipped tar file ~/handin/lab4/sfs.tar.gz that contains source files and a makefile. Please do not include object files or binaries. To make the tarball, run the following commands:
% cd
% cd sfsusrv
% gmake clean
% cd
% mkdir ~/handin/lab4
% tar czvf ~/handin/lab4/sfs.tar.gz sfsusrv

The lab is due before class on Tuesday, October 16.
Q: What is an NFS filehandle?
A: An NFSv3 filehandle is an opaque data structure of at most 64 bytes used to identify a 'file'. The fact that the filehandle is "opaque" means that the server generates the filehandle with whatever internal structure it wishes, and the client never interprets that structure. The client may compare the filehandle to others or return it to the server, but it has no need to understand how it was constructed.
Q: What can/should I put in an NFS filehandle?
A: Anything you like, as long as you follow a few guidelines:
- Distinct files/directories should have distinct filehandles. You might ask what a 'file' means in the context of an assignment that asks you to create the appearance of virtual directories. To play it safe, make sure that any two files with different inodes have different filehandles, and that any two queries that are for a different extension or reference a different directory have different filehandles. This rule means, for instance, that you shouldn't assign a query directory the same filehandle as the file system directory it references (i.e., /sfs/.../dir/:extension=foo/ should not have the same filehandle as /sfs/.../dir/). It also means you shouldn't, for instance, return zero for the filehandle of every file.
- Filehandles should be consistent. The same file (subject to the definition above) should always have the same filehandle. Don't make filehandles unique by incrementing a counter or repeatedly using a random number generator.
Q: Ok, but how do I actually get my filehandle into the NFS reply?
A: The nfs_fh3 structure defines the representation of a filehandle. It supports two important operations (for this description, assume we have declared nfs_fh3 *fh):