Design and Implementation of the Sun Network File System
Sandberg, Goldberg, Kleiman, Walsh, Lyon
Usenix 1985

This paper is neat because:
  NFS was very successful, and is still in widespread use today
    (we're using it on the class machines).
  Much research uses it.
  Can view much network file system research as fixing problems with NFS.

outline:
  big picture: client, kernels, RPC, LAN, server, disk
  trace the RPCs generated by e.g. open(), write()
  what's a file handle? statelessness
  performance (caching, write-behind)
  crash recovery
  transparency
  cache coherence

What was Sun selling before NFS?
  UNIX workstations with a window system.
  Often in clusters (i.e. a whole department).
  Diskless to reduce costs: one big server disk vs many small disks.
  "ND" network disk protocol.

What was wrong with the ND approach?
  Cannot share files among workstations.

What's the software/hardware structure?
  Client machine: app, syscalls, vnode layer, nfs/rpc, LAN, server, disk.

What RPCs would be generated by e.g.
  fd = creat("d/f", 0666);
  write(fd, "foo", 3);
  close(fd);
  (see the RPC trace sketch at the end of these notes)

What's in a file handle?
  i-number
  generation number
  (see the file handle sketch at the end of these notes)
Why not embed the file name in the file handle?
How does the client know what file handle to send?

What prevents random people from sending NFS messages to my NFS server?
Or from forging NFS replies to my client?

What if the server crashes just after the client sends it an RPC?
What if the server crashes just after replying to a WRITE RPC?

So what has to happen on the server during a WRITE,
  i.e. what does it do before it replies to the RPC?
  Data safe on disk.
  I-node with new block # and new length safe on disk.
  Indirect block safe on disk.
  Three writes, three seeks: about 45 milliseconds per WRITE,
    so about 22 WRITEs per second, and at 8 KB per WRITE about 180 KB/sec.
  (see the arithmetic sketch at the end of these notes)

How could we do better than 180 KB/sec?
  Write the whole file sequentially at a few MB/sec.
  Then update the i-node &c at the end.
  Why doesn't NFS do this?

Why do the file handles stored in the client make sense across server reboots?
  File handle == disk address of the i-node.

What performance fixes did they apply?
  Read-caching of data. Why does this help? (re-reading files)
  Write-caching of data. Why does this help?
  Caching of file attributes. Why does this help? (ls -l)
  Caching of name->fh mappings. Why does this help? (cache the /home/rtm prefix)

How do they make sure my writes are visible to future readers?
  I flush my write-cache on close().

What consistency semantics does this have?
  For example, if I write() and close(), and you open() and read(),
    do you see my modifications?
  Not if you previously read the file and are caching its attributes.
  How can we fix this? GETATTR on each open()?
  (see the open-time revalidation sketch at the end of these notes)

Examples where we might care about consistency?
  Two windows open on different clients: emacs -> make,
    or make -> run the program.
  Or a distributed app (cvs?) with its own locks.

Examples where we might not care about consistency?
  I just use one client workstation.
  Different users don't interact / share files.

Transparency -- i.e. does NFS provide local UNIX file system semantics?
  Out of space: write() error vs close() error.
  Delete a file while it is in use. Why would you do this?
    If unlink() on the same client, move it to .nfsXXX.
    If unlink() on some other client?
  chmod -r while the file is open(). Can the owner always read?
  Execute-only permission (implies read).
  Re-send of rename() if a packet is lost:
    the 2nd send may fail even though the operation succeeded.

Would it be reasonable to use NFS in Athena?
  Security -- untrusted users with root on workstations.
  Scalability -- how many clients can a server support?
    Writes &c always go through to the server,
    even for private files that will soon be deleted.
  Can you run it on a large, complex network?
    How is it affected by latency? Packet loss? Bottlenecks?
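
Sketch: the creat/write/close trace (referenced above). This is my
illustration, not the paper's: a small C program repeating the syscall
sequence, with comments giving one plausible NFS RPC sequence. The exact
RPCs depend on what the client already has cached (e.g. the file handle
for directory "d").

  /* Syscall sequence from the trace question, with one plausible
     NFS RPC sequence in the comments.  Exact RPCs depend on the
     client's cache state. */
  #include <fcntl.h>
  #include <unistd.h>

  int main(void) {
      int fd = creat("d/f", 0666);  /* LOOKUP(dirfh, "d") if the fh for d
                                       isn't cached, then CREATE(fh_d, "f"),
                                       which returns the fh for the new f */
      if (fd < 0)
          return 1;
      write(fd, "foo", 3);          /* buffered in the client's write cache;
                                       may generate no RPC yet */
      close(fd);                    /* flush-on-close: WRITE(fh_f, 0, "foo");
                                       the server must get data + i-node
                                       safely on disk before it replies */
      return 0;
  }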
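
Sketch: what might be inside a file handle (referenced above). The handle
is an opaque blob to the client; the fields and sizes below are my
assumptions matching the i-number / generation-number discussion, not
Sun's actual layout.

  #include <stdint.h>

  /* Illustrative only: the client never interprets these bytes. */
  struct nfs_fh_sketch {
      uint32_t fsid;        /* which exported file system on the server */
      uint32_t inum;        /* i-number: disk address of the i-node */
      uint32_t generation;  /* bumped when the i-node is reused, so a stale
                               handle to a deleted file fails instead of
                               silently reaching a new, unrelated file */
      uint8_t  opaque[20];  /* remainder of the fixed-size opaque handle */
  };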
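
Sketch: the 22 writes/sec and 180 KB/sec arithmetic (referenced above).
The 15 ms per seek and 8 KB of data per WRITE are my assumptions about
typical numbers of the era, chosen to reproduce the figures in the notes.

  #include <stdio.h>

  int main(void) {
      double seek_ms   = 15.0;               /* assumed cost of one disk seek */
      int    seeks     = 3;                  /* data block + i-node + indirect */
      double per_write = seeks * seek_ms;    /* ~45 ms per synchronous WRITE */
      double writes_ps = 1000.0 / per_write; /* ~22 WRITE RPCs per second */
      double kb_per_s  = writes_ps * 8.0;    /* assumed 8 KB of data per WRITE */
      printf("%.0f ms/WRITE, %.1f WRITEs/sec, ~%.0f KB/sec\n",
             per_write, writes_ps, kb_per_s);
      return 0;
  }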
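
Sketch: the "GETATTR on each open()" fix (referenced above). A rough
client-side sketch of open-time revalidation; nfs_getattr(),
invalidate_cached_data(), and the cache structure are hypothetical names
I made up for illustration, not kernel or protocol APIs.

  #include <stdbool.h>
  #include <time.h>

  struct cached_file {
      unsigned char fh[32];   /* opaque NFS file handle */
      time_t cached_mtime;    /* server mtime when data was cached */
      bool   have_data;       /* do we hold cached file blocks? */
  };

  /* Hypothetical stand-ins for the client's RPC and cache layers. */
  static time_t nfs_getattr(const unsigned char fh[32]) {
      (void)fh;
      return 0;               /* stub: would issue a GETATTR RPC */
  }
  static void invalidate_cached_data(struct cached_file *f) {
      f->have_data = false;   /* stub: would drop cached blocks */
  }

  /* On every open(), ask the server for current attributes instead of
     trusting the attribute cache, and throw away cached data if the
     file changed behind our back. */
  void revalidate_on_open(struct cached_file *f) {
      time_t server_mtime = nfs_getattr(f->fh);
      if (f->have_data && server_mtime != f->cached_mtime)
          invalidate_cached_data(f);
      f->cached_mtime = server_mtime;
  }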