Operating System Organization
the topic is overall o/s design
lots of ways to structure an o/s -- how to decide?
looking for principles and approaches
what does an o/s *have* to do?
e.g. for desktop or server use
let apps use machine resources
multiplex resources among apps
prevent starvation
isolate / protect
allow cooperation / interaction
we'll talk about a few approaches, but others exist, e.g. the Java VM
what's the traditional approach? (Linux, OSX, xv6)
virtualize some resources: cpu and memory
give each app its own virtual cpu and memory system
why? it's a simple model for app programmers
abstract others: storage, network, IPC
layer a sharable abstraction over h/w (file system, IP/TCP)
example: virtualize the cpu
goal: simulate a dedicated cpu for each process
so process doesn't have to worry about sharing
o/s runs different processes in turn, via clock interrupt
clock means process doesn't need to do anything special to switch
also prevents hogging
how to achieve transparency?
o/s saves state, then restores
what does o/s save?
the eight general-purpose registers, EIP, seg regs, page table base ptr (CR3)
where does o/s save it?
o/s keeps per-process table of saved states
the return from clock interrupt restores a *different* process's state
the point: process doesn't have to worry about multiplexing!
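a rough C sketch of the saved state and the switch (names are made up; xv6's real code differs):

    // hypothetical per-process saved state; a real trapframe has more fields
    typedef unsigned int uint;
    typedef unsigned short ushort;
    struct cpu_state {
        uint eax, ebx, ecx, edx, esi, edi, ebp, esp; // the eight general registers
        uint eip;                                    // where to resume execution
        ushort cs, ds, es, ss;                       // segment registers
        uint cr3;                                    // page table base pointer
    };
    struct cpu_state saved[NPROC];   // per-process table of saved states

    void clock_interrupt(int cur) {
        save_state(&saved[cur]);     // stash the interrupted process's state
        int next = pick_next(cur);   // some scheduling policy
        restore_state(&saved[next]); // "return" resumes a *different* process
    }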
example: virtualize memory
idea: simulate a complete memory system for each process
so process has complete freedom in how it uses that memory
doesn't have to worry about other processes
so addresses 0..2^32-1 all work, but refer to private memory
convenient: all programs can start at zero
and memory looks contiguous, good for large arrays &c
safe: can't even *name* another process's memory
how can we do this? after all, the processes do in fact share the RAM
how to create address spaces?
could have only one process at a time in physical memory
would spend lots of time swapping in and out to disk
made sense 40 years ago w/ small memory machines
could use x86 segments
put each process in a different range of physical memory
CS, DS, &c point to current process's base
looks good: addresses start at zero, contiguous, isolated
this is how early x86 systems and the original Unix worked
need to prevent process from modifying seg regs
but allow kernel to modify them
the 386 has the hardware we need
h/w "privilege level" (CPL): 0 in the kernel, 3 in apps
and ways to jump back and forth (syscalls, interrupts, return)
but: external fragmentation, a process's whole memory must be resident, can't have virtual mem > physical
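a sketch of what the segment h/w computes on each memory reference (base+limit only; real x86 descriptors are fancier):

    // hypothetical base+limit translation, one segment per process
    unsigned int seg_translate(unsigned int base, unsigned int limit, unsigned int va) {
        if (va >= limit)
            fault();          // can't reach outside the process's own range
        return base + va;     // every process's address 0 maps to its own base
    }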
could use x86 paging hardware
MMU array w/ entry for each 4k range of "virtual" address space
refers to phy address for that "page"
this is the page table
now we don't have a fragmentation problem
o/s tells h/w to switch page table when switching process
level of indirection allows o/s to play other tricks
process too big? write pages to disk, leave PTEs blank
on-demand page-in from disk via faults on blank PTEs
this works because apps use only a fraction of mem at a given time
need "present" flag, page faults, and re-start
sharing and copy-on-write for faster fork() (+ exec())
so need write-protect flag
all of this done transparently to application
still thinks it has simple dedicated memory from 0..2^32-1
not aware of virtual vs phys
paging h/w has turned out to be one of the most fruitful ideas in o/s
you'll be using it a lot in labs, to perform above tricks
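a sketch of the demand page-in trick (helper names are hypothetical; flag names follow xv6):

    // hypothetical page-fault handler for on-demand page-in
    void page_fault(uint va) {
        pte_t *pte = walk(cur_pgdir, va);  // find the PTE for the faulting address
        if (!(*pte & PTE_P)) {             // "present" flag clear: page is on disk
            char *pa = alloc_phys_page();
            disk_read(pa, swap_slot(va));  // bring the page back in
            *pte = V2P(pa) | PTE_P | PTE_W | PTE_U;  // fill in the blank PTE
            return;                        // h/w re-starts the faulting instruction
        }
        // a write fault on a write-protected shared page would be handled
        // similarly for copy-on-write: copy the page, set PTE_W, re-start
    }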
o/s organization
step back, what does a traditional o/s look like?
monolithic o/s
h/w, kernel, user
kernel is a big program: process ctl, vm, fs, network
all of kernel runs w/ full hardware privilege (very convenient)
good: easy for sub-systems to cooperate (e.g. paging and file system)
bad: complex, easy to introduce bugs, no isolation within o/s
ideology: convenience (for app or o/s programmer)
for any problem, either hide it from app, or add a new system call
(we need ideology because there is not much science here)
very successful approach
alternate organization: microkernel
ideology: IPC and user-space servers
for any problem, make a new server, talk to it w/ RPC
h/w, kernel, server processes, apps
servers: VM, FS, TCP/IP, Print, Display
split up kernel sub-systems into server processes
some servers have privileged access to some h/w (e.g. FS and disks)
apps talk to them via IPC / RPC
kernel's main job: fast IPC
good: simple/efficient kernel, sub-systems isolated, enforces better modularity
bad: cross-sub-system optimization harder, lots of IPCs may be slow
in the end, lots of good individual ideas, but overall plan didn't catch on
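for concreteness, a sketch of read() as an RPC to the FS server (message format and ipc_* calls are invented):

    // hypothetical app-side read() in a microkernel's C library
    struct msg { int op, fd, n, ret; char data[512]; };
    enum { MSG_READ = 1 };

    int read(int fd, void *buf, int n) {
        struct msg m = { MSG_READ, fd, n, 0 };
        ipc_send(FS_SERVER, &m);    // crossing into the kernel, then to the server
        ipc_recv(FS_SERVER, &m);    // block until the FS server replies
        memcpy(buf, m.data, m.ret); // copy the data out of the reply message
        return m.ret;               // bytes read, or -1 on error
    }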
alternate organization: exokernel
ideology: eliminate all abstractions
for any problem, expose h/w or info to app, let app do what it wants
h/w, kernel, environments, libOS, app
an exokernel would not provide address space, virtual cpu, file system, TCP
instead, give raw pages, page mappings, clock interrupts, disk i/o, net i/o
directly to app!
let app build nice address space if it wants, or not
should give aggressive apps much more flexibility
challenges:
how to multiplex cpu/mem/&c if you expose directly to apps?
how to prevent apps from hogging cpu/mem?
how to get security/isolation despite apps having low-level control?
how to multiplex resources whose structure the kernel doesn't understand: disk (file system layout), incoming tcp pkts
exokernel example: memory
what are the resources? (phys pages, mappings)
what does an app need to ask the kernel to do?
pa = AllocPage()
DeallocPage(pa)
TLBwr(va, pa)
TLBvadelete(va)
and these kernel->app upcalls:
PageFault(va)
PleaseReleaseAPage()
what does o/s need to do to make multiplexing work?
ensure app only creates mappings to phys pages it owns
track what env owns what phys pages
decide which app to ask to give up a phys page when system runs out
that app gets to decide which of its pages
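a sketch of a libOS using these calls to build an address space lazily (helper names are made up; 4K pages assumed):

    // hypothetical libOS code: back va with a fresh phys page we own
    void map_one_page(uint va) {
        uint pa = AllocPage();  // kernel records that this env owns pa
        TLBwr(va, pa);          // kernel checks ownership before installing va->pa
    }

    // the app's PageFault upcall: grow the address space one page at a time
    void PageFault(uint va) {
        map_one_page(va & ~0xFFF);  // round down to a 4K page boundary
    }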
example cool thing you could do w/ exokernel-style memory
databases like to keep a cache of disk pages in memory
problem on traditional o/s:
if DB caches some disk data, and o/s needs phys page,
o/s may transparently write to disk a DB page holding a disk block
but that's a waste of time: if DB knew, it could release phys
page w/o writing, and later read it back from DB file (not paging area)
1. exokernel needs phys mem for some other app
2. exokernel sends DB a "please free a phys page" upcall
3. DB picks a clean page, calls TLBvadelete(va), DeallocPage(pa)
4. OR DB picks dirty page, writes to disk, then 3.
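the DB's side of steps 2-4, sketched (cache bookkeeping helpers are made up):

    // hypothetical handler for the kernel's PleaseReleaseAPage() upcall
    void PleaseReleaseAPage(void) {
        struct dbpage *p = find_clean_page();  // prefer a page we needn't write
        if (p == 0) {
            p = find_dirty_page();
            write_to_db_file(p);   // write to the DB file, not the paging area
        }
        TLBvadelete(p->va);        // drop the mapping...
        DeallocPage(p->pa);        // ...and hand the phys page back to the kernel
    }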
exokernel example: cpu
what does it mean to expose cpu to app?
kernel tells app when it is taking away cpu
kernel tells app when it gives cpu to app
so if app is running and timer interrupt causes end of slice
cpu jumps from app into kernel
kernel jumps back into app at "please yield" upcall
app saves state (registers, EIP, &c)
app calls Yield()
when kernel decides to resume app
kernel jumps into app at "resume" upcall
app restores those saved registers and EIP
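the app side of this protocol, sketched (save/restore would really be assembly; names invented):

    // hypothetical upcall entry points for exokernel-style cpu management
    struct regs saved;              // app's own save area, in app memory

    void please_yield(void) {
        save_registers(&saved);     // app saves registers, EIP, &c itself
        Yield();                    // give up the cpu; does not return here
    }

    void resume(void) {
        restore_registers(&saved);  // jumps back to the saved EIP
    }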
what cool things could an app do w/ exokernel-style cpu management?
suppose time slice ends in the middle of
acquire(lock);
atomic operations...
release(lock);
you don't want the app to be holding the lock the whole time!
then maybe other apps can't make forward progress
so the "please yield" upcall can first complete or back out of atomic operations
fast RPC with direct cpu management
how does traditional o/s let apps communicate?
pipes (or sockets)
picture: buffer in kernel, lots of copying and system calls
RPC probably takes 8 kernel/user crossings (read()s and write()s)
how does exokernel help?
Yield() can take a target process argument
almost a direct jump to an instruction in target process
kernel allows only entries at approved locations in target
kernel leaves regs alone, so can contain arguments
(in contrast to traditional restore of target's registers)
target app uses Yield() to return
so only 4 crossings
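a sketch of both sides (really assembly, since arguments live in registers; names invented):

    // hypothetical caller: crossings 1 and 2 (user -> kernel -> server)
    void rpc_call(int server) {
        // arguments are already in registers; kernel will leave them alone
        Yield(server);          // near-direct jump to server's approved entry
        // when the server Yield()s back (crossings 3 and 4), we resume here
    }

    // hypothetical server side: an entry point the kernel has approved
    void rpc_entry(int caller) {
        handle_request();       // args arrived untouched in registers
        Yield(caller);          // directed yield back: 4 crossings total
    }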