This document describes how to build and run a CFS server and file
system client. The Chord simulator is documented in the file
simulator/README supplied with the Chord distribution.
Chord runs on any system that supports SFS. This includes Linux, FreeBSD, OpenBSD, and Solaris; Mac OS X and NetBSD may work as well.
Chord is based on the SFS user-level file system toolkit. To build Chord you will need a built SFS tree to link against. You can link against an installed tree (/usr/local/lib/sfs-0.5/ on PDOS machines), but since we track the latest SFS from CVS, it's more convenient to build against an SFS tree separate from the one you use on a daily basis.
Obtain SFS as documented at fs.net. It's fine to use the anonymous SFS repository (it's updated daily). To build SFS you'll need a few tools; judging from the setup transcript below, at least GNU m4 (gm4), autoconf, automake, and gmake.
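A quick way to confirm those tools are on your path before you start (the tool names here are inferred from the setup transcript below, not an official list):

% which gm4 aclocal autoheader automake gmake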
You'll probably want to do an "out of place" build, so you'll take the following steps (the SFS build process is documented in depth at the SFS home page, but we summarize it here). First run ./setup in your source directory:

% cd ~/src/sfs1
% ./setup
+ gm4 libsfs/Makefile.am.m4 > libsfs/Makefile.am
+ gm4 svc/Makefile.am.m4 > svc/Makefile.am
+ uvfs/setup
+ chmod +x setup
+ aclocal
+ autoheader
+ automake --add-missing
....
% mkdir ~/sfs-build
% cd ~/sfs-build
% ~/src/sfs1/configure --with-dmalloc --with-db3
creating cache ./config.cache
checking for a BSD compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking whether make sets ${MAKE}... yes
checking for working aclocal... found
checking for working autoconf... found
checking for working automake... found
checking for working autoheader... found
...
% gmake
SFS takes a long time to build (about 10 minutes on a fast Athlon, a couple of hours on a PPro). If you already have a working SFS installed, you are done. If not, type gmake install as root to install SFS. If you wish to use the CFS NFS-loopback file system you must have SFS installed.
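A minimal sketch of that install step (this assumes you built as an unprivileged user in ~/sfs-build, as above, and can su to root):

% su
# cd ~/sfs-build
# gmake install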
First, make sure you are a part of the chorddev group by editing the /etc/group file on redlab (be sure to push the group file to amsterdam by running /etc/rdist/pushpwd when done adding yourself), as sketched below.
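A sketch of those two steps (this assumes you have sufficient privileges on redlab; the group-file edit is illustrative):

redlab# vi /etc/group          (append your username to the chorddev line)
redlab# /etc/rdist/pushpwd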
Now check out the sources from amsterdam:
% cvs -d /sfs/new-york/pub/shome/am1/sfsnetcvs co -P sfsnet

Be sure you have your SFS agent running or you won't be able to write the lock file. You can also access the repository via SSH. The path to the repository root in that case is different:
% env CVS_RSH=ssh cvs -d user@amsterdam:/disk/am1/sfsnetcvs co -P sfsnet
Questions about repository access can be sent to chorddev@pdos.lcs.mit.edu.
To check out the sources via anonymous CVS run the following commands:
% cvs -d :pserver:cfscvs@chordfs.lcs.mit.edu:/cvs login
(Logging in to cfscvs@chordfs.lcs.mit.edu)
CVS password: (press return)
% cvs -d :pserver:cfscvs@chordfs.lcs.mit.edu:/cvs co -P sfsnet
If you just want to browse the source code and look at revision histories, try the CVSweb interface.
% cd src/sfsnet
% ./setup
+ gm4 svc/Makefile.am.m4 > svc/Makefile.am
+ chmod +x setup
+ aclocal
+ autoheader
+ automake --add-missing
...
% mkdir ~/chord-build
% cd ~/chord-build
% ~/src/sfsnet/configure --with-db3 --with-dmalloc --with-sfs=~/sfs-build/
creating cache ./config.cache
checking for a BSD compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking whether make sets ${MAKE}... yes
checking for working aclocal... found
checking for working autoconf... found
checking for working automake... found
checking for working autoheader... found
checking for working makeinfo... found
checking host system type... i386-unknown-freebsdelf4.3
checking for gcc... gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
checking whether we are using GNU C... yes
% gmake

BTW: if you find that linking is painfully slow over NFS, it is common practice to store build directories locally (e.g. /disk/su0/fdabek/sfsnet-build/).
Chord takes a bevy of confusing options on the command line. The Chord daemon is named lsd. Run it with no options just to make sure the build went right:

% ./lsd
Usage: lsd -d -j hostname:port -p port ...
Start a node by naming a well-known host (e.g. sure.lcs.mit.edu:10000) as the bootstrap node:
% ./lsd -j sure.lcs.mit.edu:10000 -p 10000
chord: running on 18.26.4.29:10000
init_chordID: my address: 18.26.4.29.10000.0
1004828619:637146 myID is caa42d5de473ac83e5be5cd96cdcaa6f7b85da56
lsd: insert: caa42d5de473ac83e5be5cd96cdcaa6f7b85da56
1004828622:099189 stabilize: caa42d5de473ac83e5be5cd96cdcaa6f7b85da56 stable! with estimate # nodes 1
Additional nodes join the system by naming any existing node with the -j parameter. If no port is specified, lsd will choose an unused port:
% ./lsd -j sure.lcs.mit.edu:10000
init_chordID: my address: 18.26.4.29.2496.0
1004829020:385471 myID is b678329a53d1a0a3e54fcd7a46d0d09d097fee34
lsd: insert: b678329a53d1a0a3e54fcd7a46d0d09d097fee34
1004829027:570355 stabilize: b678329a53d1a0a3e54fcd7a46d0d09d097fee34 stable! with estimate # nodes 12
lsd accepts additional command-line options; run it with no arguments (as above) to see the full usage message.
The easiest way to exercise Chord is with the synthetic client "dbm". DBM was used to generate the performance numbers presented in the CFS paper. You'll need to run "gmake dbm" in the devel subdirectory to build it. DBM sports a less-than-advanced command parser: its arguments must be given in order and all are mandatory. To see the usage, just call "./dbm". Here's an example of using dbm:
% ./dbm 0 /tmp/chord-sock 128 8192 - s 0 1

This will store 128 8K blocks into the Chord system and print the results (just the elapsed time in this case) to standard out. Here is a more exciting example:
% ./dbm 1 /tmp/chord-sock 128 8192 - f 10 2

This asks the first virtual node to fetch 128 8K blocks using "2" as the random seed to generate the blocks and to run 11 simultaneous lookups. The results will be printed to standard out. If you actually run this command you will see a series of error messages, since the blocks generated by random seed 2 were never inserted.
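To fetch blocks that actually exist, reuse the seed from the store example above. The following is a sketch inferred from the argument order in the two examples (check ./dbm's usage message for the authoritative order):

% ./dbm 0 /tmp/chord-sock 128 8192 - f 0 1

Since seed 1 was used for the store, this fetch should complete without the missing-block errors.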
To run the "real" client you'll need a "run in place" SFS directory; this allows you to run another copy of the sfscd daemon that is aware of CFS without disturbing your existing SFS setup. The 'in place' directory can be thought of as a chrooted environment for SFS: sfscd will read its config files from the etc directory inside this directory. To get started, untar the provided file, which contains a skeleton runinplace directory. You'll need to modify the "etc/sfscd_config" file to point to your copy of chordcd (the Chord client): edit the line that starts with "Protocol Chord" so that the given path points to /your/build/tree/chordcd/chordcd, as sketched below.
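For example, the edited line might look like the following sketch (the exact sfscd_config syntax beyond the "Protocol Chord" prefix, and the build path, are assumptions based on the ~/chord-build tree used above):

Protocol Chord ~/chord-build/chordcd/chordcd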
To run in place you'll need to set some environment variables. I'd do it like this:
% setenv SFS_RUNINPLACE /disk/su0/fdabek/sfs-fdabek/
% setenv SFS_PORT 11977
% setenv SFS_ROOT /sfsrewt

You should set the SFS_RUNINPLACE variable to point to your runinplace directory. The last two prevent you from interfering with the "real" SFS running on your machine. You may choose any valid port and mount point.
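If you use a Bourne-style shell rather than csh, the equivalent is (same example values as above):

$ export SFS_RUNINPLACE=/disk/su0/fdabek/sfs-fdabek/
$ export SFS_PORT=11977
$ export SFS_ROOT=/sfsrewt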
To insert a file system, first create a key with sfskey, then run sfsrodb to build and insert the database:

% sfskey gen -KP key_file
Creating new key for key_file.
Key Name: fdabek@supervised-residence.lcs.mit.edu
% your/chord-build/dir/sfsrodb/sfsrodb -s key_file -d dir_to_export
sfsrodb: Database good from: Thu Nov 1 16:44:55 2001 until: Fri Nov 2 16:44:55 2001
sfsrodb: exporting file system under IaLCvdNTyE8wX103EAQ2uMqMdYU

When sfsrodb completes it will print out what looks like a random string. This string is the unique identifier for the file system you've just inserted.
Now start sfscd. This time you do want the one in /usr/local/; sfscd must be run as root:
% /usr/local/sbin/sfscd -d
sfscd: version 0.5, pid 8178
chordcd: chordcd version 0.1 running under PID 8179
sfscd: not dropping privileges for debugging
nfsmounter: version 0.5, pid 8180
nfsmounter: mounted /sfsrewt
nfsmounter: mounted /sfsrewt/.mnt/wait
If you then run

% ls /sfsrewt/chord:[magic string]

you should see a listing of the directory you inserted with sfsrodb, after you see a bunch of debugging garbage.
For more information on the Chord simulator, see simulator/README.