(this posting is a bit long; but might be useful to people who want a detailed example of chroot-ing a web tree)
Earlier this year I chroot-ed our web tree, and I'm REALLY glad I did. Our web site fulfills many functions, and grows like mad. Various people contribute to the tree, and they will try almost _anything_, even people who you thought knew little about unix...
Why do this? Well, it suffices to read comp.security.unix or comp.infosystems.www.authoring.cgi to understand why you should be aware of possible security pitfalls in serving a web tree. So why not take extra precautions to protect your server? chroot-ing an application definitely limits the part of the filesystem that the application can roam through. It will NOT solve all problems, but at least it will contain things. Holy smokes! There is so much Internet-mania right now, and there are so many uninformed people jumping on the bandwagon... So if you are a system administrator then you should (try to) stay one step ahead of them all...
There is an extra benefit in chroot-ing a web tree: we can move our web tree anywhere, anytime, if a disk dies (especially if you have a 'spare' host that can suddenly 'assume' your web-host's identity when your boot volume dies). This might be important if you cannot live without your tree. Don't laugh -- if all of your colleagues' documentation lives there, then, well, you can't live without it. Sometimes documentation really IS important.
Before you start you have to decide if this is a do-able task. If your entire tree can live on one file system, then this may be for you. But if links and cgi-scripts reach out across filesystems and nodes and people's home directories (in this 'automounted' or 'afs-ed' world), then this probably isn't your cup of tea. You have to know your web tree really well first. In particular, take a close look at your cgi-scripts and all scripts and utilities called by your cgi-scripts before you start.
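A crude first pass at that survey can be done with egrep; the pathname patterns below are only examples, and /old_web_tree stands for wherever your current tree and cgi directory actually live:

    # absolute pathnames referenced by the cgi-scripts; anything that
    # will live outside the chroot-ed tree is a candidate problem
    egrep -n '/home/|/afs/|/usr/local/' /old_web_tree/cgi-bin/* | more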
We use the CERN http daemon, and our web site is served by an HP running 9.05 HP-UX and NIS (but it is not an NIS server). This information is necessarily HP-specific, but it should generalize. It took me a couple of afternoons of work to produce a working web tree.
So these are the steps I followed to chroot a LIVE web tree. It wasn't as painful as I thought it would be, but it requires a bit of work if you want to provide a high level of functionality.
In the following steps I have assumed:
    the web tree owner is:   www
    living in group:         webgroup
I have also assumed that the new web root is at:   /wtree
    chown -R www:webgroup /wtree
    chmod -R 755 /wtree        (or 775 if 'webgroup' needs write permission)
You might also choose to create some kind of a 'home' directory structure (see http://hoohoo.ncsa.uiuc.edu/docs/tutorials/chroot.html).
**From this point you work as user 'www'
If you decide to put shared libraries in the tree, then you have to figure out which ones you need. This might not be easy! In any case, you should copy a useful set of utilities to your /wtree/bin directory, and copy any necessary libraries to /wtree/lib or /wtree/usr/lib.
Note: the 'useful set of utilities' is only necessary if you use cgi-scripts in your web tree; which ones you need will depend on which utilities your cgi-scripts reference.
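As a sketch, the skeleton of the chroot-ed tree can be created as shown below; and if you do go the shared-library route, asking each binary which libraries it wants saves a lot of guessing ('ldd' where your system has it; on HP-UX, running 'chatr' on the binary should show its shared library list):

    # create the skeleton of the chroot-ed tree
    mkdir -p /wtree/bin /wtree/lib /wtree/etc /wtree/usr/lib

    # for each dynamically-linked utility you plan to copy in, list
    # the shared libraries it needs (example binaries only):
    for f in /bin/date /bin/file ; do
      echo "== $f" ; ldd $f        # or:  chatr $f   on HP-UX
    done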
If you do as I did and opt for statically-linked versions, then the easiest thing is to get a bunch of GNU file utilities and compile them statically, so that you don't need shared libraries at all. These utilities are available in the following GNU packages: bash, binutils, diffutils, find, gawk, grep, sed, textutils.
Only install what you will use; for example, don't install 'df' unless you want to try to provide it with the 'mount table'. This is an example set of GNU utilities:
    bash    cat     cksum    comm    cp      csplit  cut     du
    expand  find    fmt      fold    gawk    grep    head    join
    ln      locate  ls       mkdir   mv      nl      od      paste
    pr      rm      rmdir    sort    split   sum     tac     tail
    touch   tr      unexpand uniq    wc      xargs
Copy all of these files into /wtree/bin.
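A minimal sketch of that copy step, assuming the statically-linked binaries were all gathered in a hypothetical /tmp/gnubuild/bin directory:

    cd /tmp/gnubuild/bin       # wherever you built the static binaries
    cp bash cat cksum comm cp csplit cut du expand find fmt fold gawk \
       grep head join ln locate ls mkdir mv nl od paste pr rm rmdir \
       sort split sum tac tail touch tr unexpand uniq wc xargs  /wtree/bin/
    chmod 555 /wtree/bin/*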
I also compiled a statically-linked version of perl (version 5). This took a few iterations, mostly because I dislike the 'Configure' script. So I installed perl in /wtree/bin/ and the Perl libraries in /wtree/lib/perl5/.
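A hedged sketch of that installation step (the build-directory name is purely an example, and the layout of your own static perl build may differ):

    cd /tmp/perl5-build         # hypothetical perl build directory
    cp perl /wtree/bin/perl
    mkdir -p /wtree/lib/perl5
    cp -r lib/* /wtree/lib/perl5/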
In addition, 'date' and 'file' are useful. So I copied the HP versions of them, along with the shared library and dynamic loader that they need. Thus on an HP system you need to copy /lib/libc.sl and /lib/dld.sl into /wtree/lib/. For 'file' you also need the 'magic' file, which you should put in /wtree/etc.
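In commands, that amounts to something like this (HP-UX 9 paths; adjust if 'date' and 'file' live in /usr/bin on your system, and note that the magic file traditionally lives in /etc/magic):

    cp /bin/date /bin/file    /wtree/bin/
    cp /lib/libc.sl /lib/dld.sl    /wtree/lib/
    cp /etc/magic    /wtree/etc/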
It is also useful to create a symbolic link from bash to 'sh' and from gawk to 'awk' in /wtree/bin. Note: pretending that bash is 'sh' is quite functional; however on HP-UX the 'system()' C-function wants /bin/posix/sh. Trying to fool it with a link to bash won't work (I was compiling 'glimpse' for our web tree, and it uses lots of inane system() calls. So I was forced to copy /bin/posix/sh into /wtree/bin/posix/)
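The corresponding commands might be:

    cd /wtree/bin
    ln -s bash sh              # most scripts only ask for 'sh'
    ln -s gawk awk
    mkdir -p /wtree/bin/posix
    cp /bin/posix/sh /wtree/bin/posix/sh   # only if something insists on it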
PLEASE NOTE: place COPIES of files in the web tree, do not use hard links! Otherwise, why are you bothering to chroot the tree? Anyway, the web tree should be able to live on any disk... hard links can't!
Copy a few of the system's networking files into /wtree/etc so that hostname lookups work from inside the tree:
    hosts
    resolv.conf      ## the DNS resolver file
and maybe:
    nsswitch.conf    ## Name Service fall-back file; useful with NIS
Create the other directories that the server expects:
    /wtree/icons
    /wtree/sounds
    /wtree/images
    /wtree/log
(or just copy these from your existing web tree). And of course create a directory for your cgi-bin tree, using whatever name you have specified in the http configuration file. Copy your prepared configuration file 'httpd.conf' into /wtree/etc/ (or whatever sub-directory you have designated for this purpose). Also prepare and copy any other httpd files that you will need; for example, 'passwd', 'group', 'protection' (and copy an appropriate .www_acl file into these directories as well).
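Put together, that directory and configuration setup might look something like this; the source pathnames and the cgi-bin directory name are only placeholders for whatever your httpd.conf actually specifies:

    mkdir -p /wtree/icons /wtree/sounds /wtree/images /wtree/log /wtree/cgi-bin
    cp /your/prepared/httpd.conf    /wtree/etc/
    cp /your/prepared/passwd /your/prepared/group /your/prepared/protection  /wtree/etc/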
To start the server at boot time I launch it from /etc/inittab through a small wrapper program (installed here as /usr/local/bin/httpd) which chroots to the web tree, switches to the 'www' user and group, and then execs the real httpd binary from inside the tree -- so the httpd binary itself also gets copied into /wtree/bin. The inittab entry looks something like this:
    blah:run_level:once:/usr/local/bin/httpd /wtree >>/tmp/httpd.log 2>&1
An example wrapper follows. The 'uMsg()' calls are just home-brewed function calls that output error messages. Substitute your own error messages:
/** wrapper BEGINS **/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "uUtil.h"    /* for uMsg() */

int main( int argc, char *argv[] )
{
  uid_t  uid  = your_web_user_uid_here;
  gid_t  gid  = your_web_user_gid_here;
  int    ierr = 1;
  char  *p;

  if( argc != 2 ) {
    fprintf( stderr, "USAGE: %s WEB_ROOT\n", argv[0] );
    fprintf( stderr, "WHERE: WEB_ROOT - is the root of the web tree\n" );
  } else {
    p = argv[1];
    if( chdir(p) ) {
      uMsg( U_FATAL, "chdir to %s failed: %S", p );
    } else if( chroot(p) ) {
      uMsg( U_FATAL, "chroot to %s failed: %S", p );
    } else if( setgid(gid) != 0 ) {    /* set the group first ...        */
      uMsg( U_FATAL, "setgid failed: %S" );
    } else if( setuid(uid) != 0 ) {    /* ... then give up root for good */
      uMsg( U_FATAL, "setuid failed: %S" );
    } else {
      /* the real httpd binary lives inside the tree, i.e. /wtree/bin/httpd */
      execl( "/bin/httpd", "httpd", (char *)0 );
      uMsg( U_FATAL, "execl failed for httpd: %S" );
    }
  }
  exit( ierr );
}
/** wrapper ENDS **/
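Compiling and installing the wrapper might look something like the following; 'wrapper.c' and 'uUtil.c' are just hypothetical names for the code above and for your own message routines. Since init runs the wrapper as root, it does not need to be setuid:

    cc -o httpd.wrapper wrapper.c uUtil.c
    # then, as root:
    cp httpd.wrapper /usr/local/bin/httpd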
Now copy the documents over from the existing tree; for example:
    cd /old_web_tree
    for i in dir1 dir2 dir3 dir4 blahblahblah ; do
      cp -r $i /wtree/$i
    done
You will have to correct any html files that have full pathnames in their links. You will also have to correct any cgi-scripts or shell scripts that have incorrect pathnames in them; for example, perl scripts which begin with
    #!/usr/local/bin/perl
should now begin with
    #!/bin/perl
since perl now lives in /bin inside the chroot-ed tree.
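A quick way to hunt for leftover absolute pathnames after the copy (again, the patterns are only examples; grep for whatever pathnames your old tree actually used):

    cd /wtree
    find . \( -name '*.html' -o -name '*.pl' -o -name '*.cgi' \) \
         -exec egrep -l '/old_web_tree|/usr/local' {} \;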
execl( "/bin/bash","bash", (char *)0 );Note that it has to be setuid root.
For example, if you call this chroot-ed shell: cr_shell, then on your web host, you can launch a chroot-ed shell to test scripts (but do it in a sub-shell so that you don't destroy your environment):
$ (export PATH=/bin; export HOME=/; /my/path/name/to/cr_shell /wtree )
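A sketch of how the chroot-ed shell might be built and installed ('cr_shell.c' and 'uUtil.c' are again just example file names), followed by a quick test of a hypothetical script from inside the tree:

    cc -o cr_shell cr_shell.c uUtil.c
    # as root: make it setuid root, but only executable by the web group
    cp cr_shell /my/path/name/to/cr_shell
    chown root /my/path/name/to/cr_shell
    chgrp webgroup /my/path/name/to/cr_shell
    chmod 4750 /my/path/name/to/cr_shell

    # then, from inside the chroot-ed shell started as shown above:
    #   cd /cgi-bin ; ./some-script.pl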
-- denice.deatrich@cern.ch | "Les dieux: nom commode pour designer ce que University of Victoria, | les chercheurs n'ont pas encore trouve." OPAL Experiment, CERN | - F. Giroud, 'Une Femme Honorable'