Installation Instructions
*************************

I assume you are familiar with the GNU build system, so I have pushed the
generic instructions to the bottom (search for "Original Installation
Instructions").

This file is somewhat long as it also includes text that will eventually be
moved to each command's man page.  For the time being, please bear with me.

File locations and other macros are defined in config_names.h and some of them
have equivalent --enable-* configure options.  For example, in order to build
debug versions without disturbing the running system, I do it like so:

	./configure CFLAGS="-O0 -g -DIPQBDB_DEBUG" \
	   --enable-database-prefix=$PWD/testdb/    \
	   --enable-pcre-file=$PWD/testdb/pcre.conf  \
	   --enable-connkill-cmd=$PWD/testdb/connkill \
	   --enable-option-file=$PWD/testdb/ipqbdb.popt

The ibd-config utility displays such compile-time configuration.

You should not build the package as root.  In particular, make check will fail
if you run it as root.  Anyway, make check only checks basic program workings,
including regular expressions and database, but not iptables.  You need root
privileges to run ibd-judge, and also to install it.

Having a suitable configuration helps you invoke each program with fewer
options and some consistency.  See also "Setting popt options" below.

Upgrading
=========
Versions 1.* are compatible with version 1.0. However, you have to stop the
running daemons before `make install'.  If you don't disable the iptables rules
you'll be unreachable until you restart the (new) ibd-judge.  If you changed
your Berkeley DB library and utilities, you may need to upgrade the database
files.  Make sure the DB environment is clean (that is, no __db.00? files exist
in the relevant directories; use --db-cleanup options or remove them manually
if not.)  Then, run the db*_upgrade utility of the new version before starting
ibd-judge and ibd-parse.  Restart syslog as needed after restarting ibd-parse.

Versions 1.* are not compatible with the earlier release 0.1 -- that's the reason
for bumping the major version number.  The simplest solution is to uninstall
the old version completely, which includes

   * deleting the iptables rules that NFQUEUE to ipqbdbd,
   * removing the ipqbdbd executable (its name is ibd-judge in v1.0),
   * deleting the existing database files (by default /var/lib/ipqbdb/*.db).

(An alternative solution is to adjust paths and queue numbers so as to keep
both versions installed at the same time; not suggested.  Since 1.0, ibd-del
can be used in conjunction with ibd-ban to save part of the block table.)
 
After `make install', you should complete the installation by carrying out the
relevant configuration.  I split that into three sections: iptables, databases,
and popt.

To recap, a tentative command sequence for "compatible" upgrades is as follows:
* configure
* make
* [make check]
* su
* daemons=$(pgrep ibd-)
* for p in $daemons; do echo "$(cat /proc/$p/cmdline | tr '\0' ' ')"; done
* kill -TERM $daemons
* make install
* [chmod u+s $(which ibd-ban)]
* cd $(ibd-config | awk '/^IPQBDB_DATABASE_PREFIX/{print $2;}')
* ls -lab
* [rm -f __db.00?]
* dbX.Y_upgrade -v *.db
* ibd-judge <cmdline arguments as displayed above>
* ibd-parse <cmdline arguments as displayed above>
* /etc/init.d/sysklogd restart


Testing iptables
================

You're somewhat familiar with iptables, aren't you?  If not, Arno's IPtables-
firewall[1] and Shorewall[2] are two tools that build customized scripts, which
can serve as examples of how to structure an iptables configuration.  For the
purposes of the examples below, let's assume you have a chain named "my_server"
that is the last step in the filter table, e.g. you might set it up for a web
server like so:

iptables -A previous -p tcp --syn -m multiport --dports 13,80,443 -j my_server

Note that that rule uses --syn to match new connections, implying that
established connections are done differently.  An alternative technique
using connection tracking instead of --syn is considered below, together
with a method for killing established connections.

The previous rule provides for an open port 13.  Most inetd versions can serve
the daytime tcp service internally, and it is quite handy to telnet to port 13
for the first of the two tests we'll do.  The actual my_server chain
might consist of, say,

iptables -A my_server -m limit --limit 10/second --limit-burst 40 -j ACCEPT
iptables -A my_server -m limit --limit 5/hour --limit-burst 5 -j\
	LOG --log-level $LOG_LEVEL --log-prefix "syn limit:"
iptables -A my_server -j REJECT

In such an environment, we insert a rule for testing ipqbdb like so:

server# iptables -I my_server -p tcp --destination-port 13 -j NFQUEUE

If all goes well, xt_NFQUEUE will be loaded (check it with lsmod), and if you
try and telnet to port 13, your connection times out.  Now run ibd-judge with
the -D and -v flags for no-daemon verbose debugging.  Trying the connection
again should now work, and ibd-judge reports that the address is not in the
database.
Run ibd-ban for that address from a different terminal.  Using higher verbosity
(-v is the same as -v 0) you see more data, such as probabilities and results
of rand() tosses.  If the verdict is BLOCK, you may see "BLOCK <ip> again"
in response to the remote software's automatic retries.
Run `ibd-del -Lv' to see your database.  After some time, the address is
rehabilitated.  Press Ctrl-C to quit ibd-judge and, again, you won't be able to
connect, until you either run ibd-judge or delete the relevant NFQUEUE rule.

A production rule could be obtained by suitably replacing ACCEPT targets with
NFQUEUE ones.  For example,

iptables -A my_server -m limit --limit 10/second --limit-burst 40 -j NFQUEUE

Notice that the syn flood limit above lets you calculate the maximum number of
records in the block database(s).  If each of the 10 connections per second
results in a record, up to 864,000 records can be added in one day.
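That bound is just the rate limit multiplied out over a day; a quick shell check:

```shell
# upper bound on daily block-database records at 10 new connections per second
per_day=$((10 * 60 * 60 * 24))
echo "$per_day records/day"
```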

For speed, the issue is the number of packets actually transmitted by the
kernel to the daemon.  If the daemon cannot keep up, the kernel will drop
packets and deliver an ENOBUFS error to the next daemon's request.  For bursts,
the size of nfnetlink's socket buffer (in bytes) can be increased.  Its default
value is rmem_default, which you can query by running "sysctl -a | grep rmem_".
On newer kernels, the value of rmem_max can be overridden by root.  However, if
ibd-judge just can't keep up with your average traffic, you need to adjust its
niceness.  In extreme cases, even reporting the error from the kernel to the
daemon, which implies a context switch without payload, may cause the loss of
one further packet; that's why newer kernels (>= 2.6.30) have the option to
suppress reporting ENOBUFS errors.  Yet another limit is given by the max queue
size (in packets) for each queue.  The default value, possibly 1024, is hard
coded in the macro NFQNL_QMAX_DEFAULT, inside the kernel.  These parameters can
be set on the command line.  Run "ibd-judge --help" for the syntax.
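For reference, a sketch of the buffer tuning described above; the sysctl key
names below are the usual net.core ones, but verify them on your kernel with
"sysctl -a | grep rmem_", and treat the values as examples only:

```shell
# enlarge the kernel receive-buffer limits so nfnetlink can absorb bursts
# (run as root; values in bytes, chosen for illustration)
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.rmem_default=4194304
```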

Now for stopping established connections.  For TCP, it is customary to set up
iptables to accept established or related connections.  However, you may want
to kill an existing connection before, say, it completes a dictionary attack.
Let me describe two methods for doing so.

The first method consists in marking output packets.  The iptables rule example
above implied queue 0, which is also ibd-judge's default.  In this
case, the NFQUEUE target is equivalent to either ACCEPT or DROP, according to
the verdict.  Instead, let's run

   ibd-judge [OPTIONS] Q0S Q2DM4

(--no-daemon and --verbose=4 may be suitable options for a test.)  "Q0S" means
"take queue 0 and look at the source address".  It is what we did earlier.
"Q2DM4" means "take queue 2, look at the destination address, and mark the
packet with value 4 in case it's found guilty".  You want to look at the
destination address when you issue NFQUEUE from the OUTPUT hook.  The M in the
queue argument tells ibd-judge to mark and ACCEPT the packet, so it won't be
seen again on the same table.  The raw table runs earlier, thus we can do like so:

iptables -t raw -A OUTPUT -p tcp ! --syn -m multiport --sports $MY_TCP -j NFQUEUE --queue-num 2
iptables -A OUTPUT -p tcp ! --syn -m mark --mark 4 -j REJECT --reject-with tcp-reset

MY_TCP holds the server's ports.  The second rule matches packets that have been
sentenced by ibd-judge.  The ipt_REJECT kernel module will then send a TCP RST
packet to the source, which is our server.  Thus, our server will close its
socket while the remote guilty client will wait for timeout.

The second method uses an external tool to kill any existing connection.  It
doesn't involve ibd-judge at all.  An IP address is caught red-handed by either
ibd-ban or ibd-parse.  The --exec-connkill option tells them to run an external
program when the blocking probability reaches 100%.  Before describing the
syntax, let's consider running "conntrack -D" as an external tool, for much
the same effect as the first method.  Keep in mind
that "conntrack -D" just deletes relevant entries from the list of currently
tracked connections.  TCP RST packets are still to be sent by ipt_REJECT.

For the local server, we start our ruleset like so:

iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate NEW -p tcp ! --syn -j DROP

The first rule is quite popular.  The second one drops incoming packets that
belong to deleted connections.  For outgoing packets, we let our server close
its socket by:

iptables -A OUTPUT -m conntrack --ctstate NEW -p tcp ! --syn -j REJECT --reject-with tcp-reset

For consistency, you should never use --syn except in rules like those two
above that match deleted connections.  That is, you should trust conntrack and
match using ctstate rather than --syn.  For example, rather than the rule near
the beginning of this section, use the following:

iptables -A previous -p tcp -m conntrack --ctstate NEW -m multiport --dports 13,80,443 -j my_server

Now for the syntax.  As ibd-ban can be installed setuid, it is unsafe for it to
run an executable that a user specifies on the command line.  The only option
accepted on the command line is whether to run the external program at all.
The external program is determined by the IPQBDB_CONNKILL_CMD compile-time
definition.  By default it is /etc/ipqbdb-connkill.cmd.  That file must be
owned by root and not world-writable, and it must satisfy some additional
sanity constraints, such as "write access implies read access" and "execute
access must be consistent".
If the file has execute access, or if it is a symbolic link to an executable,
it will be executed passing the caught IP address as the first parameter.
Otherwise, the file is read and parsed looking for a command line.  The full
path to the executable must be given, along with any parameters.  The parsing
function is very basic: it discards empty and comment lines (beginning with a
# (hash)), and breaks arguments on whitespace while recognizing quotes (which
can be escaped with a \ (backslash)).  If an argument consists of the string
"{}", it is replaced with the IP address whose connections are to be killed;
otherwise the address is appended as an additional argument.  For example, you
can do

server# echo "# run by ibd-ban and ibd-parse on reaching 100% block prob.
             /usr/sbin/conntrack -D -s" > /etc/ipqbdb-connkill.cmd
server# chown root:root /etc/ipqbdb-connkill.cmd
server# chmod u=rw,go= /etc/ipqbdb-connkill.cmd

Ibd-ban passes along the address it received on the command line; ibd-parse
uses the one read from the log file; conntrack deletes any connection using
that address.

There are user-space alternatives for killing existing TCP connections, e.g.
tcpkill, killcx, cutter.  Unlike the kernel, they have to assume they are
running concurrently with other software that may send ACKs, so they try and
send various combinations of ACK/SEQ numbers.  Some use conntrack for checking
the existence of an ESTABLISHED connection, but then either have to wait for
a packet to actually arrive, or send bogus packets hoping that the other end
will supply the correct ACK/SEQ numbers in response.

Thanks to Jan Engelhardt for suggesting the "conntrack -D" method.  It's neat.


Note that ibd-judge does not use child processes, except for detaching on
startup.  Thus you can safely use killall -e ibd-judge from a script.
Ibd-parse forks children to run connkill commands, and thus may leave behind an
occasional zombie when terminated (their count is logged on exit).


If you build your kernel with CONFIG_NETFILTER_DEBUG your dmesg will be filled
with "nf_hook: Verdict = QUEUE." So, don't enable that.


See also:
[1] Arno's IPtables-firewall http://rocky.eld.leidenuniv.nl/
[2] Shorewall http://shorewall.net/


Filling the databases
=====================

Review your log files, catching IPs that you'd like to block.  Use an editor
to create the IPQBDB_PCRE_FILE file.  You have to match the log lines with
Perl-Compatible Regular Expressions (PCRE), e.g.:

   /LOGIN FAILED, user=\S* ip=\[<HOST>]/ * "dictionary attack" 5

The format is:

   /regex/  I  "reason"  [initial-count]  [filename]  [decay]

Blank lines and lines whose first non-blank char is a # (hash) are ignored.
The six fields of germane lines are explained in the following six paragraphs.
A - (minus) can be used as a placeholder for optional fields.

Around the regex you may use any delimiter (not only slashes).  See
man pcresyntax for help.  Like fail2ban, you may use the <HOST> macro;
unlike fail2ban it is replaced by (?P<IP>([0-9]{1,3}\.){3}[0-9]{1,3})
that only matches IPv4 addresses, and is named IP (not host).
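As a quick sanity check of that expansion, the bare IPv4 core (minus the PCRE
group name, which grep -E does not support) can be exercised from the shell;
this assumes GNU grep for the -o option:

```shell
# the IPv4 part of the <HOST> expansion, written as an ERE
pat='([0-9]{1,3}\.){3}[0-9]{1,3}'
ip=$(echo 'LOGIN FAILED, user=bob ip=[192.0.2.7]' | grep -oE "$pat")
echo "$ip"
```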

The I field should be a single * (star) if you use <HOST>, otherwise it should
be a - (minus) if you use a named subexpression, otherwise it should be the
number of the subexpression that delivers the IPv4 address.

The reason field will be searched and possibly inserted in a description table,
if it's not a numeric key already. (Although using description numbers may give
an imperceptible efficiency gain, if you do that you'll have to prepare a
script that restores the descriptions in the given order; not recommended.)

The initial-count is the number of times after the first one that you want an
IP to be caught before reaching 100% probability of being blocked.  The lower
the initial-count, the sooner the IP will be blocked.  An initial-count of 0
blocks an IP as soon as it is caught.  The max usable initial-count is 30 on a
64-bit system.  It is the number of bits in the C library's RAND_MAX value and
is displayed by ibd-config.  Higher values are replaced with that maximum.  The
initial-count determines the blocking probability on the first time an IP is
caught.  Thereafter the probability doubles each time, unless the new initial
value implies a worse probability.  For example, say an IP is first caught by a
rule specifying an initial-count of 3; the probability is set to 12.50%, so that
the IP will subsequently get 25%, 50%, and 100% on the third time, if it keeps
being caught in a similar fashion.  If, however, on the second time the IP is caught
by a more severe rule that specifies an initial-count of 1, its probability is
set from 12.50% to 50% directly.  In the latter case, the reason is changed
too.  Note that probability doubling occurs without applying any further
rehabilitation (the address was presumably rehabilitated when it passed.)
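The schedule above can be sketched numerically with a hypothetical helper (the
real computation lives inside the programs; this just reproduces the 100/2^n
first-catch probability described in the text):

```shell
# blocking probability (percent) on the first catch for initial-count n;
# it then doubles on each further catch until it reaches 100%
prob() { awk -v n="$1" 'BEGIN { printf "%.2f\n", 100 / 2^n }'; }
prob 3   # initial-count 3, as in the example above
prob 1   # a more severe rule
```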

The filename is the database of blocked IPv4 addresses, where records from
matching expressions are inserted or updated.  Prefix and suffix will be added.
Ibd-parse checks that all databases are in the same directory, and writes the
socket there.

The decay field specifies how many seconds are needed for the probability
to halve.  The higher the decay, the longer the IP will be kept blocked.
The decay is also affected by passing a probability threshold in the upward
direction, so that oscillating IPs are rehabilitated more and more slowly.  See
the definition of IPQBDB_PROBABILITY_THRESHOLD in config_names.h.
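Reading the decay as a half-life, rehabilitation can be sketched like so; this
is an assumed exponential model for illustration only, the exact formula being
in the source:

```shell
# probability (percent) after t seconds, starting from p, with half-life d:
# p * 0.5^(t/d)  -- assumed model, not necessarily ipqbdb's exact one
after() { awk -v p="$1" -v t="$2" -v d="$3" \
    'BEGIN { printf "%.2f\n", p * 0.5^(t/d) }'; }
after 100 3600 3600   # one decay period after reaching 100%
```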

The last four fields, reason, initial-count, filename, and decay, have exactly
the same meaning in ibd-ban, except that ibd-ban accepts initial-count 999 as
a way of specifying 0 probability.

Before running ibd-parse, make sure the directory of IPQBDB_DATABASE_PREFIX
exists and has the permissions you want.  If you didn't change it, the prefix
is /var/lib/ipqbdb.  However, you don't have to be root for testing ibd-parse,
and if you run it with, say, --db-descr=~/my-tests/descr, while no filename
in the pcre file specifies paths, ibd-parse will use that directory.  It will
also contain the socket, and files named __db.00?, which make up the Berkeley
DB environment.  You may want to adjust bdb behavior by writing a DB_CONFIG file
in that directory.  Using DB_CONFIG you may set values such as cache size, page
size, etcetera.  See bdb documentation[1].  IPQ uses the Concurrent Data Store
(CDB) BDB model, that supports deadlock-free, multiple-readers/single-writer
semantics, but not record or region locking. (See READMEconcurrency.)

To test pcre expressions, run ibd-parse with the -D and -v options, for
no-daemon verbose debugging. The verbose output will display some values for
each expression, such as the highest back reference and the size of a compiled
pattern, that may be useful if performance is an issue.  See also the
pcreperform man page.

After all expressions are compiled, check on a different terminal window
that the socket file has been created, and feed it some data, e.g.

  cat /var/log/mail.log > ibd-parse.sock
  
Check the previous terminal window for results.  You may send SIGUSR1 to a
running ibd-parse process to get the number of matches and errors for each
expression.  There should be no error! (If the program is daemonized, it will
write on daemon.log.)  You may send it SIGHUP to re-read its config file.
Ctrl-C, SIGINT, or SIGTERM terminate it cleanly.

To install ibd-parse properly, you should configure your syslog.conf so that
it writes the log lines you need to the ibd-parse.sock named fifo.  Sysklogd,
for example, the stock Debian logger, wants a | (pipe) before the filename.
Try and address only the facilities and severities that are relevant for the
lines that you want to catch.  You may get better performance by monitoring
different facilities using different processes listening on different sockets
with fewer expressions each, because for each log line ibd-parse tries every
expression until one possibly matches.  If you want to run it as a different
user, just make sure it can read config files and write the databases.  In
addition, ibd-parse should create its socket before syslogd starts.

You can run ibd-del -Lv to list the contents of a block database; ibd-del is
for deleting entries, but it requires the --del flag to actually do that.  Old
records should be deleted to avoid wasting space, and it is up to you to decide
how much time makes a record obsolete and to schedule ibd-del execution with
the appropriate permissions and options, e.g.

  ibd-del --min-time=4d --max-records=4M --del

will delete all records that have not been updated for at least four days.  By
specifying --max-records you allow ibd-del to enlarge the min-time given, as
long as the number of records is not exceeded.  When records are deleted from
block.db, the relevant counters are added to the currently ascribed reason in
descr.db.  Use ibd-del --stats -v to display the latter counters.  The verbose
(-v) option enables column headers in ibd-del's output for --ls and --stats.
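For scheduling, a hypothetical cron entry could look as follows; the path,
times, and file name are examples to adjust to your installation:

```shell
# /etc/cron.d/ipqbdb -- prune records untouched for four days, nightly
17 3 * * * root /usr/local/bin/ibd-del --min-time=4d --max-records=4M --del
```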

Ibd-del has further "ls" options, one of which, --ls-ban, produces a sequence
of ibd-ban commands that could be used to rewrite part of the existing block
table.  If you run

   ibd-del --ls-ban > save-script
   mv block.db old.block.db
   . ./save-script

you get part of the old block table into the new one.  Creation and update
times, as well as caught, blocked, and threshold counts are lost in this way.
However, IPs, reasons, and decays are saved; probabilities will be nearly the
same, if save-script is sourced at about the same time.

If you need to block IPs from your own scripts, you don't have to write a log
line to ibd-parse: ibd-ban is just for that.  You can specify the same values
by giving the relevant command line options.  In addition, by not specifying an
IP address, you can use ibd-ban to populate descr.db.

You'll need to give it proper permissions, e.g. make it suid by

   server# chmod u+s /usr/local/bin/ibd-ban

where ownership of the databases matches that of the executables.

You can use the white.db to tweak the decay value for specific IPs.  The
ibd-white utility reads from stdin lines formatted like so:

   192.0.2.1                0.0
   192.0.2.10-192.0.2.20  194.5
   192.0.2.8/25            32
  
**_WARNING_** although the utility accepts IP, range, or CIDR notation, it
inserts single records in the database.  Hence DON'T SET WIDE RANGES or your
database will blow up to an unmanageable size.  This state of affairs may be
improved in a future ipqbdb release.
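To see why wide ranges hurt, the number of single records a prefix expands to
is easy to compute:

```shell
# single records inserted for a /N prefix: 2^(32-N)
cidr_records() { echo $((1 << (32 - $1))); }
cidr_records 25   # the /25 from the example above
cidr_records 8    # a /8 would be unmanageable
```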

The decay value in white.db overrides the decay that would be inserted by
either ibd-ban or ibd-parse.  If the decay is 0, the record will not even make
it to the block database.  However, the decay from white.db doesn't have to be
shorter: it can be used to worsen the situation for the remote host.

To recap this section, for new installation:
* write the ipqbdb pcre configuration,
* check the work directory (/var/lib/ipqbdb),
* schedule execution of ibd-del,
* start ibd-parse early in the boot sequence (before syslog),
* start ibd-judge as soon as the firewall is configured (see previous section),
* adjust syslog.conf to write to ibd-parse's named fifo,
* review scripts that control blocking, calling ibd-ban as needed, and
* check suid flags and ownership of the executables (ibd-ban) if needed.

See also:
[1] DB_CONFIG
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/env/db_config.html


Setting popt options
====================
The option-file, by default /etc/ipqbdb.popt, can be used to define program-
specific options: it consists of an arbitrary number of lines formatted like
this:

   <program> alias <newoption> <expansion>

See man popt for further details. For example, ibd-ban default values can be
configured in ipqbdb.popt with lines like this:

   ibd-ban alias --my-opt --reason="client is ugly" --initial-decay=86400

You can then check whether it works by looking at the output of

   ibd-ban --my-opt --help

If you use that to change databases, keep in mind that all the .db files that
a program works with must live in the same directory. While ibd-parse enforces
this restriction, ibd-ban and ibd-del can be fooled by giving wrong arguments.


Installing libnetfilter_queue on lenny
======================================

Libnetfilter_queue was released in March 2009, but is not available in lenny
nor in lenny-backports, as of July 2010.  It is available in squeeze, though.

I've installed it by pinning, like so:

1) ensure the apt configuration reads squeeze too: this consists of editing
three files in /etc/apt/: sources.list, preferences, and apt.conf, as detailed
in http://jaqque.sbih.org/kplug/apt-pinning.html which I repeat here:

in /etc/apt/sources.list I've added the line (because I'm located in IT)
-----8<-----
deb http://debian.fastweb.it/debian/ squeeze main contrib
----->8-----

in /etc/apt/preferences I have
-----8<-----
Package: *
Pin: release a=stable
Pin-Priority: 800

Package: *
Pin: release a=testing
Pin-Priority: 200

Package: *
Pin: release a=unstable
Pin-Priority: 150
----->8-----

in /etc/apt/apt.conf I have
-----8<-----
APT
{
	Default-Release "lenny";
	Cache-Limit "33554432"; // quadruple of default 0x400000
};
----->8-----


2# apt-get update
...

3# apt-get install -t testing libnetfilter-queue1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  libnetfilter-queue-dev libnetfilter-queue1-dbg libnfnetlink-dev libnfnetlink0
The following packages will be upgraded:
  libnetfilter-queue-dev libnetfilter-queue1 libnetfilter-queue1-dbg libnfnetlink-dev libnfnetlink0
5 upgraded, 0 newly installed, 0 to remove and 1225 not upgraded.
Need to get 64.5kB of archives.
After this operation, 4096B of additional disk space will be used.
Do you want to continue [Y/n]? Y
...

Now, I have
# apt-cache policy libnetfilter-queue1
libnetfilter-queue1:
  Installed: 0.0.17-1
  Candidate: 0.0.17-1
  Version table:
 *** 0.0.17-1 0
        200 http://debian.fastweb.it squeeze/main Packages
        100 /var/lib/dpkg/status
     0.0.13-1 0
        800 http://debian.fastweb.it lenny/main Packages

YMMV


Original Installation Instructions
**********************************

Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005,
2006 Free Software Foundation, Inc.

This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.

Basic Installation
==================

Briefly, the shell commands `./configure; make; make install' should
configure, build, and install this package.  The following
more-detailed instructions are generic; see the `README' file for
instructions specific to this package.

   The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation.  It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions.  Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').

   It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring.  Caching is
disabled by default to prevent problems with accidental use of stale
cache files.

   If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release.  If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.

   The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'.  You need `configure.ac' if
you want to change it or regenerate `configure' using a newer version
of `autoconf'.

The simplest way to compile this package is:

  1. `cd' to the directory containing the package's source code and type
     `./configure' to configure the package for your system.

     Running `configure' might take a while.  While running, it prints
     some messages telling which features it is checking for.

  2. Type `make' to compile the package.

  3. Optionally, type `make check' to run any self-tests that come with
     the package.

  4. Type `make install' to install the programs and any data files and
     documentation.

  5. You can remove the program binaries and object files from the
     source code directory by typing `make clean'.  To also remove the
     files that `configure' created (so you can compile the package for
     a different kind of computer), type `make distclean'.  There is
     also a `make maintainer-clean' target, but that is intended mainly
     for the package's developers.  If you use it, you may have to get
     all sorts of other programs in order to regenerate files that came
     with the distribution.

Compilers and Options
=====================

Some systems require unusual options for compilation or linking that the
`configure' script does not know about.  Run `./configure --help' for
details on some of the pertinent environment variables.

   You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment.  Here
is an example:

     ./configure CC=c99 CFLAGS=-g LIBS=-lposix

   *Note Defining Variables::, for more details.

Compiling For Multiple Architectures
====================================

You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory.  To do this, you can use GNU `make'.  `cd' to the
directory where you want the object files and executables to go and run
the `configure' script.  `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.

   With a non-GNU `make', it is safer to compile the package for one
architecture at a time in the source code directory.  After you have
installed the package for one architecture, use `make distclean' before
reconfiguring for another architecture.

Installation Names
==================

By default, `make install' installs the package's commands under
`/usr/local/bin', include files under `/usr/local/include', etc.  You
can specify an installation prefix other than `/usr/local' by giving
`configure' the option `--prefix=PREFIX'.

   You can specify separate installation prefixes for
architecture-specific files and architecture-independent files.  If you
pass the option `--exec-prefix=PREFIX' to `configure', the package uses
PREFIX as the prefix for installing programs and libraries.
Documentation and other data files still use the regular prefix.

   In addition, if you use an unusual directory layout you can give
options like `--bindir=DIR' to specify different values for particular
kinds of files.  Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.

   If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.

Optional Features
=================

Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System).  The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.

   For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.

Specifying the System Type
==========================

There may be some features `configure' cannot figure out automatically,
but needs to determine by the type of machine the package will run on.
Usually, assuming the package is built to be run on the _same_
architectures, `configure' can figure that out, but if it prints a
message saying it cannot guess the machine type, give it the
`--build=TYPE' option.  TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:

     CPU-COMPANY-SYSTEM

where SYSTEM can have one of these forms:

     OS KERNEL-OS

   See the file `config.sub' for the possible values of each field.  If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.

   If you are _building_ compiler tools for cross-compiling, you should
use the option `--target=TYPE' to select the type of system they will
produce code for.

   If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.

Sharing Defaults
================

If you want to set default values for `configure' scripts to share, you
can create a site shell script called `config.site' that gives default
values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists.  Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.

Defining Variables
==================

Variables not defined in a site shell script can be set in the
environment passed to `configure'.  However, some packages may run
configure again during the build, and the customized values of these
variables may be lost.  In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'.  For example:

     ./configure CC=/usr/local2/bin/gcc

causes the specified `gcc' to be used as the C compiler (unless it is
overridden in the site shell script).

Unfortunately, this technique does not work for `CONFIG_SHELL' due to
an Autoconf bug.  Until the bug is fixed you can use this workaround:

     CONFIG_SHELL=/bin/bash /bin/bash ./configure CONFIG_SHELL=/bin/bash

`configure' Invocation
======================

`configure' recognizes the following options to control how it operates.

`--help'
`-h'
     Print a summary of the options to `configure', and exit.

`--version'
`-V'
     Print the version of Autoconf used to generate the `configure'
     script, and exit.

`--cache-file=FILE'
     Enable the cache: use and save the results of the tests in FILE,
     traditionally `config.cache'.  FILE defaults to `/dev/null' to
     disable caching.

`--config-cache'
`-C'
     Alias for `--cache-file=config.cache'.

`--quiet'
`--silent'
`-q'
     Do not print messages saying which checks are being made.  To
     suppress all normal output, redirect it to `/dev/null' (any error
     messages will still be shown).

`--srcdir=DIR'
     Look for the package's source code in directory DIR.  Usually
     `configure' can determine that directory automatically.

`configure' also accepts some other, not widely useful, options.  Run
`configure --help' for more details.

