TUCoPS :: General Information :: findholz.txt

Security Holes Manifest Themselves in Four Ways

From: Manifestation
Subject: Security holes manifest themselves in (broadly) four ways...
Date: 11.10.93

( Please contribute by sending E-Mail to <scott@santafe.edu> ... )

[quoting from the comp.security.unix FAQ]
Security holes manifest themselves in (broadly) four ways:

1) Physical Security Holes.

- Where the potential problem is caused by giving unauthorised persons
physical access to the machine, which might allow them to do things
that they shouldn't be able to do.

A good example of this would be a public workstation room where it would
be trivial for a user to reboot a machine into single-user mode and muck
around with the workstation filestore, if precautions are not taken.

Another example of this is the need to restrict access to confidential
backup tapes, which may (otherwise) be read by any user with access to
the tapes and a tape drive, whether they are meant to have permission or not.

2) Software Security Holes

- Where the problem is caused by badly written items of "privileged"
software (daemons, cronjobs) which can be compromised into doing things
which they shouldn't oughta.

The most famous example of this is the "sendmail debug" hole (see
bibliography) which would enable a cracker to bootstrap a "root" shell.
This could be used to delete your filestore, create a new account, copy
your password file, anything.

(Contrary to popular opinion, crack attacks via sendmail were not just
restricted to the infamous "Internet Worm" - any cracker could do this
by using "telnet" to port 25 on the target machine.  The story behind a
similar hole (this time in the EMACS "move-mail" software) is described
in [Stoll].)

New holes like this appear all the time, and your best hopes are to:

  a: try to structure your system so that as little software as possible
  runs with root/daemon/bin privileges, and that which does is known to
  be robust.

  b: subscribe to a mailing list which can get details of problems
  and/or fixes out to you as quickly as possible, and then ACT when you
  receive information.

>From: Wes Morgan <morgan@edu.uky.ms>
> c: When installing/upgrading a given system, try to install/enable only
> those software packages for which you have an immediate or foreseeable
> need.  Many packages include daemons or utilities which can reveal
> information to outsiders.  For instance, AT&T System V Unix' accounting
> package includes acctcom(1), which will (by default) allow any user to
> review the daily accounting data for any other user.  Many TCP/IP
> packages automatically install/run programs such as rwhod, fingerd, and
> <occasionally> tftpd, all of which can present security problems.
> Careful system administration is the solution.  Most of these programs
> are initialized/started at boot time; you may wish to modify your boot
> scripts (usually in the /etc, /etc/rc, /etc/rcX.d directories) to
> prevent their execution.  You may wish to remove some utilities
> completely.  For some utilities, a simple chmod(1) can prevent access
> from unauthorized users.  Note that many installation procedures
> tend to install/run everything in the package without asking you.  Most
> installation documentation includes lists of "the programs included in
> this package"; be sure to review it.

3) Incompatible Usage Security Holes

- Where, through lack of experience, or no fault of his/her own, the
System Manager assembles a combination of hardware and software which
when used as a system is seriously flawed from a security point of view.
It is the incompatibility of trying to do two unconnected but useful
things which creates the security hole.

Problems like this are a pain to find once a system is set up and
running, so it is better to build your system with them in mind.  It's
never too late to have a rethink, though.

Some examples are detailed below; let's not go into them here, it would
only spoil the surprise.

4) Choosing a suitable security philosophy and maintaining it.

>From: Gene Spafford <spaf@cs.purdue.edu>
>The fourth kind of security problem is one of perception and
>understanding.  Perfect software, protected hardware, and compatible
>components don't work unless you have selected an appropriate security
>policy and turned on the parts of your system that enforce it.  Having
>the best password mechanism in the world is worthless if your users
>think that their login name backwards is a good password! Security is
>relative to a policy (or set of policies) and the operation of a system
>in conformance with that policy.


From: Hacking
Subject: Hacking Ideas
Date: 11/10/93

( Please contribute by sending E-Mail to <scott@santafe.edu> ... )

[ Many ideas taken from: HaxNet - APG V1.3 : Guide to finding new holes]

NOTE: I think this should be divided into general categories:
1) General principles
2) Looking for holes in src (most items here)
3) Looking in binary distributions
4) Looking in site specific configurations

  The following general classifications suggest themselves:
1) Return codes/error conditions
2) unexpected input
3) race conditions
4) authentication
5) implicit trust
6) parameters
7) permissions
8) interrupts
9) I/O
10) symbolic links
11) Daemons, particularly those taking user input.
12) Kernel race conditions
13) what else? - please add categories

(Suggested splitting of above into main and sub-categories)
I:   Suid binaries and scripts
     unexpected user interactions
     flawed library calls
     implicit assumptions of external conditions (sym links, loc. paths)
     race conditions
II:  daemons running with privileged uid's
     race conditions
     poor file protections
     implicit file protections
III: Kernel problems
     Kernel race conditions
     device driver code

The following four-step method was created by System Development
Corporation, which reported a 65% success rate for the flaw hypotheses
generated.  Doing a comprehensive search for operating system flaws
requires four steps:

Step 1) Knowledge of system control structure.
  To find security holes and identify design weaknesses, it is
necessary to understand the system control structure and layers.
  One should be able to list the:
A) security objects: items to be protected. e.g.: a user's file.
B) control objects: items that protect security objects. e.g.: an i-node
C) mutual objects  : objects in both classes. e.g.: the password file
  With such a list, it is possible to graphically represent a control
hierarchy and identify potential points of attack. Making flow charts
to give a visual breakdown of relationships definitely helps.
  Reading the various users', operators', and administrators' manuals should
provide this information.
(The following paragraphs should probably be moved to a "legal" section.)
  Reading and grepping source code should also prove valuable. For those
without a source licence, I would suggest we use LINUX, NET2, and BSD386
distributions in order to stay legal. At some future time we may be able
to form a working contract between someone or a company with legal access
to other distributions and members actively participating in this project.
  It appears that extracts of proprietary code may be used for academic
study, so long as they are not reused in a commercial product - more
checking is necessary, though.

Step 2) Generate an inventory of suspected flaws. (i.e. flaw hypotheses)
In particular we want:
  Code history:
    What UNIX src does a particular flavor derive from? This is important
for cross references (very often only one vendor patches certain code,
which may get reused, in its unpatched incarnation, by others).
  A solid cross reference:
    Knowing who checked which bug, in what OS and what version, prevents
us from duplicating work.

  A good start would be listing all the suid binaries on the various OS
flavors/versions. Then try to work out why each program is suid. e.g.:
    rcp is suid root because it must use a privileged port to do user
    name authentication.
  Often code that was never designed to be suid is made suid during
porting, to solve file access problems.
  We need to develop a database that will be able to look at pairs and
triplets of data, specifically: program name, suid, sgid, object accessed
(why prog is suid/sgid), OS flavor/version, and flav/vers genealogy.
  Any suggestions on how to implement such a DB?

Step 3) Confirm hypotheses. (test and exploit flaws)

Step 4) Make generalizations of the underlying system weaknesses, for
        which the flaw represents a specific instance.

Tool Box:
AGREP: I suggest everyone obtain, and install agrep from:
    ftp cs.arizona.edu /agrep/agrep.tar.Z
  Agrep supports "windowing" so it can look for routines, and subroutines.
It also supports logical operators and is thus ideally suited to automating
the search for many of the following flaws. i.e. <pseudocode>
      agrep WINDOW {suid() NOT taintperl()} /usr/local/*.pl
or    agrep WINDOW {[suid() OR sgid()] AND [system() OR popen() OR execlp()
            OR execvp()]} /usr/local/src/*.c

PERMUTATION PROGRAM: Another tool worth producing is a program to generate
all possible permutations of command line flags/arguments in order to uncover
undocumented features, and try to produce errors.


CRASH: Posted to USENET (what FTP archive?) (description?)

PAPERS: There are several papers that discuss methods of finding flaws, and
  present test suites.
  1) An Empirical Study of the Reliability of UNIX Utilities, by Barton P.
    Miller, Lars Fredriksen, and Bryan So, Comm ACM, v33 n12, pp32-44,
    Dec '90. Describes a test suite for testing random input strings.
    Results indicated that 25% of the programs hung, crashed, or misbehaved.
    In one case the OS crashed. An understanding of buffer and register
    layout in the environment in question, and of the expected input, is
    likely to produce the desired results.
  2) The Mothra tools set, in Proceedings of the 22nd Hawaii International
    Conference on Systems and Software, pages 275-284, Kona, HI, January '89
  3) Extending Mutation Testing to Find Environmental Bugs, by Eugene H.
    Spafford, Software Practice and Experience, 20(2):181-189, Feb '90
  4) A paper by IBM was mentioned that was submitted to USENIX a few years
    ago. (Anyone have a citation?).

Specific Flaws to Check For:
1) Look for routines that don't do boundary checking, or verify input.
   ie: the gets() family of routines, where it is possible to overwrite
   buffer boundaries. ( sprintf()?, gets(), etc. )
   also: strcpy() which is why most src has:
     #define SCYPYN(a, b) strncpy((a), (b), sizeof(a))

2) SUID/SGID routines written in one of the shells, instead of C or PERL.

3) SUID/SGID routines written in PERL that don't use the "taintperl" program.

4) SUID/SGID routines that use the system(), popen(), execlp(), or
   execvp() calls to run something else.

5) Any program that uses relative path names inside the program.

6) The use of relative path names to specify dynamically linked libraries.
   (look in Makefile).

7) Routines that don't check error return codes from system calls (ie:
   fork(2), setuid(2), etc.), as in the famous rcp bug.

8) Holes can often be found in code that:
  A) is ported to a new environment.
  B) receives unexpected input.
  C) interacts with other local software.
  D) accesses system files like passwd, L.sys, etc.
  E) reads input from a publicly writable file/directory.
  F) is a diagnostic program; these are typically not user-proofed.

9) Test code for unexpected input. Coverage, data flow, and mutation
   testing tools are available.

10) Look in man pages, and users guides for warnings against doing X, and
   try variations of X. Ditto for "bugs" section.

11) Look for seldom used, or unusual, functions or commands - read backwards.
   In particular, looking for undocumented flags/arguments may prove useful.
   Check flags that were in prior releases.
