This manual documents version 5.93 of the gnu core utilities, including the standard programs for text and file manipulation.
Copyright © 1994, 1995, 1996, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
--- The Detailed Node Listing ---
Common Options
Output of entire files
Formatting file contents
Output of parts of files
Summarizing files
Operating on sorted files
ptx: Produce permuted indexes
Operating on fields within a line
Operating on characters
tr: Translate, squeeze, and/or delete characters
Directory listing
ls: List directory contents
Basic operations
Special file types
Changing file attributes
Disk usage
Printing text
Conditions
test: Check file types and compare values
expr: Evaluate expression
Redirection
File name manipulation
Working context
stty: Print or change terminal characteristics
User information
System context
date: Print or set system date and time
Modified command invocation
Process control
Delaying
Numeric operations
File permissions
Date input formats
Opening the software toolbox
GNU Free Documentation License
This manual is a work in progress: many sections make no attempt to explain basic concepts in a way suitable for novices. Thus, if you are interested, please get involved in improving this manual. The entire gnu community will benefit.
The gnu utilities documented here are mostly compatible with the POSIX standard. Please report bugs to bug-coreutils@gnu.org. Remember to include the version number, machine architecture, input files, and any other information needed to reproduce the bug: your input, what you expected, what you got, and why it is wrong. Diffs are welcome, but please include a description of the problem as well, since this is sometimes difficult to infer. See Bugs (Using and Porting GNU CC).
This manual was originally derived from the Unix man pages in the distributions, which were written by David MacKenzie and updated by Jim Meyering. What you are reading now is the authoritative documentation for these utilities; the man pages are no longer being maintained. The original fmt man page was written by Ross Paterson. François Pinard did the initial conversion to Texinfo format. Karl Berry did the indexing, some reorganization, and editing of the results. Brian Youmans of the Free Software Foundation office staff combined the manuals for textutils, fileutils, and sh-utils to produce the present omnibus manual. Richard Stallman contributed his usual invaluable insights to the overall process.
Certain options are available in all of these programs. Rather than writing identical descriptions for each of the programs, they are described here. (In fact, every gnu program accepts (or should accept) these options.)
Normally options and operands can appear in any order, and programs act as if all the options appear before any operands. For example, sort -r passwd -t : acts like sort -r -t : passwd, since : is an option-argument of -t. However, if the POSIXLY_CORRECT environment variable is set, options must appear before operands, unless otherwise specified for a particular command.
A few programs can usefully have trailing operands with leading -. With such a program, options must precede operands even if POSIXLY_CORRECT is not set, and this fact is noted in the program description. For example, the env command's options must appear before its operands, since in some cases the operands specify a command that itself contains options.
Some of these programs recognize the --help and --version options only when one of them is the sole command line argument.
A single - operand is not really an option, though it looks like one. It stands for standard input, or for standard output if that is clear from the context. For example, sort - reads from standard input, and is equivalent to plain sort, and tee - writes an extra copy of its input to standard output. Unless otherwise specified, - can appear as any operand that requires a file name.
Nearly every command invocation yields an integral exit status that can be used to change how other commands work. For the vast majority of commands, an exit status of zero indicates success. Failure is indicated by a nonzero value—typically 1, though it may differ on unusual platforms as POSIX requires only that it be nonzero.
However, some of the programs documented here do produce other exit status values and a few associate different meanings with the values 0 and 1. Here are some of the exceptions: chroot, env, expr, nice, nohup, printenv, sort, su, test, tty.
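As a brief illustration of using exit status from the shell (the file name data is hypothetical), sort -c exits with status 0 when its input is already sorted and nonzero otherwise:

if sort -c data 2>/dev/null; then
  echo "data is already sorted"
else
  echo "data is not sorted (or an error occurred)"
fi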
Some gnu programs (at least cp, install, ln, and mv) optionally make backups of files before writing new versions. These options control the details of these backups. The options are also briefly mentioned in the descriptions of the particular programs.
Note that the short form of this option, -b, does not accept any argument. Using -b is equivalent to using --backup=existing.
This option corresponds to the Emacs variable version-control; the values for method are the same as those used in Emacs. This option also accepts more descriptive names. The valid methods are (unique abbreviations are accepted):

none, off
     Never make backups (even if --backup is given).
numbered, t
     Always make numbered backups.
existing, nil
     Make numbered backups of files that already have them, simple backups of the others.
simple, never
     Always make simple backups.
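For example (a sketch with hypothetical file names), to keep numbered backups of a file that is about to be overwritten:

# Overwrite file.txt, saving the old copy as file.txt.~1~, file.txt.~2~, ...
cp --backup=numbered new-version.txt file.txt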
Some gnu programs (at least df, du, and ls) display sizes in “blocks”. You can adjust the block size and method of display to make sizes easier to read. The block size used for display is independent of any file system block size. Fractional block counts are rounded up to the nearest integer.
The default block size is chosen by examining the following environment variables in turn; the first one that is set determines the block size.
DF_BLOCK_SIZE
     This specifies the default block size for the df command. Similarly, DU_BLOCK_SIZE specifies the default for du and LS_BLOCK_SIZE for ls.
BLOCK_SIZE
     This specifies the default block size for all three commands, if the above command-specific environment variables are not set.
BLOCKSIZE
     This specifies the default block size for all values that are normally printed as blocks, if neither BLOCK_SIZE nor the above command-specific environment variables are set. Unlike the other environment variables, BLOCKSIZE does not affect values that are normally printed as byte counts, e.g., the file sizes contained in ls -l output.
POSIXLY_CORRECT
     If neither command_BLOCK_SIZE, nor BLOCK_SIZE, nor BLOCKSIZE is set, but this variable is set, the block size defaults to 512.
If none of the above environment variables are set, the block size currently defaults to 1024 bytes in most contexts, but this number may change in the future. For ls file sizes, the block size defaults to 1 byte.
A block size specification can be a positive integer specifying the number of bytes per block, or it can be human-readable or si to select a human-readable format. Integers may be followed by suffixes that are upward compatible with the SI prefixes for decimal multiples and with the IEC 60027-2 prefixes for binary multiples.

With human-readable formats, output sizes are followed by a size letter such as M for megabytes. BLOCK_SIZE=human-readable uses powers of 1024; M stands for 1,048,576 bytes. BLOCK_SIZE=si is similar, but uses powers of 1000 and appends B; MB stands for 1,000,000 bytes.
A block size specification preceded by ' causes output sizes to be displayed with thousands separators. The LC_NUMERIC locale specifies the thousands separator and grouping. For example, in an American English locale, --block-size="'1kB" would cause a size of 1234000 bytes to be displayed as 1,234. In the default C locale, there is no thousands separator so a leading ' has no effect.
An integer block size can be followed by a suffix to specify a multiple of that size. A bare size letter, or one followed by iB, specifies a multiple using powers of 1024. A size letter followed by B specifies powers of 1000 instead. For example, 1M and 1MiB are equivalent to 1048576, whereas 1MB is equivalent to 1000000.
A plain suffix without a preceding integer acts as if 1 were prepended, except that it causes a size indication to be appended to the output. For example, --block-size="kB" displays 3000 as 3kB.
The following suffixes are defined. Large sizes like 1Y may be rejected by your computer due to limitations of its arithmetic.
Block size defaults can be overridden by an explicit --block-size=size option. The -k option is equivalent to --block-size=1K, which is the default unless the POSIXLY_CORRECT environment variable is set. The -h or --human-readable option is equivalent to --block-size=human-readable. The --si option is equivalent to --block-size=si.
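A few illustrative invocations (the directory names are hypothetical):

du --block-size=1MiB ~/Downloads   # counts in 1,048,576-byte blocks
df --block-size=si                 # sizes such as 3.1GB, using powers of 1000
df -h                              # same as df --block-size=human-readable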
The cp, install, ln, and mv commands normally treat the last operand specially when it is a directory or a symbolic link to a directory. For example, cp source dest is equivalent to cp source dest/source if dest is a directory. Sometimes this behavior is not exactly what is wanted, so these commands support the following options to allow more fine-grained control:
In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory (-t) option.
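A short sketch of the two options (file names are hypothetical):

cp -t backup/ notes.txt    # copy into backup/, or fail with a diagnostic if it is not a directory
mv -T oldname newname      # treat newname as a normal file, not as a directory to move into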
The interface for most programs is that after processing options and a finite (possibly zero) number of fixed-position arguments, the remaining argument list is either expected to be empty, or is a list of items (usually files) that will all be handled identically. The xargs program is designed to work well with this convention.
The commands in the mv-family are unusual in that they take a variable number of arguments with a special case at the end (namely, the target directory). This makes it nontrivial to perform some operations, e.g., “move all files from here to ../d/”, because mv * ../d/ might exhaust the argument space, and ls | xargs ... doesn't have a clean way to specify an extra final argument for each invocation of the subject command. (It can be done by going through a shell command, but that requires more human labor and brain power than it should.)
The --target-directory (-t) option allows the cp, install, ln, and mv programs to be used conveniently with xargs. For example, you can move the files from the current directory to a sibling directory, d, like this:
ls | xargs mv -t ../d --
However, this doesn't move files whose names begin with a period (.). If you use the gnu find program, you can move those files too, with this command:
find . -mindepth 1 -maxdepth 1 \
  | xargs mv -t ../d
But both of the above approaches fail if there are no files in the current directory, or if any file has a name containing a blank or some other special characters. The following example removes those limitations and requires both gnu find and gnu xargs:
find . -mindepth 1 -maxdepth 1 -print0 \
  | xargs --null --no-run-if-empty \
      mv -t ../d
The --target-directory (-t) and --no-target-directory (-T) options cannot be combined.
Some gnu programs (at least cp and mv) allow you to remove any trailing slashes from each source argument before operating on it. The --strip-trailing-slashes option enables this behavior.
This is useful when a source argument may have a trailing slash and specify a symbolic link to a directory. This scenario is in fact rather common because some shells can automatically append a trailing slash when performing file name completion on such symbolic links. Without this option, mv, for example, (via the system's rename function) must interpret a trailing slash as a request to dereference the symbolic link and so must rename the indirectly referenced directory and not the symbolic link. Although it may seem surprising that such behavior be the default, it is required by POSIX and is consistent with other parts of that standard.
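For example (a sketch in which slink is assumed to be a symbolic link to a directory):

mv slink/ target                            # renames the directory that slink points to
mv --strip-trailing-slashes slink/ target   # renames the symbolic link itself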
The following options modify how chown and chgrp traverse a hierarchy when the --recursive (-R) option is also specified. If more than one of the following options is specified, only the final one takes effect. These options specify whether processing a symbolic link to a directory entails operating on just the symbolic link or on all files in the hierarchy rooted at that directory.
These options are independent of --dereference and --no-dereference (-h), which control whether to modify a symlink or its referent.
Certain commands can operate destructively on entire hierarchies. For example, if a user with appropriate privileges mistakenly runs rm -rf / tmp/junk or cd /bin; rm -rf ../, that may remove all files on the entire system. Since there are so few legitimate uses for such a command, gnu rm provides the --preserve-root option so that rm declines to operate on any directory that resolves to /. The default is still to allow rm -rf / to operate unimpeded. Another new option, --no-preserve-root, cancels the effect of any preceding --preserve-root option. Note that the --preserve-root behavior may become the default for rm.
The commands chgrp, chmod and chown can also operate destructively on entire hierarchies, so they too support these options. Although, unlike rm, they don't actually unlink files, these commands are arguably more dangerous when operating recursively on /, since they often work much more quickly, and hence damage more files before an alert user can interrupt them.
Some programs like nice can invoke other programs; for example, the command nice cat file invokes the program cat by executing the command cat file. However, special built-in utilities like exit cannot be invoked this way. For example, the command nice exit does not have a well-defined behavior: it may generate an error message instead of exiting.
Here is a list of the special built-in utilities that are standardized by POSIX 1003.1-2004.
. : break continue eval exec exit export readonly return set shift times trap unset
For example, because ., :, and exec are special, the commands nice . foo.sh, nice :, and nice exec pwd do not work as you might expect.
Many shells extend this list. For example, Bash has several extra special built-in utilities such as history and suspend, and with Bash the command nice suspend generates an error message instead of suspending.
In a few cases, the gnu utilities' default behavior is incompatible with the POSIX standard. To suppress these incompatibilities, define the POSIXLY_CORRECT environment variable. Unless you are checking for POSIX conformance, you probably do not need to define POSIXLY_CORRECT.
Newer versions of POSIX are occasionally incompatible with older versions. For example, older versions of POSIX required the command sort +1 to sort based on the second and succeeding fields in each input line, but starting with POSIX 1003.1-2001 the same command is required to sort the file named +1, and you must instead use the command sort -k 2 to get the field-based sort.
The gnu utilities normally conform to the version of POSIX that is standard for your system. To cause them to conform to a different version of POSIX, define the _POSIX2_VERSION environment variable to a value of the form yyyymm specifying the year and month the standard was adopted. Two values are currently supported for _POSIX2_VERSION: 199209 stands for POSIX 1003.2-1992, and 200112 stands for POSIX 1003.1-2001. For example, if you have a newer system but are running software that assumes an older version of POSIX and uses sort +1 or tail +10, you can work around any compatibility problems by setting _POSIX2_VERSION=199209 in your environment.
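For example (hypothetical file name), to run a command that still relies on the obsolete key syntax, and its modern equivalent:

_POSIX2_VERSION=199209 sort +1 access.log   # sort by the second and succeeding fields
sort -k 2 access.log                        # the portable equivalent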
These commands read and write entire files, possibly transforming them in some way.
cat copies each file (- means standard input), or standard input if none are given, to standard output. Synopsis:
cat [option] [file]...
The program accepts the following options. Also see Common options.
On systems like MS-DOS that distinguish between text and binary files, cat normally reads and writes in binary mode. However, cat reads in text mode if one of the options -bensAE is used or if cat is reading from standard input and standard input is a terminal. Similarly, cat writes in text mode if one of the options -bensAE is used or if standard output is a terminal.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Output f's contents, then standard input, then g's contents.
cat f - g

# Copy standard input to standard output.
cat
tac copies each file (- means standard input), or standard input if none are given, to standard output, reversing the records (lines by default) in each separately. Synopsis:
tac [option]... [file]...
Records are separated by instances of a string (newline by default). By default, this separator string is attached to the end of the record that it follows in the file.
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
nl writes each file (- means standard input), or standard input if none are given, to standard output, with line numbers added to some or all of the lines. Synopsis:
nl [option]... [file]...
nl decomposes its input into (logical) pages; by default, the line number is reset to 1 at the top of each logical page. nl treats all of the input files as a single document; it does not reset line numbers or logical pages between files.
A logical page consists of three sections: header, body, and footer. Any of the sections can be empty. Each can be numbered in a different style from the others.
The beginnings of the sections of logical pages are indicated in the input file by a line containing exactly one of these delimiter strings:

\:\:\:
     start of header;
\:\:
     start of body;
\:
     start of footer.
The two characters from which these strings are made can be changed from \ and : via options (see below), but the pattern and length of each string cannot be changed.
A section delimiter is replaced by an empty line on output. Any text that comes before the first section delimiter string in the input file is considered to be part of a body section, so nl treats a file that contains no section delimiters as a single body section.
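A minimal sketch (the file name is hypothetical; -b selects the body numbering style and is not described in the excerpt above):

nl draft.txt         # number non-blank body lines (the default)
nl -b a draft.txt    # number every line, including blank lines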
The program accepts the following options. Also see Common options.
Select the line numbering format (default is rn):

ln
     left justified, no leading zeros;
rn
     right justified, no leading zeros;
rz
     right justified, leading zeros.
An exit status of zero indicates success, and a nonzero value indicates failure.
od writes an unambiguous representation of each file (- means standard input), or standard input if none are given. Synopses:
od [option]... [file]...
od [-abcdfilosx]... [file] [[+]offset[.][b]]
od [option]... --traditional [file] [[+]offset[.][b] [[+]label[.][b]]]
Each line of output consists of the offset in the input, followed by groups of data from the file. By default, od prints the offset in octal, and each group of file data is a C short int's worth of input printed as a single octal number.
If offset is given, it specifies how many input bytes to skip before formatting and writing. By default, it is interpreted as an octal number, but the optional trailing decimal point causes it to be interpreted as decimal. If no decimal point is specified and the offset begins with 0x or 0X it is interpreted as a hexadecimal number. If there is a trailing b, the number of bytes skipped will be offset multiplied by 512.
If a command is of both the first and second forms, the second form is assumed if the last operand begins with + or (if there are two operands) a digit. For example, in od foo 10 and od +10 the 10 is an offset, whereas in od 10 the 10 is a file name.
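Two illustrative invocations (the file name disk.img is hypothetical):

od -j 512 -x disk.img    # skip 512 bytes, then dump two-byte units in hexadecimal
od disk.img +1000.       # traditional form: start at decimal offset 1000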
The program accepts the following options. Also see Common options.
The default is octal.
Output at most bytes bytes of the input. Prefixes and suffixes on bytes are interpreted as for the -j option.
If n is omitted with --strings, the default is 3.
Adding a trailing “z” to any type specification appends a display of the ASCII character representation of the printable characters to the output line generated by the type specification.
The type a outputs things like sp for space, nl for newline, and nul for a null (zero) byte. Type c outputs a space, \n, and \0, respectively.
Except for types a and c, you can specify the number of bytes to use in interpreting each number in the given data type by following the type indicator character with a decimal integer. Alternately, you can specify the size of one of the C compiler's built-in data types by following the type indicator character with one of the following characters. For integers (d, o, u, x):
For floating point (f):
Dump n input bytes per output line. This must be a multiple of the least common multiple of the sizes associated with the specified output types.
If this option is not given at all, the default is 16. If n is omitted, the default is 32.
The next several options are shorthands for format specifications. gnu od accepts any combination of shorthands and format specification options. These options accumulate.
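For example (hypothetical file name), combining an offset radix, a type specification with the trailing z described above, and the -v option, which outputs duplicate lines instead of abbreviating them:

od -A x -t x1z -v firmware.bin   # hexadecimal offsets, one-byte hex groups, printable characters appended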
od --traditional [file] [[+]offset[.][b] [[+]label[.][b]]]
can be used to specify at most one file and optional arguments specifying an offset and a pseudo-start address, label. The label argument is interpreted just like offset, but it specifies an initial pseudo-address. The pseudo-addresses are displayed in parentheses following any normal address.
An exit status of zero indicates success, and a nonzero value indicates failure.
These commands reformat the contents of files.
fmt fills and joins lines to produce output lines of (at most) a given number of characters (75 by default). Synopsis:
fmt [option]... [file]...
fmt reads from the specified file arguments (or standard input if none are given), and writes to standard output.
By default, blank lines, spaces between words, and indentation are preserved in the output; successive input lines with different indentation are not joined; tabs are expanded on input and introduced on output.
fmt prefers breaking lines at the end of a sentence, and tries to avoid line breaks after the first word of a sentence or before the last word of a sentence. A sentence break is defined as either the end of a paragraph or a word ending in any of .?!, followed by two spaces or end of line, ignoring any intervening parentheses or quotes. Like TeX, fmt reads entire “paragraphs” before choosing line breaks; the algorithm is a variant of that given by Donald E. Knuth and Michael F. Plass in “Breaking Paragraphs Into Lines”, Software—Practice & Experience 11, 11 (November 1981), 1119–1184.
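For example (hypothetical file name), to refill a draft to lines of at most 60 characters using the width option:

fmt -w 60 draft.txt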
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
pr writes each file (- means standard input), or standard input if none are given, to standard output, paginating and optionally outputting in multicolumn format; optionally merges all files, printing all in parallel, one per column. Synopsis:
pr [option]... [file]...
By default, a 5-line header is printed at each page: two blank lines; a line with the date, the file name, and the page count; and two more blank lines. A footer of five blank lines is also printed. With the -F option, a 3-line header is printed: the leading two blank lines are omitted; no footer is used. The default page_length in both cases is 66 lines. The default number of text lines changes from 56 (without -F) to 63 (with -F). The text line of the header takes the form date string page, with spaces inserted around string so that the line takes up the full page_width. Here, date is the date (see the -D or --date-format option for details), string is the centered header string, and page identifies the page number. The LC_MESSAGES locale category affects the spelling of page; in the default C locale, it is Page number where number is the decimal page number.
Form feeds in the input cause page breaks in the output. Multiple form feeds produce empty pages.
Columns are of equal width, separated by an optional string (default is space). For multicolumn output, lines will always be truncated to page_width (default 72), unless you use the -J option. For single column output no line truncation occurs by default. Use -W option to truncate lines in that case.
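Two illustrative invocations (the file names and header string are hypothetical):

pr -2 -h "Status Report" report.txt   # paginate in two columns with a custom header string
pr -m -t alpha.txt beta.txt           # merge the files side by side, omitting headers and footers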
The program accepts the following options. Also see Common options.
Normally the date format defaults to %Y-%m-%d %H:%M (for example, 2001-12-04 23:59); but if the POSIXLY_CORRECT environment variable is set and the LC_TIME locale category specifies the POSIX locale, the default is %b %e %H:%M %Y (for example, Dec  4 23:59 2001).
Time stamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ (The GNU C Library).
An exit status of zero indicates success, and a nonzero value indicates failure.
fold writes each file (- means standard input), or standard input if none are given, to standard output, breaking long lines. Synopsis:
fold [option]... [file]...
By default, fold breaks lines wider than 80 columns. The output is split into as many lines as necessary.
fold counts screen columns by default; thus, a tab may count more than one column, backspace decreases the column count, and carriage return sets the column to zero.
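For example (hypothetical file name), to wrap lines at 72 columns, breaking at blanks rather than mid-word:

fold -s -w 72 notes.txt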
The program accepts the following options. Also see Common options.
For compatibility fold supports an obsolete option syntax -width. New scripts should use -w width instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
These commands output pieces of the input.
head prints the first part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of -. Synopsis:
head [option]... [file]...
If more than one file is specified, head prints a one-line header consisting of:
==> file name <==
before the output for each file.
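Two brief examples (file names are hypothetical):

head -n 3 *.conf     # first three lines of each file
head -c 1k core.bin  # first 1024 bytes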
The program accepts the following options. Also see Common options.
For compatibility head also supports an obsolete option syntax -countoptions, which is recognized only if it is specified first. count is a decimal number optionally followed by a size letter (b, k, m) as in -c, or l to mean count by lines, or other option letters (cqv). New scripts should use -c count or -n count instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
tail prints the last part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of -. Synopsis:
tail [option]... [file]...
If more than one file is specified, tail prints a one-line header consisting of:
==> file name <==
before the output for each file.
gnu tail can output any amount of data (some other versions of tail cannot). It also has no -r option (print in reverse), since reversing a file is really a different job from printing the end of a file; BSD tail (which is the one with -r) can only reverse files that are at most as large as its buffer, which is typically 32 KiB. A more reliable and versatile way to reverse files is the gnu tac command.
If any option-argument is a number n starting with a +, tail begins printing with the nth item from the start of each file, instead of from the end.
The program accepts the following options. Also see Common options.
There are two ways to specify how you'd like to track files with this option, but that difference is noticeable only when a followed file is removed or renamed. If you'd like to continue to track the end of a growing file even after it has been unlinked, use --follow=descriptor. This is the default behavior, but it is not useful if you're tracking a log file that may be rotated (removed or renamed, then reopened). In that case, use --follow=name to track the named file by reopening it periodically to see if it has been removed and recreated by some other program.
No matter which method you use, if the tracked file is determined to have shrunk, tail prints a message saying the file has been truncated and resumes tracking the end of the file from the newly-determined endpoint.
When a file is removed, tail's behavior depends on whether it is following the name or the descriptor. When following by name, tail can detect that a file has been removed and gives a message to that effect, and if --retry has been specified it will continue checking periodically to see if the file reappears. When following a descriptor, tail does not detect that the file has been unlinked or renamed and issues no message; even though the file may no longer be accessible via its original name, it may still be growing.
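For example (hypothetical log file), to keep following a log that is periodically rotated, retrying while the file is temporarily absent:

tail --follow=name --retry /var/log/app.log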
The option values descriptor and name may be specified only with the long form of the option, not with -f.

With the --pid=pid option, tail terminates shortly after the process with the given process ID dies; this is chiefly useful when you start the tail -f process yourself, for example to watch the output of a build only until the build finishes:
$ make >& makerr & tail --pid=$! -f makerr
If you specify a pid that is not in use or that does not correspond to the process that is writing to the tailed files, then tail may terminate long before any files stop growing or it may not terminate until long after the real writer has terminated. Note that --pid cannot be supported on some systems; tail will print a warning if this is the case.
With --max-unchanged-stats=n, when tailing a file by name, if there have been n (default 5) consecutive iterations for which the file has not changed, then tail opens and fstats the file to determine if that file name is still associated with the same device/inode-number pair as before. When following a log file that is rotated, this is approximately the number of seconds between when tail prints the last pre-rotation lines and when it prints the lines that have accumulated in the new log file. This option is meaningful only when following by name.
For compatibility tail also supports an obsolete usage tail -count[bcl][f] [file], which is recognized only if it does not conflict with the usage described above. count is an optional decimal number optionally followed by a size letter (b, c, l) to mean count by 512-byte blocks, bytes, or lines, optionally followed by f which has the same meaning as -f. New scripts should use -c count[b], -n count, and/or -f instead.
On older systems, the leading - can be replaced by + in the obsolete option syntax with the same meaning as in counts, and obsolete usage overrides normal usage when the two conflict. This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance), but portable scripts should avoid commands whose behavior depends on this variable. For example, use tail -- - main.c or tail main.c rather than the ambiguous tail - main.c, tail -c4 or tail -c 10 4 rather than the ambiguous tail -c 4, and tail ./+4 or tail -n +4 rather than the ambiguous tail +4.
An exit status of zero indicates success, and a nonzero value indicates failure.
split creates output files containing consecutive sections of input (standard input if none is given or input is -). Synopsis:
split [option] [input [prefix]]
By default, split puts 1000 lines of input (or whatever is left over for the last section), into each output file.
The output files' names consist of prefix (x by default) followed by a group of characters (aa, ab, ... by default), such that concatenating the output files in traditional sorted order by file name produces the original input file. If the output file names are exhausted, split reports an error without deleting the output files that it did create.
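For example (hypothetical names), to split a large file into pieces of at most 500 lines each, named part_aa, part_ab, and so on:

split -l 500 big.log part_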
The program accepts the following options. Also see Common options.
For compatibility split also supports an obsolete option syntax -lines. New scripts should use -l lines instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
csplit creates zero or more output files containing sections of input (standard input if input is -). Synopsis:
csplit [option]... input pattern...
The contents of the output files are determined by the pattern arguments, as detailed below. An error occurs if a pattern argument refers to a nonexistent line of the input file (e.g., if no remaining line matches a given regular expression). After every pattern has been matched, any remaining input is copied into one last output file.
By default, csplit prints the number of bytes written to each output file after it has been created.
The types of pattern arguments are:
The output files' names consist of a prefix (xx by default) followed by a suffix. By default, the suffix is an ascending sequence of two-digit decimal numbers from 00 to 99. In any case, concatenating the output files in sorted order by file name produces the original input file.
By default, if csplit encounters an error or receives a hangup, interrupt, quit, or terminate signal, it removes any output files that it has created so far before it exits.
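For example (hypothetical file name, assuming the /regexp/ match pattern and the '{*}' repeat count), to split a log at every line beginning with “BEGIN”, repeating for as many matches as there are:

csplit server.log '/^BEGIN/' '{*}'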
The program accepts the following options. Also see Common options.
Use suffix as the output file name suffix. When this option is specified, the suffix string must include exactly one printf(3)-style conversion specification, possibly including format specification flags, a field width, a precision specification, or all of these kinds of modifiers. The format letter must convert a binary integer argument to readable form; thus, only d, i, u, o, x, and X conversions are allowed. The entire suffix is given (with the current output file number) to sprintf(3) to form the file name suffixes for each of the individual output files in turn. If this option is used, the --digits option is ignored.
An exit status of zero indicates success, and a nonzero value indicates failure.
These commands generate just a few numbers representing entire contents of files.
wc counts the number of bytes, characters, whitespace-separated words, and newlines in each given file, or standard input if none are given or for a file of -. Synopsis:
wc [option]... [file]...
wc prints one line of counts for each file, and if the file was given as an argument, it prints the file name following the counts. If more than one file is given, wc prints a final line containing the cumulative counts, with the file name total. The counts are printed in this order: newlines, words, characters, bytes. Each count is printed right-justified in a field with at least one space between fields so that the numbers and file names normally line up nicely in columns. The width of the count fields varies depending on the inputs, so you should not depend on a particular field width. However, as a GNU extension, if only one count is printed, it is guaranteed to be printed without leading spaces.
By default, wc prints three counts: the newline, words, and byte counts. Options can specify that only certain counts be printed. Options do not undo others previously given, so
wc --bytes --words
prints both the byte counts and the word counts.
With the --max-line-length option, wc prints the length of the longest line per file, and if there is more than one file it prints the maximum (not the sum) of those lengths.
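Two brief examples (file names are hypothetical):

wc -l *.c      # line count for each file, plus a total line
wc -L README   # length of the longest line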
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
sum computes a 16-bit checksum for each given file, or standard input if none are given or for a file of -. Synopsis:
sum [option]... [file]...
sum prints the checksum for each file followed by the number of blocks in the file (rounded up). If more than one file is given, file names are also printed (by default). (With the --sysv option, corresponding file names are printed when there is at least one file argument.)
By default, gnu sum computes checksums using an algorithm compatible with BSD sum and prints file sizes in units of 1024-byte blocks.
The program accepts the following options. Also see Common options.
sum is provided for compatibility; the cksum program (see next section) is preferable in new applications.
An exit status of zero indicates success, and a nonzero value indicates failure.
cksum computes a cyclic redundancy check (CRC) checksum for each given file, or standard input if none are given or for a file of -. Synopsis:
cksum [option]... [file]...
cksum prints the CRC checksum for each file along with the number of bytes in the file, and the file name unless no arguments were given.
cksum is typically used to ensure that files transferred by unreliable means (e.g., netnews) have not been corrupted, by comparing the cksum output for the received files with the cksum output for the original files (typically given in the distribution).
The CRC algorithm is specified by the POSIX standard. It is not compatible with the BSD or System V sum algorithms (see the previous section); it is more robust.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
md5sum computes a 128-bit checksum (or fingerprint or message-digest) for each specified file.
Note: The MD5 digest is more reliable than a simple CRC (provided by the cksum command) for detecting accidental file corruption, as the chances of accidentally having two files with identical MD5 are vanishingly small. However, it should not be considered truly secure against malicious tampering: although finding a file with a given MD5 fingerprint, or modifying a file so as to retain its MD5, is considered infeasible at the moment, it is known how to produce different files with identical MD5 (a “collision”), something which can be a security issue in certain contexts. For more secure hashes, consider using SHA-1 or SHA-2. See sha1sum invocation, and sha2 utilities.
If a file is specified as -, or if no files are given, md5sum computes the checksum for the standard input. md5sum can also determine whether a file and checksum are consistent. Synopsis:
md5sum [option]... [file]...
For each file, md5sum outputs the MD5 checksum, a flag indicating a binary or text input file, and the file name. If file is omitted or specified as -, standard input is read.
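For example (hypothetical names), to record checksums and verify them later with the checking option:

md5sum *.tar.gz > MD5SUMS
md5sum --check MD5SUMS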
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
sha1sum computes a 160-bit checksum for each specified file. The usage and options of this command are precisely the same as for md5sum. See md5sum invocation.
Note: The SHA-1 digest is more secure than MD5, and no collisions of it are known (different files having the same fingerprint). However, it is known that they can be produced with considerable, but not unreasonable, resources. For this reason, it is generally considered that SHA-1 should be gradually phased out in favor of the more secure SHA-2 hash algorithms. See sha2 utilities.
The commands sha224sum, sha256sum, sha384sum and sha512sum compute checksums of various lengths (respectively 224, 256, 384 and 512 bits), collectively known as the SHA-2 hashes. The usage and options of these commands are precisely the same as for md5sum. See md5sum invocation.
Note: The SHA384 and SHA512 digests are considerably slower to compute, especially on 32-bit computers, than SHA224 or SHA256.
These commands work with (or produce) sorted files.
sort sorts, merges, or compares all the lines from the given files, or standard input if none are given or for a file of -. By default, sort writes the results to standard output. Synopsis:
sort [option]... [file]...
sort has three modes of operation: sort (the default), merge, and check for sortedness. The following options change the operation mode:
A pair of lines is compared as follows: sort compares each pair of fields, in the order specified on the command line, according to the associated ordering options, until a difference is found or no fields are left. If no key fields are specified, sort uses a default key of the entire line. Finally, as a last resort when all keys compare equal, sort compares entire lines as if no ordering options other than --reverse (-r) were specified. The --stable (-s) option disables this last-resort comparison so that lines in which all fields compare equal are left in their original relative order. The --unique (-u) option also disables the last-resort comparison.
Unless otherwise specified, all comparisons use the character collating sequence specified by the LC_COLLATE locale.
gnu sort (as specified for all gnu utilities) has no limit on input line length or restrictions on bytes allowed within lines. In addition, if the final byte of an input file is not a newline, gnu sort silently supplies one. A line's trailing newline is not part of the line for comparison purposes.
sort exits with one of the following status values:

0 if no error occurred
1 if invoked with -c and the input is not properly sorted
2 if an error occurred
If the environment variable TMPDIR is set, sort uses its value as the directory for temporary files instead of /tmp. The --temporary-directory (-T) option in turn overrides the environment variable.
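For example (hypothetical paths), either of these sorts a large file using a scratch directory on a roomier disk:

TMPDIR=/mnt/scratch sort -o big.sorted big.txt
sort -T /mnt/scratch -o big.sorted big.txt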
The following options affect the ordering of output lines. They may be specified globally or as part of a specific key field. If no key fields are specified, global options apply to comparison of entire lines; otherwise the global options are inherited by key fields that do not specify any special options of their own. In pre-POSIX versions of sort, global options affect only later key fields, so portable shell scripts should specify global options first.
Sort numerically, using the standard C function strtod to convert a prefix of each line to a double-precision floating point number. This allows floating point numbers to be specified in scientific notation, like 1.0e-34 and 10e100. The LC_NUMERIC locale determines the decimal-point character. Do not report overflow, underflow, or conversion errors. Use the following collating sequence:

lines that do not start with numbers (all considered to be equal);
NaNs (“Not a Number” values, in IEEE floating point arithmetic) in a consistent but machine-dependent order;
minus infinity;
finite numbers in ascending numeric order (with -0 and +0 equal);
plus infinity.

Use this option only if there is no alternative; it is much slower than --numeric-sort (-n) and it can lose information when converting to floating point.
Numeric sort uses what might be considered an unconventional method to compare strings representing floating point numbers. Rather than first converting each string to the C double type and then comparing those values, sort aligns the decimal-point characters in the two strings and compares the strings a character at a time. One benefit of using this approach is its speed. In practice this is much more efficient than performing the two corresponding string-to-double (or even string-to-integer) conversions and then comparing doubles. In addition, there is no corresponding loss of precision. Converting each string to double before comparison would limit precision to about 16 digits on most systems.

Neither a leading + nor exponential notation is recognized. To compare such strings numerically, use the --general-numeric-sort (-g) option.
Other options are:
Write output to output-file instead of standard output. Normally, sort reads all input before opening output-file, so you can safely sort a file in place by using commands like sort -o F F and cat F | sort -o F. However, sort with --merge (-m) can open the output file before reading all input, so a command like cat F | sort -m -o F - G is not safe as sort might start writing F before cat is done reading it.

On newer systems, -o cannot appear after an input file if POSIXLY_CORRECT is set, e.g., sort F -o F. Portable scripts should specify -o output-file before any input files.
This option can improve the performance of sort by causing it to start with a larger or smaller sort buffer than the default. However, this option affects only the initial buffer size. The buffer grows beyond size if sort encounters input lines larger than size.
To specify a null character (ASCII nul) as the field separator, use the two-character string \0, e.g., sort -t '\0'.
This option also disables the default last-resort comparison. The commands sort -u and sort | uniq are equivalent, but this equivalence does not extend to arbitrary sort options. For example, sort -n -u inspects only the value of the initial numeric string when checking for uniqueness, whereas sort -n | uniq inspects the entire line. See uniq invocation.
Historical (BSD and System V) implementations of sort have differed in their interpretation of some options, particularly -b, -f, and -n. gnu sort follows the POSIX behavior, which is usually (but not always!) like the System V behavior. According to POSIX, -n no longer implies -b. For consistency, -M has been changed in the same way. This may affect the meaning of character positions in field specifications in obscure cases. The only fix is to add an explicit -b.
A position in a sort field specified with the -k option has the form f.c, where f is the number of the field to use and c is the number of the first character from the beginning of the field. In a start position, an omitted .c stands for the field's first character. In an end position, an omitted or zero .c stands for the field's last character. If the start field falls after the end of the line or after the end field, the field is empty. If the -b option was specified, the .c part of a field specification is counted from the first nonblank character of the field.
A sort key position may also have any of the option letters Mbdfinr appended to it, in which case the global ordering options are not used for that particular field. The -b option may be independently attached to either or both of the start and end positions of a field specification, and if it is inherited from the global options it will be attached to both. If input lines can contain leading or adjacent blanks and -t is not used, then -k is typically combined with -b, -g, -M, or -n; otherwise the varying numbers of leading blanks in fields can cause confusing results.
Keys can span multiple fields.
On older systems, sort supports an obsolete origin-zero syntax +pos1 [-pos2] for specifying sort keys. This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance), but portable scripts should avoid commands whose behavior depends on this variable. For example, use sort ./+2 or sort -k 3 rather than the ambiguous sort +2.
Here are some examples to illustrate various combinations of options.
Sort in descending (reverse) numeric order:

sort -n -r

Sort alphabetically, omitting the first and second fields and the blanks at the start of the third field. This uses a single key composed of the characters beginning at the start of the first nonblank character in field three and extending to the end of each line:

sort -k 3b

Sort numerically on the second field and resolve ties by sorting alphabetically on the third and fourth characters of field five, using : as the field delimiter:

sort -t : -k 2,2n -k 5.3,5.4
Note that if you had written -k 2n instead of -k 2,2n sort would have used all characters beginning in the second field and extending to the end of the line as the primary numeric key. For the large majority of applications, treating keys spanning more than one field as numeric will not do what you expect.
Also note that the n modifier was applied to the field-end specifier for the first key. It would have been equivalent to specify -k 2n,2 or -k 2n,2n. All modifiers except b apply to the associated field, regardless of whether the modifier character is attached to the field-start and/or the field-end part of the key specifier.
sort -t : -k 5b,5 -k 3,3n /etc/passwd
sort -t : -n -k 5b,5 -k 3,3 /etc/passwd
sort -t : -b -k 5,5 -k 3,3n /etc/passwd
These three commands have equivalent effect. The first specifies that the first key's start position ignores leading blanks and the second key is sorted numerically. The other two commands rely on global options being inherited by sort keys that lack modifiers. The inheritance works in this case because -k 5b,5b and -k 5b,5 are equivalent, as the location of a field-end lacking a .c character position is not affected by whether initial blanks are skipped.
Sort a set of log files, primarily by IPv4 address and secondarily by time stamp. If two lines' primary and secondary keys are identical, output the lines in the same order that they were input. The log files contain lines that look like this:

4.150.156.3 - - [01/Apr/2004:06:31:51 +0000] message 1
211.24.3.231 - - [24/Apr/2004:20:17:39 +0000] message 2
Fields are separated by exactly one space. Sort IPv4 addresses lexicographically, e.g., 212.61.52.2 sorts before 212.129.233.201 because 61 is less than 129.
sort -s -t ' ' -k 4.9n -k 4.5M -k 4.2n -k 4.14,4.21 file*.log |
sort -s -t '.' -k 1,1n -k 2,2n -k 3,3n -k 4,4n
This example cannot be done with a single sort invocation, since IPv4 address components are separated by . while dates come just after a space. So it is broken down into two invocations of sort: the first sorts by time stamp and the second by IPv4 address. The time stamp is sorted by year, then month, then day, and finally by hour-minute-second field, using -k to isolate each field. Except for hour-minute-second there's no need to specify the end of each key field, since the n and M modifiers sort based on leading prefixes that cannot cross field boundaries. The IPv4 addresses are sorted lexicographically. The second sort uses -s so that ties in the primary key are broken by the secondary key; the first sort uses -s so that the combination of the two sorts is stable.
Generate a tags file in case-insensitive sorted order:

find src -type f -print0 | sort -z -f | xargs -0 etags --append
The use of -print0, -z, and -0 in this case means that file names that contain blanks or other special characters are not broken up by the sort operation.
uniq writes the unique lines in the given input, or standard input if nothing is given or for an input name of -. Synopsis:
uniq [option]... [input [output]]
By default, uniq prints its input lines, except that it discards all but the first of adjacent repeated lines, so that no output lines are repeated. Optionally, it can instead discard lines that are not repeated, or all repeated lines.
The input need not be sorted, but repeated input lines are detected only if they are adjacent. If you want to discard non-adjacent duplicate lines, perhaps you want to use sort -u. See sort invocation.
Comparisons use the character collating sequence specified by the LC_COLLATE locale category.
If no output file is specified, uniq writes to standard output.
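For example (hypothetical file name; -c is the occurrence-counting option), to report how many times each line occurs, most frequent first:

sort access.log | uniq -c | sort -rn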
The program accepts the following options. Also see Common options.
For compatibility uniq supports an obsolete option syntax -n. New scripts should use -f n instead.

On older systems, uniq supports an obsolete option syntax +n. This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance), but portable scripts should avoid commands whose behavior depends on this variable. For example, use uniq ./+10 or uniq -s 10 rather than the ambiguous uniq +10.
Note that when groups are delimited and the input stream contains two or more consecutive blank lines, then the output is ambiguous. To avoid that, filter the input through tr -s '\n' to replace each sequence of consecutive newlines with a single newline.
This is a gnu extension.
An exit status of zero indicates success, and a nonzero value indicates failure.
comm writes to standard output lines that are common, and lines that are unique, to two input files; a file name of - means standard input. Synopsis:
comm [option]... file1 file2
Before comm can be used, the input files must be sorted using the collating sequence specified by the LC_COLLATE locale. If an input file ends in a non-newline character, a newline is silently appended. The sort command with no options always outputs a file that is suitable input to comm.
With no options, comm produces three-column output. Column one contains lines unique to file1, column two contains lines unique to file2, and column three contains lines common to both files. Columns are separated by a single TAB character.
The options -1, -2, and -3 suppress printing of the corresponding columns. Also see Common options.
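For example (hypothetical, pre-sorted files), to print only the lines common to both inputs, suppress columns one and two:

comm -12 words-a.sorted words-b.sorted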
Unlike some other comparison utilities, comm has an exit status that does not depend on the result of the comparison. Upon normal completion comm produces an exit code of zero. If there is an error it exits with nonzero status.
tsort performs a topological sort on the given file, or standard input if no input file is given or for a file of -. For more details and some history, see tsort background. Synopsis:
tsort [option] [file]
tsort reads its input as pairs of strings, separated by blanks, indicating a partial ordering. The output is a total ordering that corresponds to the given partial ordering.
For example
tsort <<EOF
a b c
d e f
b c
d e
EOF
will produce the output
a
b
c
d
e
f
Consider a more realistic example. You have a large set of functions all in one file, and they may all be declared static except one. Currently that one (say main) is the first function defined in the file, and the ones it calls directly follow it, followed by those they call, etc. Let's say that you are determined to take advantage of prototypes, so you have to choose between declaring all of those functions (which means duplicating a lot of information from the definitions) and rearranging the functions so that as many as possible are defined before they are used. One way to automate the latter process is to get a list for each function of the functions it calls directly. Many programs can generate such lists. They describe a call graph. Consider the following list, in which a given line indicates that the function on the left calls the one on the right directly.
main parse_options
main tail_file
main tail_forever
tail_file pretty_name
tail_file write_header
tail_file tail
tail_forever recheck
tail_forever pretty_name
tail_forever write_header
tail_forever dump_remainder
tail tail_lines
tail tail_bytes
tail_lines start_lines
tail_lines dump_remainder
tail_lines file_lines
tail_lines pipe_lines
tail_bytes xlseek
tail_bytes start_bytes
tail_bytes dump_remainder
tail_bytes pipe_bytes
file_lines dump_remainder
recheck pretty_name
then you can use tsort to produce an ordering of those functions that satisfies your requirement.
example$ tsort call-graph | tac
dump_remainder
start_lines
file_lines
pipe_lines
xlseek
start_bytes
pipe_bytes
tail_lines
tail_bytes
pretty_name
write_header
tail
recheck
parse_options
tail_file
tail_forever
main
tsort detects any cycles in the input and writes the first cycle encountered to standard error.
Note that for a given partial ordering, generally there is no unique total ordering. In the context of the call graph above, the function parse_options may be placed anywhere in the list as long as it precedes main.
The only options are --help and --version. See Common options.
tsort exists because very early versions of the Unix linker processed an archive file exactly once, and in order. As ld read each object in the archive, it decided whether it was needed in the program based on whether it defined any symbols which were undefined at that point in the link.
This meant that dependencies within the archive had to be handled specially. For example, scanf probably calls read. That means that in a single pass through an archive, it was important for scanf.o to appear before read.o, because otherwise a program which calls scanf but not read might end up with an unexpected unresolved reference to read.
The way to address this problem was to first generate a set of dependencies of one object file on another. This was done by a shell script called lorder. The GNU tools don't provide a version of lorder, as far as I know, but you can still find it in BSD distributions.
Then you ran tsort over the lorder output, and you used the resulting sort to define the order in which you added objects to the archive.
This whole procedure has been obsolete since about 1980, because Unix archives now contain a symbol table (traditionally built by ranlib, now generally built by ar itself), and the Unix linker uses the symbol table to effectively make multiple passes over an archive file.
Anyhow, that's where tsort came from: it was written to solve an old problem with the way the linker handled archive files, a problem that has since been solved in different ways.
An exit status of zero indicates success, and a nonzero value indicates failure.
ptx reads a text file and essentially produces a permuted index, with each keyword in its context. The calling sketch is either one of:
ptx [option ...] [file ...]
ptx -G [option ...] [input [output]]
The -G (or its equivalent: --traditional) option disables all gnu extensions and reverts to traditional mode, thus introducing some limitations and changing several of the program's default option values. When -G is not specified, gnu extensions are always enabled. gnu extensions to ptx are documented wherever appropriate in this document. For the full list, see Compatibility in ptx.
Individual options are explained in the following sections.
When gnu extensions are enabled, there may be zero, one or several files after the options. If there is no file, the program reads the standard input. If there is one or several files, they give the name of input files which are all read in turn, as if all the input files were concatenated. However, there is a full contextual break between each file and, when automatic referencing is requested, file names and line numbers refer to individual text input files. In all cases, the program outputs the permuted index to the standard output.
When gnu extensions are not enabled, that is, when the program operates in traditional mode, there may be zero, one or two parameters besides the options. If there are no parameters, the program reads the standard input and outputs the permuted index to the standard output. If there is only one parameter, it names the text input to be read instead of the standard input. If two parameters are given, they give respectively the name of the input file to read and the name of the output file to produce. Be very careful to note that, in this case, the contents of the file given by the second parameter are destroyed. This behavior is dictated by System V ptx compatibility; gnu Standards normally discourage output parameters not introduced by an option.
Note that for any file named as the value of an option or as an input text file, a single dash - may be used, in which case standard input is assumed. However, it would not make sense to use this convention more than once per program invocation.
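A minimal sketch (hypothetical file name), producing a permuted index on standard output with automatic references (the -A option mentioned later in this section):

ptx -A chapter1.txt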
An exit status of zero indicates success, and a nonzero value indicates failure.
As it is set up now, the program assumes that the input file is coded using 8-bit ISO 8859-1 code, also known as Latin-1 character set, unless it is compiled for MS-DOS, in which case it uses the character set of the IBM-PC. (gnu ptx is not known to work on smaller MS-DOS machines anymore.) Compared to 7-bit ASCII, the set of characters which are letters is different; this alters the behavior of regular expression matching. Thus, the default regular expression for a keyword allows foreign or diacriticized letters. Keyword sorting, however, is still crude; it obeys the underlying character set ordering quite blindly.
When gnu extensions are enabled, the only way to avoid newline as a break character is to write all the break characters in the file with no newline at all, not even at the end of the file. When gnu extensions are disabled, spaces, tabs and newlines are always considered as break characters even if not included in the Break file.
There is a default Ignore file used by ptx when this option is not specified, usually found in /usr/local/lib/eign if this has not been changed at installation time. If you want to deactivate the default Ignore file, specify /dev/null instead.
There is no default for the Only file. When both an Only file and an Ignore file are specified, a word is considered a keyword only if it is listed in the Only file and not in the Ignore file.
Using this option, the program does not try very hard to remove references from contexts in output, but it succeeds in doing so when the context ends exactly at the newline. If option -r is used with the default value of -S, or when gnu extensions are disabled, this condition is always met and references are completely excluded from the output contexts.
[.?!][]\"')}]*\\($\\|\t\\| \\)[ \t\n]*
Whenever gnu extensions are disabled, or if the -r option is used, ends of lines are used instead; in this case, the default regexp is just:
\n
Using an empty regexp is equivalent to completely disabling end of line or end of sentence recognition. In this case, the whole file is considered to be a single big line or sentence. The user might want to disallow all truncation flag generation as well, through option -F "". See Syntax of Regular Expressions (The GNU Emacs Manual).
When the keywords happen to be near the beginning of the input line or sentence, this often creates an unused area at the beginning of the output context line; when the keywords happen to be near the end of the input line or sentence, this often creates an unused area at the end of the output context line. The program tries to fill those unused areas by wrapping around context in them; the tail of the input line or sentence is used to fill the unused area on the left of the output line; the head of the input line or sentence is used to fill the unused area on the right of the output line.
As a matter of convenience to the user, many usual backslashed escape sequences from the C language are recognized and converted to the corresponding characters by ptx itself.
An empty regexp is equivalent to not using this option. See Syntax of Regular Expressions (The GNU Emacs Manual).
As a matter of convenience to the user, many usual backslashed escape sequences, as found in the C language, are recognized and converted to the corresponding characters by ptx itself.
Output format is mainly controlled by the -O and -T options described in the table below. When neither -O nor -T is selected, and if gnu extensions are enabled, the program chooses an output format suitable for a dumb terminal. Each keyword occurrence is output to the center of one line, surrounded by its left and right contexts. Each field is properly justified, so the concordance output can be readily observed. As a special feature, if automatic references are selected by option -A and are output before the left context, that is, if option -R is not selected, then a colon is added after the reference; this nicely interfaces with gnu Emacs next-error processing. In this default output format, each white space character, like newline and tab, is merely changed to exactly one space, with no special attempt to compress consecutive spaces. This might change in the future. Except for those white space characters, every other character of the underlying set of 256 characters is transmitted verbatim.
Output format is further controlled by the following options.
This option is automatically selected whenever gnu extensions are disabled.
string may have more than one character, as in -F .... Also, in the particular case when string is empty (-F ""), truncation flagging is disabled, and no truncation marks are appended in this case.
As a matter of convenience to the user, many usual backslashed escape sequences, as found in the C language, are recognized and converted to the corresponding characters by ptx itself.
.xx "tail" "before" "keyword_and_after" "head" "ref"
so it will be possible to write a .xx roff macro to take care of the output typesetting. This is the default output format when gnu extensions are disabled. Option -M can be used to change xx to another macro name.
In this output format, each non-graphical character, like newline and tab, is merely changed to exactly one space, with no special attempt to compress consecutive spaces. Each " (quote) character is doubled so it will be correctly processed by nroff or troff.
\xx {tail}{before}{keyword}{after}{head}{ref}
so it will be possible to write a \xx definition to take care of the output typesetting. Note that when references are not being produced, that is, neither option -A nor option -r is selected, the last parameter of each \xx call is inhibited. Option -M can be used to change xx to another macro name.
In this output format, some special characters, like $, %, &, # and _ are automatically protected with a backslash. Curly brackets {, } are protected with a backslash and a pair of dollar signs (to force mathematical mode). The backslash itself produces the sequence \backslash{}. Circumflex and tilde diacritical marks produce the sequences ^\{ } and ~\{ } respectively. Other diacriticized characters of the underlying character set produce an appropriate TeX sequence as far as possible. The other non-graphical characters, like newline and tab, and all other characters which are not part of ASCII, are merely changed to exactly one space, with no special attempt to compress consecutive spaces. Let me know how to improve this special character processing for TeX.
This version of ptx contains a few features which do not exist in System V ptx. These extra features are suppressed by using the -G command line option, unless overridden by other command line options. Some gnu extensions cannot be recovered by overriding, so the simple rule is to avoid -G if you care about gnu extensions. Here are the differences between this program and System V ptx.
Having output parameters not introduced by options is a dangerous practice which gnu avoids as far as possible. So, to use ptx portably between gnu and System V, you should always use it with a single input file, and always expect the result on standard output. You might also want to have your configuration machinery automatically add a -G option to ptx calls in products using ptx, if the configurator finds that the installed ptx accepts -G.
cut writes to standard output selected parts of each line of each input file, or standard input if no files are given or for a file name of -. Synopsis:
cut [option]... [file]...
In the table which follows, the byte-list, character-list, and field-list are one or more numbers or ranges (two numbers separated by a dash) separated by commas. Bytes, characters, and fields are numbered starting at 1. Incomplete ranges may be given: -m means 1-m; n- means n through end of line or last field. The list elements can be repeated, can overlap, and can be specified in any order; but the selected input is written in the same order that it is read, and is written exactly once.
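For example (an illustrative sketch; access.log is a hypothetical file name), the first command below prints the login name and home directory fields from /etc/passwd, and the second prints characters 1 through 4 and 10 through the end of each line:
cut -d : -f 1,6 /etc/passwd
cut -c 1-4,10- access.log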
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
paste writes to standard output lines consisting of sequentially corresponding lines of each given file, separated by a TAB character. Standard input is used for a file name of - or if no input files are given.
For example:
$ cat num2
1
2
$ cat let3
a
b
c
$ paste num2 let3
1       a
2       b
        c
Synopsis:
paste [option]... [file]...
The program accepts the following options. Also see Common options.
$ paste -s num2 let3
1       2
a       b       c
$ paste -d '%_' num2 let3 num2
1%a_1
2%b_2
%c_
An exit status of zero indicates success, and a nonzero value indicates failure.
join writes to standard output a line for each pair of input lines that have identical join fields. Synopsis:
join [option]... file1 file2
Either file1 or file2 (but not both) can be -, meaning standard input. file1 and file2 should be sorted on the join fields.
Normally, the sort order is that of the collating sequence specified by the LC_COLLATE locale. Unless the -t option is given, the sort comparison ignores blanks at the start of the join field, as in sort -b. If the --ignore-case option is given, the sort comparison ignores the case of characters in the join field, as in sort -f.
However, as a GNU extension, if the input has no unpairable lines the sort order can be any order that considers two fields to be equal if and only if the sort comparison described above considers them to be equal. For example:
$ cat file1
a a1
c c1
b b1
$ cat file2
a a2
c c2
b b2
$ join file1 file2
a a1 a2
c c1 c2
b b1 b2
The defaults are:
The program accepts the following options. Also see Common options.
A field specification of 0 denotes the join field. In most cases, the functionality of the 0 field spec may be reproduced using the explicit m.n that corresponds to the join field. However, when printing unpairable lines (using either of the -a or -v options), there is no way to specify the join field using m.n in field-list if there are unpairable lines in both files. To give join that functionality, POSIX invented the 0 field specification notation.
The elements in field-list are separated by commas or blanks. Blank separators typically need to be quoted for the shell. For example, the commands join -o 1.2,2.2 and join -o '1.2 2.2' are equivalent.
All output lines—including those printed because of any -a or -v option—are subject to the specified field-list.
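For example, one might combine these options as follows (a sketch using file1 and file2 from the listing above): print the join field, the second field of each file, and unpairable lines from file1, substituting (none) for missing fields:
join -a 1 -e '(none)' -o 0,1.2,2.2 file1 file2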
An exit status of zero indicates success, and a nonzero value indicates failure.
These commands operate on individual characters.
tr [option]... set1 [set2]
tr copies standard input to standard output, performing one of the following operations:
The set1 and (if given) set2 arguments define ordered sets of characters, referred to below as set1 and set2. These sets are the characters of the input that tr operates on. The --complement (-c, -C) option replaces set1 with its complement (all of the characters that are not in set1).
Currently tr fully supports only single-byte characters. Eventually it will support multibyte characters; when it does, the -C option will cause it to complement the set of characters, whereas -c will cause it to complement the set of values. This distinction will matter only when some values are not characters, and this is possible only in locales using multibyte encodings when the input contains encoding errors.
The program accepts the --help and --version options. See Common options. Options must precede operands.
An exit status of zero indicates success, and a nonzero value indicates failure.
The format of the set1 and set2 arguments resembles the format of regular expressions; however, they are not regular expressions, only lists of characters. Most characters simply represent themselves in these strings, but the strings can contain the shorthands listed below, for convenience. Some of them can be used only in set1 or set2, as noted below.
gnu tr does not support the System V syntax that uses square brackets to enclose ranges. Translations specified in that format sometimes work as expected, since the brackets are often transliterated to themselves. However, they should be avoided because they sometimes behave unexpectedly. For example, tr -d '[0-9]' deletes brackets as well as digits.
Many historically common and even accepted uses of ranges are not portable. For example, on EBCDIC hosts using the A-Z range will not do what most would expect because A through Z are not contiguous as they are in ASCII. If you can rely on a POSIX compliant version of tr, then the best way to work around this is to use character classes (see below). Otherwise, it is most portable (and most ugly) to enumerate the members of the ranges.
The upper and lower character classes expand in ascending order. When the --delete (-d) and --squeeze-repeats (-s) options are both given, any character class can be used in set2. Otherwise, only the character classes lower and upper are accepted in set2, and then only if the corresponding character class (upper and lower, respectively) is specified in the same relative position in set1. Doing this specifies case conversion. The class names are given below; an error results when an invalid class name is given.
alnum
alpha
blank
cntrl
digit
graph
lower
print
punct
space
upper
xdigit
tr performs translation when set1 and set2 are both given and the --delete (-d) option is not given. tr translates each character of its input that is in set1 to the corresponding character in set2. Characters not in set1 are passed through unchanged. When a character appears more than once in set1 and the corresponding characters in set2 are not all the same, only the final one is used. For example, these two commands are equivalent:
tr aaa xyz
tr a z
A common use of tr is to convert lowercase characters to uppercase. This can be done in many ways. Here are three of them:
tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ
tr a-z A-Z
tr '[:lower:]' '[:upper:]'
But note that using ranges like a-z above is not portable.
When tr is performing translation, set1 and set2 typically have the same length. If set1 is shorter than set2, the extra characters at the end of set2 are ignored.
On the other hand, making set1 longer than set2 is not portable; POSIX says that the result is undefined. In this situation, BSD tr pads set2 to the length of set1 by repeating the last character of set2 as many times as necessary. System V tr truncates set1 to the length of set2.
By default, gnu tr handles this case like BSD tr. When the --truncate-set1 (-t) option is given, gnu tr handles this case like the System V tr instead. This option is ignored for operations other than translation.
Acting like System V tr in this case breaks the relatively common BSD idiom:
tr -cs A-Za-z0-9 '\012'
because it converts only zero bytes (the first element in the complement of set1), rather than all non-alphanumerics, to newlines.
By the way, the above idiom is not portable because it uses ranges, and it assumes that the octal code for newline is 012. Assuming a POSIX compliant tr, here is a better way to write it:
tr -cs '[:alnum:]' '[\n*]'
When given just the --delete (-d) option, tr removes any input characters that are in set1.
When given just the --squeeze-repeats (-s) option, tr replaces each input sequence of a repeated character that is in set1 with a single occurrence of that character.
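For example, a minimal illustration of squeezing: this command collapses each run of repeated space characters into a single space, printing "too many spaces":
echo 'too   many   spaces' | tr -s ' '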
When given both --delete and --squeeze-repeats, tr first performs any deletions using set1, then squeezes repeats from any remaining characters using set2.
The --squeeze-repeats option may also be used when translating, in which case tr first performs translation, then squeezes repeats from any remaining characters using set2.
Here are some examples to illustrate various combinations of options:
tr -d '\0'
tr -cs '[:alnum:]' '[\n*]'
tr -s '\n'
#!/bin/sh
cat -- "$@" \
  | tr -s '[:punct:][:blank:]' '[\n*]' \
  | tr '[:upper:]' '[:lower:]' \
  | uniq -d
tr -d axM
However, when - is one of those characters, it can be tricky because - has special meanings. Performing the same task as above but also removing all - characters, we might try tr -d -axM, but that would fail because tr would try to interpret -a as a command-line option. Alternatively, we could try putting the hyphen inside the string, tr -d a-xM, but that wouldn't work either because it would make tr interpret a-x as the range of characters a...x rather than the three.
One way to solve the problem is to put the hyphen at the end of the list of characters:
tr -d axM-
Or you can use -- to terminate option processing:
tr -d -- -axM
More generally, use the character class notation [=c=] with - (or any other character) in place of the c:
tr -d '[=-=]axM'
Note how single quotes are used in the above example to protect the square brackets from interpretation by a shell.
expand writes the contents of each given file, or standard input if none are given or for a file of -, to standard output, with tab characters converted to the appropriate number of spaces. Synopsis:
expand [option]... [file]...
By default, expand converts all tabs to spaces. It preserves backspace characters in the output; they decrement the column count for tab calculations. The default action is equivalent to -t 8 (set tabs every 8 columns).
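For example (an illustrative sketch; src.c and src-spaces.c are hypothetical file names), either of the following converts tabs to spaces with tab stops every 4 columns:
expand -t 4 src.c > src-spaces.c
expand --tabs=4 src.c > src-spaces.c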
The program accepts the following options. Also see Common options.
For compatibility, GNU expand also accepts the obsolete option syntax, -t1[,t2].... New scripts should use -t t1[,t2]... instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
unexpand writes the contents of each given file, or standard input if none are given or for a file of -, to standard output, converting blanks at the beginning of each line into as many tab characters as needed. In the default POSIX locale, a blank is a space or a tab; other locales may specify additional blank characters. Synopsis:
unexpand [option]... [file]...
By default, unexpand converts only initial blanks (those that precede all non-blank characters) on each line. It preserves backspace characters in the output; they decrement the column count for tab calculations. By default, tabs are set at every 8th column.
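For example (a sketch; report.txt and report.tabs are hypothetical file names), the first command converts only initial blanks, while the second, with -a (--all), also converts blanks that appear after non-blank characters:
unexpand report.txt > report.tabs
unexpand -a report.txt > report.tabs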
The program accepts the following options. Also see Common options.
For compatibility, GNU unexpand supports the obsolete option syntax, -tab1[,tab2]..., where tab stops must be separated by commas. (Unlike -t, this obsolete option does not imply -a.) New scripts should use --first-only -t tab1[,tab2]... instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
This chapter describes the ls command and its variants dir and vdir, which list information about files.
The ls program lists information about files (of any type, including directories). Options and file arguments can be intermixed arbitrarily, as usual.
For non-option command-line arguments that are directories, by default ls lists the contents of directories, not recursively, and omitting files with names beginning with .. For other non-option arguments, by default ls lists just the file name. If no non-option argument is specified, ls operates on the current directory, acting as if it had been invoked with a single argument of ..
By default, the output is sorted alphabetically, according to the locale settings in effect. If standard output is a terminal, the output is in columns (sorted vertically) and control characters are output as question marks; otherwise, the output is listed one per line and control characters are output as-is.
Because ls is such a fundamental program, it has accumulated many options over the years. They are described in the subsections below; within each section, options are listed alphabetically (ignoring case). The division of options into the subsections is not absolute, since some options affect more than one aspect of ls's operation.
0 success
1 minor problems (e.g., a subdirectory was not found)
2 serious trouble (e.g., memory exhausted)
Also see Common options.
These options determine which files ls lists information for. By default, ls lists files and the contents of any directories on the command line, except that in directories it ignores files whose names start with ..
This option can be useful in shell aliases. For example, if lx is an alias for ls --hide='*~' and ly is an alias for ls --ignore='*~', then the command lx -A lists the file README~ even though ly -A would not.
$ ls --ignore='.??*' --ignore='.[^.]' --ignore='#*'
The first option ignores names of length 3 or more that start with ., the second ignores all two-character names that start with . except .., and the third ignores names that start with #.
These options affect the information that ls displays. By default, only file names are shown.
//DIRED// beg1 end1 beg2 end2 ...
The begN and endN are unsigned integers that record the byte position of the beginning and end of each file name in the output. This makes it easy for Emacs to find the names, even when they contain unusual characters such as space or newline, without fancy searching.
If directories are being listed recursively (-R), output a similar line with offsets for each subdirectory name:
//SUBDIRED// beg1 end1 ...
Finally, output a line of the form:
//DIRED-OPTIONS// --quoting-style=word
where word is the quoting style (see Formatting the file names).
Here is an actual example:
$ mkdir -p a/sub/deeper a/sub2
$ touch a/f1 a/f2
$ touch a/sub/deeper/file
$ ls -gloRF --dired a
  a:
  total 8
  -rw-r--r-- 1 0 Jun 10 12:27 f1
  -rw-r--r-- 1 0 Jun 10 12:27 f2
  drwxr-xr-x 3 4096 Jun 10 12:27 sub/
  drwxr-xr-x 2 4096 Jun 10 12:27 sub2/

  a/sub:
  total 4
  drwxr-xr-x 2 4096 Jun 10 12:27 deeper/

  a/sub/deeper:
  total 0
  -rw-r--r-- 1 0 Jun 10 12:27 file

  a/sub2:
  total 0
//DIRED// 48 50 84 86 120 123 158 162 217 223 282 286
//SUBDIRED// 2 3 167 172 228 240 290 296
//DIRED-OPTIONS// --quoting-style=literal
Note that the pairs of offsets on the //DIRED// line above delimit these names: f1, f2, sub, sub2, deeper, file. The offsets on the //SUBDIRED// line delimit the following directory names: a, a/sub, a/sub/deeper, a/sub2.
Here is an example of how to extract the fifth entry name, deeper, corresponding to the pair of offsets, 222 and 228:
$ ls -gloRF --dired a > out
$ dd bs=1 skip=222 count=6 < out 2>/dev/null; echo
deeper
Note that although the listing above includes a trailing slash for the deeper entry, the offsets select the name without the trailing slash. However, if you invoke ls with --dired along with an option like --escape (aka -b) and operate on a file whose name contains special characters, notice that the backslash is included:
$ touch 'a b'
$ ls -blog --dired 'a b'
  -rw-r--r-- 1 0 Jun 10 12:28 a\ b
//DIRED// 30 34
//DIRED-OPTIONS// --quoting-style=escape
If you use a quoting style that adds quote marks (e.g., --quoting-style=c), then the offsets include the quote marks. So beware that the user may select the quoting style via the environment variable QUOTING_STYLE. Hence, applications using --dired should either specify an explicit --quoting-style=literal option (aka -N or --literal) on the command line, or else be prepared to parse the escaped names.
Normally the size is printed as a byte count without punctuation, but this can be overridden (see Block size). For example, -h prints an abbreviated, human-readable count, and --block-size="'1" prints a byte count with the thousands separator of the current locale.
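For example, these two sketches show the same listing with human-readable sizes and with sizes scaled to mebibytes, respectively:
ls -lh /var/log
ls -l --block-size=M /var/log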
For each directory that is listed, preface the files with a line total blocks, where blocks is the total disk allocation for all files in that directory. The block size currently defaults to 1024 bytes, but this can be overridden (see Block size). The blocks computed counts each hard link separately; this is arguably a deficiency.
The permissions listed are similar to symbolic mode specifications (see Symbolic Modes). But ls combines multiple bits into the third character of each set of permissions as follows:
Following the permission bits is a single character that specifies whether an alternate access method applies to the file. When that character is a space, there is no alternate access method. When it is a printing character (e.g., +), then there is such a method.
Normally the disk allocation is printed in units of 1024 bytes, but this can be overridden (see Block size).
For files that are NFS-mounted from an HP-UX system to a BSD system, this option reports sizes that are half the correct values. On HP-UX systems, it reports sizes that are twice the correct values for files that are NFS-mounted from BSD systems. This is due to a flaw in HP-UX; it also affects the HP-UX ls program.
These options change the order in which ls sorts the information it outputs. By default, sorting is done by character code (e.g., ASCII order).
The version sort takes into account the fact that file names frequently include indices or version numbers. Standard sorting functions usually do not produce the ordering that people expect because comparisons are made on a character-by-character basis. The version sort addresses this problem, and is especially useful when browsing directories that contain many files with indices/version numbers in their names:
$ ls -1             $ ls -1v
foo.zml-1.gz        foo.zml-1.gz
foo.zml-100.gz      foo.zml-2.gz
foo.zml-12.gz       foo.zml-6.gz
foo.zml-13.gz       foo.zml-12.gz
foo.zml-2.gz        foo.zml-13.gz
foo.zml-25.gz       foo.zml-25.gz
foo.zml-6.gz        foo.zml-100.gz
Note also that numeric parts with leading zeroes are treated as if they were fractional parts:
$ ls -1            $ ls -1v
abc-1.007.tgz      abc-1.007.tgz
abc-1.012b.tgz     abc-1.01a.tgz
abc-1.01a.tgz      abc-1.012b.tgz
This functionality is implemented using the strverscmp function. See String/Array Comparison (The GNU C Library Reference Manual). One result of that implementation decision is that ls -v does not use the locale category LC_COLLATE. As a result, non-numeric prefixes are sorted as if LC_COLLATE were set to C.
These options affect the appearance of the overall output.
more -f does seem to work.
By default, file timestamps are listed in abbreviated form. Most locales use a timestamp like 2002-03-30 23:45. However, the default POSIX locale uses a date like Mar 30 2002 for non-recent timestamps, and a date-without-year and time like Mar 30 23:45 for recent timestamps.
A timestamp is considered to be recent if it is less than six months old, and is not dated in the future. If a timestamp dated today is not listed in recent form, the timestamp is in the future, which means you probably have clock skew problems which may break programs like make that rely on file timestamps.
Time stamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ (The GNU C Library).
The following option changes how file timestamps are printed.
If format contains two format strings separated by a newline, the former is used for non-recent files and the latter for recent files; if you want output columns to line up, you may need to insert spaces in one of the two formats.
This is useful because the time output includes all the information that is available from the operating system. For example, this can help explain make's behavior, since GNU make uses the full timestamp to determine whether a file is out of date.
newline=' ' ls -l --time-style="+%Y-%m-%d $newline%m-%d %H:%M" ls -l --time-style="iso"
The LC_TIME locale category specifies the timestamp format. The default POSIX locale uses timestamps like Mar 30 2002 and Mar 30 23:45; in this locale, the following two ls invocations are equivalent:
newline=' ' ls -l --time-style="+%b %e %Y$newline%b %e %H:%M" ls -l --time-style="locale"
Other locales behave differently. For example, in a German locale, --time-style="locale" might be equivalent to --time-style="+%e. %b %Y $newline%e. %b %H:%M" and might generate timestamps like 30. Mär 2002 and 30. Mär 23:45.
You can specify the default value of the --time-style option with the environment variable TIME_STYLE; if TIME_STYLE is not set the default style is posix-long-iso. GNU Emacs 21 and later can parse ISO dates, but older Emacs versions do not, so if you are using an older version of Emacs and specify a non-POSIX locale, you may need to set TIME_STYLE="locale".
To avoid certain denial-of-service attacks, timestamps that would be longer than 1000 bytes may be treated as errors.
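For example, a brief sketch of overriding the default for a single command, producing timestamps like 2002-03-30 23:45:
TIME_STYLE=long-iso ls -l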
These options change how file names themselves are printed.
You can specify the default value of the --quoting-style option with the environment variable QUOTING_STYLE. If that environment variable is not set, the default value is literal, but this default may change to shell in a future version of this package.
dir is equivalent to ls -C -b; that is, by default files are listed in columns, sorted vertically, and special characters are represented by backslash escape sequences. See ls.
vdir is equivalent to ls -l -b; that is, by default files are listed in long format and special characters are represented by backslash escape sequences.
dircolors outputs a sequence of shell commands to set up the terminal for color output from ls (and dir, etc.). Typical usage:
eval "`dircolors [option]... [file]`"
If file is specified, dircolors reads it to determine which colors to use for which file types and extensions. Otherwise, a precompiled database is used. For details on the format of these files, run dircolors --print-database.
The output is a shell command to set the LS_COLORS environment variable. You can specify the shell syntax to use on the command line, or dircolors will guess it from the value of the SHELL environment variable.
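For example, a typical sketch for a Bourne-shell startup file (assuming a hypothetical personal database in ~/.dircolors, and using -b to force Bourne shell syntax) is:
eval "`dircolors -b ~/.dircolors`"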
The program accepts the following options. Also see Common options.
C shell syntax is the default when the value of SHELL ends with csh or tcsh.
An exit status of zero indicates success, and a nonzero value indicates failure.
This chapter describes the commands for basic file manipulation: copying, moving (renaming), and deleting (removing).
cp copies files (or, optionally, directories). The copy is completely independent of the original. You can either copy one file to another, or copy arbitrarily many files to a destination directory. Synopses:
cp [option]... [-T] source dest
cp [option]... source... directory
cp [option]... -t directory source...
Generally, files are written just as they are read. For exceptions, see the --sparse option below.
By default, cp does not copy directories. However, the -R, -a, and -r options cause cp to copy recursively by descending into source directories and copying files to corresponding destination directories.
By default, cp follows symbolic links only when not copying recursively. This default can be overridden with the --archive (-a), -d, --dereference (-L), --no-dereference (-P), and -H options. If more than one of these options is specified, the last one silently overrides the others.
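For example (srcdir and destdir are hypothetical names), the first command follows symbolic links found in srcdir, while the second copies them as symbolic links:
cp -R -L srcdir destdir
cp -R -P srcdir destdir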
By default, cp copies the contents of special files only when not copying recursively. This default can be overridden with the --copy-contents option.
cp generally refuses to copy a file onto itself, with the following exception: if --force --backup is specified with source and dest identical, and referring to a regular file, cp will make a backup file, either regular or numbered, as specified in the usual ways (see Backup options). This is useful when you simply want to make a backup of an existing file before changing it.
The program accepts the following options. Also see Common options.
#!/bin/sh
# Usage: backup FILE...
# Create a gnu-style backup of each listed FILE.
for i; do
  cp --backup --force -- "$i" "$i"
done
cp -R --copy-contents will hang indefinitely trying to read from FIFOs and special files like /dev/console, and it will fill up your destination disk if you use it to copy /dev/zero. This option has no effect unless copying recursively, and it does not affect the copying of symbolic links.
Using --preserve with no attribute_list is equivalent to --preserve=mode,ownership,timestamps.
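For example, the following sketch (precious.conf is a hypothetical file name) makes a copy that keeps the original's mode, ownership, and timestamps, since -p is equivalent to --preserve with no attribute_list:
cp -p precious.conf precious.conf.bak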
In the absence of this option, each destination file is created with the permissions of the corresponding source file, minus the bits set in the umask and minus the set-user-ID and set-group-ID bits. See File permissions.
cp --parents a/b/c existing_dir
copies the file a/b/c to existing_dir/a/b/c, creating any missing intermediate directories.
The when value can be one of the following:
An exit status of zero indicates success, and a nonzero value indicates failure.
dd copies a file (from standard input to standard output, by default) with a changeable I/O block size, while optionally performing conversions on it. Synopses:
dd [operand]...
dd option
The only options are --help and --version. See Common options. dd accepts the following operands.
Conversions:
The ascii, ebcdic, and ibm conversions are mutually exclusive.
The block and unblock conversions are mutually exclusive.
The lcase and ucase conversions are mutually exclusive.
The excl and nocreat conversions are mutually exclusive.
Flags:
These flags are not supported on all systems, and dd rejects attempts to use them when they are not supported. When reading from standard input or writing to standard output, the nofollow and noctty flags should not be specified, and the other flags (e.g., nonblock) can affect how other processes behave with the affected file descriptors, even after dd exits.
The numeric-valued strings above (bytes and blocks) can be followed by a multiplier: b=512, c=1, w=2, xm=m, or any of the standard block size suffixes like k=1024 (see Block size).
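For example, a minimal sketch using these multipliers: the following creates a 64 KiB file of zero bytes (64 blocks of 1024 bytes each); blank.img is a hypothetical output name:
dd if=/dev/zero of=blank.img bs=1k count=64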
Use different dd invocations to use different block sizes for skipping and I/O. For example, the following shell commands copy data in 512 KiB blocks between a disk and a tape, but do not save or restore a 4 KiB label at the start of the disk:
disk=/dev/rdsk/c0t1d0s2
tape=/dev/rmt/0

# Copy all but the label from disk to tape.
(dd bs=4k skip=1 count=0 && dd bs=512k) <$disk >$tape

# Copy from tape back to disk, but leave the disk label alone.
(dd bs=4k seek=1 count=0 && dd bs=512k) <$tape >$disk
Sending an INFO signal to a running dd process makes it print I/O statistics to standard error and then resume copying. In the example below, dd is run in the background to copy 10 million blocks. The kill command makes it output intermediate I/O statistics, and when dd completes, it outputs the final statistics.
$ dd if=/dev/zero of=/dev/null count=10MB & pid=$!
$ kill -s INFO $pid; wait $pid
3385223+0 records in
3385223+0 records out
1733234176 bytes (1.7 GB) copied, 6.42173 seconds, 270 MB/s
10000000+0 records in
10000000+0 records out
5120000000 bytes (5.1 GB) copied, 18.913 seconds, 271 MB/s
On systems lacking the INFO signal dd responds to the USR1 signal instead, unless the POSIXLY_CORRECT environment variable is set.
An exit status of zero indicates success, and a nonzero value indicates failure.
install copies files while setting their permission modes and, if possible, their owner and group. Synopses:
install [option]... [-T] source dest
install [option]... source... directory
install [option]... -t directory source...
install [option]... -d directory...
install is similar to cp, but allows you to control the attributes of destination files. It is typically used in Makefiles to copy programs into their destination directories. It refuses to copy files onto themselves.
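For example (a sketch; myprog is a hypothetical program name), a Makefile rule might create the destination directory and then install the program with mode 0755:
install -d /usr/local/bin
install -m 0755 myprog /usr/local/bin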
The program accepts the following options. Also see Common options.
The default owner is root. owner may be either a user name or a numeric user ID.
An exit status of zero indicates success, and a nonzero value indicates failure.
mv moves or renames files (or directories). Synopses:
mv [option]... [-T] source dest
mv [option]... source... directory
mv [option]... -t directory source...
mv can move any type of file from one file system to another. Prior to version 4.0 of the fileutils, mv could move only regular files between file systems. For example, now mv can move an entire directory hierarchy including special device files from one partition to another. It first uses some of the same code that's used by cp -a to copy the requested directories and files, then (assuming the copy succeeded) it removes the originals. If the copy fails, then the part that was copied to the destination partition is removed. If you were to copy three directories from one partition to another and the copy of the first directory succeeded, but the second didn't, the first would be left on the destination partition and the second and third would be left on the original partition.
If a destination file exists but is normally unwritable, standard input is a terminal, and the -f or --force option is not given, mv prompts the user for whether to replace the file. (You might own the file, or have write permission on its directory.) If the response is not affirmative, the file is skipped.
Warning: If you try to move a symlink that points to a directory, and you specify the symlink with a trailing slash, then mv doesn't move the symlink but instead moves the directory referenced by the symlink. See Trailing slashes.
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
rm removes each given file. By default, it does not remove directories. Synopsis:
rm [option]... [file]...
If a file is unwritable, standard input is a terminal, and the -f or --force option is not given, or the -i or --interactive option is given, rm prompts the user for whether to remove the file. If the response is not affirmative, the file is skipped.
Warning: If you use rm to remove a file, it is usually possible to recover the contents of that file. If you want more assurance that the contents are truly unrecoverable, consider using shred.
The program accepts the following options. Also see Common options.
This option causes rm to use the unlink function unconditionally rather than attempting to check whether the file is a directory and using rmdir if it is a directory. This can be useful on corrupted file systems where unlink works even though other, file-checking functions fail.
For directories, this works only if you have appropriate privileges and if your operating system supports unlink for directories. Because unlinking a directory causes any files in the deleted directory to become unreferenced, it is wise to fsck the file system afterwards.
One common question is how to remove files whose names begin with a -. gnu rm, like every program that uses the getopt function to parse its arguments, lets you use the -- option to indicate that all following arguments are non-options. To remove a file called -f in the current directory, you could type either:
rm -- -f
or:
rm ./-f
The Unix rm program's use of a single - for this purpose predates the development of the getopt standard syntax.
An exit status of zero indicates success, and a nonzero value indicates failure.
shred overwrites devices or files, to help prevent even very expensive hardware from recovering the data.
Ordinarily when you remove a file (see rm invocation), the data is not actually destroyed. Only the index listing where the file is stored is destroyed, and the storage is made available for reuse. There are undelete utilities that will attempt to reconstruct the index and can bring the file back if the parts were not reused.
On a busy system with a nearly-full drive, space can get reused in a few seconds. But there is no way to know for sure. If you have sensitive data, you may want to be sure that recovery is not possible by actually overwriting the file with non-sensitive data.
However, even after doing that, it is possible to take the disk back to a laboratory and use a lot of sensitive (and expensive) equipment to look for the faint “echoes” of the original data underneath the overwritten data. If the data has only been overwritten once, it's not even that hard.
The best way to remove something irretrievably is to destroy the media it's on with acid, melt it down, or the like. For cheap removable media like floppy disks, this is the preferred method. However, hard drives are expensive and hard to melt, so the shred utility tries to achieve a similar effect non-destructively.
This uses many overwrite passes, with the data patterns chosen to maximize the damage they do to the old data. While this will work on floppies, the patterns are designed for best effect on hard drives. For more details, see the source code and Peter Gutmann's paper Secure Deletion of Data from Magnetic and Solid-State Memory, from the proceedings of the Sixth USENIX Security Symposium (San Jose, California, July 22–25, 1996).
Please note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. Exceptions include:
file systems that journal data, such as Ext3 in data=journal mode, BFS, NTFS, etc., when they are configured to journal data.
In the particular case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in addition to just metadata. In both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file, as documented in the mount man page (man mount).
If you are not sure how your file system operates, then you should assume that it does not overwrite data in place, which means that shred cannot reliably operate on regular files in your file system.
Generally speaking, it is more reliable to shred a device than a file, since this bypasses the problem of file system design mentioned above. However, even shredding devices is not always completely reliable. For example, most disks map out bad sectors invisibly to the application; if the bad sectors contain sensitive data, shred won't be able to destroy it.
shred makes no attempt to detect or report this problem, just as it makes no attempt to do anything about backups. However, since it is more reliable to shred devices than files, shred by default does not truncate or remove the output file. This default is more suitable for devices, which typically cannot be truncated and should not be removed.
Finally, consider the risk of backups and mirrors. File system backups and remote mirrors may contain copies of the file that cannot be removed, and that will allow a shredded file to be recovered later. So if you keep any data you may later want to destroy using shred, be sure that it is not backed up or mirrored.
shred [option]... file[...]
The program accepts the following options. Also see Common options.
You might use the following command to erase all trace of the file system you'd created on the floppy disk in your first drive. That command takes about 20 minutes to erase a “1.44MB” (actually 1440 KiB) floppy.
shred --verbose /dev/fd0
Similarly, to erase all data on a selected partition of your hard disk, you could give a command like this:
shred --verbose /dev/sda5
A file of - denotes standard output. The intended use of this is to shred a removed temporary file. For example:
i=`tempfile -m 0600`
exec 3<>"$i"
rm -- "$i"
echo "Hello, world" >&3
shred - >&3
exec 3>-
However, the command shred - >file does not shred the contents of file, since the shell truncates file before invoking shred. Use the command shred file or (if using a Bourne-compatible shell) the command shred - 1<>file instead.
An exit status of zero indicates success, and a nonzero value indicates failure.
This chapter describes commands which create special types of files (and rmdir, which removes directories, one special file type).
Although Unix-like operating systems have markedly fewer special file types than others, not everything can be treated only as the undifferentiated byte stream of normal files. For example, when a file is created or removed, the system must record this information, which it does in a directory—a special type of file. Although you can read directories as normal files, if you're curious, in order for the system to do its job it must impose a structure, a certain order, on the bytes of the file. Thus it is a “special” type of file.
Besides directories, other special file types include named pipes (FIFOs), symbolic links, sockets, and so-called special files.
link creates a single hard link at a time. It is a minimalist interface to the system-provided link function. See Hard Links (The GNU C Library Reference Manual). It avoids the bells and whistles of the more commonly-used ln command (see ln invocation).
Synopsis:
link filename linkname
filename must specify an existing file, and linkname must specify a nonexistent entry in an existing directory. link simply calls link (filename, linkname) to create the link.
On a GNU system, this command acts like ln --directory --no-target-directory filename linkname. However, the --directory and --no-target-directory options are not specified by POSIX, and the link command is more portable in practice.
An exit status of zero indicates success, and a nonzero value indicates failure.
ln makes links between files. By default, it makes hard links; with the -s option, it makes symbolic (or soft) links. Synopses:
ln [option]... [-T] target linkname
ln [option]... target
ln [option]... target... directory
ln [option]... -t directory target...
Normally ln does not remove existing files. Use the --force (-f) option to remove them unconditionally, the --interactive (-i) option to remove them conditionally, and the --backup (-b) option to rename them.
A hard link is another name for an existing file; the link and the original are indistinguishable. Technically speaking, they share the same inode, and the inode contains all the information about a file—indeed, it is not incorrect to say that the inode is the file. On all existing implementations, you cannot make a hard link to a directory, and hard links cannot cross file system boundaries. (These restrictions are not mandated by POSIX, however.)
Symbolic links (symlinks for short), on the other hand, are a special file type (which not all kernels support: System V release 3 (and older) systems lack symlinks) in which the link file actually refers to a different file, by name. When most operations (opening, reading, writing, and so on) are passed the symbolic link file, the kernel automatically dereferences the link and operates on the target of the link. But some operations (e.g., removing) work on the link file itself, rather than on its target. See Symbolic Links (The GNU C Library Reference Manual).
The program accepts the following options. Also see Common options.
When the destination is an actual directory (not a symlink to one), there is no ambiguity. The link is created in that directory. But when the specified destination is a symlink to a directory, there are two ways to treat the user's request. ln can treat the destination just as it would a normal directory and create the link in it. On the other hand, the destination can be viewed as a non-directory—as the symlink itself. In that case, ln must delete or backup that symlink before creating the new link. The default is to treat a destination that is a symlink to a directory just like a directory.
This option is weaker than the --no-target-directory (-T) option, so it has no effect if both options are given.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
Bad Example:

# Create link ../a pointing to a in that directory.
# Not really useful because it points to itself.
ln -s a ..

Better Example:

# Change to the target before creating symlinks to avoid being confused.
cd ..
ln -s adir/a .

Bad Example:

# Hard coded file names don't move well.
ln -s $(pwd)/a /some/dir/

Better Example:

# Relative file names survive directory moves and also
# work across networked file systems.
ln -s afile anotherfile
ln -s ../adir/afile yetanotherfile
mkdir creates directories with the specified names. Synopsis:
mkdir [option]... name...
If a name is an existing file but not a directory, mkdir prints a warning message on stderr and will exit with a status of 1 after processing any remaining names. The same is done when a name is an existing directory and the -p option is not given. If a name is an existing directory and the -p option is given, mkdir will ignore it. That is, mkdir will not print a warning, raise an error, or change the mode of the directory (even if the -m option is given), and will move on to processing any remaining names.
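For example (project/src/include and private are hypothetical names), the first command creates a directory along with any missing parent directories, and the second creates a directory with mode 0700:
mkdir -p project/src/include
mkdir -m 0700 private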
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
mkfifo creates FIFOs (also called named pipes) with the specified names. Synopsis:
mkfifo [option] name...
A FIFO is a special file type that permits independent processes to communicate. One process opens the FIFO file for writing, and another for reading, after which data can flow as with the usual anonymous pipe in shells or elsewhere.
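For example, a minimal sketch (logfile and log.gz are hypothetical names): one process compresses whatever arrives on the FIFO while another writes into it:
mkfifo -m 0600 /tmp/myfifo
gzip -c < /tmp/myfifo > log.gz &
cat logfile > /tmp/myfifo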
The program accepts the following option. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
mknod creates a FIFO, character special file, or block special file with the specified name. Synopsis:
mknod [option]... name type [major minor]
Unlike the phrase “special file type” above, the term special file has a technical meaning on Unix: something that can generate or receive data. Usually this corresponds to a physical piece of hardware, e.g., a printer or a disk. (These files are typically created at system-configuration time.) The mknod command is what creates files of this type. Such devices can be read either a character at a time or a “block” (many characters) at a time, hence we say there are block special files and character special files.
The arguments after name specify the type of file to make:
When making a block or character special file, the major and minor device numbers must be given after the file type. If a major or minor device number begins with 0x or 0X, it is interpreted as hexadecimal; otherwise, if it begins with 0, as octal; otherwise, as decimal.
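For example, on a typical GNU/Linux system, where the null device has major number 1 and minor number 3, a privileged user could create an extra null device node and a FIFO like this (both names are hypothetical):
mknod /tmp/null2 c 1 3
mknod /tmp/fifo1 p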
The program accepts the following option. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
readlink may work in one of two supported modes:
readlink [option] file
By default, readlink operates in readlink mode.
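For example, a minimal sketch (awk-link is a hypothetical name): after creating a symbolic link, readlink prints the value stored in the link, here /usr/bin/gawk:
ln -s /usr/bin/gawk awk-link
readlink awk-link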
The program accepts the following options. Also see Common options.
The readlink utility first appeared in OpenBSD 2.1.
An exit status of zero indicates success, and a nonzero value indicates failure.
rmdir removes empty directories. Synopsis:
rmdir [option]... directory...
If any directory argument does not refer to an existing empty directory, it is an error.
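For example (using the hypothetical hierarchy from the mkdir sketch above), the --parents (-p) option removes a directory and then each ancestor directory that becomes empty:
rmdir -p project/src/include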
The program accepts the following option. Also see Common options.
See rm invocation, for how to remove non-empty directories (recursively).
An exit status of zero indicates success, and a nonzero value indicates failure.
unlink deletes a single specified file name. It is a minimalist interface to the system-provided unlink function. See Deleting Files (The GNU C Library Reference Manual). It avoids the bells and whistles of the more commonly-used rm command (see rm invocation). Synopsis:
unlink filename
On some systems unlink can be used to delete the name of a directory. On others, it can be used that way only by a privileged user. In the GNU system unlink can never delete the name of a directory.
The unlink command honors the --help and --version options. To remove a file whose name begins with -, prefix the name with ./, e.g., unlink ./--help.
An exit status of zero indicates success, and a nonzero value indicates failure.
A file is not merely its contents, a name, and a file type (see Special file types). A file also has an owner (a user ID), a group (a group ID), permissions (what the owner can do with the file, what people in the group can do, and what everyone else can do), various timestamps, and other information. Collectively, we call these a file's attributes.
These commands change file attributes.
chown changes the user and/or group ownership of each given file to new-owner or to the user and group of an existing reference file. Synopsis:
chown [option]... {new-owner | --reference=ref_file} file...
If used, new-owner specifies the new owner and/or group as follows (with no embedded white space):
[owner] [ : [group] ]
Specifically:
Some older scripts may still use . in place of the : separator. POSIX 1003.1-2001 (see Standards conformance) does not require support for that, but for backward compatibility GNU chown supports . so long as no ambiguity results. New scripts should avoid the use of . because it is not portable, and because it has undesirable results if the entire owner.group happens to identify a user whose name contains ..
The chown command sometimes clears the set-user-ID or set-group-ID permission bits. This behavior depends on the policy and functionality of the underlying chown system call, which may make system-dependent file mode modifications outside the control of the chown command. For example, the chown command might not affect those bits when operated as the superuser, or if the bits signify some function other than executable permission (e.g., mandatory locking). When in doubt, check the underlying system behavior.
The program accepts the following options. Also see Common options.
Without an option like this, root might run:

find / -owner OLDUSER -print0 | xargs -0 chown -h NEWUSER
But that is dangerous because the interval between when the find tests the existing file's owner and when the chown is actually run may be quite large. One way to narrow the gap would be to invoke chown for each file as it is found:
find / -owner OLDUSER -exec chown -h NEWUSER {} \;
But that is very slow if there are many affected files. With this option, it is safer (the gap is narrower still) though still not perfect:
chown -h -R --from=OLDUSER NEWUSER /
This mode relies on the lchown system call. On systems that do not provide the lchown system call, chown fails when a file specified on the command line is a symbolic link. By default, no diagnostic is issued for symbolic links encountered during a recursive traversal, but see --verbose.
If a symbolic link is encountered during a recursive traversal on a system without the lchown system call, and --no-dereference is in effect, then issue a diagnostic saying neither the symbolic link nor its referent is being changed.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Change the owner of /u to "root".
chown root /u

# Likewise, but also change its group to "staff".
chown root:staff /u

# Change the owner of /u and subfiles to "root".
chown -hR root /u
chgrp changes the group ownership of each given file to group (which can be either a group name or a numeric group ID) or to the group of an existing reference file. Synopsis:
chgrp [option]... {group | --reference=ref_file} file...
The program accepts the following options. Also see Common options.
This mode relies on the lchown system call. On systems that do not provide the lchown system call, chgrp fails when a file specified on the command line is a symbolic link. By default, no diagnostic is issued for symbolic links encountered during a recursive traversal, but see --verbose.
If a symbolic link is encountered during a recursive traversal on a system without the lchown system call, and --no-dereference is in effect, then issue a diagnostic saying neither the symbolic link nor its referent is being changed.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Change the group of /u to "staff".
chgrp staff /u

# Change the group of /u and subfiles to "staff".
chgrp -hR staff /u
chmod changes the access permissions of the named files. Synopsis:
chmod [option]... {mode | --reference=ref_file} file...
chmod never changes the permissions of symbolic links, since the chmod system call cannot change their permissions. This is not a problem since the permissions of symbolic links are never used. However, for each symbolic link listed on the command line, chmod changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered during recursive directory traversals.
If used, mode specifies the new permissions. For details, see the section on File permissions. If you really want mode to have a leading -, you should use -- first, e.g., chmod -- -w file. Typically, though, chmod a-w file is preferable, and chmod -w file (without the --) complains if it behaves differently from what chmod a-w file would do.
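For example (script.sh, notes.txt and template.txt are hypothetical file names):
# Make the script executable by its owner and remove write
# permission for group and others.
chmod u+x,go-w script.sh

# Give notes.txt the conventional 644 permissions.
chmod 644 notes.txt

# Copy the permissions of template.txt to notes.txt.
chmod --reference=template.txt notes.txt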
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
touch changes the access and/or modification times of the specified files. Synopsis:
touch [option]... file...
Any file that does not exist is created empty.
A file of - causes touch to change the times of the file associated with standard output.
If changing both the access and modification times to the current time, touch can change the timestamps for files that the user running it does not own but has write permission for. Otherwise, the user must own the files.
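For example (reference.txt and target.txt are hypothetical file names), the first command copies the timestamps of reference.txt onto target.txt, and the second sets an explicit date and time:
touch -r reference.txt target.txt
touch -d '2005-03-31 15:30' target.txt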
Although touch provides options for changing two of the times—the times of last access and modification—of a file, there is actually a third one as well: the inode change time. This is often referred to as a file's ctime. The inode change time represents the time when the file's meta-information last changed. One common example of this is when the permissions of a file change. Changing the permissions doesn't access the file, so the atime doesn't change, nor does it modify the file, so the mtime doesn't change. Yet, something about the file itself has changed, and this must be noted somewhere. This is the job of the ctime field. This is necessary, so that, for example, a backup program can make a fresh copy of the file, including the new permissions value. Another operation that modifies a file's ctime without affecting the others is renaming. In any case, it is not possible, in normal operations, for a user to change the ctime field to a user-specified value.
Time stamps assume the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ (The GNU C Library). You can avoid ambiguities during daylight saving transitions by using UTC time stamps.
The program accepts the following options. Also see Common options.
On older systems, touch supports an obsolete syntax, as follows. If no timestamp is given with any of the -d, -r, or -t options, and if there are two or more files and the first file is of the form MMDDhhmm[YY] and this would be a valid argument to the -t option (if the YY, if any, were moved to the front), and if the represented year is in the range 1969–1999, that argument is interpreted as the time for the other files instead of as a file name. This obsolete behavior can be enabled or disabled with the _POSIX2_VERSION environment variable (see Standards conformance), but portable scripts should avoid commands whose behavior depends on this variable. For example, use touch ./12312359 main.c or touch -t 12312359 main.c rather than the ambiguous touch 12312359 main.c.
An exit status of zero indicates success, and a nonzero value indicates failure.
No disk can hold an infinite amount of data. These commands report on how much disk storage is in use or available. (This has nothing much to do with how much main memory, i.e., RAM, a program is using when it runs; for that, you want ps or pstat or swap or some such command.)
df reports the amount of disk space used and available on file systems. Synopsis:
df [option]... [file]...
With no arguments, df reports the space used and available on all currently mounted file systems (of all types). Otherwise, df reports on the file system containing each argument file.
Normally the disk space is printed in units of 1024 bytes, but this can be overridden (see Block size). Non-integer quantities are rounded up to the next higher unit.
If an argument file is a disk device file containing a mounted file system, df shows the space available on that file system rather than on the file system containing the device node (i.e., the root file system). gnu df does not attempt to determine the disk usage on unmounted file systems, because on most kinds of systems doing so requires extremely nonportable intimate knowledge of file system structures.
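For example, to report on the file system containing a particular file, and then on another file system in one-mebibyte units (the /home path is purely illustrative; see Block size for --block-size):

# Reports on the file system containing /usr/bin.
df /usr/bin/sort

# Reports on the file system containing /home, in units of 1 MiB.
df --block-size=1M /home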
The program accepts the following options. Also see Common options.
With --no-sync, df does not invoke the sync system call before getting any usage data. This may make df run significantly faster on systems with many disks, but on some systems (notably SunOS) the results may be slightly out of date. This is the default.
With --sync, df invokes the sync system call before getting any usage data. On some systems (notably SunOS), doing this yields more up to date results, but in general this option makes df much slower, especially when there are many or very busy file systems.
An exit status of zero indicates success, and a nonzero value indicates failure.
du reports the amount of disk space used by the specified files and for each subdirectory (of directory arguments). Synopsis:
du [option]... [file]...
With no arguments, du reports the disk space for the current directory. Normally the disk space is printed in units of 1024 bytes, but this can be overridden (see Block size). Non-integer quantities are rounded up to the next higher unit.
The program accepts the following options. Also see Common options.
With --apparent-size, du prints apparent sizes rather than disk usage; the apparent size of a file is the number of bytes reported by wc -c on regular files, or more generally, by ls -l --block-size=1 or stat --format=%s. For example, a file containing the word zoo with no newline would, of course, have an apparent size of 3. Such a small file may require anywhere from 0 to 16 KiB or more of disk space, depending on the type and configuration of the file system on which the file resides. However, a sparse file created with this command:

dd bs=1 seek=2GiB if=/dev/null of=big

has an apparent size of 2 GiB, yet on most modern systems, it actually uses almost no disk space.
The -b (--bytes) option is equivalent to --apparent-size --block-size=1.
du --max-depth=0 is equivalent to du -s.
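For instance, assuming a directory of interest (the path here is purely illustrative), one level of detail can be requested like this:

# One line for each immediate subdirectory of /var/log, plus a grand total.
du --max-depth=1 /var/log

# A single summary line; the same as du --max-depth=0 /var/log.
du -s /var/log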
You can specify the default value of the --time-style option
with the environment variable TIME_STYLE; if TIME_STYLE is not set
the default style is long-iso. For compatibility with ls,
if TIME_STYLE begins with + and contains a newline,
the newline and any later characters are ignored; if TIME_STYLE
begins with posix- the posix- is ignored; and if
TIME_STYLE is locale it is ignored.
du --exclude='*.o'
excludes files whose names
end in .o.
On BSD systems, du reports sizes that are half the correct values for files that are NFS-mounted from HP-UX systems. On HP-UX systems, it reports sizes that are twice the correct values for files that are NFS-mounted from BSD systems. This is due to a flaw in HP-UX; it also affects the HP-UX du program.
An exit status of zero indicates success, and a nonzero value indicates failure.
stat displays information about the specified file(s). Synopsis:
stat [option]... [file]...
With no option, stat reports all information about the given files. But it also can be used to report the information of the file systems the given files are located on. If the files are links, stat can also give information about the files the links point to.
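For example, to see the full report for a file and then just its size in bytes via the %s format sequence (the file name here is illustrative):

# Full listing of inode information for one file.
stat /etc/hostname

# Only the size in bytes, using a custom format.
stat --format=%s /etc/hostname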
The valid format sequences for files are:
The valid format sequences for file systems are:
Time stamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ (The GNU C Library).
An exit status of zero indicates success, and a nonzero value indicates failure.
sync writes any data buffered in memory out to disk. This can include (but is not limited to) modified superblocks, modified inodes, and delayed reads and writes. This must be implemented by the kernel; the sync program does nothing but exercise the sync system call.
The kernel keeps data in memory to avoid doing (relatively slow) disk reads and writes. This improves performance, but if the computer crashes, data may be lost or the file system corrupted as a result. The sync command ensures everything in memory is written to disk.
Any arguments are ignored, except for a lone --help or --version (see Common options).
An exit status of zero indicates success, and a nonzero value indicates failure.
This section describes commands that display text strings.
echo writes each given string to standard output, with a space between each and a newline after the last one. Synopsis:
echo [option]... [string]...
The program accepts the following options. Also see Common options. Options must precede operands, and the normally-special argument -- has no special meaning and is treated like any other string.
If the POSIXLY_CORRECT environment variable is set, then when
echo's first argument is not -n it outputs
option-like arguments instead of treating them as options. For
example, echo -ne hello
outputs -ne hello instead of
plain hello.
POSIX does not require support for any options, and says that the behavior of echo is implementation-defined if any string contains a backslash or if the first argument is -n. Portable programs can use the printf command if they need to omit trailing newlines or output control characters or backslashes. See printf invocation.
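For example, here is a small sketch of using printf where echo would be unportable: printing without a trailing newline, and printing a string that begins with - or contains a backslash:

printf '%s' 'no trailing newline here'
printf '%s\n' '-n and \ are printed literally'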
An exit status of zero indicates success, and a nonzero value indicates failure.
printf does formatted printing of text. Synopsis:
printf format [argument]...
printf prints the format string, interpreting % directives and \ escapes to format numeric and string arguments in a way that is mostly similar to the C printf function. The differences are as follows:
A floating-point argument must use a period before any fractional digits, but is printed according to the LC_NUMERIC category of the current locale. For example, in a locale whose radix character is a comma, the command printf %g 3.14 outputs 3,14 whereas the command printf %g 3,14 is an error.
printf interprets \ooo in format as an octal number (if ooo is 1 to 3 octal digits) specifying a character to print, and \xhh as a hexadecimal number (if hh is 1 to 2 hex digits) specifying a character to print.
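As a brief sketch, the following assumes an ASCII-based system, where octal 101 and 102 are A and B, and hexadecimal 43 and 44 are C and D:

printf '\101\102\n'
=> AB
printf '\x43\x44\n'
=> CD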
printf interprets two character syntaxes introduced in ISO C 99: \u for 16-bit Unicode (ISO/IEC 10646) characters, specified as four hexadecimal digits hhhh, and \U for 32-bit Unicode characters, specified as eight hexadecimal digits hhhhhhhh. printf outputs the Unicode characters according to the LC_CTYPE locale.
The processing of \u and \U requires a full-featured
iconv
facility. It is activated on systems with glibc 2.2 (or newer),
or when libiconv
is installed prior to this package. Otherwise
\u and \U will print as-is.
The only options are a lone --help or --version. See Common options. Options must precede operands.
The Unicode character syntaxes are useful for writing strings in a locale independent way. For example, a string containing the Euro currency symbol
$ /usr/local/bin/printf '\u20AC 14.95'
will be output correctly in all locales supporting the Euro symbol (ISO-8859-15, UTF-8, and others). Similarly, a Chinese string
$ /usr/local/bin/printf '\u4e2d\u6587'
will be output correctly in all Chinese locales (GB2312, BIG5, UTF-8, etc).
Note that in these examples, the full name of printf has been
given, to distinguish it from the GNU bash
built-in function
printf.
For larger strings, you don't need to look up the hexadecimal code values of each character one by one. This encoding of ASCII characters mixed with \u escape sequences is also known as the JAVA source file encoding. You can use GNU recode 3.5c (or newer) to convert strings to this encoding. Here is how to convert a piece of text into a shell script which will output this text in a locale-independent way:
$ LC_CTYPE=zh_CN.big5 /usr/local/bin/printf \
    '\u4e2d\u6587\n' > sample.txt
$ recode BIG5..JAVA < sample.txt \
    | sed -e "s|^|/usr/local/bin/printf '|" -e "s|$|\\\\n'|" \
    > sample.sh
An exit status of zero indicates success, and a nonzero value indicates failure.
yes prints the command line arguments, separated by spaces and followed by a newline, forever until it is killed. If no arguments are given, it prints y followed by a newline forever until killed.
Upon a write error, yes exits with status 1.
The only options are a lone --help or --version. To output an argument that begins with -, precede it with --, e.g., yes -- --help. See Common options.
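For example, piping yes into head produces a fixed number of identical lines (head is assumed to be available, as it is on virtually all systems):

$ yes testing | head -n 3
testing
testing
testing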
This section describes commands that are primarily useful for their exit
status, rather than their output. Thus, they are often used as the
condition of shell if
statements, or as the last command in a
pipeline.
false does nothing except return an exit status of 1, meaning failure. It can be used as a place holder in shell scripts where an unsuccessful command is needed. In most modern shells, false is a built-in command, so when you use false in a script, you're probably using the built-in command, not the one documented here.
false honors the --help and --version options.
This version of false is implemented as a C program, and is thus more secure and faster than a shell script implementation, and may safely be used as a dummy shell for the purpose of disabling accounts.
Note that false (unlike all other programs documented herein) exits unsuccessfully, even when invoked with --help or --version.
Portable programs should not assume that the exit status of false is 1, as it is greater than 1 on some non-GNU hosts.
true does nothing except return an exit status of 0, meaning
success. It can be used as a place holder in shell scripts
where a successful command is needed, although the shell built-in
command :
(colon) may do the same thing faster.
In most modern shells, true is a built-in command, so when
you use true in a script, you're probably using the built-in
command, not the one documented here.
true honors the --help and --version options.
Note, however, that it is possible to cause true to exit with nonzero status: with the --help or --version option, and with standard output already closed or redirected to a file that evokes an I/O error. For example, using a Bourne-compatible shell:
$ ./true --version >&-
./true: write error: Bad file number
$ ./true --version > /dev/full
./true: write error: No space left on device
This version of true is implemented as a C program, and is thus more secure and faster than a shell script implementation, and may safely be used as a dummy shell for the purpose of disabling accounts.
test returns a status of 0 (true) or 1 (false) depending on the evaluation of the conditional expression expr. Each part of the expression must be a separate argument.
test has file status checks, string operators, and numeric comparison operators.
test has an alternate form that uses opening and closing square brackets instead of a leading test. For example, instead of test -d /, you can write [ -d / ]. The square brackets must be separate arguments; for example, [-d /] does not have the desired effect. Since test expr and [ expr ] have the same meaning, only the former form is discussed below.
Synopses:
test expression
test
[ expression ]
[ ]
[ option
Because most shells have a built-in test command, using an unadorned test in a script or interactively may get you different functionality than that described here.
If expression is omitted, test returns false. If expression is a single argument, test returns false if the argument is null and true otherwise. The argument can be any string, including strings like -d, -1, --, --help, and --version that most other programs would treat as options. To get help and version information, invoke the commands [ --help and [ --version, without the usual closing brackets. See Common options.
0 if the expression is true,
1 if the expression is false,
2 if an error occurred.
These options test for particular types of files. (Everything's a file, but not all files are the same!)
These options test for particular access permissions.
These options test other file characteristics.
These options test string characteristics. You may need to quote string arguments for the shell. For example:
test -n "$V"
The quotes here prevent the wrong arguments from being passed to test if $V is empty or contains special characters.
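For example, test is typically used as the condition of a shell if statement; the variable name V here is just an illustration:

if test -n "$V"; then
  echo 'V is set and non-empty'
fi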
Numeric relationals. The arguments must be entirely numeric (possibly
negative), or the special expression -l
string, which
evaluates to the length of string.
For example:
test -1 -gt -2 && echo yes
=> yes
test -l abc -gt 1 && echo yes
=> yes
test 0x100 -eq 1
error--> test: integer expression expected before -eq
The usual logical connectives.
expr evaluates an expression and writes the result on standard output. Each token of the expression must be a separate argument.
Operands are either integers or strings. Integers consist of one or more decimal digits, with an optional leading -. expr converts anything appearing in an operand position to an integer or a string depending on the operation being applied to it.
Strings are not quoted for expr itself, though you may need to quote them to protect characters with special meaning to the shell, e.g., spaces. However, regardless of whether it is quoted, a string operand should not be a parenthesis or any of expr's operators like +, so you cannot safely pass an arbitrary string $str to expr merely by quoting it to the shell. One way to work around this is to use the gnu extension + (e.g., + "$str" = foo); a more portable way is to use " $str" and to adjust the rest of the expression to take the leading space into account (e.g., " $str" = " foo").
You should not pass a negative integer or a string with leading - as expr's first argument, as it might be misinterpreted as an option; this can be avoided by parenthesization. Also, portable scripts should not use a string operand that happens to take the form of an integer; this can be worked around by inserting leading spaces as mentioned above.
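Here is a small sketch of the leading-space workaround; the variable name str and its value are hypothetical. Without the leading spaces, a value such as index could be mistaken for expr's index operator:

str=index
expr " $str" = " index"
=> 1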
Operators may be given as infix symbols or prefix keywords. Parentheses may be used for grouping in the usual manner. You must quote parentheses and many operators to avoid the shell evaluating them, however.
The only options are --help and --version. See Common options. Options must precede operands.
0 if the expression is neither null nor 0,
1 if the expression is null or 0,
2 if the expression is syntactically invalid,
3 if an error occurred.
expr supports pattern matching and other string operators. These have lower precedence than both the numeric and relational operators (in the next sections).
The string : regex operator performs pattern matching. The arguments are converted to strings, and the second is considered to be a (basic, a la GNU grep) regular expression, with a ^ implicitly prepended. The first argument is then matched against this regular expression.
If the match succeeds and regex uses \( and \), the : expression returns the part of string that matched the subexpression; otherwise, it returns the number of characters matched.
If the match fails, the : operator returns the null string if \( and \) are used in regex, otherwise 0.
Only the first \( ... \) pair is relevant to the return value; additional pairs are meaningful only for grouping the regular expression operators.
In the regular expression, \+, \?, and \| are operators which respectively match one or more, zero or one, or separate alternatives. SunOS and other expr's treat these as regular characters. (POSIX allows either behavior.) See Regular Expression Library (Regex), for details of regular expression syntax. Some examples are in Examples of expr.
The + token operator interprets token as a string, even if it is a keyword like match or an operator like /. This makes it possible to test expr length + "$x" or expr + "$x" : '.*/\(.\)' and have it do the right thing even if the value of $x happens to be (for example) / or index. This operator is a GNU extension. Portable shell scripts should use " $token" : ' \(.*\)' instead of + "$token".
To make expr interpret keywords as strings, you must use the quote operator.
expr supports the usual numeric operators, in order of increasing precedence. The string operators (previous section) have lower precedence, the connectives (next section) have higher.
expr supports the usual logical connectives and relations. These are higher precedence than either the string or numeric operators (previous sections). Here is the list, lowest-precedence operator first.
== is a synonym for =. expr first tries to convert both arguments to integers and do a numeric comparison; if either conversion fails, it does a lexicographic comparison using the character collating sequence specified by the LC_COLLATE locale.
Here are a few examples, including quoting for shell metacharacters.
To add 1 to the shell variable foo, in Bourne-compatible shells:

foo=`expr $foo + 1`

To print the non-directory part of the file name stored in $fname, which need not contain a /:

expr $fname : '.*/\(.*\)' '|' $fname

An example showing that \+ is an operator:

expr aaa : 'a\+'
=> 3

expr abc : 'a\(.\)c'
=> b
expr index abcdef cz
=> 3
expr index index a
error--> expr: syntax error
expr index quote index a
=> 0
Unix shells commonly provide several forms of redirection—ways to change the input source or output destination of a command. But one useful redirection is performed by a separate command, not by the shell; it's described here.
The tee command copies standard input to standard output and also to any files given as arguments. This is useful when you want not only to send some data down a pipe, but also to save a copy. Synopsis:
tee [option]... [file]...
If a file being written to does not already exist, it is created. If a file being written to already exists, the data it previously contained is overwritten unless the -a option is used.
A file of - causes tee to send another copy of input to standard output, but this is typically not that useful as the copies are interleaved.
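For example, to save a copy of a directory listing in a file while also counting its lines (the file name listing.txt is illustrative):

ls /usr/bin | tee listing.txt | wc -l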
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
This section describes commands that manipulate file names.
basename removes any leading directory components from name. Synopsis:
basename name [suffix]
If suffix is specified and is identical to the end of name, it is removed from name as well. basename prints the result on standard output.
The only options are --help and --version. See Common options. Options must precede operands.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Output "sort". basename /usr/bin/sort # Output "stdio". basename include/stdio.h .h
dirname prints all but the final slash-delimited component of a string (presumably a file name). Synopsis:
dirname name
If name is a single component, dirname prints . (meaning the current directory).
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
Examples:
# Output "/usr/bin". dirname /usr/bin/sort # Output ".". dirname stdio.h
pathchk checks portability of file names. Synopsis:
pathchk [option]... name...
For each name, pathchk prints a message if any of these conditions is true:
A nonexistent name is not an error, so long as a file with that name could be created under the above conditions.
The program accepts the following options. Also see Common options. Options must precede operands.
0 if all specified file names passed all checks, 1 otherwise.
This section describes commands that display or alter the context in which you are working: the current directory, the terminal settings, and so forth. See also the user-related commands in the next section.
pwd prints the fully resolved name of the current directory. That is, all components of the printed name will be actual directory names—none will be symbolic links.
Because most shells have a built-in pwd command, using an unadorned pwd in a script or interactively may get you different functionality than that described here.
The only options are a lone --help or --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
stty prints or changes terminal characteristics, such as baud rate. Synopses:
stty [option] [setting]...
stty [option]
If given no line settings, stty prints the baud rate, line discipline number (on systems that support it), and line settings that have been changed from the values set by stty sane. By default, mode reading and setting are performed on the tty line connected to standard input, although this can be modified by the --file option.
stty accepts many non-option arguments that change aspects of the terminal line operation, as described below.
The program accepts the following options. Also see Common options.
The --file (-F) option is necessary because opening a POSIX tty requires use of the O_NONDELAY flag to prevent a POSIX tty from blocking until the carrier detect line is high if the clocal flag is not set. Hence, it is not always possible to allow the shell to open the device in the traditional manner.
Many settings can be turned off by preceding them with a -. Such arguments are marked below with “May be negated” in their description. The descriptions themselves refer to the positive case, that is, when not negated (unless stated otherwise, of course).
Some settings are not available on all POSIX systems, since they use extensions. Such arguments are marked below with “Non-POSIX” in their description. On non-POSIX systems, those or other settings also may not be available, but it's not feasible to document all the variations: just try it and see.
An exit status of zero indicates success, and a nonzero value indicates failure.
stop character when the system input buffer is almost full, and start character when it becomes almost empty again. May be negated.
These arguments specify output-related operations.
interrupt, quit, and suspend special characters. May be negated.
erase, kill, werase, and rprnt special characters. May be negated.
erase characters as backspace-space-backspace. May be negated.
kill character. May be negated.
interrupt and quit special characters. May be negated.
icanon is set. Non-POSIX. May be negated.
kill special character by erasing each character on the line as indicated by the echoprt and echoe settings, instead of by the echoctl and echok settings. Non-POSIX. May be negated.
parenb -parodd cs7. May be negated. If negated, same as -parenb cs8.
parenb parodd cs7. May be negated. If negated, same as -parenb cs8.
-icrnl -onlcr. May be negated. If negated, same as icrnl -inlcr -igncr onlcr -ocrnl -onlret.
erase and kill special characters to their default values.
cread -ignbrk brkint -inlcr -igncr icrnl -ixoff -iuclc -ixany imaxbel opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke
and also sets all special characters to their default values.
brkint ignpar istrip icrnl ixon opost isig icanon, plus sets the eof and eol characters to their default values if they are the same as the min and time characters. May be negated. If negated, same as raw.
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany -imaxbel -opost -isig -icanon -xcase min 1 time 0. May be negated. If negated, same as cooked.
icanon.
-parenb -istrip cs8. May be negated. If negated, same as parenb istrip cs7.
parenb istrip opost cs7.
tab0. Non-POSIX. May be negated. If negated, same as tab3.
xcase iuclc olcuc. Non-POSIX. May be negated.
echoe echoctl echoke.
echoe echoctl echoke -ixany intr ^C erase ^? kill C-u.
The special characters' default values vary from system to system. They are set with the syntax name value, where the names are listed below and the value can be given either literally, in hat notation (^c), or as an integer which may start with 0x to indicate hexadecimal, 0 to indicate octal, or any other digit to indicate decimal.
For GNU stty, giving a value of ^- or undef disables that special character. (This is incompatible with Ultrix stty, which uses a value of u to disable a special character. GNU stty treats a value u like any other, namely to set that special character to <U>.)
exta extb. exta is the same as 19200; extb is the same as 38400. 0 hangs up the line if -clocal is set.
printenv prints environment variable values. Synopsis:
printenv [option] [variable]...
If no variables are specified, printenv prints the value of every environment variable. Otherwise, it prints the value of each variable that is set, and nothing for those that are not set.
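For example, assuming HOME is set to /home/alice and SOMETHING_UNSET is not set at all, the following prints one line and exits with status 1 because one requested variable was not found:

$ printenv HOME SOMETHING_UNSET
/home/alice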
The only options are a lone --help or --version. See Common options.
0 if all variables specified were found
1 if at least one specified variable was not found
2 if a write error occurred
tty prints the file name of the terminal connected to its standard input. It prints not a tty if standard input is not a terminal. Synopsis:
tty [option]...
The program accepts the following option. Also see Common options.
0 if standard input is a terminal
1 if standard input is not a terminal
2 if given incorrect arguments
3 if a write error occurs
This section describes commands that print user-related information: logins, groups, and so forth.
id prints information about the given user, or the process running it if no user is specified. Synopsis:
id [option]... [username]
By default, it prints the real user ID, real group ID, effective user ID if different from the real user ID, effective group ID if different from the real group ID, and supplemental group IDs.
Each of these numeric values is preceded by an identifying string and followed by the corresponding user or group name in parentheses.
The options cause id to print only part of the above information. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
logname prints the calling user's name, as found in a system-maintained file (often /var/run/utmp or /etc/utmp), and exits with a status of 0. If there is no entry for the calling process, logname prints an error message and exits with a status of 1.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
whoami prints the user name associated with the current effective user ID. It is equivalent to the command id -un.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
groups prints the names of the primary and any supplementary groups for each given username, or the current process if no names are given. If names are given, the name of each user is printed before the list of that user's groups. Synopsis:
groups [username]...
The group lists are equivalent to the output of the command id -Gn.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
users prints on a single line a blank-separated list of user names of users currently logged in to the current host. Each user name corresponds to a login session, so if a user has more than one login session, that user's name will appear the same number of times in the output. Synopsis:
users [file]
With no file argument, users extracts its information from a system-maintained file (often /var/run/utmp or /etc/utmp). If a file argument is given, users uses that file instead. A common choice is /var/log/wtmp.
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
who prints information about users who are currently logged on. Synopsis:
who [option] [file] [am i]
If given no non-option arguments, who prints the following information for each user currently logged on: login name, terminal line, login time, and remote hostname or X display.
If given one non-option argument, who uses that instead of a default system-maintained file (often /var/run/utmp or /etc/utmp) as the name of the file containing the record of users logged on. /var/log/wtmp is commonly given as an argument to who to look at who has previously logged on.
If given two non-option arguments, who prints only the entry for the user running it (determined from its standard input), preceded by the hostname. Traditionally, the two arguments given are am i, as in who am i.
Time stamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ (The GNU C Library).
The program accepts the following options. Also see Common options.
+ allowing write messages
- disallowing write messages
? cannot find terminal device
An exit status of zero indicates success, and a nonzero value indicates failure.
This section describes commands that print or change system-wide information.
date [option]... [+format]
date [-u|--utc|--universal] [ MMDDhhmm[[CC]YY][.ss] ]
Invoking date with no format argument is equivalent to invoking it with a default format that depends on the LC_TIME locale category. In the default C locale, this format is '+%a %b %e %H:%M:%S %Z %Y', so the output looks like Thu Mar 3 13:47:51 PST 2005.
Normally, date uses the time zone rules indicated by the TZ environment variable, or the system default rules if TZ is not set. See Specifying the Time Zone with TZ (The GNU C Library).
If given an argument that starts with a +, date prints the
current date and time (or the date and time specified by the
--date option, see below) in the format defined by that argument,
which is similar to that of the strftime
function. Except for
conversion specifiers, which start with %, characters in the
format string are printed unchanged. The conversion specifiers are
described below.
An exit status of zero indicates success, and a nonzero value indicates failure.
date conversion specifiers related to times.
date conversion specifiers related to dates.
date conversion specifiers that produce literal strings.
Unless otherwise specified, date normally pads numeric fields with zeroes, so that, for example, numeric months are always output as two digits. Seconds since the epoch are not padded, though, since there is no natural width for them.
As a GNU extension, date recognizes any of the following optional flags after the %:
Here are some examples of padding:
date +%d/%m -d "Feb 1"
=> 01/02
date +%-d/%-m -d "Feb 1"
=> 1/2
date +%_d/%_m -d "Feb 1"
=> 1/ 2
As a GNU extension, you can specify the field width (after any flag, if present) as a decimal number. If the natural size of the output of the field is less than the specified number of characters, the result is written right adjusted and padded to the given size. For example, %9B prints the right adjusted month name in a field of width 9.
An optional modifier can follow the optional flag and width specification. The modifiers are:
If the format supports the modifier but no alternate representation is available, it is ignored.
If given an argument that does not start with +, date sets the system clock to the date and time specified by that argument (as described below). You must have appropriate privileges to set the system clock. The --date and --set options may not be used with such an argument. The --universal option may be used with such an argument to indicate that the specified date and time are relative to Coordinated Universal Time rather than to the local time zone.
The argument must consist entirely of digits, which have the following meaning:
The --set option also sets the system clock; see the next section.
The program accepts the following options. Also see Common options.
Fri, 09 Sep 2005 13:51:39 -0700

This format conforms to Internet RFCs 2822 and 822, the current and previous standards for Internet email.
The argument timespec specifies how much of the time to include. It can be one of the following:
Here are a few examples. Also see the documentation for the -d option in the previous section.
date --date='2 days ago'
date --date='3 months 1 day'
date --date='25 Dec' +%j
date '+%B %d'
But this may not be what you want because for the first nine days of the month, the %d expands to a zero-padded two-digit field, for example date -d 1may '+%B %d' will print May 01.
date -d 1may '+%B %-d'
date +%m%d%H%M%Y.%S
date --set='+2 minutes'
Fri, 09 Sep 2005 13:51:39 -0700
date --date='1970-01-01 00:02:00 +0000' +%s
120
If you do not specify time zone information in the date string, date uses your computer's idea of the time zone when interpreting the string. For example, if your computer's time zone is that of Cambridge, Massachusetts, which was then 5 hours (i.e., 18,000 seconds) behind UTC:
# local time zone used
date --date='1970-01-01 00:02:00' +%s
18120
date --date='2000-01-01 UTC' +%s
946684800
An alternative is to use the --utc (-u) option. Then you may omit UTC from the date string. Although this produces the same result for %s and many other format sequences, with a time zone offset different from zero, it would give a different result for zone-dependent formats like %z.
date -u --date=2000-01-01 +%s
946684800
To convert such an unwieldy number of seconds back to a more readable form, use a command like this:
# local time zone used
date -d '1970-01-01 UTC 946684800 seconds' +"%Y-%m-%d %T %z"
1999-12-31 19:00:00 -0500
Often it is better to output UTC-relative date and time:
date -u -d '1970-01-01 946684800 seconds' +"%Y-%m-%d %T %z"
2000-01-01 00:00:00 +0000
uname prints information about the machine and operating system it is run on. If no options are given, uname acts as if the -s option were given. Synopsis:
uname [option]...
If multiple options or -a are given, the selected information is printed in this order:
kernel-name nodename kernel-release kernel-version machine processor hardware-platform operating-system
The information may contain internal spaces, so such output cannot be parsed reliably. In the following example, release is 2.2.18ss.e820-bda652a #4 SMP Tue Jun 5 11:24:08 PDT 2001:
uname -a
=> Linux dum 2.2.18 #4 SMP Tue Jun 5 11:24:08 PDT 2001 i686 unknown unknown GNU/Linux
The program accepts the following options. Also see Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
With no arguments, hostname prints the name of the current host system. With one argument, it sets the current host name to the specified string. You must have appropriate privileges to set the host name. Synopsis:
hostname [name]
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
hostid prints the numeric identifier of the current host in hexadecimal. This command accepts no arguments. The only options are --help and --version. See Common options.
For example, here's what it prints on one system I use:
$ hostid
1bac013d
On that system, the 32-bit quantity happens to be closely related to the system's Internet address, but that isn't always the case.
An exit status of zero indicates success, and a nonzero value indicates failure.
This section describes commands that run other commands in some context different than the current one: a modified environment, as a different user, etc.
chroot runs a command with a specified root directory. On many systems, only the super-user can do this. Synopses:
chroot newroot [command [args]...]
chroot option
Ordinarily, file names are looked up starting at the root of the directory structure, i.e., /. chroot changes the root to the directory newroot (which must exist) and then runs command with optional args. If command is not specified, the default is the value of the SHELL environment variable or /bin/sh if not set, invoked with the -i option. command must not be a special built-in utility (see Special built-in utilities).
The only options are --help and --version. See Common options. Options must precede operands.
Here are a few tips to help avoid common problems in using chroot. To start with a simple example, make command refer to a statically linked binary. If you were to use a dynamically linked executable, then you'd have to arrange to have the shared libraries in the right place under your new root directory.
For example, if you create a statically linked ls executable, and put it in /tmp/empty, you can run this command as root:
$ chroot /tmp/empty /ls -Rl /
Then you'll see output like this:
/:
total 1023
-rwxr-xr-x 1 0 0 1041745 Aug 16 11:17 ls
If you want to use a dynamically linked executable, say bash, then first run ldd bash to see what shared objects it needs. Then, in addition to copying the actual binary, also copy the listed files to the required positions under your intended new root directory. Finally, if the executable requires any other files (e.g., data, state, device files), copy them into place, too.
1 if chroot itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
env runs a command with a modified environment. Synopses:
env [option]... [name=value]... [command [args]...]
env
Operands of the form variable=value set the environment variable variable to value value. value may be empty (variable=). Setting a variable to an empty value is different from unsetting it. These operands are evaluated left-to-right, so if two operands mention the same variable the earlier is ignored.
Environment variable names can be empty, and can contain any characters other than = and the null character (ASCII nul). However, it is wise to limit yourself to names that consist solely of underscores, digits, and ASCII letters, and that begin with a non-digit, as applications like the shell do not work well with other names.
The first operand that does not contain the character = specifies the program to invoke; it is searched for according to the PATH environment variable. Any remaining arguments are passed as arguments to that program. The program should not be a special built-in utility (see Special built-in utilities).
If no command name is specified following the environment specifications, the resulting environment is printed. This is like specifying the printenv program.
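For example, here is a small sketch using a hypothetical variable FOO; the first command runs printenv with FOO added to its environment, and the second prints the resulting environment instead of running a command:

$ env FOO=bar printenv FOO
bar
$ env FOO=bar | grep '^FOO='
FOO=bar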
The program accepts the following options. Also see Common options. Options must precede operands.
0 if no command is specified and the environment is output
1 if env itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
nice prints or modifies a process's niceness, a parameter that affects whether the process is scheduled favorably. Synopsis:
nice [option]... [command [arg]...]
If no arguments are given, nice prints the current niceness. Otherwise, nice runs the given command with its niceness adjusted. By default, its niceness is incremented by 10.
Nicenesses range at least from −20 (resulting in the most favorable scheduling) through 19 (the least favorable). Some systems may have a wider range of nicenesses; conversely, other systems may enforce more restrictive limits. An attempt to set the niceness outside the supported range is treated as an attempt to use the minimum or maximum supported value.
A niceness should not be confused with a scheduling priority, which lets applications determine the order in which threads are scheduled to run. Unlike a priority, a niceness is merely advice to the scheduler, which the scheduler is free to ignore. Also, as a point of terminology, POSIX defines the behavior of nice in terms of a nice value, which is the nonnegative difference between a niceness and the minimum niceness. Though nice conforms to POSIX, its documentation and diagnostics use the term “niceness” for compatibility with historical practice.
command must not be a special built-in utility (see Special built-in utilities).
Because many shells have a built-in nice command, using an unadorned nice in a script or interactively may get you different functionality than that described here.
The program accepts the following option. Also see Common options. Options must precede operands.
For compatibility nice also supports an obsolete option syntax -adjustment. New scripts should use -n adjustment instead.
0 if no command is specified and the niceness is output
1 if nice itself fails
126 if command is found but cannot be invoked
127 if command cannot be found
the exit status of command otherwise
It is sometimes useful to run a non-interactive program with reduced niceness.
$ nice factor 4611686018427387903
Since nice prints the current niceness, you can invoke it through itself to demonstrate how it works.
The default behavior is to increase the niceness by 10:
$ nice
0
$ nice nice
10
$ nice -n 10 nice
10
The adjustment is relative to the current niceness. In the next example, the first nice invocation runs the second one with niceness 10, and it in turn runs the final one with a niceness that is 3 more:
$ nice nice -n 3 nice
13
Specifying a niceness larger than the supported range is the same as specifying the maximum supported value:
$ nice -n 10000000000 nice
19
Only a privileged user may run a process with lower niceness:
$ nice -n -1 nice
nice: cannot set niceness: Permission denied
0
$ sudo nice -n -1 nice
-1
nohup runs the given command with hangup signals ignored, so that the command can continue running in the background after you log out. Synopsis:
nohup command [arg]...
If standard input is a terminal, it is redirected from /dev/null so that terminal sessions do not mistakenly consider the terminal to be used by the command. This is a GNU extension; programs intended to be portable to non-GNU hosts should use nohup command [arg]... </dev/null instead.
If standard output is a terminal, the command's standard output is appended to the file nohup.out; if that cannot be written to, it is appended to the file $HOME/nohup.out; and if that cannot be written to, the command is not run. Any nohup.out or $HOME/nohup.out file created by nohup is made readable and writable only to the user, regardless of the current umask settings.
If standard error is a terminal, it is redirected to the same file descriptor as the (possibly-redirected) standard output.
nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an &. Also, nohup does not alter the niceness of command; use nice for that, e.g., nohup nice command.
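For example, to start a long-running job that survives logout, run it with an increased niceness, and put it in the background (my-backup.sh is a hypothetical script):

nohup nice ./my-backup.sh &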
command must not be a special built-in utility (see Special built-in utilities).
The only options are --help and --version. See Common options. Options must precede operands.
126 if command is found but cannot be invoked
127 if nohup itself fails or if command cannot be found
the exit status of command otherwise
su allows one user to temporarily become another user. It runs a command (often an interactive shell) with the real and effective user ID, group ID, and supplemental groups of a given user. Synopsis:
su [option]... [user [arg]...]
If no user is given, the default is root, the super-user. The shell to use is taken from user's passwd entry, or /bin/sh if none is specified there. If user has a password, su prompts for the password unless run by a user with effective user ID of zero (the super-user).
By default, su does not change the current directory. It sets the environment variables HOME and SHELL from the password entry for user, and if user is not the super-user, sets USER and LOGNAME to user. By default, the shell is not a login shell.
Any additional args are passed as additional arguments to the shell.
GNU su does not treat /bin/sh or any other shells specially
(e.g., by setting argv[0]
to -su, passing -c only
to certain shells, etc.).
su can optionally be compiled to use syslog
to report
failed, and optionally successful, su attempts. (If the system
supports syslog
.) However, GNU su does not check if the
user is a member of the wheel
group; see below.
The program accepts the following options. Also see Common options.
1 if su itself fails
126 if subshell is found but cannot be invoked
127 if subshell cannot be found
the exit status of the subshell otherwise
(This section is by Richard Stallman.)
Sometimes a few of the users try to hold total power over all the rest. For example, in 1984, a few users at the MIT AI lab decided to seize power by changing the operator password on the Twenex system and keeping it secret from everyone else. (I was able to thwart this coup and give power back to the users by patching the kernel, but I wouldn't know how to do that in Unix.)
However, occasionally the rulers do tell someone. Under the usual su mechanism, once someone learns the root password who sympathizes with the ordinary users, he or she can tell the rest. The “wheel group” feature would make this impossible, and thus cement the power of the rulers.
I'm on the side of the masses, not that of the rulers. If you are used to supporting the bosses and sysadmins in whatever they do, you might find this idea strange at first.
The kill command sends a signal to processes, causing them to terminate or otherwise act upon receiving the signal in some way. Alternatively, it lists information about signals. Synopses:
kill [-s signal | --signal signal | -signal] pid...
kill [-l | --list | -t | --table] [signal]...
The first form of the kill command sends a signal to all pid arguments. The default signal to send if none is specified is TERM. The special signal number 0 does not denote a valid signal, but can be used to test whether the pid arguments specify processes to which a signal could be sent.
If pid is positive, the signal is sent to the process with the process ID pid. If pid is zero, the signal is sent to all processes in the process group of the current process. If pid is −1, the signal is sent to all processes for which the user has permission to send a signal. If pid is less than −1, the signal is sent to all processes in the process group that equals the absolute value of pid.
If pid is not positive, a system-dependent set of system processes is excluded from the list of processes to which the signal is sent.
If a negative PID argument is desired as the first one, it should be preceded by --. However, as a common extension to POSIX, -- is not required with kill -signal -pid. The following commands are equivalent:
kill -15 -1
kill -TERM -1
kill -s TERM -- -1
kill -- -1
The first form of the kill command succeeds if every pid argument specifies at least one process that the signal was sent to.
The second form of the kill command lists signal information. Either the -l or --list option, or the -t or --table option must be specified. Without any signal argument, all supported signals are listed. The output of -l or --list is a list of the signal names, one per line; if signal is already a name, the signal number is printed instead. The output of -t or --table is a table of signal numbers, names, and descriptions. This form of the kill command succeeds if all signal arguments are valid and if there is no output error.
The kill command also supports the --help and --version options. See Common options.
A signal may be a signal name like HUP, or a signal number like 1, or an exit status of a process terminated by the signal. A signal name can be given in canonical form or prefixed by SIG. The case of the letters is ignored, except for the -signal option which must use upper case to avoid ambiguity with lower case option letters. The following signal names and numbers are supported on all POSIX compliant systems:
Other supported signal names have system-dependent corresponding numbers. All systems conforming to POSIX 1003.1-2001 also support the following signals:
POSIX 1003.1-2001 systems that support the XSI extension also support the following signals:
POSIX 1003.1-2001 systems that support the XRT extension also support at least eight real-time signals called RTMIN, RTMIN+1, ..., RTMAX-1, RTMAX.
sleep pauses for an amount of time specified by the sum of the values of the command line arguments. Synopsis:
sleep number[smhd]...
Each argument is a number followed by an optional unit; the default is seconds. The units are:
Historical implementations of sleep have required that number be an integer. However, GNU sleep accepts arbitrary floating point numbers (using a period before any fractional digits).
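For example (the second command relies on the GNU extension that accepts fractional numbers):

# Pause for one minute and thirty seconds.
sleep 1m 30s

# Pause for half a second.
sleep 0.5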
The only options are --help and --version. See Common options.
An exit status of zero indicates success, and a nonzero value indicates failure.
These programs do numerically-related operations.
factor prints prime factors. Synopses:
factor [number]...
factor option
If no number is specified on the command line, factor reads numbers from standard input, delimited by newlines, tabs, or spaces.
The only options are --help and --version. See Common options.
The algorithm it uses is not very sophisticated, so for some inputs factor runs for a long time. The hardest numbers to factor are the products of large primes. Factoring the product of the two largest 32-bit prime numbers takes about 80 seconds of CPU time on a 1.6 GHz Athlon.
$ p=`echo '4294967279 * 4294967291'|bc`
$ factor $p
18446743979220271189: 4294967279 4294967291
Similarly, it takes about 80 seconds for GNU factor (from coreutils-5.1.2) to “factor” the largest 64-bit prime:
$ factor 18446744073709551557
18446744073709551557: 18446744073709551557
In contrast, factor factors the largest 64-bit number in just over a tenth of a second:
$ factor `echo '2^64-1'|bc`
18446744073709551615: 3 5 17 257 641 65537 6700417
An exit status of zero indicates success, and a nonzero value indicates failure.
seq prints a sequence of numbers to standard output. Synopses:
seq [option]... last
seq [option]... first last
seq [option]... first increment last
seq prints the numbers from first to last by increment. By default, each number is printed on a separate line. When increment is not specified, it defaults to 1, even when first is larger than last. first also defaults to 1. So seq 1 prints 1, but seq 0 and seq 10 5 produce no output. Floating-point numbers may be specified (using a period before any fractional digits).
The program accepts the following options. Also see Common options. Options must precede operands.
If you want to use seq to print sequences of large integer values, don't use the default %g format since it can result in loss of precision:
$ seq 1000000 1000001
1e+06
1e+06
Instead, you can use the %1.f format to print large decimal numbers with no exponent and no decimal point.
$ seq --format=%1.f 1000000 1000001
1000000
1000001
If you want hexadecimal output, you can use printf to perform the conversion:
$ printf %x'\n' `seq -f %1.f 1048575 1024 1050623`
fffff
1003ff
1007ff
For very long lists of numbers, use xargs to avoid system limitations on the length of an argument list:
$ seq -f %1.f 1000000 | xargs printf %x'\n' | tail -n 3
f423e
f423f
f4240
To generate octal output, use the printf %o
format instead
of %x
. Note however that using printf might not work for numbers
outside the usual 32-bit range:
$ printf "%x\n" `seq -f %1.f 4294967295 4294967296` ffffffff bash: printf: 4294967296: Numerical result out of range
On most systems, seq can produce whole-number output for values up to 2^53, so here's a more general approach to base conversion that also happens to be more robust for such large numbers. It works by using bc and setting its output radix variable, obase, to 16 in this case to produce hexadecimal output.
$ (echo obase=16; seq -f %1.f 4294967295 4294967296)|bc
FFFFFFFF
100000000
Be careful when using seq with a fractional increment, otherwise you may see surprising results. Most people would expect to see 0.3 printed as the last number in this example:

$ seq -s ' ' 0 .1 .3
0 0.1 0.2

But that doesn't happen on most systems because seq is implemented using binary floating point arithmetic (via the C double type), which means some decimal numbers like .1 cannot be represented exactly. That in turn means some nonintuitive conditions like .1 * 3 > .3 will end up being true.
To work around that in the above example, use a slightly larger number as the last value:
$ seq -s ' ' 0 .1 .31
0 0.1 0.2 0.3
In general, when using an increment with a fractional part, where (last - first) / increment is (mathematically) a whole number, specify a slightly larger (or smaller, if increment is negative) value for last to ensure that last is the final value printed by seq.
An exit status of zero indicates success, and a nonzero value indicates failure.
Each file has a set of permissions that control the kinds of access that users have to that file. The permissions for a file are also called its access mode. They can be represented either in symbolic form or as an octal number.
There are three kinds of permissions that a user can have for a file:
There are three categories of users who may have different permissions to perform any of the above operations on a file:
Files are given an owner and group when they are created. Usually the owner is the current user and the group is the group of the directory the file is in, but this varies with the operating system, the file system the file is created on, and the way the file is created. You can change the owner and group of a file by using the chown and chgrp commands.
In addition to the three sets of three permissions listed above, a file's permissions have three special components, which affect only executable files (programs) and, on some systems, directories:
In addition to the permissions listed above, there may be file attributes specific to the file system, e.g: access control lists (ACLs), whether a file is compressed, whether a file can be modified (immutability), whether a file can be dumped. These are usually set using programs specific to the file system. For example:
Although a file's permission “bits” allow an operation on that file, that operation may still fail, because:
For example, if the immutable attribute is set on a file, it cannot be modified, regardless of the fact that you may have just run chmod a+w FILE.
Symbolic modes represent changes to files' permissions as
operations on single-character symbols. They allow you to modify either
all or selected parts of files' permissions, optionally based on
their previous values, and perhaps on the current umask
as well
(see Umask and Protection).
The format of symbolic modes is:
[ugoa...][+-=]perms...[,...]
where perms is either zero or more letters from the set rwxXst, or a single letter from the set ugo.
The following sections describe the operators and other details of symbolic modes.
The basic symbolic operations on a file's permissions are adding, removing, and setting the permission that certain users have to read, write, and execute the file. These operations have the following format:
users operation permissions
The spaces between the three parts above are shown for readability only; symbolic modes cannot contain spaces.
The users part tells which users' access to the file is changed. It consists of one or more of the following letters (or it can be empty; see Umask and Protection, for a description of what happens then). When more than one of these letters is given, the order that they are in does not matter.
u
The user who owns the file.
g
Other users who are in the file's group.
o
Other users who are not in the file's group.
a
All users; the same as ugo.
The operation part tells how to change the affected users' access to the file, and is one of the following symbols:
+
Add the specified permissions to each affected user's existing permissions for the file.
-
Remove the specified permissions from each affected user's existing permissions for the file.
=
Make the specified permissions the only permissions that each affected user has for the file.
The permissions part tells what kind of access to the file should be changed; it is normally zero or more of the following letters. As with the users part, the order does not matter when more than one letter is given. Omitting the permissions part is useful only with the = operation, where it gives the specified users no access at all to the file.
r
Permission to read the file.
w
Permission to write to (change) the file.
x
Permission to execute the file (or, for a directory, to search it).
For example, to give everyone permission to read and write a file, but not to execute it, use:
a=rw
To remove write permission for all users other than the file's owner, use:
go-w
The above command does not affect the access that the owner of the file has to it, nor does it affect whether other users can read or execute the file.
To give everyone except a file's owner no permission to do anything with that file, use the mode below. Other users could still remove the file, if they have write permission on the directory it is in.
go=
Another way to specify the same thing is:
og-rwx
You can base a file's permissions on its existing permissions. To do this, instead of using a series of r, w, or x letters after the operator, you use the letter u, g, or o. For example, the mode
o+g
adds the permissions for users who are in a file's group to the permissions that other users have for the file. Thus, if the file started out as mode 664 (rw-rw-r--), the above mode would change it to mode 666 (rw-rw-rw-). If the file had started out as mode 741 (rwxr----x), the above mode would change it to mode 745 (rwxr--r-x). The - and = operations work analogously.
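As a hedged illustration of this calculation (the file name demo is hypothetical, and ls -l output details vary by system):
$ touch demo
$ chmod 664 demo            # start from rw-rw-r--
$ chmod o+g demo            # give other users the same access the group has
$ ls -l demo                # the mode field should now read -rw-rw-rw-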
In addition to changing a file's read, write, and execute permissions, you can change its special permissions. See Mode Structure, for a summary of these permissions.
To change a file's permission to set the user ID on execution, use u in the users part of the symbolic mode and s in the permissions part.
To change a file's permission to set the group ID on execution, use g in the users part of the symbolic mode and s in the permissions part.
To change a file's permission to set the restricted deletion flag or sticky bit, omit the users part of the symbolic mode (or use a) and put t in the permissions part.
For example, to add set-user-ID permission to a program, you can use the mode:
u+s
To remove both set-user-ID and set-group-ID permission from it, you can use the mode:
ug-s
To set the restricted deletion flag or sticky bit, you can use the mode:
+t
The combination o+s has no effect. On GNU systems the combinations u+t and g+t have no effect, and o+t acts like plain +t.
The = operator is not very useful with special permissions; for example, the mode:
o=t
does set the restricted deletion flag or sticky bit, but it also removes all read, write, and execute permissions that users not in the file's group might have had for it.
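As a hedged transcript of these special-permission modes (the names prog and somedir are hypothetical; in ls -l output the set-user-ID bit appears as an s in the owner's execute position, and the restricted deletion flag or sticky bit as a t in the final position):
$ chmod u+x,u+s prog        # make prog executable and set-user-ID
$ ls -l prog                # mode field should begin with -rws
$ chmod ug-s prog           # clear both set-user-ID and set-group-ID
$ chmod +t somedir          # set the restricted deletion flag on a directory
$ ls -ld somedir            # mode field should end with t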
There is one more special type of symbolic permission: if you use X instead of x, execute permission is affected only if the file is a directory or already had execute permission.
For example, this mode:
a+X
gives all users permission to search directories, or to execute files if anyone could execute them before.
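Here is a hedged sketch of the difference between x and X (the names somedir, plain, and exec-file are hypothetical):
$ mkdir somedir; touch plain exec-file
$ chmod u+x exec-file               # exec-file now has one execute bit set
$ chmod a+X somedir plain exec-file
$ ls -ld somedir plain exec-file    # somedir and exec-file gain a+x; plain is unchanged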
The format of symbolic modes is actually more complex than described above (see Setting Permissions). It provides two ways to make multiple changes to files' permissions.
The first way is to specify multiple operation and permissions parts after a users part in the symbolic mode.
For example, the mode:
og+rX-w
gives users other than the owner of the file read permission and, if it is a directory or if someone already had execute permission to it, gives them execute permission; and it also denies them write permission to the file. It does not affect the permission that the owner of the file has for it. The above mode is equivalent to the two modes:
og+rX og-w
The second way to make multiple changes is to specify more than one simple symbolic mode, separated by commas. For example, the mode:
a+r,go-w
gives everyone permission to read the file and removes write permission on it for all users except its owner. Another example:
u=rwx,g=rx,o=
sets all of the non-special permissions for the file explicitly. (It gives users who are not in the file's group no permission at all for it.)
The two methods can be combined. The mode:
a+r,g+x-w
gives all users permission to read the file, and gives users who are in the file's group permission to execute it, as well, but not permission to write to it. The above mode could be written in several different ways; another is:
u+r,g+rx,o+r,g-w
If the users part of a symbolic mode is omitted, it defaults to a (affect all users), except that any permissions that are set in the system variable umask are not affected.
The value of umask can be set using the umask command. Its default value varies from system to system.
Omitting the users part of a symbolic mode is generally not useful with operations other than +. It is useful with + because it allows you to use umask as an easily customizable protection against giving away more permission to files than you intended to.
As an example, if umask has the value 2, which removes write permission for users who are not in the file's group, then the mode:
+w
adds permission to write to the file to its owner and to other users who are in the file's group, but not to other users. In contrast, the mode:
a+w
ignores umask, and does give write permission for the file to all users.
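A hedged transcript of the difference (the file name f is hypothetical; umask 002 corresponds to the umask value 2 used above):
$ umask 002                 # the mask clears only the "other users" write bit
$ touch f; chmod a= f       # start with no permissions at all
$ chmod +w f                # owner and group gain write; other users do not
$ chmod a+w f               # other users now gain write as well, despite umask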
As an alternative to giving a symbolic mode, you can give an octal (base 8) number that represents the new mode. This number is always interpreted in octal; you do not have to add a leading 0, as you do in C. Mode 0055 is the same as mode 55.
A numeric mode is usually shorter than the corresponding symbolic mode, but it is limited in that it cannot take into account a file's previous permissions; it can only set them absolutely.
The permissions granted to the user, to other users in the file's group, and to other users not in the file's group each require three bits, which are represented as one octal digit. The three special permissions also require one bit each, and they are as a group represented as another octal digit. Here is how the bits are arranged, starting with the lowest valued bit:
Value in mode   Corresponding permission
Other users not in the file's group:
      1         Execute
      2         Write
      4         Read
Other users in the file's group:
     10         Execute
     20         Write
     40         Read
The file's owner:
    100         Execute
    200         Write
    400         Read
Special permissions:
   1000         Restricted deletion flag or sticky bit
   2000         Set group ID on execution
   4000         Set user ID on execution
For example, numeric mode 4755 corresponds to symbolic mode u=rwxs,go=rx, and numeric mode 664 corresponds to symbolic mode ug=rw,o=r. Numeric mode 0 corresponds to symbolic mode a=.
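As a worked example of the arithmetic, 4755 is the sum of the bit values listed above, and the two chmod commands below are equivalent (the file name somefile is hypothetical):
# 4755 = 4000 (set user ID)
#      + 400 + 200 + 100   (owner: read, write, execute)
#      +  40 +  10         (group: read, execute)
#      +   4 +   1         (other users: read, execute)
$ chmod 4755 somefile
$ chmod u=rwxs,go=rx somefile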
Our units of temporal measurement, from seconds on up to months, are so complicated, asymmetrical and disjunctive so as to make coherent mental reckoning in time all but impossible. Indeed, had some tyrannical god contrived to enslave our minds to time, to make it all but impossible for us to escape subjection to sodden routines and unpleasant surprises, he could hardly have done better than handing down our present system. It is like a set of trapezoidal building blocks, with no vertical or horizontal surfaces, like a language in which the simplest thought demands ornate constructions, useless particles and lengthy circumlocutions. Unlike the more successful patterns of language and science, which enable us to face experience boldly or at least level-headedly, our system of temporal calculation silently and persistently encourages our terror of time. ...It is as though architects had to measure length in feet, width in meters and height in ells; as though basic instruction manuals demanded a knowledge of five different languages. It is no wonder then that we often look into our own immediate past or future, last Tuesday or a week from Sunday, with feelings of helpless confusion. ...
— Robert Grudin, Time and the Art of Living.
This section describes the textual date representations that gnu programs accept. These are the strings you, as a user, can supply as arguments to the various programs. The C interface (via the get_date function) is not described here.
A date is a string, possibly empty, containing many items separated by whitespace. The whitespace may be omitted when no ambiguity arises. The empty string means the beginning of today (i.e., midnight). Order of the items is immaterial. A date string may contain many flavors of items: calendar date items, time of day items, time zone items, day of the week items, relative items, and pure numbers.
We describe each of these item types in turn, below.
A few ordinal numbers may be written out in words in some contexts. This is most useful for specifying day of the week items or relative items (see below). Among the most commonly used ordinal numbers, the word last stands for -1, this stands for 0, and first and next both stand for 1. Because the word second stands for the unit of time there is no way to write the ordinal number 2, but for convenience third stands for 3, fourth for 4, fifth for 5, sixth for 6, seventh for 7, eighth for 8, ninth for 9, tenth for 10, eleventh for 11 and twelfth for 12.
When a month is written this way, it is still considered to be written numerically, instead of being “spelled in full”; this changes the allowed strings.
In the current implementation, only English is supported for words and abbreviations like AM, DST, EST, first, January, Sunday, tomorrow, and year.
The output of the date command is not always acceptable as a date string, not only because of the language problem, but also because there is no standard meaning for time zone items like IST. When using date to generate a date string intended to be parsed later, specify a date format that is independent of language and that does not use time zone items other than UTC and Z. Here are some ways to do this:
$ LC_ALL=C TZ=UTC0 date
Mon Mar 1 00:21:42 UTC 2004
$ TZ=UTC0 date +'%Y-%m-%d %H:%M:%SZ'
2004-03-01 00:21:42Z
$ date --iso-8601=ns | tr T ' '  # --iso-8601 is a GNU extension.
2004-02-29 16:21:42,692722128-0800
$ date --rfc-2822  # a GNU extension
Sun, 29 Feb 2004 16:21:42 -0800
$ date +'%Y-%m-%d %H:%M:%S %z'  # %z is a GNU extension.
2004-02-29 16:21:42 -0800
$ date +'@%s.%N'  # %s and %N are GNU extensions.
@1078100502.692722128
Alphabetic case is completely ignored in dates. Comments may be introduced between round parentheses, as long as included parentheses are properly nested. Hyphens not followed by a digit are currently ignored. Leading zeros on numbers are ignored.
A calendar date item specifies a day of the year. It is specified differently, depending on whether the month is specified numerically or literally. All these strings specify the same calendar date:
1972-09-24 # iso 8601.
72-9-24 # Assume 19xx for 69 through 99,
# 20xx for 00 through 68.
72-09-24 # Leading zeros are ignored.
9/24/72 # Common U.S. writing.
24 September 1972
24 Sept 72 # September has a special abbreviation.
24 Sep 72 # Three-letter abbreviations always allowed.
Sep 24, 1972
24-sep-72
24sep72
The year can also be omitted. In this case, the last specified year is used, or the current year if none. For example:
9/24
sep 24
Here are the rules.
For numeric months, the iso 8601 format year-month-day is allowed, where year is any positive number, month is a number between 01 and 12, and day is a number between 01 and 31. A leading zero must be present if a number is less than ten. If year is 68 or smaller, then 2000 is added to it; otherwise, if year is less than 100, then 1900 is added to it. The construct month/day/year, popular in the United States, is accepted. Also month/day, omitting the year.
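For example, with the GNU date command the century rules play out like this (outputs hedged; they simply restate the rules above):
$ date --date='99-09-24' +%Y     # 99 is greater than 68, so 1900 is added
1999
$ date --date='05-09-24' +%Y     # 05 is 68 or smaller, so 2000 is added
2005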
Literal months may be spelled out in full: January, February, March, April, May, June, July, August, September, October, November or December. Literal months may be abbreviated to their first three letters, possibly followed by an abbreviating dot. It is also permitted to write Sept instead of September.
When months are written literally, the calendar date may be given as any of the following:
day month year
day month
month day year
day-month-year
Or, omitting the year:
month day
A time of day item in date strings specifies the time on a given day. Here are some examples, all of which represent the same time:
20:02:00.000000
20:02
8:02pm
20:02-0500 # In est (U.S. Eastern Standard Time).
More generally, the time of day may be given as hour:minute:second, where hour is a number between 0 and 23, minute is a number between 0 and 59, and second is a number between 0 and 59 possibly followed by . or , and a fraction containing one or more digits. Alternatively, :second can be omitted, in which case it is taken to be zero.
If the time is followed by am or pm (or a.m. or p.m.), hour is restricted to run from 1 to 12, and :minute may be omitted (taken to be zero). am indicates the first half of the day, pm indicates the second half of the day. In this notation, 12 is the predecessor of 1: midnight is 12am while noon is 12pm. (This is the zero-oriented interpretation of 12am and 12pm, as opposed to the old tradition derived from Latin which uses 12m for noon and 12pm for midnight.)
The time may alternatively be followed by a time zone correction, expressed as shhmm, where s is + or -, hh is a number of zone hours and mm is a number of zone minutes. You can also separate hh from mm with a colon. When a time zone correction is given this way, it forces interpretation of the time relative to Coordinated Universal Time (utc), overriding any previous specification for the time zone or the local time zone. For example, +0530 and +05:30 both stand for the time zone 5.5 hours ahead of utc (e.g., India). The minute part of the time of day may not be elided when a time zone correction is used. This is the best way to specify a time zone correction by fractional parts of an hour.
Either am/pm or a time zone correction may be specified, but not both.
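For example, these hedged GNU date invocations illustrate the notations above; the last one uses TZ=UTC0 for the output so that the -0500 correction is visible:
$ date --date='2004-02-29 20:02' +%T
20:02:00
$ date --date='2004-02-29 8:02pm' +%T
20:02:00
$ TZ=UTC0 date --date='2004-02-29 20:02-0500' +'%F %T'
2004-03-01 01:02:00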
A time zone item specifies an international time zone, indicated by a small set of letters, e.g., UTC or Z for Coordinated Universal Time. Any included periods are ignored. By following a non-daylight-saving time zone by the string DST in a separate word (that is, separated by some white space), the corresponding daylight saving time zone may be specified. Alternatively, a non-daylight-saving time zone can be followed by a time zone correction, to add the two values. This is normally done only for UTC; for example, UTC+05:30 is equivalent to +05:30.
Time zone items other than UTC and Z are obsolescent and are not recommended, because they are ambiguous; for example, EST has a different meaning in Australia than in the United States. Instead, it's better to use unambiguous numeric time zone corrections like -0500, as described in the previous section.
If neither a time zone item nor a time zone correction is supplied, time stamps are interpreted using the rules of the default time zone (see Specifying time zone rules).
The explicit mention of a day of the week will forward the date (only if necessary) to reach that day of the week in the future.
Days of the week may be spelled out in full: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday or Saturday. Days may be abbreviated to their first three letters, optionally followed by a period. The special abbreviations Tues for Tuesday, Wednes for Wednesday and Thur or Thurs for Thursday are also allowed.
A number may precede a day of the week item to move forward supplementary weeks. It is best used in expressions like third monday. In this context, last day or next day is also acceptable; they move one week before or after the day that day by itself would represent.
A comma following a day of the week item is ignored.
Relative items adjust a date (or the current date if none) forward or backward. The effects of relative items accumulate. Here are some examples:
1 year
1 year ago
3 years
2 days
The unit of time displacement may be selected by the string year or month for moving by whole years or months. These are fuzzy units, as years and months are not all of equal duration. More precise units are fortnight which is worth 14 days, week worth 7 days, day worth 24 hours, hour worth 60 minutes, minute or min worth 60 seconds, and second or sec worth one second. An s suffix on these units is accepted and ignored.
The unit of time may be preceded by a multiplier, given as an optionally signed number. Unsigned numbers are taken as positively signed. No number at all implies 1 for a multiplier. Following a relative item by the string ago is equivalent to preceding the unit by a multiplier with value -1.
The string tomorrow is worth one day in the future (equivalent to day), the string yesterday is worth one day in the past (equivalent to day ago).
The strings now or today are relative items corresponding to zero-valued time displacement; these strings come from the fact that a zero-valued time displacement represents the current time when not otherwise changed by previous items. They may be used to stress other items, as in 12:00 today. The string this also has the meaning of a zero-valued time displacement, but is preferred in date strings like this thursday.
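Some hedged examples with GNU date (the first two outputs depend on when you run the commands, so only comments describe them):
$ date --date='2 days' +%F          # the date two days from now
$ date --date='1 month ago' +%F     # the date one fuzzy month before now
$ date --date='12:00 today' +%T     # should print 12:00:00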
When a relative item causes the resulting date to cross a boundary where the clocks were adjusted, typically for daylight saving time, the resulting date and time are adjusted accordingly.
The fuzz in units can cause problems with relative items. For example, 2003-07-31 -1 month might evaluate to 2003-07-01, because 2003-06-31 is an invalid date. To determine the previous month more reliably, you can ask for the month before the 15th of the current month. For example:
$ date -R
Thu, 31 Jul 2003 13:02:39 -0700
$ date --date='-1 month' +'Last month was %B?'
Last month was July?
$ date --date="$(date +%Y-%m-15) -1 month" +'Last month was %B!'
Last month was June!
Also, take care when manipulating dates around clock changes such as daylight saving leaps. In a few cases these have added or subtracted as much as 24 hours from the clock, so it is often wise to adopt universal time by setting the TZ environment variable to UTC0 before embarking on calendrical calculations.
The precise interpretation of a pure decimal number depends on the context in the date string.
If the decimal number is of the form yyyymmdd and no other calendar date item (see Calendar date items) appears before it in the date string, then yyyy is read as the year, mm as the month number and dd as the day of the month, for the specified calendar date.
If the decimal number is of the form hhmm and no other time of day item appears before it in the date string, then hh is read as the hour of the day and mm as the minute of the hour, for the specified time of day. mm can also be omitted.
If both a calendar date and a time of day appear to the left of a number in the date string, but no relative item, then the number overrides the year.
If you precede a number with @, it represents an internal time stamp as a count of seconds. The number can contain an internal decimal point (either . or ,); any excess precision not supported by the internal representation is truncated toward minus infinity. Such a number cannot be combined with any other date item, as it specifies a complete time stamp.
Internally, computer times are represented as a count of seconds since an epoch—a well-defined point of time. On GNU and POSIX systems, the epoch is 1970-01-01 00:00:00 utc, so @0 represents this time, @1 represents 1970-01-01 00:00:01 utc, and so forth. GNU and most other POSIX-compliant systems support such times as an extension to POSIX, using negative counts, so that @-1 represents 1969-12-31 23:59:59 utc.
Traditional Unix systems count seconds with 32-bit two's-complement integers and can represent times from 1901-12-13 20:45:52 through 2038-01-19 03:14:07 utc. More modern systems use 64-bit counts of seconds with nanosecond subcounts, and can represent all the times in the known lifetime of the universe to a resolution of 1 nanosecond.
On most systems, these counts ignore the presence of leap seconds. For example, on most systems @915148799 represents 1998-12-31 23:59:59 utc, @915148800 represents 1999-01-01 00:00:00 utc, and there is no way to represent the intervening leap second 1998-12-31 23:59:60 utc.
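For example, with GNU date (the -u option selects UTC, so these outputs should be reproducible):
$ date -u --date='@0' +'%Y-%m-%d %H:%M:%S'
1970-01-01 00:00:00
$ date -u --date='@915148800' +'%Y-%m-%d %H:%M:%S'
1999-01-01 00:00:00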
Normally, dates are interpreted using the rules of the current time zone, which in turn are specified by the TZ environment variable, or by a system default if TZ is not set. To specify a different set of default time zone rules that apply just to one date, start the date with a string of the form TZ="rule". The two quote characters (") must be present in the date, and any quotes or backslashes within rule must be escaped by a backslash.
For example, with the GNU date command you can answer the question “What time is it in New York when a Paris clock shows 6:30am on October 31, 2004?” by using a date beginning with TZ="Europe/Paris" as shown in the following shell transcript:
$ export TZ="America/New_York"
$ date --date='TZ="Europe/Paris" 2004-10-31 06:30'
Sun Oct 31 01:30:00 EDT 2004
In this example, the --date operand begins with its own TZ setting, so the rest of that operand is processed according to Europe/Paris rules, treating the string 2004-10-31 06:30 as if it were in Paris. However, since the output of the date command is processed according to the overall time zone rules, it uses New York time. (Paris was normally six hours ahead of New York in 2004, but this example refers to a brief Halloween period when the gap was five hours.)
A TZ value is a rule that typically names a location in the tz database. A recent catalog of location names appears in the TWiki Date and Time Gateway. A few non-GNU hosts require a colon before a location name in a TZ setting, e.g., TZ=":America/New_York".
The tz database includes a wide variety of locations ranging from Arctic/Longyearbyen to Antarctica/South_Pole, but if you are at sea and have your own private time zone, or if you are using a non-GNU host that does not support the tz database, you may need to use a POSIX rule instead. Simple POSIX rules like UTC0 specify a time zone without daylight saving time; other rules can specify simple daylight saving regimes. See Specifying the Time Zone with TZ (The GNU C Library).
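A hedged illustration of the two kinds of TZ values (outputs omitted since they depend on when you run the commands):
$ TZ="Europe/Paris" date +'%H:%M %Z'    # a location name from the tz database
$ TZ="UTC0" date +'%H:%M %Z'            # a simple POSIX rule: UTC, no daylight saving
$ TZ="EST5" date +'%H:%M %Z'            # a POSIX rule: a zone called EST, 5 hours behind UTC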
get_date was originally implemented by Steven M. Bellovin (smb@research.att.com) while at the University of North Carolina at Chapel Hill. The code was later tweaked by a couple of people on Usenet, then completely overhauled by Rich $alz (rsalz@bbn.com) and Jim Berets (jberets@bbn.com) in August, 1990. Various revisions for the gnu system were made by David MacKenzie, Jim Meyering, Paul Eggert and others.
This chapter was originally produced by François Pinard (pinard@iro.umontreal.ca) from the getdate.y source code, and then edited by K. Berry (kb@cs.umb.edu).
An earlier version of this chapter appeared in Linux Journal 2 (June 1994). It was written by Arnold Robbins.
This month's column is only peripherally related to the GNU Project, in that it describes a number of the GNU tools on your GNU/Linux system and how they might be used. What it's really about is the “Software Tools” philosophy of program development and usage.
The software tools philosophy was an important and integral concept in the initial design and development of Unix (of which Linux and GNU are essentially clones). Unfortunately, in the modern day press of Internetworking and flashy GUIs, it seems to have fallen by the wayside. This is a shame, since it provides a powerful mental model for solving many kinds of problems.
Many people carry a Swiss Army knife around in their pants pockets (or purse). A Swiss Army knife is a handy tool to have: it has several knife blades, a screwdriver, tweezers, toothpick, nail file, corkscrew, and perhaps a number of other things on it. For the everyday, small miscellaneous jobs where you need a simple, general purpose tool, it's just the thing.
On the other hand, an experienced carpenter doesn't build a house using a Swiss Army knife. Instead, he has a toolbox chock full of specialized tools—a saw, a hammer, a screwdriver, a plane, and so on. And he knows exactly when and where to use each tool; you won't catch him hammering nails with the handle of his screwdriver.
The Unix developers at Bell Labs were all professional programmers and trained computer scientists. They had found that while a one-size-fits-all program might appeal to a user because there's only one program to use, in practice such programs are difficult to write, difficult to maintain and debug, and difficult to extend to meet new situations.
Instead, they felt that programs should be specialized tools. In short, each program “should do one thing well.” No more and no less. Such programs are simpler to design, write, and get right—they only do one thing.
Furthermore, they found that with the right machinery for hooking programs together, that the whole was greater than the sum of the parts. By combining several special purpose programs, you could accomplish a specific task that none of the programs was designed for, and accomplish it much more quickly and easily than if you had to write a special purpose program. We will see some (classic) examples of this further on in the column. (An important additional point was that, if necessary, take a detour and build any software tools you may need first, if you don't already have something appropriate in the toolbox.)
Hopefully, you are familiar with the basics of I/O redirection in the shell, in particular the concepts of “standard input,” “standard output,” and “standard error”. Briefly, “standard input” is a data source, where data comes from. A program should not need to either know or care if the data source is a disk file, a keyboard, a magnetic tape, or even a punched card reader. Similarly, “standard output” is a data sink, where data goes to. The program should neither know nor care where this might be. Programs that only read their standard input, do something to the data, and then send it on, are called filters, by analogy to filters in a water pipeline.
With the Unix shell, it's very easy to set up data pipelines:
program_to_create_data | filter1 | ... | filterN > final.pretty.data
We start out by creating the raw data; each filter applies some successive transformation to the data, until by the time it comes out of the pipeline, it is in the desired form.
This is fine and good for standard input and standard output. Where does the standard error come in to play? Well, think about filter1 in the pipeline above. What happens if it encounters an error in the data it sees? If it writes an error message to standard output, it will just disappear down the pipeline into filter2's input, and the user will probably never see it. So programs need a place where they can send error messages so that the user will notice them. This is standard error, and it is usually connected to your console or window, even if you have redirected standard output of your program away from your screen.
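For example, here is a hedged sketch of keeping diagnostics out of a pipeline (the file names and search pattern are hypothetical):
$ grep pattern *.txt 2> errors.log | sort     # error messages go to errors.log, data goes to sort
$ grep pattern *.txt 2> /dev/null | sort      # or discard the diagnostics entirely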
For filter programs to work together, the format of the data has to be agreed upon. The most straightforward and easiest format to use is simply lines of text. Unix data files are generally just streams of bytes, with lines delimited by the ASCII lf (Line Feed) character, conventionally called a “newline” in the Unix literature. (This is '\n' if you're a C programmer.) This is the format used by all the traditional filtering programs. (Many earlier operating systems had elaborate facilities and special purpose programs for managing binary data. Unix has always shied away from such things, under the philosophy that it's easiest to simply be able to view and edit your data with a text editor.)
OK, enough introduction. Let's take a look at some of the tools, and then we'll see how to hook them together in interesting ways. In the following discussion, we will only present those command line options that interest us. As you should always do, double check your system documentation for the full story.
The first program is the who command. By itself, it generates a list of the users who are currently logged in. Although I'm writing this on a single-user system, we'll pretend that several people are logged in:
$ who
-| arnold   console Jan 22 19:57
-| miriam   ttyp0   Jan 23 14:19(:0.0)
-| bill     ttyp1   Jan 21 09:32(:0.0)
-| arnold   ttyp2   Jan 23 20:48(:0.0)
Here, the $ is the usual shell prompt, at which I typed who. There are three people logged in, and I am logged in twice. On traditional Unix systems, user names are never more than eight characters long. This little bit of trivia will be useful later. The output of who is nice, but the data is not all that exciting.
The next program we'll look at is the cut command. This program cuts out columns or fields of input data. For example, we can tell it to print just the login name and full name from the /etc/passwd file. The /etc/passwd file has seven fields, separated by colons:
arnold:xyzzy:2076:10:Arnold D. Robbins:/home/arnold:/bin/bash
To get the first and fifth fields, we would use cut like this:
$ cut -d: -f1,5 /etc/passwd
-| root:Operator
...
-| arnold:Arnold D. Robbins
-| miriam:Miriam A. Robbins
...
With the -c option, cut will cut out specific characters (i.e., columns) in the input lines. This is useful for input data that has fixed width fields, and does not have a field separator. For example, list the Monday dates for the current month:
$ cal | cut -c 3-5
-| Mo
-|
-|  6
-| 13
-| 20
-| 27
Next we'll look at the sort command. This is one of the most powerful commands on a Unix-style system; one that you will often find yourself using when setting up fancy data plumbing.
The sort command reads and sorts each file named on the command line. It then merges the sorted data and writes it to standard output. It will read standard input if no files are given on the command line (thus making it into a filter). The sort is based on the character collating sequence or based on user-supplied ordering criteria.
Finally (at least for now), we'll look at the uniq program. When sorting data, you will often end up with duplicate lines, lines that are identical. Usually, all you need is one instance of each line. This is where uniq comes in. The uniq program reads its standard input. It prints only one copy of each repeated line. It does have several options. Later on, we'll use the -c option, which prints each unique line, preceded by a count of the number of times that line occurred in the input.
Now, let's suppose this is a large ISP server system with dozens of users logged in. The management wants the system administrator to write a program that will generate a sorted list of logged in users. Furthermore, even if a user is logged in multiple times, his or her name should only show up in the output once.
The administrator could sit down with the system documentation and write a C program that did this. It would take perhaps a couple of hundred lines of code and about two hours to write it, test it, and debug it. However, knowing the software toolbox, the administrator can instead start out by generating just a list of logged on users:
$ who | cut -c1-8
-| arnold
-| miriam
-| bill
-| arnold
Next, sort the list:
$ who | cut -c1-8 | sort
-| arnold
-| arnold
-| bill
-| miriam
Finally, run the sorted list through uniq, to weed out duplicates:
$ who | cut -c1-8 | sort | uniq
-| arnold
-| bill
-| miriam
The sort command actually has a -u option that does what uniq does. However, uniq has other uses for which one cannot substitute sort -u.
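For this particular job, then, a hedged equivalent of the pipeline would be:
$ who | cut -c1-8 | sort -u
-| arnold
-| bill
-| miriam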
The administrator puts this pipeline into a shell script, and makes it available for all the users on the system (# is the system administrator, or root, prompt):
# cat > /usr/local/bin/listusers
who | cut -c1-8 | sort | uniq
^D
# chmod +x /usr/local/bin/listusers
There are four major points to note here. First, with just four programs, on one command line, the administrator was able to save about two hours worth of work. Furthermore, the shell pipeline is just about as efficient as the C program would be, and it is much more efficient in terms of programmer time. People time is much more expensive than computer time, and in our modern “there's never enough time to do everything” society, saving two hours of programmer time is no mean feat.
Second, it is also important to emphasize that with the combination of the tools, it is possible to do a special purpose job never imagined by the authors of the individual programs.
Third, it is also valuable to build up your pipeline in stages, as we did here. This allows you to view the data at each stage in the pipeline, which helps you acquire the confidence that you are indeed using these tools correctly.
Finally, by bundling the pipeline in a shell script, other users can use your command, without having to remember the fancy plumbing you set up for them. In terms of how you run them, shell scripts and compiled programs are indistinguishable.
After the previous warm-up exercise, we'll look at two additional, more complicated pipelines. For them, we need to introduce two more tools.
The first is the tr command, which stands for “transliterate.” The tr command works on a character-by-character basis, changing characters. Normally it is used for things like mapping upper case to lower case:
$ echo ThIs ExAmPlE HaS MIXED case! | tr '[:upper:]' '[:lower:]'
-| this example has mixed case!
There are several options of interest:
-c
Work on the complement of the listed characters, i.e., operate on the characters not in the given set.
-d
Delete characters in the first set from the output.
-s
Squeeze repeated output characters in the given set down to just one character.
We will be using all three options in a moment.
The other command we'll look at is comm. The comm command takes two sorted input files as input data, and prints out the files' lines in three columns. The output columns are the data lines unique to the first file, the data lines unique to the second file, and the data lines that are common to both. The -1, -2, and -3 command line options omit the respective columns. (This is non-intuitive and takes a little getting used to.) For example:
$ cat f1
-| 11111
-| 22222
-| 33333
-| 44444
$ cat f2
-| 00000
-| 22222
-| 33333
-| 55555
$ comm f1 f2
-|         00000
-| 11111
-|                 22222
-|                 33333
-| 44444
-|         55555
The file name - tells comm to read standard input instead of a regular file.
Now we're ready to build a fancy pipeline. The first application is a word frequency counter. This helps an author determine if he or she is over-using certain words.
The first step is to change the case of all the letters in our input file to one case. “The” and “the” are the same word when doing counting.
$ tr '[:upper:]' '[:lower:]' < whats.gnu | ...
The next step is to get rid of punctuation. Quoted words and unquoted words should be treated identically; it's easiest to just get the punctuation out of the way.
$ tr '[:upper:]' '[:lower:]' < whats.gnu | tr -cd '[:alnum:]_ \n' | ...
The second tr command operates on the complement of the listed characters, which are all the letters, the digits, the underscore, and the blank. The \n represents the newline character; it has to be left alone. (The ASCII tab character should also be included for good measure in a production script.)
At this point, we have data consisting of words separated by blank space. The words only contain alphanumeric characters (and the underscore). The next step is to break the data apart so that we have one word per line. This makes the counting operation much easier, as we will see shortly.
$ tr '[:upper:]' '[:lower:]' < whats.gnu | tr -cd '[:alnum:]_ \n' |
> tr -s ' ' '\n' | ...
This command turns blanks into newlines. The -s option squeezes multiple newline characters in the output into just one. This helps us avoid blank lines. (The > is the shell's “secondary prompt.” This is what the shell prints when it notices you haven't finished typing in all of a command.)
We now have data consisting of one word per line, no punctuation, all one case. We're ready to count each word:
$ tr '[:upper:]' '[:lower:]' < whats.gnu | tr -cd '[:alnum:]_ \n' |
> tr -s ' ' '\n' | sort | uniq -c | ...
At this point, the data might look something like this:
     60 a
      2 able
      6 about
      1 above
      2 accomplish
      1 acquire
      1 actually
      2 additional
The output is sorted by word, not by count! What we want is the most frequently used words first. Fortunately, this is easy to accomplish, with the help of two more sort options:
-n
Do a numeric sort instead of a textual one.
-r
Reverse the order of the sort, producing descending instead of ascending output.
The final pipeline looks like this:
$ tr '[:upper:]' '[:lower:]' < whats.gnu | tr -cd '[:alnum:]_ \n' |
> tr -s ' ' '\n' | sort | uniq -c | sort -n -r
-| 156 the
-| 60 a
-| 58 to
-| 51 of
-| 51 and
...
Whew! That's a lot to digest. Yet, the same principles apply. With six commands, on two lines (really one long one split for convenience), we've created a program that does something interesting and useful, in much less time than we could have written a C program to do the same thing.
A minor modification to the above pipeline can give us a simple spelling checker! To determine if you've spelled a word correctly, all you have to do is look it up in a dictionary. If it is not there, then chances are that your spelling is incorrect. So, we need a dictionary. The conventional location for a dictionary is /usr/dict/words. On my GNU/Linux system, this is a sorted, 45,402-word dictionary.
Now, how to compare our file with the dictionary? As before, we generate a sorted list of words, one per line:
$ tr '[:upper:]' '[:lower:]' < whats.gnu | tr -cd '[:alnum:]_ \n' |
> tr -s ' ' '\n' | sort -u | ...
Now, all we need is a list of words that are not in the dictionary. Here is where the comm command comes in.
$ tr '[:upper:]' '[:lower:]' < whats.gnu | tr -cd '[:alnum:]_ \n' |
> tr -s ' ' '\n' | sort -u |
> comm -23 - /usr/dict/words
The -2 and -3 options eliminate lines that are only in the dictionary (the second file), and lines that are in both files. Lines only in the first file (standard input, our stream of words), are words that are not in the dictionary. These are likely candidates for spelling errors. This pipeline was the first cut at a production spelling checker on Unix.
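To round this off, here is a hedged sketch of bundling the pipeline into a script in the same spirit as listusers (the script name spellcheck and the dictionary path are assumptions; adjust them for your system):
# cat > /usr/local/bin/spellcheck
#!/bin/sh
# crude spelling checker: print words in $1 that are not in the dictionary
tr '[:upper:]' '[:lower:]' < "$1" | tr -cd '[:alnum:]_ \n' |
  tr -s ' ' '\n' | sort -u |
  comm -23 - /usr/dict/words
^D
# chmod +x /usr/local/bin/spellcheck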
There are some other tools that deserve brief mention.
The software tools philosophy also espoused the following bit of advice: “Let someone else do the hard part.” This means, take something that gives you most of what you need, and then massage it the rest of the way until it's in the form that you want.
To summarize:
As of this writing, all the programs we've discussed are available via anonymous ftp from: ftp://gnudist.gnu.org/textutils/textutils-1.22.tar.gz. (There may be more recent versions available now.)
None of what I have presented in this column is new. The Software Tools philosophy was first introduced in the book Software Tools, by Brian Kernighan and P.J. Plauger (Addison-Wesley, ISBN 0-201-03669-X). This book showed how to write and use software tools. It was written in 1976, using a preprocessor for FORTRAN named ratfor (RATional FORtran). At the time, C was not as ubiquitous as it is now; FORTRAN was. The last chapter presented a ratfor to FORTRAN processor, written in ratfor. ratfor looks an awful lot like C; if you know C, you won't have any problem following the code.
In 1981, the book was updated and made available as Software Tools in Pascal (Addison-Wesley, ISBN 0-201-10342-7). Both books are still in print and are well worth reading if you're a programmer. They certainly made a major change in how I view programming.
The programs in both books are available from Brian Kernighan's home page. For a number of years, there was an active Software Tools Users Group, whose members had ported the original ratfor programs to essentially every computer system with a FORTRAN compiler. The popularity of the group waned in the middle 1980s as Unix began to spread beyond universities.
With the current proliferation of GNU code and other clones of Unix programs, these programs now receive little attention; modern C versions are much more efficient and do more than these programs do. Nevertheless, as exposition of good programming style, and evangelism for a still-valuable philosophy, these books are unparalleled, and I recommend them highly.
Acknowledgment: I would like to express my gratitude to Brian Kernighan of Bell Labs, the original Software Toolsmith, for reviewing this column.
Copyright (C) 2000 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other written document “free” in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”.
A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (For example, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License.
A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, whose contents can be viewed and edited directly and straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup has been designed to thwart or discourage subsequent modification by readers is not Transparent. A copy that is not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML designed for human modification. Opaque formats include PostScript, PDF, proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies of the Document numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a publicly-accessible computer-network location containing a complete Transparent copy of the Document, free of added material, which the general network-using public has access to download anonymously at no charge using public-standard network protocols. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct
from that of the Document, and from those of previous versions
(which should, if there were any, be listed in the History section
of the Document). You may use the same title as a previous version
if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities
responsible for authorship of the modifications in the Modified
Version, together with at least five of the principal authors of the
Document (all of its principal authors, if it has less than five).
C. State on the Title page the name of the publisher of the
Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications
adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice
giving the public permission to use the Modified Version under the
terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections
and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section entitled “History”, and its title, and add to
it an item stating at least the title, year, new authors, and
publisher of the Modified Version as given on the Title Page. If
there is no section entitled “History” in the Document, create one
stating the title, year, authors, and publisher of the Document as
given on its Title Page, then add an item describing the Modified
Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for
public access to a Transparent copy of the Document, and likewise
the network locations given in the Document for previous versions
it was based on. These may be placed in the “History” section.
You may omit a network location for a work that was published at
least four years before the Document itself, or if the original
publisher of the version it refers to gives permission.
K. In any section entitled “Acknowledgements” or “Dedications”,
preserve the section's title, and preserve in the section all the
substance and tone of each of the contributor acknowledgements
and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document,
unaltered in their text and in their titles. Section numbers
or the equivalent are not considered part of the section titles.
M. Delete any section entitled “Endorsements”. Such a section
may not be included in the Modified Version.
N. Do not retitle any existing section as “Endorsements”
or to conflict in title with any Invariant Section.
If the Modified Version includes new front-matter sections or
appendices that qualify as Secondary Sections and contain no material
copied from the Document, you may at your option designate some or all
of these sections as invariant. To do this, add their titles to the
list of Invariant Sections in the Modified Version's license notice.
These titles must be distinct from any other section titles.
You may add a section entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties–for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections entitled “History” in the various original documents, forming one section entitled “History”; likewise combine any sections entitled “Acknowledgements”, and any sections entitled “Dedications”. You must delete all sections entitled “Endorsements.”
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, does not as a whole count as a Modified Version of the Document, provided no compilation copyright is claimed for the compilation. Such a compilation is called an “aggregate”, and this License does not apply to the other self-contained works thus compiled with the Document, on account of their being thus compiled, if they are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one quarter of the entire aggregate, the Document's Cover Texts may be placed on covers that surround only the Document within the aggregate. Otherwise they must appear on covers around the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License provided that you also include the original English version of this License. In case of a disagreement between the translation and the original English version of this License, the original English version will prevail.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (C) year your name. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with the Invariant Sections being list their titles, with the Front-Cover Texts being list, and with the Back-Cover Texts being list. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have no Invariant Sections, write “with no Invariant Sections” instead of saying which ones are invariant. If you have no Front-Cover Texts, write “no Front-Cover Texts” instead of “Front-Cover Texts being list”; likewise for Back-Cover Texts.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
Index

(The alphabetical index that closes this manual lists each command-line option, environment variable, program, and concept, together with the section in which it is described.)
[1] If you know of one, please write to bug-coreutils@gnu.org.
[2] If you use a non-POSIX locale (e.g., by setting LC_ALL to en_US), then sort may produce output that is sorted differently than you're accustomed to. In that case, set the LC_ALL environment variable to C. Note that setting only LC_COLLATE has two problems. First, it is ineffective if LC_ALL is also set. Second, it has undefined behavior if LC_CTYPE (or LANG, if LC_CTYPE is unset) is set to an incompatible value. For example, you get undefined behavior if LC_CTYPE is ja_JP.PCK but LC_COLLATE is en_US.UTF-8.
[3] If you use a non-POSIX locale (e.g., by setting LC_ALL to en_US), then ls may produce output that is sorted differently than you're accustomed to. In that case, set the LC_ALL environment variable to C.
[4] Red Hat Linux 6.1, for the November 2000 revision of this article.
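As a concrete illustration of the collation differences described in notes [2] and [3], the commands below compare sort output under a national locale and under the C locale. This is a minimal sketch: the locale name en_US.UTF-8 is only an example and must be available on your system.

     $ printf 'a\nB\n' | LC_ALL=en_US.UTF-8 sort
     a
     B
     $ printf 'a\nB\n' | LC_ALL=C sort
     B
     a

In the C locale sort compares raw byte values, so the uppercase 'B' (octal 102) precedes the lowercase 'a' (octal 141); most national locales use dictionary order instead, which considers case only as a tie-breaker.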