wget -i file
If you specify - as the file name, the URLs will be read from standard input.
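For instance, a list of URLs produced by another command can be piped straight in (urls.txt here is just a hypothetical file holding one URL per line):

cat urls.txt | wget -i -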
wget -r http://www.gnu.org/ -o gnulog
wget --convert-links -r http://www.gnu.org/ -o gnulog
wget -p --convert-links http://www.server.com/dir/page.html
The HTML page will be saved to www.server.com/dir/page.html, and the images, stylesheets, etc., somewhere under www.server.com/, depending on where they were on the remote server.
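For example, if page.html referenced an image at /dir/images/logo.gif and a stylesheet at /css/site.css (hypothetical paths, purely for illustration), the local copy would end up roughly as:

www.server.com/dir/page.html
www.server.com/dir/images/logo.gif
www.server.com/css/site.css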
To save everything into a single local directory (here download/) instead of recreating the host and remote directory structure:

wget -p --convert-links -nH -nd -Pdownload \
     http://www.server.com/dir/page.html
wget -S http://www.lycos.com/
wget --save-headers http://www.lycos.com/
more index.html
wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
More verbose than a wildcard attempt like wget http://www.server.com/dir/*.gif (which fails because HTTP retrieval does not support globbing), but the effect is the same. -r -l1 means to retrieve recursively (see Recursive Download), with a maximum depth of 1. --no-parent means that references to the parent directory are ignored (see Directory-Based Limits), and -A.gif means to download only the GIF files. -A "*.gif" would have worked too.
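Spelled out, that equivalent form with a quoted pattern is:

wget -r -l1 --no-parent -A "*.gif" http://www.server.com/dir/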
wget -nc -r http://www.gnu.org/
wget ftp://hniksic:mypassword@unix.server.com/.emacs
Note, however, that this usage is not advisable on multi-user systems
because it reveals your password to anyone who looks at the output of
ps.
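One way to keep the password off the command line (and out of ps output) is to store it in ~/.netrc, which Wget consults for credentials not given explicitly. A minimal sketch, assuming the same host and account as above, with the file protected by chmod 600 ~/.netrc:

machine unix.server.com login hniksic password mypassword

The download then needs no password on the command line:

wget ftp://unix.server.com/.emacs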
wget -O - http://jagor.srce.hr/ http://www.srce.hr/
You can also combine the two options, -O - and -i -, to make pipelines that retrieve documents from remote hotlists:
wget -O - http://cool.list.com/ | wget --force-html -i -