I would not consider wget a web crawler in itself. It is a tool for mirroring one or more
websites, so it could serve as the core of a homemade web crawler.
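For reference, this is the kind of invocation people usually mean by "mirroring" with wget (the URL is just a placeholder; substitute the site you actually want to copy):

```shell
# Recursively mirror a site for local/offline viewing:
#   --mirror            recursion + timestamping, suitable for mirroring
#   --convert-links     rewrite links so the local copy is browsable
#   --adjust-extension  save pages with .html extensions where appropriate
#   --page-requisites   also fetch images/CSS/JS the pages need
#   --no-parent         never ascend above the starting directory
wget --mirror --convert-links --adjust-extension \
     --page-requisites --no-parent https://example.com/
```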
Web crawler and web spider mean pretty much the same thing: a program that pulls content from
one or more websites by following links. This could be part of an archiving process, but it
could also be used to check that links are valid or to index the data on the pages.
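A minimal sketch of that link-following idea, in Python. To keep it self-contained it serves a tiny two-page "website" from memory on localhost and crawls it; the page contents, host, and function names are all invented for the example:

```python
import threading
from html.parser import HTMLParser
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

# A tiny in-memory "website" so the example needs no network access.
PAGES = {
    "/": '<html><body><a href="/about.html">About</a></body></html>',
    "/about.html": '<html><body><a href="/">Home</a></body></html>',
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # silence per-request logging
        pass

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url):
    """Breadth-first crawl that stays on the start URL's host."""
    host = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue:
        url = queue.pop(0)
        if url in seen or urlparse(url).netloc != host:
            continue
        seen.add(url)
        with urlopen(url) as resp:
            parser = LinkParser()
            parser.feed(resp.read().decode())
        for link in parser.links:
            queue.append(urljoin(url, link))
    return seen

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"
visited = crawl(base + "/")
server.shutdown()
print(sorted(visited))
```

Swap the fetching step for a "does this link return 200?" check and you have a link checker; feed the page text to an indexer and you have the start of a search engine. The crawl loop itself is the same either way.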
A web archiver uses a web crawler to make a copy of one or more websites, and these pages are
then made available on a web server.