Hackles
February 2, 2026
There was a great Linux/programming webcomic called "Hackles", written by
Drake Emko & Jen Brodzik, that ran from 2001 to 2004. I discovered it a few
years after it had ended, but have read through it a few times over the
years. It used to be hosted at hackles.org, but at some point
that domain went down.
Before it did, I made a local backup of the website. I've done this for a few things on the web that I have (1) enjoyed and (2) had doubts about remaining on the Internet. For a relatively unknown, decades-old webcomic, it seemed likely the creators would at some point let the domain and hosting lapse.
The trusty wget command can be used to download individual files
from the Internet, but with the right flags it can download complete sites.
For instance, the following command could have downloaded the comic while it was still online:
$ wget --mirror --convert-links --page-requisites --adjust-extension http://hackles.org
The flags used:
--mirror: Turns on recursive downloading, so pages linked from the main page are downloaded as well, then pages linked from those pages, and so on.

--convert-links: Changes absolute links to relative ones. For instance, if a page downloaded from the Hackles site links to hackles.org/strips/cartoon242.png, that link will no longer work once the address is unreachable. So wget converts it to a relative link that references the local copy, wherever we downloaded it to.

--page-requisites: Makes wget download the files needed to display each HTML page properly, such as referenced images, style sheets, and scripts.

--adjust-extension: Tells wget to rename downloaded files that don't already end in .html with a .html extension (also fixing internal links accordingly). This is necessary for mirroring sites that use certain kinds of dynamic pages. The Hackles website, for instance, uses Perl CGI scripts, so a comic link might be hackles/cgi-bin/archives.pl@request=187. We don't really want to set up a CGI server (which is quite old-school technology, by the way), so wget just renames the saved page to hackles/cgi-bin/archives.pl@request=187.html, letting us view the end result from the server directly.
Together, they tell wget to make a complete, usable mirror of the original site. You can download entire websites for local viewing without needing to connect to the Internet, or keep them as backups.
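After a mirror finishes, it's worth confirming that --convert-links really did rewrite everything. Here's a small hypothetical sanity check I might run afterwards (this is my own sketch, not part of wget): it greps the downloaded tree for any leftover absolute links back to the original host. It assumes the mirror landed in a directory named after the host, which is wget's default behavior.

```shell
# Hypothetical post-mirror check: after --convert-links runs, no saved page
# should still contain an absolute link back to the original host.
check_links_converted() {
    # $1: directory the mirror was saved to (wget defaults to the host name,
    # e.g. ./hackles.org)
    if grep -rq "http://hackles.org" "$1"; then
        echo "unconverted absolute links remain"
        return 1
    fi
    echo "all links converted"
    return 0
}

# Usage sketch:
#   check_links_converted hackles.org
```

If anything turns up, re-running wget with the same flags usually fixes it, since --convert-links only rewrites links whose targets were actually downloaded.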
When I saw the domain had gone down, I threw the comic up on my website for posterity. It's pretty cute and deserves to still be on the Internet. It can be accessed here: Hackles.
Unfortunately, almost all of the external links out from the site are dead. For something that is such an important part of modern life, it's surprising how volatile the Internet is. Most sites that are more than a decade old are simply gone. Things like the Internet Archive are a godsend for finding old pieces of the web, but it's still difficult.
Reading through this comic again I was struck by the difference in time scales across media. Hackles and the things linked on the site are absolutely ancient in terms of the web, yet the comic references the Star Wars prequels and the Lord of the Rings movies, which I consider to be modern (younger readers may disagree). It also reminded me of how great the Internet used to be when it was more decentralized and independent. There are many webcomics I enjoy today that are just posted to Reddit or other social media and have no independent site. It's just another way that so much of our culture is controlled by a handful of companies :(