I’m Michael Hampton, principal author of Homeland Stupidity, a U.S. politics blog. Today I want to address the issue of business continuity: have you planned what to do if disaster strikes your professional blogging operation?
Over the past few months I’ve had some all-too-common computer emergencies arise and had to move fast to recover from them. In October, filesystem corruption ate about two weeks’ worth of e-mail, critical files such as all of my RSS feeds, and a few works in progress. I didn’t have up-to-date backups, and without them I’m only getting by as best I can without the missing materials.
And late Monday night my computer decided, during a round of system updates, to uninstall my feed reader, and then refused to reinstall it on Tuesday.
These are just two examples of things that can go wrong in pro blogging, but there are others. Have you planned what to do if your Web host suddenly goes down, as TypePad did recently, goes out of business entirely, or is hit by a natural disaster?
It’s one thing to simply address crises as they arise. About eight months ago, when my blog was still a small site running on my home computer, I needed to reinstall the entire operating system due to severe filesystem corruption. I pulled out an old Pentium 166 which I had lying around and pressed it into service as a temporary Web server to host my site while I was making repairs to my main computer. It was incredibly slow, but it served for the nearly full day it took to get the main computer running again.
And in October, as I said, due to lack of current backups, I lost a significant amount of email, all of my RSS feeds (since reconstructed, mainly from memory) and other files. I immediately began keeping backups after that; I learned my lesson!
Losing my feedreader itself was a somewhat different problem. Fortunately, it stores my list of feeds in an OPML file, rather than in the registry or elsewhere, so it was easy enough to get to them on a temporary basis using Bloglines.
But what about more serious, less likely, but far more disruptive threats? What if a tornado, hurricane or terrorists destroy the data center hosting your Web site? A lot of you are (perhaps without even knowing it!) hosted in a Texas data center which was near the path of Hurricane Rita this year.
The U.S. Department of Homeland Security has an excellent guide for small business disaster planning, much of which you can adapt to your situation. Another good overview appeared in Entrepreneur magazine. I want to address a couple of issues, though, which are peculiar to pro bloggers.
Primarily there is the issue of backups. We live and die on information, and our continued access to and ability to develop information is critical to what we do. As I mentioned above, losing access to any information can deal you a tremendous blow.
Do you have backups of everything on your Web host? My Web host (affiliate link) makes backups on-demand, not only of my files, but also my databases, which I can download whenever I need to, and I’m working on automating it from my end so that I always have a current backup.
And I back up the most critical files on my computer — to storage space on my Web host and to a USB thumb drive. That way I can survive a double whammy — losing both my computer and the USB stick backup, or losing the computer and the Web host. Other, less critical files on my computer I back up to CD-RW on a regular basis now.
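For anyone who wants to set up something similar, here is a minimal sketch of that double backup using rsync, assuming a Unix-like machine and a host that accepts SSH; the paths and hostname below are placeholders, not my actual setup:

#!/bin/sh
# Placeholder paths and hostname -- substitute your own.
CRITICAL=$HOME/documents
USB=/mnt/usbdrive
HOST=you@example-webhost.com

# Copy the critical files to the USB thumb drive...
rsync -a "$CRITICAL/" "$USB/backup/"

# ...and to the storage space on the Web host, over SSH.
rsync -a -e ssh "$CRITICAL/" "$HOST:backup/"

Drop that in cron, or run it by hand, and both copies stay current.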
Less specific to pro blogging, but still critical, is protecting yourself in a disaster. Do you know what natural disasters are likely to strike your area, or the area where your Web host is located? Do you even know where your Web host is? Is the data center well protected and what contingency plans do they have for disaster? What contingency plans do you have, in case you have to live on the road for months at a time?
After Hurricane Katrina hit the U.S., I spent nearly two months blogging from my laptop in coffee houses with wireless Internet access. I had prepared in advance, though, for the possibility that I would have to live off my laptop. (Though not well enough!)
When you adapt these ideas to pro blogging, keep in mind that computers and other hardware are replaceable; you and your ideas are not. So you should not only have good, up-to-date backups of all of your critical information, you should also be aware of how to protect yourself in a disaster. Whatever your personal situation, you should take stock and prepare yourself for the worst. And here’s hoping you never have to face disaster.
Very helpful and awareness-raising. Even though I don’t live in the U.S., I know my host’s servers are there, and they’re not stable anyway. Time to back up, I guess =)
A.H
I guess everyone with a site who has ever encountered similar stuff (db-server errors, webhosting errors, etc.) can feel the pain and the potentially devastating loss of critical data such as (web) content and email – whether it was caused by one’s own faults (i.e. updates with broken files, accidentally deleting stuff, etc.) or by a system failure. Everyone who has encountered something like that should have learned his lesson, and I guess most people have – doing a backup or two.
It’s human nature though to forget about things (again) rather quickly and hence the only way to stay ahead on such a critical task IMO is automation (to a certain degree). I decided that this was and is the only practical solution for myself. After moving (most of) my content to my own dedicated server I wrote a little backup (shell) script which will do the following:
1. back up all home shares, including the document roots and all subdirectories of all hosted websites
2. dump all databases
3. back up all mail (I use IMAP, so this is especially critical, as there are minimal to no local copies)
4. zip all the files generated in 1.–3., named with the syntax (name.Weekday.file-extension), and save them to the second hdd of the server AND a remote backup location (a special backup server in another facility of my host)
5. (not automated) I manually copy one of the files (usually Friday’s copy of any given week) to a special “old” folder.
This script is executed via cron job every night, so I’ve got a current backup for every day of the week, each valid for one week, and with step 5 a file history of more than x weeks…
People not familiar with shell scripts (Linux) can have a look at my original post, where I posted the source of my backup script.
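For reference, a stripped-down sketch of a script along those lines might look like this (this is not the original script; the database credentials, paths and hostnames are all placeholders):

#!/bin/sh
# Nightly backup sketch following steps 1-4 above; every name here is a placeholder.
DAY=$(date +%A)                      # weekday name, per the name.Weekday scheme
DEST=/mnt/second-disk/backups
REMOTE=backup@backup.example.com

# 1. home shares and the document roots of all hosted sites
tar czf "$DEST/home.$DAY.tar.gz" /home
tar czf "$DEST/www.$DAY.tar.gz" /var/www

# 2. dump all databases (MySQL shown; adjust for your DBMS)
mysqldump --all-databases -u backup -pPASSWORD | gzip > "$DEST/databases.$DAY.sql.gz"

# 3. mail spools -- critical with IMAP, since little exists locally
tar czf "$DEST/mail.$DAY.tar.gz" /var/mail

# 4. copy the day's files to the remote backup server as well
scp "$DEST"/*."$DAY".* "$REMOTE:backups/"

Scheduled from cron (say, 0 3 * * * /usr/local/bin/backup.sh), it produces the rotating weekday files described above, with step 5 still done by hand.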
Automating backups is the only way to fly. I’m considering making automated backups of all my files to my iPod (since it has more space!) whenever I plug it in to the computer.
But just as important (and usually quite neglected) is: restoring! If the worst happens, you have to be able to restore all your data quickly. Make sure your backups are valid and contain the data you intended them to, and that you know how to restore them in the event of a disaster. The backup is completely useless if, when the time comes to use it, you can’t.
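Even a quick mechanical check beats finding out too late. For gzipped dumps and tarballs, two commands catch outright corruption (the filenames here are just examples):

gzip -t databases.Friday.sql.gz        # exits non-zero if the archive is corrupt
tar tzf www.Friday.tar.gz > /dev/null  # listing the tarball proves it is readable

That doesn’t prove the right data is inside, but it does prove the file will open when you need it.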
I thought you used bloglines for your feedreader?
I don’t use bloglines if I can avoid it. I prefer to have my feeds close at hand on my computer, and I’m VERY picky about how my feedreader works. With hundreds or even thousands of entries to look through in any given day, I have to have something that lets me work with them fast, and web-based feedreaders, by and large, do not.
Jon, have you read the first paragraph and seen what’s going on here at ProBlogger the past week? Darren didn’t write this.
A couple of thoughts about Bloglines – I use it for its interface with the Elicit blogging software. I make a quick run through Bloglines, saving only the posts I think I’ll want to incorporate into my blog. Elicit has what they call a “docklet” that snags all the saved posts from Bloglines. Then you just highlight the portion you want and drag it from the docklet into the edit window. Actually pretty slick. Elicit has similar docklets for Amazon, Chitika, Yahoo, Google, Technorati, etc. Using it that way, Bloglines serves me well, although I do have FeedBurner still around for those times Bloglines takes a dive.
Doug
To protect the files on your hard drive, it’s a good idea to keep them in a separate partition from the operating system. With Windows we were taught to create a separate partition on the hard drive specifically for the data files. That way it is easier to back up the files, plus if Windows gets corrupted and needs re-installing, the data files don’t get lost because they are in that separate partition.
One good tool to have is a copy of Knoppix Linux on CD-ROM. This is available for free download at:
http://www.knoppix.com/
Knoppix is a distribution of Linux that can be booted from the CD drive (as long as your computer is set up to allow this in the BIOS). If the Windows operating system crashes, you have the option of booting up Knoppix from the CD drive and copying your files to an external device like a USB drive.
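Once Knoppix is up, the rescue itself is just a couple of mounts and a copy. Something like the following should work, though the device names (hda1 for the Windows partition, sda1 for the USB drive) vary from machine to machine:

mount /dev/hda1 /mnt/hda1            # the Windows partition (read from it, avoid writing)
mount /dev/sda1 /mnt/sda1            # the USB drive
mkdir -p /mnt/sda1/rescue
cp -a "/mnt/hda1/Documents and Settings" /mnt/sda1/rescue/
umount /mnt/sda1                     # flush writes before unplugging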
Have fun
I just found a list of sites that were hacked and posted it at:
http://www.robertbenjamin.com/hackedlist/hacked.htm
I DO NOT advise going to any of the sites listed.
I’d better make it clearer: the page I posted the list on is OK, just not the URLs in the list.
I have now rendered the list harmless to viewers. The list was found using the MSN search engine; I was ‘backtracking’ some of my stuff when I came across it. I felt that there are many people and sites that want to correct the problem(s) created by this person, so I posted the list. It now has a 3-digit number inserted into each of the URLs to prevent them from activating. Please do not remove the numbers unless you can deal with the “bugs” created. The list is at http://www.robertbenjamin.com/hackedlist/hacked.htm
I used to work in the tech support department for a software company (now long gone) that made what was, at the time, the premium backup utility. You can’t imagine how many tech support calls I fielded from folks who were backing up religiously but had never tested their backups! The earlier comment about testing your restore process is absolutely right: it’s just as important as running the backup process.
I’m paranoid. :-) I back up source code only after changes; there’s no point in wasting bandwidth backing up something that has not changed. But I back up every database on my server (via a cron job run by “root”) every night. Later that night I have my home server (gotta love Linux) FTP to my production server and download the resulting backup files. These files are copied to a RAID device. It’s only a mirror, but it still has two hard drives instead of just one.
At any given point in time my most recent database backup is on the server. In the event of a data corruption event on the server, I can restore with at most a 23 hour 59 minute “lost data” window. If the server craters, I can upload the most recent source code / restore the most recent database and be up and running in about twenty minutes. I know this, because I’ve tested it.
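A sketch of that kind of arrangement, with every credential, hostname and path a placeholder rather than my real setup:

# on the production server, in root's crontab:
0 2 * * * mysqldump --all-databases -u root -pPASSWORD | gzip > /backups/all.sql.gz

# on the home server a couple of hours later, pulling the file down for the RAID copy
# (the \% is required because cron treats a bare % specially):
0 4 * * * wget -q ftp://backup:PASSWORD@www.example.com/backups/all.sql.gz -O /raid/backups/all.$(date +\%A).sql.gz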
Test your backups. In a few months, test them again. It’s like your smoke detector; it doesn’t do you any good if it doesn’t work.
Great suggestions on testing your backups, but what would be an easy way to test them? I currently back up my WordPress blogs by backing up all the files on the server, and the DB is e-mailed to me each night via the WP Backup plugin…
Is there a simple way for the not-so-database-literate to test that their DBs are good?
Your article just prompted me to back up my blog. Thanks!
And in what I could only call a cruel twist of fate, the unthinkable just did happen to me a few hours ago. My laptop bit the dust. I’m now on my contingency plans where I’m using a web-based feedreader (again), webmail, and essentially working from others’ computers and public terminals.
Great suggestions on testing your backups, but what would be an easy way to test them?
Two choices that I use; there are probably more. First, I have a dedicated server with my host, so I can log in and do whatever I want. I create a new folder (or subdomain, or whatever technique you like) and copy up the source code. Then I create a new database (since I don’t want to overwrite my production one) and alter the WordPress config file to point to this copy of the database. Then I restore the database backup. As long as paths are “relative” (meaning not hard-coded to your full domain), everything should work.
I also maintain a linux server in my home office, and I can restore my source code / database backups to that server and test without affecting my “production” server.
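To make that concrete for the not-so-database-literate, a restore test for MySQL (the database WordPress uses) boils down to a few commands; the database and file names here are placeholders:

mysqladmin -u root -p create wp_test                          # scratch database, separate from production
mysql -u root -p wp_test < wp-backup.sql                      # load the nightly dump into it
mysql -u root -p -e 'SELECT COUNT(*) FROM wp_posts' wp_test   # sanity-check that the data arrived

Point a spare copy of the WordPress config at wp_test and browse the result; if the posts are all there, the backup is good.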