When to make business backups

Let'sgoflying! (display name: Dave Taylor)
You have an 8-5, M-F small business with a server.
You make daily automatic backups at 5:15pm onto an external HD.
You have two of these EHDs. One stays at home except:
Normally you swap these every Thursday, taking the fresh backup home and the other is left attached to the server for those daily backups.

Just got to thinking, is there a better day of the week to swap these?

Please send your suggestions for a completely different plan if so desired but we are happy with the overall current plan and I am only looking for an answer to:

"Just got to thinking, is there a better day of the week to swap these?"

Thanks!
 
It matters more what your business model is like. Do things happen more often on one day than another? You need to weigh how your business operates against its vulnerabilities.

Also, it depends what you are backing up. Certain databases have to be put into a quiescent state if you want a valid backup copy. We typically ran the backup at 2 AM (dead time at our site) to another hard disk. During the normal shift, that was transferred into the fire safe, and periodically we'd send one off site.
 
There is a lot of information missing. Are you doing incrementals or differentials with a full checkpoint once a week? Or, are you writing a full backup every day to a new location on the disk (5 full backups)? Are you doing system level backups, including system state or just data?

As far as suggestions on a backup plan, a lot of it depends on your actual business requirements. There are two metrics to look at: Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

RPO refers to how much data you can afford to lose in a recovery. For instance, if you are backing up once a day, you have accepted a potential 24-hour RPO; that means you may lose up to 24 hours of data if you have to recover from backup.

RTO refers to the amount of downtime you can withstand in the case of data loss and recovery. Since you are a single-location business, I won't touch on a DR scenario where the building burns down, because at that point you have bigger problems. But if you lose your server and need to obtain new hardware and then recover to that hardware, in a best-case scenario you are probably looking at three days of downtime minimum.

If those numbers aren't acceptable, you probably need to look at other types of solutions, such as BDR (Backup and Disaster Recovery) appliances. These perform backups locally to disk and often replicate offsite to the "cloud". Many of them allow you to spin up a server either on local standby hardware or in the cloud in the case of a recovery, reducing recovery time to an hour or less. They can also back up more often, such as twice a day or even more.

Another point solution that enables you to recover previous versions of files is a Windows feature called Volume Shadow Copy. By enabling this, you don't have to recover files from backup if you accidentally overwrite or delete one: https://www.servers2016.com/server-2016-volume-shadow-copies-setup/
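For example, the basic commands involved look something like this. D: is a stand-in for your data volume, and the commands are held in variables and echoed rather than executed, so this sketch is safe to run anywhere; on a real server you'd run them from an elevated prompt.

```shell
# Sketch of enabling Volume Shadow Copy on a data volume (Windows Server).
# D: is an assumed drive letter; commands are echoed, not executed.

# Reserve up to 10% of the volume for shadow-copy storage.
CMD_STORAGE='vssadmin add shadowstorage /for=D: /on=D: /maxsize=10%'

# Take a snapshot now; scheduled snapshots are configured in the
# volume's "Shadow Copies" tab.
CMD_CREATE='vssadmin create shadow /for=D:'

# List existing snapshots; users then restore files via the
# "Previous Versions" tab in Explorer.
CMD_LIST='vssadmin list shadows'

echo "$CMD_STORAGE"
echo "$CMD_CREATE"
echo "$CMD_LIST"
```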
 
Don't forget that if you actually LOSE that server, having a hard drive with all your information is virtually useless until you get another server up and running. Which, if you order from Dell for example, could take at least a couple of days.

For a business, it still amazes me that people use external hard drives and tape backups in this day and age. Sending tape backups and hard drives off-site just means you have to get it from off-site if the on-site copy is bad, which adds more time.

When I used to manage disaster recovery architecture for a previous company, we'd use appliances that took snapshots of all of the data throughout the day (10-15 minute increments) and stored it locally on the appliance. And throughout the day that data would just replicate up to a cloud.

The cool thing is that if a server died, we could actually spin up a virtual copy on the backup appliance (which acted as the host) with current data. Downtime would be maybe 10 minutes, often less. And that's in the case of a total system failure.

So while people are re-installing an OS, service packing it, setting up all the shares, or whatever, our business is actually making money instead of losing it.
 
I should point out, this is a small business. Very small. We can function without the computer for at least a week.
 
The salvage yard has become exceptionally data-centric. If the server goes offline, revenue generation dies quickly.

We back up nightly to an external hard drive, and also do an incremental backup offsite. The external drives are swapped out each morning, so we always have something no older than 48 hours offsite, to prevent data loss from theft or a property-loss disaster (e.g., fire).
 
The salvage yard has become exceptionally data-centric. If the server goes offline, revenue generation dies quickly.

We back up nightly to an external hard drive, and also do an incremental backup offsite. The external drives are swapped out each morning, so we always have something no older than 48 hours offsite, to prevent data loss from theft or a property-loss disaster (e.g., fire).

So what happens if the external drive fails? How do you maintain the backup chain without it?
 
Don't forget that if you actually LOSE that server, having a hard drive with all your information is virtually useless until you get another server up and running. Which, if you order from Dell for example, could take at least a couple of days.

For a business, it still amazes me that people use external hard drives and tape backups in this day and age. Sending tape backups and hard drives off-site just means you have to get it from off-site if the on-site copy is bad, which adds more time.

When I used to manage disaster recovery architecture for a previous company, we'd use appliances that took snapshots of all of the data throughout the day (10-15 minute increments) and stored it locally on the appliance. And throughout the day that data would just replicate up to a cloud.

The cool thing is that if a server died, we could actually spin up a virtual copy on the backup appliance (which acted as the host) with current data. Downtime would be maybe 10 minutes, often less. And that's in the case of a total system failure.

So while people are re-installing an OS, service packing it, setting up all the shares, or whatever, our business is actually making money instead of losing it.

Yes, these BDR solutions have become quite affordable, even for small businesses. At the end of the day, it really depends on the business, regardless of its size, how much downtime and how much data loss it can withstand. It is also important to look at your insurance and see what your coverage is for business interruption; that can bridge the gap during a recovery. A little planning goes a long way.
 
So what happens if the external drive fails? How do you maintain the backup chain without it?
Once a month I check each of the external HDDs. And if needed, the nightly cloud backup can substitute. (I didn't mention in my original post that we also back up nightly to a cloud server.)
 
Once a month I check each of the external HDDs. And if needed, the nightly cloud backup can substitute.

Ah, OK, so you back up more than just incrementals to the cloud then? I didn't see that part in your original post, which is why I asked.

How much data do you keep in the cloud? Approximately? Just curious.
 
Ah, OK, so you back up more than just incrementals to the cloud then? I didn't see that part in your original post, which is why I asked.

How much data do you keep in the cloud? Approximately? Just curious.
Don't know the quantity of bytes. But it's enough for the recovery service to provide me a new server on site, ready to go with my data, within 48 hours of being contacted.
 
I like to have images of my critical machines. Recently I've started using Cloudberry to back up our Windows Server domain controller and file server with RAID 10. I use it to image the whole system, once to a hard drive and once to Google Nearline. Supposedly Cloudberry can restore the image not only to another PC, but to a VM on AWS. This is in addition to the normal Windows file backup system. I hope I never need it, but if I do I'll report back.

For less critical PCs I image with Acronis backup, which has saved me loads of trouble several times.
 
Jim mentioned RAID.

Dave, (@Let'sgoflying!) does your server have a RAID setup?

My system does, and we have had two of the onboard HDDs die. The RAID system made those deaths transparent to our system users (they never knew it happened). And I wouldn't have noticed except that the system service provider called to tell me and let me know a replacement HDD was on its way.
 
Dave, (@Let'sgoflying!) does your server have a RAID setup?
Yes, Mike, it does, thanks.
Don't forget a second power supply too!
And: how does your system notify you that the power supply or a disk has failed? (I see your IT guy gets notified about the disks; the only way we could do that was via a monthly-fee reporting system, so I nixed it.)
 
So, you make daily backups but you can go without them for a week?

Why are you backing up daily then?
He might be able to go a week without the computer, but if he loses a week's worth of data, that could be expensive. Even losing a day's worth of data scared me. Even if the data can be re-keyed manually, more frequent backups are always better.
 
He might be able to go a week without the computer, but if he loses a week's worth of data, that could be expensive. Even losing a day's worth of data scared me. Even if the data can be re-keyed manually, more frequent backups are always better.
Yeah... more than a day gets a bit scary, mostly because we lose access to the interchange on what fits what. Today's car designs make it very challenging to remember all of the details on all of the parts. Too much variety, and often too narrow a fit (including one-model-only and one-model-year-only parts).

The inventory changes are also difficult to keep up with manually.

Dave, as far as notification goes, you might talk to an IT guy who could go into the server management app and tell it to email you when there is a problem with the RAID or the power. Mine does not email me, just the remote tech. But the server management fee is rolled into the monthly subscription I have for the yard management software. It's also that company that manages the cloud backup and would provide the replacement server should the entire thing go kabooey.
 
He might be able to go a week without the computer, but if he loses a week's worth of data, that could be expensive. Even losing a day's worth of data scared me. Even if the data can be re-keyed manually, more frequent backups are always better.

Well, sure, but if nothing significant changes you are wasting space storing daily backups. Why back up something that is unchanged or irrelevant five times, just because?

Change to a weekly full backup and do incrementals daily (if you must).
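With Robocopy, for example, that rotation might look something like this. The paths and the date stamp are made up, and the commands are held in variables and echoed rather than executed, so the sketch is harmless to run anywhere.

```shell
# Sketch of a weekly-full / daily-incremental Robocopy rotation.
# Paths are assumptions; commands are echoed, not executed.

# Weekly, e.g. Sunday night: mirror the whole data share (/MIR also
# removes files from the backup that were deleted from the source).
CMD_FULL='robocopy D:\Data E:\Backup\full /MIR /R:2 /W:5'

# Daily: copy only files modified within the last day (/MAXAGE:1)
# into a per-day folder (date stamp shown as a placeholder).
CMD_INC='robocopy D:\Data E:\Backup\inc\YYYY-MM-DD /S /MAXAGE:1'

echo "$CMD_FULL"
echo "$CMD_INC"
```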
 
Well, sure, but if nothing significant changes you are wasting space storing daily backups. Why back up something that is unchanged or irrelevant five times, just because?

Change to a weekly full backup and do incrementals daily (if you must).
The information we capture is not just what inventory was sold, deleted, added, or had its description changed. The other really important items are the supply and demand info. When you contact us, the item you asked for is the demand. We capture year, make, model, part type, then the interchange of that part type. So if it was for a Chevy 5.3L engine, which version? If it was a rear axle for your F150, which ratio? Then we are also capturing whether we had it, whether we sold it, and at what price.

All of this gets aggregated by my buying software. It analyzes the list of vehicles at the auctions using the demand/supply information and narrows my focus to the 10-15% of the list that will actually generate enough cash to be worth purchasing. Then, when I dig into a particular car: which parts on that car bring the money, how much, and how big the demand is for them. All this then adds up to what I can bid on the car. If I win the car for that bid or less, I have a high likelihood of being profitable on it.

But if all that information is inaccessible for more than a day or three, it really puts a pinch on the ability to run the business.
 
For home, data lives on a RAID1 NAS box (don't care about performance) and overnight that NAS is backed up to the cloud. I currently use JungleDisk for the front end and Amazon S3 for the actual data. If I was starting from scratch today, I'd probably choose Cloudberry.

Something similar would probably work for a very small biz situation. Maybe even backup to the cloud more than 1x a day.

For about 250GB, I pay $10 for S3 and $5 for JungleDisk. JD was $1 a month, but they recently ended their grandfathering so I may be switching to Cloudberry.
 
Well, I would like to capture and take home the entire week's work, i.e., a Friday p.m. backup, but if I chose Friday it might not get done; we're just too busy late in the day to ensure that task would be completed. So I guess I will stick with Thursday.
 
If you back up to a cloud, you can access it from home... no need to carry a physical drive around.

Well, I would like to capture and take home the entire week's work, i.e., a Friday p.m. backup, but if I chose Friday it might not get done; we're just too busy late in the day to ensure that task would be completed. So I guess I will stick with Thursday.
 
If you back up to a cloud, you can access it from home... no need to carry a physical drive around.

I don't think his Interweb connection is fast / broad enough for that.

Jim mentioned RAID.

Dave, (@Let'sgoflying!) does your server have a RAID setup?

My system does, and we have had two of the onboard HDDs die. The RAID system made those deaths transparent to our system users (they never knew it happened). And I wouldn't have noticed except that the system service provider called to tell me and let me know a replacement HDD was on its way.

RAID is great for downtime reduction, but not so much for backup. HDD failures are a minor cause of data loss these days compared to software- or malware-related events, against which RAID provides zero protection. Also, RAID controllers have suicidal / homicidal tendencies. They often go down in a blaze of glory, bringing their entire arrays down with them.

Well, sure, but if nothing significant changes you are wasting space storing daily backups. Why back up something that is unchanged or irrelevant five times, just because?

Change to a weekly full backup and do incrementals daily (if you must).

In all my years in IT, I have never heard the phrase "I wish we didn't have so damned many good backups!" uttered. One can never have too many backups.

I like to have images of my critical machines. Recently I've started using Cloudberry to back up our Windows Server domain controller and file server with RAID 10. I use it to image the whole system, once to a hard drive and once to Google Nearline. Supposedly Cloudberry can restore the image not only to another PC, but to a VM on AWS. This is in addition to the normal Windows file backup system. I hope I never need it, but if I do I'll report back.

For less critical PCs I image with Acronis backup, which has saved me loads of trouble several times.

Give Macrium a try. I did, and haven't looked back.

You have an 8-5, M-F small business with a server.
You make daily automatic backups at 5:15pm onto an external HD.
You have two of these EHDs. One stays at home except:
Normally you swap these every Thursday, taking the fresh backup home and the other is left attached to the server for those daily backups.

Just got to thinking, is there a better day of the week to swap these?

Please send your suggestions for a completely different plan if so desired but we are happy with the overall current plan and I am only looking for an answer to:

"Just got to thinking, is there a better day of the week to swap these?"

Thanks!

I recommend you consider an ioSafe disaster-proof EHD or NAS device, and back up to it daily using the software of your choice. Then use something like ShadowSpawn and Robocopy to further copy the ioSafe to the EHDs nightly and swap them weekly like you have been. This way in the unlikely event that the ioSafe is destroyed and the data is irrecoverable by the company, at most you lose a week of data.

At some point, if you get a better Internet connection, then you can copy the ioSafe to someplace like BackBlaze B2 nightly using something like ShadowSpawn, Robocopy, and/or rClone. The cloud backup then becomes your doomsday backup, needed only if the ioSafe is destroyed and the data on its enclosed drives is irrecoverable. That's basically what I do, except I also have a HDD clone in the mix. (Did I mention that you can never have too many backups?)

The reason I like backing up to the ioSafe and then backing up the ioSafe to BackBlaze is because local backups are much more convenient. Cloud backups are great as secondary or "doomsday" destinations, but unless you have gigabit Internet, you'll save many hours by having your primary backup on a local EHD or NAS if you ever actually need to use your backups.
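Concretely, my nightly chain looks roughly like the following. Every path, the Q: drive letter, and the remote name below are just placeholders, and the commands are echoed rather than executed, so the sketch is harmless to run anywhere.

```shell
# Sketch of the nightly local-then-cloud backup chain.
# All paths and names are placeholders; commands are echoed, not executed.

# 1. ShadowSpawn mounts a VSS snapshot of the live data as Q:, then runs
#    Robocopy against the snapshot so open files are captured consistently.
CMD_LOCAL='shadowspawn D:\ServerData Q: robocopy Q:\ F:\ioSafe\Backup /MIR /R:2 /W:5'

# 2. rclone pushes only new or changed files from the ioSafe to the
#    cloud bucket, making the cloud the "doomsday" copy.
CMD_CLOUD='rclone sync F:\ioSafe\Backup b2:example-bucket/server-backup'

echo "$CMD_LOCAL"
echo "$CMD_CLOUD"
```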

Rich
 
In all my years in IT, I have never heard the phrase "I wish we didn't have so damned many good backups!" uttered. One can never have too many backups.
Rich

"Good backups" being the operative phrase here, Rich. Explain to me the point of backing up 1 TB of information twice if nothing has changed; are you doing it just for the sake of doing it?

Now, if the OP is doing a full backup at the beginning of the week, for example, and the next day an incremental that backs up the 1 KB file that changed, awesome. Backing up daily for no reason is just a waste of time. Having an external hard drive that starts the chain and a cloud that houses the incrementals is also a bad idea: lose the EHD, lose the whole chain.

I think it's also important to note that the "external hard drive" or file-based backups most everyone has been talking about only really apply to exactly that: files. And if that's all that matters, why are you even using a server? You shouldn't need a full-blown server for file storage; there are about a dozen better ways to do that these days, with less power, overhead, etc. So I'll bet that you are using the server for more than just file storage. And if that's the case, file-based backups only go so far.

If you are backing up application files with an external hard drive, you are doing it wrong. Image-based backups are better, but what are you going to do with those if the ###$ hits the fan? You still need a host. Even if that host lives in the cloud like Cloudberry or Datto for example.

Also, I'd be wary of any company promising a low-turnaround SLA for disaster recovery without any proof. It's easy for a company to say, "Oh yeah, we'll have your data restored on a server and back to you in 48 hours." OK: show me. For those of you who use those companies, have you ever TESTED that? I used to run annual (or more frequent) fire drills where we would take the data we had and see how fast we could make it available for the client. Some applications are super complex, and even after a restore they can require support from third parties before they're fully usable again. Do they factor that in? What if a fire knocks out three of their clients at the same time; are they going to have you up and running in 48 hours as originally promised? Are they staffed for that?

I know the OP's question was really simple, but it's a bit scary reading what some of you are doing for backups...
 
"Good backups" being the operative phrase here, Rich. Explain to me the point of backing up 1 TB of information twice if nothing has changed; are you doing it just for the sake of doing it?

Now, if the OP is doing a full backup at the beginning of the week, for example, and the next day an incremental that backs up the 1 KB file that changed, awesome. Backing up daily for no reason is just a waste of time. Having an external hard drive that starts the chain and a cloud that houses the incrementals is also a bad idea: lose the EHD, lose the whole chain.

I think it's also important to note that the "external hard drive" or file-based backups most everyone has been talking about only really apply to exactly that: files. And if that's all that matters, why are you even using a server? You shouldn't need a full-blown server for file storage; there are about a dozen better ways to do that these days, with less power, overhead, etc. So I'll bet that you are using the server for more than just file storage. And if that's the case, file-based backups only go so far.

If you are backing up application files with an external hard drive, you are doing it wrong. Image-based backups are better, but what are you going to do with those if the ###$ hits the fan? You still need a host. Even if that host lives in the cloud like Cloudberry or Datto for example.

Also, I'd be wary of any company promising a low-turnaround SLA for disaster recovery without any proof. It's easy for a company to say, "Oh yeah, we'll have your data restored on a server and back to you in 48 hours." OK: show me. For those of you who use those companies, have you ever TESTED that? I used to run annual (or more frequent) fire drills where we would take the data we had and see how fast we could make it available for the client. Some applications are super complex, and even after a restore they can require support from third parties before they're fully usable again. Do they factor that in? What if a fire knocks out three of their clients at the same time; are they going to have you up and running in 48 hours as originally promised? Are they staffed for that?

I know the OP's question was really simple, but it's a bit scary reading what some of you are doing for backups...

I've experienced, either directly or as a consultant to someone else, every kind of data loss event short of a nuclear blast. Yet I personally have never permanently lost any of my own data, nor have any of my clients who were already my clients when their sundry disasters struck permanently lost any of their own data. That's because of my scary, admittedly OCD backup strategies. Planning for impossible things to happen and for fail-safe devices to fail has bailed many asses out of many fires for many years.

Your apparent dislike of EHDs and NASes is mystifying to me, especially in Dave's case. He has a crappy, unstable DSL connection that rules out any form of online backup. So what's left as a destination for Dave's data? Only an EHD or NAS, unless you want to get into old-school tape drives or network backup to another building that's close enough to run cable to or connect to with WiFi.

An EHD or NAS is a perfectly suitable destination for both file backups and image backups. (A dedicated EHD would also be a perfectly good destination for a clone, if desired.) If it's a file backup, then nothing special is needed to retrieve it. If it's an image and you're using the right software, then that software can simply be installed on Any Available PC if all that's desired is to recover data as opposed to the entire HDD image. Yes, you need a host. But that host can be any computer if all you want is the data.

But why limit yourself to one or the other? Just do both. The EHD can store both image and file backups, so why not both?

Because of the lack of a viable online option in Dave's case, some degree of protection of the EHD / NAS against fires and floods would also be desirable. Hence my suggestion for the ioSafe, which is as fire- and flood-resistant as you're going to get for a reasonable price. Both file backups and images can be stored on the ioSafe drive and be immediately or almost-immediately available in the case of a disaster. The odds are overwhelmingly good that the data the ioSafe houses will survive a fire or flood, and the backup will be the most current one.

In a normal situation, I would also use Some Software to back up the backups on the ioSafe drive to online backup. I don't mind command-line tools, so I just use Robocopy and Rclone for that part, with flags to copy only the changes. But practically any GUI-based backup software can do the same thing.

The reason for backing up the ioSafe is to create the doomsday backup in case the ioSafe, against all odds, does **** the bed along with its drives, and the company can't recover the data; or if Dave's computer and the ioSafe are both stolen. By using versioning on the remote host, it also provides additional protection against ransomware should the local machine become infected with ransomware that's smart enough to encrypt the files on the ioSafe.

Dave, however, has a ****ty Internet connection; so using online backup is out. That limits his doomsday backup options to either EHDs or some LAN scheme to another building for fire and flood protection.

I'm not sure what you find scary about that, so please advise. I can think of other options that are as good, but I fail to get the scary part. Thanks.

Rich
 
The best day of the week to do a backup is right before your server crashes. Now, assuming you don't know when that will be, the best day will be right after you have done your biggest piece of work. So if you balance out your books on Friday, do it Friday afternoon. Otherwise it doesn't matter.

BTW, if you CAN predict when servers are going to crash, contact me because that skill is worth millions.
 
I really like our Synology NAS with the built in backup to an Amazon S3 bucket. Copy crap to Synology, Synology copies crap to S3. Restore can either be from S3 back into the Synology, or for single file recoveries, just log into the S3 bucket and download it. Brain dead easy.

If we really badly needed on-site duplication, it'll do that in real-time, and do failover to a second Synology on-site, too. We don't, so we didn't bother installing a second one.

The only complaint with Synology is that they're not really fast with replacement hardware. A dead power supply (out of two) took about a week to get replaced. But the whole chassis, brand new, is so cheap that we could overnight one from Amazon in a dire emergency, including replacement disks, and restore from S3.

They have a bunch of cloud providers they work with for backups if Amazon isn't your cup of tea. We could have also used Glacier, but the restore options are more limited that way. Can't just as easily log in and grab files in real time for the one-offs.

Heck, in a really dire emergency, we could fire up an EC2 instance running Samba and attach it to the S3 backup bucket if we wanted to. Tons of flexibility this way.
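For the one-off file grabs, it's about this simple (the bucket and file names below are made up; the commands are held in variables and echoed, not executed):

```shell
# Sketch of a single-file restore straight from the S3 backup bucket.
# Bucket and key names are placeholders; commands are echoed, not executed.

# List what the nightly job has pushed up.
CMD_LS='aws s3 ls s3://example-backup-bucket/shares/'

# Pull one file straight back down; no full restore needed.
CMD_CP='aws s3 cp s3://example-backup-bucket/shares/inventory.xlsx ./inventory.xlsx'

echo "$CMD_LS"
echo "$CMD_CP"
```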
 