Windows server, mirroring a directory

weilke

I have two Windows servers, one running 2008 SBS and the other 2003. I have a SQL database on one of them and want to keep a slave copy of that database on the other (for, let's say, the unlikely case that a 100-year hurricane wipes out the office with the server).

I have a second instance of the DB-based application on the second server. At this time, if the main box gets wiped out, I would have to restore the DB from a backup and move the license for the application to the second box (a couple of phone calls/emails). Rather than doing that, I would like to keep a current copy of the DB on the failover server at a separate location (same domain, same network, hooked together via IPsec tunnels). It would save me the restore-from-backup step and allow staff to keep working via remote desktops etc.

Two ways I imagine I can do this:

- somehow make SQL Server write every change to a replica of the database

- use DFS to mirror the folder holding the SQL database files


Anyone have some good pointers on choosing one vs. the other, and on how to go about it?

(and no, I can't move all of this to an Apple or Linux server)
 
There are a few ways to do it.

Simplest in your stated setup is probably to set up Log Shipping.

http://technet.microsoft.com/en-us/library/ms190202.aspx
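
Under the hood, log shipping is really just a scheduled backup/copy/restore cycle of the transaction log. A rough manual sketch of one cycle in T-SQL, with a made-up database name and file paths:

-- One-time setup: seed the standby from a full backup restored WITH NORECOVERY.

-- On the primary (database must be in the FULL recovery model):
BACKUP LOG MyAppDB
    TO DISK = N'D:\LogShip\MyAppDB_001.trn';

-- Copy the .trn file to the standby server (the log shipping copy job automates this).

-- On the standby: apply the log, leaving the database ready for the next one.
RESTORE LOG MyAppDB
    FROM DISK = N'E:\LogShip\MyAppDB_001.trn'
    WITH NORECOVERY;   -- or WITH STANDBY = '<undo file>' for read-only access between restores

-- At failover time, bring the standby copy online:
RESTORE DATABASE MyAppDB WITH RECOVERY;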

That sounds like a pretty straightforward way of doing this.


I don't think DFS will help, because you'd need to quiesce the filesystem prior to the copy. You need something that works at the database level when replicating an RDBMS in real time.

That would have been too easy; I ran across that caveat in the DFS documentation.... :(
 
Granted, you're getting advice from a Unix guy, so you may want to wait until the Microsofties chime in. Ha. :)
 
Log shipping is probably the best way to do this. It allows the changes on the primary database to be replicated to the standby database in an asynchronous fashion, which means that your primary database isn't waiting for the change to replicate on the standby system before getting on with the next task.

You do have to check it to make sure that the logs are getting applied correctly and that there aren't any errors. One of my clients uses a similar technology (Oracle Data Guard), and occasionally a network error will result in one of the log files getting corrupted or lost, and then it's a manual process to copy the missing log file over and apply it, or even re-instantiate the database. You still want database backups above and beyond the log replication.
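
For SQL Server, one cheap sanity check is to ask msdb on the standby when a log was last restored for the shipped database; something along these lines (the database name is just a placeholder):

-- Run on the standby: most recent restores recorded for the shipped database
SELECT TOP 10
       destination_database_name,
       restore_date,
       restore_type        -- 'L' = transaction log restore
FROM   msdb.dbo.restorehistory
WHERE  destination_database_name = N'MyAppDB'
ORDER BY restore_date DESC;

If the newest restore_date is older than your shipping interval allows, something in the backup/copy/restore chain has stalled.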

Other options: do your backups across the WAN to the remote site so you have them available there. This may be more "robust" than log shipping, but you still have to recover from the backup, so your time to resume operations is longer. You can also look at storage-based solutions (SRDF or something similar) where your filesystems are mirrored at the block level. That requires reliable WAN links and bandwidth to match your rate of change on the primary storage, and if done synchronously it can hurt performance, since each write has to be completed on both storage systems before the primary storage tells the OS "I'm done, what's next?".
 
I avoided recommending block-level stuff since it sounded like he'd be using a VPN tunnel over the public Internet.

With log shipping, if it has problems, you can copy the logs onto a darn USB disk and drive them between sites if you have to. :)

The only downside to log shipping that I know of, with my limited MS SQL knowledge, is that you have to come up with a way to validate that the logs are really copying correctly and are usable at site 2.
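
If you want a quick scripted check at site 2, SQL Server can at least verify that a copied log backup file is complete and readable without restoring it (the path below is made up):

-- Run on the standby against a copied log backup file
RESTORE VERIFYONLY
    FROM DISK = N'E:\LogShip\MyAppDB_001.trn';   -- confirms the file is complete and readable; it does not replay the log

-- Or look at what the file actually contains (database name, backup time, LSN range):
RESTORE HEADERONLY
    FROM DISK = N'E:\LogShip\MyAppDB_001.trn';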

I think their other HA options cost $$$$ and may not play as well with his 2003 server, and upgrading that would also cost $$$$. :)
 
Thanks. That sounds like what I need; if the connection goes down, it should not affect the production DB.

Doing backups via WAN doesn't really work due to bandwidth limitations at the main (rural) site.
 
Keep in mind that if you do any sort of batch processing that touches a lot of records in the database, the log replication for those events can take as much bandwidth as a backup! Take a long look at the number and size of the logs you generate in a day.
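
If log backups are already running, msdb records the size of each one, so you can estimate the daily volume you'd be pushing over the WAN. Roughly (database name is a placeholder):

-- Approximate transaction log backup volume per day, in MB
SELECT CONVERT(char(10), backup_finish_date, 120) AS backup_day,
       COUNT(*)                                   AS log_backups,
       SUM(backup_size) / 1048576.0               AS total_mb
FROM   msdb.dbo.backupset
WHERE  database_name = N'MyAppDB'
  AND  type = 'L'          -- 'L' = transaction log backup
GROUP BY CONVERT(char(10), backup_finish_date, 120)
ORDER BY backup_day;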
 