Windows server, mirroring a directory

Discussion in 'Technical Corner' started by weilke, Oct 30, 2012.

  1. weilke

    weilke Touchdown! Greaser! PoA Supporter

    Joined:
    Jan 26, 2010
    Messages:
    14,968

    Display name:
    weilke
    I have two Windows servers, one 2008 SBS and the other 2003. I have a SQL database on one of them and want to keep a slave copy of that database on the other (for, let's say, the unlikely case that a 100-year hurricane wipes out the office with the server).

    I have a second instance of the DB-based application on the second server. At this time, if the main box gets wiped out, I would have to restore the DB from a backup and move the license for the application to the second box (a couple of phone calls/emails). Rather than doing that, I would like to keep a current copy of the DB on the failover server at a separate location (same domain, same network, hooked together via IPsec tunnels). It would save me the restore-from-backup step and allow staff to work via remote desktops etc.

    Two ways I imagine I can do this:

    - somehow make SQL write every change to a replica of the database

    - use DFS to create a mirror of a folder with the SQL database


    Anyone have some good pointers on choosing one vs. the other, and how to go about it?

    (and no, I can't move all of this to an apple or Linux server)
     
  2. denverpilot

    denverpilot Tied Down PoA Supporter

    Joined:
    Nov 8, 2009
    Messages:
    55,440
    Location:
    Denver, CO

    Display name:
    DenverPilot
    There are a few ways to do it.

    Simplest in your stated setup is probably to set up Log Shipping.

    http://technet.microsoft.com/en-us/library/ms190202.aspx

    Don't think DFS will help, because you need to quiesce the filesystem prior to the copy. You need something that works at the database level when replicating an RDBMS in real time.
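
    For reference, the core of log shipping can be sketched by hand in plain T-SQL; the wizard on the TechNet page automates exactly this loop. The database name and paths below are made up for illustration:

    ```sql
    -- On the primary: back up the transaction log to a share the standby can read.
    BACKUP LOG MyAppDB
        TO DISK = N'\\standby\logship\MyAppDB_0001.trn';

    -- On the standby: apply the log WITHOUT recovering, so later logs can still be applied.
    RESTORE LOG MyAppDB
        FROM DISK = N'D:\logship\MyAppDB_0001.trn'
        WITH NORECOVERY;
    -- (WITH STANDBY = N'D:\logship\MyAppDB.tuf' instead allows read-only access between restores.)

    -- At failover time only: bring the standby copy fully online.
    RESTORE DATABASE MyAppDB WITH RECOVERY;
    ```

    Repeat the backup/restore pair on a schedule; once you run WITH RECOVERY, the standby can no longer accept further logs.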
     
  3. weilke

    weilke Touchdown! Greaser! PoA Supporter

    Joined:
    Jan 26, 2010
    Messages:
    14,968

    Display name:
    weilke
    That sounds like a pretty straightforward way of doing this.


    That would have been too easy; ran across that in the DFS manual... :(
     
  4. denverpilot

    denverpilot Tied Down PoA Supporter

    Joined:
    Nov 8, 2009
    Messages:
    55,440
    Location:
    Denver, CO

    Display name:
    DenverPilot
    Granted, you're getting advice from a Unix guy, so you may want to wait until the Microsofties chime in. Ha. :)
     
  5. TMetzinger

    TMetzinger Final Approach

    Joined:
    Mar 6, 2006
    Messages:
    9,660
    Location:
    Northern Virginia

    Display name:
    Tim
    Log shipping is probably the best way to do this. It allows changes on the primary database to be replicated to the standby database asynchronously, which means the primary isn't waiting for a change to replicate on the standby system before getting on with the next task.

    You do have to check it to make sure that the logs are getting applied correctly and that there aren't any errors. One of my clients uses a similar technology (Oracle Dataguard) and occasionally a network error will result in one of the log files getting corrupted or lost, and then it's a manual process to copy the missing log file over and apply it, or even re-instantiate the database. You still want database backups above and beyond the log replication.
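
    On SQL Server, one simple sanity check along these lines is to query the restore history in msdb on the standby and alarm if it falls behind your shipping interval (the database name here is hypothetical):

    ```sql
    -- When was the last transaction-log restore applied on the standby?
    -- If restore_date lags your shipping schedule, a log file was likely
    -- lost or corrupted in transit and needs to be re-copied and applied.
    SELECT TOP (1)
           destination_database_name,
           restore_date,
           restore_type            -- 'L' = transaction log restore
    FROM msdb.dbo.restorehistory
    WHERE destination_database_name = N'MyAppDB'
      AND restore_type = 'L'
    ORDER BY restore_date DESC;
    ```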

    Other options: do your backups across the WAN to the remote site so you have those available. This may be more "robust" than log shipping, but you still have to recover from the backup, so your time to resume operations is longer.

    You can also look at storage-based solutions (SRDF or something similar) where your filesystems are mirrored at the block level. This requires reliable WAN links and bandwidth to match your rate of change on the primary storage, and if done synchronously it can have performance effects, since each write has to complete on both storage systems before the primary storage tells the OS "I'm done, what's next?".
     
  6. denverpilot

    denverpilot Tied Down PoA Supporter

    Joined:
    Nov 8, 2009
    Messages:
    55,440
    Location:
    Denver, CO

    Display name:
    DenverPilot
    I avoided recommending block-level stuff since it sounded like he'd be using a VPN tunnel over the public Internet.

    With Log Shipping, if it's got problems you can copy logs onto a darn USB disk and drive them between sites if you have to. :)

    The only downside to Log Shipping that I know of, with my limited MSFT-SQL knowledge, is that you have to come up with a way to validate that the logs are really copying correctly and are usable at site 2.

    I think their other HA options cost $$$$ and may not play as well with his 2003 server, which would also cost $$$$ to upgrade. :)
     
  7. weilke

    weilke Touchdown! Greaser! PoA Supporter

    Joined:
    Jan 26, 2010
    Messages:
    14,968

    Display name:
    weilke
    Thanks. That sounds like what I need; if the connection goes down, it should not affect the production DB.

    Doing backups via WAN doesn't really work due to bandwidth limitations at the main (rural) site.
     
  8. TMetzinger

    TMetzinger Final Approach

    Joined:
    Mar 6, 2006
    Messages:
    9,660
    Location:
    Northern Virginia

    Display name:
    Tim
    Keep in mind that if you do any sort of batch processing that touches a lot of records in the database, the log replication for those events can take as much bandwidth as a backup! Take a long look at the number and size of the logs you generate in a day.
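
    You can measure this from the backup history in msdb before committing to a WAN link size (database name hypothetical; the DATEADD/DATEDIFF trick truncates to the day and works on older SQL Server versions that lack the DATE type):

    ```sql
    -- Daily transaction-log backup volume: how many log backups per day,
    -- and how many MB in total, i.e. roughly what log shipping must move.
    SELECT DATEADD(day, DATEDIFF(day, 0, backup_finish_date), 0) AS backup_day,
           COUNT(*)                     AS log_backups,
           SUM(backup_size) / 1048576.0 AS total_mb   -- backup_size is in bytes
    FROM msdb.dbo.backupset
    WHERE database_name = N'MyAppDB'
      AND type = 'L'                    -- 'L' = transaction log backup
    GROUP BY DATEADD(day, DATEDIFF(day, 0, backup_finish_date), 0)
    ORDER BY backup_day;
    ```

    Watch for spikes on batch-processing days; the link has to absorb the worst day, not the average.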