
T-SQL Tuesday: Second Chances

  • 9 July 2013
  • Author: Kathi Kellenberger
  • Number of views: 7852

It’s T-SQL Tuesday once again, and this time the event, #44, is hosted by my Pragmatic Works colleague and team leader, Brad Ball (@sqlballs). The topic is Second Chances.

Many things can go wrong in life, and the same can be said for databases. In a split second, a table can be accidentally dropped or truncated on a production system when the DBA thought he or she was connected to a development box. Drives in RAID volumes go bad, fans quit, a page becomes corrupted, or a natural disaster damages a data center. We’ve all heard the stories about the janitor unplugging a server in order to listen to a radio while cleaning.

SQL Server has many features to help recover from these disasters, big and small, giving you a second chance or as many chances as you need.

The most basic disaster recovery option is database backups. You can create complete copies of the data with full backups. You can back up the deltas since the last full backup with differentials, and capture incremental changes with transaction log backups. For databases consisting of multiple files, you can back up individual files or filegroups. Regardless of your backup strategy, backups must be tested. The only good backup is one that you absolutely know will work because it has been tested.
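As a sketch, a basic strategy using all three backup types might look like this in T-SQL (the database name and file paths here are made up for illustration):

```sql
-- Full backup: a complete copy of the database.
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Full.bak'
WITH INIT, CHECKSUM;

-- Differential: only the extents changed since the last full backup.
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL, CHECKSUM;

-- Transaction log backup: incremental changes
-- (requires the FULL or BULK_LOGGED recovery model).
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_Log.trn'
WITH CHECKSUM;

-- Sanity-check a backup file without restoring it.
-- This is NOT a substitute for an actual test restore!
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\SalesDB_Full.bak';
```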

Once you have the backups, you can restore entire databases, particular database files or even individual pages. You can restore to a point in time via the use of transaction log backups. Don’t wait until disaster strikes, practice restoring databases with many different scenarios so that you will be ready!
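A point-in-time restore chains those backups together: full first, then the latest differential, then log backups up to the moment you want. A minimal sketch, again with hypothetical names, paths, and timestamp:

```sql
-- Restore the full backup, leaving the database able to accept more restores.
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB_Full.bak'
WITH NORECOVERY, REPLACE;

-- Apply the most recent differential.
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB_Diff.bak'
WITH NORECOVERY;

-- Replay the log, stopping just before the disaster, and bring the database online.
RESTORE LOG SalesDB
FROM DISK = N'D:\Backups\SalesDB_Log.trn'
WITH STOPAT = N'2013-07-09 10:45:00', RECOVERY;
```

The key detail is `WITH NORECOVERY` on every step except the last; once you restore `WITH RECOVERY`, no further backups can be applied.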


To recover quickly from hardware issues, you can use SQL Server Clustering. With clustering, you set up redundant hardware so that a SQL Server instance can move to another server in seconds to a few minutes. I used clustering extensively in my previous DBA position. Clustering drastically decreases the amount of downtime required for patching or hardware problems like those pesky fans. 

I see one drawback with clustering: there is no redundant copy of the data. I always said that clustering works great, but even though you have multiple baskets, all your eggs are in one of them. You have several choices for keeping a warm or hot copy of the data on another server, hopefully in another data center.

Log shipping lets you keep a warm copy of the data on another server by copying and restoring log backups. This is kind of low tech, but it does work well. There is a delay, usually in minutes, keeping the secondary copy of the data up to date with the primary database. As long as a transaction log is not being replayed at a given moment, you can query the data on that secondary copy.
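Log shipping is normally configured through the SSMS wizard or the `sp_add_log_shipping_*` system procedures, but the heart of it is just this restore running on a schedule on the secondary (paths below are illustrative):

```sql
-- On the secondary server: restore each shipped log backup WITH STANDBY,
-- which keeps the database read-only queryable between restores.
-- (Use WITH NORECOVERY instead if you don't need to query the secondary.)
RESTORE LOG SalesDB
FROM DISK = N'\\SecondaryServer\LogShip\SalesDB_Log.trn'
WITH STANDBY = N'D:\LogShip\SalesDB_Undo.bak';
```

Note that any users querying the standby copy are disconnected each time the next log backup is replayed.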

Database Mirroring was introduced in SQL Server 2005 and is now deprecated, but it is still used in many shops. Like log shipping, you get a secondary copy of the database on another server. You can set database mirroring up to automatically fail over in case of a problem on the primary copy. You can also set it up for manual failover. There are quite a few limitations to Database Mirroring, and many of these have been overcome with the new option, AlwaysOn Availability Groups.
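After restoring the database on the mirror server `WITH NORECOVERY`, mirroring is established with a few `ALTER DATABASE` statements. A sketch, with placeholder server names and the default endpoint port:

```sql
-- On the mirror server (database already restored WITH NORECOVERY):
ALTER DATABASE SalesDB
SET PARTNER = N'TCP://principal.example.com:5022';

-- On the principal server:
ALTER DATABASE SalesDB
SET PARTNER = N'TCP://mirror.example.com:5022';

-- Optional: add a witness instance to enable automatic failover
-- (high-safety mode with automatic failover).
ALTER DATABASE SalesDB
SET WITNESS = N'TCP://witness.example.com:5022';

-- Manual failover, run on the principal:
ALTER DATABASE SalesDB SET PARTNER FAILOVER;
```

This assumes database mirroring endpoints have already been created on each instance.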

AlwaysOn Availability Groups (AGs) were introduced in SQL Server 2012 and will be enhanced in 2014. The 2012 version allows up to four secondary copies of the databases, each with different properties. You can have automatic failover as well as warm copies for reporting. AGs let you set up groups of databases that must stay together. AGs combine some of the best features of clustering, database mirroring, and log shipping to give you a simpler solution for both high availability and disaster recovery.
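A minimal sketch of creating a two-replica availability group, assuming the Windows failover cluster and mirroring endpoints already exist (node names and URLs are placeholders):

```sql
-- One synchronous replica with automatic failover,
-- one asynchronous replica readable for reporting.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://sqlnode1.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://sqlnode2.example.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```

The secondaries must then be joined with `ALTER AVAILABILITY GROUP SalesAG JOIN` on each secondary node.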

Another way to keep redundant copies of data is by replication. There are several flavors of replication, but in each case, replication is set up at the table or even row level. To keep redundant copies of an entire database, the other choices are better options in my opinion.

One more way that SQL Server gives you a second chance is database snapshots. Database snapshots are files made up of pointers to data pages in the live database plus copies of any pages that have been changed. Database snapshots are often used before a large data load or a risky operation so that the data can be reverted quickly in case of a problem. Database snapshots are not backups, but they can give you a second chance under limited circumstances.
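Creating a snapshot and reverting to it are both one-statement operations. A sketch (the logical file name must match the source database's, and the names and paths here are invented):

```sql
-- Take a snapshot just before the risky data load.
CREATE DATABASE SalesDB_Snap
ON (NAME = SalesDB_Data, FILENAME = N'D:\Snapshots\SalesDB_Data.ss')
AS SNAPSHOT OF SalesDB;

-- If the load goes wrong, revert the entire database to the snapshot.
-- Only one snapshot of the database can exist when you do this.
RESTORE DATABASE SalesDB
FROM DATABASE_SNAPSHOT = N'SalesDB_Snap';
```

Reverting breaks the log backup chain, so take a full or log backup afterward.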

SQL Server gives you many ways to protect your data and keep databases available. Make sure you are familiar with the pros and cons of each so that you can make informed decisions when planning your disaster recovery strategy. Then practice, automate, document, practice, practice, and practice some more.

Categories: SQL Server