
Chapter 19: Playing Administrator

Schedule

With all this setup, wouldn’t it be nice to set up a job to run this backup on a regular basis? Well, the Script button up at the top of the dialog box provides a means to do just that. Click it, and you’ll get a list of options. Choose “Script Action to Job…” and it will bring up the Job Schedule dialog box we saw earlier in the chapter, but with the job steps already completed. You can then define a regular schedule for the backup you just defined by selecting the Schedules tab and filling it out as you desire.
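If you’d rather skip the dialog box entirely, the same kind of job can be sketched directly with the SQL Server Agent stored procedures in msdb. This is only a minimal illustration — the job name, database name, and backup path here are placeholders, not anything the dialog box would generate for you:

```sql
-- A minimal sketch of a nightly backup job built with the msdb
-- SQL Server Agent procedures. The job name, database name, and
-- file path are placeholders -- substitute your own.
USE msdb;

EXEC sp_add_job
    @job_name = N'Nightly Full Backup';

EXEC sp_add_jobstep
    @job_name  = N'Nightly Full Backup',
    @step_name = N'Back up the database',
    @subsystem = N'TSQL',
    @command   = N'BACKUP DATABASE AdventureWorks2008
                   TO DISK = N''C:\Backups\AdventureWorks2008.bak''';

EXEC sp_add_jobschedule
    @job_name          = N'Nightly Full Backup',
    @name              = N'Every night at 11 PM',
    @freq_type         = 4,       -- daily
    @freq_interval     = 1,       -- every 1 day
    @active_start_time = 230000;  -- 11:00:00 PM

-- Tie the job to this server so the Agent will actually run it
EXEC sp_add_jobserver
    @job_name = N'Nightly Full Backup';
```

Scripting it this way produces essentially what “Script Action to Job…” does for you, but keeps the whole definition in source control.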

Recovery Models


Note that there are several more options for backups that are addressable only by issuing actual T-SQL commands or by using the .NET programmability object model, SMO. These are fairly advanced uses, and further coverage of them can be found in Professional SQL Server 2008 Programming.

Well, I spent most of the last section promising that we would discuss them, so it’s time to ask: What is a recovery model?

Well, back in our chapter on transactions (see Chapter 14), we talked about the transaction log. In addition to keeping track of transactions to deal with transaction rollback and atomicity of data, transaction logs are also critical to being able to recover data right up to the point of system failure.

Imagine for a moment that you’re running a bank. Let’s say you’ve been taking deposits and withdrawals for the last six hours — the time since your last full backup was done. Now, if your system went down, I’m guessing you’re not going to like the idea of going to last night’s backup and losing track of all the money that went out the door or came in during the interim. See where I’m going here? You really need every moment’s worth of data.

Keeping the transaction log around gives us the ability to “roll forward” any transactions that happened since the last full or differential backup was done. Assuming both the data backup and the transaction logs are available, you should be able to recover right up to the point of failure.
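To make the roll-forward idea concrete, here’s a sketch of the command sequence involved. The database name and file paths are placeholders, and in real life the backups would of course be spread out over time:

```sql
-- Sketch of point-in-time recovery using the transaction log.
-- Database name and paths are placeholders.

-- Taken earlier: the last full backup
BACKUP DATABASE AdventureWorks2008
    TO DISK = N'C:\Backups\AW_Full.bak';

-- Taken periodically since then: transaction log backups
BACKUP LOG AdventureWorks2008
    TO DISK = N'C:\Backups\AW_Log1.trn';

-- After a failure: capture the tail of the log if you can...
BACKUP LOG AdventureWorks2008
    TO DISK = N'C:\Backups\AW_Tail.trn' WITH NO_TRUNCATE;

-- ...then restore the full backup without recovering yet,
RESTORE DATABASE AdventureWorks2008
    FROM DISK = N'C:\Backups\AW_Full.bak' WITH NORECOVERY;

-- roll forward each log backup in sequence,
RESTORE LOG AdventureWorks2008
    FROM DISK = N'C:\Backups\AW_Log1.trn' WITH NORECOVERY;

-- and finish with the tail, bringing the database online
RESTORE LOG AdventureWorks2008
    FROM DISK = N'C:\Backups\AW_Tail.trn' WITH RECOVERY;
```

The key point is that only the final restore specifies WITH RECOVERY — everything before it leaves the database in a restoring state so further logs can still be applied.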

The recovery model determines for how long and what type of log records are kept; there are three options:

❑ Full: This is what it says; everything is logged. Under this model, you should have no data loss in the event of system failure, assuming you have a backup of the data available and have all transaction logs since that backup. If you are missing a log or have one that is damaged, then you’ll be able to recover all data only up through the last intact log you have available. Keep in mind, however, that, as “keeping everything” suggests, this can take up a fair amount of space in a system that receives a lot of changes or new data.

❑ Bulk-Logged: This is like “full recovery light.” Under this option, regular transactions are logged just as they are with the full recovery method, but bulk operations are not. The result is that, in the event of system failure, a restored backup will contain any changes to data pages that did not participate in bulk operations (bulk import of data or index creation, for example), but any bulk operations must be redone. The good news on this one is that bulk operations perform much better. This performance comes with risk attached, so your mileage may vary.
