Beginning SQL
Chapter 11
From Sue’s perspective, however, the data appeared inconsistent. Two reads of the same record a few moments apart returned different results. Even though this did not cause an issue in this case, it might very well have. Suppose that Mary had been running reports that showed sales by office and then sales by salesperson. The two reports would not return usable data for John’s office, because his office would show sales figures without the sale he just made, whereas the Salesperson report would show that sale. If Sue’s manager were to see such a report, his faith in the database would no doubt be diminished, and with good reason. Furthermore, explaining the differences would be extremely difficult.
The Phantom Insert
The sales manager for John’s office is running a transaction that scans the Order table and generates a long series of reports showing sales by region, sales by salesperson, and so forth. The report by region is finished, but several more reports are still running. While this transaction is running, John receives an order for $20,000, which he finishes entering just as the queries begin running for sales by salesperson. Because the transaction is allowed to see the new order, the sales-by-salesperson figures are $20,000 higher than the sales-by-region report would indicate is possible.
As in the previous example, the problem is that the data is inconsistent between views of the data. The database itself is in a consistent state at any given instant, but the data in the database is changing over the time that the reports are being generated. Reads of two supposedly identical sets of records return a different number of rows. In the previous example, an update to a record between reads caused the problem. In this case, the insertion of a new record between reads caused the problem. Both of these problems cause the same symptom: data that appears to be (and in fact is) different between points in time.
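The phantom insert scenario can be sketched as two concurrent sessions. The Orders table and its columns here are illustrative, not the book's actual example schema; the point is that the same aggregate query, run twice inside one transaction at an isolation level below SERIALIZABLE, can see a row that did not exist the first time.

```sql
-- Session 1: the reporting transaction, at an isolation level
-- that does not block new rows from appearing between reads.
BEGIN TRANSACTION;
SELECT SUM(Amount) FROM Orders;   -- read for the sales-by-region report

-- Session 2: John enters the new order while the reports are running.
INSERT INTO Orders (SalespersonID, Amount) VALUES (1001, 20000.00);
COMMIT;

-- Session 1, continued: the sales-by-salesperson query sees the
-- phantom row, so its total is $20,000 higher than the first read.
SELECT SUM(Amount) FROM Orders;
COMMIT;
```

Each SELECT is internally consistent; the inconsistency only shows up when the two results are compared.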
Now that you understand isolation levels and the most common problems associated with them, the following table gives you a snapshot of how each isolation level helps you deal with each specific problem.
Isolation Level     Lost Update   Uncommitted Data   Inconsistent Data   Phantom Data
SERIALIZABLE        Prevented     Prevented          Prevented           Prevented
REPEATABLE READ     Prevented     Prevented          Prevented           Possible
READ COMMITTED      Prevented     Prevented          Possible            Possible
READ UNCOMMITTED    Prevented     Possible           Possible            Possible
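The ANSI SQL syntax for requesting one of these levels is SET TRANSACTION ISOLATION LEVEL; the exact placement and scope of the statement vary somewhat by DBMS, so check your product's documentation. A sketch of how the reporting transaction above could protect itself (the Orders table is again hypothetical):

```sql
-- Ask for the strictest level before starting the reporting transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
SELECT SUM(Amount) FROM Orders;   -- first report's read
SELECT SUM(Amount) FROM Orders;   -- guaranteed to match the first read;
COMMIT;                           -- no phantom rows can appear in between
```

The trade-off is concurrency: at SERIALIZABLE, other sessions that want to insert into Orders may be blocked until the reporting transaction commits.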
Revisiting the Example Code
Remember that it is important to use transactions when they are required, and to avoid them, or set the isolation level correctly, when the highest level of isolation is not required. In the majority of common DBMSs, locks enforce transactions, and locks inevitably cause performance issues. Using your admittedly simple example, this section discusses which code you might want to wrap in transactions and which you probably wouldn’t.
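As a rough sketch of that trade-off (the table and column names here are illustrative, not taken from the chapter's example schema): a group of statements that must succeed or fail as a unit belongs in a transaction, while a statement whose effects no one else can see yet generally does not.

```sql
-- Worth a transaction: two updates that must succeed or fail together.
BEGIN TRANSACTION;
UPDATE Accounts SET Balance = Balance - 100.00 WHERE AccountID = 1;
UPDATE Accounts SET Balance = Balance + 100.00 WHERE AccountID = 2;
COMMIT;

-- Probably not worth one: no other session can reference this table
-- until the CREATE TABLE completes, so there is nothing to isolate.
CREATE TABLE RentalHistory (
    RentalID   INT PRIMARY KEY,
    MemberID   INT NOT NULL,
    RentalDate DATE
);
```

If the second UPDATE in the first block failed outside a transaction, the $100 would simply vanish from the first account; inside the transaction, a ROLLBACK restores it.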
The first thing you need to do is create a handful of new tables. Using a transaction at all when creating an entirely new table is probably not necessary. No other users know about the new tables, and since they simply don’t even exist until the CREATE TABLE is executed, there is no chance of lock or data