

very wide distribution of rows. The first UPDATE did more work than all of the others combined. Additionally, if other sessions are accessing this table and modifying the data, they might update the object_name field as well. Suppose that some other session updates the object named Z to be A after we have already processed the As; we would miss that record. Furthermore, this is a very inefficient process compared to UPDATE T SET LAST_DDL_TIME = LAST_DDL_TIME+1. We are probably using an index to read every row in the table, or we are full scanning it n times, both of which are undesirable. There are so many bad things to be said about this approach.

The best approach is the one I advocated at the beginning of Chapter 1: do it simply. If it can be done in SQL, do it in SQL. What can't be done in SQL, do in PL/SQL. Do it using the least amount of code you can. Have sufficient resources allocated. Always think about what happens in the event of an error. So many times, I've seen people code update loops that worked great on the test data but then failed halfway through when applied to the real data. Now they are really stuck, as they have no idea where it stopped processing. It is a lot easier to size undo correctly than it is to write a restartable program. If you have truly large tables that need to be updated, then you should be using partitions (more on that in Chapter 10), allowing you to update each partition individually. You can even use parallel DML to perform the update.
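To illustrate, here is a minimal JDBC sketch of the single-statement approach, assuming a table T with a LAST_DDL_TIME column as above; the parallel degree of 4, the BulkUpdate class name, and the decision to use parallel DML at all are illustrative assumptions, not a prescription:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkUpdate {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:oci:@database", "scott", "tiger");
        conn.setAutoCommit(false);          // one transaction, one commit
        try {
            Statement stmt = conn.createStatement();
            try {
                // Allow this session to run DML in parallel (illustrative choice).
                stmt.execute("ALTER SESSION ENABLE PARALLEL DML");
                // One statement updates every row: no procedural loop and no
                // repeated scans of the table.
                int rows = stmt.executeUpdate(
                    "UPDATE /*+ PARALLEL(t,4) */ t " +
                    "SET last_ddl_time = last_ddl_time+1");
                System.out.println(rows + " rows updated");
            } finally {
                stmt.close();
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();                // undo the partial update on error
            throw e;
        } finally {
            conn.close();
        }
    }
}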

Using Autocommit

My final word on bad transaction habits concerns the one that arises from use of the popular programming APIs ODBC and JDBC. These APIs "autocommit" by default. Consider the following statements, which transfer $1,000 from a checking account to a savings account:

update accounts set balance = balance - 1000 where account_id = 123;
update accounts set balance = balance + 1000 where account_id = 456;

If your program is using JDBC when you submit these statements, JDBC will (silently) inject a commit after each UPDATE. Consider the impact of this if the system fails after the first UPDATE and before the second. You've just lost $1,000!

I can sort of understand why ODBC does this. The developers of SQL Server designed ODBC, and this database demands that you use very short transactions due to its concurrency model (writes block reads, reads block writes, and locks are a scarce resource). What I cannot understand is how this got carried over into JDBC, an API that is supposed to be in support of "the enterprise." It is my belief that the very next line of code after opening a connection in JDBC should always be

Connection conn = DriverManager.getConnection
    ("jdbc:oracle:oci:@database", "scott", "tiger");
conn.setAutoCommit(false);

This will return control over the transaction back to you, the developer, which is where it belongs. You can then safely code your account transfer transaction and commit it after both statements have succeeded. Lack of knowledge of your API can be deadly in this case. I've seen more than one developer unaware of this autocommit "feature" get into big trouble with their application when an error occurred.
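To make that concrete, here is a minimal sketch of the transfer coded as a single transaction, assuming the accounts table from the example above; the account IDs, the amount, and the Transfer class name are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Transfer {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:oci:@database", "scott", "tiger");
        conn.setAutoCommit(false);   // take control of the transaction
        try {
            PreparedStatement ps = conn.prepareStatement(
                "UPDATE accounts SET balance = balance + ? WHERE account_id = ?");
            try {
                ps.setInt(1, -1000); ps.setInt(2, 123); ps.executeUpdate(); // debit
                ps.setInt(1,  1000); ps.setInt(2, 456); ps.executeUpdate(); // credit
            } finally {
                ps.close();
            }
            conn.commit();           // both UPDATEs succeed, or neither does
        } catch (SQLException e) {
            conn.rollback();         // a failure undoes the partial transfer
            throw e;
        } finally {
            conn.close();
        }
    }
}

With autocommit disabled, a failure between the two statements no longer loses the $1,000: the explicit rollback (or instance recovery, if the system itself fails) undoes the uncommitted debit.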
