
I have a question concerning the "Uncommitted Read" Isolation Level under DB2 DBMS for z/OS.

In this article

https://www.ibm.com/support/knowledgecenter/en/SSEPEK_11.0.0/perf/src/tpc/db2z_isolationissues.html

in the IBM Knowledge Center, it is stated that when using isolation levels other than Repeatable Read, the following phenomena can occur:

  • Phantom Rows
  • Dirty Read
  • Non-repeatable read

The ANSI SQL-92 standard also defines only these three phenomena as possible and states that no updates can be lost:

http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt

(see page 68, chapter 4.28 "SQL-transactions").

quote:

"The four isolation levels guarantee that each SQL-transaction will be executed completely or not at all, and that no updates will be lost."

So far I have not found a definitive answer on how DB2 (for z/OS) prevents lost updates when SQL statements in programs are coded with the "WITH UR" clause.

Let's assume the classic example of the "Lost Update" phenomenon:

  1. Transaction T1 reads the value "X" from table "R" into memory and increments it by 1.
  2. Transaction T2 reads the value "X" from table "R" into memory and increments it by 2.
  3. Transaction T1 commits its changes.
  4. Transaction T2 commits its changes - the value X+2 is stored in the database, although X+3 would be the correct result if T1 and T2 had been serialized.
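The interleaving above can be reproduced outside the database with a minimal Python simulation (the in-memory `table`, the barrier, and the thread functions are illustrative stand-ins, not DB2 APIs):

```python
import threading

# Hypothetical in-memory stand-in for table "R"; column X starts at 0.
table = {"X": 0}
barrier = threading.Barrier(2)

def transaction(increment):
    # Read the value into local memory without taking a lock (as under WITH UR)...
    local = table["X"]
    # ...wait so that both transactions have read before either writes...
    barrier.wait()
    # ...then write back read value + increment, clobbering the other update.
    table["X"] = local + increment

t1 = threading.Thread(target=transaction, args=(1,))
t2 = threading.Thread(target=transaction, args=(2,))
t1.start(); t2.start()
t1.join(); t2.join()

# Both transactions read X = 0, so one increment is lost: the final value
# is 1 or 2 depending on write order, never the serialized result 3.
print(table["X"])
```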

Let's assume that the transactions do not read the data through a "FOR UPDATE" cursor, which internally leads to the "cursor stability" isolation level in DB2.

As I understand it, to prevent this anomaly, transaction T1 would need to set an exclusive lock on the qualifying row to prevent T2 from reading the same value until T1 has committed.

Nevertheless, according to the resources cited above, the Lost Update anomaly is not possible - but how is this achieved?

  • To my understanding, unless you lock the table/row explicitly, the result will not be correct in the case you have described. This is the "lost update" problem. See: https://books.google.com.eg/books?id=hb4zskzHrWYC&pg=PA106&dq=LOST+UPDATE+DB2&hl=en&sa=X&ved=0ahUKEwjntMbdp8bgAhVMxoUKHROdAnMQ6AEIMDAB#v=onepage&q=LOST%20UPDATE%20DB2&f=false – NoChance Feb 18 '19 at 22:24

1 Answer


From DB2's point of view, the Lost Update phenomenon is the ability to overwrite another transaction's update before that transaction has finished.
This is not possible with DB2, because the updating transaction locks the corresponding row exclusively until the end of the transaction, so no other transaction can overwrite the row. The row lock is released only after the transaction ends.
But you have described a different form of the Lost Update phenomenon. DB2 itself cannot prevent your applications from doing such "lost updates".
If you want to avoid this situation, you have to design your transactions properly. DB2 gives you a number of techniques to do this.
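The difference between the two forms can be sketched with the same Python simulation as before: if the read-modify-write happens as one unit under the row lock (as a searched UPDATE like "UPDATE R SET X = X + :inc" does under DB2's exclusive X-lock), no update is lost. The `threading.Lock` here is only an illustrative stand-in for the X-lock:

```python
import threading

# Hypothetical in-memory stand-in for table "R"; column X starts at 0.
table = {"X": 0}
row_lock = threading.Lock()  # stands in for DB2's exclusive (X) row lock

def transaction(increment):
    # The read and the write happen under the lock as one atomic unit,
    # like a searched UPDATE that increments the column in place.
    with row_lock:
        table["X"] = table["X"] + increment

threads = [threading.Thread(target=transaction, args=(n,)) for n in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(table["X"])  # 3: the serialized result, no update lost
```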

  • I see - so searched updates of the form "UPDATE R SET COL_A = X WHERE ..." lead to an exclusive lock on the qualifying rows, preventing other transactions from reading/changing them, so they cannot rely on outdated values. But reading the values independently from one another before doing the update cannot be prevented (provided no other mechanisms like row change timestamps are used). https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.perf.doc/doc/c0004121.html#c0004121__cs states that update/delete/insert places an X-lock on the affected rows. – Monty Burns Feb 19 '19 at 11:53