As a developer on DB2 LUW, I have written many migration scripts that perform data manipulation on bulk data. The most common problem with such bulk operations is that the transaction log fills up:
“SQL0964C The transaction log for the database is full”
This problem occurs frequently when the amount of data is huge. For example, consider a performance or production database table holding millions of customer rows, where every row must be updated to a new value.
To resolve this problem, DBAs usually tune database configuration parameters such as LOGBUFSZ, LOGFILSIZ, LOGPRIMARY and LOGSECOND. This is a trial-and-error approach: we set particular values and retry the operation.
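For reference, these parameters can be raised from the DB2 command line. The values below are only placeholders, a sketch of the trial-and-error tuning described above; the right sizes depend entirely on the workload and available disk space:

```sql
-- Placeholder values only: size these for your workload and disk space.
UPDATE DB CFG FOR sample USING LOGBUFSZ 2048;
UPDATE DB CFG FOR sample USING LOGFILSIZ 10240;
UPDATE DB CFG FOR sample USING LOGPRIMARY 20;
UPDATE DB CFG FOR sample USING LOGSECOND 100;
```

Note that a change to LOGPRIMARY or LOGFILSIZ does not take effect until all connections are dropped and the database is reactivated, whereas LOGSECOND can be updated online.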
But along with this, over time I learned ways to perform such bulk operations in manageable chunks using SQL procedural logic (SQL PL).
Note: I use the SAMPLE database in the examples. These examples are for reference and can be adapted to different scenarios. For the SAMPLE database itself we don't need these methods, since its data volume is very small.
1) Update n records at a time using GET DIAGNOSTICS:
We can write a stored procedure where, inside a WHILE loop, we use the UPDATE statement in a slightly different way, as follows:
DECLARE nCount INTEGER DEFAULT 1;

WHILE nCount <> 0
DO
  -- Update at most 10000 unprocessed rows per iteration.
  -- The isUpdated flag must also be flipped here, otherwise the same
  -- 10000 rows match again on the next pass and the loop never ends.
  UPDATE (SELECT * FROM Trnsct_tab
           WHERE isUpdated = 0
           FETCH FIRST 10000 ROWS ONLY)
     SET Transactky = pmtlineitemky,
         isUpdated  = 1;
  GET DIAGNOSTICS nCount = ROW_COUNT;
  COMMIT;
END WHILE;
In the above example, a COMMIT is performed after every 10000 rows, which releases the log space held by that batch. I used 10000 rows at a time; any batch size can be chosen depending on the environment and the log configuration values.
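For completeness, here is a minimal sketch of how the loop above might be wrapped in a stored procedure. The table Trnsct_tab and columns isUpdated and Transactky come from the example; the procedure name BATCH_UPDATE and the input parameter are hypothetical and should be adapted to your schema:

```sql
-- Hypothetical wrapper procedure; adjust table/column names to your schema.
CREATE OR REPLACE PROCEDURE BATCH_UPDATE (IN p_lineitemky INTEGER)
LANGUAGE SQL
BEGIN
  DECLARE nCount INTEGER DEFAULT 1;

  WHILE nCount <> 0
  DO
    -- Update one batch of rows that have not been processed yet
    UPDATE (SELECT * FROM Trnsct_tab
             WHERE isUpdated = 0
             FETCH FIRST 10000 ROWS ONLY)
       SET Transactky = p_lineitemky,
           isUpdated  = 1;

    GET DIAGNOSTICS nCount = ROW_COUNT;
    -- Release the log space held by this batch
    COMMIT;
  END WHILE;
END@
```

When creating this from the CLP, use a non-default statement terminator (here @, via db2 -td@) so the semicolons inside the procedure body are not treated as statement ends.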