This article is based on the “Online Tablespace Migration Using the ADMIN_MOVE_TABLE Procedure” presentation for The DB2Night Show #125: DB2’s GOT TALENT Contestant Search #2, held on Friday, 14th February 2014. I made it through the qualifying round, but frankly speaking, the competition was heavy in Round 1 itself. In total, 7 ideas were presented, and all of the contestants did their job extremely well. Three of us – Michael, Raja and myself – got a chance to present in the finals, and the rest of the contestants got a chance to present again on 28th February 2014. Here is an outline of my presentation in Round 1.
A table with more than 100K records needs to be migrated from an 8K tablespace to a 16K tablespace with minimum downtime and minimum risk of data loss.
I found that there are two possible solutions to this problem:
1. EXPORT AND LOAD METHOD
In this method, we need to export all the data, drop the table, re-create it in the new tablespace, and then re-create the constraints, indexes, views and MQTs referring to that table. For DB2 versions prior to 9.7, this is probably the only solution we have.
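As a rough sketch, the export-and-load approach looks like the following db2 CLP commands. All object and file names here (MYSCHEMA.MYTABLE, mytable.ixf, the DDL script) are hypothetical:

```
-- Hypothetical names throughout; run from the db2 command line processor
db2 "EXPORT TO mytable.ixf OF IXF MESSAGES export.msg SELECT * FROM MYSCHEMA.MYTABLE"
db2 "DROP TABLE MYSCHEMA.MYTABLE"
-- mytable_16k_ddl.sql re-creates the table in the 16K tablespace,
-- plus its indexes, constraints, views and MQTs
db2 -tvf mytable_16k_ddl.sql
db2 "LOAD FROM mytable.ixf OF IXF MESSAGES load.msg INSERT INTO MYSCHEMA.MYTABLE"
db2 "SET INTEGRITY FOR MYSCHEMA.MYTABLE IMMEDIATE CHECKED"
```

Always verify the exported row count (and the export messages file) against the source table before dropping it.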
But here are a few bottlenecks:
a. Migration needs to be OFFLINE.
b. There is always a risk of data loss.
Regarding data loss, I had this experience in our UAT environment. We have constraints on disk space on the database server. A developer executed the migration script using this method, and due to the disk space limit only half of the records got exported – and only those were loaded into the new table – which caused real trouble for me at work. 😦
So to avoid such mishaps, there is another way for all those who are using DB2 version 9.7 and higher.
2. ONLINE MIGRATION USING ADMIN_MOVE_TABLE PROCEDURE
Here are the steps for how this works:
a. Drop MQTs, referential constraints and Views referencing the subject table.
b. Migrate the table to 16K tablespace using SYSPROC.ADMIN_MOVE_TABLE procedure.
c. Re-create the MQTs, referential constraints and Views.
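For step c, it helps to capture the DDL of the table and its dependent objects before dropping anything in step a. One way (a sketch – the database name MYDB and table name MYTABLE are hypothetical) is the standard db2look utility:

```
# Extract the table's DDL (-e) into a script for later replay
db2look -d MYDB -t MYTABLE -e -o mytable_ddl.sql
```

Depending on your version, you may still need to save the view and MQT definitions separately (for example, from the TEXT column of SYSCAT.VIEWS).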
The schema for this routine is SYSPROC:

db2 "CALL SYSPROC.ADMIN_MOVE_TABLE(<schema_name>, <table_name>, <data_tbsp>, <index_tbsp>, <lob_tbsp>, <mdc_col_list>, <hash_key_list>, <range_part_key_list>, <column_definitions>, <options_list>, 'MOVE')"
For online tablespace migration we can skip a few of the input parameters by passing them as NULL.
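For example, a plain move where the MDC, hash-key, range-partitioning, column-definition and options parameters are not needed could look like this (the schema, table and tablespace names are hypothetical):

```
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE(
    'MYSCHEMA', 'MYTABLE',
    'TBSP16K_DATA', 'TBSP16K_IDX', 'TBSP16K_LOB',
    NULL, NULL, NULL, NULL, NULL,
    'MOVE')"
```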
HOW IT WORKS
While migrating the tablespace online, multiple steps are executed internally. Here is a brief description of each of them.
1. INIT: This step verifies whether the operation can be performed and whether the user has the required authorization. A few data objects are also initialized during this step: the triggers, the target table and the staging table are created.
2. COPY: In this step, data is copied from the source table to the target table. While the data is being copied, any insert/update/delete (IUD) changes to the source table are recorded in the staging table.
3. REPLAY: This phase copies the changes recorded in the staging table to the target table.
4. SWAP: During this phase, an EXCLUSIVE lock is acquired on both the source and target tables, and the target table is renamed to the source table's name. Both tables are OFFLINE for a very short time period during this phase.
5. CLEANUP: This phase cleans up all the triggers and the staging tables that were created during the migration. If we specify the KEEP option, the source table is not dropped; instead it is renamed with a different name.
6. VERIFY: This optional phase verifies that the contents of the source and target tables match. One should be careful while using this phase, since it is very costly in terms of resource utilization.
ADVANTAGES:
1. Only a very small OFFLINE window is required.
2. Data is ACCESSIBLE during the migration.
3. No data loss (because there is no EXPORT AND LOAD). – Thank God for this 🙂
4. An excellent solution for performing data maintenance operations during an online maintenance window.
LIMITATIONS:
1. Online migration of tables without unique indexes can be expensive or might result in deadlocks.
2. Tables that have foreign key constraints or MQTs cannot be moved. However, this limitation is removed in DB2 v10.1, and we can now migrate tables with constraints on them. (Thanks to our judge Klaas Brant for pointing out this miss after the presentation.)
3. Tables that have only LONG, LOB and XML data cannot be moved.
FEW TIPS AND TRICKS FOR THE USERS:
1. All the parameter values MUST be specified in UPPER CASE.
2. Instead of the single MOVE operation, we can perform the operations step by step – INIT, COPY, REPLAY, SWAP, VERIFY, CLEANUP – to gain more control.
3. As mentioned earlier, the VERIFY operation is very costly. Use it only when required.
4. To check the current phase:
select * from SYSTOOLS.ADMIN_MOVE_TABLE where KEY='STATUS';
5. The performance of this procedure depends mainly on the throughput of the COPY and REPLAY phases. For all other phases, the throughput is almost constant.
6. Using the COPY_USE_LOAD option gives higher throughput in the COPY phase, since it uses the LOAD operation instead of sequential INSERTs.
7. A table with many indexes can be troublesome to migrate, simply because there are additional objects that need to be maintained during the migration.
8. Moving fully compressed tables gives better throughput than moving uncompressed or partially compressed tables, because the tables will not be compressed again during the migration to the target tablespace.
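Putting tips 2, 4 and 6 together, a step-by-step move with COPY_USE_LOAD might look like the following sketch (all names are hypothetical, and the costly VERIFY phase is skipped):

```
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','MYTABLE','TBSP16K_DATA','TBSP16K_IDX','TBSP16K_LOB',NULL,NULL,NULL,NULL,'COPY_USE_LOAD','INIT')"
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','MYTABLE','TBSP16K_DATA','TBSP16K_IDX','TBSP16K_LOB',NULL,NULL,NULL,NULL,'COPY_USE_LOAD','COPY')"
-- check progress between phases
db2 "SELECT VALUE FROM SYSTOOLS.ADMIN_MOVE_TABLE WHERE TABSCHEMA='MYSCHEMA' AND TABNAME='MYTABLE' AND KEY='STATUS'"
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','MYTABLE','TBSP16K_DATA','TBSP16K_IDX','TBSP16K_LOB',NULL,NULL,NULL,NULL,'COPY_USE_LOAD','REPLAY')"
-- SWAP is the only phase that takes both tables briefly OFFLINE
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','MYTABLE','TBSP16K_DATA','TBSP16K_IDX','TBSP16K_LOB',NULL,NULL,NULL,NULL,'COPY_USE_LOAD','SWAP')"
db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','MYTABLE','TBSP16K_DATA','TBSP16K_IDX','TBSP16K_LOB',NULL,NULL,NULL,NULL,'COPY_USE_LOAD','CLEANUP')"
```

Running the phases separately lets you schedule the short SWAP outage for a quiet window, and you can repeat REPLAY as needed until the staging table is nearly drained.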
I ended my topic a little ahead of the restricted time of 4 minutes, but overall it was a good experience. I got a few valuable suggestions from the judges, like including pictures and animations to make the presentation more interactive and interesting – and, most importantly, the point mentioned by Klaas regarding the referential constraints.
You can check out my presentation at the following link:
Don’t miss this episode – there are some really good presentations in that particular episode.
To view the complete episode, hit the following link: