One of our apps updates a table of several hundred million records. The cursor FOR-loop approach for the update was calculated to take 53.7 years to complete! We instituted the insert-into-a-dummy-table approach, with APPEND and NOLOGGING, and were able to complete the "update" in under 30 minutes. This is absolutely a viable approach, and one we have used repeatedly.

1) First create your dummy hold table: create table xyz_hold as select * from xyz where rownum < 1. Alter table xyz_hold nologging.

2) insert /*+ append parallel(xyzhold,12) */ into xyz_hold xyzhold (field1, field2, field3) select /*+ parallel(x,12) */ x.field1, my_new_value_for_field2, x.field3 from xyz x where blah blah blah.

3) When done, either rename the table, or swap the partition if your original table is partitioned and you only updated one partition, as we do. We swap the partition of original data with the 'dummy' table (the one containing the new values), rebuild the indexes in parallel, and voila! Our update is complete. Obviously you need to rebuild indexes, etc., as required.

With NOLOGGING, if the system aborts, you simply re-run the 'update' again, since you still have the original data in the main table.

The only difference between your code and mine is that I issue just one commit at the end. Your status said that you had a large backlog, so I decided not to wait for your response and tried it myself using dbms_job.submit() calls. At first it did not work (job_queue_processes was 0), but after I set it to 12 it started working.
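The three steps above can be sketched end-to-end as follows. This is a minimal illustration, not a drop-in script: the table and column names (xyz, field1..field3) come from the post, while the partition name p_current, the index name xyz_pk, and the literal replacement value are assumptions for the example. Grants, constraints, and triggers on the original table would need to be recreated by hand after a rename.

```sql
-- 1) Empty "hold" table with the same shape as xyz; rownum < 1
--    copies the structure but no rows. NOLOGGING suppresses redo
--    for the direct-path insert that follows.
create table xyz_hold as select * from xyz where rownum < 1;
alter table xyz_hold nologging;

-- 2) Direct-path, parallel insert. The "update" happens in the
--    SELECT list: the new value is substituted for field2.
insert /*+ append parallel(xyzhold, 12) */ into xyz_hold xyzhold
       (field1, field2, field3)
select /*+ parallel(x, 12) */
       x.field1,
       'new_value',          -- assumed: your computed replacement for field2
       x.field3
  from xyz x
 where 1 = 1;                -- assumed: your real predicate goes here
commit;                      -- the single commit, issued once at the end

-- 3a) Non-partitioned variant: swap names, then rebuild indexes.
-- rename xyz to xyz_old;
-- rename xyz_hold to xyz;

-- 3b) Partitioned variant (only one partition was touched):
-- alter table xyz exchange partition p_current with table xyz_hold;

-- Rebuild indexes in parallel as required, for example:
-- alter index xyz_pk rebuild parallel 12 nologging;
```

Note that NOLOGGING makes the new data unrecoverable from the redo stream until the next backup, which is exactly why the approach is safe to simply re-run after an abort: the original rows in xyz are never modified until the final rename or partition exchange.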