Sunday, November 23, 2008

How to perform long deletes (committing every n records)

When performing a DELETE on a table, Oracle generates rollback (undo). If the table you are deleting from contains a large number of records, the transaction can fail because the rollback segment is not large enough. If you need to remove every row, one way around this is the TRUNCATE command: TRUNCATE generates almost no rollback and has the added benefit of resetting the high-water mark of the table and its indexes. Note, however, that TRUNCATE is DDL: it cannot be rolled back and does not accept a WHERE clause, so it only applies when the whole table is being emptied. When you want to delete a subset of the rows, another solution is the anonymous PL/SQL block below, which commits every 1000 deletes.
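For the case where the whole table is being emptied, the TRUNCATE approach is a one-liner (big_table here is just a placeholder name, not a table from the tip):

TRUNCATE TABLE big_table;

Because TRUNCATE is DDL it commits implicitly, so it can only replace an unconditional DELETE, never a DELETE with a WHERE clause.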

DECLARE

del_count NUMBER := 0;
total     NUMBER := 0;

CURSOR del_record_cur IS
SELECT rowid
FROM <table_name>
WHERE <where_condition>;

BEGIN
FOR rec IN del_record_cur LOOP
DELETE FROM <table_name>
WHERE rowid = rec.rowid;
total     := total + 1;
del_count := del_count + 1;
-- Commit every 1000 deletes to keep rollback usage small.
IF (del_count >= 1000) THEN
COMMIT;
del_count := 0;
END IF;
END LOOP;
COMMIT; -- commit the final partial batch
DBMS_OUTPUT.PUT_LINE('Deleted ' || total || ' records from <table_name>.');
END;
/


Update
Soon after this month's DBA Tip was published, Geurts Maarten wrote in to point out that the PL/SQL block above may fail when many other processes are running on the system, or when the table is very large. If no rollback segment is large enough, you may encounter the ORA-01555 "snapshot too old" error. This happens because the block is modifying the very data from which the open cursor builds its read-consistent snapshot of rowids.

You need to reopen the snapshot for each batch, with a PL/SQL block like this:

BEGIN
LOOP
DELETE FROM <table_name>
WHERE <where_condition>
AND ROWNUM <= 5000;
-- Exit the loop when there were fewer than 5000 records left to delete.
EXIT WHEN SQL%ROWCOUNT < 5000;
COMMIT; -- commit to clear the rollback segments (this commit I forgot in the last mail, oops)
END LOOP;
COMMIT; -- commit the last delete
END;
/

Note that the source I typed is untested; it may contain syntax errors. The point is: the bigger the rollback a statement needs, the slower it gets. I have seen cases with a similar PL/SQL block (with an UPDATE instead of a DELETE) where the first records were processed twice as fast as the last ones. On the other hand, my construction has to execute the SQL many more times, which is why I increased the commit size from 1000 to 5000.
