check internal and external constraints (make sure to index foreign keys targeting the table, otherwise read this; see the query after this list);
maintain indexes;
feed the undo/redo segments;
feed the temp segment with the result set;
leave fragmented space afterwards (in data and indexes), causing scattered reads.
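To check the foreign-key point, a query along these lines (a sketch against the USER_* dictionary views; the parent table name 'BIG_T' is a hypothetical placeholder) lists the foreign keys pointing at the table and their columns, so you can verify each child table has an index leading on those columns:

    SELECT fk.table_name      AS child_table,
           fk.constraint_name AS fk_name,
           fkc.column_name,
           fkc.position
    FROM   user_constraints  fk
           JOIN user_constraints  pk
             ON pk.constraint_name = fk.r_constraint_name
           JOIN user_cons_columns fkc
             ON fkc.constraint_name = fk.constraint_name
    WHERE  fk.constraint_type = 'R'       -- foreign keys only
    AND    pk.table_name      = 'BIG_T'   -- the table being purged (placeholder)
    ORDER  BY fk.table_name, fkc.position;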
Rules of thumb
more than 30% of the data to delete
Use CTAS and rename (see the sketch below). One drawback: all indexes and constraints must be backed up (e.g. with dbms_metadata.get_ddl) and re-created afterwards.
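A minimal sketch of the CTAS-and-rename route, assuming a hypothetical table BIG_T and a hypothetical STATUS column that marks the rows to purge:

    -- 1) capture DDL to re-create afterwards (repeat for constraints, triggers, grants)
    SELECT dbms_metadata.get_ddl('INDEX', index_name)
    FROM   user_indexes
    WHERE  table_name = 'BIG_T';

    -- 2) copy only the rows to keep
    CREATE TABLE big_t_keep NOLOGGING AS
      SELECT * FROM big_t WHERE status <> 'PURGE';

    -- 3) swap the tables, then re-run the captured DDL
    DROP TABLE big_t;
    ALTER TABLE big_t_keep RENAME TO big_t;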
less than 30% of the data to delete, on a regular basis (e.g. weekly)
selection is easy (one column)
Consider partitioning the table on this column, and use ALTER TABLE ... DROP PARTITION ... UPDATE INDEXES (see the sketch below);
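For illustration, a sketch with a hypothetical weekly range-partitioned table APP_LOG; the partition drop replaces the DELETE entirely, and UPDATE INDEXES keeps global indexes usable:

    -- hypothetical weekly range-partitioned table
    CREATE TABLE app_log (
      log_date  DATE,
      msg       VARCHAR2(4000)
    )
    PARTITION BY RANGE (log_date) (
      PARTITION p_2024_w01 VALUES LESS THAN (DATE '2024-01-08'),
      PARTITION p_2024_w02 VALUES LESS THAN (DATE '2024-01-15'),
      PARTITION p_future   VALUES LESS THAN (MAXVALUE)
    );

    -- purge one week with a single DDL instead of a big DELETE
    ALTER TABLE app_log DROP PARTITION p_2024_w01 UPDATE INDEXES;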
selection is cumbersome (multiple criteria, dependencies on other tables)
Overview:
* create an index to speed up the selection;
* select the ROWIDs of the rows to delete, and issue the DELETE against those ROWIDs;
* use FETCH ... BULK COLLECT ... LIMIT 10000 with FORALL to limit context switches between SQL and PL/SQL - COMMIT at each loop (see the sketch after this list);
* try disabling internal and external constraints, re-enabling them afterwards (check for ROI);
* consider making indexes unusable, rebuilding them afterwards (check for ROI) - this avoids the slow, random I/O of index maintenance during the delete;
* consider shrinking the table afterwards.
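A minimal sketch of the ROWID-based bulk delete, assuming a hypothetical table BIG_T and a hypothetical STATUS = 'PURGE' condition standing in for the cumbersome selection:

    DECLARE
      CURSOR c_victims IS
        SELECT rowid
        FROM   big_t
        WHERE  status = 'PURGE';            -- the cumbersome selection goes here

      TYPE t_rowid_tab IS TABLE OF ROWID INDEX BY PLS_INTEGER;
      l_rowids  t_rowid_tab;
    BEGIN
      OPEN c_victims;
      LOOP
        FETCH c_victims BULK COLLECT INTO l_rowids LIMIT 10000;
        EXIT WHEN l_rowids.COUNT = 0;

        FORALL i IN 1 .. l_rowids.COUNT     -- one context switch per batch
          DELETE FROM big_t WHERE rowid = l_rowids(i);

        COMMIT;                             -- commit at each loop iteration
      END LOOP;
      CLOSE c_victims;
    END;
    /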
Data change
When a lot of rows are deleted:
easy way (does NOT lock the table): ALTER TABLE ... SHRINK SPACE COMPACT (see the sketch after this list)
hard way (locks the table, indexes become UNUSABLE): ALTER TABLE ... MOVE
manual way (drops the table and its constraints): INSERT INTO staging SELECT ... FROM t; DROP TABLE t; CREATE TABLE t AS SELECT ... FROM staging
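A sketch of these options with a hypothetical table BIG_T (SHRINK requires row movement and an ASSM tablespace):

    -- easy way: compact online, optionally lower the high-water mark afterwards
    ALTER TABLE big_t ENABLE ROW MOVEMENT;
    ALTER TABLE big_t SHRINK SPACE COMPACT;
    -- ALTER TABLE big_t SHRINK SPACE;   -- optional follow-up: brief lock, lowers the HWM

    -- hard way: rebuild the segment (locks the table, indexes become UNUSABLE)
    ALTER TABLE big_t MOVE;
    -- ALTER INDEX ... REBUILD;          -- for each index afterwards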
One of the commonest complaints about bulk inserts and bulk deletes on the OTN database forum used to be the amount of time spent on “db file sequential read” waits; execution plans (in Oracle) don’t tell us anything about the overheads of index maintenance so it’s very easy to forget that a big delete can do a lot of slow, random I/O. (It’s interesting to note that plans in SQL Server do tell you which indexes you have to maintain as you delete.)