
Archives of the TeradataForum

Message Posted: Sat, 08 Mar 2003 @ 09:58:53 GMT


Subj:   Re: Deleting from a huge table
From:   Dieter Nöth

Anomy Anom wrote:

  I have a huge table of 1.2 billion rows. Every day I load about 30 million rows into this table, and after the load I need to delete about the same number of rows. The delete involves very complex logic. The PI on the table is non-unique, and the delete is based on columns that are not all in the PI. Currently the delete takes about 3 hours. I would like to know if this is the best we can do. Our production node is a W2K server with 4 CPUs. Could anyone suggest a better way of deleting?  

Do you delete using plain SQL? A SQL DELETE of 30 million rows writes every deleted row to the transient journal, so in case of a failure it will result in a very long rollback.

A MultiLoad DELETE job is faster than SQL, because there's no transient journal and there's never a rollback.
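A minimal sketch of such a MultiLoad DELETE task, assuming hypothetical database, table, and column names (the real job would use your own logon, log table, and WHERE condition; note a DELETE task's WHERE clause must be a full-scan condition, not a unique-PI equality):

```
.LOGTABLE  logdb.mldel_log;
.LOGON     tdpid/username,password;

.BEGIN DELETE MLOAD TABLES mydb.big_table;

/* hypothetical condition -- substitute your own delete logic */
DELETE FROM mydb.big_table
WHERE  load_dt < DATE '2003-02-01';

.END MLOAD;
.LOGOFF;
```

Because the DELETE task works block-by-block and journals nothing, a restart after a failure simply resumes from the last checkpoint instead of rolling back.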

Or you export the data to be deleted (including the PI columns) and use a MultiLoad IMPORT job to delete the rows. You could even add that to the existing MultiLoad job, so both tasks are done in a single table scan.
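A sketch of the IMPORT variant, again with hypothetical names: each input record carries the PI columns (plus any other columns needed to identify the row), and the DML label applies a DELETE per record:

```
.LOGTABLE  logdb.mlimp_log;
.LOGON     tdpid/username,password;

.BEGIN IMPORT MLOAD TABLES mydb.big_table;

.LAYOUT del_layout;
.FIELD  pi_col1  *  INTEGER;
.FIELD  pi_col2  *  CHAR(10);

.DML LABEL del_dml;
DELETE FROM mydb.big_table
WHERE  pi_col1 = :pi_col1
AND    pi_col2 = :pi_col2;

.IMPORT INFILE delrows.dat FORMAT FASTLOAD
        LAYOUT del_layout
        APPLY  del_dml;

.END MLOAD;
.LOGOFF;
```

Because the rows to delete arrive through the PI, MultiLoad can hash them to the right AMPs and apply all deletes in its single application phase, alongside any inserts the same job performs.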

Another way is to INSERT/SELECT the rows you want to keep into a new table, then drop the old one. This may be the fastest way, but probably only if you're deleting much more than the current 2.5% of the rows (30 million out of 1.2 billion).
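The copy-and-swap approach could look like this, with hypothetical names and a placeholder for the complex delete logic (an empty copy of the table is created first; INSERT/SELECT into an empty table avoids transient journaling):

```
-- empty copy with the same definition (same NUPI)
CREATE TABLE mydb.big_table_new AS mydb.big_table WITH NO DATA;

-- keep everything that does NOT match the delete logic
INSERT INTO mydb.big_table_new
SELECT *
FROM   mydb.big_table
WHERE  NOT ( /* complex delete conditions here */ load_dt < DATE '2003-02-01' );

DROP TABLE mydb.big_table;
RENAME TABLE mydb.big_table_new TO mydb.big_table;
```

The trade-off is that the cost is proportional to the rows kept, not the rows deleted, which is why it only wins when a large fraction of the table is going away.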


Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 28 Jun 2020