Archives of the TeradataForum

Message Posted: Fri, 05 Oct 2012 @ 10:52:20 GMT


     


Subj:   Re: Compression Process on Teradata 13.10
 
From:   VANDORMAEL Raf

Hi,

Our previous boxes were of the type:

5450: 2 nodes, dedicated to dev/stg (TD version 13)

5500C: 2 nodes, dedicated to prd (TD version 13)


We had no compression on the tables before the backups were taken.

So in fact, we did some combined operations:

- New machinery (floorsweep) + new TD version (TD 13 ==> TD 13.10) + data migration (actually, in our case it was also a move from UNIX MP-RAS ==> SUSE LINUX!).


We didn't have Data Mover at that moment in time. Now we do, but that's another topic. We did have NPARC, but I was told that tool is EOL and being replaced by Data Mover, hence it wasn't installed and configured on our new boxes. As we had no experience with Data Mover, only one option remained for the data migration ==> ARCMAIN.

So for our backup/restore operation ==> ARCMAIN TTU 14 on MF (= our shop's current standard).

Personally, I don't think it matters whether it's ARCMAIN on the MF or not; the tool remains the same. (Esteemed experts, please don't shoot me if I'm wrong.) I think internally there must be some kind of flag: ARCMAIN seems to notice whether or not the data is compressed. If the data is compressed at the time of the backup, it gets restored compressed; if not, it gets restored as it was before, regardless of whether the auto compression parameters are on or not.

About the remark that the TD sales/tech people were not aware of it:

Could be, but our local support/EMEA team knows about it. A few days after the data migration had completed, and after some investigation, we noticed it and raised an incident ticket for it at TD (via the web portal; sorry, I can't remember the ticket number anymore). I can say with 100% confidence that the auto compression parameters were active: our local TD support crew had checked the entire configuration.

So in order to activate the compression on our non-compressed tables, they said we had to use the ferret utility and do the compression manually, table by table. *big sigh* ==> big job :P ==> hallelujah for automation :-).

We used the CNSRUN utility to execute the ferret scripts in batch mode. I'll try to give you the steps I took to get the job done.

1a) Save your current space statistics, so you can compare the before/after situation.
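
For example, a minimal sketch of such a snapshot (assuming you have a scratch database to hold it, here called "yourwork", which is my own hypothetical name; dbc.tablesize holds the current perm space per table per AMP):

     /* Snapshot of perm space per table, summed over all AMPs */
     CREATE TABLE yourwork.space_before AS
     ( SELECT databasename
            , tablename
            , SUM(currentperm) AS currentperm_bytes
       FROM dbc.tablesize
       GROUP BY 1, 2
     ) WITH DATA
     PRIMARY INDEX (databasename, tablename);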

1b) Generate the compress statements for the tables dynamically via BTEQ or SQLA. (I did it database by database.)

SQL:

     SELECT 'compress "'||TRIM(databasename)||'.'||TRIM(tablename)||'"'
     (TITLE '')
     FROM dbc.tables
     WHERE databasename = 'yourdbname'
     AND tablekind = 'T'  /* real tables only; dbc.tables also lists views, macros, etc. */
     ORDER BY tablename
     ;

     The result needs to be: compress "db.tablename"

2) Of course the syntax is not yet complete, because your cnsrun ferret script needs to look like this (each Y answers ferret's confirmation prompt for the compress statement above it):

     enable scriptmode
     output over /tmp/rvd/db.log

     compress "db.tab1"    ==> generated statement from point 1b
     Y
     compress "db.tab2"    ==> generated statement from point 1b
     Y
     ...
     quit

So I wrote a little program that created the compression script as you see above. Depending on the number of lines to process, I generated several scripts.

For instance, if your input had 1000 lines, my program chopped it up into 5 scripts with 200 tables each to process.

(I'm not going to elaborate on this, because it was a workaround for a bug in ferret that should be fixed by now by one of the numerous TD patches. Normally this chopping is no longer necessary.)
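
If you'd rather generate the whole thing in SQL, here's a rough sketch of the same idea (my own construction, not the program I actually used): it emits each compress statement followed by its Y answer, plus a script number you could use to chop the output into files of 200 tables each.

     /* Two rows per table: the compress statement (seq 1) and its Y answer (seq 2) */
     SELECT d.script_no
          , CASE n.seq
              WHEN 1 THEN 'compress "'||d.dbname||'.'||d.tabname||'"'
              ELSE 'Y'
            END AS script_line
     FROM ( SELECT TRIM(databasename) AS dbname
                 , TRIM(tablename)    AS tabname
                 , ((ROW_NUMBER() OVER (ORDER BY tablename) - 1) / 200) + 1 AS script_no
            FROM dbc.tables
            WHERE databasename = 'yourdbname'
              AND tablekind = 'T'
          ) d
     CROSS JOIN ( SELECT 1 AS seq UNION ALL SELECT 2 ) n
     ORDER BY d.tabname, n.seq;

Export the script_line column per script_no value, wrap each file with the enable scriptmode / output / quit lines, and you have your batch scripts.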

3) File transfer the generated scripts to your target machine.

4) Log on to the primary node of the target machine, and run the cnsrun command to start ferret in batch mode, with your generated scripts as input.

For the exact syntax of the cnsrun tool (in this case ferret), please look into the utilities guide.
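
From memory, and purely as an illustration (I'd treat the exact option names as an assumption and verify them against the guide), the call on the node looked roughly like this:

     cnsrun -utility ferret -file /tmp/rvd/compress01.scr

with /tmp/rvd/compress01.scr being one of the scripts generated in step 2 (the file name is just my example).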

5a) Follow the run and check the result(s).

5b) Look at the new space statistics and see what the outcome is. The results are spectacular!! (at least in our shop)
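
If you saved the snapshot from step 1a, a simple before/after comparison could be a query like this (again a sketch, using my hypothetical yourwork.space_before table):

     /* Compare the saved snapshot against the current dbc.tablesize figures */
     SELECT b.databasename
          , b.tablename
          , b.currentperm_bytes AS perm_before
          , SUM(s.currentperm)  AS perm_after
          , b.currentperm_bytes - SUM(s.currentperm) AS bytes_saved
     FROM yourwork.space_before b
     JOIN dbc.tablesize s
       ON  s.databasename = b.databasename
       AND s.tablename    = b.tablename
     GROUP BY 1, 2, 3
     ORDER BY 5 DESC;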


To end this long story: this is how it was done in our shop. Every shop has its own curiosities, exceptions, etc., so don't blindly start doing everything I've done. First examine what's useful for you.

Also, read the manual mentioned in my previous posting carefully. You'll need it.

If your new machines are already running but not yet in use, do an experimental test migration. During that test migration, note every issue you encounter and how you solved it. Once all that is done, make an execution plan. It will save you a lot of trouble when it's time for the real action. :)


Best regards,

R.



     