Archives of the TeradataForum

Message Posted: Sat, 07 Jul 2007 @ 11:22:46 GMT
Thanks guys (Dave Wellman, Dieter Noeth, Victor Sokovin, and others if they arrive after this note). Yes, I define all my character columns up to 30 bytes as CHAR and anything over that as VARCHAR (I have my reasons). I didn't realize the server side was UTF-16. Actually, I'm a little annoyed with myself, because I've been meaning to check the space difference between a LATIN and a UNICODE table for months, but just didn't get around to it.

Your combined comments - apart from alarming me - are going to make me stop and think about what I'm trying to achieve. One immediate thought is that I could change my approach to the following:

1. Only define columns as UNICODE where they are known to contain double-byte characters, rather than just setting all character-based columns to Unicode. I can't see this being easy to guarantee, but I can no longer ignore the approach.
2. Set all Unicode character columns to VARCHAR, which I assume (I'll check it this time!) will only take up the space it needs.
3. Crawl back under a stone and hope it all goes away!

I think I'll also have a word with my NCR consultants and ask how they plan to move forward with Unicode on the server side.

As usual, have a virtual beer on me (and if I run into you at a conference, I'll materialize them!)

Dave Clough
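The space trade-off being weighed above can be sketched in Teradata DDL. This is a hedged illustration (the table and column names are hypothetical): since Teradata stores CHARACTER SET UNICODE columns server-side as UTF-16, a fixed-width CHAR(n) UNICODE column occupies roughly 2*n bytes per row regardless of content, while a VARCHAR only takes the space its actual characters need, plus a small length overhead.

```sql
-- Hypothetical tables, for illustration of the three options only.

-- Fixed-width LATIN: CHAR(30) occupies 30 bytes per row, space-padded.
CREATE TABLE demo_latin_char
( code CHAR(30) CHARACTER SET LATIN NOT CASESPECIFIC );

-- Fixed-width UNICODE: stored as UTF-16, so CHAR(30) occupies about
-- 60 bytes per row even when the value is a single ASCII character.
CREATE TABLE demo_unicode_char
( code CHAR(30) CHARACTER SET UNICODE NOT CASESPECIFIC );

-- VARCHAR UNICODE: only the characters actually stored take space
-- (2 bytes each in UTF-16), plus a 2-byte length field per value.
CREATE TABLE demo_unicode_varchar
( code VARCHAR(30) CHARACTER SET UNICODE NOT CASESPECIFIC );
```

One way to check the difference empirically (assuming access to the DBC size views) is to load the same data into each table and compare SUM(CurrentPerm) per table from DBC.TableSize.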
Copyright 2016 - All Rights Reserved
Last Modified: 15 Jun 2023