Archives of the TeradataForum
Message Posted: Tue, 12 Aug 2003 @ 22:35:41 GMT
Subj: Re: PSF questions
From: Ballinger, Carrie
Hi Shelley,
I have embedded a few suggestions and comments underneath your points in the text below, for convenience.
I think you are on track with your planned use of resource partition limits, but I am not sure that your plans for time-based performance
periods are going to deliver what you want.
Thanks, -Carrie
-----Original Message-----
From: Perrior, Shelley
Sent: Tuesday, August 12, 2003 12:15 PM
Subject: PSF questions
| Some questions about Priority Scheduler (V2R5), ceilings, etc. I hope some of you can share some of your experience in this area.
| We have defined all five Resource Partitions and their associated Performance Groups and Allocation Groups in the default manner, i.e. Performance Groups 0-7 have associated Allocation Groups 1-4, and PGs 8-15 have AGs 40-43. We have moved all client accounts and batch accounts to a partition we call "general" (RP1) by modifying their account login strings. The RPs are currently set at 100 and 75 (the default\system partition and the general\client partition respectively). The other three partitions are not in use. Typically our system is available to application and client accounts from 7 AM - 7 PM and available to batch/ETL accounts from 7 PM - 7 AM. Client applications typically limit client use to 7 AM - 7 PM.
| We are about to add two nodes, going from 6 nodes to 8 nodes. We want to limit client resources to 75% (the original 6 nodes) until the applications that require the extra resources come on board, but we want to allow the batch accounts used for ETL and archiving full use of the resources.
| Move the batch accounts to the third resource partition, call it "Batch", and set a weight of 90. These accounts will use the system mostly from 7 PM - 7 AM; at night they should have close to full resources, but we always plan to give the "default\system" partition the most resources, if it needs them.
You might want to reduce the assigned weight of the batch resource partition (RP) from 90 to something lower. It appears that you only intend to have a single allocation group (AG) active in this RP, so all of the RP's relative weight will go to that single AG. Currently the batch RP's relative weight would be about 34% (100 + 90 + 75 = 265; 90/265 = 34%). I am concerned that a single batch AG, when it is active at the same time as the General resource partition, will be slightly above the relative weight of AG 4 (associated with R), which is not recommended. Whatever else you do with weights, always check to make sure the R-associated AG (AG 4) has the highest relative weight, because some very sensitive DBS work is performed under the control of AG 4, and it's always a good idea for that work to benefit from the highest possible priority. One approach I have seen taken to protect the relative weight of R is to make sure the sum of the "other" RPs' assigned weights does not exceed the assigned weight of RP0. You can also check your monitor output after running with the new priorities and tweak the weights after the fact if AG 4 is not coming out on top.
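The relative-weight arithmetic above can be sketched in a few lines of Python. The assigned weights are the ones from this thread; the function is just the ratio calculation, not a Teradata API:

```python
def relative_rp_weights(assigned):
    """Relative weight of each active resource partition: its assigned
    weight divided by the sum of all active RPs' assigned weights."""
    total = sum(assigned.values())
    return {rp: w / total for rp, w in assigned.items()}

# All three RPs active at once (assigned weights from this discussion):
shares = relative_rp_weights({"RP0": 100, "RP1": 75, "RP2": 90})
# RP2 (batch) gets 90/265, about 34% -- and since only one batch AG is
# active, that entire share lands on the single AG, while RP0's ~38%
# is further divided among RP0's own active allocation groups.
```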
| For this batch partition we will set up the PG's performance periods based on time of day, i.e. 07:00 AM until 7 PM.
The time-of-day performance period (PP) will move users associated with a given performance group from one allocation group to a different AG within the same resource partition. All active sessions are moved at the same time, so only one AG or the other will ever be active, the way you have it set up. If you only intend to have a single group active for all batch work, then setting up time milestones is not really going to alter the relative weight you end up with. Have you thought about simply reducing the assigned weight of RP2 at 7 AM, if your intent is to lower the priority of batch work when general work begins in the morning? You could, for example, give RP1 an assigned weight of 75 during the day and 25 at night, and the reverse for the batch RP. You would have to submit a UNIX job to issue those weight changes regularly, but that's a fairly common approach.
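As a sketch of the day/night weight swap suggested above: the 75/25 split and the 7 AM / 7 PM boundaries are the example values from this message, and a scheduled job would translate the returned table into actual weight-change commands (not shown, since the exact utility syntax is site-specific):

```python
def rp_weights_for_hour(hour):
    """Assigned weights for RP1 (general) and RP2 (batch) for a given
    hour of day, swapping priorities at 07:00 and 19:00."""
    daytime = 7 <= hour < 19
    return {"RP1": 75 if daytime else 25,
            "RP2": 25 if daytime else 75}

# A cron-style job run at 07:00 and 19:00 would read this table and
# issue the corresponding assigned-weight changes for each RP.
```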
| We will set the General\Client partition to a 75% limit using the new V2R5 feature which allows you to place a limit at the RP level.
This is a reasonable approach: it limits only the user queries, not the batch or internal work.
| Account strings will be modified to add $M2$, placing the batch accounts in Performance Group 18, initially associated with Allocation Group 81.
| Couldn't find a great deal of info on the syntax related to a "time" performance period. Does the following example look about right: PG 18 will use AG 81 until 07:00, at which time it will switch to AG 80?
Yes, the syntax looks fine to me; it's the "end time" that you want to express in each performance period, which is what you are doing. But I still have the same question as I raised above: is this the best way to accomplish what you want to do? If there is only one performance group that will ever be active within the batch RP, then the change being made will have no impact on relative weight, because a single active AG has the same relative weight whether it is AG 80 or AG 81. The only thing that could make a difference with this performance period approach is if AG 80 had a different policy, such as a restrictive policy, but based on your syntax they both use the default policy.
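The point that the milestone has no effect on relative weight when only one AG is active can be checked with the same ratio arithmetic. The AG assigned weights of 5 and 10 below are made-up values purely for illustration:

```python
def ag_shares(rp_share, ag_weights, active):
    """Split a resource partition's relative weight among its active
    allocation groups, in proportion to their assigned weights."""
    total = sum(ag_weights[ag] for ag in active)
    return {ag: rp_share * ag_weights[ag] / total for ag in active}

weights = {"AG80": 5, "AG81": 10}            # hypothetical assigned weights
night = ag_shares(0.34, weights, ["AG81"])   # before 07:00: only AG 81 active
day = ag_shares(0.34, weights, ["AG80"])     # after 07:00: only AG 80 active
# Either way, the single active AG inherits the full 34% RP share,
# regardless of its own assigned weight.
```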
| Do I need to add $L2$ to the account string? Since it is the performance period that makes the switch, I wasn't sure I needed this second PG association in the account string. I believe that I do need it, but I'm not sure.
Nope. When a milestone is being used, the performance group stays the same, but the level below the performance group, the allocation group, changes.
| Will the other allocation groups interfere in the weighting algorithm, especially where it relates to the new V2R5 RP ceiling limit? This is why I have set the associated AGs to 1 - to take them out of the equation. I don't think they would be factored in, but I wasn't sure.
Nope again. Only active allocation groups influence the calculation of relative weight. Inactive allocation groups are not considered, so their weights could be anything. The RP-level ceiling will only consider the active allocation groups within that RP.
| Any thoughts on the RP weightings: Default (RP0) = 100 (relative weight), General (RP1) = 75 (RP ceiling limit of 75 as well), Batch (RP2) = 90 (relative weight)?
If, as you said earlier, you only intend to use RP1 during the day and RP2 during the night, then as long as each is independently less than RP0's assigned weight, you are probably on the right track. If you foresee RP1 and RP2 ever being active at the same time (stuff happens), then I think you have weighted them, in combination, too high. I'd try for something whose sum is 100 (the weight of RP0) or less.
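The rule of thumb above (keep the sum of the other RPs' assigned weights at or below RP0's) can be written as a one-line check, using the weights from this thread:

```python
def rp0_protected(rp0_weight, other_weights):
    """True if, even with every RP active at once, RP0 keeps at least
    half the total weight, i.e. the other assigned weights sum to no
    more than RP0's own assigned weight."""
    return sum(other_weights) <= rp0_weight

# Current plan: RP1 = 75 and RP2 = 90 against RP0 = 100 fails the check,
# while the 75/25 day-night split keeps the sum at exactly 100.
print(rp0_protected(100, [75, 90]))   # False
print(rp0_protected(100, [75, 25]))   # True
```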