Don’t confuse SLES11 Virtual Partitions with SLES10 Resource Partitions - blog entry by carrie

Because they look like just another group of workloads, you might think that SLES11 virtual partitions are the same as SLES10 resource partitions.  I’m here to tell you that is not the case.  They have quite different capabilities and purposes.  So don’t fall victim to retro-conventions and old-school habits that might hold you back from the full value of new technology.  Start using SLES11 with fresh eyes and brand new attitudes.  Begin at the virtual partition level.

This content is relevant to EDW platforms only.

Background on SLES10 Resource Partitions  

Use of multiple resource partitions (RPs) in SLES10 originated with early restrictions on how many different priorities each RP could support.  The original Teradata priority scheduler had four external performance groups and four internal performance groups contained in a single default RP.  Even today, the original RP (RP 0, the default RP) usually supports no more than the four default priorities of $L, $M, $H, and $R.

Teradata V2R5 introduced the ability to add resource partitions, but even then each new resource partition could only support 4 different external performance groups, similar to how RP 0 worked.  This forced users to branch out to more RPs if they had a greater number of priority differences.  So it was common to see 4 to 5 RPs in use, and some users complained that this wasn't enough to provide homes for the growing mix of priorities they were trying to support.

In V2R6, priority scheduler was enhanced to allow more than 4 priority groupings in any RP.  At that time we encouraged users to consolidate all their performance groups into three standard partitions for ease of management:  Default, Standard, and Tactical.  Generally, a Tactical RP was needed to give special protection to short tactical queries.  Some internal work still ran in RP 0 so it was recommended that you avoid assigning user work there, which necessitated that a “Standard” RP be set up to manage all of the non-tactical performance groups.   In SLES10 many users embraced this three-RP approach, while others went their own way with subject-area divisions or priority-based divisions among multiple RPs (creating a Batch RP and a User RP, for example).

Here are four rationales for the multiple resource partition usage patterns that are in heavy rotation with SLES10 today.  For the most part they came into being due to restrictions within the SLES10 priority scheduler which encouraged out-of-the-box use of multiple RPs on EDW platforms, whether you thought you needed them or not.

  1. Internal work: Some sensitive internal work ran in RP 0, so the recommendation was to avoid putting user work there.
  2. Protection for tactical work by isolating it into its own RP with a high RP weight.  A high RP weight contributed to a more stable relative weight (allocation of resources) for tactical workloads.
  3. Desire to more easily swap priorities between load and query work at different times of day (by making one change at the RP level instead of multiple changes at the allocation-group level).  These RP-level changes often included the desire to add RP-level CPU limits on RPs supporting resource-intensive work, in order to protect tactical queries at certain times of the day.
  4. Sharing unused resources within an RP.  Some sites liked putting all work from one application type in the same RP so that if one of the allocation groups was idle, the other allocation groups in that RP would pick up its relative weight points.  The SLES10 relative weight calculation benefits groups within the same RP, such that they share unused resources among themselves first, before those resources are made available to allocation groups in other RPs (see the sketch just below).
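
To make that last point concrete, here is a rough sketch of the sharing behavior in Python.  It is a simplified model of how relative weight redistributes within a resource partition when a sibling allocation group goes idle, not the exact priority scheduler formula, and all of the partition names, group names, and weights are invented purely for illustration.

def relative_weights(partitions):
    """partitions: {rp_name: {"rp_weight": int, "ags": {ag_name: (ag_weight, active)}}}.
    Returns {(rp_name, ag_name): fraction_of_total} for active allocation groups only."""
    # Only RPs with at least one active allocation group participate in the RP-level split.
    active_rp_total = sum(
        rp["rp_weight"]
        for rp in partitions.values()
        if any(active for _, active in rp["ags"].values())
    )
    shares = {}
    for rp_name, rp in partitions.items():
        active_ags = {name: w for name, (w, active) in rp["ags"].items() if active}
        if not active_ags:
            continue
        rp_share = rp["rp_weight"] / active_rp_total
        ag_total = sum(active_ags.values())   # idle groups drop out of the within-RP sum
        for ag_name, ag_weight in active_ags.items():
            shares[(rp_name, ag_name)] = round(rp_share * ag_weight / ag_total, 3)
    return shares

config = {
    "Standard": {"rp_weight": 60, "ags": {"Query": (50, True), "Report": (50, True)}},
    "Batch":    {"rp_weight": 40, "ags": {"Load": (50, True), "Export": (50, True)}},
}

print(relative_weights(config))
# {('Standard', 'Query'): 0.3, ('Standard', 'Report'): 0.3,
#  ('Batch', 'Load'): 0.2, ('Batch', 'Export'): 0.2}

config["Batch"]["ags"]["Export"] = (50, False)   # Export goes idle
print(relative_weights(config))
# Load absorbs Export's share within the Batch RP; Standard's groups are unchanged:
# {('Standard', 'Query'): 0.3, ('Standard', 'Report'): 0.3, ('Batch', 'Load'): 0.4}

The only point of the example is the sharing order: an idle group's points go to its RP-mates first, which is exactly the behavior SLES10 sites were relying on when they grouped similar application types into the same RP.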

Very few examples of using resource partitions for business-unit divisions have been in evidence among Teradata sites on SLES10, partly because only four usable RPs were available and partly because the SLES10 technology has not been all-encompassing enough to support the degree of separation required.

What Has Changed with SLES11?

A lot.

First, let’s address the four key motives (or rationales) users have had for spreading workloads and performance groups across multiple RPs in SLES10, this time looking at them from the SLES11 perspective.

  1. Internal work: In SLES11 all internal work has been moved up in the priority hierarchy above the virtual partition level, where it can get all of the resources it needs off the top, without the user having to be aware of, or make allowances for, where that internal work is running.  There is no longer a need to set up additional partitions to avoid impacting internal work.
  2. Protection for tactical work: The Tactical tier in SLES11 is intended to be (and is) a turbo-powered location in which to place tactical queries, where response time expectations can be consistently met without taking extraordinary steps.  The Tactical tier in SLES11 is first in line when it comes to resource allocation, right after the operating system and internal database tasks.  This eliminates the need for a special partition solely for tactical work, or as a means of applying resource limits on the non-tactical work.
  3. Desire to more easily swap priorities: There is something to be said for grouping workloads that need priority changes at similar times into a single partition, because then you only have to make the change in one place.  But that is a fairly minor issue on either SLES10 or SLES11 with the advent of TASM planned environments.  You save very little during TASM setup by indicating a change in one place (a virtual partition) versus making the change in several places (multiple workloads), when those changes are going to happen automatically for you at run time each day.  There is no repetitive action that needs to be taken by the administrator once a new planned environment has been created.  A new planned environment can automatically implement new definitions, with lower priorities for some of the workloads and higher for others, no matter how many workloads are involved.

Applying higher-level (partition-level) resource limits on a group of workloads, as we have seen at some SLES10 sites, is much less likely to be needed in SLES11 (I personally believe it will not be needed at all).  That is because the accounting in the SLES11 priority scheduler is more accurate, giving SLES11 the ability to deliver exactly what is specified.  No more, no less.  There is no longer a performance-protection need for resource limits or an over-/under-allocation of weight at the partition level.  And because that need has gone away, the argument in favor of separate partitions for performance benefit is less compelling.

  4. Sharing unused resources: Sharing unused resources among a small set of selected workloads is available on each SLG Tier as it exists within a single virtual partition in SLES11.  If an SLG Tier 1 workload is idle, the other workloads placed on SLG Tier 1 will be able to share its allocation before those resources are made available to other workloads lower in the hierarchy.  The order of sharing of unused resources is guided by the priority hierarchy in SLES11 and does not require multiple partitions to implement (see the sketch below).
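
If it helps to picture that, here is a small, hypothetical sketch of the within-tier sharing just described.  It is not the actual SLES11 scheduler arithmetic; the workload names and percentages are invented, and the only point it makes is the order of sharing: active tier-mates absorb an idle workload's allocation before anything flows further down the hierarchy.

def slg_tier_flow(incoming, workloads):
    """workloads: {name: (allocation_pct, active)}.
    Returns ({name: resources_received}, resources_flowing_to_the_next_level)."""
    active = {name: pct for name, (pct, act) in workloads.items() if act}
    claimed = sum(pct for pct, _ in workloads.values())      # the tier's defined allocations
    unclaimed = incoming * (100 - claimed) / 100              # never claimed, always flows down
    idle_share = incoming * (claimed - sum(active.values())) / 100
    active_total = sum(active.values())
    shares = {}
    for name, pct in active.items():
        base = incoming * pct / 100
        # an idle tier-mate's allocation is shared among the active workloads first
        bonus = idle_share * pct / active_total
        shares[name] = base + bonus
    return shares, unclaimed

shares, to_lower_levels = slg_tier_flow(100.0, {"Dashboards": (30, True), "Alerts": (20, False)})
print(shares)            # {'Dashboards': 50.0}  -- Dashboards picks up Alerts' idle 20%
print(to_lower_levels)   # 50.0 -- only the unclaimed portion reaches the levels below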

The Intent and Vision of SLES11 Virtual Partitions

A virtual partition in SLES11 is a self-contained microcosm.  It has a place for very high priority tactical work in the Tactical tier.  It has many places in the SLG Tiers for critical, time-dependent work across all applications, ranging from the very simple to the more complex.  And at the base of its structure, in Timeshare, it can accommodate large numbers of different workloads submitting resource-intensive or background work at different access levels, including load jobs, sandbox applications, and long-running queries.  Within its self-sufficient world, priorities at the workload level can be changed multiple times every day if you wish, using planned environments in the TASM state matrix.

If you’re on an EDW platform with SLES11, you are offered multiple virtual partitions, but their intent is different from that of SLES10 resource partitions.  Virtual partitions were implemented in order to provide a capability that SLES10 was not well suited to deliver:  supporting differences in resource availability across multiple business units, distinct geographic areas, or a collection of tenants.

Virtual partitions are there to provide a method of slicing up available resources among key business divisions of the company on the same hardware platform.  Once you get on SLES11, if you begin carving up virtual partitions in a direction that only made sense in SLES10, you lose the ability to sustain distinct business units that way in the future.  And you’ll be less in harmony with TASM/SLES11 enhancements going forward.

New capabilities around virtual partitions, such as virtual partition throttles in 15.0, and other similar enhancements being planned, are all being put in place with the same consistent vision of what a virtual partition is.  Keep in step with these enhancements, and position yourself to use them fully, by letting go of previous conventions and embracing the new world of SLES11 possibilities.
