Although often viewed as a legacy platform, the mainframe still occupies a significant place in the data centers of many organizations. Furthermore, although mainframe price/performance continues to improve year to year, mainframes remain a big-ticket investment. Therefore, proper management of mainframe utilization is an important part of data center planning.
Mainframe capacity planning is a balancing act. Too much utilization during peak periods can create CPU bottlenecks, resulting in slowed response time or late completion of batch processing. Under-utilization, on the other hand, means that the organization is spending more on mainframe capacity than is warranted by its computing workload. Furthermore, oversized mainframes drive other costs in the data center. For example, mainframe software pricing is largely based on installed MIPS; the more MIPS, the higher the license and maintenance fees for mainframe software. Larger mainframes also consume more power and require more cooling. Therefore, properly sizing mainframe capacity is a key to keeping data center costs under control.
This article provides metrics for average utilization of mainframe systems across all industry sectors by shift, along with equivalent utilization figures for best practice participants. User organizations may compare their own utilization levels with these metrics to determine where they stand in relation to industry averages and make adjustments accordingly.
Average and Best Practice Utilization by Shift
Figure 1 provides mainframe utilization metrics over the previous five quarters across organizations in all industry sectors. CPU usage is measured over a 30-day period, 24 hours a day. The 30-day measurement window must contain four weekends and a month-end close, and it must not include a holiday.
Average mainframe CPU utilization over the entire 30-day period is 53%, with the best-practice participant averaging 75% utilization. During prime shift (Monday through Friday, 7:00 a.m. to 6:00 p.m.), survey participants achieve average utilization of 68%, with the best-practice participant reaching a high of 83%. On non-prime shift weekdays (Monday through Thursday, 6:00 p.m. to 7:00 a.m., and Friday 6:00 p.m. to 12:00 a.m.), organizations run at 58% utilization, with the best-practice participant achieving a 75% utilization rate. Weekend utilization (Saturdays and Sundays, 48 hours) is only 38% on average, with the best-practice participant reaching 68%. In shift definitions, all times are local data center time.
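The per-shift figures above can be combined into a rough blended average. The sketch below weights each shift's average utilization by its weekly hours, as derived from the shift definitions in the text. Note that the result only approximates the reported 30-day figure of 53%: the shift definitions leave a small gap in the week, and the survey averages come from different samples per shift.

```python
# Blended weekly utilization from per-shift survey figures (a rough
# sketch; hour counts are derived from the shift definitions above).
shifts = {
    # name: (hours per week, average utilization)
    "prime":     (11 * 5, 0.68),      # Mon-Fri, 7:00 a.m. to 6:00 p.m.
    "non_prime": (13 * 4 + 6, 0.58),  # Mon-Thu nights plus Fri evening
    "weekend":   (48, 0.38),          # Sat and Sun, 48 hours
}

total_hours = sum(h for h, _ in shifts.values())
blended = sum(h * u for h, u in shifts.values()) / total_hours
print(f"{blended:.1%}")  # 55.5%
```

The blended result lands in the mid-50s, consistent with the reported long-run range of 49% to 53% once the unaccounted early-Monday hours and sampling differences are considered.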
The best-practice participants in Figure 1 are defined as those organizations that have the highest utilization while maintaining acceptable service levels. Clearly, these data centers have managed to balance effective use of mainframe resources with good customer service. It is also important to note that the study sample shows a number of participants at or close to the level shown for best practice, indicating that these levels of utilization are not unusual for data centers that do a good job of managing mainframe capacity.
Improving Mainframe Utilization
Over the long term, mainframe capacity utilization has remained relatively consistent. Since 1989, average CPU usage over a 30-day period has ranged from 49% to 53%. However, organizations do not plan mainframe capacity based on average demand; capacity must be sufficient to accommodate peak demand. For example, a stock brokerage firm must be able to meet the one-minute peak when the market opens, and the data center cannot control when the market opens. Likewise, credit card processors are configured to meet peak demand on the day after Thanksgiving and the weekend before Christmas. Therefore, there will always be excess capacity on all shifts.
In the study sample, however, the weekday peak usage hour is generally 10:00 a.m. to 11:00 a.m., when average usage reaches 74.5%. The afternoon peak is from 2:00 p.m. to 3:00 p.m., with average utilization reaching 72.0%. Of course, within the peak hour, utilization may spike even higher due to a sudden surge in online processing or the initiation of CPU-intensive batch jobs.
Therefore, the key to improving mainframe utilization is to cut down the peaks in demand by moving work from prime shift to non-prime shift or, better yet, to weekends. Unfortunately, in many organizations the mainframe processing schedule is well established, with dependencies between batch jobs that are not always well understood, and it may not be easy simply to move batch processing between shifts or to different days of the week. On the other hand, some jobs running on prime shift may not be as important as they were years ago when the schedule was first developed. A little investigation may reveal that some jobs do not need to run daily, or that they can at least be relegated to second or third shift.
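The investigation described above can be framed as a simple screening exercise. The sketch below is purely illustrative: the job names, and the idea of tagging each job with a criticality flag and a dependency flag, are assumptions, not part of any survey methodology. Jobs that run on prime shift but are neither business-critical nor entangled in dependency chains are the easiest candidates to move.

```python
from dataclasses import dataclass

@dataclass
class BatchJob:
    name: str
    runs_in_prime_shift: bool
    business_critical: bool    # must complete during prime shift?
    has_downstream_deps: bool  # feeds jobs whose windows are fixed?

def reschedule_candidates(jobs):
    """Prime-shift jobs that are neither critical nor dependency-bound
    are the easiest wins to move to non-prime shifts or weekends."""
    return [j.name for j in jobs
            if j.runs_in_prime_shift
            and not j.business_critical
            and not j.has_downstream_deps]

# Hypothetical schedule fragment for illustration only.
jobs = [
    BatchJob("DAILY_GL_POST", True, True, True),
    BatchJob("WEEKLY_USAGE_RPT", True, False, False),
    BatchJob("NIGHTLY_BACKUP", False, False, False),
]
print(reschedule_candidates(jobs))  # ['WEEKLY_USAGE_RPT']
```

In practice, the flags would come from interviews with job owners and from scheduler dependency data rather than from a hand-built list.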
Finally, data center managers should understand the role that economic incentives can have on mainframe utilization. If the organization has a chargeback system for data center costs, the data center manager should consider chargeback rate adjustments that motivate user departments to move work to non-prime shifts. Since the mainframe is a fixed cost to the data center, underutilized CPU cycles on second or third shift are essentially free. If incentives can be built into the chargeback system to move work to non-prime shifts, demand peaks may be reduced enough to slow the growth of mainframe capacity during the next upgrade cycle.
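One way to express such an incentive is a shift-differentiated chargeback rate. The sketch below is a minimal illustration; the per-CPU-second rate and the discount factors are assumed values for demonstration, not figures from the study.

```python
# Hypothetical tiered chargeback: discount CPU consumption outside prime
# shift so user departments have an incentive to move work. Rates and
# discounts are illustrative assumptions, not survey figures.
RATE_PER_CPU_SECOND = 0.05  # prime-shift rate in dollars (assumed)
SHIFT_DISCOUNT = {"prime": 1.0, "non_prime": 0.5, "weekend": 0.25}

def charge(cpu_seconds, shift):
    """Dollar charge for a job's CPU consumption on a given shift."""
    return cpu_seconds * RATE_PER_CPU_SECOND * SHIFT_DISCOUNT[shift]

# Savings from moving a 10,000-CPU-second job from prime shift to weekend:
saving = charge(10_000, "prime") - charge(10_000, "weekend")
print(f"${saving:.2f}")  # $375.00
```

Because off-shift cycles are essentially free to the data center, even steep discounts cost nothing in real terms while flattening the prime-shift peak.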
Although economic incentives using the chargeback system can help motivate the desired user behavior, such incentives alone are unlikely to be enough. Chargeback formulas are often difficult for end-users to understand, and users are unlikely to take the time to figure out which work can be moved from prime shift. Data center managers should therefore take it upon themselves to investigate which jobs are candidates for rescheduling and present these findings to the responsible users, along with the cost savings that the users will realize by agreeing to such changes. Only then will the data center manager stand a chance of balancing the workload and improving mainframe utilization.
Statistics in this article were provided by Mark Levin, a Partner at Metrics Based Assessments, LLC, from data collected from data center benchmarking studies conducted over the past 12 months. Dozens of data center benchmarks are available in Levin’s book, Best Practices and Benchmarks in the Data Center, which may be purchased from the Computer Economics website.