Managing Web Site Performance

Table of Contents

Executive summary
Introducing a methodology for managing performance
Step 1. Establish performance objectives
Step 2. Monitor and measure the site
Step 3. Analyze and tune components
Step 4. Predict and plan for the future
Summary
Appendix A. Some performance management scenarios
Appendix B. Tools for monitoring performance
References

Authors: High Volume Web Site Team
More information: High Volume Web Sites Zone
Technical contact: Joseph Spano
Management contact: Willy Chiu

Date: April 23, 2001
Status: Version 1.0

PDF version also available.

Abstract

As enterprises implement Web applications in response to the pressures of e-business, managing performance becomes increasingly critical. This paper introduces a methodology for managing performance from one end of the e-business infrastructure to the other. It identifies some "best practices" and tools that help implement the methodology.

Contributors

The High Volume Web Site team is grateful to the major contributors to this article: Willy Chiu, Jerry Cuomo, Ebbe Jalser, Rahul Jain, Frank Jones, W. Nathaniel Mills III, Bill Scully, Joseph Spano, Ruth Willenborg, and Helen Wu.

Special notice

The information contained in this document has not been submitted to any formal IBM® test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Executive summary

As more of your company's business moves to the Internet, your IT organization is becoming a major focal point for such important business measures as revenue and customer satisfaction. You're enjoying unprecedented visibility, and it may not all be positive. If it hasn't already, the performance of your Web site will become critically important.

This paper deals with managing performance. More than ever before, this task requires a perspective that considers components of the infrastructure from end-to-end, from the front-end browsers to the back-end database servers and legacy systems. The end-to-end perspective must be shared not only by you and your operations staff, but also by application developers and Web site designers. Required as well are thoughtful objectives for performance coupled with thorough measurements of performance.

This paper proposes a methodology that you can follow to manage your Web site's performance from end to end. Ideally, you have characterized your workload, selected and applied appropriate scaling techniques, assured that performance is considered in Web page design, and implemented capacity planning technologies. If you have not, you may want to review the white papers related to those phases of the life cycle at the same time as you consider this methodology (see References). Regardless, the methodology presented here can help you define your challenges and implement processes and technologies to meet them.

Our best practices methodology for managing the performance of a high-volume Web site consists of familiar tasks:

  • Establish objectives
  • Monitor and measure the site
  • Analyze and tune components
  • Predict and plan for the future

Some benefits you can expect after implementing the end-to-end methodology include:

  • Proper reporting of quality of service metrics
  • Interactive and historical data on end-to-end performance
  • Rapid identification of the problem source
  • Improved support of business goals
  • Understanding and control of transaction costs
  • World class customer support and satisfaction

The goal of implementing the end-to-end methodology is to align system performance with the underlying business goals. The methodology, coupled with the capacity-on-demand options available from IBM's powerful server family, makes the goal achievable and sets the stage for self-managing IT infrastructures.

Introducing a methodology for managing performance

The IT infrastructures behind most high-volume Web sites (HVWSs) present unique challenges in design, implementation, and management. While actual implementations vary, Figure 1 below shows a typical e-business infrastructure composed of several tiers. Each tier handles a particular set of functions, such as serving content (Web servers such as the IBM HTTP Server), providing integration business logic (Web application servers such as the WebSphere® Application Server), or processing database transactions (transaction and database servers).

Figure 1. Multi-tier infrastructure for e-business

IBM's IT experts have been working with IBM customers to architect and analyze many of the world's largest Web sites. Figure 2 below shows how IBM's HVWS team defines the life cycle of a Web site; it also shows the categories of best practices recommended for one or more phases of the cycle. As it accumulates experience and knowledge, the HVWS team compiles white papers aimed at helping CIOs like you understand and meet the new challenges presented during one or more of the phases.

Figure 2. Life cycle of a Web site

Managing the performance of a high-volume Web site requires a new look at familiar tasks such as setting objectives, measuring performance, and tuning for optimal performance. First, HVWS workloads are different from traditional workloads: they are assumed to be high-volume and growing, serving dynamic data, and processing transactions. Additional characteristics that can affect performance include transaction complexity, data volatility, and security. IBM has determined that HVWS workload patterns fit into one of five classifications: publish/subscribe, online shopping, customer self-service, trading, or business-to-business. Correctly identifying your workload pattern will position you well for making the best use of the practices recommended in this and related papers. For more information about how IBM distinguishes among HVWS workloads, see the Design for Scalability white paper.

Second, those performing the tasks must extend their perspectives to include the e-business infrastructure from end to end. This is most effective when all participants understand the application's business requirements, how their component contributes to the application, and how a transaction flows from one end of the infrastructure to the other. Only then can they work together to optimize application performance and meet key business needs. It's often best when one person is assigned ownership of each application considered critical to the e-business; the application owner ensures that the customer's perspective of application performance -- response time -- remains the primary focus of all participants.

Figure 3 below shows our methodology for managing the performance of a high-volume Web site in the context of a multi-tier infrastructure.

Figure 3. Methodology for managing performance of a HVWS

Our methodology consists of familiar tasks with a new twist, driven by the requirement for an end-to-end perspective, and including tools that are available now to help you get started. See Appendix A for some sample scenarios about managing performance and Appendix B for a summary of tools available from IBM, including Tivoli™, IBM's provider of e-business infrastructure management software.

Step 1. Establish performance objectives

The first task is to establish performance objectives for the business, the application, and operations. Performance objectives for the business include numbers of log-ons and page hits, and browse-to-buy ratios. Objectives for the application include availability, transaction response time, and total cost per transaction. Operations objectives include resource utilization (network, servers, etc.) and the behavior of the components that comprise the e-business application.

You should use the results of an application benchmark test to establish the "norms." Ideally, you acquire the norms from controlled benchmark and stress testing. If this isn't possible, you should closely monitor and measure the deployment of the application and use the results to produce a performance profile ranging from the average to the peak hours and/or days.

Metrics should be established from outside the site (response times, availability, ease of navigation, security, etc.), and from each server tier (CPU, I/O, storage, network utilization, database load, intranet traffic rates, etc.). You need to establish thresholds so that operations can be notified when targets are near, at, or over their limits. See Appendix B for a list of tools available from IBM and Tivoli.
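As an illustration of the threshold idea, the check below classifies a measured value against a warning limit and the objective itself. The class name and limits are invented for this sketch and are not taken from any IBM or Tivoli product:

```java
// Sketch of a threshold check for one metric (names and limits invented,
// not from any IBM or Tivoli product): operations is warned as a target
// is approached and alerted when it is exceeded.
public class ThresholdMonitor {

    public enum Status { OK, NEAR_LIMIT, OVER_LIMIT }

    private final double warningLimit;   // e.g. 75% of the objective
    private final double criticalLimit;  // the agreed objective

    public ThresholdMonitor(double warningLimit, double criticalLimit) {
        this.warningLimit = warningLimit;
        this.criticalLimit = criticalLimit;
    }

    /** Classify a measured value against the configured limits. */
    public Status check(double measured) {
        if (measured > criticalLimit) return Status.OVER_LIMIT;
        if (measured >= warningLimit) return Status.NEAR_LIMIT;
        return Status.OK;
    }
}
```

Operations would typically wire the NEAR_LIMIT status to a notification and reserve alarms for OVER_LIMIT; raising `criticalLimit` ahead of a planned promotion is one form of the threshold update described above.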

Managing against a set of norms is an ongoing process, and expectations and thresholds may need frequent updates. Marketing, for example, may schedule a promotion that will drive site traffic to new highs; plan for such events, or the expected spikes in load will trigger "false alarms" against outdated thresholds.

The team that sets the objectives should include representatives of each area; if that is not possible, the combined objectives should be communicated clearly to all areas, along with the emphasis on what may be considered a new paradigm, that of the end-to-end perspective.

Step 2. Monitor and measure the site

In this step you examine and analyze the performance of the application. You view the application as a transaction flow from the browser through the Web servers and, if applicable, to the backend database and transaction servers, and back to the browser. You are concerned with the entities that make up the system (operating system, firewalls, application servers, Web servers, etc.) only insofar as they support the application.

To understand end-to-end performance, you must understand and document the flow of each transaction type, for example, search, browse, buy, trade, etc. That done, you can use software that monitors the actual flow and alerts operations when any metric you specify exceeds the norms established in Step 1. For example, the alert informs you that your target page response time has been exceeded. You know that something in the system has degraded, but where is the slowdown occurring? How do you find the culprit?

You could instrument your application to record information at various points in the transaction flow. An open standard, Application Response Measurement (ARM), defines an API and library for these records. In addition to Tivoli, several vendors have tools to display and analyze this data. We have used exactly this type of instrumentation to manage various high-volume Web sites. Note, however, that instrumentation adds modifications and overhead to the application.
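A minimal sketch of this style of instrumentation might look like the following. The class and method names are invented and the real ARM API differs, but the idea is the same: each tier records timed segments under a shared correlator so the transaction can be reassembled end to end.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative ARM-style instrumentation; the real ARM API differs, and
// these names are invented for the sketch. Each tier records a timed
// segment under a shared correlator so operations can reassemble the
// transaction end to end.
public class TransactionTrace {

    public static class Segment {
        final String tier;
        final long startNanos;
        long stopNanos;

        Segment(String tier, long startNanos) {
            this.tier = tier;
            this.startNanos = startNanos;
        }

        public long elapsedMillis() {
            return (stopNanos - startNanos) / 1_000_000;
        }
    }

    private final String correlator;               // shared transaction ID
    private final List<Segment> segments = new ArrayList<>();

    public TransactionTrace(String correlator) {
        this.correlator = correlator;
    }

    /** Record entry into a tier (e.g. "servlet", "ejb", "jdbc"). */
    public Segment start(String tier) {
        Segment s = new Segment(tier, System.nanoTime());
        segments.add(s);
        return s;
    }

    /** Record exit from the tier. */
    public void stop(Segment s) {
        s.stopNanos = System.nanoTime();
    }

    public String correlator() { return correlator; }
    public List<Segment> segments() { return segments; }
}
```

The overhead mentioned above is visible even in this sketch: every instrumented hop adds a timestamp, an allocation, and a list append to the transaction path.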

Instead of recording information on every transaction, you can take averages at several points. Information about averages is nearly as good as full instrumentation, but comes at a lower cost and uses existing and transparent tools. Tools such as the WebSphere Resource Analyzer can be used to extract these averages through the resource management interface.
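The incremental mean below illustrates why averaging is cheap: each sample point keeps one running value and a count instead of a record per transaction. This is a sketch, not any product's implementation:

```java
// Incremental (running) mean: one value and a count per checkpoint,
// instead of a record for every transaction. A sketch, not any
// product's implementation.
public class RunningAverage {

    private long count = 0;
    private double mean = 0.0;

    /** Fold one sample into the average without storing it. */
    public void record(double value) {
        count++;
        mean += (value - mean) / count;
    }

    public double mean() { return mean; }

    public long count() { return count; }
}
```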

Other available tools help you:

  • Report on the quality of customer experiences
  • Analyze the Web site to verify links and enforce content policy
  • Aggregate Web data into an overall business view
  • Correlate log and performance data
  • Monitor availability
  • Use online analytic processing (OLAP) techniques to provide decision support

WebSphere Application Server provides a comprehensive set of performance metrics. For servlets and beans, these include: number of requests, requests per second, execution time, and errors. Java™ runtime metrics include active memory, available memory, threads active, and threads idle. Database connection metrics are also included: connection times, active database connections, and users waiting for database access.

It's best to continuously monitor site availability from outside to ensure that transactions are executing successfully and within criteria. Examine the site navigation periodically to validate the links and content. Resource monitors will need to roll up their data into an aggregate application view. Web logs have to be analyzed and correlated with other resource data.

In a recent customer engagement, IBM's HVWS team investigated a problem of slow customer response time. The Web server was running Netscape Enterprise Server and the WebSphere Application Server for dynamic content generation using Java servlets. A middle tier used Enterprise JavaBeans™ (EJBs) to process transactions and then a JDBC call to the database tier. Using the external monitor for response times, they found that during peak hours, response times for consumer transactions increased from fourteen seconds to twenty seconds. Using the WebSphere Resource Analyzer and some DB2® tools, they collected the internal elapsed times for the application components. Figure 4 below shows the analysis of how each component contributed to total response time. Comparing the baseline time with the peak times, it's easy to see that the slowdown occurs in the servlet tier. The WebSphere Resource Analyzer showed that the application server was running out of worker threads under peak load. Allocating additional worker threads eliminated the slowdown.

Figure 4. Application response times -- baseline vs. peak
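The kind of analysis behind this figure can be sketched as simple subtraction: each tier's measured elapsed time includes the tiers it calls, so the differences isolate each tier's own contribution. The tier names and the assumption of strictly nested calls are ours:

```java
// Decomposing measured elapsed times into per-tier contributions by
// subtraction. Assumes strictly nested calls: the external time includes
// the servlet, the servlet includes the EJB, and the EJB includes the
// database. All numbers used with this class are invented.
public class TierBreakdown {

    /** Returns { network+other, servlet-only, ejb-only, database } times. */
    public static double[] contributions(double external, double servlet,
                                         double ejb, double db) {
        double networkAndOther = external - servlet; // outside the app server
        double servletOnly = servlet - ejb;
        double ejbOnly = ejb - db;
        return new double[] { networkAndOther, servletOnly, ejbOnly, db };
    }
}
```

Comparing the baseline breakdown with the peak breakdown then points directly at the tier whose own contribution grew.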

Step 3. Analyze and tune components

So far, the methodology has provided objectives, measurements, and application insights. Thus it has allowed you to understand, monitor, and report on end-to-end performance. It has also allowed rapid problem determination. When performance issues come up, you can quickly investigate the application and isolate an individual component. Appendix A contains scenarios that are based on real events and demonstrate how components are analyzed and tuned.

In this step, you analyze and tune specific components. One common question: Does the application scale gracefully? In general, scalability refers to a component's ability to adapt readily to a greater or lesser intensity of use, volume, or demand while still meeting business objectives. You want to assure that your application scales smoothly wherever deployed without experiencing thrashing, bottlenecks, or response time difficulties. You need to examine how your application uses resources: you're interested in CPU consumption per transaction and I/O and network overhead. See also Design for Scalability, our HVWS paper that recommends which scaling techniques should be applied to specific components.

Another important question: Is the application meeting economic criteria? Now that resource consumption is understood, you know the "cost per transaction" and you can assess whether the application is using resources as projected by the performance objectives. You want to consider the best practices pertaining to scalability and page design and learn what's needed to optimize how the resources in each tier are used. The application owner uses this to work with development, operations, and design to control and/or improve the efficiency of the application. In this way costs are held on budget.
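A first-order cost-per-transaction calculation is just the apportioning described above; the figures and the monthly granularity are illustrative:

```java
// First-order cost per transaction: apportion each tier's monthly cost
// across the transactions served that month. Figures are illustrative.
public class TransactionCost {

    public static double costPerTransaction(double[] tierMonthlyCosts,
                                            long monthlyTransactions) {
        double total = 0.0;
        for (double cost : tierMonthlyCosts) {
            total += cost;
        }
        return total / monthlyTransactions;
    }
}
```

Tracking this number against the performance objectives shows whether growth in traffic is being matched by efficiency, or simply by spending.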

We used the methodology recently to benchmark a customer's application and found that throughput seemed to be stalled in the database server. Furthermore, the database was consuming more resources than was expected based on the historical archived data. The DBA ran the analysis tools and quickly determined that one of the application SQL statements was forcing a full table scan (very expensive, very bad). This hadn't had any measurable effect during the initial deployment of the application with a limited number of customers. However, as the number of customers grew, the size of the database increased significantly. The DBA was able to define an alternate index into the table, test the change, and resolve the problem within a short time. It was the methodology that pointed us quickly to the database tier and allowed us to determine the cause of the problem and solve it quickly.

The all-important question: Can response time be improved? Using the component response times, the application owner works with operations to tune and allocate resources to ensure good response times. For example, the Web servers may need more memory to allow a larger cache and reduce I/O times.

In one recent engagement, the customer help desk was flooded with complaints of slow or nonexistent performance. The senior management was concerned that the system seemed to be failing and IT seemed unable to tell them why. Using our methodology, we accessed the site with the WebSphere Studio Page Detailer to analyze page response times. Page Detailer showed us that response times were long due to excessive delays in obtaining TCP/IP socket connections. We investigated the intranet, firewalls, and site connectivity. It turned out that when the site went online, the firewalls had been set up to allow a fixed number of concurrent socket connections. As traffic increased (the site was succeeding), more and more customers contended for the same number of connections. This was easily corrected. In this case, as in many others, the solution seems obvious when you isolate the fault to an individual component. It is the methodology that allows us to do so.

Figure 5 below shows tools and technologies available to monitor and analyze Web site components. You can see, for example, that you can monitor response time proactively using WebSphere Studio Page Detailer and Tivoli Web Services Manager (TWSM). See Appendix B for more detail about some available tools.

Figure 5. Tools available to monitor and analyze Web site components

Step 4. Predict and plan for the future

Sadly, none of us can predict the future. However, a growing body of valuable information and useful tools can help you plan proactively, keeping your Web site serving customers as they expect to be served and avoiding the problems that plague busy sites.

Figure 6 below shows one week of page hits for one of IBM's retail customers. All of the days have essentially the same pattern with predictable peaks and valleys. This site showed no "weekend effect," which may not be true for its "brick and mortar" store, nor for other retailers. This kind of information enables site personnel to prepare for peaks and use the valleys for other operations when needed.

Figure 6. Retailer usage pattern over one week

While a typical week, as shown in Figure 6 above, can be counted on, a retailer also has to plan for seasonal rushes when peaks can easily exceed those of a typical week. Figure 7 below shows a retail site over six months, including the annual holiday period when the number of hits tripled. During this kind of load, the site must be at its best, if possible free of other operations.

Figure 7. Retail customer seasonal peaks

Retailers aren't the only e-businesses facing seasonal demands. Figure 8 below shows how the number of hits for a bank grew over the months approaching tax time. Clearly, financial sites have their own version of weekly and seasonal peaks and valleys.

Figure 8. Hit rates over six months for a financial site

These examples demonstrate that it is possible to monitor your site and detect trends from which you can plan for the future and meet your business objectives. Your site will have peaks and valleys. You can measure them. You can reasonably predict when your peaks will occur and you can position the resources you need to handle the demand and serve your customers (and bring them back!).

Your trend data should suggest whether and when additional site components are needed. Powerful new servers have options, as well, that can generate capacity based on predicted workload. IBM can help you clarify which components match your particular requirements and objectives. See the Planning for Growth paper to learn about our capacity planning methodology and the HVWS Simulator for WebSphere.
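As a sketch of turning trend data into a projection, a least-squares line fit over weekly peaks gives a simple forward estimate. Real capacity planning tools, such as the HVWS Simulator mentioned above, do far more than this:

```java
// Least-squares line fit over weekly peak hits, then extrapolate.
// Week indices and figures are invented; real capacity planning
// tools do far more than a straight-line projection.
public class TrendProjection {

    /** Projects the peak for 'weeksAhead' weeks past the last sample. */
    public static double project(double[] weeklyPeaks, int weeksAhead) {
        int n = weeklyPeaks.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += i;
            sumY += weeklyPeaks[i];
            sumXY += i * weeklyPeaks[i];
            sumXX += (double) i * i;
        }
        double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double intercept = (sumY - slope * sumX) / n;
        return intercept + slope * (n - 1 + weeksAhead);
    }
}
```

A straight line is only a baseline; seasonal peaks like those in Figures 7 and 8 have to be layered on top of it.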

Summary

Managing the performance of a high-volume Web site is challenging, exciting, and possible. Following a methodology such as the one presented in this paper will help guide you and your team toward tasks they can understand and goals they can achieve. The success of your company's e-business depends on the tools and techniques your IT team chooses. There are many available, and more are coming, as well as capacity-on-demand options from IBM's powerful server family that set the stage for self-managing IT infrastructures. As always, their use succeeds best in the context of a process.

The "best practices" methodology for managing a high-volume Web site includes developing an end-to-end perspective of the site and following these familiar steps:

  1. Establish objectives
  2. Monitor and measure the site
  3. Analyze and tune components
  4. Predict and plan for the future

Using this methodology, your IT team can help your company meet the revenue and customer satisfaction objectives of its e-business and enjoy improved IT performance management benefits, such as:

  • Proper reporting of quality of service metrics
  • Interactive and historical data on end-to-end performance
  • Rapid identification of the problem source
  • Improved support of business goals
  • Understanding and control of transaction costs
  • World class customer support and satisfaction

IBM's experience with high-volume Web sites has yielded valuable information and revealed the methodologies and tools needed for a successful e-business site. The HVWS team can help you be on your way to just such a successful site.

Appendix A. Some performance management scenarios

This appendix contains three brief scenarios that are based on real events and demonstrate the principles of our methodology for managing performance.

CIO

When reviewing his schedule for the upcoming week, the CIO notes a midweek meeting with the marketing department, a Tuesday working lunch with his colleague from Finance, and the monthly CEO staff meeting on Thursday. He works with his assistant to be sure he takes appropriate information to each meeting.

On Tuesday he will take the latest reports showing costs, projected capacity over the next year, and likely capital spending. The cost chart in Figure 9 below shows, at a high level, the cost per transaction and the cost breakdown by tier. The capacity chart in Figure 10 below illustrates the expected growth in the number of users and transactions. These expectations were jointly reached with the marketing group. The CIO will show his Finance colleague how the increase in workload drives a needed increase in capacity and, thus, capital spending for next year. He points out that operations is working closely with application development to examine costs. They have identified where improvements can be made in the application and have projected the cost savings in terms of cost per transaction and reduced capital spending. He uses the cost savings chart in Figure 11 below to show how the proposed improvements will reduce the cost per transaction more effectively than the in-plan improvements.

Figure 9. Average cost per Web transaction

Figure 10. Current and projected system load

Figure 11. Cost savings with proposed enhancement

The CIO asks Finance to support him in prioritizing these changes in the development plan over other candidate items from other departments.

At the marketing meeting he brings the charts that report system availability, response time, transaction rates, and an analysis of consumer navigation experience. Marketing is concerned about an upcoming promotion. They expect that it will drive traffic to new highs and worry that the system will slow down. Having anticipated this line of discussion, the CIO brings out charts showing the current peak demand on the system and the amount of available overhead. He is able to demonstrate that the system has the headroom to handle up to a 30% increase in workload while still maintaining current response times during peak hours. His colleagues in marketing are pleased to see that IT has anticipated the effects of the ad campaign and are satisfied that the system will be able to handle the burst of traffic.
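A headroom claim of this kind rests on simple arithmetic: how much load growth fits before peak utilization reaches an agreed ceiling. A sketch, with invented utilization figures:

```java
// Headroom: fractional load growth the site can absorb at peak before
// utilization reaches the agreed ceiling. Utilization figures are invented.
public class Headroom {

    /** E.g. 0.60 peak utilization against a 0.78 ceiling gives 0.30 (30%). */
    public static double headroom(double peakUtilization, double utilizationCeiling) {
        return utilizationCeiling / peakUtilization - 1.0;
    }
}
```

The ceiling matters: running a tier at 100% utilization destroys response times long before it destroys throughput, so the ceiling is set below saturation.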

Finally, our CIO prepares for the CEO monthly staff meeting. Each major function is expected to present a short highlight report on the current and upcoming months. The CIO will show charts that illustrate system availability, response time and costs vs targets. He will then discuss upcoming events, like the marketing campaign, and his plans to support them. He expects the presentation to go well because he is confident that the system is providing him the proper information to support his role.

Content problems

Last week, marketing, sales, development, and IT proudly deployed a new application that not only significantly enhanced the function of the e-business site, but also dramatically improved the look and feel of the site for the consumer.

After just a few days, however, IT noted that the Tivoli Web Services Manager was producing alerts that indicated that nearly all pages were slowing down and response time was approaching the maximum allowed by the service level agreement. Using the Tivoli Web Services Analyzer to examine site traffic patterns, IT observed that the site slowed down in proportion to the number of new visitors and customers. All pages were affected, indicating the problem was systemic.

IT contacted Development to review the new content. Development remained puzzled, as they had tested the new pages thoroughly before migrating them into production.

The application owner convened the performance team. One member was detailed to examine page performance using the WebSphere Studio Page Detailer. He reported, "Page Detailer shows that socket connect and SSL connect times are fine. This would seem to absolve the network, firewalls, routers, and TCP/IP layers. It also shows that transactions are processing well within criteria, so there doesn't seem to be a problem with that part of the system. However, Page Detailer does show that static content (such as GIFs) slowed down dramatically after the new application was deployed."

Armed with this information the team quickly identified the Web server as the likely problem area since it is responsible for serving up static content. As this shop was using Netscape Enterprise Server they asked for a PerfDump to be executed. PerfDump reports on the internal performance of the server. Within minutes they were able to examine the output and determine that the cache hit ratio for static content had degraded. Clearly the addition of the new application had added much new static content to be served and the Web server cache was now too small to efficiently manage the new total. A quick look at the operating system input/output statistics using VMSTAT confirmed that real I/O had jumped dramatically within a day or so of the new application roll out.
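The cache arithmetic in this scenario can be sketched directly. The numbers used with it are invented, but they show how a lower hit ratio converts request volume into real I/O:

```java
// Cache arithmetic for the scenario above (numbers invented): a lower hit
// ratio converts the same request rate into more real I/O on the server.
public class CacheStats {

    public static double hitRatio(long hits, long misses) {
        return (double) hits / (hits + misses);
    }

    /** Cache misses per second, i.e. requests that become real I/O. */
    public static double missRate(double requestsPerSecond, double hitRatio) {
        return requestsPerSecond * (1.0 - hitRatio);
    }
}
```

If the new content dropped the hit ratio from, say, 0.9 to 0.5 at the same request rate, real I/O would increase fivefold, which is consistent with the jump VMSTAT showed.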

IT was able to modify the cache size setting in the Web server and deploy the change at the next scheduled maintenance period.

Bottleneck

The e-business site was launched last month, just in time for the TV ad campaign. To date the site is successful. Traffic is growing as predicted, sales are strong, and complaints have been quite low. However, in the past few days, the application seems to have hit a bottleneck. The number of transactions has plateaued, while the response time per page has jumped dramatically.

IT employs the Tivoli Web Services Manager to examine the site. They determine that only transaction pages have slowed down; the number of attempted transactions (sales, etc.) continues to rise, while the number successfully processed is stagnant. Customers are complaining to the help desk and by e-mail about the slow response times. Analysis of the access logs produced by Tivoli Web Services Analyzer (TWSA) confirms that many customers are leaving the site without waiting for their business to complete. Later they complain about not knowing whether their business was successfully processed. A transaction in doubt is the worst possible customer problem, one that can destroy confidence in the site and the enterprise.

It's apparent there is a problem in the transaction processing, but IT still checks out the Web server to eliminate it as a component of the problem. Next the team extracts the overall response times for transactions (from the Tivoli Web Management Solution) and uses the WebSphere Resource Analyzer to obtain the average elapsed times for the servlet and bean during the slowdown. Quick subtraction shows that the increased load extended the execution time of the bean. In fact, when a specific transaction rate is reached, the application can't process any more transactions in the bean layer. Additional requests exacerbate the problem: the transaction rate remains fixed, but response times grow nonlinearly as incoming transactions queue up waiting for the bean.

Resource Analyzer, run against the bean engines, also showed that the application server threads were busy processing requests, while VMSTAT showed the CPU was less than 50% busy with no I/O or page wait. Believing the bottleneck was found, the team recommended that additional threads be assigned to the pool so that the bean could process more concurrent requests.

Before deploying such a change, the team runs Mercury Interactive LoadRunner® to create an artificial load on the test system. They then add threads to the pool, expecting the bottleneck to disappear. They rerun the test with the new setting, but the bottleneck still occurs at nearly the same transaction rate. Resource Analyzer confirms that all the threads, including the new total, are still in use while response time continues to rise.

Now they know that the thread starvation is a symptom of the problem but not the cause. The next step is to re-create the problem again. This time they take a dump of the Java Virtual Machine and examine the Java threads for a pattern. They see that all threads are blocked on the same method in their bean. They examine the source code and discover that this method is synchronized (that is, under lock control). A developer investigates and reports that the code need be synchronized only while it updates a shared object, but that the programmer synchronized the entire long running method. This causes all transactions to block, waiting for this common routine.
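The defect can be illustrated with a small example (class and method names are invented): synchronizing the whole long-running method serializes every caller, while synchronizing only the shared-state update lets the slow, independent work run concurrently.

```java
// Invented names; illustrates the defect found in the JVM dump. The
// serialized version holds the monitor for the whole long-running method,
// so concurrent transactions queue; the concurrent version locks only the
// shared-state update.
public class OrderProcessor {

    private final Object lock = new Object();
    private long ordersProcessed = 0;

    // BAD: 'this' is locked for the entire method, including the slow,
    // independent work -- every caller waits here.
    public synchronized void processOrderSerialized() {
        doSlowIndependentWork();
        recordOrder();
    }

    // GOOD: the slow work runs concurrently; only the counter update
    // is serialized.
    public void processOrderConcurrent() {
        doSlowIndependentWork();
        recordOrder();
    }

    private void recordOrder() {
        synchronized (lock) {
            ordersProcessed++;
        }
    }

    public long ordersProcessed() {
        synchronized (lock) {
            return ordersProcessed;
        }
    }

    private void doSlowIndependentWork() {
        try {
            Thread.sleep(5);  // stand-in for work that needs no shared state
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

This also explains why adding threads didn't help: with the serialized version, throughput is bounded by the lock, not the pool, which matches the symptom of all threads blocked on one method while the CPU sat half idle.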

The programmer codes a fix, and the test team reruns the test. With the change made, the test system can fully utilize the CPU. The transaction rate is no longer constrained; the bottleneck is broken. The test team schedules a regression test for that evening and the next day. Meanwhile, IT has configured an additional server to handle the production load pending availability of the fix. Testing is complete by the weekend. The fix is deployed into production during the Sunday morning maintenance period. By Monday evening, production monitoring confirms that the bottleneck is resolved, transaction rates are up, and response time is within criteria.

Appendix B. Tools for monitoring performance

The appendix introduces some of the tools available to monitor Web site performance.

WebSphere Application Server (WAS) Resource Analyzer

The WAS Resource Analyzer can be used with operating system tools such as vmstat to monitor a number of performance measures related to the application server. These metrics are classified into Enterprise JavaBeans (EJBs), ORB thread pool, system runtime resources, database connection pool, and servlets. WAS Resource Analyzer is available for all WAS platforms.

Resource Analyzer on EJB

The Resource Analyzer monitors execution of your EJBs at three levels: server, EJB container, and individual EJB. The table below summarizes the statistics provided.


This paper deals with managing performance. More than ever before, this task requires a perspective that considers components of the infrastructure from end-to-end, from the front-end browsers to the back-end database servers and legacy systems. The end-to-end perspective must be shared not only by you and your operations staff, but also by application developers and Web site designers. Required as well are thoughtful objectives for performance coupled with thorough measurements of performance.

This paper proposes a methodology that you can follow to manage your Web site's performance from end to end. Ideally, you have characterized your workload, selected and applied appropriate scaling techniques, assured that performance is considered in Web page design, and implemented capacity planning technologies. If you have not, you may want to review the white papers related to those phases of the life cycle at the same time as you consider this methodology (see References). Regardless, the methodology presented here can help you define your challenges and implement processes and technologies to meet them.

Our best practices methodology for managing the performance of a high-volume Web site consists of familiar tasks:

  • Establish objectives
  • Monitor and measure the site
  • Analyze and tune components
  • Predict and plan for the future

Some benefits you can expect after implementing the end-to-end methodology include:

  • Proper reporting of quality of service metrics
  • Interactive and historical data on end-to-end performance
  • Rapid identification of the problem source
  • Improved support of business goals
  • Understanding and control of transaction costs
  • World class customer support and satisfaction

The goal of implementing the end-to-end methodology is to align system performance with the underlying business goals. The methodology, coupled with implementation of the capacity-on-demand options available from IBM's powerful server family, makes the goal achievable and sets the stage for self-managing IT infrastructures.

Introducing a methodology for managing performance

The IT infrastructures that support most high-volume Web sites (HVWSs) present unique challenges in design, implementation, and management. While actual implementations vary, Figure 1 below shows a typical e-business infrastructure composed of several tiers. Each tier handles a particular set of functions, such as serving content (Web servers such as the IBM HTTP Server), providing integration business logic (Web application servers such as the WebSphere® Application Server), or processing database transactions (transaction and database servers).

Figure 1. Multi-tier infrastructure for e-business
Multi-tier infrastructure for e-business

IBM's IT experts have been working with IBM customers to architect and analyze many of the world's largest Web sites. Figure 2 below shows how IBM's HVWS team defines the life cycle of a Web site; it also shows the categories of best practices recommended for one or more phases of the cycle. As it accumulates experience and knowledge, the HVWS team compiles white papers aimed at helping CIOs like you understand and meet the new challenges presented during one or more of the phases.

Figure 2. Life cycle of a Web site
Life cycle of a Web site

Managing the performance of a high-volume Web site requires a new look at familiar tasks such as setting objectives, measuring performance, and tuning for optimal performance. First, HVWS workloads are different from traditional workloads. HVWS workloads are assumed to be high-volume and growing, serving dynamic data, and processing transactions. Additional characteristics that can affect performance include transaction complexity, data volatility, security, and others. IBM has determined that HVWS workload patterns fit into one of five classifications: publish/subscribe, online shopping, customer self-service, trading, or business-to-business. Correctly identifying your workload pattern will position you well for making the best use of the practices recommended in this and related papers. For more information about how IBM distinguishes among HVWS workloads, see the Design for Scalability white paper.

Second, those performing the tasks must extend their perspectives to include the e-business infrastructure from end to end. This is most effective when all participants understand the application's business requirements, how their component contributes to the application, and how a transaction flows from one end of the infrastructure to the other. Only then can they work together to optimize application performance and meet key business needs. It's often best when one person is assigned ownership of each application considered critical to the e-business; the application owner assures that the customer's perspective of application performance -- response time -- remains the primary focus of all participants.

This paper proposes a methodology that you can follow to manage your Web site's performance from end to end. Ideally, you have characterized your workload, selected and applied appropriate scaling techniques, assured that performance is considered in Web page design, and implemented capacity planning technologies. If you have not, you may want to review the white papers related to those phases of the life cycle at the same time as you consider this methodology. Regardless, the methodology presented here can help you define your challenges and implement processes and technologies to meet them.

Figure 3 below shows our methodology for managing the performance of a high-volume Web site in the context of a multi-tier infrastructure.

Figure 3. Methodology for managing performance of a HVWS
Methodology for managing performance of a HVWS

Our methodology consists of familiar tasks with a new twist, driven by the requirement for an end-to-end perspective, and including tools that are available now to help you get started. See Appendix A for some sample scenarios about managing performance and Appendix B for a summary of tools available from IBM, including Tivoli™, IBM's provider of e-business infrastructure management software.

Step 1. Establish performance objectives

The first task is to establish performance objectives for the business, the application, and operations. Performance objectives for the business include numbers of log-ons and page hits, and browse-to-buy ratios. Objectives for the application include availability, transaction response time, and total cost per transaction. Operations objectives include resource utilization (network, servers, etc.) and the behavior of the components that comprise the e-business application.

You should use the results of an application benchmark test to establish the "norms." Ideally, you acquire the norms from controlled benchmark and stress testing. If that isn't possible, closely monitor and measure the deployment of the application and use the results to produce a performance profile ranging from the average to the peak hours and days.

Metrics should be established from outside the site (response times, availability, ease of navigation, security, etc.), and from each server tier (CPU, I/O, storage, network utilization, database load, intranet traffic rates, etc.). You need to establish thresholds so that operations can be notified when targets are near, at, or over their limits. See Appendix B for a list of tools available from IBM and Tivoli.
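The threshold idea can be sketched in a few lines of Java. The limits and the 10% "near" band below are invented for illustration; they are not defaults of any IBM or Tivoli tool:

```java
// Illustrative threshold check: classify a measured metric against its
// established target so operations can be notified when a value is
// near, at, or over its limit. The 10% "near" band is a made-up policy.
public class ThresholdCheck {
    enum Status { OK, NEAR_LIMIT, AT_LIMIT, OVER_LIMIT }

    static Status classify(double measured, double target) {
        if (measured > target) return Status.OVER_LIMIT;
        if (measured == target) return Status.AT_LIMIT;
        if (measured >= 0.9 * target) return Status.NEAR_LIMIT;
        return Status.OK;
    }

    public static void main(String[] args) {
        // target page response time: 10 seconds (hypothetical norm)
        System.out.println(classify(7.5, 10.0));  // OK
        System.out.println(classify(9.5, 10.0));  // NEAR_LIMIT
        System.out.println(classify(12.0, 10.0)); // OVER_LIMIT
    }
}
```

A monitor evaluating each incoming sample this way can raise an alert only on a status change, rather than flooding operations on every measurement.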

Managing against a set of norms is an ongoing process, and frequent updates to expectations and thresholds may be required. Marketing may schedule a promotion that will drive site traffic to new highs; plan for such events to avoid the "false alarms" that can occur if you haven't updated your thresholds for the expected spikes in load.

The team that sets the objectives should include representatives of each area; if that is not possible, the combined objectives should be communicated clearly to all areas, along with the emphasis on what may be considered a new paradigm, that of the end-to-end perspective.

Step 2. Monitor and measure the site

In this step you examine and analyze the performance of the application. You view the application as a transaction flow from the browser through the Web servers and, if applicable, to the back-end database and transaction servers, and back to the browser. You are concerned with the entities that make up the system (operating system, firewalls, application servers, Web servers, etc.) only insofar as they support the application.

To understand end-to-end performance, you must understand and document the flow of each transaction type, for example, search, browse, buy, trade, etc. That done, you can use software that monitors the actual flow and alerts operations when any metric you specify exceeds the norms established in Step 1. For example, the alert informs you that your target page response time has been exceeded. You know that something in the system has degraded, but where is the slowdown occurring? How do you find the culprit?

You could instrument your application to record information at various points in the transaction flow. Application Resource Management (ARM), an open standard, defines an API and library for these records. In addition to Tivoli, several vendors have tools to display and analyze this data. We have used exactly this type of instrumentation to manage various high-volume Web sites. Note, however, that instrumentation adds modifications and overhead to the application.

Instead of recording information on every transaction, you can take averages at several points. Information about averages is nearly as good as full instrumentation, but comes at a lower cost and uses existing and transparent tools. Tools such as the WebSphere Resource Analyzer can be used to extract these averages through the resource management interface.
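A minimal sketch of the averaging approach in Java (a hypothetical monitor, not the Resource Analyzer's actual mechanism): keep a rolling window of response-time samples and report the mean, rather than recording every transaction.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative rolling average over the last N response-time samples --
// the kind of summary a resource monitor reports instead of
// per-transaction records. The class and window size are invented.
public class RollingAverage {
    private final int window;
    private final Deque<Double> samples = new ArrayDeque<>();
    private double sum;

    public RollingAverage(int window) { this.window = window; }

    public void add(double value) {
        samples.addLast(value);
        sum += value;
        if (samples.size() > window) sum -= samples.removeFirst();
    }

    public double average() {
        return samples.isEmpty() ? 0.0 : sum / samples.size();
    }

    public static void main(String[] args) {
        RollingAverage responseTime = new RollingAverage(3);
        responseTime.add(2.0);
        responseTime.add(4.0);
        responseTime.add(6.0);
        System.out.println(responseTime.average()); // 4.0
        responseTime.add(8.0); // window slides; the 2.0 sample drops out
        System.out.println(responseTime.average()); // 6.0
    }
}
```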

Other available tools let you:

  • Report on the quality of customer experiences
  • Analyze the Web site to verify links and enforce content policy
  • Aggregate Web data into an overall business view
  • Correlate log and performance data
  • Monitor availability
  • Use online analytic processing (OLAP) techniques to provide decision support

WebSphere Application Server provides a comprehensive set of performance metrics. For servlets and beans, these include number of requests, requests per second, execution time, and errors. Java™ metrics reported include active memory, available memory, active threads, idle threads, etc. Database connection metrics include connection times, active database connections, and the number of users waiting for database access.

It's best to continuously monitor site availability from the outside to ensure that transactions are executing successfully and within criteria. Examine the site navigation periodically to validate the links and content. Resource monitors will need to roll up their data into an aggregate application view, and Web logs have to be analyzed and correlated with other resource data.

In a recent customer engagement, IBM's HVWS team investigated a problem of slow customer response time. The Web server was running Netscape Enterprise Server and the WebSphere Application Server for dynamic content generation using Java servlets. A middle tier used Enterprise JavaBeans™ (EJBs) to process transactions and then a JDBC call to the database tier. Using the external monitor for response times, they found that during peak hours, response times for consumer transactions increased from fourteen seconds to twenty seconds. Using the WebSphere Resource Analyzer and some DB2® tools, they collected the internal elapsed times for the application components. Figure 4 below shows the analysis of how each component contributed to total response time. Comparing the baseline time with the peak times, it's easy to see that the slowdown occurs in the servlet tier. The WebSphere Resource Analyzer showed that the application server was running out of worker threads under peak load. Allocating additional worker threads eliminated the slowdown.

Figure 4. Application response times -- baseline vs. peak
Application response times -- baseline vs. peak
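The arithmetic behind a breakdown like this is simple subtraction of nested elapsed times: the monitor outside the site measures the whole transaction, each tier measures its own elapsed time, and the difference between adjacent measurements is the time spent in the outer component. A sketch with invented numbers:

```java
// Illustrative decomposition of an end-to-end response time into
// per-tier times by subtracting nested elapsed times.
// All figures are hypothetical, in seconds.
public class TierBreakdown {
    // time spent in the outer component alone, excluding the inner one
    static double componentTime(double outerElapsed, double innerElapsed) {
        return outerElapsed - innerElapsed;
    }

    public static void main(String[] args) {
        double total   = 20.0; // external monitor: full page response
        double servlet = 15.0; // measured inside the servlet
        double bean    = 4.0;  // measured inside the EJB
        double db      = 2.5;  // measured at the database

        System.out.println("network + web server: " + componentTime(total, servlet)); // 5.0
        System.out.println("servlet tier:         " + componentTime(servlet, bean));  // 11.0
        System.out.println("bean tier:            " + componentTime(bean, db));       // 1.5
        System.out.println("database tier:        " + db);                            // 2.5
    }
}
```

With a baseline set of the same four numbers, the tier whose component time grew the most under peak load is the one to investigate, as in the servlet-thread example above.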

Step 3. Analyze and tune components

So far, the methodology has provided objectives, measurements, and application insights. Thus it has allowed you to understand, monitor, and report on end-to-end performance. It has also allowed rapid problem determination. When performance issues come up, you can quickly investigate the application and isolate an individual component. Appendix A contains scenarios that are based on real events and demonstrate how components are analyzed and tuned.

In this step, you analyze and tune specific components. One common question: Does the application scale gracefully? In general, scalability refers to a component's ability to adapt readily to a greater or lesser intensity of use, volume, or demand while still meeting business objectives. You want to assure that your application scales smoothly wherever deployed without experiencing thrashing, bottlenecks, or response time difficulties. You need to examine how your application uses resources: you're interested in CPU consumption per transaction and I/O and network overhead. See also Design for Scalability, our HVWS paper that recommends which scaling techniques should be applied to specific components.

Another important question: Is the application meeting economic criteria? Now that resource consumption is understood, you know the "cost per transaction" and you can assess whether the application is using resources as projected by the performance objectives. You want to consider the best practices pertaining to scalability and page design and learn what's needed to optimize how the resources in each tier are used. The application owner uses this information to work with development, operations, and design to control and improve the efficiency of the application. In this way, costs are kept on budget.
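A back-of-the-envelope sketch of the cost-per-transaction calculation, with all figures invented for illustration:

```java
// Illustrative cost-per-transaction calculation: monthly resource cost
// divided by transactions served, broken out by tier. All figures
// are invented examples, not benchmarks.
public class TransactionCost {
    static double costPerTransaction(double monthlyCostDollars, long monthlyTransactions) {
        return monthlyCostDollars / monthlyTransactions;
    }

    public static void main(String[] args) {
        long monthlyTransactions = 3_000_000L;
        double webTier = costPerTransaction(30_000, monthlyTransactions);
        double appTier = costPerTransaction(45_000, monthlyTransactions);
        double dbTier  = costPerTransaction(60_000, monthlyTransactions);
        System.out.printf("web %.3f, app %.3f, db %.3f, total %.3f%n",
                webTier, appTier, dbTier, webTier + appTier + dbTier);
    }
}
```

Tracking this number per tier over time shows whether a proposed application change actually moves the cost in the right direction.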

We used the methodology recently to benchmark a customer's application and found that throughput seemed to be stalled in the database server. Furthermore, the database was consuming more resources than was expected based on the historical archived data. The DBA ran the analysis tools and quickly determined that one of the application SQL statements was forcing a full table scan (very expensive, very bad). This hadn't had any measurable effect during the initial deployment of the application with a limited number of customers. However, as the number of customers grew, the size of the database increased significantly. The DBA was able to define an alternate index into the table, test the change, and resolve the problem within a short time. It was the methodology that pointed us quickly to the database tier and allowed us to determine the cause of the problem and solve it quickly.

The all-important question: Can response time be improved? Using the component response times, the application owner works with operations to tune and allocate resources to ensure good response times. For example, the Web servers may need more memory to allow a larger cache and reduce I/O times.

In one recent engagement, the customer help desk was flooded with complaints of slow or nonexistent performance. The senior management was concerned that the system seemed to be failing and IT seemed unable to tell them why. Using our methodology, we accessed the site with the WebSphere Studio Page Detailer to analyze page response times. Page Detailer showed us that response times were long due to excessive delays in obtaining TCP/IP socket connections. We investigated the intranet, firewalls, and site connectivity. It turned out that when the site went online, the firewalls had been set up to allow a fixed number of concurrent socket connections. As traffic increased (the site was succeeding), more and more customers contended for the same number of connections. This was easily corrected. In this case, as in many others, the solution seems obvious when you isolate the fault to an individual component. It is the methodology that allows us to do so.

Figure 5 below shows tools and technologies available to monitor and analyze Web site components. You can see, for example, that you can monitor response time proactively using WebSphere Studio Page Detailer and Tivoli Web Services Manager (TWSM). See Appendix B for more detail about some available tools.

Figure 5. Tools available to monitor and analyze Web site components
Tools available to monitor and analyze Web site components

Step 4. Predict and plan for the future

Sadly, none of us can predict the future. However, an increasing amount of valuable information and useful tools are available to help you plan proactively to keep your Web site serving customers as they expect to be served and to avoid the problems that plague busy sites.

Figure 6 below shows one week of page hits for one of IBM's retail customers. All of the days have essentially the same pattern with predictable peaks and valleys. This site showed no "weekend effect," which may not be true for its "brick and mortar" store, nor for other retailers. This kind of information enables site personnel to prepare for peaks and use the valleys for other operations when needed.

Figure 6. Retailer usage pattern over one week
Retailer usage pattern over one week

While a typical week, as shown in Figure 6 above, can be counted on, a retailer also has to plan for seasonal rushes when peaks can easily exceed those of a typical week. Figure 7 below shows a retail site over six months, including the annual holiday period when the number of hits tripled. During this kind of load, the site must be at its best, if possible free of other operations.

Figure 7. Retail customer seasonal peaks
Retail customer seasonal peaks

Retailers aren't the only e-businesses facing seasonal demands. Figure 8 below shows how the number of hits for a bank grew over the months approaching tax time. Clearly, financial sites have their own version of weekly and seasonal peaks and valleys.

Figure 8. Hit rates over six months for a financial site
Hit rates over six months for a financial site

These examples demonstrate that it is possible to monitor your site and detect trends from which you can plan for the future and meet your business objectives. Your site will have peaks and valleys. You can measure them. You can reasonably predict when your peaks will occur and you can position the resources you need to handle the demand and serve your customers (and bring them back!).

Your trend data should suggest whether and when additional site components are needed. Powerful new servers also offer options that can add capacity based on predicted workload. IBM can help you clarify which components match your particular requirements and objectives. See the Planning for Growth paper to learn about our capacity planning methodology and the HVWS Simulator for WebSphere.
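As a sketch of how trend data turns into a capacity estimate, here is a naive linear projection with invented figures. This is only an illustration of the idea, not IBM's capacity planning methodology:

```java
// Illustrative linear projection of load growth from two trend
// observations. All figures are invented.
public class TrendProjection {
    // project a value monthsAhead past the later observation, assuming
    // the same absolute growth per month continues
    static double projectLinear(double earlier, double later,
                                int monthsBetween, int monthsAhead) {
        double growthPerMonth = (later - earlier) / monthsBetween;
        return later + growthPerMonth * monthsAhead;
    }

    public static void main(String[] args) {
        // 1.0M hits/day three months ago, 1.3M today -> 0.1M/month growth
        double projected = projectLinear(1_000_000, 1_300_000, 3, 6);
        System.out.printf("projected daily hits in 6 months: %.0f%n", projected);
        // -> 1,900,000 at this growth rate
    }
}
```

Real planning must also layer in the weekly pattern and seasonal peaks shown in Figures 6 through 8; a linear fit only captures the underlying growth.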

Summary

Managing the performance of a high-volume Web site is challenging, exciting, and possible. Following a methodology such as the one presented in this paper will help guide you and your team toward tasks they can understand and goals they can achieve. The success of your company's e-business depends on the tools and techniques your IT team chooses. There are many available, and more are coming, as well as capacity-on-demand options from IBM's powerful server family that set the stage for self-managing IT infrastructures. As always, their use succeeds best in the context of a process.

The "best practices" methodology for managing a high-volume Web site includes developing an end-to-end perspective of the site and following these familiar steps:

  1. Establish objectives
  2. Monitor and measure the site
  3. Analyze and tune components
  4. Predict and plan for the future

Using this methodology, your IT team can help your company meet the revenue and customer satisfaction objectives of its e-business and enjoy improved IT performance management benefits, such as:

  • Proper reporting of quality of service metrics
  • Interactive and historical data on end-to-end performance
  • Rapid identification of the problem source
  • Improved support of business goals
  • Understanding and control of transaction costs
  • World class customer support and satisfaction

IBM's experience with high-volume Web sites has yielded valuable information and revealed the methodologies and tools needed for a successful e-business site. The HVWS team can help you be on your way to just such a successful site.

Appendix A. Some performance management scenarios

This appendix contains three brief scenarios that are based on real events and demonstrate the principles of our methodology for managing performance.

CIO

When reviewing his schedule for the upcoming week, the CIO notes a midweek meeting with the marketing department, a Tuesday working lunch with his colleague from Finance, and the monthly CEO staff meeting on Thursday. He works with his assistant to be sure he takes appropriate information to each meeting.

On Tuesday he will take the latest reports showing costs, projected capacity over the next year, and likely capital spending. The cost chart in Figure 9 below shows, at a high level, the cost per transaction and the cost breakdown by tier. The capacity chart in Figure 10 below illustrates the expected growth in the number of users and transactions. These expectations were jointly reached with the marketing group. The CIO will show his Finance colleague how the increase in workload drives a needed increase in capacity and, thus, capital spending for next year. He points out that operations is working closely with application development to examine costs. They have identified where improvements can be made in the application and have projected the cost savings in terms of cost per transaction and reduced capital spending. He uses the cost savings chart in Figure 11 below to show how the proposed improvements will reduce the cost per transaction more effectively than the in-plan improvements.

Figure 9. Average cost per Web transaction
Average cost per Web transaction

Figure 10. Current and projected system load
Current and projected system load

Figure 11. Cost savings with proposed enhancement
Cost savings with proposed enhancement

The CIO asks Finance to support him in prioritizing these changes in the development plan over other candidate items from other departments.

At the marketing meeting he brings the charts that report system availability, response time, transaction rates, and an analysis of consumer navigation experience. Marketing is concerned about an upcoming promotion. They expect that it will drive traffic to new highs and worry that the system will slow down. Having anticipated this line of discussion, the CIO brings out charts showing the current peak demand on the system and the amount of available overhead. He is able to demonstrate that the system has the headroom to handle up to a 30% increase in workload while still maintaining current response times during peak hours. His colleagues in marketing are pleased to see that IT has anticipated the effects of the ad campaign and are satisfied that the system will be able to handle the burst of traffic.

Finally, our CIO prepares for the CEO monthly staff meeting. Each major function is expected to present a short highlight report on the current and upcoming months. The CIO will show charts that illustrate system availability, response time, and costs versus targets. He will then discuss upcoming events, like the marketing campaign, and his plans to support them. He expects the presentation to go well because he is confident that the system is providing him the proper information to support his role.

Content problems

Last week, marketing, sales, development, and IT proudly deployed a new application that not only significantly enhanced the function of the e-business site, but also dramatically improved the look and feel of the site for the consumer.

After just a few days, however, IT noted that the Tivoli Web Services Manager was producing alerts that indicated that nearly all pages were slowing down and response time was approaching the maximum allowed by the service level agreement. Using the Tivoli Web Services Analyzer to examine site traffic patterns, IT observed that the site slowed down in proportion to the number of new visitors and customers. All pages were affected, indicating the problem was systemic.

IT contacted Development to review the new content. Development remained puzzled, as they had tested the new pages thoroughly before migrating them into production.

The application owner convened the performance team. One member was detailed to examine page performance using the WebSphere Studio Page Detailer. He reported, "Page Detailer shows that socket connect and SSL connect times are fine. This would seem to absolve the network, firewalls, routers, and TCP/IP layers. It also shows that transactions are processing well within criteria, so there doesn't seem to be a problem with that part of the system. However, Page Detailer does show that static content (such as GIFs) slowed down dramatically after the new application was deployed."

Armed with this information, the team quickly identified the Web server as the likely problem area, since it is responsible for serving static content. As this shop was using Netscape Enterprise Server, they asked for a PerfDump to be executed. PerfDump reports on the internal performance of the server. Within minutes they were able to examine the output and determine that the cache hit ratio for static content had degraded. Clearly the addition of the new application had added much new static content to be served, and the Web server cache was now too small to manage the new total efficiently. A quick look at the operating system input/output statistics using VMSTAT confirmed that real I/O had jumped dramatically within a day or so of the new application rollout.
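The cache symptom reduces to simple arithmetic: the hit ratio is hits divided by total requests, and every miss becomes real I/O. A sketch with invented counts:

```java
// Illustrative cache hit ratio calculation. The request counts are
// invented; the point is how adding content shrinks the ratio when
// the cache size stays fixed.
public class CacheStats {
    static double hitRatio(long hits, long misses) {
        return (double) hits / (hits + misses);
    }

    public static void main(String[] args) {
        // before the new content: most static files served from cache
        System.out.println(hitRatio(9_500, 500));   // 0.95
        // after: the same cache covers far less of the working set,
        // so misses -- and real I/O -- climb
        System.out.println(hitRatio(6_000, 4_000)); // 0.6
    }
}
```

Watching this ratio alongside I/O statistics is what let the team connect the slowdown to the cache size rather than to the application code.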

IT was able to modify the cache size setting in the Web server and deploy the change at the next scheduled maintenance period.

Bottleneck

The e-business site was launched last month, just in time for the TV ad campaign. To date the site is successful. Traffic is growing as predicted, sales are strong, and complaints have been quite low. However, in the past few days, the application seems to have hit a bottleneck. The number of transactions has plateaued, while the response time per page has jumped dramatically.

IT employs the Tivoli Web Services Manager to examine the site. They determine that only transaction pages have slowed down; the number of transactions (sales, etc.) continues to rise, while the number successfully processed is stagnant. Customers are complaining to the help desk and by e-mail about the slow response times. Analysis of the access logs produced by Tivoli Web Services Analyzer (TWSA) confirms that many customers are leaving the site without waiting for their business to complete. Later they complain about not knowing if their business was successfully processed. A transaction in doubt is the worst possible customer problem, one that can destroy confidence in the site and the enterprise.

It's apparent there is a problem in the transaction processing. IT still checks out the Web server to eliminate it as a component of the problem. Next the team extracts the overall response times for transactions (from the Tivoli Web Management Solution) and uses the WebSphere Resource Analyzer to obtain the average elapsed times for the servlet and bean during the slowdown. Rapid subtractions demonstrate that the increased load extended the execution time of the bean. In fact, when a specific transaction rate is reached, the application can't process any more transactions in the bean layer. Additional requests exacerbate the problem in that the transaction rate remains fixed but response times become nonlinear as incoming transactions queue up waiting for the bean.

Resource Analyzer at the bean engines also showed that the application server threads were busy processing requests while VMSTAT showed the CPU was less than 50% busy with no I/O or page wait. Believing that the bottleneck was found, the team recommended that additional threads be assigned to the pool so that the bean could process more concurrent requests.

Before deploying such a change, the team runs Mercury Interactive LoadRunner® to create an artificial load on the test system. They then add threads to the pool, expecting the bottleneck to disappear. They rerun the test with the new setting, but the bottleneck still occurs at nearly the same transaction rate. Resource Analyzer confirms that all the threads, including the new total, are still in use while response time continues to rise.

Now they know that the thread starvation is a symptom of the problem, not the cause. The next step is to re-create the problem. This time they take a dump of the Java Virtual Machine and examine the Java threads for a pattern. They see that all threads are blocked on the same method in their bean. They examine the source code and discover that this method is synchronized (that is, under lock control). A developer investigates and reports that the code need be synchronized only while it updates a shared object, but the programmer synchronized the entire long-running method. This causes all transactions to block, waiting for this common routine.
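In miniature, the bug and its fix look like the following. The class and method names are invented for illustration; the first method serializes the whole request, while the second holds the lock only around the shared-object update:

```java
// Miniature version of the synchronization bug described above.
// processOrderSlow() holds the lock for the entire long-running method,
// so every transaction queues behind it; processOrderFast() holds the
// lock only while the shared state is updated.
public class OrderProcessor {
    private long ordersProcessed = 0;

    // Bug: the whole method is synchronized, including the slow work
    // that touches no shared state.
    public synchronized void processOrderSlow() {
        doSlowWork();
        ordersProcessed++; // only this line needs the lock
    }

    // Fix: synchronize only the shared-object update.
    public void processOrderFast() {
        doSlowWork(); // now runs concurrently across worker threads
        synchronized (this) {
            ordersProcessed++;
        }
    }

    private void doSlowWork() {
        // stand-in for the business logic that dominates execution time
        try {
            Thread.sleep(5);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public synchronized long ordersProcessed() {
        return ordersProcessed;
    }
}
```

Narrowing the critical section lets the slow work proceed in parallel on all worker threads, so the CPU can be driven toward full utilization instead of threads queuing on one lock.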

The programmer codes a fix, and the test team reruns the test. With the change made, the test system can fully utilize the CPU, and the transaction rate is no longer constrained. The bottleneck is broken. The test team schedules a regression test for that evening and the next day. Meanwhile, IT has configured an additional server to handle the production load pending availability of the fix. Testing is complete by the weekend, and the fix is deployed into production during the Sunday morning maintenance period. By Monday evening, production monitoring confirms that the bottleneck is resolved, transaction rates are up, and response time is within criteria.
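The synchronization defect and its fix can be sketched in Java. The class and method names below are hypothetical, not the customer's actual code; the point is the contrast between locking an entire long-running method and locking only the shared-object update.

```java
import java.util.HashMap;
import java.util.Map;

public class OrderProcessor {
    private final Map<String, Integer> sharedCounts = new HashMap<>();

    // BEFORE (bottleneck): the lock is held for the whole method,
    // so every transaction serializes behind the long-running work.
    public synchronized void processSlow(String item) {
        doLongRunningWork(item);                   // needs no lock
        sharedCounts.merge(item, 1, Integer::sum); // needs the lock
    }

    // AFTER (fix): the lock is held only while updating the shared object,
    // so the long-running work proceeds concurrently on many threads.
    public void processFast(String item) {
        doLongRunningWork(item);                   // runs concurrently
        synchronized (sharedCounts) {              // brief critical section
            sharedCounts.merge(item, 1, Integer::sum);
        }
    }

    public int getCount(String item) {
        synchronized (sharedCounts) {
            return sharedCounts.getOrDefault(item, 0);
        }
    }

    private void doLongRunningWork(String item) {
        // stand-in for the expensive, independent per-transaction work
    }
}
```

With the narrower critical section, threads block only for the brief shared update, which is why the fixed system can drive the CPU to full utilization.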

Appendix B. Tools for monitoring performance

This appendix introduces some of the tools available for monitoring Web site performance.

WebSphere Application Server (WAS) Resource Analyzer

The WAS Resource Analyzer can be used with operating system tools such as vmstat to monitor a number of performance measures related to the application server. These metrics are classified into Enterprise JavaBeans (EJBs), ORB thread pool, system runtime resources, database connection pool, and servlets. WAS Resource Analyzer is available for all WAS platforms.

Resource Analyzer on EJB

The Resource Analyzer monitors execution of your EJBs at three levels: server, EJB container, and individual EJB. The table below summarizes the statistics provided.

Statistic                        Stateless Session Beans  Stateful Session Beans  Entity Beans
Instantiate                      Yes                      Yes                     Yes
Destroy                          Yes                      Yes                     Yes
Requests                         Yes                      Yes                     Yes
Requests per second              Yes                      Yes                     Yes
Execution time                   Yes                      Yes                     Yes
Live beans (pooled and active)   Yes                      Yes                     Yes
Creates                          -                        Yes                     Yes
Removes                          -                        Yes                     Yes
Activation                       -                        Yes                     Yes
Passivation                      -                        Yes                     Yes
Loads                            -                        -                       Yes
Stores                           -                        -                       Yes

Resource Analyzer on servlet

The Resource Analyzer monitors execution of servlets at three levels: servlet engine, Web application, and individual servlet. It monitors and collects cumulative metrics at servlet engine levels and provides an analysis of the metrics at the Web application and for individual servlets. Metrics collected include requests per second, average response time, and number of concurrent requests.
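The per-servlet figures named above (request count, average response time, concurrent requests) can be accumulated with a few thread-safe counters. The sketch below is a hypothetical illustration (class and method names are invented), not Resource Analyzer's actual implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical accumulator for per-servlet metrics.
public class ServletStats {
    private final AtomicLong requests = new AtomicLong();
    private final AtomicLong totalTimeMs = new AtomicLong();
    private final AtomicInteger concurrent = new AtomicInteger();

    // Call begin() at request entry and end(elapsedMs) at request exit.
    public void begin() {
        concurrent.incrementAndGet();
    }

    public void end(long elapsedMs) {
        concurrent.decrementAndGet();
        requests.incrementAndGet();
        totalTimeMs.addAndGet(elapsedMs);
    }

    public long requestCount() { return requests.get(); }

    public int concurrentRequests() { return concurrent.get(); }

    public double averageResponseMs() {
        long n = requests.get();
        return n == 0 ? 0.0 : (double) totalTimeMs.get() / n;
    }
}
```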

Resource Analyzer on system resources

The Resource Analyzer monitors system resources consumed by the Java Virtual Machines (JVM). It collects and reports such JVM metrics as total memory and amount of memory used/available.
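The JVM memory figures reported here correspond to values any Java program can read from the standard java.lang.Runtime API; a minimal sketch:

```java
public class JvmMemorySnapshot {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();  // heap currently allocated to the JVM
        long free  = rt.freeMemory();   // unused portion of that heap
        long used  = total - free;      // memory occupied by objects (live or not yet collected)
        System.out.printf("total=%d used=%d free=%d%n", total, used, free);
    }
}
```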

WAS Site Analyzer

The WebSphere Application Server Site Analyzer measures Web site traffic. Site Analyzer provides detailed analysis of Web content integrity, site performance, and usage statistics, plus a report-writing feature to build reports from the content-integrity and usage statistics. The table below summarizes the functions of each major feature of Site Analyzer.

Feature Functionality
Content & Site Structure Analysis
  • Identifies duplicate and inactive files on the Web server
  • Detects unavailable resources such as broken links or missing files
  • Identifies content with excessive load time
Usage Analysis
  • Who accessed the site
  • Where they visited
  • How they navigated the site
Visualization and Reports
  • Allows users to view the site structure and quickly locate problem pages via color schemes and icons
  • On-demand searching for specific page attributes
  • Provides predefined reports that are fully customizable
Client/Server Configuration
  • Server-side analyzers transform raw data into valuable information and store it in a database
  • The client interface provides administration, visualization, and report-generation functions

AIX® performance tools

A variety of AIX tools is available, first to identify and understand the workload, and then to help set up an environment that approximates the ideal execution environment for the work. The table below summarizes the AIX monitoring tools.

Tasks                        Tools
AIX monitoring               AIX tools, Perfagent tools, Sample tools, Adapter tools, Switch tools
Managing memory resources    vmstat, sar, lsps, ps, svmon
Managing CPU resources       vmstat, sar, time, cpu_state
Managing network resources   netstat


Tasks                Tools     Metrics
Netscape monitoring  Perfdump  Cache hit ratios, memory, threads


Tasks                          Tools                                Metrics
Site Investigator              Tivoli Web Services Manager (TWSM)   Content
Quality of Service             Tivoli Web Services Manager (TWSM)   Response time
Synthetic Transaction          Tivoli Web Services Manager (TWSM)   Availability
Analyze data/generate reports  Tivoli Web Services Analyzer (TWSA)  Site traffic analysis



References

Document references

  1. IBM High Volume Web Site white papers
    • Design for Scalability, December 1999
    • Design Pages for Performance, May 2000
    • Planning for Growth, October 2000
  2. Tetsuya Shirai, Lee Dilworth, Raanon Reutlinger, Sadish Kumar, David Bernabe, Bill Wilkins, and Brad Cassels, UDB Performance Tuning, 2000
  3. Ken Ueno, Tom Alcott, Jeff Carlson, Andrew Dunshea, Hajo Kitzhofer, Yuko Hayakawa, Frank Mogus, and Colin Wordsworth, WebSphere V3 Performance Tuning Guide, 2000
  4. IBM Redbooks

Product references

  • The IBM WebSphere software platform for e-business includes edge servers, Web application servers, development and deployment tools, and Web applications. For more information, visit the WebSphere Developer Domain.
  • Find out about the IBM WebSphere Commerce Suite, used by customers who run large-scale online shopping sites that we have studied.
  • Find out more about the software used by trading sites we've studied: WebSphere's Application Server and MQSeries®.
  • Download a demo version of Page Detailer, the tool in WebSphere Studio that measures in detail every element in a page download to assist in performance analysis and optimization.
  • For more information, go to IBM's Capacity Advantage tool.
