
When a FanOut section of a mediation flow calls a number of services, invoking them synchronously results in an overall response time equal to the sum of the individual service response times.

Calling the services asynchronously (with parallel waiting configured) results in a response time of at least the largest individual service response time in the mediation flow component (MFC). The response time also includes the time that the MFC takes to process the remaining service callout responses on the messaging engine queue.
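As a rough, hypothetical illustration of the difference (not part of any MFC API), the following sketch compares the two estimates for three assumed service latencies and an assumed per-response processing cost:

```java
// Back-of-the-envelope comparison of FanOut invocation styles.
// All latencies and the per-response overhead are assumed example values.
public class FanOutResponseTimeEstimate {
    public static void main(String[] args) {
        long[] serviceLatenciesMs = {800, 350, 120};  // assumed individual service response times
        long perResponseProcessingMs = 20;            // assumed MFC cost to process each callout response

        long syncTotal = 0;
        long slowest = 0;
        for (long latency : serviceLatenciesMs) {
            syncTotal += latency;                     // synchronous: latencies add up
            slowest = Math.max(slowest, latency);     // asynchronous: bounded below by the slowest service
        }
        // The asynchronous case also pays for processing the callout responses
        // that arrive on the messaging engine queue.
        long asyncEstimate = slowest + perResponseProcessingMs * serviceLatenciesMs.length;

        System.out.println("Synchronous estimate:  " + syncTotal + " ms");      // 1270 ms
        System.out.println("Asynchronous estimate: " + asyncEstimate + " ms");  // 860 ms
    }
}
```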

For a FanOut/FanIn block, the processing time for any primitives before or after the service invocations must be added in both cases.

To optimize the overall response time when calling services asynchronously in a FanOut/FanIn section of a mediation flow, start the services in order of expected latency (highest latency first), if it is known.
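If expected latencies are known (for example, from earlier measurements), the callouts can be ordered before the FanOut issues them. The following sketch is only an illustration; ServiceCallout and its fields are assumed names, not part of the mediation flow programming model:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical helper that orders callouts so the slowest expected service is started first.
public class CalloutOrdering {

    static class ServiceCallout {
        final String endpoint;
        final long expectedLatencyMs;   // assumed to come from earlier measurements

        ServiceCallout(String endpoint, long expectedLatencyMs) {
            this.endpoint = endpoint;
            this.expectedLatencyMs = expectedLatencyMs;
        }
    }

    static void orderByExpectedLatency(List<ServiceCallout> callouts) {
        // Highest expected latency first, so the slowest service overlaps with all the others.
        callouts.sort(Comparator.comparingLong((ServiceCallout c) -> c.expectedLatencyMs).reversed());
    }

    public static void main(String[] args) {
        List<ServiceCallout> callouts = Arrays.asList(
                new ServiceCallout("CreditCheck", 350),
                new ServiceCallout("FraudScore", 800),
                new ServiceCallout("AddressLookup", 120));
        orderByExpectedLatency(callouts);
        callouts.forEach(c -> System.out.println(c.endpoint + " (" + c.expectedLatencyMs + " ms)"));
    }
}
```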

You must consider the trade-off between parallelism and additional asynchronous processing. The suitability of asynchronous processing depends on the size of the messages being processed, the latency of the target services, the number of services being started, and any response time requirements expressed in service level agreements (SLAs). We suggest running performance evaluations on mediation flows that include FanOuts with high-latency services if you are considering asynchronous invocations.

The default quality of service on service references is Assured Persistent. You can gain a substantial reduction in asynchronous processing time by changing this setting to Best Effort (non-persistent). This change eliminates I/O to the persistence store, but the application must tolerate the possibility of lost request or response messages. At this level of reliability, the service integration bus can discard messages under load and might require tuning.

3.2.11 Large object considerations

This section presents best practices for performance (particularly memory use) when using large objects (LOBs). Very large objects put significant pressure on Java heap use, particularly for 32-bit JVMs. The 64-bit JVMs are less susceptible to this issue because of the larger heap sizes that you can configure, although throughput, response time, and overall system performance can still be impacted by excessive memory usage. Follow these practices to reduce memory pressure, avoid OutOfMemory exceptions, and improve overall system performance.

Avoiding lazy cleanup of resources

Lazy cleanup of resources adds to the live set required when processing large objects. If you can clean up any resources (for example, by dropping object references when they are no longer required), do so as soon as is practical.
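As a minimal sketch of eager cleanup in custom Java logic (all names here are hypothetical), a large intermediate object can be de-referenced as soon as it is no longer needed so that it becomes eligible for garbage collection before later steps in the flow run:

```java
// Hypothetical custom Java logic; class and method names are illustrative only.
public class LargeObjectCleanup {

    public String process(byte[] requestPayload) {
        // Large intermediate representation that is only needed briefly.
        StringBuilder largeIntermediate = new StringBuilder(new String(requestPayload));
        String summary = "payload length=" + largeIntermediate.length();

        // Drop the reference as soon as the large object is no longer required,
        // rather than keeping it live for the remainder of the flow.
        largeIntermediate = null;

        slowDownstreamStep(summary);  // the large object is no longer reachable here
        return summary;
    }

    private void slowDownstreamStep(String summary) {
        // Placeholder for further processing that does not need the large payload.
        System.out.println("Processing: " + summary);
    }
}
```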

Avoiding tracing when processing large business objects

Tracing and logging can consume significant memory resources. A typical tracing activity is to dump the business object (BO) payload. Creating a string representation of a large BO can trigger the allocation of many large and small Java objects in the Java heap. For this reason, avoid turning on tracing when processing large BO payloads in production environments.
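A common defensive pattern, sketched below with java.util.logging, is to guard any payload dump behind a level check so that the string representation of a large BO is never built unless the trace level is actually enabled. The generic Object parameter stands in for a large business object:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of guarding an expensive payload dump behind a level check.
public class TraceGuardExample {

    private static final Logger TRACE = Logger.getLogger(TraceGuardExample.class.getName());

    public void handle(Object largePayload) {
        // Build the (potentially huge) string representation only if FINEST is enabled.
        if (TRACE.isLoggable(Level.FINEST)) {
            TRACE.finest("Payload: " + largePayload);
        }
        // ... normal processing of the payload ...
    }
}
```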

