
Batched inputs: Sending large objects as multiple small objects

If a large object must be processed, the solutions engineer must find a way to limit the number of allocated large Java objects. The primary technique for limiting the number of objects involves decomposing large BOs into smaller objects and submitting them individually.

If the large objects are actually collections of small objects, the solution is to group the smaller objects into conglomerate objects less than 1 MB in size. Several customer sites have consolidated small objects in this way, producing good results. If temporal dependencies or an all-or-nothing requirement for the individual objects exists, the solution becomes more complex. Implementations at customer sites demonstrate that dealing with this complexity is worth the effort, yielding both increased performance and improved stability.
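As a minimal sketch of the grouping technique described above (illustrative only; the class and method names are not part of any IBM BPM API), small payloads can be accumulated into batches whose combined size stays under a threshold such as 1 MB, with each batch then submitted as a separate request:

```java
import java.util.ArrayList;
import java.util.List;

public class Batcher {
    /**
     * Groups small payloads into batches whose combined size stays under
     * maxBatchBytes, so that each batch can be submitted individually
     * instead of allocating one large object.
     */
    public static List<List<byte[]>> batchBySize(List<byte[]> items, int maxBatchBytes) {
        List<List<byte[]>> batches = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        int currentSize = 0;
        for (byte[] item : items) {
            // Start a new batch if adding this item would exceed the limit.
            if (!current.isEmpty() && currentSize + item.length > maxBatchBytes) {
                batches.add(current);
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(item);
            currentSize += item.length;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

In a real solution the batches would be submitted one at a time so that only one conglomerate object is live on the Java heap at any moment.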

Certain WebSphere adapters (such as the Flat Files adapter) can be configured to use a SplitBySize mode with a SplitCriteria set to the size of each individual object. In this case, a large object is split into chunks (of a size specified by SplitCriteria) to reduce peak memory usage.
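The effect of size-based splitting can be sketched as follows (an illustration of the technique only, not the adapter's actual implementation): an input stream is read in fixed-size chunks, analogous to SplitBySize with SplitCriteria set to the chunk size, so that no single allocation holds the entire object:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class SizeSplitter {
    /**
     * Reads the input in chunks of at most chunkSize bytes, so peak memory
     * is bounded by the chunk size rather than the total payload size.
     */
    public static List<byte[]> split(InputStream in, int chunkSize) throws IOException {
        List<byte[]> chunks = new ArrayList<>();
        byte[] buffer = new byte[chunkSize];
        int read;
        while ((read = in.read(buffer)) > 0) {
            // Copy only the bytes actually read into a right-sized chunk.
            byte[] chunk = new byte[read];
            System.arraycopy(buffer, 0, chunk, 0, read);
            // In practice each chunk would be processed and discarded here,
            // rather than collected into a list.
            chunks.add(chunk);
        }
        return chunks;
    }
}
```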

Claim check pattern: Only a small portion of an input message is used

When the input BO is too large to be carried around in a system, and the process or mediation needs only a few of its attributes, you can use a pattern known as the claim check pattern. Using the claim check pattern, as applied to a BO, involves the following steps:

1. Detach the data payload from the message.
2. Extract the required attributes into a smaller control BO.
3. Persist the larger data payload to a data store and store the “claim check” as a reference in the control BO.
4. Process the smaller control BO, which has a smaller memory footprint.
5. When the solution needs the whole large payload again, check out the large payload from the data store using the key.
6. Delete the large payload from the data store.
7. Merge the attributes in the control BO with the large payload, taking into account the changed attributes in the control BO.
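The store-related steps above (3, 5, and 6) can be sketched as follows. This is a minimal illustration with hypothetical types; a real solution would persist the payload to a database rather than an in-memory map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class ClaimCheckStore {
    private final Map<String, byte[]> store = new HashMap<>();

    /** Step 3: persist the large payload and return the claim check key. */
    public String checkIn(byte[] payload) {
        String key = UUID.randomUUID().toString();
        store.put(key, payload);
        return key;
    }

    /** Step 5: retrieve the large payload by its key when it is needed again. */
    public byte[] checkOut(String key) {
        return store.get(key);
    }

    /** Step 6: delete the large payload once processing is complete. */
    public void delete(String key) {
        store.remove(key);
    }
}
```

The key returned by `checkIn` would be carried as a reference attribute inside the smaller control BO, which is what actually flows through the process or mediation.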

The claim check pattern requires custom code and snippets in the solution. A less developer-intensive variant is to use custom data bindings to generate the control BO. This approach is limited to certain export and import bindings. The full payload still must be allocated in the JVM.

2.5.3 Data management

The use of document management functions, such as document attachments and the integration capabilities of content that is stored in enterprise content management repositories, might result in large objects. The capacity that is required for processing of documents depends on the size of the Java heap and the load that is placed on that heap (that is, the live set) by the current level of incoming work. The larger the heap, the larger the data that can be successfully processed.

Document attachments and content integration artifacts are stored in the Process Server database. Over time, and depending on the size and number of documents, the database might grow in size. Completed process instances should be archived or deleted (see 4.13.7, “Archiving completed process instances” on page 81 for more information).

IBM Business Process Manager V8.0 Performance Tuning and Best Practices
