Fabric Manager Users Guide, Version 6.1, Revision A - QLogic

2–Advanced Fabric Manager Capabilities

Performance Manager

Histogram Data
• XmitWaitCongestionPct10 – Percentage of time that the outgoing port was congested and unable to transmit. This value is only available for QLogic Host Channel Adapters and requires that PmaCaVendorEnable be enabled. It is expressed as an integer*10; for example, 100% is reported as 1,000.

• XmitWaitInefficiencyPct10 – Percentage of time that the outgoing port was congested when it could have been transmitting. Computed as XmitWaitCongestionPct/(XmitWaitCongestionPct+Utilization%). This value is only available for QLogic Host Channel Adapters. It is expressed as an integer*10; for example, 100% is reported as 1,000.
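The integer*10 encoding and the inefficiency formula above can be sketched as follows. This is an illustrative reading of the manual's formula, not FM source code; the function name is hypothetical.

```python
def xmit_wait_inefficiency_pct10(congestion_pct: float, utilization_pct: float) -> int:
    """Inefficiency as integer*10, per the manual's formula:
    XmitWaitCongestionPct / (XmitWaitCongestionPct + Utilization%)."""
    denom = congestion_pct + utilization_pct
    if denom == 0:
        return 0  # idle port: no congestion and no traffic to measure against
    # Fraction -> percent (x100) -> integer*10 encoding (x10)
    return round(congestion_pct / denom * 100 * 10)

print(xmit_wait_inefficiency_pct10(100, 0))   # fully congested: 1000 (i.e., 100%)
print(xmit_wait_inefficiency_pct10(10, 90))   # 100 (i.e., 10.0%)
```

A port congested 10% of the time while transmitting the other 90% scores 10/(10+90) = 10%, reported as 100 in the integer*10 encoding.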

The default configuration combines XmitDiscard, XmitInefficiencyPct10, and XmitWaitInefficiencyPct10, with XmitDiscard weighted heavily so that any packet discards are weighted higher than inefficiencies. The default threshold of 100 represents 10% inefficiency and/or any packet loss due to extreme congestion.
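To make the weighting concrete, here is a minimal sketch of how such a combined condition value could exceed the default threshold of 100. The weights and the combining rule are assumptions for illustration; the actual FM weighting is set in its configuration.

```python
# Assumed values for illustration only -- not the FM's actual configuration.
DISCARD_WEIGHT = 100      # heavy weight: a single discard alone reaches the threshold
DEFAULT_THRESHOLD = 100   # 100 in integer*10 units == 10% inefficiency

def congestion_condition_value(xmit_discard: int,
                               inefficiency_pct10: int,
                               wait_inefficiency_pct10: int) -> int:
    """Hypothetical combination: discards dominate, inefficiencies (already
    in integer*10 units) contribute at their face value."""
    return (DISCARD_WEIGHT * xmit_discard
            + inefficiency_pct10
            + wait_inefficiency_pct10)

# One discard, or 10% inefficiency, each reach the default threshold on its own.
print(congestion_condition_value(1, 0, 0) >= DEFAULT_THRESHOLD)
print(congestion_condition_value(0, 100, 0) >= DEFAULT_THRESHOLD)
```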

It is left to the user to decide whether XmitCongestionPct10 and XmitWaitCongestionPct10 are of more interest for the given fabric. Typically, XmitInefficiencyPct10 and XmitWaitInefficiencyPct10 will report higher values than XmitCongestionPct10 and XmitWaitCongestionPct10.

For each condition a histogram is tracked. Each histogram has a set of buckets that count the number of ports in a given group with a certain range of performance/errors.

For Bandwidth, ten histogram buckets are tracked, each 10% wide. The buckets count how many ports in a given group had that level of utilization relative to wire speed.
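The bandwidth bucketing described above can be sketched as below. This is an illustrative model of ten 10%-wide buckets, not the FM implementation; the function names are hypothetical.

```python
def bandwidth_bucket(utilization_pct: float) -> int:
    """Map a port's utilization (% of wire speed) to one of ten
    10%-wide buckets (0-9); 100% lands in the top bucket."""
    return min(int(utilization_pct // 10), 9)

def bandwidth_histogram(port_utilizations: list[float]) -> list[int]:
    """Count how many ports in a group fall into each bucket."""
    buckets = [0] * 10
    for u in port_utilizations:
        buckets[bandwidth_bucket(u)] += 1
    return buckets

print(bandwidth_histogram([5.0, 15.0, 100.0]))  # one port each in buckets 0, 1, 9
```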

For Errors (Integrity, Congestion, SmaCongestion, Security, Routing), five histogram buckets are tracked. The first four are 25% wide and indicate the ports whose error value is the given percentage of the configured threshold. The fifth bucket counts the number of ports whose error rate exceeded the threshold. The overall summary error status for a given group is based on the highest error rate for any port tabulated by the group.
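A minimal sketch of the five-bucket error scheme and the group summary rule described above. This illustrates the bucketing logic only; the function names are hypothetical and boundary handling in the FM may differ.

```python
def error_bucket(value: float, threshold: float) -> int:
    """Buckets 0-3 are each 25%-of-threshold wide; bucket 4 counts
    ports whose error value exceeded the configured threshold."""
    if value > threshold:
        return 4
    pct_of_threshold = value / threshold * 100
    return min(int(pct_of_threshold // 25), 3)  # exactly 100% stays in bucket 3

def group_summary_bucket(port_values: list[float], threshold: float) -> int:
    """The group's summary status follows its worst (highest) port value."""
    return error_bucket(max(port_values), threshold)

print(error_bucket(50, 100))               # 2: 50% of threshold
print(group_summary_bucket([10, 150], 100))  # 4: one port exceeded the threshold
```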

Support for iba_report<br />

In addition to tracking interval counters, the PM also keeps one set of long-running counters per PMA. These counters are accessed by iba_report and provide compatible support for existing iba_report error analysis and other FastFabric tools, such as fabric_analysis and all_analysis.

As such, when the PM is running, iba_report accesses data from the PM. Options such as -o errors and -C therefore operate on the PM's internal copies of the PMA counters and do not affect the actual hardware PMA counters.

2-32 IB0054608-01 B
