
TECHNOLOGY

Industrial AI's capacity to analyze very large amounts of high-dimensional data can change the current maintenance paradigm, shifting it from preventive maintenance toward predictive maintenance. The key challenge, however, is operationalizing predictive maintenance, which is much more than connecting assets to an AI platform, streaming data, and analyzing those data. By integrating conventional data such as vibration, current, or temperature with unconventional additional data, such as audio and image data captured by relatively cheap transducers like microphones and cameras, Industrial AI can enhance or even replace more traditional methods. AI's ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield. For example, AI can extend the life of an asset beyond what is possible using traditional analytics techniques by combining design and manufacturing data, maintenance history, and Internet of Things (IoT) sensor data from end users, for instance through anomaly detection in engine-vibration data or in images and video of engine condition. This information fusion over the lifecycle of the asset is called product lifecycle management (PLM).
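To make the vibration case concrete, the following is a minimal sketch of the kind of anomaly detection mentioned above, applied to simulated engine-vibration windows with an Isolation Forest from scikit-learn. The signal simulation, feature choices, and sampling rate are illustrative assumptions, not the method described in this article.

```python
# A minimal sketch: flag anomalous vibration windows with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
fs = 1000  # assumed sampling rate in Hz

def vibration_features(window):
    """Summarize one vibration window with simple statistical/spectral features."""
    spectrum = np.abs(np.fft.rfft(window))
    return [
        window.std(),                           # overall vibration energy
        spectrum.argmax() * fs / len(window),   # dominant frequency in Hz
        spectrum.max() / spectrum.mean(),       # spectral peakiness
    ]

# Simulated healthy windows (50 Hz rotation) plus a few faulty ones carrying
# an extra harmonic, mimicking e.g. a developing bearing fault.
t = np.arange(fs) / fs
healthy = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(fs)
           for _ in range(200)]
faulty = [np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 137 * t)
          + 0.1 * rng.standard_normal(fs) for _ in range(5)]

X = np.array([vibration_features(w) for w in healthy + faulty])

# The detector flags windows whose features deviate from the bulk of the data.
detector = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = detector.predict(X)  # -1 = anomaly, 1 = normal
print("flagged windows:", np.where(labels == -1)[0])
```

In a deployed system, such a detector would run on streamed IoT data, and its flags would be fused with design data and maintenance history as part of the PLM picture described above.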

Explainable AI in Maintenance

Advances in AI for maintenance analytics are often tied to advances in statistical techniques. These tend to be extremely complex, leveraging vast amounts of data and complex algorithms to identify patterns and make predictions. This complexity, coupled with the statistical nature of the relationships in the input data that the asset provides, makes such systems difficult to understand, even for expert users, including the system developers (Figure 4). This makes explainability a major concern.

Figure 4: Engineers and data scientists must co-create the AI solution for maintenance.

While increasing the explainability of AI systems can be beneficial for many reasons, there are challenges in implementing explainable AI. Different users require different forms of explanation, and different contexts give rise to different needs. To understand how an AI system works in the maintenance domain, users might wish to know which data the system is using, the provenance of those data, and why they were selected; how the model and its predictions work, and which factors influence a maintenance decision; and why a particular output was obtained. To determine what type of explanation is necessary, careful stakeholder engagement and well-thought-out system design are both required, as can be seen in Figure 5.

Figure 5: Architecture of explainable AI systems for maintenance decisions.

There are various approaches to creating interpretable systems. Some AI is interpretable by design; these systems tend to be kept relatively simple. An issue with them is that they cannot extract as much from vast amounts of data as more complex techniques, such as deep learning, can. This creates an interpretability-accuracy trade-off in some settings, and interpretable-by-design systems might not be desirable for applications where high accuracy is prized. In those applications, maintainers must accept more black boxes.
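As an illustration of "interpretable by design", a shallow decision tree exposes its entire decision logic for audit. The sketch below uses synthetic data and invented condition-monitoring feature names (rms_vibration, dominant_freq, temperature, motor_current); it is an assumed example, not a model from this article.

```python
# A minimal sketch of an interpretable-by-design fault classifier:
# a shallow decision tree whose rules an engineer can read directly.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for per-window condition-monitoring features.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["rms_vibration", "dominant_freq", "temperature", "motor_current"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The complete decision logic can be printed and audited by a human.
print(export_text(tree, feature_names=feature_names))
```

The printed rules ("rms_vibration <= 0.42 ...") are the model, which is exactly what a deep network cannot offer; the price is that such a simple tree may leave accuracy on the table.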

In some AI systems – especially those using personal data or those where proprietary information is at stake – the demand for explainability may interact with concerns about privacy. In areas such as healthcare and finance, for example, an AI system might be analyzing sensitive personal data to make a decision or recommendation. In determining the type of explainability that is desirable in these cases, organizations using AI will need to take into account the extent to which different forms of transparency might result in the release of sensitive insights about individuals or expose vulnerable groups to harm.

In the area of maintenance, when the AI recommends a maintenance decision, decision makers need to understand the underlying reason. Maintenance analytics developers need to understand which fault features in the input data are guiding the algorithm before accepting auto-generated diagnosis reports, and the maintenance engineer needs to understand which abnormal phenomena are captured by the inference algorithm before following the repair recommendations.
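One common way a developer might probe which input features guide a diagnosis model is permutation importance: shuffle one feature at a time and measure how much accuracy degrades. The sketch below uses a synthetic dataset and invented feature names; it illustrates the idea and is not the system discussed in this article.

```python
# A minimal sketch: check which features a diagnosis model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for condition-monitoring inputs and fault labels.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2,
                           random_state=0)
feature_names = ["vibration_rms", "bearing_temp", "oil_pressure",
                 "motor_current", "acoustic_level"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time and measuring the accuracy drop reveals
# which signals drive the model's diagnoses.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

A ranking like this gives the developer a sanity check before signing off on auto-generated diagnosis reports: if a physically irrelevant feature dominates, the model deserves more scrutiny.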

One of the proposed benefits of increasing the explainability of AI systems is increased trust in the system. If maintainers understand what led to an AI-generated decision or recommendation, they will be more confident in its outputs. But the link between explanations and trust is complex. If a system produces convincing but misleading explanations, users might develop a false sense of confidence or understanding. They might have too much confidence in the effectiveness or safety of systems, without such confidence being justified. Explanations might help increase trust in the short term, but they do not necessarily help create systems that generate trustworthy outputs or ensure that those deploying the system make trustworthy claims about its capabilities.


