Predict Failure versus Predictive Maintenance


In a recent post by ARC Advisory Group, Peter Reynolds notes that 80% of assets fail randomly despite being supported by programs designed for asset maintenance and reliability. Only 3-5% of maintenance performed is predictive. The vast majority of maintenance is either break-fix or executed based on the OEM’s asset maintenance schedule – needed or not.

A broad set of factors drives asset performance, including variability in process conditions and flow outside the asset itself, factors that previously may not have been considered relevant to asset condition. With advanced analytics, the compute power is now available to combine asset health, asset condition, and process variables to determine an asset’s true risk of failure.
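
With the right data in hand, this combination can be as direct as training a classifier on asset health, condition, and process features. The sketch below is a minimal illustration using synthetic data; the column names, model choice, and 30-day horizon are placeholder assumptions, not a description of any particular vendor’s implementation.

```python
# Minimal sketch: combine asset-health, asset-condition, and process
# variables into a single probability of failure over a horizon.
# All data here is synthetic and the feature names are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

data = pd.DataFrame({
    "vibration_rms": rng.gamma(2.0, 1.5, n),           # asset condition
    "bearing_temp_c": rng.normal(70, 8, n),            # asset condition
    "hours_since_overhaul": rng.uniform(0, 20000, n),  # asset health
    "process_flow_rate": rng.normal(100, 15, n),       # process variable
    "process_pressure": rng.normal(8, 1.2, n),         # process variable
})

# Synthetic label: failures become more likely as condition degrades
# and process stress compounds (for demonstration only).
risk = (0.02 * data["vibration_rms"]
        + 0.01 * (data["bearing_temp_c"] - 70).clip(lower=0)
        + 0.00002 * data["hours_since_overhaul"]
        + 0.01 * (data["process_pressure"] - 8).clip(lower=0))
data["failed_within_30d"] = rng.random(n) < risk

X = data.drop(columns="failed_within_30d")
y = data["failed_within_30d"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Per-observation probability of failure within the chosen horizon.
p_fail = model.predict_proba(X_test)[:, 1]
print("Mean predicted 30-day failure risk:", round(float(p_fail.mean()), 3))
```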

More importantly, machine learning will provide a means to see beyond the conventionally understood states that lead to asset failure. Building these machine learning models requires an understanding of each asset’s operating states and failure modes. As Reynolds points out, this probably means working with operating personnel, not maintenance personnel, to develop the models. This marks a change from condition-based maintenance and from less sophisticated predictive models.

Using sophisticated machine learning models, asset managers can know that a given asset will ride through a rough spot rather than fail as condition monitoring or prognostic models might have predicted, and will in fact continue operating for longer. This suggests that the P-F curve in ARC’s post could look more like a sine wave than a gradual drop-off. The key is having confidence in the algorithm’s prediction that failure is not actually imminent. Only the right set of machine learning analytics can predict into the future without a loss of confidence.

Predictive and prescriptive analytics will indeed drive the next wave of improvements in asset performance. But only the right algorithms will provide the highest return on investment for those seeking lasting improvements in asset performance.

 



Re-imagining the Future of Asset Maintenance


Asset failure, or more accurately, avoiding asset failure, is big business, as it should be. For asset-intensive industries, asset failure can mean revenue loss, customer dissatisfaction, brand degradation, and even regulatory fines. So improving the means by which asset failure is avoided is as important as the asset’s day-to-day production.

Many companies continue to take a break/fix approach to asset repair, or rely on cyclical preventive maintenance, where pre-set characteristics of general asset types determine when maintenance is performed. Some are considering Condition-Based Maintenance (CBM), where a parameter of an asset is monitored and repair is performed when that parameter indicates a problem or imminent failure, based on statistical models for that type of asset. But greater business benefits are achieved with Predictive and Prescriptive Maintenance, often powered by machine learning, which look at the state of each individual asset, predict the probability of failure into the future, and optimize maintenance and repair schedules based on that prediction along with other constraints.
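
To make the distinction concrete, the sketch below contrasts a condition-based alarm, which fires when a monitored parameter crosses a pre-set limit for the asset type, with a predictive alert, which fires when the modeled probability of failure for that specific asset exceeds an acceptable risk. The thresholds, feature names, and toy risk formula are assumptions for illustration only.

```python
# Illustrative contrast: a condition-based alarm reacts to the asset's
# current state, while a predictive alert reacts to the estimated
# probability of failure over a planning horizon. Thresholds, feature
# names, and the toy risk formula are assumptions for demonstration only.

VIBRATION_ALARM_MM_S = 7.0   # hypothetical pre-set limit for this asset type
RISK_BUDGET = 0.20           # maximum acceptable 30-day failure probability

def cbm_alert(vibration_mm_s: float) -> bool:
    """Condition-based maintenance: act when a monitored parameter
    crosses a pre-set limit defined for the general asset type."""
    return vibration_mm_s > VIBRATION_ALARM_MM_S

def predicted_failure_probability(vibration_mm_s: float,
                                  hours_since_overhaul: float,
                                  process_pressure_bar: float) -> float:
    """Stand-in for a trained model that scores one specific asset.
    A real implementation would learn from the asset's history and
    process context rather than use this toy formula."""
    score = (0.08 * vibration_mm_s
             + 0.00001 * hours_since_overhaul
             + 0.02 * max(process_pressure_bar - 8.0, 0.0))
    return min(score, 1.0)

def predictive_alert(vibration_mm_s: float,
                     hours_since_overhaul: float,
                     process_pressure_bar: float) -> bool:
    """Predictive maintenance: act when the modeled probability of failure
    within the planning horizon exceeds the acceptable risk."""
    p = predicted_failure_probability(vibration_mm_s, hours_since_overhaul,
                                      process_pressure_bar)
    return p > RISK_BUDGET

# An asset vibrating below the generic limit can still be flagged once
# its age and process stress are taken into account (and vice versa).
print(cbm_alert(6.5))                     # False: under the generic limit
print(predictive_alert(6.5, 18000, 9.5))  # True with this toy formula
```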

ARC Advisory Group recently updated its Maintenance Maturity Model to note the availability and benefits of these more sophisticated analytic approaches. They noted that moving from preventive maintenance to predictive and prescriptive models can deliver 50 percent savings in labor and materials, with a ripple effect that extends from shipping times to customer satisfaction. They also observe that new technologies in the industrial internet of things (IIoT) enable inexpensive, real-time asset monitoring. Measuring vibration, heat, lubricants, and other asset conditions in real time is essential for the enterprise to adopt Predictive and Prescriptive Maintenance. Creating a ‘digital twin’, or software model of the asset, gives analytics software a basis for comparing ideal and observed measurements.
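
In practice, a digital twin comparison can start as simply as tracking the residual between each observed measurement and the value the software model expects under the current operating conditions. The toy twin, sensor names, and tolerance below are assumptions for illustration, not a description of any specific product.

```python
# Minimal digital-twin sketch: compare observed IIoT measurements against
# the values a software model of the asset expects under current operating
# conditions. The expected-value model, sensor names, and tolerance are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PumpTwin:
    """Toy digital twin: predicts expected bearing temperature from load."""
    ambient_c: float = 25.0
    temp_rise_per_pct_load: float = 0.45

    def expected_bearing_temp(self, load_pct: float) -> float:
        return self.ambient_c + self.temp_rise_per_pct_load * load_pct

def residual(observed_temp_c: float, twin: PumpTwin, load_pct: float) -> float:
    """Deviation between the observed measurement and the twin's expectation.
    A persistent positive residual points to degradation rather than a
    change in operating conditions."""
    return observed_temp_c - twin.expected_bearing_temp(load_pct)

twin = PumpTwin()
readings = [(80, 62.0), (85, 64.5), (85, 71.0)]  # (load %, observed temp C)
for load, temp in readings:
    r = residual(temp, twin, load)
    status = "investigate" if r > 5.0 else "ok"
    print(f"load={load}%  observed={temp}C  "
          f"expected={twin.expected_bearing_temp(load):.1f}C  "
          f"residual={r:+.1f}  -> {status}")
```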

Doesn’t CBM provide many of the same benefits? Perhaps, to a lesser extent, but there is no reason to settle for CBM. In CBM, analytics examines the current state of the asset and raises an alarm for likely asset failure. However, not every condition that appears to be heading toward failure actually results in failure for that specific asset, and true maintenance optimization occurs only when an enterprise can reliably tell the difference. Avoiding unnecessary maintenance extends asset life at a fraction of the cost.

Predictive maintenance powered by machine learning should allow you to ‘see over the hill,’ beyond the current condition, to determine the most probable outcome given the current condition of each asset. The combination of machine learning and IIoT could prove to be the missing link in smart and effective asset maintenance.

 

 



DistribuTECH 2017 – Serious Networking for Energy Nerds


The annual DistribuTECH conference is right around the corner, this year at the San Diego Convention Center from January 31 – February 2.  With over 11,000 attendees from 78 countries and over 500 exhibiting companies, DistribuTECH is the place to be for those even mildly interested in energy transmission and distribution.

Space-Time Insight will be there, this year hosting pre-scheduled meetings in room 3946.  (Schedule your meeting here.)

You can also see Space-Time Insight’s advanced analytics in action on the exhibit floor in our partners’ booths.

Partner              Booth  Demo
Siemens              3113   Asset Intelligence integrated with Siemens Spectrum Power
Sentient Energy      1025   Distribution Intelligence integrated with Sentient AMPLE Platform
Live Data Utilities  2352   Distribution Intelligence integrated with Live Data RTI Platform

Visit our partners and see the future of advanced analytics for the internet of things today.  Register for DistribuTECH or download a free exhibit hall pass.

 


Machine Learning Analytics


Machine learning is all the rage, with business leaders scrambling to understand how it can benefit their organizations and, for some, even what machine learning is.  One thing is clear: the onslaught of data from the internet of things has made quickly scaling machine learning and advanced analytics the key to optimizing enterprise decision-making, operations, and logistics.

An enterprise-grade machine learning solution begins with three core capabilities:

  1. predictions without relying on knowledge of past events
  2. analysis and visualization of time series data
  3. optimized decision-making under uncertain conditions.

With these, an enterprise can put its data to work to improve operations and planning.
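
As a concrete illustration of the third capability, the sketch below compares the expected cost of maintaining an asset now against deferring, given a predicted probability of failure. The cost figures, risk values, and decision rule are placeholder assumptions for illustration.

```python
# Decision-making under uncertainty: act now if the planned maintenance
# cost is lower than the expected cost of deferring, where deferring
# risks a much larger unplanned failure. All figures are placeholders.

def expected_cost_of_deferring(p_fail: float, unplanned_failure_cost: float) -> float:
    """Deferring costs nothing this planning horizon unless the asset
    fails, in which case the unplanned failure cost is incurred."""
    return p_fail * unplanned_failure_cost

def recommend(p_fail: float,
              unplanned_failure_cost: float = 250_000.0,
              planned_maintenance_cost: float = 40_000.0) -> str:
    defer_cost = expected_cost_of_deferring(p_fail, unplanned_failure_cost)
    return "maintain now" if planned_maintenance_cost < defer_cost else "defer"

for p in (0.02, 0.10, 0.35):
    print(f"p_fail={p:.2f} -> {recommend(p)}")
```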


Handy resources to learn more about machine learning:

State of Enterprise Machine Learning

Major Roadblocks on the Path to Machine Learning

Mainstreaming Machine Learning


National Grid Webinar: Answering Your Questions


Recently, David Salisbury, Head of Network Engineering at National Grid, and Neil Barry, Senior Director EMEA at Space-Time Insight, presented the webinar “How Analytics Helps National Grid Make Better Decisions to Manage an Aging Network”, hosted by Engerati.  [Listen to the recording here.]  Unfortunately, not all of the submitted questions could be answered in the time allotted, so responses are provided in this post.

How were PDF data sources incorporated into your analytics? How will they be kept up to date?

To correct a point from the discussion in the webinar, PDF data sources were not analysed in the valves and pipeline use cases. For the corrosion use case, data from PDF reports was manually rekeyed into the analytics solution.

 

Are there mechanisms built into the system that facilitate data verification and data quality monitoring?

In the general case, metrics were computed for data completeness (i.e., how much of the desired data was actually available) and confidence (i.e., the recency of the data used). For the corrosion use case, there are checks for data consistency and completeness.  For pipelines and valves, these metrics have not yet been fully configured.
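
As a rough illustration of those two metrics, the sketch below scores completeness as the fraction of desired fields that are populated, and confidence as a value that decays with the age of the data. The field names and half-life are assumptions for illustration and do not reflect the configuration deployed at National Grid.

```python
# Rough illustration of the two data-quality metrics described above:
# completeness (how much of the desired data is available) and confidence
# (how recent the data is). The field names and half-life are assumptions.
from datetime import datetime, timezone

DESIRED_FIELDS = ["wall_thickness", "coating_condition",
                  "last_inspection", "soil_resistivity"]

def completeness(record: dict) -> float:
    """Fraction of the desired fields present with a non-null value."""
    present = sum(1 for f in DESIRED_FIELDS if record.get(f) is not None)
    return present / len(DESIRED_FIELDS)

def confidence(record: dict, half_life_days: float = 365.0) -> float:
    """Score that decays with data age: 1.0 for brand-new data,
    0.5 at one half-life, and so on; 0.0 if no date is available."""
    last = record.get("last_inspection")
    if last is None:
        return 0.0
    age_days = (datetime.now(timezone.utc) - last).days
    return 0.5 ** (age_days / half_life_days)

record = {
    "wall_thickness": 11.8,
    "coating_condition": "fair",
    "last_inspection": datetime(2016, 6, 1, tzinfo=timezone.utc),
    "soil_resistivity": None,
}
print(f"completeness={completeness(record):.2f}  confidence={confidence(record):.2f}")
```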

 

Could you describe how this helps with the audit trail?  As the system changes, the current snapshot is updated.  How do you show the status at a certain point in the past when a decision was made?

For the corrosion use case, the history is stored and accessible, providing an audit trail. The foundation analytics does offer a ‘time slider’ that delivers animated time series data, making it easy for the user to go back in time.  However, this is not currently configured for National Grid.

 

Please provide specific examples of how decisions were made based on analytics, and a demonstration of analytics/predictive analysis.

David described an example at around the eight-minute mark of the webinar – budgets used to be set locally, but the insight from analytics might show that a particular type of problem is concentrated in a specific geographic area. This can help with decisions around investment and risk.

 

How have you defined Asset Health? What data is required to assess?

Models for asset health were agreed upon by National Grid and Space-Time Insight during the implementation process. For pipelines, as was mentioned in the webinar, two of the data sets are Close Interval Potential Survey (CIPS) and Inline Inspection (ILI). For valves, a number of data sets are used, including test results and work orders.

 

Did you look at techniques to predict issues based on historical data…so you can target risk areas?

This has not been implemented by National Grid.  However, the product software has the capability to predict the probability of failure and the criticality of that failure, as one example.

 

Has Space-Time Insight worked on developing a situational intelligence tool for electric distribution and/or transmission applications, similar to the gas transmission monitoring developed for National Grid?

Yes, Space-Time Insight offers an asset intelligence solution for electricity transmission and distribution utilities.  More information is available online.
