Predict Failure versus Predictive Maintenance


In a recent post by ARC Advisory Group, Peter Reynolds notes that 80% of assets fail randomly despite being supported by programs designed for asset maintenance and reliability. Only 3-5% of maintenance performed is predictive. The vast majority of maintenance is either break-fix or executed based on the OEM’s asset maintenance schedule – needed or not.

A broad set of factors drives asset performance, including variability in process conditions and flow outside the asset itself, which previously may not have been considered relevant to determining asset condition. With advanced analytics, the compute power is now available to combine asset health, asset condition, and process variables to determine the asset's true risk of failure.
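As a rough illustration of what such a combination might look like, the sketch below blends the three signal families into a single risk score. The weights, the logistic squash, and the input scaling are all invented for illustration, not an actual scoring model:

```python
import math

def failure_risk(health, condition, process_variability, weights=(0.5, 0.3, 0.2)):
    """Blend normalized asset health, asset condition, and process signals
    (each scaled so 0 = good, 1 = bad) into a single failure-risk score."""
    w_h, w_c, w_p = weights
    raw = w_h * health + w_c * condition + w_p * process_variability
    # Logistic squash centered at 0.5, so mid-range inputs map near 0.5 risk
    return 1.0 / (1.0 + math.exp(-10 * (raw - 0.5)))

healthy = failure_risk(0.1, 0.2, 0.1)   # low stress across the board
stressed = failure_risk(0.8, 0.7, 0.9)  # degraded asset in a volatile process
```

The point of the sketch is simply that process variables enter the score alongside asset condition, rather than asset condition being judged in isolation.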

More importantly, machine learning provides a means to see beyond a conventionally understood state leading to asset failure. These machine learning models require an understanding of the operating and failure mode states of these assets. As Reynolds points out, this probably means working with operating personnel, not maintenance personnel, to develop the models. This marks a change from condition-based maintenance and less sophisticated predictive models.

Using sophisticated machine learning models, asset managers can know that a given asset will make it through a rough spot, rather than fail as condition monitoring or prognostic models might have predicted, and will in fact go on to operate longer. This suggests that the P-F curve in ARC's post could look more like a sine wave than a gradual drop-off. The key is to have confidence in the algorithm's prediction that failure is not actually imminent. Only the right set of machine learning analytics can predict far into the future without a loss of confidence.

Predictive and prescriptive analytics will indeed drive the next wave of improvements in asset performance. But only the right algorithms will provide the highest return on investment for those seeking lasting improvements in asset performance.


(Image: frimerke / 123RF Stock Photo)


DistribuTECH 2017 – Serious Networking for Energy Nerds


The annual DistribuTECH conference is right around the corner, this year at the San Diego Convention Center from January 31 – February 2.  With over 11,000 attendees from 78 countries and over 500 exhibiting companies, DistribuTECH is the place to be for those even mildly interested in energy transmission and distribution.

Spacetime will be there, this year hosting pre-scheduled meetings in room 3946.  (Schedule your meeting here.)

You can also see Spacetime’s advanced analytics in action on the exhibit floor in our partners’ booths.

 Partner              Booth  Demo
 Siemens              3113   Asset Intelligence integrated with Siemens Spectrum Power
 Sentient Energy      1025   Distribution Intelligence integrated with Sentient AMPLE Platform
 Live Data Utilities  2352   Distribution Intelligence integrated with Live Data RTI Platform

Visit our partners and see the future of advanced analytics for the internet of things today.  Register for DistribuTECH or download a free exhibit hall pass.



CrateDB SQL Database Puts IoT and Machine Data to Work


Space-Time Insight joined the launch of CrateDB 1.0, an open source SQL database that enables real-time analytics for machine data applications. We make extensive use of machine learning and streaming analytics, and CrateDB is particularly well suited to the geospatial and temporal data we work with, including support for distributed joins. It allows us to write and query sensor data at more than 200,000 rows per second and to query terabytes of data. Typical relational databases can't handle anywhere near the ingestion rate that Crate can.

Crate handles and queries geospatial and temporal data particularly well. We also get image (BLOB) and text support, which is important for our IoT solutions, as they are often used to capture images on mobile devices in the field and provide two-way communication between people and machines. Crate is also microservice-ready — we’ve Dockerized our IoT cloud service, for example.

Finally, our SI Studio platform uses Java and SQL and expects an SQL interface, so choosing Crate made integration straightforward and allowed us to leverage existing internal skill sets.
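Because CrateDB speaks standard SQL, the integration pattern looks like ordinary parameterized SQL from application code. The sketch below uses Python's built-in sqlite3 purely as a stand-in for a CrateDB connection; the table name and schema are invented, and a real deployment would use a CrateDB driver plus its geospatial and temporal types, which SQLite lacks:

```python
import sqlite3

# Stand-in for a CrateDB connection: the SQL access pattern is the same,
# even though the real client would be a CrateDB driver.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sensor_readings (
        sensor_id TEXT,
        ts INTEGER,     -- epoch millis
        value REAL
    )
""")

# Simulated high-rate ingest of sensor rows
rows = [("t-101", 1000 * i, 20.0 + i) for i in range(10)]
conn.executemany("INSERT INTO sensor_readings VALUES (?, ?, ?)", rows)

# Typical temporal query: recent readings for one sensor, in time order
recent = conn.execute(
    "SELECT ts, value FROM sensor_readings "
    "WHERE sensor_id = ? AND ts >= ? ORDER BY ts",
    ("t-101", 5000),
).fetchall()
```

The familiar interface is the point: because SI Studio already expects SQL, swapping the connection object is most of the integration work.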

Read more at Space-Time Insight and The Register.


Machine Learning Analytics


Machine learning is all the rage, with business leaders scrambling to understand how it can benefit their organizations, and for some, even what machine learning is.  One thing is clear: the onslaught of data from the internet of things has made quickly scaling machine learning and advanced analytics the key to optimizing enterprise decision-making, operations, and logistics.

An enterprise-grade machine learning solution begins with three core capabilities:

  1. predictions without relying on knowledge of past events
  2. analysis and visualization of time series data
  3. optimized decision-making under uncertain conditions

With these, an enterprise can put its data to work to improve operations and planning.
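As a toy illustration of the third capability, optimized decision-making under uncertainty means comparing expected costs rather than relying on gut feel. All the numbers below are invented:

```python
def expected_cost(p_fail, cost_fail, cost_action):
    """Expected cost of acting now versus waiting, given a failure probability."""
    act_now = cost_action          # certain cost of preventive action
    wait = p_fail * cost_fail      # probability-weighted cost of doing nothing
    return act_now, wait

# A 30% chance of a $100k failure makes a $20k preventive action the better bet
act, wait = expected_cost(p_fail=0.3, cost_fail=100_000, cost_action=20_000)
best = "act now" if act < wait else "wait"
```

Even this two-line calculation captures what intuition often misses: a low-probability event with a high cost can still dominate the decision.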


Handy resources to learn more about machine learning:

State of Enterprise Machine Learning

Major Roadblocks on the Path to Machine Learning

Mainstreaming Machine Learning


National Grid Webinar: Answering Your Questions


Recently David Salisbury, Head of Network Engineering at National Grid, and Neil Barry, Senior Director EMEA at Space-Time Insight, presented the webinar “How Analytics Helps National Grid Make Better Decisions to Manage an Aging Network“, hosted by Engerati.  [Listen to the recording here.]  Unfortunately, not all of the submitted questions could be answered in the time allotted.  Responses are provided in this post.

How were pdf data sources incorporated into your analytics? How will that be kept up to date?

To correct the discussion in the webinar: pdf data sources were not analyzed in the valves and pipeline use cases. For the corrosion use case, data from pdf reports was manually rekeyed into the analytics solution.


Are there mechanisms built into the system that facilitate data verification and data quality monitoring?

In the general case, metrics were computed for data completeness (e.g., of the desired data, how much was actually available) and confidence (e.g., how recent was the data we used). For the corrosion use case, there are checks for data consistency and completeness.  For pipelines and valves, these metrics have not yet been fully configured.
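As a sketch of how such metrics might be computed (the field names and the half-life are hypothetical, not National Grid's actual configuration):

```python
from datetime import datetime, timedelta

def completeness(record, required_fields):
    """Fraction of required fields actually present (non-None)."""
    present = sum(1 for f in required_fields if record.get(f) is not None)
    return present / len(required_fields)

def recency_confidence(observed_at, now, half_life_days=365):
    """Confidence halves for each half-life elapsed since the observation."""
    age_days = (now - observed_at).days
    return 0.5 ** (age_days / half_life_days)

now = datetime(2017, 1, 1)
record = {
    "wall_thickness": 11.2,                        # present
    "coating": None,                               # missing
    "last_inspection": now - timedelta(days=365),  # a year old
}
c = completeness(record, ["wall_thickness", "coating", "last_inspection"])
conf = recency_confidence(record["last_inspection"], now)
```

Scores like these can then be surfaced alongside the analytics results so users know how much to trust them.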


Could you describe how this helps with the audit trail?  As the system changes, the current snapshot is updated.  How do you show the status at a certain point in the past when a decision was made?

For the corrosion use case, the history is stored and accessible, providing an audit trail. The underlying analytics does offer a ‘time slider’ that delivers animated time series data, making it easy for the user to go back in time.  However, this is not currently configured for National Grid.


Please provide specific examples of how decisions were made based on analytics, and a demonstration of the analytics/predictive analysis.

David described an example at around the eight-minute mark of the webinar: budgets used to be set locally, but insight from analytics might show that a particular type of problem is concentrated in a specific geographic area. This can help with decisions around investment and risk.


How have you defined Asset Health? What data is required to assess?

Models for asset health were agreed upon by National Grid and Space-Time Insight during the implementation process. For pipelines, as was mentioned in the webinar, two of the data sets are Close Interval Potential Survey (CIPS) and Inline Inspection (ILI). For valves, a number of data sets are used, including test results and work orders.


Did you look at techniques to predict issues based on historical data…so you can target risk areas?

This has not been implemented by National Grid.  However, the product software has the capability to predict the probability of failure and the criticality of that failure, as one example.


Has Space-Time Insight worked on developing a situational intelligence tool for electric distribution and/or transmission applications, similar to the gas transmission monitoring developed for National Grid?

Yes, Space-Time Insight offers an asset intelligence solution for electricity transmission and distribution utilities.  More information is available online.


How To Not Be Stubborn In 2020


I pulled the following two predictions about analytics and decision making from a recent list of 100 predictions by Gartner analysts (subscription required):

  • By 2018, decision optimization will no longer be a niche discipline; it will become a best practice in leading organizations to address a wide range of complex business decisions.
  • Through 2020, over 95% of business leaders will continue to make decisions using intuition, instead of probability distributions, and will significantly underestimate risks as a result.

Apparently most of us will refuse to get the message about optimizing decisions, even after years of tools and best practices being in place. In Gartner’s 2020, we’re all still stubborn foot-draggers.

In my experience, predictions like these often require a grain of salt. Generalizations such as “over 95% of business leaders” at “leading organizations” who “significantly underestimate risk” lack the mathematical precision necessary to inspire confidence and change behavior.

Predictions like this often contain a grain of truth as well. We frequently prefer our personal comfort zone, resist change, suffer from confirmation bias, and respect the confines of our organization’s formal and informal culture.

Keep in mind that being stubborn can quickly lead to being history. Accenture CEO Pierre Nanterme notes that half of the companies in the Fortune 500 disappeared between 2000 and 2015. Why? New business models based on digital technologies, including decision optimization. The rapid pace of change and disruption will continue, and even accelerate.

So, how do you avoid becoming a historical footnote by 2020?

  • Start with the end in mind. Decision optimization starts with the BI dashboards that (I hope) you are using today, and extends to advanced analytics that include prediction, simulation, and prescription. Knowing where you’re heading helps you plan a route and schedule for reaching your destination.
  • Start small. You won’t get to optimal decisions immediately. Identifying what decisions you can automate helps you pinpoint feasible projects with measurable ROI. Chances are, regardless of how digital your industry is now, there is low-hanging fruit to be picked.
  • Start now. Start this quarter or this month or this week, or even today. With hosted and cloud solutions, you don’t need to complete a big IT project before you can start improving decision making through analytics. In fact, you don’t have time for the typical enterprise project that requires years.

The year 2020 may seem like a long way off.  In truth, it’s 12 calendar quarters away. That’s not long. Start now and you’ll be 12 quarters ahead of some other stubborn dog.


How digital is your industry?



Jeff Bezos, founder and CEO of Amazon, famously wrote in 1997 that it was “Day One of the Internet.” Now nearly 20 years later, he still feels that we’re at Day One, and early in the morning to boot. How can that be, given how pervasive and transformative digital technology seems these days?

This Harvard Business Review video describes a McKinsey Global Institute survey about just how digital various industries are today.

The survey examined 27 digital characteristics about assets, usage, and labor across more than 20 industry sectors. It uncovered plenty of room to bring digital technology and approaches into vast areas of our economy.

A few sectors, such as IT, media, finance, and professional services, are heavily digital. Other industries such as real estate and chemicals have adopted an ad hoc or tactical approach to digital, but have not converted their value and supply chains to digital from end to end. There remain large portions of the economy, including key functions such as government, healthcare, construction and agriculture, that have very little digitization.

The video points out that the Industrial Internet of Things gives capital-intensive industries such as utilities, manufacturing, and transportation significant opportunity for improvement by connecting physical assets to data and communications networks.

The video also notes that possibly the biggest area for improvement lies in providing workers with digital tools and processes. Providing analytics across the enterprise, instead of keeping it the sole province of IT or data science, is an empowering step industries can take for more productive workers.

Many emerging and developing countries have skipped telephone landlines and moved directly to mobile technology. Can the digital laggards of the economy similarly leapfrog previous digital stages and move directly to end-to-end digital processes with connected, digital assets and advanced analytics?

To my mind, situational intelligence can help government leapfrog from laggard to leader. Many government applications center on documents, payments, and professional services, tasks that are already heavily digitized in other sectors. Government also involves a lot of transportation and real estate functions, sectors that are ahead of government digitally and poised to benefit from the Industrial Internet of Things.

(Image: goodluz / 123RF Stock Photo)


Can Analytics Make Large Power Transformers Immortal?


Large power transformer

Large power transformers (LPT) are the workhorses of the North American electric transmission grid, and many have lived past their life expectancy. The U.S. Department of Energy reports that the average LPT is 40 years old, and 70 percent of LPTs are 25 years or older. These assets are becoming a weak link in the chain of networked transmission assets and may be subject to catastrophic failure, including from severe weather.

If transmission systems fail, large-scale outages can occur. According to a different Department of Energy report, 85 percent of U.S. outages affecting 10,000 customers or more in 2015 were caused by weather or by asset failure. These outages cause customers economic harm and cost transmission organizations lost revenue and reputational damage. Regulators are increasingly focused on loss-of-load probability and loss-of-load hours, both key reliability measures.

The Department of Energy also reports that LPTs cost up to $7.5 million each, weigh up to 400 tons, and take up to 18 months to procure and install. The money and time required go up significantly if new engineering is required. The cost of LPTs accounts for 15-50 percent of total transmission capital expenditures.

For all these reasons, there is an urgent need to understand the operational contingency of heavily loaded LPTs to manage and reduce outages, and to bridge the time until critical LPTs can be replaced.

With analytics, you can make the most of what you have while planning for new assets. And with an 18-month delivery cycle, utilities need to start that analysis now.

The growing array of smart, connected devices in the transmission system generates large silos of data. That data can be useful in maintaining safe, reliable, affordable and sustainable transmission operations, but it cannot be correlated, analyzed and applied in a timely manner without advanced visual analytics.

Recently, Siemens and Space-Time Insight announced a partnership in part to tackle the issue of large power transformers.

Situational intelligence provides a number of ways to apply advanced analytics to silos of data for managing the current population of LPTs more effectively. Consider three scenarios:

  • By correlating and analyzing the health of LPTs along identified transmission corridors with demand forecasts and power dispatch schedules, transmission operators are able to prioritize the delivery of power using assets that are relatively healthier than other assets. This helps organizations increase grid reliability and make the most of their current assets.
  • By correlating weather forecasts, LPT health and forecasted energy demand, analytics gives transmission operators advanced warning of weather impacts on transmission assets so that they can respond accordingly to avoid outages and asset damage.
  • By forecasting the impact of removing some LPTs from service, analytics gives transmission planners better insight for planning and executing outages necessary for LPT maintenance, repair and replacement, potentially extending the life of these essential assets.

With the scale of the power grid and past deficit of investment in transmission infrastructure, we will be dealing with aged LPTs for many years. Analytics gives us tools to make the most of the assets we currently have, but it won’t make LPTs immortal.


Identifying Decisions That You Can Automate



Automated decisions are making the news. This year, Tesla cars in autopilot mode have been involved in crashes in Florida, Montana and China. (The company contends that the drivers were not using autopilot mode properly.) Meanwhile, on another planet, the Mars Rover can now make its own decisions about which rocks to investigate. Keep your eyes open, because I believe these stories will quickly become more prevalent.

When analytics drives automated decisions, people are faced with another set of decisions about how that capability fits into the work, culture and mission of the organization. It’s akin to when you hire a new employee. What role does the new capability fill? How much autonomy is granted to the new capability? What review and verification processes are in place to ensure safe, productive and profitable work?

As stated in an earlier blog post, it’s unlikely that automation will replace entire jobs. New job descriptions will be written, and existing ones likely rewritten, to specify how humans and automation will interact to fulfill a needed role.

The analytics entering organizational roles for the foreseeable future will be focused on specific tasks, if not built for specific purposes. Finance and investment companies are using analytics extensively for trading and portfolio composition, but those same analytics aren’t likely to make employee benefits decisions without modification.

Because they are purpose-built, analytics need to specialize in predictable decisions that they perform repeatedly. This is exactly the sort of dull work that’s best left to automation, since people tend to get tired, bored and distracted doing repetitive work.

At least initially, organizations will want to assign to analytics decisions that carry known and usually low-level consequences. Consequences can be measured by the amount of money at stake, the number of employees or customers affected, or the ease with which an automated decision can be reversed if need be. Analytics can help accurately define the type, scope and severity of consequences associated with decisions.

The metrics of predictability and consequence come together nicely in a video from the Harvard Business Review describing how to decide which decisions you can entrust to automation.

What about frequency of decisions? Some decisions, like short-term financial trading, happen so rapidly that humans can’t make every single call. In these situations, humans move from doing the work to maintaining, tuning and improving the automated systems that do the work. Other decisions, such as whether to acquire another company, happen so infrequently that automating the decision probably isn’t worth the effort. Between these two points lies a spectrum of decision frequency that organizations must also weigh in identifying decisions to automate.

The framework of predictability, consequence and frequency gives organizations the model they need to determine what role automated decisions will play. What decisions would you like to automate in your organization, and how would you score them for predictability, consequence and frequency?
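One minimal way to make the framework concrete is a rough composite score. The formula and the example inputs below are my own invention for illustration, not an established metric:

```python
def automation_score(predictability, consequence, frequency):
    """Score a decision's suitability for automation on a 0-1 scale.
    All inputs are normalized to 0-1. High predictability and high
    frequency favor automation; high consequence counts against it."""
    return predictability * frequency * (1 - consequence)

# Routing high-frequency trades: predictable, frequent, modest consequence
trade_routing = automation_score(predictability=0.9, consequence=0.2, frequency=1.0)

# Acquiring a company: unpredictable, rare, enormous consequence
acquisition = automation_score(predictability=0.2, consequence=0.9, frequency=0.01)
```

Ranking candidate decisions with even a crude score like this gives an organization a defensible first list of what to automate and, just as usefully, what to leave to people.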

(Image courtesy of forplayday / 123RF Stock Photo)


Analytics And Transporting Crowds Of Olympics Fans



With the European Football Championships just ended and the Olympics about to start, the summer of 2016 will have seen two major sporting events that happen only once every four years. These come in addition to regular annual events such as Wimbledon, the British Grand Prix and the Rugby League Challenge Cup Final. Events like these prompt a lot of travel, whether local or international. Such spikes in travel can strain transport networks and make it hard for people to get around.

Although the football championship was in France and the Olympics are in Brazil, back home in the UK a huge number of people will likely watch these events live, whether in a pub, a sporting club or at home. A huge number of people will also have traveled to Wimbledon and Silverstone, as well as those who made the trip to France and the more adventurous who might descend on Brazil.

Of course, in a world where we can watch all of our TV on demand, it doesn’t really matter if we miss one of our favorite programs. With live sport, however, it is extremely difficult to avoid social media, news alerts and radio during a game. So a lot of people will watch sport live to keep the result from being spoiled.

Take the Olympics. Not only will many people travel to Brazil from all over the world, they then need to travel within the country to see various events. Local Brazilians also need to travel around the country to attend events while conducting their usual business. The result is a surge in travel across the country for the duration of the Games.

How can analytics help in these cases?

Using data to predict spikes in demand for transportation could be paramount to the success of a large sporting event such as the Olympics. For example, the number of tickets sold for an event in one of the satellite locations in Brazil could indicate that many people will be traveling from Rio at the same time. IoT and data analytics make it possible to look ahead to such an event and predict who might be traveling and what effects that could have. Enriching the data further with the city or postal code of each ticket purchaser could tell planners where people are traveling from.
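A minimal sketch of that enrichment step, using invented ticket data, might simply aggregate purchasers by origin:

```python
from collections import Counter

# Hypothetical ticket sales: (event, purchaser postal/city code)
tickets = [
    ("football-final", "RIO-20000"),
    ("football-final", "RIO-20000"),
    ("football-final", "SAO-01000"),
    ("rowing",         "RIO-22000"),
]

# Count likely travelers per origin for one event, so planners can
# anticipate load on routes out of each city.
origins = Counter(code for event, code in tickets if event == "football-final")
```

At real-event scale, the same aggregation over millions of tickets would highlight which corridors into a venue need extra capacity and when.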

Of course, prediction is difficult because many of the venues are new and Brazil hasn’t hosted the Olympics before. But by pulling together data from previous transport networks and large events, planners might be able to predict where blockages or problems could occur. Predicting potential problems offers the opportunity to prevent them from occurring in the first place.

The main aim would be to examine passenger information for the main transport hubs, see where problems typically occur, and then predict what could happen when these places are far busier than usual. Brazil wants to make a good impression on visitors during the Olympics, and wants its own people to be proud that it did a good job. Predicting how the transport networks could be affected means travelers stay happy and safe while visiting the country, the networks remain reliable, and the country sees overall economic benefit from hosting a large sporting event.

(Image courtesy of paha_l / 123RF Stock Photo)