National Grid Webinar: Answering Your Questions


Recently David Salisbury, Head of Network Engineering for National Grid, and Neil Barry, Senior Director EMEA at Space-Time Insight, presented the webinar “How Analytics Helps National Grid Make Better Decisions to Manage an Aging Network“, hosted by Engerati.  [Listen to the recording here.] Unfortunately, not all of the submitted questions could be answered in the time allotted, so responses are provided in this post.

How were PDF data sources incorporated into your analytics? How will that be kept up to date?

To correct the discussion in the webinar: PDF data sources were not analysed in the valves and pipeline use cases. For the corrosion use case, data from PDF reports was manually rekeyed into the analytics solution.

 

Are there mechanisms built into the system that facilitate data verification and data quality monitoring?

In the general case, metrics were computed for data completeness (e.g., of the desired data, how much was actually available) and confidence (e.g., how recent was the data we used). For the corrosion use case, there are checks for data consistency and completeness.  For pipelines and valves, these metrics have not yet been fully configured.
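The completeness and confidence metrics described above can be sketched in a few lines. This is a minimal illustration, not National Grid's actual implementation; the field names and the 100-day staleness window are hypothetical.

```python
from datetime import datetime, timezone

def completeness(expected_fields, record):
    """Of the desired data, what fraction was actually available (non-null)?"""
    present = sum(1 for f in expected_fields if record.get(f) is not None)
    return present / len(expected_fields)

def confidence(last_reading, max_age_days, now):
    """1.0 for fresh data, declining linearly to 0.0 at max_age_days old."""
    age_days = (now - last_reading).total_seconds() / 86400
    return max(0.0, 1.0 - age_days / max_age_days)

# Hypothetical corrosion record: one of three expected fields is missing
record = {"wall_thickness_mm": 11.2, "coating_condition": None,
          "last_survey": datetime(2015, 1, 1, tzinfo=timezone.utc)}
c = completeness(["wall_thickness_mm", "coating_condition", "last_survey"], record)
now = datetime(2015, 1, 11, tzinfo=timezone.utc)
f = confidence(record["last_survey"], 100, now)  # 10 of 100 days elapsed -> 0.9
```

A real system would compute these per data set and per asset, then surface low scores alongside the health indices so users know how much to trust a given result.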

 

Could you describe how this helps with the audit trail?  As the system changes, the current snapshot is updated.  How do you show the status at a certain point in the past when a decision was made?

For the corrosion use case, the history is stored and accessible, providing an audit trail. The foundation analytics platform does offer a ‘time slider’ that animates time series data, making it easy for the user to go back in time.  However, this is not currently configured for National Grid.

 

Please provide specific examples of how decisions were made based on analytics and demonstration of analytics/predictive analysis

David described an example at around the eight-minute mark of the webinar – budgets used to be set locally, but insight from analytics might show that a particular type of problem is concentrated in a specific geographic area. This can help with decisions around investment and risk.

 

How have you defined Asset Health? What data is required to assess?

Models for asset health were agreed upon by National Grid and Space-Time Insight during the implementation process. For pipelines, as was mentioned in the webinar, two of the data sets are Close Interval Potential Survey (CIPS) and Inline Inspection (ILI). For valves, a number of data sets are used, including test results and work orders.

 

Did you look at techniques to predict issues based on historical data…so you can target risk areas?

This has not been implemented by National Grid.  However, the product software has the capability to predict the probability of failure and the criticality of that failure, as one example.

 

Has Space-Time Insight worked on developing a situational intelligence tool for electric distribution and/or transmission applications, similar to the gas transmission monitoring developed for National Grid?

Yes, Space-Time Insight offers an asset intelligence solution for electricity transmission and distribution utilities.  More information is available online.


Pipeline Analytics Lower Natural Gas Risk



Pipeline accidents in Allentown, Pennsylvania in 2011 and Sissonville, West Virginia in 2012 destroyed homes, caused death and injury, and reminded us how critical careful gas transmission and distribution really is. We are accustomed to natural gas reliably powering our homes and businesses, but even those responsible for getting it there can take that system for granted.

Safety and aging infrastructure are top concerns of natural gas executives surveyed by Black & Veatch in 2015, and with good cause. Replace just five percent of a room’s air with natural gas and the atmosphere becomes explosive. According to a U.S. Department of Transportation report, nearly one-third of natural gas distribution pipelines in the U.S. were built before 1970.  More than 50,000 miles of these older pipes were welded together with outdated techniques that are prone to failure.

Despite these known conditions and obvious hazards, one-third of respondents to the Black & Veatch survey did not have a resilience plan in place for their natural gas operations four years after the Pennsylvania and West Virginia accidents. Fifty-four percent of respondents agreed with the statement that “a formal risk-based planning approach has not yet been undertaken to my knowledge.”

Why should so many utility executives be without a risk-based plan, when the consequences of risk are so high?

Much of our natural gas infrastructure is hidden underground, which means it is often out of sight and thus out of mind, contributing to our becoming inured to problems. Gas utilities have developed clever methods such as pigging, hydro testing, and cathodic inspection for measuring and maintaining the health of pipes. Those clever methods generate multiple, disparate sources of data about assets. Today, many utilities are flooded with data but no closer to fresh and useful insights based on that data.

Situational intelligence offers a powerful approach to analyzing those data sources and quantifying the risk present in natural gas assets. By correlating, analyzing and visualizing data related to an asset’s age, condition, location, network relationships, and operating history, situational intelligence provides a method for making decisions based on the likelihood of asset failure and the consequences should failure occur.

With this specific understanding of risk, natural gas managers and executives can prioritize maintenance, repair, refurbishment and replacement work to focus first on the most critical assets. This approach drives down risk faster than following a time-based or even condition-based approach to asset planning and operations.
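The prioritization logic described above amounts to ranking assets by expected loss. Here is a minimal sketch; the segment names, failure probabilities, and consequence figures are entirely hypothetical, invented to show the ordering.

```python
def risk_score(probability_of_failure, consequence_usd):
    """Expected loss: likelihood of failure times the cost should it occur."""
    return probability_of_failure * consequence_usd

# Hypothetical pipeline segments: (name, probability of failure, consequence in $)
segments = [
    ("segment-A (1965 girth welds)", 0.08,  5_000_000),
    ("segment-B (1998, coated)",     0.01,  8_000_000),
    ("segment-C (1972, urban)",      0.05, 20_000_000),
]

# Work first on the assets carrying the most risk, not simply the oldest ones
prioritized = sorted(segments, key=lambda s: risk_score(s[1], s[2]), reverse=True)
for name, p, c in prioritized:
    print(f"{name}: expected risk ${risk_score(p, c):,.0f}")
```

Note that segment-C tops the list even though segment-A is older and likelier to fail: the urban consequence dominates. That is the difference between condition-based and risk-based planning.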

We can’t fully eliminate risk, but we now have the analytics approaches to understand, quantify and lower risk to help prevent future pipeline accidents.

(Image: smereka / 123RF Stock Photo)


Analytics and Vegetation



Electric utilities, cable operators, pipeline companies, railroad, municipalities—all will tell you that it’s a jungle out there. Vegetation has a way of interacting with and interrupting the operations of technologically sophisticated and complicated networks. Even your wireless communication networks are not immune to the impacts of vegetation.

Vegetation causes trouble in several ways:

  • Falling onto assets, such as trees falling across roads, damaging them or rendering them unusable
  • Growing into assets, such as roots growing into sewer lines, lowering their performance or making them fail
  • Making contact with assets and causing malfunctions, such as tree limbs touching power lines and causing power outages or sparking fires
  • Allowing wildlife to contact assets and cause equipment failure, such as bushes helping squirrels enter substations and disrupt power operations
  • Obstructing rights of way such as roads, bridges, tunnels and waterways, for example reeds and seaweed clogging ship channels

Similarly, the lack of vegetation can also be a problem. Slopes that have lost their vegetation due to wildfires during times of drought become prone to erosion and landslides when rains finally return. If these areas are adjacent to roads, waterways, power lines, pipelines or other assets that you own or operate, sudden ground movement from erosion or landslide could damage your equipment or block access.

An asset-intensive organization can spend millions of dollars per year on spraying, trimming, pruning, removing and replanting vegetation. It’s labor-intensive work with costs that add up quickly. When you experience an unplanned event related to vegetation—tree fall, landslide, brush fire—your emergency costs pile up while services are interrupted.

There are vegetation management systems available to organizations today. Maybe you use one. These mainly target the management of scheduled activities, routes and workers. They are useful, and can be made more valuable by integrating intelligence about actual and potential problems into the scheduling of trim activity. Advanced analytics can identify the areas most in need of trimming or other management, and can also optimize overall crew schedules, so that your vegetation management improves reliability and safety while lowering operational costs.

A situational intelligence approach to understanding your vegetation challenges and potential problems maps vegetation’s proximity to your networks, predicts how vegetation will grow and interact with those networks over time, and prioritizes the geographic locations and network sections most susceptible to vegetation problems.

Data about tree and plant species, microclimates, past and future rainfall, time of year, and other variables informs growth models that improve your vegetation management schedules. By applying analytics to this data, you can prioritize your work more effectively to address true problem areas and not just the next assignment in the vegetation management cycle.
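A growth model of the kind described can be sketched very simply. The species growth rates, span names, and clearances below are hypothetical placeholders; a real model would also weigh microclimate, rainfall, and season as the paragraph above notes.

```python
# Hypothetical average growth rates in meters per year, by species
GROWTH_M_PER_YEAR = {"eucalyptus": 2.5, "oak": 0.6, "pine": 0.9}

def months_until_contact(species, clearance_m):
    """Rough estimate of when vegetation closes the gap to a conductor."""
    rate_per_month = GROWTH_M_PER_YEAR[species] / 12
    return clearance_m / rate_per_month

# Hypothetical line spans: (id, dominant species, current clearance in meters)
spans = [("span-101", "eucalyptus", 1.0),
         ("span-102", "oak", 0.5),
         ("span-103", "pine", 2.0)]

# Trim the spans that will make contact soonest, not the next ones in the cycle
schedule = sorted(spans, key=lambda s: months_until_contact(s[1], s[2]))
```

Here span-101 jumps the queue: despite having more clearance than span-102, its fast-growing eucalyptus will make contact in under five months.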

By working differently, working smarter, you can optimize your vegetation operations budget and make your networks and assets more reliable.

Copyright: alephcomo / 123RF Stock Photo


The Science of Visualization: Receptor-driven Design for Augmented Reality


Color brings beauty to our eyes, whether from the wings of a monarch butterfly or the broad brush strokes of a Van Gogh painting. Color also allows us to assign meaning and organization to items. At some point, most people have to ask how they should use color, whether they are animating a cartoon character, painting an accent wall or, in my case, making a graphical user interface.

Here I will explain how I would go about using color for utilities-specific augmented reality applications.

The use of color rests on how our eyes and brains process light and detail. When selecting interface colors, I ask myself: what colors should I use, and how can I maximize readability while minimizing distraction?

It helps to think about how the visual system processes color. In the eye, there are two types of receptors that process light: rods and cones.

(Image: rods and cones in the retina)

Rods are bad for color but great at detecting light and movement. Cones are great for color and fine detail.

Color perception arises partly from the activity pattern of three types of retinal cones, each suited to a different wavelength of light: short, medium and long. These cones work in combination to send signals to the lateral geniculate nucleus and visual cortex, which determine the color we perceive.

Your visual cortex processes most information from the red and green receptor cones gathered in a small indent in the back of your eye, called the fovea, and more cortical space is devoted to processing red and green. What is the takeaway? Since blue receptors are scarce in your fovea, your brain works less to process blue. Furthermore, rods also respond to blue light, meaning even less energy is devoted to perceiving it.

Receptor-driven design

These variations in how we process light and color lead car designers to two opposing dashboard color philosophies: blue and red.

(Image: red and blue car dashboards)

Red wavelengths affect mainly the cones, leaving the rods unsaturated, which preserves night vision. On the other hand, red light lands on your fovea, which means you use more visual cortex resources to process it at high acuity. With blue dashboards, your cones do less high-acuity work, so you use fewer visual cortex resources. The trade-off is that your rods are processing light from two sources, the road and your dashboard, and therefore work harder.

Cortical magnification

Hold up just one finger on your hand and look at it–your brain increases magnification in your visual cortex, using more cones and fewer rods.  Now look at all five fingers on your hand–your brain lowers magnification, which consumes fewer resources in your visual cortex and relies on fewer cones and more rods.

(Image: one finger versus five fingers)

Interestingly, if you hold up both hands, with all five fingers extended on the right and only the index finger extended on the left, your visual cortex activates far more, and dedicates more total volume, to that single finger than to your right hand with all five fingers extended.

So, how does any of this apply to Augmented Reality? Let’s take a look.

(Image: augmented reality interface for utility field work)

Decreasing cortical magnification and acuity.

Here’s an interface that utility workers might use to assess linear assets in the field. The colors are pleasing, modern, unobtrusive–but that’s not the point of the colors. The color design helps field users visualize information more effectively and effortlessly by drawing attention to only what matters at present.

Remember that rods are most sensitive to light and dark changes, shape and movement, and place the smallest demand on the visual cortex. Let’s put as many UI elements in our peripheral vision as we can, unless they represent the most important data at the current point in time.

Contextual activation of receptors

Let’s make all our buttons and elements blue or white if we can, so they are less taxing on our visual systems. We use green and red very sparingly since they fall right in our fovea. Red alerts us to where the problem is reported via data being uploaded to our system. Green directs our attention to the start and end of where we think our linear asset is experiencing trouble. We can drag, drop, and slide around the placemarks all we want to better approximate and update the data source in real time, allowing asset planners to better diagnose corrective steps to take.
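The color rules above boil down to a small role-to-color mapping. This is a hypothetical sketch of such a palette, not the actual interface's values; the hex codes and role names are invented for illustration.

```python
# Hypothetical palette encoding the receptor-driven rules described above:
# blue and white for routine elements (cheap for the visual system to process),
# red and green reserved for the few foveal, attention-demanding markers.
PALETTE = {
    "button": "#4A90D9",          # blue: low processing cost
    "label": "#FFFFFF",           # white: neutral, peripheral-friendly
    "problem_marker": "#D0021B",  # red: reported fault location only
    "extent_marker": "#7ED321",   # green: start/end of the affected section only
}

def color_for(role):
    """Default any unknown element role to the unobtrusive white."""
    return PALETTE.get(role, PALETTE["label"])
```

Keeping the mapping centralized like this makes it easy to audit how sparingly the high-cost foveal colors are actually used across an interface.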

Now that you understand more about how your brain works with light and detail, you can start to notice how products and programs around you are using color to do more than just look pretty.

 


How Can Utilities Maximize Their Assets?


Electric utilities today are grappling with enormous changes in the way energy is produced, distributed and consumed, wrought by renewable and distributed energy sources, smart meters, empowered consumers, changing regulatory models and more.

Accommodating these changes has led to a huge investment in new utility assets that must be integrated and managed alongside a vast portfolio of legacy assets. The range of assets operated by a typical utility spans dozens of categories – from wooden poles to smart meters to high-voltage transformers. To put this in context, the volume of assets a utility needs to manage can add up to tens of millions within a single operational territory.

To efficiently manage this ever-changing asset portfolio, utilities need insight into how their assets are used, and that requires solutions that bridge the gap between data available in enterprise applications and physical assets in the field. This type of intelligence allows organizations to analyze the available data to know where to invest their time, money and skills to reduce risk and operational costs.

One example of how utilities gain this type of insight into assets is the new Asset Intelligence 4.0 application. With the latest enhancements, Asset Intelligence gives utilities complete transparency of operational status across the organization, and this ultimately gives them the resources they need to manage their valuable assets and make informed decisions at a moment’s notice.

If you’re curious, read more about the new version of Asset Intelligence.

 

 


About Digital Asset Management: Answering Your Webinar Questions


I recently presented an Energy Central webinar on digital asset management, along with Bill Ernzen of Accenture. Unfortunately, the webinar took up all of the allotted time, leaving no time for Q&A with the audience.

Energy Central kindly shared with me the questions that participants submitted during the webinar. I’ll do my best to answer those questions here. I’ve adapted some of the questions to make them work as part of a blog post.

 

The focus of your talk is based on distribution utilities. How do concepts presented apply to the working environment of transmission utilities?

In the Asset Intelligence section of the webinar, we demonstrated the software’s capabilities in scoring the condition metric for large transformers and charting dissolved gas trends over time on a Duval triangle, along with the DGA metrics. Similar capabilities are built into the software for high-voltage circuit breakers, tap changers and generator step-up transformers, among many other asset types.

Do you see any difference in adoption of digital asset management across gas, electricity or water companies?

All utilities that installed their assets in the 1960s and 1970s face a similar challenge, whether they are electric, gas or water. Though gas, water and sewer utilities operate differently than electric utilities, the underlying pain points are very much alike, and our software accommodates those needs in how it performs the analysis. The software is structured to perform analysis around business-centered needs, and the method applies to electric, gas and water utilities. It builds from asset health indices to probabilities of failure to risk scores, which are then used for maintenance prioritization, replace-versus-refurbish analysis and capital planning. The presentation and analysis capabilities are also very similar, except for specific regulations that change some calculations.

Is there any dollar value on what digital asset management can help avoid as ‘risk’ or prevent as ‘avoided cost’?

Putting a dollar value on avoided cost is fairly straightforward. For example, if you can successfully postpone spending money, then you can use your cost of borrowing (weighted average cost of capital) and the inflation rate to calculate the financing costs you didn’t pay because you didn’t do the project.

Putting a dollar value on avoided risk is a little more abstract. Let’s say that an aging power transformer represents an economic consequence of $10 million, should it fail. Assume that, before you start your digital asset management project, that transformer has a ten percent probability of failure. You could describe the transformer’s risk as ten percent of $10 million, or $1 million.

If digital asset management practices directed you to refurbish the transformer to reduce the probability of failure to one percent, then you lowered the risk by $900,000. If your refurbishment project cost you $400,000, then you realized a 225 percent ROI in terms of risk reduction.
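This risk-reduction ROI arithmetic is easy to encode. The sketch below uses hypothetical inputs (a $10 million consequence, failure probability cut from ten percent to one percent, a $400,000 project cost) purely for illustration.

```python
def risk_usd(probability_of_failure, consequence_usd):
    """Risk expressed in dollars: probability times economic consequence."""
    return probability_of_failure * consequence_usd

def risk_reduction_roi(p_before, p_after, consequence_usd, project_cost_usd):
    """Risk avoided per dollar of project spend."""
    avoided = risk_usd(p_before, consequence_usd) - risk_usd(p_after, consequence_usd)
    return avoided / project_cost_usd

# Hypothetical transformer: $10M consequence, probability cut from 10% to 1%
roi = risk_reduction_roi(0.10, 0.01, 10_000_000, 400_000)
# avoided risk = $900,000 against a $400,000 project -> ROI of 2.25 (225 percent)
```

The same function lets you compare candidate projects on equal footing: whichever refurbishment buys the most risk reduction per dollar goes first.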

How do you define criticality? Aren’t you mixing probability and consequences in your definition of criticality?

In our definition, criticality is the resulting consequence should a failure occur. Consequence can encompass lost revenue from power outage, cost to replace damaged equipment, crew wages to restore power and replace equipment, and other costs. A unique feature of our software is that it runs a connectivity analysis through a topology processor to identify upstream and downstream assets and impacts thereof.

Probability is the likelihood that failure will occur, regardless of the consequences of failure. Probability is based on the age of the asset and its condition, load factor, network relationships and other considerations.

Risk is the product of failure probability multiplied by criticality.
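The connectivity analysis mentioned above can be illustrated with a toy radial network. This is a hypothetical sketch, not the topology processor in the product; the feeder structure and the idea of counting downstream assets as a criticality proxy are invented for illustration.

```python
from collections import deque

# Hypothetical radial feeder: each asset maps to the assets it directly feeds
FEEDS = {
    "substation": ["xfmr-1", "xfmr-2"],
    "xfmr-1": ["feeder-A"],
    "feeder-A": ["meter-1", "meter-2", "meter-3"],
    "xfmr-2": ["feeder-B"],
    "feeder-B": ["meter-4"],
}

def downstream_assets(asset):
    """Breadth-first walk to everything de-energized if this asset fails."""
    seen, queue = set(), deque([asset])
    while queue:
        for child in FEEDS.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# A crude criticality proxy: how much of the network hangs off each asset
criticality = {a: len(downstream_assets(a)) for a in FEEDS}
```

A fuller consequence model would weight each downstream node by lost revenue, restoration cost and customer impact, but even this simple count shows why an asset's position in the network matters as much as its condition.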

How precisely do you compute your risks?

Where the data for calculating the asset health index, probability of failure and criticality are quantified and precise, our calculated risk metrics are precise.

How do you take “expert feeling” into account?

Our software provides flexibility to customers to tune the asset health index, criticality and probability metrics to match their knowledge and experience by allowing them to modify the factors and weightings in algorithms.

How do you maintain temporal consistency when you have very fast data streams such as PMU and inspection reports which may be once a year?

Our software uses the most recent applicable data in calculations, regardless of its comparative frequency. Where you get into trouble is when you have outdated data, whether it’s monthly data that’s a year old or hourly data that’s a week old. This is why our software computes two additional metrics called Completeness and Confidence. The Completeness index identifies whether any data sets were unavailable for computation, while the Confidence index measures whether the data sample expected at a point in time was received before the indices were computed. Together they can indicate data quality, data availability or a missed inspection cycle.
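One way to reconcile sources with very different cadences is to judge each against its own expected interval rather than a single freshness threshold. The sketch below illustrates the idea; the source names, intervals, and the two-interval slack factor are hypothetical, not the product's actual logic.

```python
from datetime import datetime, timedelta

# Hypothetical cadences: PMU streams every second, inspections arrive yearly
EXPECTED_INTERVAL = {
    "pmu": timedelta(seconds=1),
    "dga_lab_sample": timedelta(days=180),
    "inspection_report": timedelta(days=365),
}

def is_current(source, last_received, now, slack=2.0):
    """A sample counts as current if it arrived within `slack` expected intervals."""
    return (now - last_received) <= EXPECTED_INTERVAL[source] * slack

now = datetime(2016, 6, 1)
last = {"pmu": now - timedelta(seconds=1),
        "dga_lab_sample": now - timedelta(days=400),
        "inspection_report": now - timedelta(days=300)}

# The DGA sample is 400 days old against a 360-day allowance, so it drags
# the overall confidence down to 2/3 even though the PMU stream is fresh.
conf_index = sum(is_current(s, t, now) for s, t in last.items()) / len(last)
```

Judging each source on its own clock keeps a yearly inspection from looking "stale" next to a one-second PMU stream, while still flagging a genuinely missed cycle.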

Is there a way you can estimate the sensitivity of the risks to your entire system?

Our criticality scores incorporate network connectivity information, and therefore reflect impact on the entire system. Let’s assume that, through an oversight in network design, you have a new, small transformer that sits at the nexus of your entire network and has no redundancy. That transformer could have a very low probability of failure, because it’s brand new, and a sky-high criticality score because it’s the linchpin of your network.

 

Ajay Madwesh is Vice President of the Utilities Business Unit at Space-Time Insight. He possesses more than 20 years of experience in software development and technology management in utility and process automation environments, and has spent several years evangelizing the integration of real-time operational technologies with IT. He has previously held leadership roles at top companies such as GE, ABB and Infosys.


Smart Meter Deployment and Analytics: Begin with the End in Mind


Sixteen member states of the European Union are currently deploying smart electricity meters. Five member states are deploying smart natural gas meters.  According to a European Commission report, by 2020, 72 percent of meters across the member states will be smart meters.

2020 is still five years away, and the European Commission had originally targeted 80 percent penetration of smart meters by 2020. Shareholders and regulators don’t want to wait years before seeing a return on the investment in smart meters.

If a country is just starting to roll out smart meters, where should it put its first 20 percent of meters to start realizing benefits? Answering that question demands situational intelligence.

Situational intelligence incorporates spatial, temporal and nodal dimensions into analytics. Spatial and nodal concerns for prioritizing smart meter deployments include:

  • Where in the service territory does the meter stand, including proximity to other meters to deploy at the same time (route optimization)?
  • Where on the distribution network does the meter lie (network relationship)?
  • What is the age and type of building associated with the meter?
  • What electricity or gas usage is associated with that meter and building?
  • Is the location, network relationship, building type and usage representative of a class of customer or usage that you might want to study (population sampling)?
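One simple way to act on these criteria is to score each candidate meter on a weighted combination of them. The weights, criterion names, and meter records below are hypothetical, chosen only to illustrate the ranking approach.

```python
# Hypothetical weighting of the deployment criteria listed above
WEIGHTS = {"route_density": 0.30, "network_position": 0.25,
           "building_profile": 0.20, "usage_interest": 0.15,
           "sample_value": 0.10}

def deployment_score(meter):
    """Weighted sum of 0-1 criterion scores; higher means deploy earlier."""
    return sum(WEIGHTS[k] * meter[k] for k in WEIGHTS)

candidates = [
    {"id": "M-001", "route_density": 0.9, "network_position": 0.8,
     "building_profile": 0.5, "usage_interest": 0.7, "sample_value": 0.6},
    {"id": "M-002", "route_density": 0.2, "network_position": 0.4,
     "building_profile": 0.9, "usage_interest": 0.3, "sample_value": 0.9},
]

# The first deployment wave, highest-scoring meters first
first_wave = sorted(candidates, key=deployment_score, reverse=True)
```

In practice the weights themselves are a policy decision: a utility chasing route efficiency would weight route density heavily, while one designing sample-based studies would favor population representativeness.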

Once deployed, a small, early subset of smart meters can provide a rich new data source for other applications such as distribution optimization, demand response, energy efficiency program design, revenue protection and more.

In deploying smart meters, situational intelligence and other analytics projects, it pays to begin with the end in mind. You’re less likely to lose your way and more likely to start realizing returns on your investment.
