Public Infrastructure Deserves Asset Analytics



The latest report card from the American Society of Civil Engineers gives infrastructure in the United States a grade of D+. The society estimates that an additional $200 billion per year, on top of existing public spending, is needed to achieve and maintain a grade of B. That raises the question: if infrastructure money is so tight, where do we spend first to reduce the risk of failing infrastructure? Or, to put it another way, which is more important for a community or region to repair or upgrade: a public hospital or a highway bridge?

This sort of question comes up often in public debates and elections. It’s a hard question to answer because you are choosing between two kinds of critical infrastructure.

There are always plenty of opinions. But opinions are just that, opinions. If you’re lucky, there might be in-depth research studies supporting those opinions. Even then, comparing transportation studies with public health studies is like comparing apples and tractors. Many public sector studies are huge tomes written by experts and rarely digestible by the average voter. Studies can also be hard to discover and access. And because they take a long time to research and compile, public sector studies may not be timely.

Leading electricity, natural gas and water utilities use advanced analytics to assess their infrastructure and assign standardized risk scores across different asset classes, arriving at a prioritized list of projects to address. Knowing which risks are more acute or carry higher consequences shifts infrastructure debates, whether in a utility boardroom or city council chambers, from opinion-driven to data-driven.

Risk is defined as the probability of failure and the severity of consequences if failure occurs. For infrastructure, failure can be defined as something other than complete collapse. Infrastructure is intended to serve the public, so failure can be defined in terms of service.

For instance, failure for both a hospital and a bridge might be the inability to serve at least 75 percent of the normal volume of patients or vehicles in any 30-day period. That might be the level at which the respective health care and transportation systems for the surrounding region can’t absorb the traffic being redirected from the failed asset.

The failure of public infrastructure carries with it multiple consequences in terms of emergency response, public health and safety, economic impact, taxation, policy and more. These consequences can be measured or estimated, weighted, scored and added to the analytics process.
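
As a rough illustration of how such scoring might work, here is a minimal sketch. The assets, failure probabilities, consequence weights and 1–10 scales are invented for the example, not drawn from any real scoring model:

```python
# Illustrative only: the assets, probabilities, weights and 1-10 scales are invented.
CONSEQUENCE_WEIGHTS = {
    "public_safety": 0.4,
    "economic_impact": 0.3,
    "emergency_response": 0.2,
    "policy_and_taxation": 0.1,
}

assets = [
    {"name": "Highway bridge", "p_failure": 0.04,
     "consequences": {"public_safety": 8, "economic_impact": 7,
                      "emergency_response": 6, "policy_and_taxation": 4}},
    {"name": "Public hospital", "p_failure": 0.02,
     "consequences": {"public_safety": 9, "economic_impact": 5,
                      "emergency_response": 9, "policy_and_taxation": 5}},
]

def risk_score(asset):
    """Risk = probability of service failure x weighted severity of consequences."""
    severity = sum(CONSEQUENCE_WEIGHTS[k] * v for k, v in asset["consequences"].items())
    return asset["p_failure"] * severity

# Rank assets from highest to lowest risk.
for asset in sorted(assets, key=risk_score, reverse=True):
    print(f"{asset['name']}: risk score {risk_score(asset):.2f}")
```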

By tying dollar amounts to public projects that also carry a defined amount of risk reduction, politicians, administrators, community advocates and voters gain insight into the trade-offs inherent in any community budgeting decision. Advocates for less spending would know the level of risk that they are accepting; advocates for less risk would know the spending their position requires.
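
Extending the same sketch, candidate projects could then be ranked by how much risk reduction each dollar buys. The project names, costs and reductions below are placeholders, not real estimates:

```python
# Hypothetical projects: costs and expected risk-score reductions are made up.
projects = [
    {"name": "Bridge deck rehabilitation", "cost": 12_000_000, "risk_reduction": 0.18},
    {"name": "Hospital backup power upgrade", "cost": 4_000_000, "risk_reduction": 0.09},
]

# Rank projects by risk reduction per dollar spent.
for p in sorted(projects, key=lambda p: p["risk_reduction"] / p["cost"], reverse=True):
    per_million = p["risk_reduction"] / p["cost"] * 1_000_000
    print(f"{p['name']}: {per_million:.3f} risk-score reduction per $1M")
```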

This approach is common in the utility sector. Regulated utilities must make a case for the rates that they charge and the profits that they can earn. Utilities that use advanced analytics to show regulators the relationship between budget requests, spending plans and risk reduction have an easier time arguing, and winning, their cases.

Utilities have multiple systems across their service territories for measuring and gathering data on the state of their electricity, gas or water delivery infrastructure. Other public infrastructure, like hospitals and bridges, doesn’t necessarily enjoy these same capabilities today. By 2020, however, Gartner predicts that the Internet of Things will contain 30 billion devices. It is likely that at least some of those devices will gather and report useful data on the state of public infrastructure.

Analytics will not eliminate public debates and elections, nor should it. Using analytics could get everyone in the community on the same page about the status of current infrastructure, the potential need for investments, and the impact those investments should have. Public infrastructure has a direct impact on our quality of life. It deserves the same level of analytics being implemented in other sectors of the economy.

Image courtesy of wichits / 123RF Stock Photo


Why We Need Situational Intelligence, Part 1



Why do we need situational intelligence?

When we lack information, or when information is not easily consumed or comprehended, our decisions are compromised. We don’t have the right information, at the right time, or in the right form to lead to the best possible outcome.

At its core, situational intelligence handles all of the relevant data needed to derive insights that will guide your decisions. Today’s environment typically means large volumes of disparate data, pouring too quickly into over-burdened systems, leaving executives and analysts alike wondering whether they can believe the data they see. Some insights can be derived only by sifting through this ever-growing mountain of data looking for hidden correlations. Correlating and analyzing all of the necessary data, and clearly presenting the resulting insights, recommendations and predictions, is the biggest differentiator of situational intelligence over traditional analytics.
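
To make “correlating disparate data” a little more concrete, here is a minimal sketch that time-aligns two simulated feeds sampled on different clocks and checks how they move together. The data is made up, and pandas is just one convenient tool for the job:

```python
import numpy as np
import pandas as pd

# Simulated feeds: an equipment load reading every minute and a temperature
# reading every 15 minutes -- two disparate sources on different clocks.
rng = np.random.default_rng(0)

load = pd.DataFrame({"time": pd.date_range("2016-07-01", periods=24 * 60, freq="1min")})
load["load_mw"] = 50 + 10 * np.sin(np.arange(len(load)) / 200) + rng.normal(0, 1, len(load))

weather = pd.DataFrame({"time": pd.date_range("2016-07-01", periods=24 * 4, freq="15min")})
weather["temp_c"] = 20 + 8 * np.sin(np.arange(len(weather)) / 30) + rng.normal(0, 0.5, len(weather))

# Align both feeds to a common 15-minute grid, then look at how they move together.
combined = (load.set_index("time").resample("15min").mean()
                .join(weather.set_index("time")))
print(combined["load_mw"].corr(combined["temp_c"]))
```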

Unlike typical business intelligence or stand-alone analytics solutions, situational intelligence delivers details, recommendations and predictions that typically translate into competitive advantage through:

  • Cost savings
  • Increased efficiencies, productivity and performance
  • Increased revenues
  • Improved client engagement that raises satisfaction
  • Better understanding of exposure that facilitates better management of risk

To put this into a concrete example, consider why airports need flight and ground traffic control systems. The obvious answer is to know where airplanes, vehicles and people are, both in the air and on the ground, so that their movement can be staged safely and efficiently. But managing traffic at an airport also requires context, such as the current and forecasted weather, to achieve the best possible safety and efficiency. Even a seemingly simple matter such as a broken jetway has many consequences that affect ground control, fueling, cabin cleaning, luggage, passenger relocation, food service and more. Now, imagine the complexity of handling a crisis such as an airplane needing to make an unplanned emergency landing.

Managing an operation with the complexity, interdependencies and consequences of an airport requires the staff in the operations control center to have a live, real-time, big-picture view of everything that is happening and, ideally, of what is most likely to happen next. Keeping track of so much fast-changing information in a person’s head alone is impractical and prone to errors and omissions.

Clearly, having as much relevant and easily comprehensible information as possible provides the context that we naturally seek to guide our decisions and actions. In a follow-on post I will explain why and how technologies such as flexible data access, analytics, machine learning, artificial intelligence and visualization should be seamlessly integrated to create and deliver situational intelligence that is truly actionable.


(Image courtesy of Flickr)


Visualizing Mobile Assets: Where Are My Bots?


It’s ironic that many of us struggle to find our car keys in the morning, yet we’re also launching billions of mobile and increasingly autonomous devices into the world to report on conditions, track activities, and even perform tasks for us. If we can’t find our dumb keys, how will we keep track of stuff that’s smart and moves?

Let’s pretend it is 2018 and start-up company Entrebot sees an opportunity to further disrupt the gig economy. The company has landed funding to purchase 25 autonomous robots and program them to pick up and deliver small items around the city of San Francisco. They’ll be competing head-to-head with Task Rabbit, Uber Food, Amazon Drone, steampunk bike messengers and other players in the personal service sector.

Entrebot robots run on electricity and charge themselves at ports throughout the city, based on time, location, and predicted labor demand.

Entrebot faces unique challenges each day:

  • Deterrence of bots from their routes via harassment, vandalism and bot-napping
  • Rerouting the bots when the unexpected occurs

Deterrence

Even in 2018, some people have a real problem with bots. It is widely known that many humans are afraid of robots; some even take out life insurance policies against robot attack.

Dislike of bots motivates people to harass, rob, vandalize and steal them. These actions lead to increased wait times for deliveries, losses from customer claims when items are damaged or stolen, and volatile public perception of the robots’ impact on jobs and city life.

With so many calls, tweets and emails flooding Entrebot, the company decides to get proactive about getting people’s stuff to them.

Rerouting

Entrebot’s customer service department decides to hire a Visualizer, a person whose job is to see visual correlations between peak demand, robot location and the probability of incidents. The Visualizer works with a categorized queue of past, present and predicted losses and must triage it all to maximize the customer experience.

The Visualizer uses Augmented Reality to do their work, virtually moving through and manipulating holographic projections of buildings, people, vehicles, objects and the variable rates of change between all of them. This is one way the world may keep track of mobile objects in the near future.

Human and computer work together to make robot delivery more efficient, effective and satisfying.

Augmented reality interface for tracking mobile assets

When a situation becomes too complex to resolve automatically, or requires human input, the analytics present the Visualizer with possible actions, and the Visualizer chooses among them.

The computer can process large amounts of data but is not situated physically in the real world and may not be aware of certain factors impacting the business.

At times, deliveries will need to be rescheduled. Analytics can certainly recalculate routes and times, but the business rules call for human approval of rescheduled deliveries.

Bots need to keep running during an area-wide power outage to meet obligations and avoid backlog. Working with the analytics, the Visualizer can reprioritize the bots’ time, location and routes based on remaining battery charge and distance from a backup power source, rather than the typical distribution schema optimized solely for time.
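
A minimal sketch of that kind of outage reprioritization, with an invented fleet snapshot and a deliberately simple scoring rule:

```python
# Hypothetical fleet snapshot during an outage: all values are invented.
bots = [
    {"id": "bot-07", "battery_pct": 82, "km_to_backup_power": 1.2, "deliveries_queued": 3},
    {"id": "bot-12", "battery_pct": 35, "km_to_backup_power": 4.8, "deliveries_queued": 5},
    {"id": "bot-19", "battery_pct": 18, "km_to_backup_power": 0.6, "deliveries_queued": 2},
]

KM_PER_PCT = 0.4  # assumed driving range per percentage point of charge

def outage_priority(bot):
    """Favor bots that can finish queued work and still reach backup power."""
    range_km = bot["battery_pct"] * KM_PER_PCT
    margin_km = range_km - bot["km_to_backup_power"]
    return (margin_km, bot["deliveries_queued"])

# Dispatch the bots with the most headroom first.
for bot in sorted(bots, key=outage_priority, reverse=True):
    print(bot["id"], outage_priority(bot))
```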

The Visualizer could also help when a bot gets kidnapped, tracing its location through the city and working with law enforcement or private investigators on asset recovery.

Perhaps you think this sounds more like science fiction than a viable business model in the next two years. I’d bet against that: the summary of autonomous vehicle legislation on Stanford’s Cyber Law Wiki shows that several states are already moving in that direction.

Status of legislation concerning autonomous vehicle operation

How long will it be before driverless cars are paired with services like Uber? Just look at Spot, a robotic dog by Boston Dynamics that bears an uncanny similarity to the mechanical hound in Fahrenheit 451, Ray Bradbury’s dystopian novel.

Augmented Reality is a great technology for real-world applications such as answering “Where are my robots?” (or any other connected item). It will be fun to watch the technology and software offerings evolve as start-ups all over the world build brilliant new solutions to tomorrow’s difficult questions.



Five Aspects of Data Quality


Plenty of us complain about data quality, or worry our data isn’t good enough for analytics. But what do we mean when we say “data quality”?

When I was in business school, we defined quality as fitness for an intended purpose. For instance, a broom handle may be a high-quality item when you’re sweeping floors, but a low-quality item when you’re hitting a baseball.

In this sense, quality data is data that fits your intended analysis. Fitness for analysis has several components: relevance, accuracy, completeness, recency and cleanliness.

Quality data is relevant. Your data should describe or pertain to the time period, location and / or population that comprise and affect what you are analyzing. It should also be directly related to the goals of your analysis. For instance, if your analytics project is intended to reduce manufacturing waste, then your data should measure defects in raw materials and finished goods, plus products returned from the distribution channel.

How do you know if your data is relevant? You’re more likely to notice when it’s not. In that case, your analysis will yield results that are unrelated to your problem or just don’t make sense.

What if your data is not relevant? You will need to generate or acquire data that is relevant.

Quality data is accurate. Your data needs to accurately reflect or correspond to what you’re measuring, to the required level of measurement. It should also be free of typos, transpositions, and other inaccuracies of data entry and classification.

How do you know if your data is accurate? Your data will pass spelling checks, checksums, spot checks and other measures of internal accuracy and consistency.
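
Some of those internal checks are easy to automate. The sketch below flags records that fail simple consistency rules; the records and rules are invented for illustration, and a real project would layer on domain-specific checks:

```python
import re

# Invented meter records; the rules below are examples of internal consistency checks.
records = [
    {"meter_id": "MTR-00123", "reading_kwh": 412.7, "zip": "94105"},
    {"meter_id": "MTR-00124", "reading_kwh": -3.0, "zip": "9410"},    # negative reading, short ZIP
    {"meter_id": "MTR-0012X", "reading_kwh": 388.1, "zip": "94107"},  # malformed ID
]

def accuracy_issues(rec):
    issues = []
    if not re.fullmatch(r"MTR-\d{5}", rec["meter_id"]):
        issues.append("malformed meter_id")
    if rec["reading_kwh"] < 0:
        issues.append("negative reading")
    if not re.fullmatch(r"\d{5}", rec["zip"]):
        issues.append("invalid ZIP code")
    return issues

for rec in records:
    problems = accuracy_issues(rec)
    if problems:
        print(rec["meter_id"], "->", ", ".join(problems))
```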

What if your data is not accurate? You’ll need to either fix your data, or generate or acquire new data. But you need to work only on data that is related to the analysis that you’re trying to perform.

Quality data is complete. Complete data measures or describes all the relevant aspects of the problem you’re trying to solve. It encompasses the total population, time period and/or geographic area that you’re studying. There are no items missing from the series.

How do you know if your data is complete? You’re more likely to notice when it’s not. In that case, you’ll find yourself redoing calculations and analyses to fill in the gaps you’ve discovered. Discovering and filling gaps can be good for data quality, but it can be time-consuming and frustrating.
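
Gaps in a series are one form of incompleteness that is easy to check mechanically. A small sketch, using a made-up hourly series:

```python
import pandas as pd

# Made-up hourly readings with two hours missing.
readings = pd.DataFrame({
    "time": pd.to_datetime([
        "2016-03-01 00:00", "2016-03-01 01:00", "2016-03-01 02:00",
        "2016-03-01 05:00", "2016-03-01 06:00",
    ]),
    "value": [10.1, 10.4, 10.2, 11.0, 10.8],
})

# Compare the timestamps you have against the timestamps you expect.
expected = pd.date_range(readings["time"].min(), readings["time"].max(), freq="1h")
missing = expected.difference(readings["time"])
print(f"{len(missing)} missing hours:", list(missing))
```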

What if your data is not complete? You have a choice. You can decide that it’s complete enough to base a decision upon, or you can acquire or generate additional data to round out your data set.

Quality data is recent. Recent data reflects the current state of a measurement. Recency is measured relative to the problem you’re trying to solve. Recent geological data will be much older than recent stock market data. Disparate data sets that are related to the problem you’re addressing can have different levels of recency.

How do you know if your data is recent? Your data should carry some sort of time stamp. Some data might seem old, like census data, and yet be the most recent, relevant data that you can apply.

What if your data is not recent? You can take new measurements if recency is an issue. You could re-measure a portion of your data and see if it varies significantly from your existing data.
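
A tiny sketch of that comparison, with made-up measurements and an arbitrary threshold, just to show the idea:

```python
import numpy as np

# Invented example: existing measurements versus a small re-measured sample.
existing = np.array([101.2, 99.8, 100.5, 100.1, 99.6, 100.9])
resample = np.array([104.8, 105.3, 103.9])

# A crude screen: has the mean drifted by more than two standard deviations
# of the existing data? (The threshold is arbitrary for this sketch.)
drift = abs(resample.mean() - existing.mean())
if drift > 2 * existing.std():
    print("Re-measured values differ noticeably; the older data may be stale.")
else:
    print("Re-measured values look consistent with the existing data.")
```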

Quality data is clean. Clean data is free of duplicate values. The data is organized, standardized, structured and labelled or documented to the extent possible. Most data in the world is unstructured, in that it doesn’t fit into the neat fields of a data table. Think social media, emails, reports, videos and images. Still, even unstructured data can be in a well-documented and standardized format.

How do you know if your data is clean? In part, it will just look clean. You won’t find other data-dependent tasks interrupted by impromptu data cleaning. Also, your counts will be accurate because there are no duplicates.

What if your data is not clean? Use ETL (extract, transform, load) or de-duping tools. Normalize data when different terms or values are used to represent the same information. Make data as consistent as possible in terms of labels, categories, time stamps and other types of structures.
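
A small sketch of that kind of cleanup, using invented records; pandas stands in here for whatever ETL or de-duping tooling you prefer:

```python
import pandas as pd

# Invented records with a duplicate and inconsistently labelled values.
df = pd.DataFrame({
    "asset_id": ["A-101", "A-102", "A-102", "A-103"],
    "status":   ["In Service", "in service", "in service", "OUT OF SERVICE"],
    "city":     ["San Jose", "SAN JOSE", "San Jose ", "Oakland"],
})

# Normalize labels so the same information is represented the same way...
df["status"] = df["status"].str.strip().str.lower()
df["city"] = df["city"].str.strip().str.title()

# ...then drop exact duplicates so counts are accurate.
df = df.drop_duplicates()
print(df)
```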

The next time you catch someone, including me or possibly yourself, complaining about data quality, take a moment to dive deeper into these five characteristics of data quality. Getting more specific about quality helps you pinpoint tangible problems with specific solutions that you can implement.


Wind, Coal, Or Natural Gas? Dispatching Electricity with Situational Intelligence


According to the U.S. Energy Information Administration, use of renewable energy sources such as wind and solar power in the United States doubled in just six years. From 2007 to 2013, renewables grew from three to six percent of the electricity generation mix.

Six percent isn’t a big part of the overall mix, but the intermittent nature of these energy sources adds a further challenge to an already complex problem: which types of generation should be used, and when, to meet demand?

Consider these complications:

  • Despite recent advances in battery technology, it’s difficult and expensive to store large quantities of electricity, as Bill Gates recently noted. This means energy must be generated on demand and consumed in real time.
  • It takes time to vary the production of energy. Large nuclear, coal and gas generators can’t be turned on and off like light switches.
  • The sun doesn’t always shine and the wind doesn’t always blow, which makes scheduling renewables difficult.
  • In addition, regulations outline the criteria for selecting which generation sources to use: lowest cost, most reliable, least polluting.
  • Once electricity is produced, it must be delivered across transmission lines. Produce too much electricity in the area served by a single transmission line and you’re liable to create congestion on the line.

This whole problem of which type of generation to use, when and where, is known as the dispatching of electricity.

As readers of this blog know, situational intelligence applications are ideal solutions for these complex what-where-when problems. Situational intelligence applications employ spatial-temporal-nodal analytics to solve simultaneously for what, when and where.

Using forecasted energy demand, generation availability, weather and other variables along with the constraints of reliability, cost, pollution and location, spatial-temporal-nodal analytics solve the dispatch problem and visualize the needed generation mix.
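
Real dispatch engines also have to respect ramp rates, transmission congestion and emissions limits, but a toy merit-order sketch (with invented generators, costs and demand) shows the basic shape of the calculation:

```python
# Toy merit-order dispatch; all numbers are invented for illustration.
generators = [
    # available capacity for the wind farm already reflects the wind forecast
    {"name": "Wind farm",   "available_mw": 120, "cost_per_mwh": 0},
    {"name": "Gas turbine", "available_mw": 300, "cost_per_mwh": 45},
    {"name": "Coal plant",  "available_mw": 400, "cost_per_mwh": 60},
]

demand_mw = 450

dispatch = []
remaining = demand_mw
# Commit the cheapest available generation first until forecasted demand is met.
for gen in sorted(generators, key=lambda g: g["cost_per_mwh"]):
    mw = min(gen["available_mw"], remaining)
    if mw > 0:
        dispatch.append((gen["name"], mw))
        remaining -= mw

print(dispatch)  # [('Wind farm', 120), ('Gas turbine', 300), ('Coal plant', 30)]
```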

This mix then guides power brokers and planners, transmission companies, regulators and others in delivering in real time the safe, reliable and affordable power on which we rely.
