How To Not Be Stubborn In 2020


I pulled the following two predictions about analytics and decision making from a recent list of 100 predictions by Gartner analysts (subscription required):

  • By 2018, decision optimization will no longer be a niche discipline; it will become a best practice in leading organizations to address a wide range of complex business decisions.
  • Through 2020, over 95% of business leaders will continue to make decisions using intuition, instead of probability distributions, and will significantly underestimate risks as a result.

Apparently most of us will refuse to get the message about optimizing decisions, even after years of tools and best practices being in place. In Gartner’s 2020, we’re all still stubborn foot-draggers.

In my experience, predictions like these often require a grain of salt. Generalizations such as “over 95% of business leaders” at “leading organizations” who “significantly underestimate risk” lack the mathematical precision necessary to inspire confidence and change behavior.

Predictions like this often contain a grain of truth, as well. We frequently prefer our personal comfort zone, resist change, suffer from confirmation bias, and respect the confines of our organization’s formal and informal culture.

Keep in mind that being stubborn can quickly lead to being history. Accenture CEO Pierre Nanterme notes that half of the companies in the Fortune 500 disappeared between 2000 and 2015. Why? New business models based on digital technologies, including decision optimization. The rapid pace of change and disruption will continue, and increase.

So, how do you avoid becoming a historical footnote by 2020?

  • Start with the end in mind. Decision optimization starts with the BI dashboards that (I hope) you are using today, and extends to advanced analytics that include prediction, simulation, and prescription. Knowing where you’re heading helps you plan a route and schedule for reaching your destination.
  • Start small. You won’t get to optimal decisions immediately. Identifying what decisions you can automate helps you pinpoint feasible projects with measurable ROI. Chances are, regardless of how digital your industry is now, there is low-hanging fruit to be picked.
  • Start now. Start this quarter or this month or this week, or even today. With hosted and cloud solutions, you don’t need to complete a big IT project before you can start improving decision making through analytics. In fact, you don’t have time for the typical enterprise project that requires years.

The year 2020 may seem like a long way off.  In truth, it’s 12 calendar quarters away. That’s not long. Start now and you’ll be 12 quarters ahead of some other stubborn dog.


UK Smart Meter Roll Out: It’s All About The Data



Despite some 2.6 million smart meters already being installed in the UK, it is the data infrastructure that is delaying the wider roll out, according to a recent BBC article. This IT project is necessary to support the volume of data anticipated from the government-backed smart meter roll out.

The chart below shows how many meters have been installed since 2012. Higher volumes of data are already being collected, which reinforces the need for this important IT project to be up and running as soon as possible.


(Chart and data available from the UK Department of Energy & Climate Change)

With news that the data infrastructure launch is pushed back until the autumn, what impact will this have?

How much data will smart meters generate?

A quick calculation: with one monthly read from each of the potential 53 million smart meters across the UK, there would be around 53 million reads per month. With smart meters that record data every 15 minutes, that becomes 96 reads a day from each of those 53 million meters, resulting in thousands of times more data being generated. This is obviously a rough estimate, but it gives an indication of what the energy companies would be dealing with. It doesn’t include status messages from the meters, which would add further to the mass of data being generated.
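The back-of-the-envelope arithmetic above can be sketched in a few lines. The 53 million meters and 96 daily reads come from the post; the 30-day month is an assumption for illustration:

```python
# Rough estimate of UK smart meter data volumes.
METERS = 53_000_000
READS_PER_DAY = 24 * 4          # 15-minute intervals -> 96 reads per meter per day
DAYS_PER_MONTH = 30             # rough average, assumed for illustration

monthly_reads_today = METERS * 1                              # one manual read per month
monthly_reads_smart = METERS * READS_PER_DAY * DAYS_PER_MONTH
growth_factor = monthly_reads_smart / monthly_reads_today

print(f"Today: {monthly_reads_today:,} reads/month")
print(f"Smart: {monthly_reads_smart:,} reads/month")
print(f"Growth: {growth_factor:,.0f}x")                       # ~2,880x, i.e. thousands of times more
```

Status messages, event logs and grid-side meters would push the total higher still.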

Why is this so important, if smart meters are just about making billing automated and putting an end to manual meter reading? There is a lot more value within meter readings and status messages beyond billing.

The benefits of smart meters are clear for consumers: tracking how much energy you are using, monitoring the effect of changes that you have made to your energy consumption, and receiving accurate bills without having to submit a meter reading.

When applied properly, data helps energy companies to manage supply and demand in a much easier fashion. Energy companies benefit from analysing the data collected from the smart meters to enable new rates and business models, implement demand response programs, manage solar power panels in a better way and improve support for electric vehicles, to mention but a few.

To benefit from the thousands-fold growth in meter data, energy companies need analytics that locate the problems and opportunities hidden inside this massive amount of data. Smart meter analytics must be intelligent enough to do the heavy lifting for users, not just make it easier for users to browse among millions of meters. Increasingly, analytics for data sets of this size need the intelligence and autonomy to make decisions independently.

Once the IT infrastructure is in the place, the UK energy companies can start pursuing the new value within smart meter data, analysing it to make better business decisions. All 53 million UK meters likely won’t be changed out by 2020, but that shouldn’t stop UK energy providers from using the smart meter data they already have, or will have soon.

(Image courtesy of rido / 123RF Stock Photo)


Industrial Internet Of Things: End Up On The Winning Side



Operations and technology executives take notice when experts such as McKinsey project $11.1 trillion in economic value by 2025 as a result of linking the physical and digital worlds.  That’s a tremendous amount of economic value in a very short time, even if the experts might be a little off in their estimates.

The impact of the internet over the last 25 years certainly supports predictions of disruption and promise as the Internet of Things (IoT) and Industrial IoT (IIoT) continue to connect the physical and the digital. Organizations that transform themselves using IIoT can become giants; those who lag or fail in their execution may become mere memories. How do you ensure you and your organization land on the right side of this disruption?

Operational data is not a new phenomenon

Mentions of IIoT pepper nearly every operations- or technology-related conference these days. Many traditional control system vendors are relabeling their offerings as part of the IIoT movement. While industrial control systems remain critical components of operations in many industries, simply rebranding existing systems is certainly not going to deliver the trillions in economic value that McKinsey and others predict. That magnitude of value creation comes only from truly transformative changes to how companies and industries operate.

Inherent risks in embarking on transformative change

Any large organization that can greatly benefit from the promise of an IIoT world has a number of existing critical assets, control systems, IT systems, processes, and skilled people that are essential to their operation.  Many industries have equipment and systems that have been acquired over several decades. Displacing all of these existing operational assets with a sparkly new, end-to-end IIoT-enabled operation is risky and typically not economically practical. Mergers, acquisitions, large IT projects and other attempts at transformative change fail at an astonishing rate. Estimated failure rates range from 30% according to the optimists to 70% from the pessimists.

If you are trying to create transformative change while relying on existing systems, processes, and people, you inevitably will face execution risks related to:

  • Lack of interoperability and openness of existing control and IT systems
  • Poor data quality in dependent systems
  • Lack of scalability, both technically and economically, of these systems
  • Insufficient internal talent, expertise, and bandwidth to manage a large project that touches the operations and IT sides of the business
  • Security exposures as you open up systems that have traditionally been on a closed loop system
  • Poorly defined objectives and accountability
  • Striking the wrong balance between building versus buying IIoT systems, ending up with either a solution that isn’t fit for purpose or a solution that exceeds cost and timeline estimates and doesn’t scale.
  • Difficulty maintaining balance of schedule, cost, ROI and executive support

How to end up on the winning side of IIoT

The risks and complexity make getting started with an IIoT initiative seem daunting. But with this sort of disruptive change, playing the laggard is not an option. How do you improve your odds of success? Here are a few recommendations:

Think big but start small – Think big about how your organization can use new data sources and analytics to improve their operations and service uptime, but start small by first tackling a discrete, well-defined problem. Deliver value quickly and then consider another problem to tackle or extending the first solution to solve other related problems.

Clarify the problem, solution and accountability – Ensure the problem, solution requirements and dependencies are clearly understood.  Appoint a clear, accountable owner for the project who has organizational support.

Prioritize vendors that have “skin in the game” – Many software, hardware and communications vendors will happily sell you the parts of an IIoT solution–platform access, software licenses, sensors, access points, gateways, network access, and servers–but leave you to sort how to assemble these parts into something that solves your problem and provides value. Prioritize vendors who provide ongoing service with lower up-front costs.  This enables you to ensure the service delivers on its promised value before you have committed too much funding.

Challenge traditional thinking in your organization – What got you here won’t get you there! Clearly for many industries existing levels of security, reliability and regulatory compliance must not be compromised. However, that shouldn’t mean that new approaches such as cloud computing, internet connectivity, open source software, and commoditized hardware should be dismissed.  These will be required in many cases to realize the potential value of IIoT solutions.  Many companies use these technology solutions successfully today while balancing the associated risks.

Get Started – Don’t get stuck in analysis paralysis – Obviously it is important to ensure a problem, the solution and potential value for solving it are well understood.  It is also critical to assess risks and get necessary organizational buy-in.  Once you have done that, get started, learn and improve.  The opportunity is immense and those who lead with successful IIoT solutions will have tremendous efficiency advantages in their respective industries.

Allan McNichol is the former CEO of GOFACTORY and Managing Director for Intelligent Energy

(Image courtesy zurijeta / 123RF Stock Photo )


Why We Need Situational Intelligence, Part 3


In Part 1 and Part 2 of this series I addressed why situational intelligence is a natural and essential method of decision-making that is especially apropos for real-time business operations. Inherent in my argument is an optimistic belief that people make the best decisions and take the best actions with the information at hand. That is the crux of the matter – the information at hand and how accurate and actionable it is. What information is available to decision makers? Does it contain insights? Is it current? Is it clear, or is interpretation and/or further analysis necessary before the information is actionable? Is it reliable? How comprehensive is it? Correspondingly, how much uncertainty shrouds the information, the decision, the action and the outcome? What are the risks of making a bad decision (including no decision)?

Ideally the answers to the preceding rhetorical questions should all be encouraging. But how can these attributes of data and insights for decision-making be assured, especially when decisions are made by different people, when decisions needed are for unplanned situations, and when timeliness is important? Systematized decision-making aided by technology-generated intelligence is a way to assure that accurate insights are derived from data and actionable by decision makers. As discussed in the preceding blogs (and other blogs too), advanced analytics and visual analytics are essential building blocks for analytics that support operational decision-making. Data must be transformed into insights and intelligence. The insights must also be transformed so they are readily comprehended at-a-glance and are actionable.

Another key consideration is having a broad composition of data for analysis. The more data from relevant sources within the enterprise, from the IoT and from external sources, the more insights analytics can derive. Accessing external data enriches intra-enterprise data sources with relevant context, which is useful when decision makers require supplemental information, such as when insights brought forward to decision makers are not immediately actionable. In such cases, further discovery helps decision makers gain the understanding and confidence needed to make a decision. This is where additional data sources and the corresponding added context facilitate interactive data exploration, so that decision makers can make timely and favorable decisions. Sources and types of external data include weather, traffic, news, spot market prices and social media.

Having live connections to data sources ensures that decisions are made using the most up-to-date data, and also enables interactive exploration of underlying data to deeply understand and resolve complex multifaceted situations. A single system that maintains live connections to data sources yields another benefit – it helps organizations bridge their data silos and unify their data assets.

Here at the end of this blog series, situational intelligence now sounds easy, and somewhat obvious too – connect to relevant data sources, apply analytics, make the resulting insights and underlying data available to decision makers with intuitive visualizations so they can consistently make the best possible decision in any situation. If you use an off-the-shelf solution to implement situational intelligence, getting started is also relatively simple. Decide for yourself. What does your situation require?

If you have experiences, thoughts, opinions on this topic, please comment and share them.


Why We Need Situational Intelligence, Part 2



In my previous blog I addressed the need for situational intelligence (SI) as an approach to decision-making that combines insights with relevant context to create the big picture we need to make the best possible decisions with the lowest risk. I concluded that blog by promising to explain why and how various technologies such as data access and fusion, analytics with machine learning, artificial intelligence and visual analytics come together to support situational intelligence.

Situational Intelligence itself is not a technology, nor can you use just one technology to create it. Rather, a situational intelligence approach requires a combination of integrated technologies. The main types of technologies are listed below. Seamlessly integrating these technologies creates actionable insights that are especially applicable for real-time operational decision-making.

  • Live connections to data, both at rest and in motion, in a variety of formats and structures (including no structure at all). Access to multiple, disparate sources of data provides the context for new, deeper insights. Connecting directly to data creates great efficiency and savings because data access and preparation often consume as much as 80% of the effort of making data-driven decisions.
  • Analytics, big data, and streaming foundational technologies (such as Spark, Hadoop, SAP HANA) that are inherently scalable and enable high-performance execution of analytics and processing of large datasets. These foundations are typically distributed and use in-memory processing so that complex software executes and generates answers and insights as quickly as possible. Streaming message brokers such as Kafka and Internet of Things (IoT) platforms are also necessary to manage streams of data that can be passed through streaming analytics for real-time insights and/or to be stored for inclusion in subsequent applications of advanced analytics that derive deeper insights.
  • Advanced analytics and streaming analytics that derive insights from the data at rest and data in motion, respectively. Because situations inherently occur at specific times and locations, the ability to correlate spatial and temporal relationships increases the insights that can be derived. Similarly, the ability to correlate entity-to-entity relationships increases the insights by revealing actual and likely ripple effects. Altogether these analytics make it possible to identify the what, when, where, why and how of situations that happened or may happen. In addition, machine learning allows the analytics to adapt to your data and to your use cases.
  • Visual analytics is essential to complete the transformation of data into actionable insights. Intuitive renderings of the relevant data and the resulting insights derived by analytics help users comprehend and act on data at a glance. Output from visual analytics included in alerts via email and SMS is a powerful way of notifying people about critical matters and focusing their attention on acute situations and the decisions to be made.
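Stripped of the platform names above, a streaming analytic can be as small as a check applied to each reading as it arrives. Below is a toy rolling-mean detector in Python; it is a sketch only, and in a production system the stream would arrive via a broker such as Kafka rather than a Python list:

```python
from collections import deque

def streaming_anomaly_detector(stream, window=5, threshold=2.0):
    """Yield readings that deviate sharply from the recent rolling mean.

    A toy stand-in for the streaming analytics described above; the
    window and threshold are illustrative values, not recommendations.
    """
    recent = deque(maxlen=window)   # keeps only the last `window` readings
    for value in stream:
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > threshold:
                yield value         # real-time insight: anomalous reading
        recent.append(value)

readings = [10, 10.2, 9.9, 10.1, 10.0, 14.5, 10.1, 10.0]
print(list(streaming_anomaly_detector(readings)))   # flags the 14.5 spike
```

The same logic applied to data at rest, rather than one message at a time, is the batch counterpart described in the advanced analytics bullet.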

In summary, situational intelligence is an approach that combines data and analytics, including visual analytics, to aid human decision-making. Insights from advanced analytics and streaming analytics are combined with relevant underlying data to create context so that decision-makers have a complete understanding of each situation and make decisions that lead to the best possible outcome.


Why We Need Situational Intelligence, Part 1



Why do we need situational intelligence?

When we lack information, or information is not easily consumed or comprehended, then our decisions are compromised. We don’t have the right information, at the right time, or in the right form to lead to the best possible outcome.

At its core, situational intelligence handles all of the relevant data needed to derive insights that will guide your decisions. Today’s environment typically means large volumes of disparate data, pouring too quickly into over-burdened systems, leaving executives and analysts alike wondering if they can believe the data they see. Some insights can be derived only by sifting through this ever-growing mountain of data looking for hidden correlations. Correctly correlating and analyzing all of the necessary data, and correctly presenting results, recommendations and predictions, is the biggest differentiator of situational intelligence over traditional analytics.

Unlike typical business intelligence or stand-alone analytics solutions, with situational intelligence we receive valuable details, recommendations and predictions that typically result in enhanced competitive advantage through:

  • Cost savings
  • Increased efficiencies, productivity and performance
  • Increased revenues
  • Improved client engagement that raises satisfaction
  • Better understanding of exposure that facilitates better management of risk

To put this into a concrete example, consider why airports need flight and ground traffic control systems. There is the obvious answer: to know where airplanes, vehicles and people are, both in the air and on the ground, in order to stage their movement safely and efficiently. Managing traffic at an airport requires context, such as the current and forecasted weather, to achieve the best possible safety and efficiency. Even a seemingly simple matter such as a broken jetway has many consequences that affect the context of ground control, fueling, cabin cleaning, luggage, passenger relocation, food service, etc. Now, imagine the complexity of handling a crisis situation such as an airplane needing to make an unplanned emergency landing.

Managing an operation with as much complexity, interdependencies and consequences as an airport requires the staff in the operations control center to have a live, real-time, big-picture view of everything that is happening and, ideally, also what is most likely to happen. As you surely recognize, keeping track of so much fast-changing information in a person’s head alone is impossible and prone to errors and omissions.

Clearly having as much relevant and easily comprehensible information as possible provides the context that we naturally seek to guide our decisions and actions. In a follow-on blog I will explain why and how various technologies such as flexible data access, analytics, machine learning, artificial intelligence and visualization should be seamlessly integrated to create and deliver situational intelligence that is truly actionable.


(Image courtesy of Flickr)


Three Rules For User-Centered IoT Analytics


In a recent TechTarget article, Maribel Lopez of Lopez Research says that manufacturing may have a head start in implementing Internet of Things (IoT) solutions, but she still sounds skeptical about IoT in general.

IoT “is a lot of talk and not a lot of action,” she says. “First of all, the phrase ‘IoT’ is meaningless because it doesn’t talk about anybody doing anything that’s useful. Just connecting your stuff is not enough.”

How do you make IoT solutions that are more action than talk? Lopez cites three rules for user-centered analytics in the Internet of Things:

  1. Be relevant to users:  Pushing data to users just because it’s possible is not helpful. Information presented to users needs to be relevant to a task or situation that needs attention. For instance, reporting that vibration in manufacturing equipment is within acceptable limits is of little use. Such information requires no action from the user.
  2. Do the work for users: Performing analysis for users is more useful than equipping users to perform their own analysis. Business intelligence tools may make a table of vibration data easier to manipulate and visualize, but that manipulation and visualization work takes users away from their main task of operating the equipment. As Lopez says, “Saying that the vibration [of manufacturing equipment] is out of range is interesting yet not sufficient.”
  3. Be timely for users: Presenting users with exception data in context with time to react has far more value to users. That keeps users on task and ahead of potential issues. As Lopez says, “Saying the vibration is out of range and if it continues for the next two hours, it’s going to shut down the plant — that’s more interesting.”
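Lopez’s third rule can be illustrated with a toy projection: alert only when the trend in a reading will cross the shutdown limit soon enough that the user still has time to react. The function, samples and limit below are all invented for illustration:

```python
def hours_until_limit(samples, limit, interval_hours=1.0):
    """Linearly extrapolate a rising trend to estimate hours until `limit`.

    Returns None if the trend is flat or falling, 0.0 if the limit is
    already breached. A toy sketch of timely, exception-based alerting.
    """
    rate = (samples[-1] - samples[0]) / ((len(samples) - 1) * interval_hours)
    if rate <= 0:
        return None                       # no rising trend, nothing to flag
    if samples[-1] >= limit:
        return 0.0                        # already out of range
    return (limit - samples[-1]) / rate   # time left before the limit is hit

# Vibration rising 0.5 units/hour toward a shutdown limit of 9.0
print(hours_until_limit([7.0, 7.5, 8.0], limit=9.0))  # 2.0 hours to react
```

An alert phrased as “two hours until shutdown” keeps the operator ahead of the issue, which is exactly the difference Lopez describes.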

Situational Intelligence abides by these rules by turning big data to little data, focusing users on events or conditions that require attention. It’s not looking at all the data that counts; it’s looking at the right data at the right time.


What if My Data Quality Is Not Good Enough for Analytics or Situational Intelligence?




You may feel that the quality of your data is insufficient for driving decisions and actions using analytics or situational intelligence solutions.  Or, you may in fact know that there are data quality issues with some or all of your data.  Based on such feelings or knowledge, you may be inclined to delay an analytics or situational intelligence implementation until you complete a data quality improvement project.

However, consider not only the impact of delaying the benefits and value of analytics, but also that you can actually move forward with your current data and achieve early and ongoing successes. Data quality and analytics projects can be done holistically or in parallel.

“How?” you ask. Consider these points:

  • Some analytics identify anomalies and irregularities in the input data. This, in turn, helps you in your efforts to cleanse your data.
  • Some analytics, whether in a point solution or within a situational intelligence solution, recognize and disregard anomalous data. In other words, data that is suspect or blatantly erroneous will not be used, so the output and results will not be skewed or tainted (see this related post for a discussion of “The Relationship Between Analytics and Situational Intelligence“). This ability keeps imperfect data quality from becoming a showstopper.
  • It is a best practice to pilot an analytics solution prior to actual production use. This allows you to review and validate the output and results of analytics before widespread implementation and adoption. Pilot output or results that are suspect or nonsensical can then be used to trace irregularities in the input data.  This process can  play an integral part in cleansing your data.
  • Some analytics not only identify data quality issues but also calculate a data quality score that relates to the accuracy and confidence of the output and results of the analytics. End-users can therefore apply judgement if and how to use the output, results, recommendations, etc. Results with low data quality scores point to where data quality can and should be improved.
  • Visualization is a powerful tool within analytics for spotting erroneous data. Errors and outliers that are buried in tables of data stand out when placed in a chart, map or other intuitive visualization.
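As a minimal illustration of the second and fourth points above, the sketch below disregards blatantly anomalous readings using a robust median-based cut and reports a simple quality score. The threshold and sample figures are illustrative, not taken from any particular product:

```python
from statistics import median

def screen_readings(values, cut=5.0):
    """Drop blatantly anomalous readings and report a simple quality score.

    Uses median absolute deviation (MAD), which a single extreme outlier
    cannot inflate the way it inflates a mean-based standard deviation.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0   # avoid divide-by-zero
    clean = [v for v in values if abs(v - med) / mad <= cut]
    quality_score = len(clean) / len(values)            # share of usable readings
    return clean, quality_score

readings = [101, 99, 102, 98, 100, 9999]                # one obvious meter glitch
clean, score = screen_readings(readings)
print(clean, round(score, 2))   # glitch dropped; score ~0.83
```

Low scores point end users, per the fourth bullet, to exactly where data quality can and should be improved.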

You can be pleasantly surprised at how much success you can achieve using data that has not been reviewed, scrubbed or cleansed. So set aside your concerns and fears that your analytics or situational intelligence implementation will fail or have limited success if you do not first resolve data quality issues.

Instead, flip such thinking around and use analytics as one of the methods to review and rectify data quality. In other words, integrating analytics into your efforts to assess and cleanse your data is a great way to leverage your investment in analytics and get started sooner rather than later.

What are you waiting for?  Get started exploring and deriving value from your data no matter the status of its quality.


Detecting Energy Theft With Situational Intelligence


In 2010, BC Hydro estimated that energy theft cost the utility up to CAD $100 million annually in lost revenues. That’s enough power to supply 77,000 homes for a year.

As described in this EY article, much of the theft supported illegal marijuana growing operations hidden in residential buildings across the BC Hydro service territory, an area the size of the UK and France combined.

At the time, BC Hydro had just finished installing smart meters for all of their 1.9 million customers. Smart meters meant no more human meter readers walking routes and reporting suspicious activity. But smart meters also meant a valuable source of big data for detecting and prosecuting energy theft.

BC Hydro also installed smart meters along feeders and segments on the power grid, upstream from consumption meters on houses and businesses. By using situational intelligence applications that combined spatial, temporal and network analytics, BC Hydro began to detect which locations had abnormally low energy consumption, which could indicate power theft.

In particular, BC Hydro could compare energy delivered down a feeder with energy billed from meters along that feeder. If those two numbers weren’t equal, then energy was being lost and potentially stolen. Other factors such as weather, season, time of day, size and type of building, and so on enhanced BC Hydro’s ability to identify likely instances of energy theft.
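The feeder balance check described above can be sketched in a few lines. The tolerance and readings are illustrative; a real system would also correct for technical losses, weather, season and building type, as noted:

```python
def feeder_loss(delivered_kwh, billed_kwh_by_meter, tolerance=0.05):
    """Compare energy delivered down a feeder with energy billed from its meters.

    Returns the unaccounted-for energy and whether it exceeds the tolerance.
    The 5% tolerance is an assumed figure for illustration only.
    """
    billed = sum(billed_kwh_by_meter)
    loss = delivered_kwh - billed
    suspicious = loss / delivered_kwh > tolerance
    return loss, suspicious

# 1,000 kWh delivered, but meters on the feeder billed only 820 kWh
loss, flag = feeder_loss(1000.0, [300.0, 280.0, 240.0])
print(loss, flag)   # 180 kWh unaccounted for, flagged for investigation
```

Flagged feeders then become candidates for the spatial, temporal and network analysis that narrows the search to specific premises.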

Advanced visualizations helped revenue analysts and field investigators prioritize, locate, and research suspected theft cases. If a case warranted legal prosecution, the data and visualizations provided compelling evidence at trial.

Today, theft has been reduced by 75 percent. Plus, customers are safer, with fewer dangerous hacks into the power grid. BC Hydro continues to use advanced analytics and visualization to reclaim revenues lost to theft as well as technical and non-technical loss.


Analytics Beyond the World of Dashboards: Part 2


In the first part of this post, I drew a bright line between the world of BI dashboards and the situational intelligence analytics that industries are now deploying to derive value from the variety of data sources at their disposal.

Here are some examples of situational intelligence applications that exemplify these points:

  • Optimizing workforce scheduling—an operator of wind farms faces multiple variables in scheduling crews to perform maintenance and repairs. Assigning the day’s work depends on crew availability and skills, weather conditions, part availability, safety constraints, travel time to and from a turbine, climbing time up and down the turbine, and much more. It used to take managers many hours each day to build work schedules using traditional tools and those schedules had to be manually revised when weather conditions, for example, suddenly changed. Now optimization software automatically builds the most efficient schedule in minutes, making adjustments on-the-fly as conditions demand.
  • Predicting failure—a utility faces mounting pressure from regulators and ratepayers after catastrophic failure of power distribution equipment. How do they determine the true risk inherent in their millions of assets spread over thousands of square miles? They had charts showing the historical performance of their assets, but all assets age differently based on geographic location, relationship to other assets in the network, workload, maintenance record, and more. With predictive analytics, asset planners see the likelihood that an asset will fail plus the consequences should it fail. These two measures give them an accurate gauge of risk for each asset and group of assets. Risk-based decisions (as opposed to gut instinct and incomplete data) drive choices about maintenance and capital expenditures.
  • Detecting anomalies—a railroad has tens of thousands of miles of track to inspect and maintain. Wear and tear, weather conditions, and natural disasters continually affect track conditions. Their visual dashboards showed them on a monthly basis which routes had slow throughput and which sections of track were overdue for inspection or repair. This data is significant since an increase in system-wide train speed translates into millions of dollars of revenues. Using anomaly detection, they now pinpoint sections of track that warrant inspection and possible repair before they fail or cause train delays. The data analysis is presented on maps that highlight problematic sections of tracks.
  • Streaming analytics—a construction company needs to know where tens of thousands of vehicles, tools and pieces of equipment are and how they are being used (or abused). By using streaming analytics on the data from sensors placed on trucks and tools, the company pinpoints equipment that is delayed in transit, reassigns unused equipment to other nearby sites, prevents tool theft and loss, and audits vehicle movements to support applications for fuel tax rebates, to name a few.
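The risk gauge in the failure prediction example, likelihood times consequence, can be sketched simply. The asset names and figures below are hypothetical:

```python
def asset_risk(failure_probability, consequence_cost):
    """Risk as likelihood times consequence: the two measures described above."""
    return failure_probability * consequence_cost

# Hypothetical assets: (id, annual failure probability, consequence in dollars)
assets = [("transformer-17", 0.02, 5_000_000),
          ("switch-401",     0.30,    40_000),
          ("pole-9921",      0.05,     8_000)]

ranked = sorted(assets, key=lambda a: asset_risk(a[1], a[2]), reverse=True)
print([a[0] for a in ranked])
# transformer-17 ranks first despite its low failure probability,
# because the consequence of losing it dominates the risk score.
```

This is why a risk-ranked view can reorder maintenance priorities that gut instinct, looking at failure frequency alone, would get wrong.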

These “real” analytics systems may sound like they require highly sophisticated, specially trained users to operate. In reality, they are operated by regular business users who need no specialized training, and who can still interact with the analytics directly, for example to run what-if scenarios.

Dashboards still have a role for many users in many scenarios. But as computing and communications technologies continue to connect the world into an Internet of Things, true analytics systems for prediction, anomaly detection, optimization, and streaming data will take their place at the head of the table.