Improving Your Operations with Data Visualization Applications


Operational and Cost Efficiency

Visual analytics is an effective way to understand the stories in your data and take appropriate action. Effective data visualizations powered by visual analytics let you easily and interactively dive deep into your data to identify opportunities to improve efficiency and productivity, cut costs, and optimize resources, all of which are at the crux of increasing operational efficiency. Operations leaders therefore want to understand quickly what is in their data so they can initiate and monitor improvement programs and make the best possible decisions for the types of situations they regularly face.

But there’s a small catch – while it’s tempting to believe that putting powerful data visualization authoring software in the hands of business users will result in useful solutions, this is rarely a recipe for success. Creating effective data visualizations requires expertise; the authoring software itself does not magically surface insights. Human expertise is required to implement the kind of functionality that surfaces insights and conveys them intuitively enough to be actionable for operational improvements.

Basic tables and charts are easy to create; solving problems and enabling data discovery that leads to root causes and opportunities to improve operations is an entirely different matter. Spreadsheets and most visualization software make it easy to create pretty charts and to combine tables and charts into dashboards, but they do not fully meet the needs of operations leaders. Let’s face it: if spreadsheets alone were sufficient, you’d already have all you need to run your business operations effectively.

Questions that you should ask yourself are:

Do one or more simple visualizations or dashboards containing multiple simple visualizations solve real business operations problems?

Do the visualizations surface buried insights and make them readily comprehensible and actionable?

Is it possible to clearly visualize activity on a map and see changes unfold over time?

Is it possible to synchronize components within a dashboard so they update when data is selected, filtered or sorted?

How easily can these capabilities be implemented, if at all?

The answers to these questions expose the need for specific functionality that transcends data visualizations alone; application software is required to deliver it – to be specific, data visualization applications. This is one reason expertise is required: applications must be implemented to fully deliver on the promise of visual analytics. That expertise combines art, skill and knowledge that typical operations personnel do not possess. Business users rarely understand their organization’s data landscape or how to model and transform their data for visualization. And they often don’t have the time to learn and hone the expertise needed to implement data visualization applications, regardless of how simple modern data visualization development tools are to use.

Full service offerings that use best-of-breed visual analytics are a great way to obtain the needed combination of expertise and visual analytics to enable you to achieve the objective set forth in this post – to improve all aspects of operational efficiency.


Mobile Apps for Internet of Things Data Acquisition



The Internet of Things is in many ways a catchall phrase used to describe everything from types of devices, to communications gateways, to new service-oriented business models. IoT devices are generally capable of sensing and communicating. In the consumer sector they include thermostats, door locks, garage door openers, etc.; in the industrial sector there are many sensors used in manufacturing processes, vehicles, heavy equipment, and so on. Sensing and communicating data has traditionally been referred to as data acquisition – a common IoT use case. What is often overlooked is the use of smartphones and tablets for data acquisition. These devices include several sensors, such as for audio, video and motion.

The following story highlights how the mobile devices that we use every day are becoming integral to the IoT ecosystem.

Recently I was at a cafe with a friend. A former coworker of my friend, Craig, walked in, so my friend invited him to join us. My friend asked Craig, “Where are you currently working?” Craig answered, “I am working as an independent contractor developing a unique mobile app.”

With the Apple and Google app stores full of apps, many of which essentially do the same thing, I wondered what app could be new and unique. I quickly found out as Craig described how the app would help mining companies improve how they determine where mineral deposits most likely exist. Easier identification of mineral deposits will accelerate, optimize and lower the cost of mining – a definite game changer.

Determining where to explore and excavate is a combination of manual labor and trial and error. Miners typically pick and scrape away at surfaces in a mine to collect sample material, which is then examined to determine whether it contains mineral deposits. If deposits are detected, further exploration of that area is initiated.

Craig then explained how the app works. Each mineral has a unique signature that can be identified by a spectrometer (from how the mineral reflects light). Photos and videos taken with smartphones and tablets use compression so the signature cannot be detected using standard photo and video apps. The app he developed interfaces directly to the video sensor so it can analyze the reflected light with the needed fidelity to recognize spectral signatures that identify specific areas where desired mineral deposits can likely be found. The locations identified are marked and uploaded to an operations center for further review and for planning.
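
Craig’s actual implementation isn’t public, but the matching step he describes can be sketched in Python. This is purely illustrative: the reference signatures, wavelength count and correlation threshold below are invented, and a real app would analyze raw sensor frames rather than five-point spectra.

```python
import math

# Hypothetical reference signatures: reflectance sampled at a few fixed
# wavelengths. Real spectral libraries are far denser; these values are made up.
REFERENCE_SIGNATURES = {
    "gold":   [0.12, 0.35, 0.60, 0.82, 0.90],
    "copper": [0.30, 0.55, 0.70, 0.75, 0.72],
    "iron":   [0.25, 0.28, 0.30, 0.33, 0.35],
}

def _normalize(values):
    """Center a spectrum on its mean and scale it to unit length."""
    mean = sum(values) / len(values)
    centered = [v - mean for v in values]
    norm = math.sqrt(sum(c * c for c in centered))
    if norm == 0:  # flat spectrum carries no shape information
        return [0.0 for _ in centered]
    return [c / norm for c in centered]

def best_match(measured, threshold=0.98):
    """Return the mineral whose reference signature correlates most
    strongly with the measured spectrum, or None if nothing clears
    the (arbitrary) correlation threshold."""
    m = _normalize(measured)
    best, best_score = None, threshold
    for mineral, signature in REFERENCE_SIGNATURES.items():
        s = _normalize(signature)
        score = sum(a * b for a, b in zip(m, s))  # cosine of centered spectra
        if score > best_score:
            best, best_score = mineral, score
    return best
```

A detected match, together with the device’s GPS fix, is what would be marked and uploaded to the operations center.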

This app shows how ingenuity and software running on commercial off-the-shelf smartphones and tablets make them applicable to data acquisition use cases. More use cases that integrate people and mobile apps into the IoT will surely ensue.

So the next time you pick up a smartphone or tablet think of the myriad of uses it can be programmed to perform, especially when connected to other devices, systems and people. If you know of clever uses of mobile apps for IoT use cases, please comment.


Augmented Reality For The Enterprise: A Use Case In Electrical Substation Field Service



Augmented reality can make a real impact for field service workers in almost any industry. For a specific example, let’s look at a field technician visiting a problematic transformer at a substation.

Currently, field technicians download test plans and view them on a laptop or iPad. They have to remain near their vehicle while they conduct spatial reasoning about electrical circuits and their locations. They have to hold some of what they have learned and deduced in memory, eating up bandwidth in the parts of the brain that handle memory, planning and spatial reasoning or navigation. This forces them to continually reference back and forth between the computer screen, supporting documents, tools and the work site. Many of the artifacts they use are situated at angles and distances that force them to turn away, interrupting the mental process of fixing their eyes on the object they plan to work with so that they can do the reasoning. They are constantly switching between numerous cognitive tasks.

Below is a screen grab from a video of one technician conducting such procedures.

[Screen grab: a field technician working through test procedures at a substation]

The problem

Field technicians show up to substations and other work sites without knowing how to navigate the site. They also need to find the proper tools and documentation for completing maintenance, tests and repairs to assets.

Their eyes continually cycle between consulting site plans, asking site staff where assets are located, monitoring asset performance and completing their task checklist. Their hands are likely juggling multiple items that they need to set down in particular locations and keep track of in order to do the job.

Solution


With augmented reality, field service workers don glasses at the job site. The glasses give them an internal map of the site they are at. Locations of tools and supplies are flagged or highlighted. Digital documents needed to complete a task appear in their field of vision. They can see any chart for an asset by shifting their attention to the asset. Field workers won’t need to remember anything or fumble with tools and documentation simultaneously. They can just focus on performing the work.

Here’s an example of how a substation needing maintenance might appear with augmented reality.

[Mockup: annotated augmented reality view of a substation transformer]

  1. The menu helps decrease the total space filled by UI elements. Only the current UI element is active; others appear wilted until workers turn their gaze toward them or access them by hand or voice.
  2. This checklist helps the user complete their work.
  3. The radar helps them navigate the station. The orange square shows the location of the nitrogen they will need.
  4. A capacity indicator for the transformer cylinder sits off in the distance. Relative size indicates importance; it isn’t up close because it isn’t urgent, and spatial relationship is the primary representation in use.
  5. The chart shows year-over-year performance for the transformer. The orange line shows current performance, which fluctuates outside its acceptable range.
  6. The problematic area of the transformer is highlighted to draw the user’s attention.

This is not science fiction. It is reality. All the technologies and features listed in this use case are completely feasible with many of the headsets coming to market in 2016-2017. One example I love is the Meta headset, which has gesture tracking among other awesome features.


Could ‘Pokémon Go’ Inspire Enterprise Productivity?



The world is going crazy for Pokémon Go. Nintendo’s stock value is making huge gains while masses of people are out hunting little ‘holographic’ critters. The technology isn’t new: Yelp has offered similar functionality, called Monocle, for four years. Yet Pokémon Go has made a ginormous splash. How? They used technology to solve a pain point, as their profitability attests, so I won’t belabor that product vision here. Instead I’d like to answer another question: how can augmented reality help the enterprise on the same scale as the B2C market?

The first part of the answer is that the motive to introduce AR to enterprise should not be about making a lot of money but rather helping a lot of brilliant people attain more.  Guy Kawasaki says, “The genesis of great companies is answering simple questions that change the world, not the desire to become rich.” The same applies within an organization eyeing new product offerings.

The second part of the answer is more involved, so I’ll first discuss evidence that augmented reality indeed helps brilliant people, dive into a use case, and then highlight some good user experience design principles stemming from neuroscience that will make the use case come to life.

Augmented Reality (AR) is a technology that registers digital elements onto the physical world around us; a stop light could be considered primitive AR. AR on mobile devices isn’t new, but neither was the tablet before Apple made mountains of cash by designing it the right way. After Google Glass, there was a healthy dose of skepticism that everyday people would enjoy using augmented reality. Many assumed that if people don’t want the information hands-free on their glasses, why would they want to hold up a hand for the same information they didn’t find useful?

Even as AR is in its technological infancy, market guru Greg Babb explains how augmented reality reduced errors and time to complete tasks in a wing assembly study presented by Paul Davies in conjunction with Iowa State and Boeing.

[Chart: time to complete wing assembly, AR-assisted vs. traditional methods]

The chart above shows that AR-assisted wing assembly took significantly less time than traditional methods. There is a major drive in enterprise AR to build applications for field crews and assembly workers. Meta CEO Meron Gribetz is on record saying he expects to throw away all the monitors in his office by next year and work only with AR headsets. This is compelling because Meta consists of developers, designers and scientists – meaning knowledge workers would be using AR at their desks. This is either crazy or prophetic.

So what would an enterprise application look like? Let’s look at it through the lens of situational intelligence and consider the following use case:

Wind Turbine Use Case

A scientist at an energy company needs to run prescriptive analytics for wind turbines. Government compliance regulations have just been reformed, and costly repairs and updates to the machinery must be implemented at an accelerated pace; heavy fines will be instituted for safety violations. Our scientist wants to use Matlab to run simulations on existing wind turbines to predict which ones have a greater risk of breaking down, overheating or malfunctioning. He thinks he can get more yearly longevity out of the gearboxes, within certain confidence levels.

  • He first speaks out loud, saying, “Show me wind farms in central California.”
  • He sees a 3D map of several wind farms that he spins with his hands
  • The length of time he gazes at a certain region makes it slowly expand
  • He sees temperature on 4 wind farms with a high failure prediction
  • He increases severity of wind velocity changes in his data set and sees gearbox 4 fail
  • His gaze on the fourth gearbox chart line causes a line to appear animating towards the holographic turbine
  • He picks up the turbine, swipes the outer shell away and sees the motor spinning
  • He dictates notes about the turbine and recommends replacing the gearbox sooner than others
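
The gaze-driven expansion in the walkthrough above could be modeled as a simple dwell-time function. The growth rate and cap here are invented for illustration:

```python
def gaze_scale(dwell_seconds, base=1.0, rate=0.15, max_scale=2.5):
    """Scale factor for a gazed-at region: it slowly expands the longer
    the user's gaze dwells on it, capped at max_scale.
    All parameter values are illustrative assumptions."""
    return min(max_scale, base + rate * dwell_seconds)
```

A renderer would call this every frame with the accumulated dwell time, so the region grows smoothly and stops at the cap rather than overwhelming the scene.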

Outcome

Our scientist is able to work faster and more intuitively, without opening various files and programs with a mouse. He has more visual real estate and can now work with depth rather than just two dimensions. He is more productive, which makes his company more money.

Neural Interface Design

Pictorial cues are an important aspect of good user experience that make a scene more realistic in AR. Some of these are:

  • Occlusion – One object blocking part of another
  • Relative Size – Equal-sized objects take up less of our field of vision as their distance from us increases
  • Shadows – Objects that cast shadows seem more real
  • Accretion – Aspects of an object appear as a user moves in the physical world

Another aspect to design for in AR is binocular disparity, which assists with depth perception. Studies show that neurons in our visual cortex fire optimally when a stimulus has a specific amount of disparity. That is why you see one wind turbine moved to the right of another in the wireframe.

So maybe you won’t get to hunt Pokémon at work, but if your company is smart enough to outfit you with an AR headset you will definitely feel like a Jedi manipulating the world around you with your senses. You will love sharing this world with your co-workers. Augmented reality unleashes your imagination. The next couple of years promise some exciting times for reality computing!

Special thanks to JulianHzg for making a great wind turbine in Blender, which I recolored and used in my wireframe.



Use Visual Analytics to Get Started with the IIoT


Industrial IoT (IIoT) applications bring about many opportunities to increase operational efficiency by presenting personnel with timely insights into their operations. Visualizing IIoT data using visual analytics is a proven way to facilitate insight-driven decisions. So at the very least your IIoT initiative will start off by integrating IIoT connectivity, visual analytics and other system components. To best ensure early and ongoing success it is recommended that you follow the best practice of starting small, attaining quick wins and then increasing scope and/or scale.

The first step is to connect devices and systems and use visual analytics to create a simple visualization of your IIoT data. If the IIoT devices are mobile or geographically separated, then an appropriate visualization would be to display the location of the devices on a map such as shown above. This is an effective way to verify connections and validate successful integration.

The second step is to collect and intuitively visualize your IIoT data. At this point you can identify issues and make operational efficiency improvements. As an example, a freight trucking business can see a map with the locations and times where its trucks are moving slower than expected. This information can be used to change routes on the fly to maximize on-time deliveries. As this example highlights, connecting to IIoT data streams and visualizing the data facilitates operational efficiency improvements.

The third step is to correlate data from different systems and data sources, including time series data from devices at different locations. Visualizing data correlated by time and location makes it possible to create comprehensive big-picture views that reveal details about what happened and is happening – where, when, why and how. Using the trucking example, areas where driving speeds are consistently slower than expected are highlighted by the red lines on the map above. This information is used to refine future routes, schedules and delivery commitments.
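
The time-and-location correlation in this step can be sketched as a simple grouping. The record fields, expected speed and sample minimum below are illustrative assumptions, not part of any particular product:

```python
from collections import defaultdict

def slow_segments(records, expected_mph=55.0, min_samples=3):
    """Correlate speed observations by road segment and hour of day.

    `records` is an iterable of (segment_id, hour, speed_mph) tuples,
    standing in for GPS pings from the trucks. Returns the
    (segment, hour) pairs whose average observed speed falls below
    the expected speed, given enough samples to be meaningful.
    """
    grouped = defaultdict(list)
    for segment, hour, speed in records:
        grouped[(segment, hour)].append(speed)
    return {
        key: sum(speeds) / len(speeds)
        for key, speeds in grouped.items()
        if len(speeds) >= min_samples
        and sum(speeds) / len(speeds) < expected_mph
    }
```

The resulting (segment, hour) averages are exactly what a map layer could render as red line segments.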

The fourth step is to apply advanced analytics to the IIoT data to generate insights for inclusion into the visualizations. Returning to the trucking example, advanced analytics will recommend the optimal average truck speed to minimize fuel costs based on the weight of the load they are carrying. Visualizing each truck using color coding to highlight the biggest offenders makes the analytics results actionable at-a-glance so that operations managers and drivers can improve driving efficiency. In the image above it is easy to see the truck icons colored yellow and red that represent the trucks that are traveling outside of the optimal speed range.
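
The fourth step’s color coding could be sketched as follows. The optimal range, tolerance and color buckets are illustrative assumptions, not the output of a real analytics model:

```python
def speed_status(speed_mph, optimal_range=(55, 62), tolerance=5):
    """Map a truck's average speed to a map-icon color.

    Green:  within the analytics-recommended optimal range.
    Yellow: outside the range, but within `tolerance` mph of it.
    Red:    well outside the range - a "biggest offender" truck.
    The range and tolerance values here are placeholders.
    """
    lo, hi = optimal_range
    if lo <= speed_mph <= hi:
        return "green"
    deviation = (lo - speed_mph) if speed_mph < lo else (speed_mph - hi)
    return "yellow" if deviation <= tolerance else "red"
```

Applying this per truck gives the at-a-glance yellow and red icons described above.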

Having completed these steps you are positioned to leverage your IIoT infrastructure and expand on your competency by combining visual analytics, data correlation and advanced analytics in innovative ways to address business problems and facilitate operational efficiencies that would not otherwise be possible. Future blog posts will cover such combinations and the corresponding operational efficiencies.


Why We Need Situational Intelligence, Part 3


In Part 1 and Part 2 of this series I addressed why situational intelligence is a natural and essential method of decision-making that is especially apropos for real-time business operations. Inherent in my argument is an altruistic belief that people make the best decisions and take the best actions with the information at hand. That is the crux of the matter: the information at hand, and how accurate and actionable it is. What information is available to decision makers? Does it contain insights? Is it current? Is it clear, or is interpretation and/or further analysis necessary before the information is actionable? Is it reliable? How comprehensive is it? Correspondingly, how much uncertainty shrouds the information, the decision, the action and the outcome? What are the risks of making a bad decision (including no decision)?

Ideally the answers to the preceding rhetorical questions should all be encouraging. But how can these attributes of data and insights for decision-making be assured, especially when decisions are made by different people, when decisions needed are for unplanned situations, and when timeliness is important? Systematized decision-making aided by technology-generated intelligence is a way to assure that accurate insights are derived from data and actionable by decision makers. As discussed in the preceding blogs (and other blogs too), advanced analytics and visual analytics are essential building blocks for analytics that support operational decision-making. Data must be transformed into insights and intelligence. The insights must also be transformed so they are readily comprehended at-a-glance and are actionable.

Another key consideration is having a broad composition of data for analysis. The more data from relevant sources within the enterprise, from the IoT and from external sources, the more insights can be derived by analytics. Accessing external data enriches intra-enterprise data sources with relevant context that is useful when decision makers require supplemental information, such as when insights brought forward to decision makers are not immediately actionable. In such cases further discovery helps decision makers gain the understanding and confidence needed to make a decision. This is where additional data sources and the corresponding added context facilitate interactive data exploration so that decision makers can make timely and favorable decisions. Sources and types of external data include weather, traffic, news, spot market prices and social media.

Having live connections to data sources ensures that decisions are made using the most up-to-date data, and also enables interactive exploration of underlying data to deeply understand and resolve complex multifaceted situations. A single system that maintains live connections to data sources yields another benefit – it helps organizations bridge their data silos and unify their data assets.

Here at the end of this blog series, situational intelligence sounds easy, and somewhat obvious too: connect to relevant data sources, apply analytics, and make the resulting insights and underlying data available to decision makers with intuitive visualizations so they can consistently make the best possible decision in any situation. If you use an off-the-shelf solution to implement situational intelligence, getting started is also relatively simple. Decide for yourself. What does your situation require?

If you have experiences, thoughts, opinions on this topic, please comment and share them.


Does This ‘Hologram’ Fly?



An augmented reality headset interface

Holograms are awesome. Their power lies in allowing humans to share imaginations, which is why they work so well in movies. But what are holograms well suited for, other than entertainment purposes like Star Wars special effects or resurrecting Tupac at Coachella in 2012? While what we call ‘holograms’ are technically just an optical illusion called Pepper’s Ghost, they are very useful for other applications. From here on out we will refer to these ‘pseudo’ holograms as augmented reality.

An optical illusion of 2Pac in 2012 using Pepper’s Ghost

As Tupac rapped, “Reality is wrong. Dreams are for real.” I think he meant that dreams allow us to simulate all our hopes and fears and consider the best outcomes. Augmented Reality allows us to ‘dream’ into the future and make really smart decisions affecting our present.

Can AR be used for business applications? Sure! AR is powerful for visually exploring predictive models that are registered to our physical surroundings.

For example, take an air traffic controller who has just started a position in the all-new, $126 million control tower that goes online July 28 at San Francisco International (SFO).

The new control tower coming online in July 2016

Did you know that SFO, like many other air traffic control units, was operating with paper strips until last year?

A paper flight ticketing system still in use today

The FAA is currently implementing an upgraded air traffic control system called NextGen in ‘pilot’ airports (pun intended). But this is still limited to a screen. Air traffic controllers will still need to switch between looking at information about a plane and the plane itself. They also have to look into the future.

As part of NextGen, why not simplify the air traffic controller’s job by melding information about the plane with a hologram of the plane itself, allowing direct manipulation of the scenario through a modern UI? Enter holograms.

The Use Case:

SFO air traffic controllers are directing planes during the holidays. They need the ability to look into the future and see where issues may arise. For instance, a storm might be affecting arrival times for planes coming in from the east. Our controller needs to see where collision margins are thin and potentially reroute planes.

What Is On The Horizon?

Orange outlines represent closer planes, white dots show planes further away

Planes far off in the distance can be represented by white dots of varying size, with size representing spatial distance. When planes come within a predetermined distance they are rendered as low-fidelity holograms. This plane is orange because it is triggering a warning; the other planes are white since they are all speculative.
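
The rendering rule above could be sketched as a level-of-detail function. The distance thresholds, pixel sizes and return shape are all assumptions for illustration, not from a real air traffic system:

```python
def render_fidelity(distance_m, warning=False):
    """Choose how to draw a tracked plane in the AR scene.

    Far planes are drawn as dots whose size encodes distance; planes
    inside a hand-off threshold get a low-fidelity hologram. A warning
    turns the plane orange. All constants here are invented.
    """
    HOLOGRAM_RANGE_M = 15_000   # assumed hologram hand-off distance
    FAR_LIMIT_M = 80_000        # beyond this, the smallest dot
    MAX_DOT_PX, MIN_DOT_PX = 12, 2

    color = "orange" if warning else "white"
    if distance_m <= HOLOGRAM_RANGE_M:
        return {"style": "hologram", "color": color}
    # Linearly shrink the dot from MAX to MIN as distance grows.
    t = min(1.0, (distance_m - HOLOGRAM_RANGE_M) / (FAR_LIMIT_M - HOLOGRAM_RANGE_M))
    size = round(MAX_DOT_PX - t * (MAX_DOT_PX - MIN_DOT_PX))
    return {"style": "dot", "color": color, "size_px": size}
```

Starting low-fidelity and adding detail on approach keeps the scene uncluttered, matching the design philosophy described later in this post.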

Radar Projections

With an augmented reality interface, a controller can reach out to a series of radars showing simulated or real-time inbound and outbound planes by time of day. There is no clicking, just grabbing and moving hologram-like objects.

Radar time machine for controllers

For example, here is a radar timeline that projects all the events and planes happening in real time. The radars are arranged in a circular dial that spins and expands to show the radar at the specific time you are interested in. The white box corresponds to the largest circle with two orange dots. If you dialed the radar back to the blue circle (the current, real-time view), the box would move to the ‘Now’ indicator. Obviously the present is always moving, so the word ‘Now’ moves with time. Anything projected or in the past takes on a white color. Once controllers select a radar matching the busiest time of day, they can verbally tell their computer to “show projected warnings,” or grab the two orange dots, which slightly expand as their arm approaches.

Flight Simulations

A traditional pop-up window in AR showing a warning

Once they interactively activate the warning dots, controllers see low-fidelity planes landing, departing and taxiing along the runway. Gazing or moving an arm toward a plane populates an information window displaying details and a warning for that asset, where relevant.

Closer Examination


Controllers can grab the virtual plane by reaching out, at which point its display quality increases to a more detailed CAD model. The philosophy behind this design is that you start at a high level and are offered more detail as you interact with items. This alleviates the need for an explicit file structure. You can spin the plane, open it to see a cross-section, or even view pilot and system information.

Conclusion

Augmented reality software engineer Tyler Lindell says, “Using augmented reality and interacting with real-world objects will take us beyond the barrier we have known for the last 40 years, the personal computer.”

Computers should feel natural. You should be able to use any interface and feel like you have always used it. No more opening applications. No more navigating file systems. No more explicit metaphors like icons and spreadsheets trapped in the 2D ‘plane’ of a PC.

As imaginative as this prototype AR interface is, it is likely to have myriad shortcomings and flaws. That is why I’m asking you, the reader, to complete this survey regarding this UI. Please be brutally honest.



Why We Need Situational Intelligence, Part 2



In my previous blog I addressed the need for situational intelligence (SI) as an approach to decision-making that combines insights with relevant context to create the big picture we need to make the best possible decisions with the lowest risk. I concluded that blog by promising to explain why and how various technologies such as data access and fusion, analytics with machine learning, artificial intelligence and visual analytics come together to support situational intelligence.

Situational Intelligence itself is not a technology, nor can you use just one technology to create it. Rather, a situational intelligence approach requires a combination of integrated technologies. The main types of technologies are listed below. Seamlessly integrating these technologies creates actionable insights that are especially applicable for real-time operational decision-making.

  • Live connections to data, both at rest and in motion, in a variety of formats and structures (including no structure at all). Access to multiple, disparate sources of data provides the context for new, deeper insights. Connecting directly to data creates great efficiency and savings because data access and preparation often consume as much as 80% of the effort of making data-driven decisions.
  • Analytics, big data, and streaming foundational technologies (such as Spark, Hadoop, SAP HANA) that are inherently scalable and enable high-performance execution of analytics and processing of large datasets. These foundations are typically distributed and use in-memory processing so that complex software executes and generates answers and insights as quickly as possible. Streaming message brokers such as Kafka and Internet of Things (IoT) platforms are also necessary to manage streams of data that can be passed through streaming analytics for real-time insights and/or to be stored for inclusion in subsequent applications of advanced analytics that derive deeper insights.
  • Advanced analytics and streaming analytics that derive insights from the data at rest and data in motion, respectively. Because situations inherently occur at specific times and locations, the ability to correlate spatial and temporal relationships increases the insights that can be derived. Similarly, the ability to correlate entity-to-entity relationships increases the insights by revealing actual and likely ripple effects. Altogether these analytics make it possible to identify the what, when, where, why and how of situations that happened or may happen. In addition, machine learning allows the analytics to adapt to your data and to your use cases.
  • Visual analytics is essential to complete the transformation of data into actionable insights. Intuitive renderings of the relevant data and the resultant insights derived by analytics help users comprehend and act on data at-a-glance. Including output from visual analytics in alerts via email and SMS is a powerful way of notifying people about critical matters and focusing their attention on acute situations and the decisions to be made.
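
To make the streaming-analytics idea above concrete, here is a minimal sketch of deriving a real-time insight from data in motion. A plain list stands in for messages consumed from a broker such as Kafka, and the window and threshold values are arbitrary; a real deployment would use a streaming engine rather than a loop:

```python
from collections import deque

def streaming_alerts(readings, window=5, threshold=2.0):
    """Flag readings that deviate sharply from the recent rolling mean.

    `readings` is an iterable of (timestamp, value) pairs arriving in
    order, standing in for a message stream. Returns the readings that
    would trigger a real-time alert. Window and threshold are invented.
    """
    recent = deque(maxlen=window)
    alerts = []
    for ts, value in readings:
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > threshold:
                alerts.append((ts, value))
        recent.append(value)
    return alerts
```

In a situational intelligence system, each flagged reading would feed a visual alert (map highlight, email, SMS) rather than a returned list.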

In summary, situational intelligence is an approach that combines data and analytics, including visual analytics, to aid human decision-making. Insights from advanced analytics and streaming analytics are combined with relevant underlying data to create context so that decision-makers have a complete understanding of each situation and make decisions that lead to the best possible outcome.


Why We Need Situational Intelligence, Part 1


(Image: air traffic control)

Why do we need situational intelligence?

When we lack information, or when information is not easily consumed or comprehended, our decisions are compromised. We don’t have the right information, at the right time, or in the right form to lead to the best possible outcome.

At its core, situational intelligence handles all of the relevant data needed to derive insights that will guide your decisions. Today’s environment typically means large volumes of disparate data pouring too quickly into over-burdened systems, leaving executives and analysts alike wondering whether they can believe the data they see. Some insights can be derived only by sifting through this ever-growing mountain of data looking for hidden correlations. Correctly correlating and analyzing all of the necessary data, and correctly presenting results, recommendations and predictions, is the biggest differentiator of situational intelligence over traditional analytics.

Unlike typical business intelligence or stand-alone analytics solutions, situational intelligence delivers valuable details, recommendations and predictions that typically enhance competitive advantage through:

  • Cost savings
  • Increased efficiencies, productivity and performance
  • Increased revenues
  • Improved client engagement that raises satisfaction
  • Better understanding of exposure that facilitates better management of risk

To put this into a concrete example, consider why airports need flight and ground traffic control systems. There is the obvious answer: to know where airplanes, vehicles and people are, both in the air and on the ground, in order to safely and efficiently stage their movement. Managing traffic at an airport requires context, such as the current and forecasted weather, to achieve the best possible safety and efficiency. Even a seemingly simple matter such as a broken jetway has many consequences that affect the context of ground control, fueling, cabin cleaning, luggage, passenger relocation, food service, and more. Now, imagine the complexity of handling a crisis such as an airplane needing to make an unplanned emergency landing.

Managing an operation with as much complexity, interdependencies and consequences as an airport requires the staff in the operations control center to have a live, real-time, big-picture view of everything that is happening and, ideally, also what is most likely to happen. As you surely recognize, keeping track of so much fast-changing information in a person’s head alone is impossible and prone to errors and omissions.

Clearly having as much relevant and easily comprehensible information as possible provides the context that we naturally seek to guide our decisions and actions. In a follow-on blog I will explain why and how various technologies such as flexible data access, analytics, machine learning, artificial intelligence and visualization should be seamlessly integrated to create and deliver situational intelligence that is truly actionable.

 

(Image courtesy of Flickr)


Visualizing Mobile Assets: Where Are My Bots?


It’s ironic that many of us struggle to find our car keys in the morning, yet we’re also launching billions of mobile and increasingly autonomous devices into the world to report on conditions, track activities, and even perform tasks for us. If we can’t find our dumb keys, how will we keep track of stuff that’s smart and moves?

Let’s pretend it is 2018 and start-up company Entrebot sees an opportunity to further disrupt the gig economy. The company has landed funding to purchase 25 autonomous robots and program them to pick up and deliver small items around the city of San Francisco. They’ll be competing head-to-head with Task Rabbit, Uber Food, Amazon Drone, steampunk bike messengers and other players in the personal service sector.

Entrebot robots run on electricity and charge themselves at ports throughout the city, based on time, location, and predicted labor demand.

Entrebot faces unique challenges each day:

  • Deterrence of bots from their routes via harassment, vandalism and bot-napping
  • Rerouting the bots when the unexpected occurs

Deterrence

Even in 2018, some people have a real problem with bots. It is a widely known fact that humans are afraid of robots, and some even take out life insurance policies against robot attack.

Dislike of bots motivates people to harass, vandalize, rob and steal them. These actions lead to increased wait times for deliveries, losses from customer claims when items are damaged or stolen, and volatile public perception of robots’ impact on jobs and city life.

With so many calls, tweets and emails flooding Entrebot, the company decides to get proactive about getting people’s stuff to them.

Rerouting

Entrebot’s customer service department decides to hire a Visualizer, a person whose job is to see visual correlations between peak demand, robot location and probabilities of incidents. The Visualizer works with a categorized queue of past, present and predicted losses and must triage it all to maximize the customer experience.

The Visualizer uses Augmented Reality to do their work, virtually moving through and manipulating holographic projections of buildings, people, vehicles, objects and the variable rates of change between all of them. This is one way the world may keep track of mobile objects in the near future.

Man and computer would work together to make robot delivery more efficient, effective and satisfying.

Augmented reality interface for tracking mobile assets

Analytics presents the Visualizer with possible actions when situations become too complex to analyze automatically or require human input; the Visualizer then makes choices among the options presented.

The computer can process large amounts of data but is not situated physically in the real world and may not be aware of certain factors impacting the business.

At times, deliveries will need to be rescheduled. Analytics can certainly recalculate routes and times, but the business rules call for human approval of rescheduled deliveries.

Bots need to keep running during an area-wide power outage to meet obligations and avoid backlog. Working with the analytics, the Visualizer can reprioritize the bots’ time, location and routes based on remaining battery charge and distance from a backup power source, rather than the typical distribution schema optimized solely for time.
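A toy sketch of how that outage reprioritization might look in code. All names, fields, and the scoring rule here are hypothetical; a real system would solve a routing optimization problem, not a simple sort.

```python
def prioritize_for_outage(bots, km_per_pct=0.5):
    """Rank bots by how little margin they have to reach backup power:
    estimated range (battery percentage times range per percent) minus
    distance to the nearest backup source. Smallest margin first."""
    def margin(bot):
        return bot["battery_pct"] * km_per_pct - bot["km_to_backup"]
    return sorted(bots, key=margin)

# Hypothetical fleet snapshot during an outage.
fleet = [
    {"id": "bot-1", "battery_pct": 80, "km_to_backup": 2.0},
    {"id": "bot-2", "battery_pct": 10, "km_to_backup": 4.5},
    {"id": "bot-3", "battery_pct": 30, "km_to_backup": 1.0},
]
print([b["id"] for b in prioritize_for_outage(fleet)])
# -> ['bot-2', 'bot-3', 'bot-1']
```

The bot closest to running out of charge before reaching backup power surfaces first, which is exactly the inversion of a schedule optimized solely for delivery time.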

The Visualizer could really help when a bot gets kidnapped, tracing its location through the city and working with law enforcement or private investigation teams for asset recovery.

Perhaps you think this sounds more like science fiction than a viable business model in the next two years. I’d bet against that, given the trajectory of autonomous vehicle legislation tracked on Stanford’s Cyber Law Wiki, which shows several states already moving in that direction.

(Image: status of legislation concerning autonomous vehicle operation)

How long will it be before driverless cars are paired with technology like Uber? Just look at Spot, a robotic dog by Boston Dynamics that bears an uncanny similarity to the mechanical hound in Fahrenheit 451, Ray Bradbury’s dystopian novel.

Augmented Reality is a great technology for real world applications such as “Where are my robots?” (or any other connected item). It will be fun to see the technology and software offering evolve into brilliant new solutions to tomorrow’s difficult questions from start-ups all over the world.

 
