Augmented Reality For The Enterprise: A Use Case In Electrical Substation Field Service



Augmented reality can make a real impact on field service work in almost any industry. For a specific example, let’s look at a field technician visiting a problematic transformer at a substation.

Currently, field technicians download test plans and view them on a laptop or iPad. They have to remain near their vehicle while they reason spatially about electrical circuits and their locations. They also have to hold some of what they have learned and deduced in memory, consuming more of the mental bandwidth that supports memory, planning and spatial reasoning or navigation. This forces them to continually reference back and forth between the computer screen, supporting documents, tools and the work site. Many of the artifacts they use sit at angles and distances that force them to turn away, interrupting their reasoning just as they fix their eyes on the object they need for the work. They are, in effect, constantly switching between cognitive tasks.

Below is a screen grab from a video of one technician conducting such procedures.

[Video still: a field technician conducting test procedures]

The problem

Field technicians show up to substations and other work sites without knowing how to navigate the site. They also need to find the proper tools and documentation for completing maintenance, tests and repairs to assets.

Their attention continually cycles between consulting site plans, asking site staff where assets are located, monitoring asset performance and completing their task checklist. Their hands are likely juggling multiple items that they need to set down in particular locations and keep track of in order to do the job.

Solution


With augmented reality, field service workers don glasses at the job site. The glasses give them a map of the site they are visiting. Locations of tools and supplies are flagged or highlighted. Digital documents needed to complete a task appear in their field of vision. They can call up any chart for an asset simply by shifting their attention to that asset. Field workers no longer need to memorize details or fumble with tools and documentation simultaneously. They can just focus on performing the work.

Here’s an example of how a substation needing maintenance might appear with augmented reality.

[Mock-up: a substation needing maintenance, as seen through augmented reality]

  1. The menu helps decrease the total space filled by UI elements. Only the current UI element is active; the others appear dimmed until workers turn their gaze toward them or access them by hand or voice.
  2. The checklist helps the user complete their work.
  3. The radar helps them navigate the station. The orange square shows the location of the nitrogen they will need.
  4. A capacity indicator for the transformer cylinder sits off in the distance. Relative size is used to indicate importance: it isn’t up close because it isn’t urgent, and spatial relationship is the primary representation in use.
  5. The chart shows year-over-year performance for the transformer. The orange line shows the current year’s performance, which fluctuates outside its expected range.
  6. The problematic area of the transformer is highlighted to draw the user’s attention. (A sketch of how these elements might be configured in software follows this list.)
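
None of this requires exotic software. Here is a minimal Python sketch of how the six overlay elements above might be declared; every name, field and value is hypothetical, not taken from any real AR SDK.

# Hypothetical overlay configuration for the substation scene above.
AR_OVERLAY = [
    {"id": "menu", "type": "menu", "state": "dimmed",                # 1. wakes on demand
     "activation": ["gaze", "hand", "voice"]},
    {"id": "checklist", "type": "task_list", "anchor": "heads_up"},  # 2. work steps
    {"id": "radar", "type": "minimap", "anchor": "heads_up",         # 3. navigation
     "flags": [{"label": "nitrogen", "color": "orange"}]},
    {"id": "capacity", "type": "gauge", "anchor": "world",           # 4. distant gauge
     "scale_with_distance": True},
    {"id": "history", "type": "chart", "anchor": "world",            # 5. year-over-year chart
     "highlight": "current_year"},
    {"id": "fault", "type": "highlight", "anchor": "world",          # 6. problem area
     "color": "orange"},
]

def on_gaze(target_id):
    """Wake the element the worker looks at; dim the rest to free visual space."""
    for element in AR_OVERLAY:
        element["state"] = "active" if element["id"] == target_id else "dimmed"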

This is not science fiction. It is reality. All the technologies and features listed in this use case are completely feasible with many of the headsets coming to market in 2016-2017. One example I love is the Meta headset, which has gesture tracking among other awesome features.


Could ‘Pokémon Go’ Inspire Enterprise Productivity?



The world is going crazy for Pokémon Go. Nintendo’s stock value is making huge gains while masses of people are out hunting little ‘holographic’ critters. The technology isn’t new; Yelp has had similar functionality, called Monocle, for four years. Yet Pokémon Go has made a ginormous splash. How? It used the technology to solve a real pain point, as its profitability shows, so I won’t try to validate that product vision here. Instead I’d like to answer another question: “How can Augmented Reality help the enterprise on the same scale as the B2C market?”

The first part of the answer is that the motive to introduce AR to the enterprise should not be about making a lot of money but rather helping a lot of brilliant people attain more. Guy Kawasaki says, “The genesis of great companies is answering simple questions that change the world, not the desire to become rich.” The same applies within an organization eyeing new product offerings.

The second part of the answer is more involved, so first I’ll offer proof that Augmented Reality indeed helps brilliant people, then dive into a use case, and then highlight some good User Experience design principles, stemming from neuroscience, that will make the use case come to life.

Augmented Reality (AR) is a technology that registers digital content onto the physical world around us. A stop light could be considered primitive AR. AR on mobile devices isn’t new, but neither was the tablet before Apple made mountains of cash by designing it the right way. After Google Glass, there was a healthy dose of skepticism that everyday people would enjoy using Augmented Reality. Many assumed that if people didn’t want the information hands-free on their glasses, they certainly wouldn’t hold up a phone to view the same information they hadn’t found useful.

Even as AR sits in its technological infancy, market guru Greg Babb explains how Augmented Reality reduced errors and time to complete tasks in a wing assembly study presented by Paul Davies in conjunction with Iowa State University and Boeing.

[Chart: wing assembly completion times by method, from the Boeing/Iowa State study]

The chart above shows that AR-guided wing assembly took significantly less time than traditional methods of assembly. There tends to be a major drive in enterprise AR use cases to build applications for field crews and those assembling things. Meta CEO Meron Gribetz is on record saying he expects to throw away all the monitors in his office by next year and work with AR headsets alone. This is compelling because Meta consists of developers, designers and scientists; that means knowledge workers would be using AR at their desks. This is either crazy or prophetic.

So what would an enterprise application look like? Well, let’s look at it through the lens of situational intelligence. Let’s consider the following use case:

Wind Turbine Use Case

A scientist at an energy company needs to run some prescriptive analytics for wind turbines. Government compliance regulations have just been reformed, and costly repairs and updates to the machinery have to be implemented at an accelerated pace. Heavy fines will be instituted for safety violations. Our scientist wants to use Matlab to run simulations on existing wind turbines to predict which turbines have a greater risk of breaking down, overheating or malfunctioning. He thinks he can get more yearly longevity out of the gearboxes within certain confidence levels. (A toy version of such a simulation follows the walkthrough below.)

  • He first speaks out loud, saying, “Show me wind farms in central California.”
  • He sees a 3D map of several wind farms that he spins with his hands
  • The length of time he gazes at a certain region makes it slowly expand
  • He sees temperature on 4 wind farms with a high failure prediction
  • He increases the severity of wind velocity changes in his data set and sees gearbox 4 fail
  • His gaze on the fourth gearbox chart line causes a line to appear animating towards the holographic turbine
  • He picks up the turbine, swipes the outer shell away and sees the motor spinning
  • He dictates notes about the turbine and recommends replacing the gearbox sooner than others
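
Here is the toy simulation promised above, sketched in Python rather than Matlab. The load model, fatigue values and failure threshold are invented for illustration; the point is the shape of the analysis: sweep wind conditions, count failures, compare gearboxes.

import numpy as np

rng = np.random.default_rng(42)

def gearbox_failure_prob(mean_wind, wind_std, fatigue, n_trials=10_000):
    """Crude Monte Carlo: a gearbox 'fails' in a trial when the simulated
    load from a wind gust exceeds its fatigue-derated limit (toy model)."""
    gusts = rng.normal(mean_wind, wind_std, n_trials)
    load = 0.5 * gusts ** 2          # toy load, proportional to wind energy
    limit = 250.0 * (1.0 - fatigue)  # invented fatigue-derated load limit
    return float(np.mean(load > limit))

# Four hypothetical gearboxes with increasing accumulated fatigue.
for i, fatigue in enumerate([0.10, 0.15, 0.20, 0.35], start=1):
    calm = gearbox_failure_prob(mean_wind=12, wind_std=3, fatigue=fatigue)
    harsh = gearbox_failure_prob(mean_wind=12, wind_std=6, fatigue=fatigue)
    print(f"gearbox {i}: p(fail) {calm:.3f} -> {harsh:.3f} under harsher wind swings")

Under the harsher wind-variance setting, the most fatigued gearbox (number 4) is the one whose failure probability jumps, which is exactly the behavior the walkthrough describes.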

Outcome

Our scientist is able to work faster and more fluidly, without opening various files and programs with a mouse. He has more visual real estate and can now work with depth rather than just the two flat dimensions of a screen. He is more productive, which makes his company more money.

Neural Interface Design

Pictorial cues are an important aspect of good user experience; they make a scene more realistic in AR. Some of these include:

  • Occlusion – One object blocking part of another
  • Relative Size – Equally sized objects occupy less of our field of vision as their distance from us increases
  • Shadows – Objects that cast shadows seem more real
  • Accretion – Aspects of an object come into view as a user moves through the physical world

Another aspect to design for in AR is binocular disparity, as it assists with depth perception. Studies show that neurons in our visual cortex fire optimally when a stimulus presents a specific amount of disparity. That is why you see one wind turbine offset to the right of another in the wireframe.
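
Both cues fall out of simple pinhole geometry, which is what an AR renderer computes for each eye. A minimal sketch; the focal length and interpupillary distance below are illustrative values:

def projected_size_px(real_size_m, distance_m, focal_px=1000):
    """Relative size: the same object covers fewer pixels as it recedes
    (pinhole projection)."""
    return focal_px * real_size_m / distance_m

def disparity_px(distance_m, ipd_m=0.063, focal_px=1000):
    """Binocular disparity: horizontal offset between the left- and
    right-eye images of a point; larger disparity reads as nearer."""
    return focal_px * ipd_m / distance_m

for d in (2.0, 5.0, 10.0):
    print(f"{d:>4} m: 1 m object -> {projected_size_px(1.0, d):.0f} px wide, "
          f"disparity {disparity_px(d):.1f} px")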

So maybe you won’t get to go hunting Pokémon at work, but if your company is smart enough to outfit you with a headset, you will definitely feel like a Jedi when manipulating the world around you with your senses. You will love sharing this world with your co-workers. Augmented Reality unleashes your imagination. The next couple of years promise some exciting times for Reality Computing!

Special thanks to JulianHzg for making a great wind turbine in Blender that I recolored and used in my wireframe.



Visualizing Mobile Assets: Where Are My Bots?


It’s ironic that many of us struggle to find our car keys in the morning, yet we’re also launching billions of mobile and increasingly autonomous devices into the world to report on conditions, track activities, and even perform tasks for us. If we can’t find our dumb keys, how will we keep track of stuff that’s smart and moves?

Let’s pretend it is 2018 and start-up company Entrebot sees an opportunity to further disrupt the gig economy. The company has landed funding to purchase 25 autonomous robots and program them to pick up and deliver small items around the city of San Francisco. They’ll be competing head-to-head with Task Rabbit, Uber Food, Amazon Drone, steampunk bike messengers and other players in the personal service sector.

Entrebot robots run on electricity and charge themselves at ports throughout the city, based on time, location, and predicted labor demand.

Entrebot faces unique challenges each day:

  • Deterrence of bots from their routes via harassment, vandalism and bot-napping
  • Rerouting the bots when the unexpected occurs

Deterrence

Even in 2018, some people have a real problem with bots. It is a widely known fact that humans are afraid of robots; many people take out life insurance policies against robot attack.

Dislike of bots motivates people to harass, rob and vandalize them. These actions lead to increased wait times for deliveries, losses from customer claims when items are damaged or stolen, and volatile public perception of the robots’ impact on jobs and city life.

With so many calls, tweets and emails flooding Entrebot, the company decides to get proactive about getting people’s stuff to them.

Rerouting

Entrebot’s customer service department decides to hire a Visualizer, a person whose job is to see visual correlations between peak demand, robot location and the probability of incidents. The Visualizer works with a categorized queue of past, present and predicted losses and must triage it all to maximize the customer experience.

The Visualizer uses Augmented Reality to do their work, virtually moving through and manipulating holographic projections of buildings, people, vehicles, objects and the variable rates of change between all of them. This is one way the world may keep track of mobile objects in the near future.

Man and computer would work together to make robot delivery more efficient, effective and satisfying.

Augmented reality interface for tracking mobile assets

Analytics presents the Visualizer with possible actions when situations become too complex to analyze automatically or require human input; the Visualizer makes choices among the options presented.

The computer can process large amounts of data but is not situated physically in the real world and may not be aware of certain factors impacting the business.

At times, deliveries will need to be rescheduled. Analytics can certainly recalculate routes and times, but the business rules call for human approval of rescheduled deliveries.

Bots need to keep running during an area-wide power outage to meet obligations and avoid backlog. Working with the analytics, the Visualizer can reprioritize the bots’ time, location and routes based on existing battery charge and distance from a backup power source, rather than the typical distribution schema optimized solely for time.
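
A toy version of that reprioritization might score each bot on remaining charge and proximity to backup power. The field names and weights below are invented for illustration:

def outage_priority(bot, w_charge=0.6, w_distance=0.4):
    """Score a bot for dispatch during an outage: favor bots with charge
    to spare and a short run to a backup power source (toy weighting)."""
    charge_term = bot["battery_pct"] / 100.0
    distance_term = 1.0 / (1.0 + bot["km_to_backup_power"])
    return w_charge * charge_term + w_distance * distance_term

fleet = [
    {"id": "bot-07", "battery_pct": 82, "km_to_backup_power": 4.5},
    {"id": "bot-12", "battery_pct": 35, "km_to_backup_power": 0.8},
    {"id": "bot-19", "battery_pct": 60, "km_to_backup_power": 2.1},
]

# Dispatch order during the outage, highest score first.
for bot in sorted(fleet, key=outage_priority, reverse=True):
    print(bot["id"], round(outage_priority(bot), 2))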

The Visualizer could really help when a bot gets kidnapped, tracing its location through the city and working with law enforcement or private investigation teams for asset recovery.

Perhaps you think this sounds more like science fiction than a viable business model for the next two years. I’d bet against that, given the trajectory of autonomous vehicle legislation tracked on Stanford’s Cyber Law Wiki, which shows several states already moving in that direction.

Status of legislation concerning autonomous vehicle operation

How long will it be before driverless cars are paired with technology like Uber? Just look at Spot, a robotic dog by Boston Dynamics that bears an uncanny similarity to the mechanical hound in Fahrenheit 451, Ray Bradbury’s dystopian novel.

Augmented Reality is a great technology for real-world applications such as “Where are my robots?” (or any other connected item). It will be fun to watch the technology and software offerings from start-ups all over the world evolve into brilliant new solutions to tomorrow’s difficult questions.



Prescriptive Analytics, meet Contextual Augmented Reality



How should man and machine work together? Should they inform one another from our embodied and digital perspectives? Should machines be pro-active or reactive? Do we query them or can they query us? If so, how?

Check out this Augmented Reality prototype.

The prototype explores a prescriptive operating system and was inspired by my team at Futures Design Lab in San Francisco. We hatched an idea to use artificial intelligence and reality computing to help humanity. This post builds on that idea to imagine how we might work interactively with machines, or distributed personal computers, via Augmented Reality.

What if your AR headset knew how you approach problems? What if it learned which applications you use at different parts of the day, in relation to the inputs and outputs on your communication channels and schedule? For example, oftentimes the work I do is distributed across a Word Processor, Spreadsheet, Calendar, Design Tool, Email, Chat Application, Data Repository, and on and on. Wouldn’t it be great if I didn’t have to open applications at all, but instead the parts of applications I need worked harmoniously and independently of me?

Today let’s consider a hypothetical new way to work with a fun operating system concept we will call YOU. YOU could be a contextual Augmented Reality interface. It might provide the options you want based on your own behavior. Beyond that, it may analyze and prescribe or recommend to you decision options to optimize how you manage your tasks.

COMPUTATION AT YOUR FINGERTIPS


When you put on your Augmented Reality headset, YOU would be with you the moment you move your hand. Using advanced computer vision algorithms, it positions UI elements at your fingertips for the actions you take most frequently at that time of day, independent of what device you are on. No need to pick up a phone or log in to your PC; it is all right there in front of you.
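
Under the hood, a first cut at “actions you take most frequently at that time of day” could be as simple as counting a usage log by hour. A minimal sketch, with a made-up log format:

from collections import Counter
from datetime import datetime

# Hypothetical usage log: (hour of day, action) pairs mined from past behavior.
usage_log = [
    (9, "open_email"), (9, "open_calendar"), (9, "open_email"),
    (14, "open_design_tool"), (14, "open_chat"), (9, "open_email"),
]

def fingertip_actions(log, now=None, top_n=3):
    """Rank actions by how often the user performs them at the current hour,
    so the most likely ones can be pinned to the fingertips."""
    hour = (now or datetime.now()).hour
    counts = Counter(action for h, action in log if h == hour)
    return [action for action, _ in counts.most_common(top_n)]

print(fingertip_actions(usage_log, now=datetime(2016, 8, 9, 9)))
# -> ['open_email', 'open_calendar']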

WORKFLOWS BASED ON YOUR CALENDAR


You look at your Calendar by touching it with your other hand. You don’t need to remember how to activate it, because YOU learns your actions. In other words, your interface is performing multivariate testing on your behavior to see which schema you are biologically and emotionally predisposed to. Once you activate the calendar, you see your schedule populate. You see that you have Satellite Analysis to do, because you work for NASA.

RAISING PRESCRIPTIVE ABILITIES


This is where a new level of options gets presented to you. YOU knows you look at many charts and graphs. YOU interfaces with a satellite reporting system for real-time updates on all of the 1,100 satellites in orbit at any given time. YOU would ideally present you with a few different charts and applications to select from, but let’s assume that step happened already. You chose a treemap depicting the statuses of aging satellites running out of propellant which, if not refueled, could descend back into the Earth’s atmosphere and burn up. Of all the satellites you could examine, YOU has selected one in particular for review by highlighting it in bright green.

THE WORLD IN YOUR HANDS


You touch this satellite and a new modeling application pops up showing you the relative angular velocity of a healthy satellite with enough propellant. From here you speak to YOU and tell it to model various outcomes.

SO WHAT?

Prescriptive computation is happening all around us but it is subtle. You can notice it in Google Maps or Waze when it offers you a faster route. Enterprise Analytics companies are integrating prescriptive features into their products.

Computation is now transcending all the media we experience it in. While YOU is one vision of the future, it’s clear the current computational paradigm is rapidly changing. The ways you interact with your current desktop and mobile devices are going the way of the typewriter and pager. We all have a hand in shaping how we interact with data tomorrow. Let’s imagine a world where we might not have to sit in a chair for 8 hours a day to do groundbreaking work.



Augmented Reality for Water Utilities: La Forge into the Future!


Let’s imagine for a moment a world where people use Augmented Reality. Pull up a bean bag chair and we will pretend we’re at PARC. First, we need a story. How about an earthquake in a big city? Hundreds of water pipes can break in a quake. Oakland, CA is a good example: its average water pipe is 80 years old, with some pipes dating back to the 1880s.

Enter John, a water utility superintendent in Oakland and ardent “Star Trek” fan, who is in the middle of a busy day in the field when the quake hits. The ground shakes, roads crack, bridges sway, and hundreds of John’s water pipes burst.

He drives a truck, so he can get creative in reaching the sites despite traffic congestion. Besides, he likes off-roading, and this is a great excuse to take government property off the pavement. He needs to decide quickly which pipes to repair first, but headquarters is without power and thus no help. Where should he direct his crews?

John pulls up his augmented reality app and accesses an interactive tree map to help solve his dispatching problem. Here is a mock-up of John’s view:

[Mock-up: John’s augmented reality treemap view]

“Wait,” you say, “treemaps in an earthquake?”

Yes! Treemaps are for real. They provide a fast, visual way to sort information into an easily scanned hierarchy. Scanning spreadsheets or tabular data is difficult and time consuming.

But will people use treemaps?

Ben Shneiderman, the inventor of the treemap, explores sales, product and even coffee-flavor treemaps in a brief paper outlining the effectiveness of this visualization. So we know they are useful in a range of scenarios.

Back in 2010, Marcos Weskamp made a news treemap that demonstrated the ease of scanning and filtering content. He is now Head of Design at Flipboard, the successful website and app that uses a related information architecture for content curation.

Still not sold? Even the big guys are loving treemaps.

Microsoft is adding treemaps to Excel 2016.

At Oracle, the advanced user interfaces division recently published a paper entitled “Enterprise Network Monitoring Using Treemaps.” Study participants using treemaps performed better and were faster than those using tables when:

  • Identifying or counting items
  • Comparing using one or more criteria
  • Doing advanced comparison
  • Performing open-ended analysis

Okay, now back to the story.

John is in his truck in the middle of an earthquake and doesn’t have time to crawl through pages of tabular data when there is so much commotion around him. He needs better, more intuitive tools to help him make fast, accurate decisions. Hence, interactive treemaps in augmented reality.

Augmented Reality is a great way to represent non-visible aspects of reality to support cognition during critical thinking. John can visually filter through the most significant water breakages to minimize the impact of the earthquake on his community.

John can quickly navigate the breakage alerts by population density, risk, electrical asset proximity and more. He filters his list by predicted water loss–Oakland’s in a drought and can’t afford to lose large amounts of water–and immediately dispatches crews to stem any further losses.
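
For the curious, a treemap like John’s takes only a few lines with today’s charting libraries. Here is a sketch using plotly; the breakage data is invented:

import pandas as pd
import plotly.express as px

# Invented breakage alerts; in reality these would stream in from sensors and crews.
alerts = pd.DataFrame({
    "district":   ["Downtown", "Downtown", "Fruitvale", "Rockridge", "West Oakland"],
    "pipe_id":    ["P-114", "P-208", "P-331", "P-402", "P-517"],
    "water_loss": [420, 150, 880, 95, 610],   # predicted gallons per minute
    "risk":       [0.7, 0.3, 0.9, 0.2, 0.8],  # modeled severity, 0 to 1
})

# Tile area = predicted water loss, tile color = risk: the biggest, darkest
# tiles are the breaks John dispatches crews to first.
fig = px.treemap(alerts, path=["district", "pipe_id"],
                 values="water_loss", color="risk",
                 color_continuous_scale="OrRd")
fig.show()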

But more importantly, John gets to live his “Star Trek” dream of working like Lieutenant Commander Geordi La Forge.


Is 2D Dead?


Does the rise of immersive, 3D visualization through gaming technology, virtual reality, and augmented reality mean that 2D displays are dead? After all, why click and scroll through a 2D map when you can virtually visit a 3D landscape?

Which view of the Brandenburg Gate do you find more engaging?

3D does have benefits over 2D. 3D depicts the height, volume, and contour of objects to give viewers a sense of spatial relationships between objects and a more nuanced understanding of the texture of objects than 2D can provide.

But 3D also presents challenges to displaying, understanding and manipulating data. When you add the Z-dimension, you increase the amount of spatial data available about an object. More data means more challenges in determining what data to present and how, and in avoiding overwhelming a person’s ability to take in, and make use of, more data points.

Also, when an object has volume, it takes up more visual space, making it more likely that nearby objects are obscured. And alas, we just haven’t gotten that good yet at displaying 3D information in ways that are smooth and feel natural to manipulate. The holodeck from “Star Trek” is a nice start, but we’re not there yet.

There’s also the challenge of managing level of detail versus height of viewing. When looking down on a city block from a simulated elevation of 100 feet, you can take in details about individual items. But if you move to a simulated viewing elevation of 10,000 feet, all those details become too much to display both from a comprehension point of view and from a software performance point of view.

Decisions need to be made about what information is important at 10,000 feet vs. 100 feet. Those decisions depend on who’s doing the looking. Making software that intelligently handles these changes in viewing elevation requires forethought and deep understanding of the user’s tasks.
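
In code, those decisions often reduce to a level-of-detail function keyed to viewing elevation. A minimal sketch, with invented thresholds and layer names:

def visible_layers(elevation_ft):
    """Pick which data layers to render for a simulated viewing elevation.
    Thresholds and layer names are illustrative only."""
    if elevation_ft <= 300:
        return ["asset labels", "live sensor readings", "open work orders"]
    if elevation_ft <= 2_000:
        return ["building footprints", "aggregated alerts"]
    return ["district boundaries", "regional heat map"]

for elevation in (100, 1_000, 10_000):
    print(f"{elevation:>6} ft -> {visible_layers(elevation)}")

Different users would want different layer sets at each elevation, which is why this mapping demands design forethought rather than a one-size-fits-all default.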

Designers can compromise by using 2.5D displays. These visualizations add perspective to a 2D display but don’t go all the way to three true dimensions.

[Screen capture: 2.5D synchrophasor visualization]

You get the sense of height, perhaps, but not of volume. This can be useful in making items stand out against a 2D surface; plus, it looks a little cooler and affords the ability to rotate displays in space to better understand the spatial relationships between objects.

As with many aspects of visualization, the choice between 2D, 2.5D, and 3D display comes down to design thinking. What task are users trying to perform, and what’s the best way to accommodate and accomplish that task? It all depends on your user’s point of view.

(Image courtesy of Flickr under Creative Commons license.)


The Science of Visualization: Receptor-driven Design for Augmented Reality


Color brings beauty to our eyes, whether from the wings of a monarch butterfly or the broad brush strokes of a Van Gogh painting. Color also allows us to assign meaning and organization to items. At some point, most people have to ask how they should use color, whether they are animating a cartoon character, painting an accent wall or, in my case, making a graphical user interface.

Here I will explain how I would go about using color for utilities-specific augmented reality applications.

The use of color rests on how our eyes and brains process light and detail. When selecting interface colors, I ask myself: What colors should I use, and how do I maximize readability and decrease distraction?

It helps to think about how the visual system processes color. In the eye, there are two types of receptors that process light: rods and cones.

[Diagram: rod and cone receptors in the eye]

Rods are bad for color but highly sensitive to light, which makes them great for detecting shape and motion in dim conditions. Cones are great for color and fine detail.

Color perception arises partly from the activity pattern of three types of retinal cones, each suited to a different wavelength of light: short, medium and long. These cones work in combination to send signals to our lateral geniculate nucleus and visual cortex that determine the color we perceive.

Your visual cortex processes most of its information from the red and green receptor cones gathered in a small indent at the back of your eye called the fovea, and more space in your cortex is devoted to processing red and green. What is the takeaway? Since blue receptors are scarce in your fovea, your brain works less to process blue. Furthermore, rods also respond to blue light, meaning even less energy is devoted to perceiving it.

Receptor-driven design

These variances in how we process light and color lead car designers to two opposing dashboard color philosophies: blue and red.

[Photos: blue and red car dashboards]

Red light mainly affects the cones, leaving the rods unsaturated, which preserves night vision. On the other hand, red light is processed in your fovea, which means more visual cortex resources go toward higher acuity. With blue dashboards, your cones aren’t asked for as much detail, so you use fewer visual cortex resources. The trade-off is that your rods are processing light from two sources, the road and your dashboard, and therefore are working harder.

Cortical magnification

Hold up just one finger and look at it: your brain increases magnification in your visual cortex, which uses more cones and fewer rods. Now look at all five fingers on your hand: your brain lowers magnification, which consumes fewer resources in your visual cortex and relies on fewer cones and more rods.

[Illustration: looking at one finger versus five]

Interestingly, if you hold up both hands in front of you, with all five fingers extended on the right and only the index finger on the left, your visual cortex activates far more, and dedicates more total volume, to that single finger than to the right hand with all five fingers extended.

So, how does any of this apply to Augmented Reality? Let’s take a look.

[Mock-up: augmented reality interface for field assessment of linear assets]

Decreasing cortical magnification and acuity

Here’s an interface that utility workers might use to assess linear assets in the field. The colors are pleasing, modern, unobtrusive–but that’s not the point of the colors. The color design helps field users visualize information more effectively and effortlessly by drawing attention to only what matters at present.

Remember that rods are most sensitive to light and dark changes, shape and movement, and place the smallest demand on the visual cortex. So let’s put every UI element we can into the periphery, unless it represents the most important data at the current point in time.

Contextual activation of receptors

Let’s make all our buttons and elements blue or white where we can, so they are less taxing on our visual systems. We use green and red very sparingly, since they fall right in our fovea. Red alerts us to where the problem was reported, via data uploaded to our system. Green directs our attention to the start and end of where we think our linear asset is experiencing trouble. We can drag, drop and slide the placemarks around all we want to better approximate and update the data source in real time, allowing asset planners to better diagnose corrective steps to take.
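
Pulled together, these receptor-driven rules could live in a small palette table. The hex values below are illustrative, not from a real product:

# Receptor-driven palette sketch for the AR interface above (illustrative values).
PALETTE = {
    "chrome":       "#FFFFFF",  # white: cheap to process, neutral
    "buttons":      "#4A90D9",  # blue: partly handled by rods, easy on the fovea
    "peripheral":   "#2C3E50",  # dark and low contrast: recedes to the periphery
    "problem":      "#D0021B",  # red: used sparingly, lands in the fovea, demands acuity
    "trouble_ends": "#7ED321",  # green: used sparingly, marks the start/end placemarks
}

def color_for(role):
    """Default to the low-cost blue unless an element truly needs foveal attention."""
    return PALETTE.get(role, PALETTE["buttons"])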

Now that you understand more about how your brain works with light and detail, you can start to notice how products and programs around you are using color to do more than just look pretty.



Analytics On Display At DistribuTECH 2015


Analytics are on display—literally—at DistribuTECH 2015, the North American utility conference running this week in San Diego. At the Space-Time Insight booth, I walked through a virtual reality inspection tour of a substation, with asset analytics displayed next to malfunctioning equipment for complete assessment and troubleshooting.

The roster of exhibitors at DistribuTECH includes nearly 80 companies claiming ‘Data Analytics’ as a primary descriptor of their products and services. Exhibitors can claim only a handful of descriptors, so their “vote” for ‘Data Analytics’ demonstrates industry interest in the topic.

Analytic offerings come from several different sources:

  • Dedicated analytics companies such as Space-Time Insight
  • IT companies such as Intel and Oracle
  • Traditional utilities vendors that offer analytics such as Elster and Siemens
  • Consultants such as Accenture and CapGemini

And of course, roaming the exhibition hall turns up permutations resulting from partnerships, OEM agreements and other forms of collaboration between these sources.

This plethora of analytics companies suggests that the utility sector is finally recognizing the growing challenge and opportunity of big data and the Internet of Things.

But it seems like many analytics solutions are still offered as siloed systems, focusing on one vendor’s equipment, one asset class, or one part of the utility value chain.

Some of the dedicated analytics companies, such as Space-Time Insight, and some of the consultants, such as Accenture, are grasping the opportunity of situational intelligence to span the analytic silos within utilities to create actionable insight to improve reliability, safety, and affordability.

Time will tell whether the trend toward insight across silos, as well as within them, grows to keep pace with big data and the Internet of Things in the utility sector.


Virtual Reality and Visual Analytics: Context for the Internet of Things


A related post explains how virtual reality revolutionizes visual analytics by providing a virtual presence combining data and the work environment. But a virtual presence is just one advantage of virtual reality.

Consider other, more analytical advantages of a 3D environment:

  • More space for information: because virtual reality offers a 3D immersive experience, there’s practically unlimited space for exploring your data. Compare this with the desktop metaphor, where you get more space by either adding more monitors or opening additional windows and moving between them.
  • A third dimension for information: because virtual reality is 3D, it offers an additional axis for data display and manipulation. In virtual reality, you can have a cube of data; on the desktop, you can have a table or spreadsheet. Many datasets are inherently three-dimensional, for instance the make, model, and year of cars in a fleet of vehicles (see the sketch after this list).
  • Context for data from the Internet of Things: An explosion of devices are becoming connected to the Internet and generating data: thermostats, appliances, cars and more. Having an immersive environment that replicates the physical one makes it easier to place all that data in a familiar context, making correlation and analysis much more intuitive.
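
To make the fleet example concrete, here is a toy data cube in pandas: three natural axes that a 2D screen forces you to flatten, but that a 3D environment could display directly. The records are invented:

import pandas as pd

# Toy fleet records: make, model and year form three natural axes.
fleet = pd.DataFrame({
    "make":  ["Ford", "Ford", "Tesla", "Tesla", "Toyota"],
    "model": ["F-150", "F-150", "Model S", "Model 3", "Prius"],
    "year":  [2012, 2014, 2015, 2017, 2014],
    "count": [8, 5, 2, 6, 4],
})

# A cube of data: one cell per (make, model, year) combination.
cube = fleet.set_index(["make", "model", "year"])["count"]

# On a desktop we flatten one axis to view the cube as a 2D table.
print(cube.unstack("year", fill_value=0))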

Yes, you’d look silly in your office tomorrow morning, working at your desk with an Oculus headset covering half your head. Just remember that the original computer mouse was a block of wood with sensors, and computer monitors used to require cathode ray tubes that occupied half your desk. It’s quite possible that virtual reality will have more impact on how we work than the mouse and the monitor.


Virtual Reality: A Revolution in Visual Analytics


A previous post asserts that visualization cannot be a commodity, because visualization leads to visual analytics that improve, simplify and speed data-driven decision making. Virtual reality is poised to move visual analytics, and many other aspects of how we work, beyond commodity and to the edges of science fiction.

Virtual reality is a computer generated, three-dimensional environment that people can interact with and explore. Oculus is one virtual reality company you may recognize. (In March 2014, Facebook announced that it would acquire Oculus for $2 billion, before the company had even shipped a consumer product.)

With virtual reality, nearly any environment, factual or fantastic, can be generated—including the familiar environs of our workaday world. This means virtual reality has the promise of helping workers resolve issues faster and more safely with less expense.

Virtual reality is changing the way we do our jobs and interact with the world.

For example, one environment that can be modeled in virtual reality is an electrical substation. That model can display IT, operational, and external data related to the substation. When an operator receives notice of a malfunction at the substation, he or she could perform a practical walk-through of the virtual substation, with actual data read from the real-world environment correlated, analyzed and visualized alongside it (situational intelligence). This allows easy, contextual inspection of the problem without the expense of rolling a service truck to the site just to identify the problem.

One company offering a glimpse into how virtual reality combined with visual analytics might look is Space-Time Insight, at the 2015 DistribuTECH event. They’ve offered a summary of what you can see if you stop by their booth.

Using the virtual walkthrough, operators can either identify how to remedy the malfunction without sending a crew, or fully diagnose the problem so that crews can repair the malfunction in a single trip. (This applies as well to mobile assets, such as planes, trains, and ships; a virtual environment allows an operator or technician to virtually board the vehicle while it is moving, to diagnose issues during operation.)

In dangerous environments, such as hazardous waste disposal sites and sites rendered unstable by natural disasters, virtual reality provides unprecedented safe access to inspect conditions and systems. By combining virtual reality with advanced data analytics and robotics, remote operators can inspect and repair locations and equipment without jeopardizing workers.

Until now, virtual reality has been the province of research labs, video gamers, and sci-fi writers. Industrial applications, such as visual analytics, will quickly move virtual reality into the mainstream.
