
Getting SaaS-y About Industrial Data Products

The benefits of software as a service (SaaS) are well documented, but they can be challenging to achieve in the emerging area of industrial data-driven products. A few guiding principles can help.

Ah, the sweet promise of design-driven SaaS products:

  • Intuitive and consistent user experience
  • Rapid onboarding with minimal setup and training
  • Ongoing feature enhancements
  • Best-in-class security
  • Consumption free of internal IT constraints, with infrastructure costs shared across customers

How does this match up to the reality of operating industrial installations with:

  • Complex, non-standard asset configurations
  • Variable operating processes
  • Variable regulatory and environmental contexts (e.g., a plant built to run in Siberia will require different physical tolerances and standard operating procedures than one built to operate in the Sahara)

In short, how do we reconcile the promise of a more "consumer-driven" software experience of simple, persona-driven workflows with the industrial world of infinite combinations and complexity?

Further, how do we do so with data products requiring models that reflect highly specific asset contexts in order to show meaningful insight? How can we avoid delivering what is effectively a custom science experiment every time we want to show value?

Well, as my Dad used to note, "No one said it was going to be easy." There are, however, some important principles that industrial data product managers, working with their data science colleagues, can commit to:

DRIVE CONFIGURATION INTO THE SOFTWARE TO SIMPLIFY THE DATA SCIENCE

Even as you’re trying to generalize models to accommodate broader sets of predictive behavior, recognize that you can simplify things by adding software parameters that reduce the complexity of what an algorithmic model needs to solve. For example, in a recent customer case applying machine learning to extract data from images, we simply asked, "Is it reasonable to expect that the customer would know the classification of the base image?" As the answer was yes, we supplied that classification as a model parameter and thereby simplified what the model had to detect.
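To make the idea concrete, here is a minimal sketch assuming a hypothetical image-extraction pipeline (the names extract_fields, EXTRACTORS, and the class-specific extractors are illustrative, not our actual API). Because the caller supplies the base-image classification as a parameter, the model only has to solve the narrower extraction task rather than also inferring the class:

    # Hypothetical illustration: the caller already knows the base-image
    # class, so we route to a class-specific extractor instead of asking
    # one model to infer the class AND extract the data.
    from typing import Callable, Dict

    def extract_from_pid(image_bytes: bytes) -> dict:
        # ... class-specific extraction logic for P&ID diagrams ...
        return {"type": "pid", "fields": {}}

    def extract_from_datasheet(image_bytes: bytes) -> dict:
        # ... class-specific extraction logic for equipment datasheets ...
        return {"type": "datasheet", "fields": {}}

    # The classification arrives as a software parameter, not a model output.
    EXTRACTORS: Dict[str, Callable[[bytes], dict]] = {
        "pid": extract_from_pid,
        "datasheet": extract_from_datasheet,
    }

    def extract_fields(image_bytes: bytes, base_image_class: str) -> dict:
        """Extract structured data, given a user-supplied image class."""
        try:
            extractor = EXTRACTORS[base_image_class]
        except KeyError:
            raise ValueError(f"Unknown base image class: {base_image_class!r}")
        return extractor(image_bytes)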

LEVERAGE MACHINE LEARNING FOR SCALE, NOT JUST INSIGHT

Too many "data" products are currently being built as science projects that don’t scale. This is because building a generalizable model that can both create insight on a single asset and then be applied to a population of like assets can itself be a near-impossible task. We have learned that a better question to ask is "How can we leverage machine learning to assist the machine learning?" A practical example is our emerging work on virtual flow meters (VFMs) for onshore wells, where we’re applying machine learning to the problem of initial setup and calibration, as well as ongoing recalibration, of VFM models.
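As a rough sketch of what "machine learning assisting the machine learning" can look like (the names and the simple linear model are illustrative assumptions, not our production VFM): a residual monitor watches a deployed virtual flow meter against occasional reference measurements, such as well tests, and triggers automatic recalibration when the error distribution drifts:

    import numpy as np

    class AutoCalibratingVFM:
        """Hypothetical virtual flow meter: a linear model mapping sensor
        readings (pressure, temperature, choke position, ...) to flow rate,
        plus a residual monitor that triggers recalibration on drift."""

        def __init__(self, drift_threshold: float = 2.0, window: int = 200):
            self.coef = None
            self.baseline_rmse = None
            self.drift_threshold = drift_threshold
            self.window = window
            self.residuals = []            # rolling window of recent errors

        def calibrate(self, X: np.ndarray, y: np.ndarray) -> None:
            # A least-squares fit stands in for the real calibration step.
            X1 = np.column_stack([X, np.ones(len(X))])
            self.coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
            pred = X1 @ self.coef
            self.baseline_rmse = float(np.sqrt(np.mean((y - pred) ** 2)))

        def predict(self, x: np.ndarray) -> float:
            return float(np.append(x, 1.0) @ self.coef)

        def observe(self, x: np.ndarray, y_measured: float,
                    recal_X: np.ndarray, recal_y: np.ndarray) -> None:
            """Compare a prediction to an occasional reference measurement
            (e.g., a well test) and recalibrate if residuals have drifted."""
            self.residuals.append(y_measured - self.predict(x))
            self.residuals = self.residuals[-self.window:]
            rmse = float(np.sqrt(np.mean(np.square(self.residuals))))
            if rmse > self.drift_threshold * self.baseline_rmse:
                self.calibrate(recal_X, recal_y)   # ML recalibrating the ML
                self.residuals.clear()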

BUILD FOR THE ASSET AND THEN LAYER CONTEXT ON TOP, BUT DON’T ASSUME THAT THE SUM OF THE PARTS EQUALS THE WHOLE

At Arundo, we work around a general construct of:

  • Single equipment
  • Multi-equipment
  • Process
  • Plant or installation

There are successive levels of complexity as you graduate to taking on each additional layer. Context explodes, as does the potential for different types of behavior. By adding layers of complexity over time, we have found it possible to build towards solving seemingly intractable problems.
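One way to picture this construct in code, with purely illustrative names: each layer composes the one below it, and context such as climate or regulatory regime is attached at the level where it applies rather than baked into every equipment model:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Equipment:                       # single equipment
        tag: str                           # e.g. "PUMP-101"
        model_params: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class EquipmentGroup:                  # multi-equipment
        name: str
        members: List[Equipment] = field(default_factory=list)

    @dataclass
    class Process:                         # a process built from groups
        name: str
        groups: List[EquipmentGroup] = field(default_factory=list)

    @dataclass
    class Installation:                    # plant or installation
        name: str
        processes: List[Process] = field(default_factory=list)
        context: Dict[str, str] = field(default_factory=dict)

    # Context layered on top, not baked into the equipment model:
    siberia_plant = Installation(
        name="north-field",
        context={"climate": "arctic", "regulatory_regime": "RU"},
    )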

At the same time, complex systems being what they are, simplifying top-down assumptions can be, well, simplifying. Ask yourself: are you solving for precision or accuracy? That is, does the operator need fine-grained, repeatable point estimates, or a directionally correct answer that beats the existing heuristic? Ask this especially if you’re bringing insight from large data sets to a problem that has previously been judged heuristically. The answer may give comfort that a simpler top-down model that doesn’t reflect all, or even the majority of, system elements can still bring benefits to an operator. Our recent work around a particular type of operating unit in refineries has shown clearly that we don’t need to understand the entirety of a process to materially and safely improve operator decisions and outcomes.
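A toy illustration of the distinction, with entirely synthetic numbers: a heuristic that is precise but biased versus a simple top-down model that is accurate but noisier. If the goal is better decisions than the heuristic, the less precise model can still be the right choice:

    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 100.0                      # the quantity an operator cares about

    # A heuristic that is precise (low scatter) but inaccurate (biased):
    heuristic = rng.normal(loc=90.0, scale=1.0, size=500)
    # A simple top-down model that is accurate (unbiased) but less precise:
    top_down = rng.normal(loc=100.0, scale=5.0, size=500)

    for name, est in [("heuristic", heuristic), ("top-down model", top_down)]:
        bias = est.mean() - true_value      # accuracy: distance from truth
        spread = est.std()                  # precision: repeatability
        print(f"{name:15s} bias={bias:+.1f}  spread={spread:.1f}")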

DON’T FORGET THE USER EXPERIENCE

It seems obvious to say that we live in an increasingly design-led world where business users’ expectations are shaped by consumer-grade design. Yet there are plenty of non-intuitive interfaces being built. While we’re not in the business of building complex transactional systems where users have hundreds of decisions to make, we are in the business of building complex decision support systems. Here the interface is critical in giving the user an intuitive grasp of what the data, or the equipment, is trying to tell them. We start with an aspiration of building a zero-learning-time interface, which essentially means we want a user to have an intuitive grasp of how to proceed after being walked through the basic features once. This means getting real user feedback early in the process and being prepared to scrap designs that are merely "okay."

WHAT’S YOUR EXPERIENCE?

What makes a great "SaaS-y" data product in your mind? What are some of the methods and mindsets you have adopted to be successful? How is this all made more complex by the challenges of operating in industrial environments?

Join the conversation; we’d love to hear your perspectives.