Editor’s Note: Seth Johnson, President of Powerside, is a member of our Executive Editorial Board, and as such has a unique perspective on changes in the power grid globally. While this article does reference Powerside’s offerings, they are used primarily to explain Data Indigestion and how to avoid it. The references to these offerings are therefore illustrative and deemed appropriate, making this a Featured Editorial Article.
Introduction
We certainly live in an age of excess information. That description also holds true for Power Quality (PQ) metering data collected on electrical power systems. There are application-specific systems, such as those used for SCADA (long a staple for utility grid operations centers), but in general, we have more data than we know what to do with. In essence, we have a form of Big Data without the Big Processing required to bring tangible results. It would be entirely accurate to call this Data Indigestion, as the processing system appears to be somewhat constipated. Call this crass, but that is indeed our present condition. We could call it Stranded Data, since indeed most of the data sits stranded in an electrical netherworld. However, Data Indigestion elicits a feeling “like,… you know,… it really needs to go somewhere,… and,… like,… it just keeps accumulating”. So, let’s stick with Data Indigestion.
Additionally, the meters collecting this data are installed at numerous locations and collect data in a largely isolated and unrelated manner. Since we are focusing on power distribution systems, this could range from a handful of meters to hundreds. Nevertheless, the meters are not communicating with each other, and it is rare for a metering solution provider to be given a network model that relates metering locations to grid points. And thus, in reality, these meters are collecting data in isolation, and those developing Big Processing are supposed to make sense of everything collected, bringing together this mostly unrelated volume of information.
Big Processing
Where are the Big Processing systems? They are certainly not ubiquitous, and they do not emerge quickly. So, what’s the hang-up? Eventually, with some investigation, one discovers the extensive engineering and data science effort required. This effort hinges on expert data review and pre-processing being seamlessly integrated with intelligent and thoughtful incorporation into a purposeful system for smart monitoring (one example of the many methods needed is shown in Figure 1). Now, of course, utility SCADA systems completely fulfill their real-time monitoring purpose because their meters have been carefully specified and incorporated into an entire monitoring system.
These systems are extensive and very costly, with engineered infrastructure, data stream handling, and final incorporation into a SCADA presentation for an operations center. However, power quality data has never been incorporated into such a financially lucrative environment. Consumers often complain about meter costs. The Big Processing we need appears to be dependent upon meters that may not have been designed with this type of scientific analysis in mind (i.e. simple metering and energy billing).
Combining such loosely specified equipment into a system to create knowledgeable feedback is a daunting task. And, if the volume of data is not already daunting enough, imagine the engineering and programming time needed to deal with the inconsistencies, high-speed vs. low-speed aggregations, the multiplicity of unrelated locations, and the lack of a network topology. Can you sense the size of the effort yet?

Implementing AI requires considerable engineering attention, not the type of attention generated through unrelated and individual research projects and papers.
AI’s Great Purpose
Today, we constantly hear statements made about using such data for “great” purposes and that it should be incorporated into AI to generate “great” answers and insights. But how can this truly be accomplished? What do we do? That is an excellent question, and several companies and institutions are working within their budgets and resources to tackle this extreme data processing task.
This leads to our first key observation: Implementing AI requires considerable engineering attention, not the type of attention generated through unrelated and individual research projects and papers. Such papers include thousands of ideas and approaches, with little central focus on accomplishing this Apollo-moon-landing-like mission. Research is good in its time. However, there is a time to research and a time to hammer out the details.
This brings us to our second key observation: Big Processing is hammer time. There needs to be a dedicated and forthright approach, and when companies pursue this with a likewise need to be profitable (which the moon landing definitely was not), progress can be limited and slow. However, let’s not lose hope, as some are hammering out the foundational elements with detailed engineering assessments that bring together many of the significant details needed for PQ data Big Processing. Much is required before AI is even considered.
Big Data and Big Processing
Now, the key issues with Big Data are Volume, Velocity, and Variety. This, unfortunately, is an overly simplistic three-V statement that encompasses thousands of man-hours of effort. Beyond the three Vs, power system Big Data has several additional factors that make Big Processing a formidable task:
- The volume is enormous if we include waveform captures, harmonics, and data from a host of related environmental sensors.
- The velocity varies, but more importantly, it is a continuous river of information spilling into an ever-increasing reservoir of data.
- The data variety may appear compartmentalized, but the variations of that data should not be underestimated: waveforms, RMS quantities, phase values, sequence values, harmonic values, phasors, real power, reactive power, cycle-by-cycle recording, 1-minute recording, 10-minute recording, etc.
- Most of the data exists in forms that supply little context, specifically, the electrical system context. Creative ways are needed to relate this data without a network context, as obtaining network data is a sensitive subject.
- Metering data is suspect in its quality, and in many cases, comes from devices that were not installed with an engineering and electrical network purpose in mind. Even good meters have issues.
- Much of the data is low resolution: lower-sample-rate converters, or demand data stored only once an hour or once a day.
- Data may only include basic electrical quantities. Although ubiquitous, such meters do not contain the additional information necessary for Big Processing.
The size of the effort is incredibly significant, and formidable engineering must be achieved before Big Processing can occur.
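To make the velocity and aggregation issues above concrete, here is a minimal sketch (in Python, with toy values; `aggregate_rms` is our own illustrative name, not any product’s API) of rolling cycle-by-cycle RMS values up into coarser intervals, in the style of IEC 61000-4-30 aggregation:

```python
import math

def aggregate_rms(cycle_rms, cycles_per_interval):
    """Roll cycle-by-cycle RMS values up into coarser intervals.

    Follows the IEC 61000-4-30 style of aggregation: the aggregate of
    RMS sub-values is the square root of the mean of their squares.
    """
    aggregates = []
    for start in range(0, len(cycle_rms), cycles_per_interval):
        window = cycle_rms[start:start + cycles_per_interval]
        if window:
            aggregates.append(math.sqrt(sum(v * v for v in window) / len(window)))
    return aggregates

# A 60 Hz system produces 36,000 cycle values per 10-minute interval
# per channel -- the "continuous river" described above. Toy data: ten
# cycles, aggregated five cycles at a time.
cycle_values = [120.0] * 7 + [118.0] * 3
print(aggregate_rms(cycle_values, 5))
```

Even this toy shows why resolution matters: once aggregated, the individual cycle values are gone, which is exactly the low-resolution problem noted in the bullets above.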
Well-trained and experienced engineers are essential for developing AI preprocessing and properly training AI engines.
Metering Approaches
Distributed metering systems align with one of two extremes:
- Closed Meter Support – A system using a set of specific meters, and only these meters are supported.
- Open Meter Support – A system supporting a variety of meters from any number of manufacturers through purpose-built driver Application Programming Interfaces (APIs).
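As a rough sketch of how the open approach typically works, the host system defines one driver interface and implements it once per supported meter family; the class names below are hypothetical illustrations, not any product’s actual API:

```python
from abc import ABC, abstractmethod

class MeterDriver(ABC):
    """One driver per supported meter family; the host system talks
    only to this interface, never to a vendor protocol directly."""

    @abstractmethod
    def poll_rms(self) -> dict[str, float]:
        """Return the latest per-phase RMS voltages, in volts."""

class HypotheticalMeterDriver(MeterDriver):
    # Stand-in for a vendor-specific implementation (Modbus, DNP3, etc.).
    def poll_rms(self) -> dict[str, float]:
        return {"Va": 120.1, "Vb": 119.8, "Vc": 120.3}

def collect(drivers: list[MeterDriver]) -> list[dict[str, float]]:
    # The collection layer is indifferent to which vendor sits behind
    # each driver -- this is what makes "open meter support" open.
    return [d.poll_rms() for d in drivers]

print(collect([HypotheticalMeterDriver()]))
```

A closed system collapses this layering: meter and software share one design, trading vendor flexibility for tighter integration.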
At Powerside, we utilize both avenues, so they serve as a good illustration here. Our line of PQube meters is handled explicitly by our cloud-based QubeScan software. And, through Electrotek Concepts, the on-premises, server-based PQView software handles over 50 different meters with various capabilities and data densities. Big Data, being an inclusive concept, can encompass both options. Which is better? This is similar to an Apple vs. Microsoft argument, which unfortunately becomes very subjective very quickly: QubeScan is akin to Apple, and PQView is akin to Microsoft. Many PQView users specifically want their existing meters to be supported, and that is a key and vital need for them. They want to utilize the metering infrastructure they have invested in, and some appreciate that they don’t need a specific meter, just a supported one. QubeScan users, on the other hand, typically want an integrated system in which the meters they use (PQubes in this case) are specifically designed for the cloud-based system. One can debate the issues between these approaches, which typically look like this (please note, this is not an advertisement, but for the authors, it is the simplest way to illustrate the two approaches):

Table 1. Feature comparison between open (PQView) and closed (QubeScan) meter support.

However, note that the table above does not include Big Processing options, at least not yet. This is a work in progress that holds promise for the next generation of electrical power systems. The instrument approach thus brings out our third key observation: The metering implementation will impact data pre-processing within Big Processing. Let’s not enumerate those details but take clear note of the added complexity.
The Present Condition and the AI Push
What is the present condition in the industry? It appears that, for most institutions using power system monitoring, little Big Processing is happening. Adverse system conditions are handled mainly by staff on an as-needed basis. This means that our key path to detecting system anomalies and mitigating adverse conditions mostly occurs after post-apocalyptic grid events garner attention. Some automatic report generation also accompanies this, but for the most part, “normal” conditions are ignored. Are conditions really that normal, and are we missing critical trends or events? The concept being promoted in AI research papers and parts of the industry is that our PQ data can yield greater insights, predict failures, and detect abnormal conditions that have gone undetected or ignored. We also believe this to be true. However, we (and hopefully you as well) view Big Processing with a sense of practicality and relevance, rather than with a magical sense where an all-knowing AI engine delivers revelatory insights that we, as poor humans, could not see for ourselves. We must note that a truly smart AI engine is difficult to achieve, especially since humans train AI systems with their imperfections and corporate directives.
And this brings forth our fourth key observation: Well-trained and experienced engineers are essential for developing AI preprocessing and properly training AI engines. These engineers, working with expert data scientists, form the pipeline between the ever-accumulating reservoir of data and Big Processing. This delivers confident event categorization and AI implementations that are well-trained. Companies that involve less-experienced staff who mislabel power conditions will generate AI systems that produce erroneous results. Yes, the principle of ‘garbage in, garbage out’ still holds true in the AI age.
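As a toy illustration of what correct labeling means in this context, an event categorizer can start from the IEEE 1159 magnitude bands; the sketch below is deliberately simplified (real categorization also weighs event duration), and the function name is our own:

```python
def label_event(rms_pu: float) -> str:
    """Label an RMS magnitude (in per-unit) using simplified IEEE 1159
    magnitude bands. Duration, which IEEE 1159 also uses to distinguish
    instantaneous/momentary/temporary events, is omitted here."""
    if rms_pu < 0.1:
        return "interruption"   # below 0.1 pu
    if rms_pu < 0.9:
        return "sag"            # 0.1 to 0.9 pu
    if rms_pu <= 1.1:
        return "normal"
    return "swell"              # above 1.1 pu

print([label_event(v) for v in (0.05, 0.7, 1.0, 1.3)])
```

An engineer who routinely mislabels, say, transformer-energization sags as interruptions will bake that error into the training set, and the resulting AI will repeat it at scale: garbage in, garbage out.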

This fourth observation may seem trivial and obvious; however, finding knowledgeable engineers in our present day is becoming increasingly difficult. A lack of mentorship and a disconnect between generations is yielding a newer breed of engineers with less experience and a decreased desire to understand the deeper aspects of power system modeling and analysis, which are essential skills for evaluating power system conditions.
Power quality monitoring and electrical power system simulations create a reality-simulation pair that brings greater depth and understanding to power quality conditions.
A More Complete Picture – The Reality / Simulation Pair
Even if we built the best data pre-processing methods and trained an AI engine to 100% accuracy, a definitive context would still be missing. Where is this meter in the network? How does this meter relate to another meter electrically upstream? What is the impedance between these two meters? Are my meters and my network simulation matching? To answer these questions and gain a deeper understanding of PQ events and conditions, a network model or digital twin of the electrical system is necessary. This leads to our fifth and final key observation: Power quality monitoring and electrical power system simulations create a reality-simulation pair. Consider these supporting comments:
- Power quality measurements supply benchmarks for model validation.
- Power quality measurements capture conditions that need to be understood so models can be improved to predict them.
- Power system simulations predict expected power quality conditions for design, system operation, and measurement and verification.
- Power system simulations and power quality measurements can be used to validate each other.
- Power system simulations can educate a user in the finer aspects of the vector and transient nature of AC power systems.
- Power system simulations have a network view that can aid power quality measurements, which have a narrow field of view. Together, they bring greater depth and understanding to power quality conditions.
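As one small example of the reality-simulation pair validating itself, measured phasors from two meters can be checked against the model’s impedance for the segment between them; the function name, phasor values, and tolerance below are illustrative assumptions, not a real validation procedure:

```python
def validate_segment_impedance(v_up, v_down, current, z_model, tolerance=0.05):
    """Compare the impedance implied by two meters' phasor readings
    against the network model's value for the segment between them.

    v_up, v_down : complex voltage phasors at the upstream and
        downstream meters (volts)
    current      : complex current phasor through the segment (amps)
    z_model      : complex segment impedance from the simulation (ohms)
    Returns the measured impedance and whether it agrees with the
    model within the relative tolerance.
    """
    z_measured = (v_up - v_down) / current
    error = abs(z_measured - z_model) / abs(z_model)
    return z_measured, error <= tolerance

# Toy phasors on a 7.2 kV feeder segment (all values hypothetical).
z, ok = validate_segment_impedance(
    v_up=7200 + 0j, v_down=7150 - 20j, current=100 - 30j,
    z_model=0.41 + 0.33j)
print(z, ok)
```

When the check fails, either the model is stale (reconfiguration, regulator step) or a meter is suspect, and each side of the pair helps diagnose the other.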
To gain the added network view, additional details will need to be hammered out that integrate network simulation results into AI. This is compounded by the fact that both the monitored system and the network model can dynamically change as the system changes. As such, there is much to do, and hopefully, by now, you sense the magnitude of the effort required.
Conclusion
In conclusion, let’s review our key observations regarding Data Indigestion and the need for Big Processing:
- Getting to an AI implementation requires formidable engineering attention.
- Big Processing is hammer time, not research time.
- The metering implementation will impact data pre-processing within Big Processing.
- Well-trained and experienced engineers are essential for developing AI preprocessing and properly training AI engines.
- Power quality monitoring and electrical power system simulations create a reality/simulation pair that brings greater depth and understanding to power quality conditions.
There is a future for our PQ data, which will require considerable work and effort. As this article suggests, we need an open-minded approach, grounded in practicality and relevance. Properly implemented Big Processing is the key to intelligent systems and AI’s glorious purpose.
Seth Johnson is President and General Manager of Powerside, a company that specializes in optimizing power quality for Utilities and C&I markets worldwide. He holds a bachelor’s degree from the University of Minnesota—Twin Cities and is an established thought leader and industry expert.

Christopher Duffey is a Senior Technical Fellow at Powerside with 40 years of electrical power system engineering experience. That experience includes power system measurements, harmonics, power system dynamics, failure analysis, expertise in power system simulations, consulting, teaching, and designing and coding power system analysis engines. In addition, he is a Senior member of IEEE, holds two patents on airgap torque transfer devices, has BS and MS degrees in electrical engineering, wrote his Master’s thesis on wind energy in Kansas, and is an Eagle Scout.

This article was originally published in the November 2025 issue of the Resilience of the Power System magazine.