In the electric sector, there has been a sudden surge of digital twin solutions tailored to address distinct utility challenges. Yet this rapid influx has also unleashed a wave of confusion, as the industry grapples with the varied and often limited definitions of what exactly a digital twin is.
As I started on this journey to better understand and define digital twins specifically for the utility industry, I realized that different entities, both within and across organizations, had varying perspectives on the meaning and purpose of digital twins. Depending on whom I talked to, some viewed a digital twin as akin to a grid management system or a power-flow simulation tool, while others saw it as equivalent to Hardware in the Loop, 3D visualizations of assets, or machine learning models that leverage sensor data. This diversity led me to conclude that either nobody is correct, or maybe everyone is correct. It had to be one or the other. I'm here to tell you it is the latter.
Stakeholders should leverage the dictionary definition of a digital twin as a starting point and mold it to their specific use case or need. Perhaps the definition of a digital twin should remain loose and open to interpretation. That flexibility in both use and definition means we may have been using digital twins in our industry for the past 20 years! Now, I could either cut the article here and claim success, or I could dig just a bit deeper to ponder the main question in my mind: Is the grid of today, or the grid of the future, the same as the historic grid that existing digital twins were built for? Do the future and past grids share the same challenges and complexities? If the answer for your utility is yes, the rest of this article might be overkill for you. If the answer is no or "it depends," then I implore you to stay with me to understand why digital twins, in the traditional sense, are no longer adequate. After all, we can't expect tools from even five years ago, built for a simpler grid, to remain relevant five years from now, even with incremental innovation.
First, let's understand why the grid of the future, or even the grid of today, is so different from the past. I'll start with the obvious: our electric grid is becoming a complex, nondeterministic system. With the addition of different technologies and processes over the last 20 years to help manage, protect, and maintain the grid, we have transformed it into a multi-dimensional, interrelated system of systems. The digital transformation within our industry has been remarkable. Transitioning from electromechanical relays to microprocessor relays, and now to controlling DERs to help operate the grid, is nothing short of amazing. However, now that we have taken these strides, we must look back on the cumulative impact of these incremental changes on our grid and our back-end systems. The current situation for many utilities is that they are sitting on petabytes of data that are growing exponentially.
They operate increasingly complex T&D system designs to improve company metrics (such as reliability) and are setting ambitious plans to become carbon neutral within 20 to 30 years through methods such as adopting EVs, DERs, and building electrification. There are also factors outside the utility's control that impact the grid and should be integrated seamlessly into the decision-making process (e.g., climate, cyber and physical attacks, load growth, regulations, politics, and government incentives). The customer is becoming an even more integral part of the story and is increasingly driving utility decisions. In short, the industry's drive to innovate and solve these forthcoming challenges is a must for survival.
However, we have yet to understand how these innovations and factors interact with and impact each other. Many innovations today are developed in silos, meant to target a specific challenge, including digital twins in the traditional sense. This may no longer be a sustainable approach for the grid of the future, and it leads me to pose a new framework to help utilities attain a more holistic view of managing the electric grid of the future. This framework is based on a new definition that goes beyond siloed digital twins (version 1.0); perhaps we need to call it Digital Twin 2.0 for Grid 2.0.
Grid 2.0 Is Becoming a Living Organism, Like the Human Body
Let me begin by illustrating the concept of Digital Twin 2.0 using an analogy from a vastly different system: the human body. This comparison will hopefully drive home the idea that holistic digital twins, which operate across domains, are much more effective at understanding the full story and making optimal decisions. Let's start by understanding the human body as a complex network of systems and processes. The human body is composed of various loads and generation sources (organs), a transmission system (nerves, veins), management and orchestration systems (the nervous, digestive, cardiovascular, sensory, and circulatory systems), a backbone that protects the body and needs reinforcement (the skeletal system, skin, bones, muscle), sensors (eyes, ears, mouth, nose, skin), and nodes that connect various parts of the body (joints, valves, etc.). These systems are transporting, managing, producing, consuming, absorbing, or exerting various fluids and chemicals. Each of these components typically has specialists who can diagnose, test, and operate on (fix) them by collecting information from the body through equipment or from the patient.
Such a complex system, as we can all relate, is a web of interconnected subsystems that can all impact each other in various scenarios. We have all experienced, in one form or another, how a single injury or action can trigger various pain points or anomalies across the body's systems. The effects can be felt simultaneously or staggered over time: seconds, minutes, or even longer. Finally, the human body is exposed to many environmental factors that can impact it, with only some of those factors within our control.

Learning from the Medical Industry's Architecture
In the medical world, the primary physician is responsible for conveying the big picture and providing patients with a holistic explanation of their situation. Specialists, on the other hand, are professionals to whom a patient is referred for further analysis. They focus on specific areas of the body and report back to the primary physician. They essentially act as federates to the central decision-maker and storyteller: the primary physician.
This framework in the medical industry works because it encourages collaboration among different doctors to share results and insights for solving specific issues.
Now, imagine if this didn't exist: a world where you had no primary physician and had to visit every specialist, then collect all the information yourself to come up with a holistic diagnosis! This collaboration among medical professionals may not be automated, and there is no user interface for communicating with the patient; the doctor acts as the analyzer and the storyteller. Still, it is far better than the alternative. It has proven essential because the body is so interconnected that one perspective alone is sometimes not enough for an effective diagnosis.
With this existing framework in the medical industry, what if doctors didn't ask us about the environment around us, such as our habits, personal life, and professional life? Would they be able to properly diagnose us? I think not. They need to understand context, usually probing deeper during a patient's visit, beyond the questionnaire, to really understand the full picture.
This context is an essential component and can be just as crucial as the exams doctors perform during a visit.
In the electric industry, we've made some progress toward these best practices from the medical world, but we've barely scratched the surface. We have different internal groups with experts performing specific functions across domains such as protection, design, planning, operations, maintenance, construction, and more. However, we lack the holistic approach that comes only from seamlessly sharing insights across these organizations; we have no clear equivalent of the primary physician. Today, insights stay trapped within domain-specific functional teams, and no relationships or trends can be drawn across the different domains to drive holistic decisions. Some findings may get shared during meetings in a PowerPoint or some other text form, but how much of that is really integrated into other business functions? As for the environment in which the grid, its assets, employees, and customers exist, electric utilities consider only some of those factors in the decision-making process. While we recognize and acknowledge the need to incorporate more environmental considerations, we are not equipped with the necessary tools and have limited access to the right datasets.
Now, to be fair to the electric industry, the scale of the electric system, its challenges, and the impact of making the wrong decision are of a completely different dimension. Nevertheless, it is worth learning from the successes and shortcomings of established industries such as the medical industry, which deals with a system that is complex in its own right and where mistakes can be a matter of life or death. This metaphor was meant to draw loose but relevant parallels between the human body and the electric grid. Both are reliable today. However, the grid is evolving into a complex, interrelated system of systems that needs to be regarded and managed like the human body. Unlike the human body, however, the grid will not naturally sustain itself as it evolves; it will require intentionally designed technologies and processes at the fingertips of utility personnel (the doctors of the grid) to enable holistic situational awareness, planning, operations, maintenance, and design. This lack of natural self-healing, autonomous characteristics in the grid drives the dire need for utilities to identify and replicate the equivalent of the primary physician. That means having the right sensing, design, technologies, and workforce in place to consider the relationships between the different domains that we analyze in a siloed manner today.
Digital Twin Definition
Moving on from the human body, let's delve into a comprehensive definition of a Digital Twin 2.0 (DT) for the electric grid: it is the holistic storyteller of the grid. The DT is a detailed virtual representation of a physical system, ranging from a single grid asset to an entire electric grid. This virtual model integrates an array of information, including attributes, GIS, sensor data, economic, societal, and customer data, electric asset data, and other relevant datasets across different domains. Together, these represent and reflect the properties, behavior, and context of the utility's grid, its environment, its customers, and its employees.
The data and information are leveraged to develop and maintain surrogate models of the grid and its assets at different fidelities, enabling utilities to run advanced single- or multi-domain scenarios. These models can be purely physics-based, or they can be physics models augmented with historical data and advanced machine learning techniques to better reflect how the grid and its assets actually behave and interact with each other. This allows users to understand the current state of the system, predict its behavior under different scenarios, and observe the impact of each scenario on the system, along with any potential cascading effects.
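To make the idea of a physics model augmented with machine learning concrete, here is a minimal, illustrative sketch in Python. It is not any vendor's product or a utility's implementation; the simplified transformer thermal model, the feature set, and the gradient-boosting correction are all assumptions chosen only to show the pattern of a physics-based estimate plus a residual learned from historical sensor data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


class TransformerThermalModel:
    """Simplified physics model: top-oil temperature as a function of loading and ambient."""

    def __init__(self, rated_rise_c: float = 55.0, exponent: float = 0.8):
        self.rated_rise_c = rated_rise_c  # temperature rise at rated load, deg C
        self.exponent = exponent          # empirical loading exponent

    def predict(self, load_pu: np.ndarray, ambient_c: np.ndarray) -> np.ndarray:
        return ambient_c + self.rated_rise_c * np.power(load_pu, 2 * self.exponent)


class HybridSurrogate:
    """Physics-based model augmented with an ML correction learned from historical data."""

    def __init__(self, physics: TransformerThermalModel):
        self.physics = physics
        self.residual_model = GradientBoostingRegressor()

    def fit(self, load_pu, ambient_c, measured_temp_c):
        features = np.column_stack([load_pu, ambient_c])
        # Learn what the physics model misses (aging, cooling condition, siting, ...)
        residual = measured_temp_c - self.physics.predict(load_pu, ambient_c)
        self.residual_model.fit(features, residual)
        return self

    def predict(self, load_pu, ambient_c):
        features = np.column_stack([load_pu, ambient_c])
        return self.physics.predict(load_pu, ambient_c) + self.residual_model.predict(features)
```

The same pattern generalizes beyond a single transformer: the physics model supplies first-principles behavior, and the learned residual captures what the equations alone cannot.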
The DT can be used in real-time for operations or offline for planning, each requiring different sets of information. A real-time DT provides situational awareness, uses machine learning (ML) models to analyze the system’s performance, and considers external dynamic factors such as climate and traffic. An offline DT, on the other hand, can be used for investigations, troubleshooting, and planning studies by learning from historical data and combining physics-based models with ML to reflect real-world conditions of the grid and its surroundings. This holistic approach enables effective decision-making and risk management in the utility sector.
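As a rough illustration of how these two modes might differ in configuration, consider the sketch below; the mode names, data sources, and refresh interval are hypothetical assumptions, not a standard taxonomy or an existing product's API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple


class TwinMode(Enum):
    REAL_TIME = "operations"  # situational awareness from streaming telemetry
    OFFLINE = "planning"      # investigations, troubleshooting, planning studies


@dataclass
class TwinConfiguration:
    mode: TwinMode
    data_sources: Tuple[str, ...]      # where the twin pulls its information from
    refresh_seconds: Optional[float]   # None for offline/batch studies


real_time_twin = TwinConfiguration(
    mode=TwinMode.REAL_TIME,
    data_sources=("SCADA", "AMI", "weather feed", "traffic feed"),
    refresh_seconds=4.0,
)

offline_twin = TwinConfiguration(
    mode=TwinMode.OFFLINE,
    data_sources=("historian", "GIS", "asset registry", "outage history"),
    refresh_seconds=None,
)
```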
Success of the DT Depends on its Architecture
Through several iterations and years of research and discussion, I have concluded that Digital Twin 2.0 cannot feasibly be developed by any single vendor, or even a few vendors, alone. It should be a combination of technologies, algorithms, and processes that communicate with each other based on a common framework and understanding. Furthermore, I'd encourage you to explore how artificial intelligence could be used as a tool to enable and support Digital Twin 2.0.
Here are some core features and dependencies that every utility exploring Digital Twin 2.0 should consider:
1. Use-case driven – A strategic roadmap to build the DT over time is necessary to develop the architecture and the ultimate DT vision.
2. Scalable – The DT will grow over time as use cases increase and vary, requiring additional datasets, models, and functionalities. The architecture and design choices should therefore account for scalability across these dimensions.
3. Interoperable – It will require tremendous collaboration within the industry and research community to develop tools that can all exchange information seamlessly and instantaneously (a conceptual sketch of this federated exchange follows this list).
4. Accurate, up-to-date models – Just as with DT 1.0, if the DT 2.0 models that represent the grid are not accurate, your results could be misleading.
5. A robust, flexible simulation engine – The DT should support running simulations across different domains, using different datasets and models, quickly and efficiently.
6. Quality data – Data used by the DT will be leveraged to generate insights and inform optimal decision-making; it must be ready to be consumed.
7. An intuitive, user-friendly interface – All of this will be meaningless if users cannot intuitively and quickly run simulations and process the insights and results.
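To illustrate what interoperability between today's siloed tools could look like in practice, here is a conceptual sketch of the federated architecture: each domain tool exposes a common interface, and an orchestrator, playing the role of the grid's primary physician, aggregates their insights into one cross-domain story. Every class and method name below is a hypothetical illustration, not an existing standard or product API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Insight:
    domain: str        # e.g., "protection", "planning", "asset_health"
    asset_id: str
    finding: str
    confidence: float  # 0.0 to 1.0


class DomainFederate(ABC):
    """Common contract every domain-specific tool implements so it can interoperate."""

    @abstractmethod
    def analyze(self, scenario: Dict) -> List[Insight]:
        ...


class ProtectionFederate(DomainFederate):
    def analyze(self, scenario: Dict) -> List[Insight]:
        # Placeholder: a real federate would call its own simulation engine here.
        return [Insight("protection", scenario["feeder"], "relay settings remain adequate", 0.9)]


class DigitalTwinOrchestrator:
    """The 'primary physician' of the grid: runs a scenario across all registered federates."""

    def __init__(self) -> None:
        self.federates: List[DomainFederate] = []

    def register(self, federate: DomainFederate) -> None:
        self.federates.append(federate)

    def run_scenario(self, scenario: Dict) -> List[Insight]:
        insights: List[Insight] = []
        for federate in self.federates:
            insights.extend(federate.analyze(scenario))
        return insights


# Usage: register domain federates, then ask for the cross-domain story of one scenario.
orchestrator = DigitalTwinOrchestrator()
orchestrator.register(ProtectionFederate())
print(orchestrator.run_scenario({"feeder": "FDR-1234", "event": "heat wave with EV load growth"}))
```

The value is not in any single federate but in the orchestrator's ability to draw relationships across their findings, which is exactly what the siloed tools of today cannot do.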
There is a Lot of Work Ahead of Us, but it’s Worth It
The journey toward Digital Twin 2.0 is not merely an incremental step but a transformative leap. The complexity of modern grids, with their growing web of interconnected systems and external influences, demands a sophisticated, holistic approach. Digital Twin 2.0 is not just a tool but a narrative framework that tells the complete story of the grid's past, present, and future. It requires a dedicated, knowledgeable team to navigate the industry's existing offerings and discern what truly works and what gaps remain between today's offerings and the north star. As we stand on the brink of this new era, it is only through collaboration, innovation, and a deep understanding of the grid's evolving challenges that we can harness the full potential of Digital Twin 2.0. This is the future of utilities, and it is a future that promises to revolutionize how we understand and manage our most critical infrastructure.


Abder Elandaloussi is an engineering manager at Southern California Edison in the Grid Technology Innovation team. In his current role, Abder focuses on Transmission and Distribution innovation to modernize the electric grid and prepare it best for all the challenges ahead. Some of his team’s areas of interest for innovation include digital twins, machine learning, robotics, advanced sensors, advanced T&D applications, microgrids, and blockchain.
Abder and his team are always looking to collaborate with partners to advance the state of the electric grid through the development and testing of pre-market technology. They collaborate to demonstrate cutting-edge technologies and their effectiveness in addressing utility needs. Abder is the chair of the digital twin taskforce under ITSLC in IEEE and has co-authored the first IEEE report on using machine learning to enhance power system protection and control. He has over 12 years of experience in the utility sector and, prior to SCE, held positions in engineering consulting and R&D. He holds a master's degree in electrical engineering from Kansas State University and an MBA from the University of Kansas. He is a registered Professional Engineer in California and Kansas.
This article was originally published in the August 2024 issue of the Resilience of the Power System magazine.
