By: John McNally
Analytics isn’t a new concept. Using data to drive our actions, such as using Key Performance Indicators to improve business performance, is a common objective. As they say, “what gets measured gets done.” What’s changing is how quickly that information comes back, and the sheer quantity of it.
Data is being created at such a rapid rate today that it’s estimated that 90 percent of the data in the world has been created in the last two years. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few.
This explosion of data is also occurring within the energy and utilities industry. Utilities are geometrically increasing the quantity and frequency of measured events. For a utility with 100,000 meters collecting data on 15-minute intervals, that translates to 111 data points per second, captured and stored, for a single channel of data from the meter. Many utilities capture 4 channels of data from residential meters and 10 channels from their C&I meters. Depending on system design, there could be another 25-30 data points per second for switches, reclosers, capacitors, line regulators, sensors, and automated substations, plus additional measurements for asset condition monitoring points. This scenario accumulates roughly 4.5-15 billion data points per year for a utility with 100,000 meters, depending on the frequency and number of channels being collected. That’s what is meant by Big Data/Big Volume.
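The arithmetic behind these figures is easy to sanity-check. A minimal sketch, using the meter and channel counts from the example above:

```python
# Back-of-the-envelope data volume for 100,000 meters
# reporting on 15-minute intervals (figures from the example above).
meters = 100_000
intervals_per_day = 24 * 4                   # one reading every 15 minutes = 96/day
seconds_per_day = 24 * 3600

readings_per_day = meters * intervals_per_day             # 9,600,000 per channel
readings_per_second = readings_per_day / seconds_per_day  # ~111 per second
per_year_one_channel = readings_per_day * 365             # ~3.5 billion
per_year_four_channels = per_year_one_channel * 4         # ~14 billion

print(round(readings_per_second), per_year_four_channels)
```

A single channel already lands at about 111 readings per second; four residential channels alone push the annual total to roughly 14 billion points, consistent with the range above.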
As new systems are put in place, a series of questions about the data arises, leading in turn to planning, legal, and governance questions: How much of this information needs to be stored, and for how long? What is the utility’s data governance model? What is its data privacy strategy?
West Monroe Partners recommends taking a fresh strategic approach to data planning rather than an incremental approach. The underlying concept of the smart grid transformation is that the processes and infrastructure used for managing the grid in the 20th century are not suited for managing the grid in the 21st century. A new data strategy is needed.
Leveraging data is essentially a continual improvement process. Along with this strategy, utilities need to define how to collect and manage the data accurately. In other words, the key is to: Measure. Analyze. Act. Repeat.
Data occurs in two time-frames: static and real-time. Let’s think of data in a two-by-two matrix, with static and real-time on one axis, and simple and complex analysis on the other axis.
Static information has value beyond the moment of capture, such as documenting a baseline for forecasting. Three standard measurements in the utility sector, the System Average Interruption Frequency Index (SAIFI), the Customer Average Interruption Frequency Index (CAIFI), and the Customer Average Interruption Duration Index (CAIDI), are examples of static measurements and illustrate the process of “peeling the onion” to gain more insight from data. If a utility is looking for improvement, it will likely have to start with a baseline measure. SAIFI is a starting point for reliability, giving a measure of service interruptions at the system level. But what actions can a utility take with that information?
SAIFI is captured as a relative measure, but additional insight is needed: SAIFI averages interruptions across the entire customer base, so it gives no insight into the impact on individual customers. CAIFI was created to gain that insight. Are a utility’s outages evenly distributed across its customer base, or are they concentrated, repeatedly impacting a particular segment of customers?
But are all outages equal? What is the severity of their impact? CAIDI tells us duration, increasing insight by showing interruptions at the customer level: which customers are impacted, and to what degree. Some customers may be severely impacted by a single event, while other customers may be lightly impacted across multiple events.
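Under the standard IEEE 1366 definitions, all three indices fall out of simple arithmetic on outage records. A minimal sketch with hypothetical numbers:

```python
# IEEE 1366-style reliability indices from outage records (hypothetical data).
# Each record: (customers_interrupted, outage_minutes).
outages = [
    (500, 90),   # 500 customers out for 90 minutes
    (120, 240),  # 120 customers out for 4 hours
    (500, 30),   # the same 500-customer circuit hit a second time
]
customers_served = 100_000
distinct_customers_interrupted = 620   # 500 + 120; repeat interruptions counted once

total_ci = sum(n for n, _ in outages)       # total customer interruptions: 1,120
total_cmi = sum(n * m for n, m in outages)  # total customer-minutes: 88,800

saifi = total_ci / customers_served                # interruptions per customer served
caifi = total_ci / distinct_customers_interrupted  # interruptions per affected customer
caidi = total_cmi / total_ci                       # restoration minutes per interruption
```

Note how the same outage records yield different stories: SAIFI here is a low 0.0112 because it averages over all 100,000 customers, while CAIFI (about 1.8) reveals that the customers who were affected were hit nearly twice on average.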
These measurements start peeling the onion. They provide the beginning capability for a fishbone analysis that identifies causes along with opportunities to improve reliability for customers. Recommendations can be judged against a cost-benefit analysis. For example, can overhead lines be routed underground where the utility is most vulnerable to interruptions? This is not just a budget decision. Reliability can be measured. Customer opportunity costs and public perception can be quantified.
Each level of insight provides the seed for asking questions to penetrate the next level of insight. This SAIFI/CAIDI/CAIFI example is from historical, or static, data. An example of Complex Static information could be analyzing enough historical data to determine data relationships that may not be discovered from ordinary queries and reporting.
The New Data Frontier
The next frontier for analytics is real-time data. Real-time data is delivered for operating decisions and asset condition monitoring, and (as near-real-time data) to customers for making energy management decisions. It drives Distribution Automation, Volt/VAR optimization, predictive analytics, and insight into home energy usage patterns. This real-time data should also be captured, because it remains valuable as static data for purposes such as load research and interval-data snapshots of customers by demographic category. Complex real-time analytics that combine data from smart meters, SCADA, expected system demand, and weather forecasting will be needed to address challenges such as interconnecting highly variable wind and solar resources, risk management, and energy trading.
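As one small illustration of the kind of building block real-time analytics rests on, consider checking each incoming interval read against a rolling window of recent history. This is a hypothetical sketch, not a production algorithm; the window size, threshold, and readings are invented for illustration:

```python
from collections import deque

def make_anomaly_check(window=96, threshold=1.5):
    """Flag an interval read that exceeds the rolling mean of recent
    reads by a multiplicative threshold. Hypothetical illustration."""
    history = deque(maxlen=window)  # e.g. one day of 15-minute reads

    def check(kw):
        flagged = bool(history) and kw > threshold * (sum(history) / len(history))
        history.append(kw)
        return flagged

    return check

# Hypothetical usage: a short window and a sudden spike in the last read.
check = make_anomaly_check(window=4, threshold=1.5)
reads = [2.0, 2.1, 1.9, 2.0, 5.0]   # kW interval reads
flags = [check(kw) for kw in reads]  # only the final spike is flagged
```

In practice this logic would run inside a stream-processing platform rather than a Python loop, but the principle is the same: each measurement is evaluated the moment it arrives, rather than in a batch report weeks later.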
Achieving real-time analytics could be daunting to implement in one “Big Bang.” It’s a digital disruption of how utilities have operated in the past. West Monroe Partners recommends starting with a high-level strategic view and critical thinking, asking questions such as “What are the possibilities?” and “What if…?” The next steps are to identify a measurement that can answer these possibilities, then develop the first-order questions the utility would like answered in real-time scenarios. With that established, the utility can proceed to second-order questions.
How do you get started?
The first things needed are a framework, a technology roadmap, a purpose, and data governance. Start with a modest plan. What can you measure that will help you? Set it up. Measure it. Automate data collection. Automate reporting or actions. Don’t think of data as a destination with a final conclusion. Think of it as a continual service improvement journey. Think it, prove it, learn from it, refine it, and then expand it. Repeat.
New data collection opportunities – changes introduced with real-time data collection.
You will need an analytics capability that sources data from a data warehouse and a big data platform. You will need in-house skills to make it work technically and analytically, dealing with large amounts of structured and unstructured content. You will need integration to pull data from multiple sources and distribute components to multiple locations.
In summary, you will need all of these: the strategy, the platform, the skills, and the integration to put your data to work.
What gets measured gets done. If you want to talk more about this topic, please feel free to reach out to me at JMcNally@WestMonroePartners.com or at 312-980-9490.