For over a century, the fuse has been at the core of many electric distribution system protection schemes. Early fuses were used to protect telegraph lines from lightning, and Thomas Edison incorporated fuses into his electric distribution system, patenting a fuse design in 1890.
In 1909, a fuse that failed to “quench the arc” and fully interrupt a fault caused a fire at the Fisk Generating Station in Chicago. Shortly afterward, two young Chicago engineers, Edmund Schweitzer and Nicholas Conrad, began investigating a number of innovations, including a liquid-filled fuse designed to ensure the arc was quenched. Their work led them to form S&C Electric Company. Decades later, Schweitzer’s grandson continued the tradition of innovative protection when he founded Schweitzer Engineering Laboratories (SEL), an early leader in “intelligent” protection devices.
Fuses are popular for a variety of reasons. They are economical, reasonably small, and easy to replace. The degree to which they have made the electric system safer and more reliable over the last century is incalculable.
But a fuse has one fundamental requirement: there must be sufficient fault current available to cause it to “blow” when a problem arises.
Traditionally this has not been an issue. People sometimes use the term “big iron” to refer to the heavy equipment of the power system, from the huge generators at power plants to substation transformers larger than the average electric vehicle. If a portion of a distribution feeder faults, this infrastructure drives a surge of current into the fault, and that surge, in turn, causes the fuse to open.
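The basic idea can be sketched with Ohm’s law: the stiffer the source (lower impedance), the more current flows into a fault. The voltage and impedance figures below are purely illustrative, not from any particular feeder.

```python
# Illustrative only: hypothetical source voltage and impedance values.
def available_fault_current(line_to_neutral_v, source_impedance_ohms):
    """Estimate bolted-fault current (A) using Ohm's law: I = V / Z."""
    return line_to_neutral_v / source_impedance_ohms

# A stiff utility source: 7,200 V line-to-neutral behind ~1.5 ohms of impedance
i_fault = available_fault_current(7_200, 1.5)
print(f"Available fault current: {i_fault:,.0f} A")  # prints 4,800 A
```

Thousands of amps are available to melt the fuse element quickly; the question the rest of this article raises is what happens when the source behind the fault is not “big iron.”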
At least until recently…
With the influx of Distributed Energy Resources (DERs), the distribution system is evolving in ways that may lead to significant changes in behavior. A key example is in the area of system protection. When faced with a fault on the distribution system, the typical inverter on a residential rooftop solar system (or even the bank of inverters on a larger solar farm) will simply shut down. To protect itself, it does not attempt to push massive amounts of current into the fault; as a result, the inverter makes no significant contribution to the fault current required to blow a fuse and clear the fault.
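The gap is large. As a rough, hypothetical comparison (the inverter size, service voltage, current-limit multiplier, and fuse figure below are all assumed for illustration), an inverter that limits fault output to a little above its rated current supplies only a small fraction of what a conventional fuse needs to melt quickly:

```python
# Hypothetical figures: a 10 kW rooftop inverter on a 240 V service.
# Inverters commonly limit fault output to roughly 1.1-1.2x rated current.
rated_kw, service_v = 10, 240
rated_current = rated_kw * 1_000 / service_v   # ~41.7 A rated output
inverter_fault_limit = 1.2 * rated_current     # ~50 A during a fault

# Assumed fuse requirement: ~10x the rating of a 60 A fuse for a fast melt
fuse_fast_melt_current = 600

share = inverter_fault_limit / fuse_fast_melt_current
print(f"Inverter supplies ~{share:.0%} of the current the fuse needs")
```

Even aggregating many such inverters rarely approaches the surge a utility source provides, which is why the fuse may simply never open.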
Low DER penetration limits the impact of this issue on most distribution systems today. But as feeders connect the ever-growing number of DERs, many utilities will eventually be forced to address this issue. This scenario is especially likely to arise when utilities leverage DERs to realize benefits such as reducing end-of-line voltage issues, which will, in turn, allow feeders to extend beyond their traditional reach.
Many utilities are concerned about being the always-present “backup” choice for customers who primarily use DERs. But with the issue described above, utilities should also be concerned about being the sole source of the “fault current” needed for the proper operation of traditional protection schemes.
What is the answer?
As both DERs and system protection advance in the coming years, many factors need to be taken into consideration. One fundamental change to today’s common approach would be to utilize protective devices that watch for the voltage dips associated with faults instead of depending on fault current surges. “Intelligent” protection schemes that rely on various devices sharing information about what they are seeing in near real-time can also help identify and isolate faults with protection devices such as reclosers. This technology is proven and available today, but when compared to fuses its implementation requires a more intelligent (and more expensive) device.
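The voltage-based approach can be illustrated with a minimal sketch (not vendor code): trip when voltage stays below a threshold for several consecutive samples, so that momentary sags do not cause nuisance operations. The nominal voltage, dip threshold, and hold count below are assumed values.

```python
# Minimal sketch of undervoltage-based fault detection: trip only when voltage
# stays below dip_pct * nominal for hold_samples consecutive readings.
def voltage_dip_trip(samples, nominal_v=7_200, dip_pct=0.5, hold_samples=3):
    """Return True if a sustained voltage dip (a likely fault) is detected."""
    threshold = dip_pct * nominal_v
    run = 0
    for v in samples:
        run = run + 1 if v < threshold else 0
        if run >= hold_samples:
            return True
    return False

print(voltage_dip_trip([7200, 7150, 3100, 3050, 2990, 7180]))  # True: sustained dip
print(voltage_dip_trip([7200, 3100, 7150, 3050, 7180]))        # False: transient dips
```

Real schemes layer on coordination, directionality, and peer-to-peer communication between devices, but the core insight is the same: a fault reliably depresses voltage even when it cannot produce a large current surge.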
There may also be more subtle impacts to consider. For example, if we reach the point where there is not sufficient fault current to operate fuses, we might also find that fault indicators no longer operate reliably. In the long term, a utility might be better off scaling back the deployment of fault indicators and accelerating the deployment of more advanced protection schemes which can also provide insight on fault location.
Different utilities will progress down the DER path at different paces and in different ways, but to optimize capital investments a robust long-term view is needed. For example, during a recent discussion about one utility’s plans for widespread deployment of additional fault indicators, the conversation led to a debate about whether it would be better to invest some or all of the capital dollars into a more advanced protection scheme. Although individual points in the advanced protection scheme would be more costly to implement and provide less granularity on fault locations, it might still be the better long-term investment. There was no easy answer, but at least the right questions were now on the table for consideration, and an approach for making those decisions is being built.
Yet another example is the various devices that are “hung” on the distribution system and are energized using “power harvesting” from the line. With DERs spread along a feeder, current flows along portions of the feeder may begin to decrease and eventually drop below the minimum current needed for power-harvesting technologies to work. If current plans include deploying new devices that use power harvesting, DER penetration is one more thing that has to be considered as part of solution evaluation and placement selection.
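A simple placement screen captures the concern: check a feeder segment’s expected current profile against the minimum the harvesting device needs to stay energized. The 5 A minimum and the hourly profile below are hypothetical; an actual evaluation would use the vendor’s datasheet and load-flow results.

```python
# Hypothetical placement check: will line current stay above the minimum a
# power-harvesting (line-current-powered) device needs to remain energized?
MIN_HARVEST_AMPS = 5  # assumed device spec; check the vendor datasheet

def harvesting_viable(hourly_line_currents_a, min_amps=MIN_HARVEST_AMPS):
    """Flag, hour by hour, whether line current meets the harvesting minimum."""
    return [i >= min_amps for i in hourly_line_currents_a]

# Midday DER output can push net current on a feeder segment toward zero
profile = [40, 35, 12, 4, 2, 6, 30, 45]
print(harvesting_viable(profile))
# [True, True, True, False, False, True, True, True]
```

Here the device would lose power during the midday hours, exactly when DER output peaks, so either a different placement or a battery-backed device would be needed.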
West Monroe has helped clients work through the process of identifying these types of “less than obvious” impacts and build business cases that help optimize the prioritization of capital projects. Although there is inherent uncertainty around the business case, having a clear framework for decisions and explicitly recognizing that uncertainty exists helps successfully navigate the choices. And, as is sometimes pointed out, models are often not so much about providing an answer as they are about gaining insight.