Distributed Energy Resources Offer a Better Alternative

Motivated by the challenge of designing a grid appropriate to the 21st century, this report first focuses on quantifying the net economic benefits that DERs can offer to society. The approach builds on existing avoided cost methodologies – which industry leaders have already applied to DERs – while introducing updated methods for hard-to-quantify DER benefit categories that are excluded from traditional analyses. While the final net benefit calculation derived in this report is specific to California, the overall methodological advancements developed here are applicable across the U.S. Moreover, the ultimate conclusion of this analysis – that DERs offer a better alternative to many traditional infrastructure solutions in advancing the 21st century grid – should also hold true across the U.S., although the exact net benefits of DERs will vary by region.

A. Methodology 

The methodology utilized in this paper is built upon well-established frameworks for valuing policies, programs and resources – frameworks that are grounded in the quantification of the costs and benefits of distributed energy resources. Specifically, the methodology employed here: 

1. Begins with the Electric Power Research Institute’s 2015 Integrated Grid/Cost Benefit Framework in order to quantify total net societal costs and benefits in a framework that applies nationally. 

2. Quantifies the benefits for the state of California, where the modeling of individual cost and benefit categories is possible using the California Public Utilities Commission 2015 Net Energy Metering Successor Public Tool. Within the context of California, this report’s DER avoided cost methodology is expanded beyond EPRI’s base methodology to incorporate commonly recognized (although not always quantified) categories of benefits and costs, while also proposing methodologies for several hard-to-quantify categories using the Public Tool. 

3. Incorporates the full costs of DER integration, including DER integration cost data as identified by California utilities in their 2015 Distribution Resource Plans to determine the net benefits of achieving 2020 penetration levels.

4. Repeats the methodology in a concrete case study by applying it to the planned distribution capacity projects from the most recent Phase I General Rate Case in California. 

Enhancing Traditional Cost/Benefit Analysis and Describing Benefits as Avoided Cost 

Cost/benefit analyses have been conducted for many decades to evaluate everything from utility-owned generation to utility administered customer programs such as energy efficiency rebates and demand response program funding. This paper replicates established methodologies wherever possible, and offers new or enhanced methodologies where appropriate to consider new benefit categories that are novel to customer-driven adoption of DERs, and therefore often excluded from traditional analyses.

A key component of cost/benefit analysis commonly used for valuing the benefits of DER is the avoided cost concept, which considers the benefits of a policy pathway by quantifying the reduction in costs that would otherwise be incurred in a business-as-usual trajectory. While avoided cost calculations can be performed with varying scopes, there is some degree of consensus on what the appropriate value categories are in a comprehensive avoided cost study. Groups like IREC, RMI and EPRI have attempted to take these standard valuation frameworks even further, describing general methods for valuing some of the benefit categories that are often excluded from traditional analyses. 

Each step taken by researchers to enhance previously used avoided cost methodologies advances the industry beyond outdated historical paradigms. DER-specific methodological updates include considering new types of avoided costs that distributed resources can provide, and revising the assumptions that customer-adopted resources are uncontrollable, passive deliverers of value to the grid and that proactive planning and policies cannot or will not be implemented to maximize the value of these grid-interactive resources.

This report continues the discussion using EPRI’s 2015 Integrated Grid/Cost Benefit Framework as a springboard. EPRI’s framework, depicted in the following image, was chosen because it is the most recently published comprehensive cost/benefit analysis framework for DERs. This report assumes a basic familiarity with EPRI’s methodology – or avoided cost methodologies in general – on the part of the reader, although explanations of each cost or benefit category are included in the following section.

The Value of DERs within California 

While the overall methodology enhanced within this report is applicable nationwide, the focus of this report’s economic valuation of DERs in the cost/benefit analysis is limited to the state of California. For California’s NEM 2.0 proceeding, the energy consulting firm Energy+Environmental Economics (E3) created a sophisticated model that parties used to determine the impact of various rate design proposals. A major component of this model was the ability to assess DER avoided costs under different input assumptions. The more traditional avoided cost values in this paper are derived from the inputs used in the NEM 2.0 proposal filing of The Alliance of Solar Choice (TASC) for the E3 model, which is available publicly online. 

Additionally, benefit and cost categories for DERs – along with accompanying data and quantification methods – are being developed in the CPUC Distribution Resource Plans (DRP) proceeding. This update of the DER valuation framework in the DRP proceeding, however, is not present in the existing methodologies being used to quantify the benefits of rooftop solar in California as part of the NEM 2.0 proceeding due to the concurrent timing of the two proceedings. This report bridges these two connected proceedings in its economic analysis of the value of DERs within California. 

While evaluating net societal benefits at the system level in California is a key step in understanding the total potential value of DERs, there remains much discussion within the industry regarding whether calculated net benefits can actually be realized from changes in transmission and distribution investment planning. To this end, this analysis applies the developed California DER valuation framework to a real-world case study utilizing the latest General Rate Case (GRC) filed in California, PG&E’s 2017 General Rate Case Phase I filing. By utilizing this third dataset, in addition to the NEM 2.0 and DRP proceedings, this analysis delivers a comprehensive and up-to-date consideration of the potential value DERs can provide to the grid.

Analysis Scope, Assumed Scenario, and End State 

This report evaluates the benefits of customer DER adoption, the associated costs, and the resulting net benefit/cost.

The benefits and costs of DERs are highly dependent on penetration levels. Therefore, this analysis utilizes a set of common assumptions for expected DER penetration and specifies a market end state scenario upon which benefits and costs are quantified. The end state assumed in this report utilizes scenarios from Southern California Edison’s (SCE) July 1, 2015 Distribution Resource Plan, which includes DER adoption levels and integration cost estimates for the 2016-2020 period. These integration costs inform DER penetration assumptions that are applied consistently across the benefits calculations, ensuring that the costs of low penetration are not attributed to the benefits of high penetration, and vice versa.

To simplify the discussion, solar deployment is focused on the years 2016-2020, adopting the penetration levels and costs associated with the TASC reference case as filed in the CPUC NEM 2.0 proposal filing, which corresponds approximately to SCE’s Distribution Resource Plan Scenario 3. Of the approximately 900,000 new solar installations expected to be deployed during this period, SolarCity estimates 10% would adopt residential storage devices and 20% would adopt controllable loads (assumptions are based on customer engagement experience and customer surveys). These adoptions are central to the ability of customer DER deployments to defer and avoid traditional infrastructure investments as assessed in this paper. 

The assumptions described above are used to complete the cost/benefit analysis of DERs for the whole of California. After evaluating net societal benefits at the system level, the methodology is then applied to a particular case study of actual distribution projects proposed under the latest GRC filed within California, PG&E’s 2017 General Rate Case Phase I filing. 

In the following sections, the deployment scenario is evaluated both qualitatively and quantitatively under a cost-benefit framework that is grounded in established methodologies, but enhanced to consider the impact of such a large change in the way the electric system is operated. The study consolidates a range of existing analyses, reports and methodologies on DERs into one place, supporting a holistic assessment of the energy policy pathways in front of policy-makers today.

B. Avoided Cost Categories 

The avoided cost categories evaluated in this report are summarized in the following table. The first seven categories are included within traditional cost-benefit analyses and as such are not substantially extended in this report (see the Appendix for methodological overviews and the TASC NEM Successor Tariff filing for comprehensive descriptions of and rationale for the assumptions). The next five categories (highlighted in yellow) represent new methodological enhancements for hard-to-quantify avoided cost categories (i.e., benefit categories) that are often excluded from traditional analyses. In this section, we detail the methodology and rationale for quantifying these five avoided cost categories.

Voltage, Reactive Power, and Power Quality Support 

Solar PV and battery energy storage with ‘smart’ or advanced inverters are capable of providing reactive power and voltage support at both the bulk power and local distribution levels. At the bulk power level, smart inverters can provide reactive power support for steady-state and transient events – services traditionally supplied by equipment such as large capacitor banks and synchronous condensers. For example, in Southern California the abrupt retirement of the San Onofre Nuclear Generating Station (SONGS) in 2013 created a local shortage of reactive power support, endangering stable grid operations for SCE in the Los Angeles Basin area. To meet this reactive power need, SCE sought approval to deploy traditional reactive power equipment at a cost of $200-$350 million, as outlined in the table below. DERs were not included in the procurement to meet this need. Had DERs with smart inverters been evaluated as part of the solution, significant reactive power capacity could have been obtained, avoiding the deployment of expensive traditional equipment.

At the distribution level, smart inverters can provide voltage regulation and improve customer power quality, functions traditionally handled by distribution equipment such as capacitors, voltage regulators, and load tap changers. While the provision of reactive power may come at the expense of real power output (e.g., power otherwise produced by a PV system), inverter headroom either already exists or can readily be incorporated into new installations to provide this service without impacting real power output. The capability of DER smart inverters to provide voltage and power quality support is currently being validated in several field demonstration projects across the country. For instance, a demonstration project in partnership with an investor-owned utility is currently demonstrating voltage support from a portfolio of roughly 150 smart inverters controlling 700 kW of residential PV systems. The chart below depicts the dynamic reactive power delivered to support local voltage. In this instance, smart inverter support resulted in a 30% flatter voltage profile.

Projects such as the SONGS reactive power procurement provide recent examples where utility investment was made to obtain reactive power capacity. These projects were used to quantify the economic benefit of DERs providing reactive power support: a corresponding $/kVAR-year value was applied to the inverter capacity assumed in the deployment scenarios to determine the value of the services offered by the DER portfolio. Note also that markets including NYISO, PJM, ISO-NE, MISO, and CAISO already compensate generators both for the capability to provide reactive power and for its actual provision.
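To make the arithmetic concrete, the minimal sketch below applies a $/kVAR-year value to inverter reactive headroom. The inverter rating, system size, and the $30/kVAR-year figure are illustrative assumptions, not values from this report.

```python
# Sketch: valuing DER reactive power capability with a $/kVAR-year proxy.

def reactive_power_value(inverter_kva: float, real_power_kw: float,
                         value_per_kvar_year: float) -> float:
    """Annual value of one inverter's reactive headroom.

    Headroom is the reactive power available even at full real-power
    output: Q = sqrt(S^2 - P^2) for an S-kVA inverter delivering P kW.
    """
    headroom_kvar = (inverter_kva ** 2 - real_power_kw ** 2) ** 0.5
    return headroom_kvar * value_per_kvar_year

# Hypothetical example: a 7.6 kVA inverter on a 5 kW PV system,
# valued at an assumed $30/kVAR-year.
print(f"${reactive_power_value(7.6, 5.0, 30.0):,.0f} per inverter-year")
```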

Conservation Voltage Reduction 

Smart inverters can enable greater savings from utility conservation voltage reduction (CVR) programs. CVR is a demand reduction and energy efficiency technique that reduces customer service voltages in order to achieve a corresponding reduction in energy consumption. CVR programs are often implemented system-wide or on large portions of a utility’s distribution grid in order to conserve energy, lower customers’ energy bills, and reduce greenhouse gas emissions. CVR programs typically save up to 4% of energy consumption on a given distribution circuit. The utilization of smart inverters is estimated to yield another 1-3% of incremental energy savings and greenhouse gas emission reductions.

From an engineering perspective, CVR schemes aim to reduce customer voltages to the lowest limit allowed by American National Standards Institute (ANSI) standards. However, CVR programs typically control only utility-owned distribution voltage regulating equipment, changes to which affect all customers downstream of any specific device. As such, CVR benefits in practice are limited by the lowest customer voltage in any utility voltage regulation zone (often a portion of a distribution circuit), since dropping the voltage any further would violate ANSI standards for that customer.

Since smart inverters can increase or decrease the voltage at any individual location, DERs with smart inverters can be used to control customer voltages more granularly in CVR schemes. For example, if the lowest customer voltage in a utility voltage regulation zone were raised by, say, 1 volt via a local smart inverter, the entire voltage regulation zone could then be lowered another volt, delivering substantially increased CVR benefits. Such an example is depicted in the image below, where the green line represents a circuit voltage profile in which smart inverters support CVR. Granular control of customer voltages through smart inverters can dramatically increase CVR benefits.
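The sketch below illustrates this arithmetic under a simple linear CVR-factor model (percent energy saved per percent voltage reduction). The feeder energy, voltages, and the 0.8 CVR factor are assumptions chosen for illustration; actual factors vary by circuit.

```python
# Sketch of CVR savings under an assumed linear CVR factor.

def cvr_energy_savings(annual_mwh: float, baseline_v: float,
                       reduced_v: float, cvr_factor: float = 0.8) -> float:
    """Energy saved (MWh/yr) from lowering the average service voltage."""
    pct_v_reduction = 100.0 * (baseline_v - reduced_v) / baseline_v
    return annual_mwh * (cvr_factor * pct_v_reduction) / 100.0

feeder_mwh = 50_000  # hypothetical annual feeder energy
base = cvr_energy_savings(feeder_mwh, 120.0, 117.0)   # utility CVR alone
extra = cvr_energy_savings(feeder_mwh, 117.0, 116.0)  # extra 1 V enabled by smart inverters
print(f"baseline CVR: {base:,.0f} MWh/yr; incremental: {extra:,.0f} MWh/yr")
```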

Equipment Life Extension 

Through local generation, load shifting, and/or energy efficiency, DERs reduce the net load at individual customer premises. A portfolio of optimized DERs dispersed across a distribution circuit in turn reduces the net load on all equipment along that circuit. Distribution equipment, such as substation transformers, operating at reduced loading benefits from increased equipment life and higher operational efficiency.

Distribution equipment may operate at very high loading during periods of peak demand, abnormal configuration, or emergency operation. When the nominal rating of equipment is exceeded – that is, when it is overloaded – the equipment suffers degradation and a reduction in operational life. The more frequently equipment is overloaded, the more such degradation occurs. Furthermore, the efficiency of transformers and other grid equipment falls as they operate under increased load: the higher the overload, the larger the efficiency losses. Significant portions of utilities’ grid equipment regularly operate in an overloaded fashion. DERs’ ability to reduce peak and average load on distribution equipment therefore reduces detrimental operation and increases useful life, as shown in the following figure. The larger the peak load reduction, the larger the life extension and efficiency benefits.

To quantify these benefits, medium to large liquid-filled transformers were modeled with typical load and DER generation profiles. The magnitude of the reduced losses and the resulting avoided equipment degradation were calculated using the per-unit life calculation methodology of the IEEE C57.12.00-2000 standard. DERs such as energy storage are able to achieve an even greater avoided cost than solar alone, as storage dispatch can more closely match the distribution peak. The quantified benefits contributing to the net societal benefits calculation include deferred equipment investment due to extended equipment life and reduced energy losses through increased efficiency.
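For intuition, the sketch below implements the Arrhenius-style insulation aging model used in IEEE transformer loading guides, where the 110 °C reference hot-spot temperature applies to 65 °C-rise insulation systems. This is a generic illustration of the standard’s approach, not the report’s full model.

```python
import math

# Relative insulation aging rate vs. the 110 C reference hot spot,
# per the Arrhenius model in IEEE transformer loading guides.

def aging_acceleration_factor(hotspot_c: float) -> float:
    return math.exp(15000.0 / 383.0 - 15000.0 / (hotspot_c + 273.0))

# Peak shaving by DERs lowers the peak hot-spot temperature; even a
# few degrees changes the aging rate materially.
for hotspot in (100, 110, 120, 130):
    print(f"{hotspot} C -> aging factor {aging_acceleration_factor(hotspot):.2f}")
```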

Note that non-optimized DERs are sometimes cited as having a negative impact on equipment life. While highly variable generation and load can indeed shorten equipment life – for example, by driving increased operation of line regulators – optimized and coordinated smart inverters mitigate this potential volatility impact.

Resiliency and Reliability 

DERs such as energy storage can provide backup power to critical loads, improving customer reliability during routine outages and resiliency during major outages. The rapidly growing penetration of batteries combined with PV deployments will reduce the frequency and duration of customer outages and provide sustained power for critical devices, as depicted in the adjacent figure. 

Improved reliability and resiliency have been the goal of significant utility investments, including feeder reconductoring and distribution automation programs such as fault location, isolation, and service restoration (FLISR). Battery deployments throughout the distribution system can eventually reduce utility reliability and resiliency investments. However, this analysis takes a conservative approach, considering only average customer savings from reduced outages and excluding avoided utility investments.

To quantify near-term reliability and resiliency benefits, the value of lost load as calculated by Lawrence Berkeley National Lab was applied to the energy that could be supplied during outages. Outages were based on 2014 CPUC SAIFI statistics. 
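A minimal sketch of this valuation follows: the value of lost load (VoLL) is applied to the outage energy that storage could have served. The fleet size, outage statistics, backed-up load, and VoLL below are placeholder assumptions, not the LBNL or CPUC figures used in the report.

```python
# Sketch: reliability benefit = VoLL x outage energy served by storage.

def reliability_benefit(customers: int, saifi: float, avg_outage_hours: float,
                        backed_up_kw: float, voll_per_kwh: float) -> float:
    """Annual $ benefit of DER backup across a storage-equipped fleet."""
    outage_hours = saifi * avg_outage_hours  # expected hours out per customer-year
    energy_served_kwh = customers * outage_hours * backed_up_kw
    return energy_served_kwh * voll_per_kwh

# Hypothetical inputs: 90,000 storage customers, SAIFI of 1.1 events/yr,
# 2 h average outage, 1 kW of backed-up critical load, $5/kWh VoLL.
print(f"${reliability_benefit(90_000, 1.1, 2.0, 1.0, 5.0):,.0f} per year")
```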

Market Price Suppression Effect 

Wholesale electricity markets provide a competitive framework for electric supply to meet demand. In general, as electric demand increases, market prices increase. DERs can provide value by reducing the electric demand in the market, leading to a reduction in the market clearing price for all consumers of electricity. This effect was recently validated in the U.S. Supreme Court’s decision to uphold FERC Order 745, which noted that operators accept demand response bids if and only if they bring down the wholesale rate by displacing higher-priced generation. Notably, the court emphasized that “when this occurs (most often in peak periods), the easing of pressure on the grid, and the avoidance of service problems, further contributes to lower charges.” As a behind-the-meter resource, rooftop solar impacts wholesale markets in a similar way to demand response, effectively reducing demand and thus clearing prices for all resources during solar production hours. While the CPUC Public Tool attempts to consider the avoided cost of wholesale energy prices, it does not consider the benefit of reducing wholesale market clearing prices from what they would have been in the absence of solar.

This effect is illustrated in the adjacent figure. In the presence of DERs, energy prices settle at the lower price P*, rather than the higher price P that would prevail absent the DERs. Market price suppression can then be quantified as the difference between the two prices multiplied by the load, or (P − P*) × L*.

To quantify the magnitude of cost reductions due to market price suppression, this report estimates the relationship between load and market prices based on historical data. It is important to isolate other driving factors so as to capture only the effect of load changes on prices. One such factor is natural gas prices, which directly impact electric prices because the marginal supply resource in California is often a natural gas-fired power plant. This can be isolated by normalizing market prices by gas prices – a quantity known as the Implied Heat Rate (IHR) – and estimating the relationship between IHR and load, which is shown in the figure below for PG&E DLAP prices and load.
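The sketch below works through this estimation on synthetic data: fit IHR against load, then apply the (P − P*) × L* formula. The load and price series, gas price, and DER reduction are fabricated for illustration only; the report’s estimate uses historical PG&E DLAP data.

```python
import numpy as np

# Sketch: estimate price suppression via an implied-heat-rate regression.
load = np.array([8000, 10000, 12000, 14000, 16000])   # MW (synthetic)
price = np.array([25.0, 32.0, 41.0, 53.0, 68.0])      # $/MWh (synthetic)
gas = 3.0                                             # $/MMBtu (assumed)

ihr = price / gas                                     # implied heat rate, MMBtu/MWh
slope, intercept = np.polyfit(load, ihr, 1)           # linear IHR-vs-load fit

der_mw = 500                                          # assumed DER load reduction
l_star = load[-1] - der_mw                            # suppressed load L*
p = (slope * load[-1] + intercept) * gas              # price P without DERs
p_star = (slope * l_star + intercept) * gas           # price P* with DERs

suppression = (p - p_star) * l_star                   # $/h benefit = (P - P*) x L*
print(f"P = {p:.1f}, P* = {p_star:.1f} $/MWh; benefit ~ ${suppression:,.0f}/h")
```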

Smart energy homes equipped with energy storage are able to achieve an even greater avoided cost than distributed solar alone. Storage devices that discharge in peak demand hours with high market clearing prices can take advantage of the stronger relationship between load and price at high loads. 

Results 

After establishing the 2016-2020 penetration scenario and defining the methodologies for each category of avoided cost, the CPUC Public Tool was utilized to estimate the benefits of achieving the 2020 penetration scenario. For avoided cost categories that the CPUC Public Tool could not incorporate, calculations were completed externally using common penetration and operational assumptions for each technology type. To be consistent with the CPUC Public Tool outputs, the levelized values below are expressed in annual terms in 2015 dollars.

Previous assessments of high-penetration DERs have replicated existing methodologies often applied to passive assets like energy efficiency; however, these approaches fail to recognize the potential value of the advanced DERs that will be deployed during the 2016-2020 timeframe. When a more comprehensive suite of benefits that DERs can generate today is considered, the total benefits of the 2016-2020 DER portfolio in California exceed $2.5 billion per year.

C. The Costs of Distributed Energy Resources 

As presented above, distributed resources offer significant ratepayer benefits; however, these benefits are not available without incurring incremental costs to enable their deployment. In order to quantify the net societal benefit of DERs, these costs must be subtracted from the benefits. Costs for distributed energy resources include integration at the distribution and bulk system levels, utility program management, and customer equipment.

Distribution Integration Costs

DERs are a critical new asset class being deployed on the distribution grid which must be proactively planned for and integrated with existing assets. This integration process will sometimes require unavoidable additional investments. However, it is essential to separate incremental DER integration costs from business as usual utility investments. Recent utility funding requests for DER integration have included costs above those needed to successfully integrate DERs. This subsection will explore typical DER integration costs and evaluate the validity of each type. 

While new DER integration rules of thumb and planning guidelines are emerging, no established approach exists for identifying DER integration investments or estimating their cost. It is clear, however, that integration efforts and costs vary by DER penetration level. Generally, lower DER penetration requires fewer integration investments, while higher penetration may lead to increased investment. As depicted in the following chart, NEM PV penetration levels vary across the U.S. Most states have very low (<5%) penetration, while only Hawaii experiences medium (10-20%) penetration. California exhibits low (5-10%) penetration overall, although individual circuits may experience much higher penetration.

For this analysis, DER integration costs were developed from estimates submitted by California utilities to the CPUC as part of their Distribution Resource Planning (DRP) filings. This analysis incorporates the specific cost categories and figures from Southern California Edison’s filing, since this filing alone included specific cost estimates. In assessing these costs, each proposed investment was reviewed to determine whether it was a required incremental cost resulting from the integration of DERs. If so, it should indeed be included in the cost/benefit calculation. If the investment (or a portion thereof) was determined to be a component of utility business-as-usual operations, it was not included in the analysis.

In order to determine whether a proposed utility investment is required, the following threshold question was asked:  

• Would these costs be incurred even in the absence of DER adoption?

If the costs would be incurred regardless of DER adoption, or if the utility had previously requested regulatory approval for the investment under a program unrelated to DER adoption, then the costs should not be classified as DER integration costs. For example, if a utility had previously requested approval to upgrade (i.e., cut over) 4kV circuits to a higher voltage in order to increase capacity and reliability before DERs were prevalent, yet now attributes the upgrade costs to DERs, then the investment should not be counted as DER integration. This threshold analysis eliminates or reduces some of the proposed utility integration costs.

Of the remaining costs, each was further assessed by asking the following set of screening questions: 

• Do more cost effective mitigation measures exist for the proposed investment? Can advanced DER functionalities (e.g. volt/VAR support) mitigate or eliminate the need for the investment?  

• Are costs relevant for the forecasted DER penetration levels, or only for much higher penetrations?

• Do stated costs reflect realistic cost figures, or do they reflect inflated estimates?

Several utility integration investments are proposed to mitigate integration challenges for which more cost-effective solutions exist. For example, voltage-related concerns due to PV variability are often used to justify the replacement of capacitor banks on distribution feeders. However, the embedded voltage and reactive power capabilities of smart inverters make the deployment of new capacitor banks redundant and overly expensive in most instances. Furthermore, while some proposed costs may be relevant for high penetrations of DERs – such as bi-directional relays to deal with reverse power flows – these investments may not be necessary at low penetration levels.
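The toy sketch below encodes the threshold and screening questions as a simple filter. The field names and the two example entries (drawn from the 4kV cutover and bi-directional relay examples above) are hypothetical simplifications of what is, in practice, an engineering judgment.

```python
# Toy filter implementing the threshold and screening questions.

def applicable_to_ders(inv: dict) -> bool:
    if inv["needed_without_ders"]:            # threshold question
        return False
    if inv["cheaper_der_mitigation_exists"]:  # e.g. smart-inverter volt/VAR support
        return False
    if not inv["relevant_at_forecast_penetration"]:
        return False
    return True

proposals = [
    {"name": "4kV circuit cutover", "needed_without_ders": True,
     "cheaper_der_mitigation_exists": False, "relevant_at_forecast_penetration": True},
    {"name": "bi-directional relays", "needed_without_ders": False,
     "cheaper_der_mitigation_exists": False, "relevant_at_forecast_penetration": False},
]
for p in proposals:
    verdict = "include" if applicable_to_ders(p) else "exclude"
    print(f"{p['name']}: {verdict}")
```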

The following table presents the DER integration investment categories identified in SCE’s DRP filing according to its Scenario 3 forecast for DER growth in California. SCE’s integration costs were scaled up to estimate total distribution integration costs for all California utilities; the table therefore represents total California distribution integration costs over 2016-2020. For each investment, applicability to DER integration is assessed using the threshold and screening questions discussed above, resulting in a quantification of the costs that are directly “Applicable to DERs”. An overview of the assessment of each high-level integration category is provided in the table, with more detailed technical discussion of each investment type and assessment rationale offered in the Appendix. This cost quantification is necessarily high-level due to the lack of detail available for each investment type; more specific assessment is necessary to evaluate integration investment plans. This exercise identifies 25% of the proposed DER integration costs – $1,450 million in total, or $189 million levelized annually – as truly applicable to DER integration, which is the figure utilized in the cost/benefit analysis in this paper.
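As a sanity check on the levelization, the sketch below converts the $1,450 million total into an annual figure with a standard capital recovery factor. The 10% discount rate and 15-year term are assumptions chosen because they approximately reproduce the report’s $189 million per year; the report’s actual levelization inputs are not stated here.

```python
# Sketch: levelizing a lump-sum cost with a capital recovery factor (CRF).

def capital_recovery_factor(rate: float, years: int) -> float:
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

total_cost_musd = 1450                   # costs applicable to DERs ($M, from the report)
crf = capital_recovery_factor(0.10, 15)  # assumed 10% rate, 15-year term
print(f"levelized: ~${total_cost_musd * crf:,.0f}M per year")  # ~$191M
```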

Bulk System Integration Costs 

Integration of variable resources with the bulk power grid is expected to increase the variable operating costs associated with how the generation fleet is used to accommodate the variability. To quantify this cost, $/MWh integration cost values developed for a 33% renewables portfolio standard were scaled per calculations adopted by the California PUC.

Utility Program Management Costs 

To estimate the incremental utility program costs associated with DER adoption, the default inputs within the Public Tool were used, which include upfront installation and metering costs, as well as incremental billing costs. All told, these costs amounted to $26 million per year based on the level of adoption in the TASC base case scenario. 

Customer Equipment Costs 

The costs of DERs themselves must be considered, including the cost of equipment, labor, and financing. For solar, CPUC Energy Division staff’s reference case solar price forecast is used to determine the cost of deployed equipment in the 2016-2020 timeframe, factoring in the December 2015 extension of the Federal Investment Tax Credit. For storage, the price forecast was based on Navigant Research’s projections; for controllable thermostats, current vendor prices were used.

Based on these forecasts, the deployments expected in the 2016-2020 timeframe yield a blended average adoption cost for the installed base of $3.86/W, or $2.70/W after reflecting the 30% Federal Investment Tax Credit (ITC), since $3.86/W × (1 − 0.30) ≈ $2.70/W. In absolute terms, the total cost of adoption to Californians translates to $12.1 billion (nominal) for 4.5 GW of rooftop solar. For co-located storage and load control, the total investment to meet adoption forecasts is $259 million.

Results 

Societal net benefit calculations require a comprehensive consideration of the costs that society bears in attaining the specified 2020 penetration levels, including the costs of administering customer programs, the grid integration costs needed to accommodate new assets, and the cost of the assets themselves, which is borne by customers. In the table below, each category is quantified, totaling $1.1 billion per year.

D. Quantifying Net Benefits 

In this section, we complete EPRI’s cost/benefit analysis by comparing the benefits and costs of DERs during the 2016-2020 deployment timeframe. For consistent comparison, levelized costs and benefits are based on the year 2020, with all benefit and cost values translated to 2015 dollars.

Establishing a common DER penetration scenario and converting all benefits and costs to net present value terms allows simple summation of each category to provide an indicative societal net benefit, suggesting significant societal value for widespread DER adoption. In total, the benefits of the analyzed scenario are $2.5 billion per year, compared to costs of $1.1 billion per year, resulting in a net societal benefit to Californians of $1.4 billion per year by 2020.

E. Case Study: PG&E’s Planned Distribution Projects in 2017 General Rate Case

In the previous section, categories of avoided costs were described and the corresponding values were quantified for the state of California. In this section, the same methodology is applied to PG&E’s planned distribution projects from its most recent General Rate Case filing, the 2017 GRC filed in September 2015.

Every three years, California utilities seek approval to recover the expenses and investments, including a target profit level, that are deemed necessary for the prudent provision of utility services. For perspective, half of customers’ utility payments in 2014 were driven by the “wires” component of the electric grid, and California’s investor-owned utilities are expected to add $143 billion of new capital investment to their distribution rate bases through 2050.

Despite the significant size of this avoided cost category, DERs have historically been considered passive assets with little potential on the “wires” side of the business. While not all distribution investment can be avoided by DERs, some currently planned projects are being implemented to accommodate demand growth and to replace aging assets; these projects could instead be deferred or avoided by DERs. While the CPUC Public Tool uses a generalized treatment of distribution capacity avoided costs to estimate the potential value of deferrals across utilities, this section uses more specific values sourced from publicly available documents.

The table below summarizes the large capacity-related distribution projects detailed in PG&E’s General Rate Case. PG&E seeks approval of $353 million for these distribution system investments. When this $353 million capital investment is adjusted to the ratepayer perspective – which includes the lifetime cost of the utility’s target profit level and the recovery from ratepayers of costs related to operations and maintenance, depreciation, interest, and taxes – the net present societal cost to PG&E ratepayers of these distribution capacity projects is approximately $586 million. This $586 million cost to ratepayers adds over 1 GW of conventional distribution capacity but addresses only 256 MW of near-term capacity deficiencies on PG&E’s distribution system.
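The sketch below illustrates how a capital investment translates into a larger present cost to ratepayers using a simplified revenue-requirement model (return on the undepreciated rate base, straight-line depreciation, and an O&M adder). The rates and the 25-year term are placeholder assumptions; with them, the result lands in the same ballpark as the report’s $586 million, though the report’s own model is more detailed.

```python
# Simplified revenue-requirement sketch: NPV of annual ratepayer costs.

def ratepayer_npv(capex: float, life: int = 25, return_rate: float = 0.10,
                  discount: float = 0.06, om_frac: float = 0.02) -> float:
    book, dep, npv = capex, capex / life, 0.0
    for year in range(1, life + 1):
        rev_req = book * return_rate + dep + capex * om_frac  # return + depreciation + O&M
        npv += rev_req / (1 + discount) ** year
        book -= dep
    return npv

print(f"~${ratepayer_npv(353):,.0f}M NPV to ratepayers for $353M of capex")
```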

Based on this societal cost, we consider the net benefits of an alternative, DER-centric solution that relies on solar with smart inverters, energy storage, and controllable thermostats. Due to a lack of sufficient detail in PG&E’s General Rate Case regarding the operational profiles of the distribution capacity projects in question, a simplifying assumption of a 75% distribution load carrying capacity ratio is used for the DER portfolio, based on the CPUC Public Tool’s default peak capacity allocation factors (PCAF) for PG&E’s distribution planning areas. This load carrying capacity ratio reflects capabilities based on customer adoptions with a storage sizing ratio of 2 kWh of energy storage for every 1 kW of PV capacity – approximately 10 kWh of energy storage for a customer with 5 kW of solar installed – as well as a controllable thermostat.

In order to accurately compare the DER solution, the full lifetime cost of the DER solution is considered, which includes the costs of additional DERs that would be needed to accommodate load growth over the lifetime of the conventional solution – assumed to be 25 years. This DER solution deployment schedule, which continuously addresses incremental capacity needs on the grid, contrasts with the traditional, bulky solution deployment schedule, which requires a large upfront investment for capacity to address a small, incremental near-term need. While a DER solution delivers sufficient capacity in each year to provide comparable levels of grid services, deployments occur steadily over time rather than in one upfront investment. 
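The contrast in cash-flow shapes can be made concrete with a small net-present-value comparison: one upfront outlay versus the same nominal capacity cost spread over the asset’s life. The discount rate and the even spread are illustrative assumptions only; this is not the report’s deferral calculation.

```python
# Sketch: NPV of an upfront investment vs. incremental DER deployments.

def npv(cashflows, rate=0.06):
    # Year-0 cash flow is undiscounted; later years are discounted.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

years = 25
upfront = [586] + [0] * (years - 1)   # traditional: all capital up front
incremental = [586 / years] * years   # DERs: deployed as needs grow

print(f"traditional NPV:  ${npv(upfront):,.0f}M")
print(f"incremental NPV:  ${npv(incremental):,.0f}M")  # lower, due to deferral
```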

This approach highlights one of the key potential benefits of utilizing a DER solution over a traditional, bulky grid asset: DERs can be flexibly deployed in small bundles over time, a benefit that is further explored in Section IV on the benefits of transitioning to more integrated distribution planning. 

Using these assumptions, the previous statewide methodology is applied to DERs avoiding PG&E’s planned distribution capacity projects, with two conservative assumptions. First, the scope of benefits is limited to a subset of avoided cost categories that utility planners would directly consider for these types of projects today. Whereas the conventional equipment used to meet distribution capacity needs is generally a unidimensional resource providing a single source of value – distribution capacity – DERs provide multiple sources of value. Second, we base our calculations on PG&E’s lower avoided cost values, rather than our own, to demonstrate that there are net benefits even under a conservative scenario.

In addition to avoiding the $586 million ratepayer cost of the planned distribution capacity projects, the DERs deployed to avoid PG&E’s distribution capacity projects also avoid $946 million in energy purchases and $79 million and $99 million in generation capacity and renewable energy credit purchases, respectively, totaling $1,709 million in benefits. On the cost side, the program, integration, and equipment costs for the associated DERs total $1,605 million, resulting in a net present value to PG&E ratepayers of $104 million. This net benefit is particularly notable given the limited scope of benefits considered in this case study and the reliance on PG&E’s lower avoided cost values.

 

In this section, the data available to third parties on distribution capacity projects from the most recent California Phase I General Rate Case (PG&E’s 2017 GRC filing) was used to explore the potential benefits of leveraging DERs to avoid conventional distribution capacity-related investments. Calculations were performed based on PG&E’s own avoided cost assumptions from its NEM Successor Tariff and General Rate Case filings. The results indicate that deploying DER solutions in lieu of PG&E’s planned distribution capacity expansion projects in its 2017 GRC could yield net benefits, even looking only at the energy, capacity, and renewable energy compliance values of the DER solutions. While not preferred, simplified assumptions were used to fill in missing information and data (e.g., distribution peak capacity allocation factors and forecasted load growth) where necessary. That such simplifying assumptions are needed highlights the need for additional data sharing on specific infrastructure projects in order to assess the potential of DERs to offset these investments.

Source: SolarCity Grid Engineering
