Commit 10e5d40

Merge branch 'improve_load_aug' into docs
2 parents: 6f1b7d1 + 31b8a8f

File tree

15 files changed: 829 additions & 91 deletions

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -15,6 +15,7 @@
 *.vcf
 *.xml
 *.pickle
+examples/**/temp/**
 examples/**/outputs/*.csv
 examples/**/outputs/*.txt
 examples/**/graphs/*.png

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+/*
+####################
+Add wind_to_solar_ratio column
+
+Date applied: 2021-07-07
+Description:
+Adds a column called wind_to_solar_ratio to the database which is used by
+switch_model.policies.wind_to_solar_ratio
+#################
+*/
+
+ALTER TABLE switch.scenario ADD COLUMN wind_to_solar_ratio real;

docs/Performance.md

Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@

# Performance

Memory use and solve time are two important factors that we try to keep to a minimum in our models. There are
several things you can do to improve performance.

## Solving methods

By far the biggest factor affecting performance is the solution method Gurobi uses. The fastest method is barrier
solve without crossover (use `--recommended-fast`); however, this method often returns a suboptimal solution. The
next fastest is barrier solve followed by crossover and simplex (use `--recommended`), which almost always works.
In some cases barrier solve encounters numerical issues (see [`Numerical Issues.md`](./Numerical%20Issues.md)), in
which case the slower simplex method must be used (`--solver-options-string method=1`).
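
These flags ultimately control Gurobi parameters such as `Method` and `Crossover`. The snippet below is a rough
illustration only (an assumed mapping, not the exact option strings that `--recommended` or `--recommended-fast`
pass) of how the same choices look when set directly through `gurobipy` on an exported model file; the `model.lp`
path is hypothetical:

```python
import gurobipy as gp

# Hypothetical LP file exported from a model run, for experimentation only.
model = gp.read("model.lp")

# Barrier without crossover (the idea behind --recommended-fast):
# fastest, but the returned solution may be suboptimal.
model.Params.Method = 2      # 2 = barrier
model.Params.Crossover = 0   # 0 = skip crossover

# Barrier followed by crossover/simplex (the idea behind --recommended):
# model.Params.Method = 2
# model.Params.Crossover = -1  # -1 = automatic, i.e. crossover enabled

# Simplex, the fallback when barrier has numerical trouble
# (what --solver-options-string method=1 requests):
# model.Params.Method = 1     # 1 = dual simplex

model.optimize()
```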

## Solver interface

Solver interfaces are how Pyomo communicates with Gurobi (or any other solver).

There are two solver interfaces you should know about: `gurobi` and `gurobi_direct`.

- When using `gurobi`, Pyomo writes the entire model to a temporary text file and then starts a *separate Gurobi
  process* that reads the file, solves the model, and writes the results to another temporary text file. Once Gurobi
  finishes writing the results, Pyomo reads the results file and loads them back into the Python program before
  running post_solve (e.g. generating csv files, creating graphs, etc.). Note that these temporary text files are
  stored in `/tmp`, but if you use `--recommended-debug`, Pyomo and Gurobi will instead use a `temp` folder within
  your model directory.

- `gurobi_direct` uses Gurobi's Python library to create and solve the model directly in Python without intermediate
  text files.

In theory `gurobi_direct` should be faster and more efficient; however, in practice we find that's not the case. As
such we recommend using `gurobi`, and all our defaults do so. If someone has the time, they could profile
`gurobi_direct` to improve its performance, at which point we could make `gurobi_direct` the default (and enable
`--save-warm-start` by default, see below).

The `gurobi` interface has the added advantage of keeping Gurobi and Pyomo in separate processes. This means that
while Gurobi is solving and Pyomo is idle, the operating system can automatically move Pyomo's memory
to [virtual memory](https://serverfault.com/questions/48486/what-is-swap-memory),
which frees up more memory for Gurobi.
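
To make the difference concrete, here is a minimal Pyomo sketch (a toy two-variable LP, not a Switch model;
`switch solve` selects the interface for you through its own options) showing how each interface is obtained
from `SolverFactory`:

```python
from pyomo.environ import (
    ConcreteModel, Var, Objective, Constraint, SolverFactory,
    NonNegativeReals, minimize,
)

# A toy LP, just to have something to hand to each interface.
m = ConcreteModel()
m.x = Var(domain=NonNegativeReals)
m.y = Var(domain=NonNegativeReals)
m.cost = Objective(expr=2 * m.x + 3 * m.y, sense=minimize)
m.demand = Constraint(expr=m.x + m.y >= 10)

# 'gurobi': Pyomo writes the model to a temporary file, launches a separate
# Gurobi process, then reads the solution file back in.
file_based = SolverFactory("gurobi")
file_based.solve(m, tee=True)

# 'gurobi_direct': the model is built and solved in-process through gurobipy,
# with no intermediate files.
in_process = SolverFactory("gurobi_direct")
in_process.solve(m, tee=True)
```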

## Warm starting

Warm starting is the act of using the solution from a previous, similar model to start the solver closer to your
expected solution. In theory this can improve performance, but in practice there are several limitations, listed
below. A conceptual sketch of warm starting at the Gurobi level follows the list. In this section, *previous
solution* refers to the results of an already-solved model that you are using to warm start the solver, and
*current solution* refers to the solution you are trying to find while using the warm start feature.

- To warm start a model, use `switch solve --warm-start <path_to_previous_solution>`.

- Warm starting only works if the previous solution does not break any constraints of the current model. This is
  usually only the case if a) both models have the exact same set of variables and b) the previous solution was
  "harder" (e.g. it had more constraints to satisfy).

- Warm starting always uses the slower simplex method. This means that unless you expect the previous and current
  solutions to be very similar, it may be faster to solve without warm starting using the barrier method.

- If your previous solution didn't use crossover (e.g. you used `--recommended-fast`), warm starting will be even
  slower, since the solver first needs to run crossover before it can warm start.

- Our implementation of warm starting only works if your previous solution has an `outputs/warm_start.pickle`
  file. This file is only generated when you use `--save-warm-start`.

- `--save-warm-start` and `--warm-start` both use an extension of the `gurobi_direct` solver interface, which is
  generally slower than the `gurobi` interface (see the section above).
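
For intuition, the sketch below shows what a warm start looks like at the Gurobi level: a basic (simplex or
barrier-plus-crossover) solution of the previous model is saved as a basis and used to start the current model.
This is a conceptual illustration only, not what `switch solve --warm-start` does internally (Switch goes through
its `gurobi_direct` extension and the `warm_start.pickle` file described above), and the `.lp` file names are
hypothetical:

```python
import gurobipy as gp

# Solve the "previous" model and save its basis. A basis only exists for a
# basic solution, i.e. after simplex or barrier + crossover, which is why a
# no-crossover (--recommended-fast) run cannot be reused directly.
previous = gp.read("previous_model.lp")   # hypothetical exported model
previous.optimize()
previous.write("previous.bas")            # write the optimal basis

# Warm start the "current" model from that basis. This only makes sense when
# both models share the same variables and the old solution stays feasible.
current = gp.read("current_model.lp")     # hypothetical exported model
current.read("previous.bas")              # load the starting basis
current.Params.Method = 1                 # warm starts run through simplex
current.optimize()
```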

## Tools for improving performance

- [Memory profiler](https://pypi.org/project/memory-profiler/) for generating plots of memory use over time.
  Use `mprof run --interval 60 --multiprocess switch solve ...` and, once solving is done,
  run `mprof plot -o profile.png` to make the plot.

- [Fil Profiler](https://pypi.org/project/filprofiler/) is an excellent tool for seeing which parts of the code
  are using memory at the point of peak memory usage.

- Use `switch_model.utilities.StepTimer` to measure how long certain code blocks take to run. See examples
  throughout the code, and the sketch below.
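
A minimal sketch of the `StepTimer` pattern follows. It assumes the `StepTimer` API used elsewhere in the codebase,
where `step_time()` returns the seconds elapsed since the timer was created or since the previous call, and resets
the clock:

```python
import time

from switch_model.utilities import StepTimer

timer = StepTimer()

# ... an expensive block, e.g. loading inputs (simulated here with a sleep) ...
time.sleep(1.0)
print(f"Loaded inputs in {timer.step_time():.1f} s")

# ... another expensive block, e.g. post_solve (also simulated) ...
time.sleep(0.5)
print(f"Ran post_solve in {timer.step_time():.1f} s")
```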

switch_model/balancing/load_zones.py

Lines changed: 44 additions & 2 deletions
@@ -244,10 +244,11 @@ def get_component_per_year(m, z, p, component):
     title="Energy balance duals per period",
     note="Note: Outliers and zero-valued duals are ignored."
 )
-def graph(tools):
+def graph_energy_balance(tools):
     load_balance = tools.get_dataframe('load_balance.csv')
     load_balance = tools.transform.timestamp(load_balance)
-    load_balance["energy_balance_duals"] = tools.pd.to_numeric(load_balance["normalized_energy_balance_duals_dollar_per_mwh"], errors="coerce") / 10
+    load_balance["energy_balance_duals"] = tools.pd.to_numeric(
+        load_balance["normalized_energy_balance_duals_dollar_per_mwh"], errors="coerce") / 10
     load_balance = load_balance[["energy_balance_duals", "time_row"]]
     load_balance = load_balance.pivot(columns="time_row", values="energy_balance_duals")
     # Don't include the zero-valued duals
@@ -259,3 +260,44 @@ def graph(tools):
         ylabel='Energy balance duals (cents/kWh)',
         showfliers=False
     )
+
+
+@graph(
+    "daily_demand",
+    title="Total daily demand",
+    supports_multi_scenario=True
+)
+def demand(tools):
+    df = tools.get_dataframe("loads.csv", from_inputs=True, drop_scenario_info=False)
+    df = df.groupby(["TIMEPOINT", "scenario_name"], as_index=False).sum()
+    df = tools.transform.timestamp(df, key_col="TIMEPOINT", use_timepoint=True)
+    df = df.groupby(["season", "hour", "scenario_name", "time_row"], as_index=False).mean()
+    df["zone_demand_mw"] /= 1e3
+    pn = tools.pn
+
+    plot = pn.ggplot(df) + \
+        pn.geom_line(pn.aes(x="hour", y="zone_demand_mw", color="scenario_name")) + \
+        pn.facet_grid("time_row ~ season") + \
+        pn.labs(x="Hour (PST)", y="Demand (GW)", color="Scenario")
+    tools.save_figure(plot.draw())
+
+
+@graph(
+    "demand",
+    title="Total demand",
+    supports_multi_scenario=True
+)
+def yearly_demand(tools):
+    df = tools.get_dataframe("loads.csv", from_inputs=True, drop_scenario_info=False)
+    df = df.groupby(["TIMEPOINT", "scenario_name"], as_index=False).sum()
+    df = tools.transform.timestamp(df, key_col="TIMEPOINT", use_timepoint=True)
+    df["zone_demand_mw"] *= df["tp_duration"] / 1e3
+    df["day"] = df["datetime"].dt.day_of_year
+    df = df.groupby(["day", "scenario_name", "time_row"], as_index=False)["zone_demand_mw"].sum()
+    pn = tools.pn
+
+    plot = pn.ggplot(df) + \
+        pn.geom_line(pn.aes(x="day", y="zone_demand_mw", color="scenario_name")) + \
+        pn.facet_grid("time_row ~ .") + \
+        pn.labs(x="Day of Year", y="Demand (GW)", color="Scenario")
+    tools.save_figure(plot.draw())

switch_model/generators/core/build.py

Lines changed: 2 additions & 1 deletion
@@ -666,14 +666,15 @@ def graph_capacity(tools):
         color=tools.get_colors(len(capacity_df.index)),
     )
 
+    tools.bar_label()
 
 @graph(
     "buildout_gen_per_period",
     title="Built Capacity per Period",
     supports_multi_scenario=True
 )
 def graph_buildout(tools):
-    build_gen = tools.get_dataframe("BuildGen.csv")
+    build_gen = tools.get_dataframe("BuildGen.csv", dtype={"GEN_BLD_YRS_1": str})
     build_gen = build_gen.rename(
         {"GEN_BLD_YRS_1": "GENERATION_PROJECT", "GEN_BLD_YRS_2": "build_year", "BuildGen": "Amount"},
         axis=1

switch_model/generators/core/dispatch.py

Lines changed: 128 additions & 1 deletion
@@ -609,7 +609,6 @@ def graph_hourly_curtailment(tools):
 @graph(
     "total_dispatch",
     title="Total dispatched electricity",
-    is_long=True,
 )
 def graph_total_dispatch(tools):
     # ---------------------------------- #
@@ -649,6 +648,134 @@ def graph_total_dispatch(tools):
         ylabel="Total dispatched electricity (TWh)"
     )
 
+    tools.bar_label()
+
+@graph(
+    "energy_balance",
+    title="Energy Balance For Every Month",
+    supports_multi_scenario=True,
+    is_long=True
+)
+def energy_balance(tools):
+    # Get dispatch dataframe
+    cols = ["timestamp", "gen_tech", "gen_energy_source", "DispatchGen_MW", "scenario_name", "scenario_index",
+            "Curtailment_MW"]
+    df = tools.get_dataframe("dispatch.csv", drop_scenario_info=False)[cols]
+    df = tools.transform.gen_type(df)
+
+    # Rename and add needed columns
+    df["Dispatch Limit"] = df["DispatchGen_MW"] + df["Curtailment_MW"]
+    df = df.drop("Curtailment_MW", axis=1)
+    df = df.rename({"DispatchGen_MW": "Dispatch"}, axis=1)
+    # Sum dispatch across all the projects of the same type and timepoint
+    key_columns = ["timestamp", "gen_type", "scenario_name", "scenario_index"]
+    df = df.groupby(key_columns, as_index=False).sum()
+    df = df.melt(id_vars=key_columns, value_vars=["Dispatch", "Dispatch Limit"], var_name="Type")
+    df = df.rename({"gen_type": "Source"}, axis=1)
+
+    discharge = df[(df["Source"] == "Storage") & (df["Type"] == "Dispatch")].drop(["Source", "Type"], axis=1).rename(
+        {"value": "discharge"}, axis=1)
+
+    # Get load dataframe
+    load = tools.get_dataframe("load_balance.csv", drop_scenario_info=False)
+    load = load.drop("normalized_energy_balance_duals_dollar_per_mwh", axis=1)
+
+    # Sum load across all the load zones
+    key_columns = ["timestamp", "scenario_name", "scenario_index"]
+    load = load.groupby(key_columns, as_index=False).sum()
+
+    # Subtract storage dispatch from generation and add it to the storage charge to get net flow
+    load = load.merge(
+        discharge,
+        how="left",
+        on=key_columns,
+        validate="one_to_one"
+    )
+    load["ZoneTotalCentralDispatch"] -= load["discharge"]
+    load["StorageNetCharge"] += load["discharge"]
+    load = load.drop("discharge", axis=1)
+
+    # Rename and convert from wide to long format
+    load = load.rename({
+        "ZoneTotalCentralDispatch": "Total Generation (excl. storage discharge)",
+        "TXPowerNet": "Transmission Losses",
+        "StorageNetCharge": "Storage Net Flow",
+        "zone_demand_mw": "Demand",
+    }, axis=1).sort_index(axis=1)
+    load = load.melt(id_vars=key_columns, var_name="Source")
+    load["Type"] = "Dispatch"
+
+    # Merge dispatch contributions with load contributions
+    df = pd.concat([load, df])
+
+    # Add the timestamp information and make period string to ensure it doesn't mess up the graphing
+    df = tools.transform.timestamp(df).astype({"period": str})
+
+    # Convert to TWh (incl. multiply by timepoint duration)
+    df["value"] *= df["tp_duration"] / 1e6
+
+    FREQUENCY = "1W"
+
+    def groupby_time(df):
+        return df.groupby([
+            "scenario_name",
+            "period",
+            "Source",
+            "Type",
+            tools.pd.Grouper(key="datetime", freq=FREQUENCY, origin="start")
+        ])["value"]
+
+    df = groupby_time(df).sum().reset_index()
+
+    # Get the state of charge data
+    soc = tools.get_dataframe("StateOfCharge.csv", dtype={"STORAGE_GEN_TPS_1": str}, drop_scenario_info=False)
+    soc = soc.rename({"STORAGE_GEN_TPS_2": "timepoint", "StateOfCharge": "value"}, axis=1)
+    # Sum over all the projects that are in the same scenario with the same timepoint
+    soc = soc.groupby(["timepoint", "scenario_name"], as_index=False).sum()
+    soc["Source"] = "State Of Charge"
+    soc["value"] /= 1e6  # Convert to TWh
+
+    # Group by time
+    soc = tools.transform.timestamp(soc, use_timepoint=True, key_col="timepoint").astype({"period": str})
+    soc["Type"] = "Dispatch"
+    soc = groupby_time(soc).mean().reset_index()
+
+    # Add state of charge to dataframe
+    df = pd.concat([df, soc])
+    # Add column for day since that's what we really care about
+    df["day"] = df["datetime"].dt.dayofyear
+
+    # Plot
+    # Get the colors for the lines
+    colors = tools.get_colors()
+    colors.update({
+        "Transmission Losses": "brown",
+        "Storage Net Flow": "cadetblue",
+        "Demand": "black",
+        "Total Generation (excl. storage discharge)": "black",
+        "State Of Charge": "green"
+    })
+
+    # plot
+    num_periods = df["period"].nunique()
+    pn = tools.pn
+    plot = pn.ggplot(df) + \
+        pn.geom_line(pn.aes(x="day", y="value", color="Source", linetype="Type")) + \
+        pn.facet_grid("period ~ scenario_name") + \
+        pn.labs(y="Contribution to Energy Balance (TWh)") + \
+        pn.scales.scale_color_manual(values=colors, aesthetics="color", na_value=colors["Other"]) + \
+        pn.scales.scale_x_continuous(
+            name="Month",
+            labels=["J", "F", "M", "A", "M", "J", "J", "A", "S", "O", "N", "D"],
+            breaks=(15, 46, 76, 106, 137, 167, 198, 228, 259, 289, 319, 350),
+            limits=(0, 366)) + \
+        pn.scales.scale_linetype_manual(
+            values={"Dispatch Limit": "dotted", "Dispatch": "solid"}
+        ) + \
+        pn.theme(
+            figure_size=(pn.options.figure_size[0] * tools.num_scenarios, pn.options.figure_size[1] * num_periods))
+
+    tools.save_figure(plot.draw())
 
 @graph(
     "curtailment_per_period",
