Timeseries data
In this example we calculate the results of a wind farm with 67 turbines for a time series containing 8000 uniform inflow states.
The required imports are:
In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import foxes
import foxes.variables as FV
import foxes.constants as FC
from foxes.utils.runners import DaskRunner
First, we create the model book, making sure it contains the desired turbine type model:
In [2]:
# we are only using models that are provided by default, hence
# no addition to the model book is required.
mbook = foxes.ModelBook()

# if you wish to add a model based on a specific file, do as follows:
# mbook.turbine_types["NREL5"] = foxes.models.turbine_types.PCtFile(
#     "NREL-5MW-D126-H90.csv"
# )
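If you are unsure which turbine type names the default model book provides, it can be inspected. A minimal sketch, assuming the ModelBook.print_toc helper (the subset argument is illustrative and may differ between foxes versions):
In [ ]:
# Illustrative: list the turbine types known to the default model book.
# print_toc and its subset argument are assumptions; check the ModelBook
# API of your foxes version.
mbook.print_toc(subset="turbine_types")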
Next, we create the states. The data_source can be any csv-type file (or pandas-readable equivalent), or a pandas.DataFrame object. If it is a file path, it is first searched in the file system and, if not found, in the static data. If it is not found there either, an error showing the available static data file names is displayed.
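Instead of a file, a pandas.DataFrame can be passed directly as data_source. Below is a minimal sketch with made-up values; only the column names and the datetime index mirror the csv file used in this example:
In [ ]:
import pandas as pd

# Illustrative only: a tiny hand-made time series with the same columns
# as the csv file of this example (the values are made up).
df = pd.DataFrame(
    {
        "ws": [8.0, 8.5, 9.1],
        "wd": [260.0, 262.0, 265.0],
        "ti": [0.06, 0.06, 0.05],
    },
    index=pd.date_range("2017-01-01 00:00", periods=3, freq="30min"),
)
df.index.name = "Time"  # mirrors the index column of the csv file

states_demo = foxes.input.states.Timeseries(
    data_source=df,
    output_vars=[FV.WS, FV.WD, FV.TI, FV.RHO],
    var2col={FV.WS: "ws", FV.WD: "wd", FV.TI: "ti"},
    fixed_vars={FV.RHO: 1.225},
)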
In this example the static data file timeseries_8000.csv.gz will be used, with content:
Time,ws,wd,ti
2017-01-01 00:00:00,15.62,244.06,0.0504
2017-01-01 00:30:00,15.99,243.03,0.0514
2017-01-01 01:00:00,16.31,243.01,0.0522
2017-01-01 01:30:00,16.33,241.26,0.0523
...
Notice the column names, and how they appear in the Timeseries constructor:
In [3]:
states = foxes.input.states.Timeseries(
    data_source="timeseries_8000.csv.gz",
    output_vars=[FV.WS, FV.WD, FV.TI, FV.RHO],
    var2col={FV.WS: "ws", FV.WD: "wd", FV.TI: "ti"},
    fixed_vars={FV.RHO: 1.225},
)
We can visualize the wind distribution via the StatesRosePlotOutput. Here we display the ambient wind speed in a wind rose with 16 wind direction sectors and 5 wind speed bins:
In [4]:
o = foxes.output.StatesRosePlotOutput(states, point=[0., 0., 100.])
fig = o.get_figure(16, FV.AMB_WS, [0, 3.5, 6, 10, 15, 20])
fig.show("svg")
Since the time series contains spatially uniform inflow data, any choice of the point argument produces the same figure.
Next, we create the example wind farm with 67 turbines from static data. The file test_farm_67.csv has the following structure:
index,label,x,y
0,T0,101872.70,1004753.57
1,T1,103659.97,1002993.29
2,T2,100780.09,1000779.97
3,T3,100290.42,1004330.88
...
For more options, check the API section foxes.input.farm_layout.
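As an illustration of one such option, layouts can also be generated programmatically instead of being read from a file. The sketch below uses the add_row helper; the keyword names are assumptions and should be checked against the API of your foxes version:
In [ ]:
# Illustrative alternative (not used in this example): a row of 5 turbines,
# spaced 800 m apart. The keyword names below are assumptions; see
# foxes.input.farm_layout in the API for the exact signature.
demo_farm = foxes.WindFarm()
foxes.input.farm_layout.add_row(
    demo_farm,
    xy_base=[0.0, 0.0],     # position of the first turbine
    xy_step=[800.0, 0.0],   # spacing vector to the next turbine
    n_turbines=5,
    turbine_models=["kTI_02", "NREL5MW"],
    verbosity=0,
)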
We consider two turbine models in this example: the wind turbine type NREL5MW and the turbine model kTI_02, both from the default model book. The latter adds the variable k for each state and turbine, calculated as k = kTI * TI with constant kTI = 0.2 (for example, a state with TI = 0.05 gets k = 0.01). The parameter k will later be used by the wake model.
In [5]:
farm = foxes.WindFarm()
foxes.input.farm_layout.add_from_file(
    farm, "test_farm_67.csv", turbine_models=["kTI_02", "NREL5MW"], verbosity=0
)
Next, we create the algorithm, with further model selections. In particular, two wake models are invoked: the model Bastankhah_quadratic for wind speed deficits and the model CrespoHernandez_max for turbulence intensity:
In [6]:
algo = foxes.algorithms.Downwind(
    mbook,
    farm,
    states=states,
    rotor_model="centre",
    wake_models=["Bastankhah_quadratic", "CrespoHernandez_max"],
    wake_frame="rotor_wd",
    partial_wakes_model="auto",
    chunks={FC.STATE: 1000},
    verbosity=0,
)
Also notice the chunks parameter, which specifies that 1000 states at a time are processed in vectorized form during the calculation. The progress is automatically visualized when invoking the DaskRunner:
In [7]:
with DaskRunner() as runner:
    farm_results = runner.run(algo.calc_farm)

fr = farm_results.to_dataframe()
print("\n", fr[[FV.WD, FV.AMB_REWS, FV.REWS, FV.AMB_P, FV.P]])
[########################################] | 100% Completed | 102.00 ms
[########################################] | 100% Completed | 4.26 s
WD AMB_REWS REWS AMB_P P
state turbine
2017-01-01 00:00:00 0 244.06 15.62 15.598951 5000.00 5000.000000
1 244.06 15.62 14.307949 5000.00 5000.000000
2 244.06 15.62 15.067607 5000.00 5000.000000
3 244.06 15.62 15.522240 5000.00 5000.000000
4 244.06 15.62 14.728003 5000.00 5000.000000
... ... ... ... ... ...
2017-06-16 15:30:00 62 299.19 11.70 9.208883 4868.75 2712.819583
63 299.19 11.70 11.435150 4868.75 4752.878044
64 299.19 11.70 11.700000 4868.75 4868.750000
65 299.19 11.70 11.607321 4868.75 4828.202797
66 299.19 11.70 9.769528 4868.75 3234.107125
[536000 rows x 5 columns]
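Besides the DataFrame view, farm_results itself is an xarray Dataset with dimensions state and turbine, so individual signals can be extracted directly. A minimal sketch using standard xarray indexing:
In [ ]:
# Illustrative: the power time series of the first turbine, selected
# positionally along the "turbine" dimension of the results Dataset.
p_t0 = farm_results[FV.P].isel(turbine=0).to_series()
print(p_t0.head())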
Let’s evaluate the results:
In [8]:
# add capacity and efficiency to farm_results:
o = foxes.output.FarmResultsEval(farm_results)
o.add_capacity(algo)
o.add_capacity(algo, ambient=True)
o.add_efficiency()
# print results by turbine:
turbine_results = o.reduce_states(
    {
        FV.AMB_P: "mean",
        FV.P: "mean",
        FV.AMB_CAP: "mean",
        FV.CAP: "mean",
        FV.EFF: "mean",
    }
)
turbine_results[FV.AMB_YLD] = o.calc_turbine_yield(algo=algo, annual=True, ambient=True)
turbine_results[FV.YLD] = o.calc_turbine_yield(algo=algo, annual=True)
print("\nResults by turbine:\n")
print(turbine_results)
# print power results:
P0 = o.calc_mean_farm_power(ambient=True)
P = o.calc_mean_farm_power()
print(f"\nFarm power : {P/1000:.1f} MW")
print(f"Farm ambient power: {P0/1000:.1f} MW")
print(f"Farm efficiency : {o.calc_farm_efficiency():.2f}")
print(f"Annual farm yield : {turbine_results[FV.YLD].sum():.2f} GWh")
Capacity added to farm results
Ambient capacity added to farm results
Efficiency added to farm results
Results by turbine:
AMB_P P AMB_CAP CAP EFF AMB_YLD \
turbine
0 3067.723397 2778.987770 0.613545 0.555798 0.825445 26.873257
1 3067.723397 2531.065918 0.613545 0.506213 0.713413 26.873257
2 3067.723397 2702.820210 0.613545 0.540564 0.780747 26.873257
3 3067.723397 2739.058250 0.613545 0.547812 0.805992 26.873257
4 3067.723397 2593.172270 0.613545 0.518634 0.737070 26.873257
... ... ... ... ... ... ...
62 3067.723397 2625.796799 0.613545 0.525159 0.748704 26.873257
63 3067.723397 2591.950988 0.613545 0.518390 0.731988 26.873257
64 3067.723397 2864.102273 0.613545 0.572820 0.860799 26.873257
65 3067.723397 2571.260829 0.613545 0.514252 0.726062 26.873257
66 3067.723397 2633.819847 0.613545 0.526764 0.755245 26.873257
YLD
turbine
0 24.343933
1 22.172137
2 23.676705
3 23.994150
4 22.716189
... ...
62 23.001980
63 22.705491
64 25.089536
65 22.524245
66 23.072262
[67 rows x 7 columns]
Farm power        : 177.9 MW
Farm ambient power: 205.5 MW
Farm efficiency   : 0.87
Annual farm yield : 1558.24 GWh
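As a quick sanity check, the printed farm efficiency is simply the ratio of mean farm power to mean ambient farm power, which can be reproduced from the two numbers computed above:
In [ ]:
# Quick cross-check (illustrative): efficiency = mean farm power divided
# by mean ambient farm power, here 177.9 / 205.5 ≈ 0.87.
print(f"P/P0 = {P / P0:.2f}")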
We can visualize the mean rotor equivalent wind speed seen by each turbine, as well as the mean efficiency over the time series, as colored layout plots:
In [9]:
fig, axs = plt.subplots(1,2,figsize=(14,5))
o = foxes.output.FarmLayoutOutput(farm, farm_results)
o.get_figure(fig=fig, ax=axs[0], color_by="mean_REWS", title="Mean REWS [m/s]", s=150, annotate=0)
o.get_figure(fig=fig, ax=axs[1], color_by="mean_EFF", title="Mean efficiency [%]", s=150, annotate=0)
plt.show()
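The quantity colored in the left panel can also be reproduced directly from the results. A minimal sketch that averages the rotor equivalent wind speed over all states, using the standard xarray reduction interface:
In [ ]:
# Illustrative cross-check of the "Mean REWS" panel: average the rotor
# equivalent wind speed over the "state" dimension, one value per turbine.
mean_rews = farm_results[FV.REWS].mean(dim=FC.STATE).to_numpy()
print(mean_rews[:5])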

For the fun of it, we can also run this example in parallel, on a local cluster. Depending on the system and the problem size, this is not necessarily faster than the default dask scheduler implicitly used above, and it comes with overhead. But for complex calculations it is extremely useful and can really save the day. Read the docs for more details and parameters. The following invokes the default settings for the local cluster:
In [10]:
with DaskRunner(scheduler="distributed") as runner:
    farm_results = runner.run(algo.calc_farm)

fr = farm_results.to_dataframe()
print("\n", fr[[FV.WD, FV.AMB_REWS, FV.REWS, FV.AMB_P, FV.P]])

o = foxes.output.FarmResultsEval(farm_results)
P0 = o.calc_mean_farm_power(ambient=True)
P = o.calc_mean_farm_power()
print(f"\nMean farm power: {P/1000:.1f} MW, Efficiency = {P/P0*100:.2f} %")
Launching local dask cluster..
LocalCluster(849c5264, 'tcp://127.0.0.1:39549', workers=16, threads=128, memory=251.53 GiB)
Dashboard: http://127.0.0.1:8787/status
Shutting down dask cluster
WD AMB_REWS REWS AMB_P P
state turbine
2017-01-01 00:00:00 0 244.06 15.62 15.598951 5000.00 5000.000000
1 244.06 15.62 14.307949 5000.00 5000.000000
2 244.06 15.62 15.067607 5000.00 5000.000000
3 244.06 15.62 15.522240 5000.00 5000.000000
4 244.06 15.62 14.728003 5000.00 5000.000000
... ... ... ... ... ...
2017-06-16 15:30:00 62 299.19 11.70 9.208883 4868.75 2712.819583
63 299.19 11.70 11.435150 4868.75 4752.878044
64 299.19 11.70 11.700000 4868.75 4868.750000
65 299.19 11.70 11.607321 4868.75 4828.202797
66 299.19 11.70 9.769528 4868.75 3234.107125
[536000 rows x 5 columns]
Mean farm power: 177.9 MW, Efficiency = 86.54 %
Notice the Dashboard link, which is only valid during runtime and in this case is a localhost address. The dashboard provides plenty of information about the progress during the run and is a very useful tool provided by dask.
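The dashboard comes from dask's distributed scheduler. For orientation, here is a minimal sketch using plain dask.distributed, independent of foxes, that starts a local cluster and prints its dashboard address:
In [ ]:
from dask.distributed import Client, LocalCluster

# Illustrative, plain dask usage (not foxes-specific): start a local
# cluster, print the dashboard address, and shut everything down again.
cluster = LocalCluster(n_workers=4, threads_per_worker=2)
client = Client(cluster)
print("Dashboard:", client.dashboard_link)

# ... computations monitored via the dashboard would run here ...

client.close()
cluster.close()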