class foxes.input.states.WeibullPointCloud(foxes.input.states.PointCloudData)[source]

Weibull sectors at point cloud support points, e.g., at turbine locations.

Attributes

wd_coord: str

The wind direction coordinate name

ws_coord: str

The wind speed coordinate name, if wind speed bin centres are in data, else None

ws_bins: numpy.ndarray

The wind speed bins, including lower and upper bounds, shape: (n_ws_bins+1,)
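As a minimal sketch of the expected `ws_bins` layout (the bin width and range here are arbitrary assumptions, not defaults of the class): the array holds the bin edges including both bounds, so `n_ws_bins` bins require `n_ws_bins + 1` values, and bin centres follow as midpoints of consecutive edges.

```python
import numpy as np

# hypothetical wind speed bin edges from 0 to 25 m/s in 1 m/s steps;
# ws_bins includes lower and upper bounds, shape: (n_ws_bins + 1,)
ws_bins = np.arange(0.0, 26.0, 1.0)

# the corresponding bin centres, shape: (n_ws_bins,)
ws_centres = 0.5 * (ws_bins[:-1] + ws_bins[1:])

print(ws_bins.shape, ws_centres.shape)  # (26,) (25,)
```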

Examples

Example of a NetCDF input file with point cloud data:

>>>    Dimensions:               (wind_turbine: 8, wind_direction: 2, wind_speed: 2)
>>>    Coordinates:
>>>    * wind_turbine          (wind_turbine) int64 64B 0 1 2 3 4 5 6 7
>>>    * wind_direction        (wind_direction) int64 16B 0 30
>>>    * wind_speed            (wind_speed) int64 16B 8 10
>>>    Data variables:
>>>        sector_probability    (wind_turbine, wind_direction) float64 128B ...
>>>        weibull_a             (wind_turbine, wind_direction) float64 128B ...
>>>        weibull_k             (wind_turbine, wind_direction) float64 128B ...
>>>        turbulence_intensity  (wind_turbine, wind_direction, wind_speed) float64 256B ...
>>>        x                     (wind_turbine) float64 64B ...
>>>        y                     (wind_turbine) float64 64B ...
>>>        height                (wind_turbine) float64 64B ...
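Per-sector Weibull parameters such as `weibull_a` and `weibull_k` above translate into wind speed bin probabilities via the Weibull cumulative distribution function. The following NumPy sketch illustrates that translation; it is not the internal implementation of this class, and all array values are made up:

```python
import numpy as np

# hypothetical per-sector Weibull parameters for 2 wind direction sectors
weibull_a = np.array([8.5, 9.2])            # scale parameter A [m/s]
weibull_k = np.array([2.0, 2.2])            # shape parameter k [-]
sector_probability = np.array([0.6, 0.4])   # sector weights, summing to 1

# wind speed bin edges, shape: (n_ws_bins + 1,)
ws_bins = np.array([7.0, 9.0, 11.0])

def weibull_cdf(ws, A, k):
    """Weibull cumulative distribution function."""
    return 1.0 - np.exp(-((ws / A) ** k))

# probability mass per (sector, wind speed bin): the CDF difference
# across the bin edges, weighted by the sector probability
cdf = weibull_cdf(ws_bins[None, :], weibull_a[:, None], weibull_k[:, None])
weights = sector_probability[:, None] * (cdf[:, 1:] - cdf[:, :-1])

print(weights.shape)  # (2, 2): (n_wd_sectors, n_ws_bins)
```

Summing such weights over all sectors and bins (with edges covering the full wind speed range) recovers a total probability of 1, which is what makes them usable as statistical state weights.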

Public members

WeibullPointCloud(*args, wd_coord, ws_coord=None, ws_bins=None, ...)[source]

Constructor.

__repr__()[source]

Return repr(self).

load_data(algo, verbosity=0)[source]

Load and/or create all model data that is subject to chunking.

interpolate_data(idims, icrds, d, pts, vrs, times)[source]

Interpolates data to points.

property data_source

The data source.

preproc_first(algo, data, cmap, vars, bounds_extra_space, ...)[source]

Preprocesses the first file.

gen_states_split_size()[source]

Generator for suggested states split sizes for output writing.

set_running(algo, data_stash, sel=None, isel=None, verbosity=0)[source]

Sets this model's status to running and moves all large data to the stash.

unset_running(algo, data_stash, sel=None, isel=None, verbosity=0)[source]

Sets this model's status to not running, recovering large data from the stash.

output_point_vars(algo)[source]

The variables that are modified by the model.

size()[source]

The total number of states.

index()[source]

The index list.

get_calc_data(mdata, cmap, variables)[source]

Gathers data for calculations.

calculate(algo, mdata, fdata, tdata)[source]

The main model calculation.

reset(algo=None, states_sel=None, states_loc=None, verbosity=0)[source]

Resets the states, optionally selecting a subset of states.

classmethod new(states_type, *args, **kwargs)[source]

Run-time states factory.

output_coords()[source]

Gets the coordinates of all output arrays.

ensure_output_vars(algo, tdata)[source]

Ensures that the output variables are present in the target data.

run_calculation(algo, *data, out_vars, **calc_pars)[source]

Starts the model calculation in parallel.

property model_id

Unique id based on the model type.

var(v)[source]

Creates a model specific variable name.

unvar(vnm)[source]

Translates a model specific variable name to the original variable name.

property initialized

Initialization flag.

sub_models()[source]

List of all sub-models.

initialize(algo, verbosity=0, force=False)[source]

Initializes the model.

property running

Flag for currently running models.

finalize(algo, verbosity=0)[source]

Finalizes the model.

get_data(variable, target, lookup='smfp', mdata=None, ...)[source]

Getter for a data entry in the model object or in provided data sources.