PytorchForecastingDatasetBuilder#
- class PytorchForecastingDatasetBuilder(max_encoder_length: int = 30, min_encoder_length: int | None = None, min_prediction_idx: int | None = None, min_prediction_length: int | None = None, max_prediction_length: int = 1, static_categoricals: List[str] | None = None, static_reals: List[str] | None = None, time_varying_known_categoricals: List[str] | None = None, time_varying_known_reals: List[str] | None = None, time_varying_unknown_categoricals: List[str] | None = None, time_varying_unknown_reals: List[str] | None = None, variable_groups: Dict[str, List[int]] | None = None, constant_fill_strategy: Dict[str, str | float | int | bool] | None = None, allow_missing_timesteps: bool = True, lags: Dict[str, List[int]] | None = None, add_relative_time_idx: bool = True, add_target_scales: bool = True, add_encoder_length: bool | str = True, target_normalizer: TorchNormalizer | NaNLabelEncoder | EncoderNormalizer | str | List[TorchNormalizer | NaNLabelEncoder | EncoderNormalizer] | Tuple[TorchNormalizer | NaNLabelEncoder | EncoderNormalizer] = 'auto', categorical_encoders: Dict[str, NaNLabelEncoder] | None = None, scalers: Dict[str, StandardScaler | RobustScaler | TorchNormalizer | EncoderNormalizer] | None = None)[source]#
Bases:
BaseMixin
Builder for PytorchForecasting dataset.
Init dataset builder.
These parameters are used to initialize a pytorch_forecasting.data.timeseries.TimeSeriesDataSet object.
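For illustration, a minimal construction sketch is shown below; the import path and the column names are assumptions made for this example, not part of this reference.

from etna.models.nn.utils import PytorchForecastingDatasetBuilder  # import path assumed

# Arguments mirror pytorch_forecasting.data.timeseries.TimeSeriesDataSet;
# "time_idx" and "target" column names are illustrative here.
dataset_builder = PytorchForecastingDatasetBuilder(
    max_encoder_length=60,                  # length of the history window
    max_prediction_length=7,                # length of the decoder / forecast horizon
    time_varying_known_reals=["time_idx"],  # features known in the future
    time_varying_unknown_reals=["target"],  # features known only in the past
)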
Methods
create_inference_dataset(ts, horizon) – Create inference dataset.
create_train_dataset(ts) – Create train dataset.
set_params(**params) – Return new object instance with modified parameters.
to_dict() – Collect all information about etna object in dict.
Attributes
This class stores its __init__ parameters as attributes.
- Parameters:
max_encoder_length (int) –
min_encoder_length (int | None) –
min_prediction_idx (int | None) –
min_prediction_length (int | None) –
max_prediction_length (int) –
constant_fill_strategy (Dict[str, str | float | int | bool] | None) –
allow_missing_timesteps (bool) –
add_relative_time_idx (bool) –
add_target_scales (bool) –
target_normalizer (TorchNormalizer | NaNLabelEncoder | EncoderNormalizer | str | List[TorchNormalizer | NaNLabelEncoder | EncoderNormalizer] | Tuple[TorchNormalizer | NaNLabelEncoder | EncoderNormalizer]) –
categorical_encoders (Dict[str, NaNLabelEncoder] | None) –
scalers (Dict[str, StandardScaler | RobustScaler | TorchNormalizer | EncoderNormalizer] | None) –
- create_inference_dataset(ts: TSDataset, horizon: int) TimeSeriesDataSet [source]#
Create inference dataset.
This method should be used only after create_train_dataset, which is called during model training.
- Parameters:
ts (TSDataset) – Time series dataset.
horizon (int) – Forecast horizon.
- Raises:
ValueError – if the method is used before create_train_dataset
- Return type:
TimeSeriesDataSet
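A short sketch of the required call order, assuming dataset_builder is a configured builder and ts is an existing TSDataset:

# create_train_dataset must be called first: the parameters prepared there
# are reused when building the inference dataset.
train_dataset = dataset_builder.create_train_dataset(ts)

# Calling this before create_train_dataset raises ValueError.
inference_dataset = dataset_builder.create_inference_dataset(ts, horizon=7)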
- create_train_dataset(ts: TSDataset) TimeSeriesDataSet [source]#
Create train dataset.
- Parameters:
ts (TSDataset) – Time series dataset.
- Return type:
TimeSeriesDataSet
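The returned object is a regular pytorch_forecasting.data.timeseries.TimeSeriesDataSet, so it can, for example, be wrapped into a dataloader with its own to_dataloader method (the batch size below is arbitrary):

train_dataset = dataset_builder.create_train_dataset(ts)
train_dataloader = train_dataset.to_dataloader(train=True, batch_size=64)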
- set_params(**params: dict) Self [source]#
Return new object instance with modified parameters.
The method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a model in a Pipeline.
Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.
- Parameters:
**params (dict) – Estimator parameters
- Returns:
New instance with changed parameters
- Return type:
Self
Examples
>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )