DeepStateModel#

class DeepStateModel(ssm: CompositeSSM, input_size: int, encoder_length: int, decoder_length: int, num_layers: int = 1, n_samples: int = 5, lr: float = 0.001, train_batch_size: int = 16, test_batch_size: int = 16, optimizer_params: dict | None = None, trainer_params: dict | None = None, train_dataloader_params: dict | None = None, test_dataloader_params: dict | None = None, val_dataloader_params: dict | None = None, split_params: dict | None = None)[source]#

Bases: DeepBaseModel

DeepState model.

Init Deep State Model.

Parameters:
  • ssm (CompositeSSM) – State Space Model of the system

  • input_size (int) – size of the input feature space: the number of features fed to the RNN part.

  • encoder_length (int) – encoder length

  • decoder_length (int) – decoder length

  • num_layers (int) – number of layers in RNN

  • n_samples (int) – number of samples to use in prediction generation

  • lr (float) – learning rate

  • train_batch_size (int) – batch size for training

  • test_batch_size (int) – batch size for testing

  • optimizer_params (dict | None) – parameters for the Adam optimizer (see torch.optim.Adam)

  • trainer_params (dict | None) – PyTorch Lightning trainer parameters (see pytorch_lightning.trainer.trainer.Trainer)

  • train_dataloader_params (dict | None) – parameters for the train dataloader, e.g. a custom sampler (see torch.utils.data.DataLoader)

  • test_dataloader_params (dict | None) – parameters for test dataloader

  • val_dataloader_params (dict | None) – parameters for validation dataloader

  • split_params (dict | None) –

    dictionary with parameters for torch.utils.data.random_split() for train-test splitting
    • train_size: (float) value from 0 to 1, the fraction of samples to use for training

    • generator: (Optional[torch.Generator]) - generator for reproducible train-test splitting

    • torch_dataset_size: (Optional[int]) - number of samples in the dataset, required when the dataset does not implement __len__
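As a sketch of how such a split_params dictionary is typically consumed (illustrative only; the helper below is not part of etna's API), the train_size fraction is converted into the integer lengths that torch.utils.data.random_split() expects:

```python
# Illustrative sketch (not etna internals): turning a split_params dict
# into the integer lengths expected by torch.utils.data.random_split().
def split_lengths(dataset_size: int, train_size: float) -> tuple:
    """Turn a 0..1 train fraction into (train, validation) lengths."""
    n_train = int(dataset_size * train_size)
    return n_train, dataset_size - n_train

split_params = {"train_size": 0.75, "torch_dataset_size": 100}
lengths = split_lengths(split_params["torch_dataset_size"], split_params["train_size"])
# lengths == (75, 25)
```

The two lengths always sum back to the dataset size, so no sample is dropped by the split.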

Methods

fit(ts)

Fit model.

forecast(ts, prediction_size[, ...])

Make predictions.

get_model()

Get model.

load(path)

Load an object.

params_to_tune()

Get grid for tuning hyperparameters.

predict(ts, prediction_size[, return_components])

Make predictions.

raw_fit(torch_dataset)

Fit model on a torch-like Dataset.

raw_predict(torch_dataset)

Make inference on a torch-like Dataset.

save(path)

Save the object.

set_params(**params)

Return new object instance with modified parameters.

to_dict()

Collect all information about the etna object in a dict.

Attributes

This class stores its __init__ parameters as attributes.

context_size

Context size of the model.

fit(ts: TSDataset) DeepBaseModel[source]#

Fit model.

Parameters:

ts (TSDataset) – TSDataset with features

Returns:

Model after fit

Return type:

DeepBaseModel

forecast(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset[source]#

Make predictions.

This method will make autoregressive predictions.

Parameters:
  • ts (TSDataset) – Dataset with features and expected decoder length for context

  • prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context.

  • return_components (bool) – If True additionally returns forecast components

Returns:

Dataset with predictions

Return type:

TSDataset

get_model() DeepBaseNet[source]#

Get model.

Returns:

Torch Module

Return type:

DeepBaseNet

classmethod load(path: Path) Self[source]#

Load an object.

Parameters:

path (Path) – Path to load object from.

Returns:

Loaded object.

Return type:

Self

params_to_tune() Dict[str, BaseDistribution][source]#

Get grid for tuning hyperparameters.

This is default implementation with empty grid.

Returns:

Empty grid.

Return type:

Dict[str, BaseDistribution]

predict(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset[source]#

Make predictions.

This method will make predictions using true values instead of predictions from the previous step. It can be useful for making in-sample forecasts.

Parameters:
  • ts (TSDataset) – Dataset with features and expected decoder length for context

  • prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context.

  • return_components (bool) – If True additionally returns prediction components

Returns:

Dataset with predictions

Return type:

TSDataset
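The difference between forecast() and predict() can be sketched with a toy last-value model (hypothetical, not etna code): forecast() feeds each prediction back in as context, while predict() always conditions on the true previous values:

```python
# Toy last-value "model" contrasting the two inference modes (not etna code).
def toy_forecast(context, prediction_size):
    # Autoregressive: each new step conditions on the previous prediction.
    values = list(context)
    for _ in range(prediction_size):
        values.append(values[-1])  # last-value rule applied to predicted history
    return values[-prediction_size:]

def toy_predict(series, prediction_size):
    # In-sample: each step conditions on the true previous value.
    n = len(series)
    return [series[i - 1] for i in range(n - prediction_size, n)]
```

With a last-value rule the autoregressive path just repeats the final context point, while the in-sample path tracks the actual series, which is why predict() is suited to in-sample evaluation.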

raw_fit(torch_dataset: Dataset) DeepBaseModel[source]#

Fit model on a torch-like Dataset.

Parameters:

torch_dataset (Dataset) – torch-like dataset for model fitting

Returns:

Model after fit

Return type:

DeepBaseModel

raw_predict(torch_dataset: Dataset) Dict[Tuple[str, str], ndarray][source]#

Make inference on a torch-like Dataset.

Parameters:

torch_dataset (Dataset) – torch-like dataset for model inference

Returns:

Dictionary with predictions

Return type:

Dict[Tuple[str, str], ndarray]

save(path: Path)[source]#

Save the object.

Parameters:

path (Path) – Path to save object to.

set_params(**params: dict) Self[source]#

Return new object instance with modified parameters.

This method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a model inside a Pipeline.

Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.

Parameters:

**params (dict) – Estimator parameters

Returns:

New instance with changed parameters

Return type:

Self

Examples

>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )

to_dict()[source]#

Collect all information about the etna object in a dict.

property context_size: int[source]#

Context size of the model.