NBeatsGenericModel#

class NBeatsGenericModel(input_size: int, output_size: int, loss: Literal['mse'] | Literal['mae'] | Literal['smape'] | Literal['mape'] | Module = 'mse', stacks: int = 30, layers: int = 4, layer_size: int = 512, lr: float = 0.001, window_sampling_limit: int | None = None, optimizer_params: dict | None = None, train_batch_size: int = 1024, test_batch_size: int = 1024, trainer_params: dict | None = None, train_dataloader_params: dict | None = None, test_dataloader_params: dict | None = None, val_dataloader_params: dict | None = None, split_params: dict | None = None, random_state: int | None = None)[source]#

Bases: NBeatsBaseModel

Generic N-BEATS model.

Paper: https://arxiv.org/pdf/1905.10437.pdf

Official implementation: ServiceNow/N-BEATS

Init generic N-BEATS model.

Parameters:
  • input_size (int) – Input data size.

  • output_size (int) – Forecast size.

  • loss (Literal['mse'] | Literal['mae'] | Literal['smape'] | Literal['mape'] | torch.nn.Module) – Optimization objective. The loss function should accept three arguments: y_true, y_pred and mask. The last argument is a binary mask that denotes which points are valid forecasts. Several implemented loss functions are available in the etna.models.nn.nbeats.metrics module.

  • stacks (int) – Number of block stacks in model.

  • layers (int) – Number of inner layers in each block.

  • layer_size (int) – Inner layers size in blocks.

  • lr (float) – Optimizer learning rate.

  • window_sampling_limit (int | None) – Size of history for sampling training data. If None, the full series history is used for sampling.

  • optimizer_params (dict | None) – Additional parameters for the Adam optimizer (api reference torch.optim.Adam).

  • train_batch_size (int) – Batch size for training.

  • test_batch_size (int) – Batch size for testing.

  • trainer_params (dict | None) – PyTorch Lightning trainer parameters (api reference pytorch_lightning.trainer.trainer.Trainer).

  • train_dataloader_params (dict | None) – Parameters for the train dataloader, e.g. a sampler (api reference torch.utils.data.DataLoader).

  • test_dataloader_params (dict | None) – Parameters for test dataloader.

  • val_dataloader_params (dict | None) – Parameters for validation dataloader.

  • split_params (dict | None) –

    Dictionary with parameters for torch.utils.data.random_split() for train-test splitting
    • train_size: (float) value from 0 to 1 - fraction of samples to use for training

    • generator: (Optional[torch.Generator]) - generator for reproducible train-test splitting

    • torch_dataset_size: (Optional[int]) - number of samples in the dataset; required if the dataset does not implement __len__

  • random_state (int | None) – Random state for train batches generation.
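
The three-argument loss contract described above can be sketched as a masked MSE. This is a plain NumPy illustration of the calling convention only; the ready-made losses in etna.models.nn.nbeats.metrics are torch.nn.Module subclasses, and masked_mse is a hypothetical name used here for illustration.

```python
import numpy as np

def masked_mse(y_true: np.ndarray, y_pred: np.ndarray, mask: np.ndarray) -> float:
    """Mean squared error over valid points only.

    ``mask`` is binary: 1 marks timestamps that are valid forecasts,
    0 marks positions that must not contribute to the loss.
    """
    mask = mask.astype(float)
    squared_errors = mask * (y_true - y_pred) ** 2
    # Normalize by the number of valid points, not by the full array size.
    return float(squared_errors.sum() / mask.sum())

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.0, 2.0, 5.0, 0.0])
mask = np.array([1, 1, 1, 0])  # the last point is padding
loss_value = masked_mse(y_true, y_pred, mask)  # only the first three points count
```

A custom torch.nn.Module with the same forward(y_true, y_pred, mask) signature can be passed as the loss argument instead of the string aliases.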

Methods

fit(ts)

Fit model.

forecast(ts, prediction_size[, ...])

Make predictions.

get_model()

Get model.

load(path)

Load an object.

params_to_tune()

Get default grid for tuning hyperparameters.

predict(ts, prediction_size[, return_components])

Make predictions.

raw_fit(torch_dataset)

Fit model on torch like Dataset.

raw_predict(torch_dataset)

Make inference on torch like Dataset.

save(path)

Save the object.

set_params(**params)

Return new object instance with modified parameters.

to_dict()

Collect all information about etna object in dict.

Attributes

This class stores its __init__ parameters as attributes.

context_size

Context size of the model.

fit(ts: TSDataset) DeepBaseModel[source]#

Fit model.

Parameters:

ts (TSDataset) – TSDataset with features

Returns:

Model after fit

Return type:

DeepBaseModel

forecast(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset[source]#

Make predictions.

This method will make autoregressive predictions.

Parameters:
  • ts (TSDataset) – Dataset with features and expected decoder length for context

  • prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context.

  • return_components (bool) – If True additionally returns forecast components

Returns:

Dataset with predictions

Return type:

TSDataset
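
The prediction_size semantics can be sketched with a toy index calculation (illustrative numbers only, not tied to any real dataset): the dataset passed in carries known history followed by the horizon, and only the last prediction_size timestamps remain in the returned forecast.

```python
# Toy sketch of the ``prediction_size`` semantics described above.
timestamps = list(range(10))  # a hypothetical dataset of 10 timestamps
prediction_size = 3           # horizon to forecast

context = timestamps[:-prediction_size]         # earlier timestamps: used as context
forecast_index = timestamps[-prediction_size:]  # timestamps kept in the returned dataset
```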

get_model() DeepBaseNet[source]#

Get model.

Returns:

Torch Module

Return type:

DeepBaseNet

classmethod load(path: Path) Self[source]#

Load an object.

Parameters:

path (Path) – Path to load object from.

Returns:

Loaded object.

Return type:

Self

params_to_tune() Dict[str, BaseDistribution][source]#

Get default grid for tuning hyperparameters.

This grid tunes parameters: stacks, layers, lr, layer_size. Other parameters are expected to be set by the user.

Returns:

Grid to tune.

Return type:

Dict[str, BaseDistribution]

predict(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset[source]#

Make predictions.

This method will make predictions using true values instead of predicted on a previous step. It can be useful for making in-sample forecasts.

Parameters:
  • ts (TSDataset) – Dataset with features and expected decoder length for context

  • prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context.

  • return_components (bool) – If True additionally returns prediction components

Returns:

Dataset with predictions

Return type:

TSDataset
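
The contrast with forecast() can be made concrete with a toy one-step model that simply repeats the previous value. This is a conceptual sketch, not ETNA code: autoregressive forecasting feeds each step the model's own previous output, while predict()-style inference feeds each step the true previous value, which is only possible in-sample.

```python
def step(prev: float) -> float:
    """Toy one-step model: predict the previous value (naive forecast)."""
    return prev

history = [1.0, 2.0, 4.0, 8.0]
horizon = 3

# ``forecast``-style: autoregressive, each step consumes the model's own output.
autoregressive = []
prev = history[-1]
for _ in range(horizon):
    prev = step(prev)
    autoregressive.append(prev)

# ``predict``-style: each step consumes the *true* value of the previous
# timestamp, so errors do not accumulate across steps.
truth = [16.0, 32.0, 64.0]
teacher_forced = [step(p) for p in [history[-1]] + truth[:-1]]

print(autoregressive)  # [8.0, 8.0, 8.0]
print(teacher_forced)  # [8.0, 16.0, 32.0]
```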

raw_fit(torch_dataset: Dataset) DeepBaseModel[source]#

Fit model on torch like Dataset.

Parameters:

torch_dataset (Dataset) – Torch like dataset for model fit

Returns:

Model after fit

Return type:

DeepBaseModel

raw_predict(torch_dataset: Dataset) Dict[Tuple[str, str], ndarray][source]#

Make inference on torch like Dataset.

Parameters:

torch_dataset (Dataset) – Torch like dataset for model inference

Returns:

Dictionary with predictions

Return type:

Dict[Tuple[str, str], ndarray]

save(path: Path)[source]#

Save the object.

Parameters:

path (Path) – Path to save object to.

set_params(**params: dict) Self[source]#

Return new object instance with modified parameters.

Method also allows to change parameters of nested objects within the current object. For example, it is possible to change parameters of a model in a Pipeline.

Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.

Parameters:

**params (dict) – Estimator parameters

Returns:

New instance with changed parameters

Return type:

Self

Examples

>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )

to_dict()[source]#

Collect all information about etna object in dict.

property context_size: int[source]#

Context size of the model.