TFTModel#

class TFTModel(decoder_length: int | None = None, encoder_length: int | None = None, dataset_builder: PytorchForecastingDatasetBuilder | None = None, train_batch_size: int = 64, test_batch_size: int = 64, lr: float = 0.001, hidden_size: int = 16, lstm_layers: int = 1, attention_head_size: int = 4, dropout: float = 0.1, hidden_continuous_size: int = 8, loss: MultiHorizonMetric | None = None, trainer_params: Dict[str, Any] | None = None, quantiles_kwargs: Dict[str, Any] | None = None, **kwargs)[source]#

Bases: _DeepCopyMixin, PytorchForecastingMixin, SaveNNMixin, PredictionIntervalContextRequiredAbstractModel

Wrapper for pytorch_forecasting.models.temporal_fusion_transformer.TemporalFusionTransformer.

Notes

We save pytorch_forecasting.data.timeseries.TimeSeriesDataSet in the instance to use it in the model. This is not the intended pattern of using Transforms and TSDataset.

Initialize TFT wrapper.

Parameters:
  • decoder_length (int | None) – Decoder length.

  • encoder_length (int | None) – Encoder length.

  • dataset_builder (etna.models.nn.utils.PytorchForecastingDatasetBuilder) – Dataset builder for PytorchForecasting.

  • train_batch_size (int) – Train batch size.

  • test_batch_size (int) – Test batch size.

  • lr (float) – Learning rate.

  • hidden_size (int) – Hidden size of network which can range from 8 to 512.

  • lstm_layers (int) – Number of LSTM layers.

  • attention_head_size (int) – Number of attention heads.

  • dropout (float) – Dropout rate.

  • hidden_continuous_size (int) – Hidden size for processing continuous variables.

  • loss (MultiHorizonMetric) – Loss function taking prediction and targets. Defaults to pytorch_forecasting.metrics.QuantileLoss.

  • trainer_params (Dict[str, Any] | None) – Additional arguments for pytorch_lightning Trainer.

  • quantiles_kwargs (Dict[str, Any] | None) – Additional arguments for computing quantiles, look at the to_quantiles() method of your loss.
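
The snippet below is a minimal construction sketch; all hyperparameter values are illustrative, and trainer_params is forwarded to the pytorch_lightning Trainer.

>>> from etna.models.nn import TFTModel
>>> model = TFTModel(
...     encoder_length=60,                  # history length fed to the encoder
...     decoder_length=14,                  # forecasting horizon inside the network
...     train_batch_size=64,
...     lr=1e-3,
...     hidden_size=16,
...     lstm_layers=1,
...     attention_head_size=4,
...     dropout=0.1,
...     hidden_continuous_size=8,
...     trainer_params={"max_epochs": 10},  # illustrative Trainer settings
... )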

Methods

fit(ts)

Fit model.

forecast(ts, prediction_size[, ...])

Make predictions.

get_model()

Get internal model that is used inside etna class.

load(path)

Load an object.

params_to_tune()

Get default grid for tuning hyperparameters.

predict(ts, prediction_size[, ...])

Make predictions.

save(path)

Save the object.

set_params(**params)

Return new object instance with modified parameters.

to_dict()

Collect all information about etna object in dict.

Attributes

This class stores its __init__ parameters as attributes.

context_size

Context size of the model.

trainer_params

dataset_builder

train_batch_size

test_batch_size

encoder_length

fit(ts: TSDataset)[source]#

Fit model.

Parameters:

ts (TSDataset) – TSDataset to fit.

Returns:

Fitted model.
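
A minimal usage sketch, assuming a long-format dataframe with the standard etna columns "timestamp", "segment" and "target"; the file name is hypothetical.

>>> import pandas as pd
>>> from etna.datasets import TSDataset
>>> df = pd.read_csv("data.csv")  # hypothetical input file
>>> ts = TSDataset(TSDataset.to_dataset(df), freq="D")
>>> model = model.fit(ts)  # returns the fitted model itself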

forecast(ts: TSDataset, prediction_size: int, prediction_interval: bool = False, quantiles: Sequence[float] = (0.025, 0.975), return_components: bool = False) TSDataset[source]#

Make predictions.

This method will make autoregressive predictions.

Parameters:
  • ts (TSDataset) – Dataset with features

  • prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context for models that require it.

  • prediction_interval (bool) – If True returns prediction interval for forecast

  • quantiles (Sequence[float]) – Levels of prediction distribution. By default 2.5% and 97.5% are taken to form a 95% prediction interval

  • return_components (bool) – If True additionally returns forecast components

Returns:

TSDataset with predictions.

Return type:

TSDataset
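
A sketch of interval forecasting, assuming make_future() is used to build a dataset that contains both the future timestamps and enough history (tail_steps) to serve as context.

>>> future_ts = ts.make_future(future_steps=14, tail_steps=model.context_size)
>>> forecast_ts = model.forecast(
...     future_ts,
...     prediction_size=14,
...     prediction_interval=True,
...     quantiles=(0.025, 0.975),
... )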

get_model() Any[source]#

Get internal model that is used inside etna class.

Model is the instance of pytorch_forecasting.models.temporal_fusion_transformer.TemporalFusionTransformer.

Returns:

Internal model

Return type:

Any
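
A short sketch of inspecting the internal network after fitting; hparams is the standard pytorch_lightning hyperparameter container.

>>> from pytorch_forecasting.models.temporal_fusion_transformer import TemporalFusionTransformer
>>> inner_model = model.get_model()
>>> isinstance(inner_model, TemporalFusionTransformer)
True
>>> hparams = inner_model.hparams  # hyperparameters of the internal network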

classmethod load(path: Path) Self[source]#

Load an object.

Parameters:

path (Path) – Path to load object from.

Returns:

Loaded object.

Return type:

Self

params_to_tune() Dict[str, BaseDistribution][source]#

Get default grid for tuning hyperparameters.

This grid tunes parameters: hidden_size, lstm_layers, dropout, attention_head_size, lr. Other parameters are expected to be set by the user.

Returns:

Grid to tune.

Return type:

Dict[str, BaseDistribution]
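
A small sketch: the grid maps parameter names to distributions and can be inspected directly or handed to etna's tuning utilities.

>>> grid = model.params_to_tune()
>>> list(grid)  # parameter names: hidden_size, lstm_layers, dropout, attention_head_size, lr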

predict(ts: TSDataset, prediction_size: int, prediction_interval: bool = False, quantiles: Sequence[float] = (0.025, 0.975), return_components: bool = False) TSDataset[source]#

Make predictions.

This method will make predictions using true values instead of predicted on a previous step. It can be useful for making in-sample forecasts.

Parameters:
  • ts (TSDataset) – Dataset with features

  • prediction_size (int) – Number of last timestamps to leave after making prediction. Previous timestamps will be used as a context.

  • prediction_interval (bool) – If True returns prediction interval for forecast

  • quantiles (Sequence[float]) – Levels of prediction distribution. By default 2.5% and 97.5% are taken to form a 95% prediction interval

  • return_components (bool) – If True additionally returns prediction components

Returns:

TSDataset with predictions.

Return type:

TSDataset
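
An in-sample sketch: predictions for the last 14 observed points of the training dataset, using true values as context rather than autoregression.

>>> in_sample_ts = model.predict(ts, prediction_size=14)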

save(path: Path)[source]#

Save the object.

Parameters:

path (Path) – Path to save object to.
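
A short persistence sketch; the file name is illustrative.

>>> from pathlib import Path
>>> model.save(Path("tft_model.zip"))
>>> restored_model = TFTModel.load(Path("tft_model.zip"))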

set_params(**params: dict) Self[source]#

Return new object instance with modified parameters.

The method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a model in a Pipeline.

Nested parameters are expected to be in a <component_1>.<...>.<parameter> form, where components are separated by a dot.

Parameters:

**params (dict) – Estimator parameters

Returns:

New instance with changed parameters

Return type:

Self

Examples

>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )

to_dict()[source]#

Collect all information about etna object in dict.

property context_size: int[source]#

Context size of the model.