17 Mar 2021
Release
6 minute read

Simplifying data: IBM’s AutoAI automates time series forecasting

AutoAI Time Series for Watson Studio incorporates the best-performing models from all possible classes.

Creating AI models is not a walk in the park. So why not get AI to… build AI?

Sounds simple, but with the ever-growing variety of models, data scientists first need tools to better automate the model-building process. In time series forecasting – building models that predict future values of a series from past data or features – the problem is even harder. Too many domains generate time series data, each calling for different and often complex modeling approaches.

We think we can help.

On March 1, Watson Studio, IBM’s AI automation modeling system, rolled out AutoAI Time Series in closed beta. IBM Watson users can already tap into our research to automate time series forecasting. And now our paper “AutoAI-TS: AutoAI for Time Series Forecasting” is out, too – we’ll present it at the 2021 ACM SIGMOD/PODS Conference in China in June.1

The paper details how AutoAI Time Series for Watson Studio incorporates the best-performing models from all possible classes, because often there is simply no single technique that performs best across all datasets.

Figure 1: AutoAI-TS Overall Architecture

Suppose a building management company is forecasting energy and resource consumption in a smart building. It can use AutoAI Time Series to train models with far less effort than conventional approaches. With just a few mouse clicks, the system identifies the best features, transforms, and models, trains the models, tunes the hyperparameters, and ranks the best-performing pipelines, all tailored to the company’s resource consumption data.
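To make that concrete, here is a small, self-contained toy in Python that mimics the kind of search the system automates: try a few lookback windows and model families on a synthetic hourly "energy" series, then rank them by holdout error. It is a sketch of the workflow built with scikit-learn, not the Watson Studio implementation, and every name and number in it is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for hourly building energy use: a daily cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 90)                                   # 90 days, hourly
y = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

def lookback_matrix(series, window):
    """Stack sliding lookback windows as features; the next value is the target."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

# A miniature search space: a few window sizes crossed with a few model families.
results = []
for window in (12, 24, 48):
    for name, model in (("linear", LinearRegression()),
                        ("forest", RandomForestRegressor(n_estimators=50, random_state=0))):
        X, target = lookback_matrix(y, window)
        split = int(0.8 * len(X))                        # hold out the last 20%
        model.fit(X[:split], target[:split])
        err = mean_absolute_error(target[split:], model.predict(X[split:]))
        results.append((err, name, window))

# Rank the candidates by holdout error, best first -- a tiny leaderboard.
for err, name, window in sorted(results):
    print(f"{name:7s} window={window:3d}  MAE={err:.2f}")
```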

Combing through the data

Some tools perform automated tuning and selection of models, but they often focus on a single class of models, such as autoregressive models, which learn from past values of a time series in order to predict future values.
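As a point of reference, a single model from that autoregressive class is easy to sketch with statsmodels; the toy series and lag choice below are purely illustrative and are not part of AutoAI Time Series.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Toy hourly series with a daily cycle, purely for illustration.
rng = np.random.default_rng(1)
t = np.arange(24 * 60)                       # 60 days of hourly samples
y = 20 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

# An autoregressive model regresses each value on its own recent past (24 lags here).
fit = AutoReg(y, lags=24).fit()

# Forecast the next 24 hours purely from the series' past values.
forecast = fit.predict(start=len(y), end=len(y) + 23)
print(forecast[:5])
```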

AutoAI Time Series, by contrast, performs automation across several different model classes, incorporating a variety of models from each.

This breadth pays off: AutoAI Time Series achieves leading benchmark performance and accuracy across a variety of univariate datasets, from Twitter activity and phone call logs to weather, travel times, and production volumes. It also handles multivariate datasets, such as exchange rates, household energy use, retail sales, and traffic data. Figures 2 and 3 show the performance of AutoAI Time Series and several state-of-the-art (SOTA) toolkits on more than 62 univariate and more than 9 multivariate data sets from a variety of application domains, ranging in size from dozens to more than 1,400,000 samples.

Figure 2: SMAPE-based rank comparison of AutoAI Time Series and SOTA toolkits for univariate data sets (lower rank is better).
Figure 3: SMAPE-based rank comparison of AutoAI Time Series and SOTA toolkits for multivariate data sets (lower rank is better).
In both comparisons, AutoAI-TS achieves the lowest average rank.
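SMAPE, the metric behind the rankings in Figures 2 and 3, measures percentage error symmetrically in the actual and predicted values. One common formulation (variants exist, and the paper’s exact definition may differ) looks like this:

```python
import numpy as np

def smape(actual, predicted):
    """Symmetric mean absolute percentage error, in percent (one common variant)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    denom = (np.abs(actual) + np.abs(predicted)) / 2.0
    denom = np.where(denom == 0, 1.0, denom)   # avoid 0/0 when both values are zero
    return 100.0 * np.mean(np.abs(actual - predicted) / denom)

print(smape([100, 200, 300], [110, 190, 330]))   # about 8.1
```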

The system provides the preprocessing machinery needed to orchestrate the building of pipelines for several different modeling approaches, and it relies on three primary components.

The first is lookback window generation. Many time series forecasting techniques extract a segment of the historical data – the so-called lookback window – and use it, or features derived from it, as input to a model. Our lookback window generation approach uses signal processing techniques to estimate an appropriate window. The conventional approach is to repeat the modeling exercise for a variety of window sizes and pick the best, which can be very time consuming. Our approach dramatically reduces the number of models that need to be built, resulting in a faster modeling process.

Take the earlier example of forecasting energy consumption of a building. Such data would vary seasonally across multiple time periods — daily, weekly, and yearly. Applying the automated lookback window tool could help identify the best candidate windows.
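As a simplified illustration of that idea (not the production estimator), dominant periods in a series’ frequency spectrum are natural lookback-window candidates. The sketch below finds the daily and weekly cycles in synthetic hourly data; the data and parameters are assumptions made for the example.

```python
import numpy as np

def candidate_lookback_windows(series, top_k=2):
    """Suggest lookback windows from dominant periods in the series' spectrum."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                          # drop the constant component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size)           # cycles per sample
    spectrum[0] = 0.0                         # ignore the zero-frequency bin
    strongest = np.argsort(spectrum)[::-1][:top_k]
    # Convert the strongest frequencies to periods (in samples).
    return sorted({int(round(1.0 / freqs[i])) for i in strongest if freqs[i] > 0})

# Hourly data with daily (24-sample) and weekly (168-sample) cycles.
rng = np.random.default_rng(2)
t = np.arange(24 * 7 * 20)                    # 20 weeks of hourly samples
y = (10 * np.sin(2 * np.pi * t / 24)
     + 4 * np.sin(2 * np.pi * t / 168)
     + rng.normal(0, 1, t.size))
print(candidate_lookback_windows(y))          # -> [24, 168]
```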

The next component is pipeline generation, where the system produces candidate pipelines to solve a modeling problem. A pipeline is a series of steps that defines how data is transformed and an AI model is built. AutoAI Time Series ships with several pre-built candidate pipelines, adapted to the characteristics of the data, that draw on a variety of models. The pipelines include transformers for unary transformations, flattening, and normalization of the data. Models include Holt-Winters Seasonal Additive and Multiplicative, ARIMA, BATS, Random Forest Regression, Support Vector Regression, Linear Regression, Trend to Residual Regressor, and ensemble methods, among others.
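For intuition, a single candidate pipeline can be sketched in scikit-learn terms: flatten the lookback window into features, normalize, and fit a regressor. This is an illustrative stand-in using off-the-shelf scikit-learn components, not the product’s internal transformers or models.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def lookback_matrix(series, window):
    """Flatten sliding lookback windows into rows of a feature matrix."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

# One candidate pipeline: normalize the flattened window, then regress on it.
pipeline = Pipeline([
    ("normalize", StandardScaler()),
    ("model", SVR(kernel="rbf", C=10.0)),
])

# Toy hourly series; the last day is held out for a quick sanity check.
rng = np.random.default_rng(3)
t = np.arange(24 * 30)
y = 30 + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

X, target = lookback_matrix(y, window=24)
pipeline.fit(X[:-24], target[:-24])
print(pipeline.predict(X[-24:])[:5])          # forecasts for the held-out hours
```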

Finally, there is pipeline selection, which uses a reverse progressive data allocation technique to efficiently train and choose only the most promising pipelines. This incremental technique continually ranks pipelines based on their expected performance, minimizing overall training time. In our building example, there are several years’ worth of data; training every pipeline on all of it would be very time consuming, but this technique helps identify the best pipelines faster.
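The toy below captures the flavor of that idea: start every candidate on a small slice of the most recent data, rank them, drop the weaker half, and give the survivors a larger slice. The actual selection strategy in AutoAI Time Series is more sophisticated; this simplified sketch, with arbitrary candidate models and slice sizes, is only meant to show why the approach saves training time.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

def lookback_matrix(series, window=24):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

def holdout_error(model, series):
    """Train on all but the last day of `series`, score on that last day."""
    X, target = lookback_matrix(series)
    model.fit(X[:-24], target[:-24])
    return mean_absolute_error(target[-24:], model.predict(X[-24:]))

# A year of toy hourly data.
rng = np.random.default_rng(4)
t = np.arange(24 * 365)
y = 40 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

candidates = {
    "linear": LinearRegression(),
    "forest": RandomForestRegressor(n_estimators=30, random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "svr": SVR(),
}

# Start on two recent weeks; each round drops the weaker half of the candidates
# and doubles the data slice given to the survivors.
slice_len = 24 * 14
while len(candidates) > 1 and slice_len <= len(y):
    recent = y[-slice_len:]
    ranked = sorted(candidates, key=lambda name: holdout_error(candidates[name], recent))
    candidates = {name: candidates[name] for name in ranked[: max(1, len(ranked) // 2)]}
    slice_len *= 2

print("selected:", list(candidates))
```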

After modeling is completed, the system can also generate back-test results for a selected model. Users can flexibly configure multiple back-test periods to see how the model’s performance varies over time.
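A minimal rolling-origin sketch of back-testing: walk backwards from the end of the series, training on everything before each period and scoring on the period itself. The number of periods, their length, and the model here are placeholders chosen for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def lookback_matrix(series, window=24):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

# Toy hourly series covering 120 days.
rng = np.random.default_rng(5)
t = np.arange(24 * 120)
y = 25 + 6 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

# Three one-week back-test periods, walking backwards from the end of the series:
# train on everything before each period and evaluate on the period itself.
period = 24 * 7
for k in range(3, 0, -1):
    end = len(y) - (k - 1) * period
    train = y[: end - period]
    test = y[end - period - 24 : end]         # include one window of context
    X_tr, y_tr = lookback_matrix(train)
    X_te, y_te = lookback_matrix(test)
    model = LinearRegression().fit(X_tr, y_tr)
    err = mean_absolute_error(y_te, model.predict(X_te))
    print(f"back-test period {4 - k}: MAE = {err:.2f}")
```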

Automating AI Time Series

Our current research focuses on refining and extending AutoAI Time Series. In the future, we aim to incorporate what’s known as imputation to handle missing values, enhance pipelines and models to use exogenous features, and produce prediction intervals that quantify the uncertainty in the forecasts.

To cope with increasing pipeline complexity and support larger datasets, we are also looking into strategies to scale the system, including data sampling, scalable pipeline components for handling large data, and parallelization of the modeling workload.

AutoAI Time Series is just the first step toward a more efficient and capable system that will let data scientists easily build AI models for a broader range of use cases.

Learn more about:

Automated AI: We're building tools to help AI creators reduce the time they spend designing their models. Our goal is to allow non-experts across industries to build their own AI solutions, without writing complex code or performing tedious tuning and optimization.

References

  1. Shah, S. Y. et al. AutoAI-TS: AutoAI for Time Series Forecasting. arXiv:2102.12347 [cs] (2021).