CNN vs. Prophet: Forecasting the Copper Producer Price Index

Which model does better at forecasting copper prices?

Michael Grogan

In a previous article on Towards Data Science, I attempted to use the Prophet model to forecast the copper producer price index. The data was sourced from FRED Economic Data using Quandl.

Here is a plot of the data (note that all prices are expressed in logarithmic terms for the purpose of this exercise):

The time series in question ranged from September 1985 to July 2020 — and the mean absolute error came in at 0.257 (compared to a mean of 5.917 across the test set).

Here is a plot of the forecasted vs. actual values:

While the Prophet model is effective at detecting seasonality and trend components — and also offers the ability to improve on forecasts by modifying change points appropriately — such a model does not necessarily do particularly well when it comes to capturing volatility from one time period to the next.

In this regard, I decided to model this data using a Convolutional Neural Network (hereafter referred to as a CNN), in order to investigate whether this model would prove more effective at forecasting this time series.

As a caveat, the below example is more of an academic exercise than a real-life attempt to predict asset prices. Given that we are working with monthly data, it is easier for a neural network to pick up the fluctuations from one time period to another.

In the context of a real-life scenario whereby a time series is being predicted over much shorter intervals (hours, minutes, even seconds), there would be much more stochasticity (or randomness) present in the dataset. This would likely affect the forecast accuracy significantly. Furthermore, a multivariate time series would likely prove more effective at forecasting an economic time series more generally — given that such time series are subject to many interventions.

However, modelling the monthly data with a CNN serves as a useful starting point for this purpose.

Note that the below example uses the model template from the Intro to TensorFlow for Deep Learning course from Udacity — this particular topic is found in Lesson 8: Time Series Forecasting by Aurélien Géron.

Additionally, the original Jupyter Notebook (Copyright 2018, The TensorFlow Authors) can also be found here.

I previously elaborated on the building blocks of a CNN model in my last article, “CNN-LSTM: Predicting Daily Hotel Cancellations”. As a result, I will not go through this in much detail again here, but suffice it to say that a CNN works by using previous time steps (based on a specific window size) to produce an output of data points as follows:

Stacking several Conv1D (one-dimensional convolutional) layers with increasing dilation rates, as in the WaveNet architecture, gives a CNN the ability to learn both short- and long-term dependencies in a time series: the lower layers learn short-term dependencies, while the higher layers learn long-term dependencies. In the context of the time series we are trying to predict, this makes the model a suitable candidate.
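As a quick sanity check on this design (a small illustrative helper, not part of the original notebook): with a kernel size of 2 and dilation rates doubling from 1 to 32, each stacked layer extends the receptive field by its dilation rate, so the six layers together let the final output see 64 past time steps — exactly the window size used below.

```python
# Receptive field of stacked dilated causal Conv1D layers: each layer
# extends the field by dilation_rate * (kernel_size - 1) time steps.
def receptive_field(dilation_rates, kernel_size=2):
    field = 1
    for rate in dilation_rates:
        field += rate * (kernel_size - 1)
    return field

print(receptive_field([1, 2, 4, 8, 16, 32]))  # 64
```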

An 80/20 split is performed on the data: the first 334 data points in the time series are used for training the CNN model, while the remaining points are used for validation purposes.

split_time = 334
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
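The training code below relies on a `seq2seq_window_dataset` helper to turn these arrays into batches of (input window, shifted target window) pairs. The notebook's own helper is not reproduced in this article, so here is a minimal sketch following the sequence-to-sequence windowing approach used in the Udacity course; the exact implementation in the notebook may differ slightly.

```python
import numpy as np
import tensorflow as tf

def seq2seq_window_dataset(series, window_size, batch_size=32,
                           shuffle_buffer=1000):
    # Add a channel dimension: (time,) -> (time, 1)
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    # Slice into overlapping windows of window_size + 1 steps
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    # Input = first window_size steps, target = the window shifted by one
    ds = ds.map(lambda w: (w[:-1], w[1:]))
    return ds.batch(batch_size).prefetch(1)

# Quick shape check on a toy series
x, y = next(iter(seq2seq_window_dataset(
    np.arange(20, dtype=np.float32), window_size=5, batch_size=4)))
print(x.shape, y.shape)  # (4, 5, 1) (4, 5, 1)
```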

A window size of 64 is chosen, with a batch size of 128, trained over 500 epochs. A Huber loss is used as the loss function: it penalises large residuals linearly rather than quadratically, making training robust to the effects of outliers.
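The robustness claim can be checked numerically with a small standalone illustration (not part of the original notebook): for a residual of 10, a squared-error loss contributes 50 (using the same 1/2 scaling), while the Huber loss with its default delta of 1 contributes only 9.5, so a single outlier dominates training far less.

```python
import numpy as np

def huber(residual, delta=1.0):
    # Quadratic for small residuals, linear beyond delta
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2,
                    delta * (r - 0.5 * delta))

print(huber(0.5))       # 0.125 -- quadratic region, same as 0.5 * r^2
print(huber(10.0))      # 9.5   -- linear region
print(0.5 * 10.0 ** 2)  # 50.0  -- what a squared-error loss would give
```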

keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)

window_size = 64
train_set = seq2seq_window_dataset(x_train, window_size, batch_size=128)
valid_set = seq2seq_window_dataset(x_valid, window_size, batch_size=128)

model = keras.models.Sequential()
model.add(keras.layers.InputLayer(input_shape=[None, 1]))
# Stack dilated causal convolutions so lower layers capture short-term
# patterns and higher layers capture long-term patterns (WaveNet-style)
for dilation_rate in (1, 2, 4, 8, 16, 32):
    model.add(
        keras.layers.Conv1D(filters=32,
                            kernel_size=2,
                            strides=1,
                            dilation_rate=dilation_rate,
                            padding="causal",
                            activation="relu")
    )
model.add(keras.layers.Conv1D(filters=1, kernel_size=1))

optimizer = keras.optimizers.Adam(learning_rate=3e-4)
model.compile(loss=keras.losses.Huber(),
              optimizer=optimizer,
              metrics=["mae"])

model_checkpoint = keras.callbacks.ModelCheckpoint(
    "my_checkpoint.h5", save_best_only=True)
early_stopping = keras.callbacks.EarlyStopping(patience=50)
history = model.fit(train_set, epochs=500,
                    validation_data=valid_set,
                    callbacks=[early_stopping, model_checkpoint])

Note in the above example that a doubling dilation rate is being used when training the model. The reason for this, as mentioned, is to allow the network to learn both short and long-term patterns in the time series.

cnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
cnn_forecast = cnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, cnn_forecast)

The mean absolute error is calculated as below:

>>> keras.metrics.mean_absolute_error(x_valid, cnn_forecast).numpy()
0.027838342

A useful feature of the Prophet time series model by Facebook is the ability to identify changepoints, or periods of significant structural change in a time series. Accurate identification of such points can in turn improve a time series forecast.

As noted above, the MAE for the Prophet model came in at 0.257, which is an order of magnitude higher than the 0.028 yielded by the CNN.

12 changepoints (or significant deviations in trend) were defined in the Prophet model.

pro_change = Prophet(n_changepoints=12)
pro_change.fit(train_dataset)

# Extend the dataframe 43 months into the test period and forecast over it
future_data = pro_change.make_future_dataframe(periods=43, freq='m')
forecast_data = pro_change.predict(future_data)

# Plot the forecast with the detected changepoints overlaid
fig = pro_change.plot(forecast_data)
a = add_changepoints_to_plot(fig.gca(), pro_change, forecast_data)

That said, let’s have a look at a plot of the predicted versus actual values once again:

Clearly, the CNN performed better at predicting the monthly fluctuations of copper prices. While the MAE for the Prophet model remains low relative to the mean value, there is significantly more volatility in the actual time series than the Prophet forecast is picking up.

As mentioned, forecasting a time series such as this does come with inherent limitations in that it is not known whether a CNN would perform as well across shorter time periods with greater stochasticity. Additionally, such time series are subject to a wide range of interventions which are not accounted for by past values.

However, in this instance the CNN has done quite well in capturing monthly fluctuations for copper prices.

In this article, we have covered:

  • How a CNN can be configured to forecast a time series
  • Differences between the CNN and Prophet models
  • Limitations of such models in forecasting economic time series

Many thanks for your time, and any questions or feedback are greatly appreciated.

The Jupyter Notebooks for both the CNN and Prophet models can be found here.

Disclaimer: This article is written on an “as is” basis and without warranty. It was written with the intention of providing an overview of data science concepts, and should not be interpreted as investment advice, or any other sort of professional advice.
