Dabbling in time series forecasting

  • Articles
  • Data Science & AI


At the beginning of this semester, we started a project aimed at improving sales projections with a method more robust than the existing one. The objective was to estimate digital channel sales for 2023 across the four markets where this client had a presence, a challenge given the different realities of each country and the coordination required with the four budget holders during the coming year.


Before moving on to the technical/practical part of the project, I would like to digress for a moment. In my time working, I have mostly been involved in strategic projects, focused on helping clients manage, design and develop initiatives. So, before joining this project, I had not had the opportunity to develop advanced predictive algorithms. However, I joined given my previous experience with the client and a knowledge of the business that a new resource joining at that point would not have had. In this context, I faced the great challenge of venturing for the first time into a more technical project. Thanks to the help of the tremendous team at Brain Food, we managed to pull it off, and along the way I learned many “soft” things (I wrote about that in a past post) and other technical things that I want to detail below.

Disclaimer: this post is not intended to be a tutorial on how to do time series forecasting.

Talking and consulting with different people at Brain Food about how best to approach this challenge, we decided on a top-down approach, to take advantage of the hierarchical relationships present in the datasets. This involved using time series (sets of observations of a given phenomenon, indexed by a continuous date), conversion rates, and a reconciliation step based on the hierarchy in which the data was constructed.
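To make the top-down idea concrete, here is a minimal sketch (all market names and figures are invented for illustration, not taken from the project): forecast the aggregate series, then split it among the markets according to their historical proportions, so the per-market figures add up to the total by construction.

```python
# Top-down reconciliation sketch: forecast the total, then distribute it
# across markets by their historical share of sales.
# All names and numbers below are invented for illustration.

historical_sales = {
    "market_a": 600.0,
    "market_b": 250.0,
    "market_c": 100.0,
    "market_d": 50.0,
}
total_forecast = 1200.0  # forecast for the aggregate (all markets combined)

# Each market's historical share of the total.
total_hist = sum(historical_sales.values())
proportions = {m: s / total_hist for m, s in historical_sales.items()}

# Distribute the aggregate forecast according to those shares.
reconciled = {m: p * total_forecast for m, p in proportions.items()}

# The per-market forecasts sum back to the aggregate forecast.
print(reconciled)
```

Real hierarchical-forecasting libraries offer more elaborate top-down and bottom-up schemes, but this captures the core of the approach.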

To predict time series, there are a number of purpose-built models: ARIMA, SARIMA, Prophet, among others. After analyzing each one, we decided to use Prophet because of its fit with the nature of our prediction problem. Prophet searches for the model that best fits the seasonality and trend of the data, and based on that model, predicts the future behavior of the series. It performs best on time series that have a strong seasonal effect and several seasons of historical data.

Figure 1: Examples of forecasts with different sets of training data, source: “Forecasting at Scale”.

After defining together with the team the methodology to develop the forecasts, it was time to get down to work.

Understanding and preparing the data

The first thing we did was to “play” with the data for a while in order to better understand what we wanted to predict. We found it very useful to plot historical graphs to visualize the behavior, trends and seasonality of each data set. By checking these figures against our experience from previous projects, we could determine whether the data made sense or whether it needed double-checking with the counterpart.

As an example, in the graph below we can see the traffic coming from Display over time for one of the markets. The first months are almost zero, because it was not until June 2022 that digital strategies were promoted to drive traffic to this channel. From August of that same year, however, traffic begins to decrease significantly. A priori, we could assume a downward trend given the behavior of recent months; or conversely, we could think that every few months there is a strong investment in this channel that causes traffic to spike. Either scenario could yield a correct prediction, but we could not know which one without knowledge of the context, the market and the business. (Spoiler alert: the final prediction for this traffic was to leave it at zero for the following months, because the market decided to stop investing in this channel.)

Figure 2: Graph of Display Traffic versus Time for Market A

Once we gained some understanding of the data, we structured it according to the time series format, i.e. a table with two columns: one with the date and the other with the value of the variable on that date.
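As a sketch (column names and figures invented), this is the two-column shape that Prophet expects, with the columns conventionally named `ds` (datestamp) and `y` (value):

```python
import pandas as pd

# Invented raw export: one row per day with a traffic figure.
raw = pd.DataFrame({
    "date": ["2022-06-01", "2022-06-02", "2022-06-03"],
    "display_traffic": [120, 135, 128],
})

# Restructure into the time series format: a 'ds' column with proper
# datetimes and a 'y' column with the value of the variable on that date.
series = pd.DataFrame({
    "ds": pd.to_datetime(raw["date"]),
    "y": raw["display_traffic"].astype(float),
})

print(series)
```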

In this first stage, we noticed several inconsistencies across the multiple data sets, ranging from gaps between dates to figures that contradicted business rules. At this point it is essential to be rigorous in reviewing data quality, so as not to waste time iterating with the client before arriving at correct data. As a lesson learned: data is almost never right the first time, so it is essential to plan with this problem in mind.
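One of these checks, finding gaps between dates, is easy to script. A small sketch with invented data: build the full daily calendar between the first and last observation and flag any dates the series is missing.

```python
import pandas as pd

# Invented series with a missing day (2022-06-03 is absent).
series = pd.DataFrame({
    "ds": pd.to_datetime(["2022-06-01", "2022-06-02", "2022-06-04"]),
    "y": [10.0, 12.0, 11.0],
})

# Full daily calendar spanning the observed period.
full_calendar = pd.date_range(series["ds"].min(), series["ds"].max(), freq="D")

# Dates present in the calendar but absent from the series: the gaps
# to raise with the counterpart.
missing = full_calendar.difference(series["ds"])

print(list(missing))
```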

Applying the model

Prophet is characterized as a forecasting algorithm that is quite simple and intuitive to apply. Like other forecasting algorithms, Prophet exposes parameters that describe the time series. You can define how seasonality interacts with the trend in the prediction: additively (the seasonal effect is added to the trend) or multiplicatively (the seasonal effect scales with the level of the trend). You can describe the growth of the time series, which can be linear, logistic or flat. You can also specify whether the algorithm should fit seasonality at an annual, weekly or daily granularity. From this, the model is able to identify by itself the trend and the weekly and yearly seasonality, as shown in the following image.
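To make the additive/multiplicative distinction concrete, here is a small pure-Python sketch (trend and effect sizes invented): with an additive model the seasonal swing stays the same size as the trend grows, while with a multiplicative model it grows with the trend.

```python
# Additive vs. multiplicative seasonality, on an invented growing trend.
trend = [100.0, 200.0, 400.0]  # trend level at three points in time
seasonal_add = 10.0            # additive effect: a fixed +10 bump
seasonal_mult = 0.5            # multiplicative effect: a +50% bump

additive = [t + seasonal_add for t in trend]
multiplicative = [t * (1 + seasonal_mult) for t in trend]

print(additive)        # [110.0, 210.0, 410.0] — bump is constant
print(multiplicative)  # [150.0, 300.0, 600.0] — bump scales with the trend
```

In Prophet this choice corresponds to the `seasonality_mode` parameter.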

Figure 3: Time series decomposition, source: “Forecasting at Scale”.

Beyond characterizing the time series, exogenous effects that may be influencing the data can also be modeled in Prophet. These phenomena can be special days (such as Black Friday, Christmas or World Cups), climatic seasons, technological changes (for example, how changes to the website can incentivize purchases), shifts in a country's political context, among others. You can specify each of these phenomena in the algorithm, and even combine several of them into multi-causal phenomena (for example, how a World Cup and a Black Friday together can increase purchases of soccer balls). If you want to go deeper into all the parameters Prophet offers, I invite you to review its official documentation.
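As an illustration of how such events are declared (the dates and event names below are invented), Prophet's holiday interface takes a plain DataFrame with a `holiday` and a `ds` column, plus optional windows of days around each date; several event types are simply concatenated:

```python
import pandas as pd

# Invented special-day calendar, in the shape Prophet's `holidays`
# parameter expects: one row per occurrence of each event.
black_friday = pd.DataFrame({
    "holiday": "black_friday",
    "ds": pd.to_datetime(["2021-11-26", "2022-11-25"]),
    "lower_window": 0,   # effect starts on the day itself
    "upper_window": 1,   # effect extends one day after
})

world_cup = pd.DataFrame({
    "holiday": "world_cup",
    "ds": pd.to_datetime(["2022-11-20"]),
    "lower_window": 0,
    "upper_window": 28,  # effect lasts for the whole tournament
})

# Combining both events yields a multi-causal calendar, which would be
# passed to the model via Prophet(holidays=holidays).
holidays = pd.concat([black_friday, world_cup], ignore_index=True)
print(holidays)
```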

After studying all the possibilities the algorithm offers, together with the team we defined a 4-step methodology for applying it to the time series. This iterative methodology allowed us to obtain better results and be more rigorous in the development of each model. At each step, we ran the algorithm, visualized the resulting prediction and, based on business definitions, assessed its “quality”. If the results were not as expected, we moved on to the next step, and so on, until we obtained a reliable and accurate prediction. The 4 steps were as follows:

  1. Use all historical data.
  2. Remove data points that appear to be outliers, based on our understanding of the business.
  3. Add an exogenous variable to better model the behavior of the data over time.
  4. Apply a reconciliation algorithm to match the prediction to the hierarchical relationship under which the data was constructed.
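Step 2 can be sketched as follows (the threshold and figures are invented). Prophet tolerates missing values in `y`, so a convenient way to discard outliers without breaking the daily date grid is to blank them out instead of dropping the rows:

```python
import pandas as pd

# Invented series with one implausible spike on 2022-06-03.
series = pd.DataFrame({
    "ds": pd.to_datetime(["2022-06-01", "2022-06-02", "2022-06-03", "2022-06-04"]),
    "y": [100.0, 104.0, 5000.0, 98.0],
})

# Business rule (invented for this sketch): daily values above 1000
# are not plausible for this channel and are treated as outliers.
outlier_mask = series["y"] > 1000.0

# Blank the outliers rather than dropping rows: Prophet handles NaN in
# 'y', and the date index stays intact.
series.loc[outlier_mask, "y"] = None

print(series)
```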

Understanding the prediction

At the end of these 4 steps, we had up to 4 prediction results for each time series. These results were presented to the client to define together which was the most accurate and precise prediction. We cross-checked the predictions with input from the leaders of each market, since they are the ones with the know-how and expertise on how the figures will behave (at least at a high level). In this way, we were able to understand the prediction from a strategic point of view, beyond simply calculating the prediction's error percentage.

By way of summary, I wanted to leave three points that I think are important to take into account when making a projection using time series:

  1. Data quality: data almost never comes clean the first time; there is constant iteration with the counterpart to revise it. It is very important to formalize the data review process: as soon as the data arrives, review it carefully, and if there are inconsistencies or incorrect figures, schedule a review with the client to determine where the errors are and why. In the worst case, the client will have to resend the data (hopefully now clean), which will cause delays in the project planning.
  2. Preliminary analysis: you have to study the data carefully; making a graph is not enough. It helps to give meaning to the numbers: what does it mean that the data changes sharply on certain dates? Within which ranges should the data ideally move? For this, it becomes essential to hold regular reviews or meetings with the client, in order to absorb the business expertise the client has.
  3. Link with the business: both to understand the historical data and to make sales predictions, it is important to connect the data with the business's vision. In this way, we can understand changes in trend and seasonality, and explain exogenous phenomena that may affect sales, among others.