## Summary

This post predicts appliance energy usage.

## Table of Contents

# Overview

The purpose of this post is to mimic the analysis of Candanedo et al., who predict appliance energy usage, with a variety of statistical models using the `tidymodels` and `modeltime` packages in R. The post will (1) provide an overview of energy prediction, (2) construct statistical models, and (3) evaluate their effectiveness.

# Background

## Building Energy Usage

Energy usage has been explored extensively in the context of buildings. “The buildings and building construction sectors consumed 36% of the global final energy and nearly 40% of total CO2 emission in 2018.” [1] Building energy usage may be reduced by improving efficiency, and lower usage would reduce buildings’ environmental and economic burden. [1] Effective prediction techniques could quantify savings and improve conservation. [1] Building designs could be chosen based on expected energy usage, and future costs could be discounted to present value over the building’s expected useful life. [1]

In a 2020 meta-analysis, Sun and coauthors performed an extensive review of building energy prediction, compiling 105 papers. [1] The effort detailed “the entire data-driven process that includes feature engineering, potential data-driven models and expected outputs.” [1]

Sun grouped energy prediction models into three categories: white, grey, and black box. “White box” models predict thermal behavior with numerical equations. [1] “Grey box” or hybrid methods “combine physical models and data-driven approaches to simulate building energy.” [1] “Black box” models use machine learning methods to discover statistical patterns within the data set. [1]

Data sets frequently contain (1) meteorological/outdoor, (2) indoor, (3) occupancy, (4) time, (5) building-characteristic, (6) socioeconomic, and (7) historical features. [1] Among these, meteorological information, historical data, and the time index are the top three factors for building energy prediction. [1] Popular statistical models are artificial neural networks (“ANN”), support vector regression (“SVR”), and linear regression (“LR”), while less popular models are time series analysis and regression trees (“RT”). [1]

## Appliance Energy Use

Appliances represent a significant portion (between 20 and 30%) of electrical energy demand. [2] Electricity consumption in domestic buildings is explained by two main factors: the type and number of electrical appliances, and the use of the appliances by the occupants. [2] Buildings, and specifically residential buildings, are candidates for reducing energy use, as “approximately 74% of this electricity use is from all buildings, and 38% is from residential buildings.” [3]

“Refrigeration and freezer loads tended to be very flat, while cooking, dishwasher, lights and small appliances showed distinct evening peaks.” [4] “Refrigerators have a uniform load profile, while clothes washers, clothes dryers, and dishwashers are very user-dependent and thus vary from household to household and time of day.” [3]

The variability in energy usage is a challenge for power operators because electric grids are vulnerable to failure at peak load times. A Texas study attributed 75% of peak demand to buildings, with 50% from residential and 25% from business use. [3] Economizing appliance use during peak times has received more attention than HVAC systems because appliances do not affect “the comfort of the indoor environment.” [3]

# Current Data Set

The data set originally served as the basis for an article entitled “Data-driven prediction models of energy use of appliances in a low-energy house.” [2] Since the article’s publication, the data set has been available at the UCI Machine Learning Repository and can be found here. According to the website,

“the data set is time series measured at 10-minute intervals for 4.5 months. The house temperature and humidity conditions were monitored with a ZigBee wireless sensor network. Each wireless node transmitted the temperature and humidity conditions around 3.3 min. Then, the wireless data was averaged for 10 minutes periods. The energy data was logged every 10 minutes with m-bus energy meters. Weather from the nearest airport weather station (Chievres Airport, Belgium) was downloaded from a public data set from Reliable Prognosis (rp5.ru), and merged together with the experimental data sets using the date and time column. Two random variables have been included in the data set for testing the regression models and to filter out non-predictive attributes (parameters).”

The data set consists of measurements from a single residential house and was simplified to reduce file size by keeping only the measurements taken at the start of each hour. The resulting data set contains 1/6 of the rows of the original. An exploratory analysis of the data set was previously conducted here.
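The hourly down-sampling can be sketched with `dplyr` and `lubridate` (a sketch, assuming the raw 10-minute data sit in a data frame `df_raw` with a `date_time` column; the names are illustrative):

```r
library(dplyr)
library(lubridate)

# Keep only the measurement taken at the top of each hour;
# the raw data arrive every 10 minutes, so this retains 1/6 of the rows.
df <- df_raw %>%
  filter(minute(date_time) == 0)
```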

The 2017 paper used four statistical models which “were trained with repeated cross-validation and evaluated on a testing set: (a) multiple linear regression, (b) support vector machine with radial kernel, (c) random forest and (d) gradient boosting machines (GBM).” The best model (GBM) was able to explain **97%** of the variance (R2) in the training set but only **57%** in the testing set when using all the predictors.

# Data and Models

## Plot
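Before modeling, the series can be inspected with `timetk::plot_time_series()` (a sketch, assuming the hourly data are in `df` with columns `date_time` and `appliances`):

```r
library(timetk)

# Plot hourly appliance energy usage (Wh) over the full 4.5-month window
df %>%
  plot_time_series(date_time, appliances,
                   .title = "Appliance Energy Usage",
                   .interactive = TRUE)
```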

## Split

```r
# Split data 90/10 (time-ordered, no shuffling)
set.seed(1)
# note: initial_time_split() comes from the "rsample" package
df_splits <- initial_time_split(df, prop = 0.9)
df_train <- training(df_splits)
df_test <- testing(df_splits)
```

## Build Models

```r
# Model 1: auto_arima ----
model_fit_arima_no_boost <- arima_reg() %>%
  set_engine(engine = "auto_arima") %>%
  fit(appliances ~ date_time + t1, data = df_train)

# Model 2: arima_boost ----
model_fit_arima_boosted <- arima_boost(
  min_n = 2,
  learn_rate = 0.015
) %>%
  set_engine(engine = "auto_arima_xgboost") %>%
  fit(appliances ~ date_time + as.numeric(date_time) +
        factor(hour(date_time), ordered = FALSE),
      data = df_train)

# Model 3: ets ----
model_fit_ets <- exp_smoothing() %>%
  set_engine(engine = "ets") %>%
  fit(appliances ~ ., data = df_train)

# Model 4: prophet ----
model_fit_prophet <- prophet_reg() %>%
  set_engine(engine = "prophet") %>%
  fit(appliances ~ date_time, data = df_train)

# Model 5: lm ----
model_fit_lm <- linear_reg() %>%
  set_engine("lm") %>%
  fit(appliances ~ date_time, data = df_train)

# Add fitted models to a modeltime table
models_tbl <- modeltime_table(
  model_fit_arima_no_boost,
  model_fit_arima_boosted,
  model_fit_ets,
  model_fit_prophet,
  model_fit_lm
)

# Calibrate each model on the hold-out test set
calibration_tbl <- models_tbl %>%
  modeltime_calibrate(new_data = df_test)
```

## Visualize
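The calibrated models can be forecast over the test window and overlaid on the actuals, following `modeltime`'s standard workflow (a sketch, using the `calibration_tbl`, `df_test`, and `df` objects built above):

```r
# Forecast each calibrated model over the test set and plot against actuals
calibration_tbl %>%
  modeltime_forecast(
    new_data    = df_test,
    actual_data = df
  ) %>%
  plot_modeltime_forecast(.interactive = TRUE)
```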

## Accuracy
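Out-of-sample accuracy metrics for each model can be tabulated with `modeltime_accuracy()` (a sketch, assuming the `calibration_tbl` built above):

```r
# Compute test-set accuracy metrics (mae, mape, mase, smape, rmse, rsq)
calibration_tbl %>%
  modeltime_accuracy() %>%
  table_modeltime_accuracy(.interactive = FALSE)
```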

Accuracy Table

| .model_id | .model_desc | .type | mae | mape | mase | smape | rmse | rsq |
|---|---|---|---|---|---|---|---|---|
| 1 | REGRESSION WITH ARIMA(2,0,1)(2,0,0)[24] ERRORS | Test | 51.90 | 64.83 | 1.11 | 48.86 | 74.25 | 0.09 |
| 2 | ARIMA(2,0,0)(2,0,0)[24] WITH NON-ZERO MEAN W/ XGBOOST ERRORS | Test | 46.32 | 53.67 | 0.99 | 43.68 | 72.73 | 0.11 |
| 3 | ETS(M,AD,M) | Test | 44.09 | 42.61 | 0.94 | 45.80 | 74.90 | 0.12 |
| 4 | PROPHET | Test | 39.95 | 40.75 | 0.85 | 36.45 | 67.24 | 0.21 |
| 5 | LM | Test | 45.92 | 50.35 | 0.98 | 43.07 | 75.41 | 0.01 |