# FOUNDATIONS OF MACHINE LEARNING

# Introduction

At this point, we have covered some basic machine learning models. Before moving on to more complex models, we will stop for a moment to think about how we can *evaluate* and, subsequently, *improve* our models.

No single type of model is always the best, regardless of the situation. In this section, we will discuss how to determine, in a systematic manner, which type of model is most suitable for the problem at hand. Related to this, we will also cover how to choose hyperparameter values for a given model structure. Examples of hyperparameters include the value of $k$ in our k-NN model and the order of the polynomial features in a polynomial regression model.
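As a rough sketch of what hyperparameter selection can look like in practice (not taken from the course book), the snippet below fits a simple 1-D k-NN classifier on synthetic data and picks the value of $k$ with the lowest error on a separate hold-out set. The data-generating process and the candidate values of $k$ are arbitrary choices for illustration.

```python
import random

random.seed(0)

# Synthetic 1-D binary classification data: the label depends on the
# sign of x, corrupted by Gaussian noise (an assumed toy distribution).
def make_data(n):
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        y = 1 if x + random.gauss(0, 0.3) > 0 else 0
        data.append((x, y))
    return data

def knn_predict(train, x, k):
    # Majority vote among the k nearest training points (1-D distance).
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(y for _, y in neighbours)
    return 1 if votes * 2 > k else 0

def error_rate(train, holdout, k):
    # Misclassification rate on the hold-out set.
    wrong = sum(knn_predict(train, x, k) != y for x, y in holdout)
    return wrong / len(holdout)

# Train on one data set, evaluate each candidate k on a separate hold-out set.
train, val = make_data(200), make_data(100)
errors = {k: error_rate(train, val, k) for k in [1, 3, 5, 9, 15]}
best_k = min(errors, key=errors.get)
```

The key point is that the hold-out set is never used for fitting the model, so its error is an estimate of the expected new data error for each candidate $k$.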

Before moving on to the problem section, read Chapters 4.1-4.2 and 4.5 in the course book. Chapter 4.1 introduces the concept of an *error function* and gives a background to the aim of model selection. It describes how we (conceptually) want to minimize the *expected new data error* based on the *data distribution* $p(\mathbf{x}, y)$. In practice, however, it is not possible to calculate the expected new data error exactly. Chapter 4.2 therefore describes how we can estimate it from data, for the purpose of model evaluation and selection. Among other methods, it describes how we can use *cross-validation* to train and evaluate a model. Chapter 4.5 discusses how we can evaluate our classification models when we have imbalanced or asymmetric data.
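To make the cross-validation idea concrete, here is a minimal, generic $k$-fold sketch (an illustration, not the book's notation): the data is split into folds, each fold serves once as validation data, and the validation errors are averaged to estimate the expected new data error. The toy "majority class" model and the synthetic data are assumptions made purely for the example.

```python
import random

random.seed(1)

def k_fold_cv(data, num_folds, fit, evaluate):
    """Estimate the expected new data error by averaging the
    validation error over num_folds train/validation splits."""
    data = data[:]          # copy so the caller's list is untouched
    random.shuffle(data)
    fold_size = len(data) // num_folds
    errors = []
    for i in range(num_folds):
        # Fold i is held out for validation; the rest is training data.
        val = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        model = fit(train)
        errors.append(evaluate(model, val))
    return sum(errors) / num_folds

# Toy model: always predict the majority class seen during training.
def fit_majority(train):
    ones = sum(y for _, y in train)
    return 1 if ones * 2 > len(train) else 0

def misclassification(model, val):
    return sum(model != y for _, y in val) / len(val)

# Synthetic labelled data, then a 5-fold estimate of the toy model's error.
data = [(x / 100, 1 if x > 40 else 0) for x in range(100)]
cv_error = k_fold_cv(data, num_folds=5, fit=fit_majority, evaluate=misclassification)
```

Because every data point is used for validation exactly once, the averaged error makes better use of limited data than a single hold-out split.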

After you have read the referenced chapters, you are ready to move on to the three example problems. The first two examples are classification problems, covering model selection using a separate hold-out validation set and imbalanced classification, respectively. In the third example, you will be asked to use cross-validation to select a model for predicting diabetes progression.
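For the imbalanced setting mentioned above, a small sketch (with made-up numbers) shows why plain accuracy can be misleading and why metrics based on the confusion matrix, such as recall, are useful. The 5% positive rate and the always-negative classifier are assumptions chosen for illustration.

```python
def confusion_counts(y_true, y_pred):
    # Entries of the binary confusion matrix.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

y_true = [1] * 5 + [0] * 95   # only 5% positives: heavily imbalanced
y_pred = [0] * 100            # a classifier that always predicts the majority class

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
recall = tp / (tp + fn) if (tp + fn) else 0.0
```

Here the classifier reaches 95% accuracy while detecting none of the positive cases (recall 0), which is exactly the failure mode that Chapter 4.5's evaluation tools are designed to expose.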

This webpage contains the course materials for the course ETE370 Foundations of Machine Learning.

The content is licensed under Creative Commons Attribution 4.0 International.

Copyright © 2021, Joel Oskarsson, Amanda Olmin & Fredrik Lindsten