# FOUNDATIONS OF MACHINE LEARNING

# Introduction

In section 3 we will take a slightly more structured approach to learning, by introducing the class of parametric models.
In particular we will look at perhaps the most basic (and arguably most important) parametric model, the linear regression model.
We will also introduce the logistic regression model. Logistic regression is similar to linear regression, but despite its name it is applied to classification problems.
This section also dives into the statistical perspective on machine learning.
We will see how machine learning models typically do not just output single predictions, but rather specify entire probability distributions over possible outputs.
The course book will also introduce the concept of *maximum likelihood* estimation, which is a central statistical concept underlying many machine learning methods.
At the end of the section we will briefly touch on the concepts of overfitting and regularization.
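As a small illustration of maximum likelihood estimation (a sketch, not taken from the course book): if we assume the data are drawn i.i.d. from a Gaussian distribution, the maximum likelihood estimates of the mean and variance turn out to be the sample mean and the (biased, divide-by-n) sample variance.

```python
import numpy as np

# Illustrative sketch: draw synthetic data from a known Gaussian and
# compute the maximum likelihood estimates of its parameters.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=10_000)

mu_ml = data.mean()                     # ML estimate of the mean
var_ml = ((data - mu_ml) ** 2).mean()   # ML estimate of the variance (divides by n, not n - 1)

print(mu_ml, var_ml)  # should be close to 2.0 and 1.5**2 = 2.25
```

With 10,000 samples the estimates land close to the true parameters, which is the sense in which maximum likelihood "recovers" the distribution that generated the data.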

Start out by reading chapters 3.1, 3.2 and 3.3 in the course book. The mathematically inclined might also find the chapter appendix (3.A) interesting. It contains two different derivations of the normal equations, which are used to find the model parameters in linear regression.
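To make the normal equations concrete, here is a minimal sketch (synthetic data, not an example from the book): for a linear regression model $y \approx X\theta$, the least-squares parameters solve $(X^\top X)\,\theta = X^\top y$.

```python
import numpy as np

# Sketch: fit a linear regression model by solving the normal equations
# (X^T X) theta = X^T y on synthetic data with known parameters.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, size=n)])  # intercept + one feature
true_theta = np.array([0.5, -2.0])
y = X @ true_theta + rng.normal(scale=0.1, size=n)             # noisy observations

# Solving the linear system is preferred over forming the explicit inverse.
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(theta_hat)  # should be close to [0.5, -2.0]
```

Note that in practice one often uses `np.linalg.lstsq` instead, which is numerically more robust when `X.T @ X` is ill-conditioned.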

You will then put your new knowledge to use in the upcoming sections. First, you will be tasked with implementing a linear regression model for the values of houses in California, US. Second, we will look at the problem of classifying penguins using a logistic regression model.
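As a preview of the classification task, here is a hedged sketch of logistic regression on synthetic data (not the penguin dataset): the parameters are fitted by gradient ascent on the log-likelihood, with the sigmoid function mapping the linear model's output to a class probability.

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Generate synthetic binary labels from a known logistic model.
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one feature
true_theta = np.array([-0.5, 3.0])
y = (rng.uniform(size=n) < sigmoid(X @ true_theta)).astype(float)

# Gradient ascent on the log-likelihood of the logistic regression model.
theta = np.zeros(2)
lr = 0.1
for _ in range(2000):
    grad = X.T @ (y - sigmoid(X @ theta))  # gradient of the log-likelihood
    theta += lr * grad / n

accuracy = ((sigmoid(X @ theta) > 0.5) == y).mean()
print(theta, accuracy)
```

Predicting the class with probability above 0.5 gives the decision rule; on this synthetic problem the fitted model classifies most points correctly.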

This webpage contains the course materials for the course ETE370 Foundations of Machine Learning.

The content is licensed under Creative Commons Attribution 4.0 International.

Copyright © 2021, Joel Oskarsson, Amanda Olmin & Fredrik Lindsten