
Outlook

What happens next, now that you have acquired these new skills in machine learning? Perhaps you are curious to learn more about the methods covered in this course, or about machine learning methods not touched upon. For a start, you might have noticed that each chapter in the course book has a section on “Further Reading”, directing you to additional resources where you can read more about each topic. We encourage you to have a look at some of these additional resources. Apart from that, you can read about several interesting topics in the course book that were not covered in this course. Chapter 7 covers ensemble methods and how prediction performance can be boosted by combining multiple models. Chapter 9 introduces Bayesian modeling. In contrast to the models that you have implemented in this course, a Bayesian model learns a distribution over the model parameters in place of single point estimates. Such a model can therefore be used to reason about uncertainty in the parameters and about how this affects the uncertainty in the predictions.
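
To give a flavour of the idea, in generic notation of our own choosing (not necessarily that of the course book): instead of fitting one parameter value, a Bayesian model combines a prior p(θ) with the likelihood of the observed data D via Bayes’ theorem, and predictions then average over the whole posterior,

p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})},
\qquad
p(y^\star \mid x^\star, \mathcal{D}) = \int p(y^\star \mid x^\star, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta .

The second expression, the posterior predictive distribution, is what allows uncertainty about the parameters to be propagated into uncertainty about the predictions.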

Machine learning is a broad field that attracts a great deal of research interest, and new machine learning methods are continuously being developed. A lot of attention, in academia as well as in industry, is nowadays placed on deep learning models. In this course, you have learnt about neural networks, the class of convolutional neural networks, and deep generative models. It is probably not surprising that there is a wide range of further deep learning models available, adapted for different types of tasks. For example, you might have heard about graph neural networks (Scarselli et al., 2008; Kipf & Welling, 2017; Zhou et al., 2020). Another example is the transformer (Vaswani et al., 2017), which you got a brief description of if you went through the optional section on generative language models. If you are interested in learning more about these particular topics, you can follow the provided citations (note, however, that some of the referenced material is quite research-focused).
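
To give a small taste of what is inside a transformer, here is a minimal NumPy sketch of the scaled dot-product attention operation from Vaswani et al. (2017). The toy dimensions and variable names are our own illustration and not taken from the course materials.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # similarity between queries and keys
    scores = scores - scores.max(axis=-1, keepdims=True)  # shift for a numerically stable softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                                    # weighted average of the values

# Toy self-attention over a sequence of 4 "tokens" with 8-dimensional embeddings
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(X, X, X).shape)        # (4, 8)

Each output row is a mixture of all value rows, weighted by how well the corresponding query matches each key; this is the mechanism that lets transformers relate every position in a sequence to every other position.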

The focus of this course has been on supervised learning, but you also got a glimpse into the worlds of unsupervised and semi-supervised learning in Section 6. Other classes of learning algorithms that we did not talk about in this course include reinforcement learning and self-supervised learning. Both of these learning paradigms are similar to unsupervised learning in the sense that they do not require manually provided labels. Reinforcement learning is based on a “learning-by-doing” principle, where the model gets continuous feedback through a reward system; for example, a chess-playing model could improve at chess by repeatedly playing games and receiving a reward whenever it wins a game (see the code sketch after this paragraph). In self-supervised learning, the model instead learns to solve some alternate, artificial task that is related to the actual intended end-use. For example, an image classifier could be trained on the task of filling in masked pixels in images. If you are interested in learning more, the topic of reinforcement learning is covered in detail by Sutton and Barto (2018), and a (more technical) overview of self-supervised learning can be found in Ericsson et al. (2022).
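
To make the reward-feedback principle concrete, here is a minimal sketch of tabular Q-learning, one of the classic algorithms treated by Sutton and Barto (2018). The toy environment, the constants and all variable names are our own illustration, not something from the course.

import numpy as np

n_states, n_actions = 5, 2                 # made-up toy problem size
Q = np.zeros((n_states, n_actions))        # table of estimated action values
alpha, gamma = 0.1, 0.9                    # learning rate and discount factor
rng = np.random.default_rng(seed=0)

def step(state, action):
    # Toy environment: action 1 moves right; acting in the last state pays reward 1
    if state == n_states - 1 and action == 1:
        return 0, 1.0                      # collect the reward, then restart
    return min(state + action, n_states - 1), 0.0

state = 0
for _ in range(5000):
    action = int(rng.integers(n_actions))  # act randomly; Q-learning is off-policy,
                                           # so it still learns values of greedy play
    next_state, reward = step(state, action)
    # Q-learning update: move Q towards reward + discounted best future value
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))  # should prefer action 1 ("move right") in every state

No labels are ever provided here: the only learning signal is the reward emitted by the environment, which is the sense in which reinforcement learning resembles unsupervised learning.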

We hope that this course has inspired you to further explore the topic of machine learning and perhaps create your own machine learning project. If you are interested in taking more courses on the topic, Linköping University offers other courses in statistics and machine learning, both at the bachelor’s and the master’s level. You can find all of our courses on the university’s website.


The end of the course

Congratulations! You have now reached the end of this course. If you have managed to finish all of the questions and exercises, you can proceed and sign up for the next examination opportunity, as explained on the FAQ page. As soon as we have validated your course completion, the credits will be reported to Ladok. If you need to go back to some previous section to work a bit more on the exercises, that is of course possible.

We hope that you have enjoyed the course, and wish you the best of luck in applying your newly acquired skills in practice!

Ericsson, L., Gouk, H., Loy, C. C. & Hospedales, T. M. (2022). Self-supervised representation learning: Introduction, advances, and challenges. IEEE Signal Processing Magazine, 39(3), 42-62.

Kipf, T. N. & Welling, M. (2017). Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations.

Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M. & Monfardini, G. (2008). The graph neural network model. IEEE Transactions on Neural Networks, 20(1), 61-80.

Sutton, R. S. & Barto, A. G. (2018). Reinforcement learning: An introduction. The MIT Press.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł. & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

Zhou, J., Cui, G., Hu, S., Zhang, Z., Yang, C., Liu, Z., Wang, L., Li, C. & Sun, M. (2020). Graph neural networks: A review of methods and applications. AI Open, 1, 57-81.

This webpage contains the course materials for the course ETE370 Foundations of Machine Learning.
The content is licensed under Creative Commons Attribution 4.0 International.
Copyright © 2021, Joel Oskarsson, Amanda Olmin & Fredrik Lindsten