Understanding Artificial Intelligence through Algorithmic Information Theory

Can we characterise intelligent behaviour?
Are there theoretical foundations on which Artificial Intelligence can be grounded?

This course on Algorithmic Information will offer you such a theoretical framework.

  • You will learn to see machine learning, reasoning, mathematics, and even human intelligence as abstract computations aimed at compressing information.
  • This new perspective will not only help you understand what AI can do (and what it can't!) but also serve as a guide for designing AI systems.


✅ 5 weeks – 2–3 hours per week

✅ Self-paced – Progress at your own speed

✅ Free limited access – Optional upgrade available

On completion of this course, you will have learned:

✅ How to measure information through compression

✅ How to compare algorithmic information with Shannon’s information

✅ How to detect languages through joint compression

✅ How to use the Web to compute meaning similarity

✅ How probability and randomness can be defined in purely algorithmic terms

✅ How algorithmic information sets limits on the power of AI (Gödel’s theorem)

✅ A criterion to make optimal hypotheses in learning tasks

✅ A method to solve analogies and detect anomalies

✅ A new understanding of machine learning as a way to achieve compression

✅ Why “unexpected” means “abnormally simple”

✅ Why coincidences are unexpected

✅ Why subjective information and interest result from a drop in complexity, and why relevance, aesthetics, emotional intensity, and humour rely on coding
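Several of the outcomes above (measuring information through compression, detecting languages through joint compression) rest on the same trick: using a real compressor as a practical stand-in for uncomputable algorithmic information. Here is a minimal sketch of that idea, the Normalized Compression Distance, using Python’s standard `zlib` module; the sample texts are invented for illustration and any real application would use larger corpora and stronger compressors:

```python
import zlib


def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: approximates the (uncomputable)
    information distance between x and y by replacing Kolmogorov
    complexity with the length of the zlib-compressed string."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))  # joint compression
    return (cxy - min(cx, cy)) / max(cx, cy)


# Two English texts (on different topics) and one French text.
english_1 = (
    b"The theory of information gives a way to measure the content of a message. "
    b"A message that can be predicted carries little information for the receiver. "
    b"When the receiver already knows the message, the information it carries is zero. "
    b"Compression works by removing the parts of a message that can be predicted. "
)
english_2 = (
    b"Learning can be seen as the search for a short description of the data. "
    b"A short description of the data is a model that captures its regularities. "
    b"The regularities in the data are exactly the parts that can be compressed. "
    b"A good model of the data therefore doubles as a good compressor. "
)
french = (
    b"Le renard brun saute par dessus le chien paresseux au bord de la riviere. "
    b"Les feuilles mortes se ramassent a la pelle quand revient la saison froide. "
    b"Chaque matin le boulanger prepare le pain avant le lever du soleil. "
    b"Le vieux chat dort pres du feu pendant que la pluie tombe sur le toit. "
)

# Texts in the same language share many substrings, so compressing them
# jointly is cheaper than compressing a cross-language pair jointly.
print(f"English vs English: {ncd(english_1, english_2):.3f}")
print(f"English vs French:  {ncd(english_1, french):.3f}")
```

The same-language pair yields a smaller distance, which is exactly the joint-compression language-detection idea listed among the learning outcomes.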

Caveat: This course DOES NOT address the notion of “computational complexity”, which measures the running time of algorithms.