The *Applied Computational Linear Algebra for Everyone* course is designed to equip you with the knowledge you need to link the math of linear algebra to code, through a few "must know" applications centered on different ways of casting and fitting a system of equations and revealing structure in a matrix.

Mastering computational linear algebra by linking math with code will help you in any of the computational sciences, including computer science.

You’ll learn by programming from scratch, hands-on, using a one-of-a-kind cloud-based interactive computational textbook that guides you, and checks your progress, step by step. Using real-world datasets, including datasets of your choosing, you will discover computationally, and we will discuss via critical reasoning, the strengths and limitations of the algorithms and how they can (or cannot) be overcome.

By the end of the course, you will be able to recognize and use linear algebra concepts as they arise in machine learning and data science.

Since you’ll learn by doing (via coding), you’ll spend quite a bit of time writing and debugging code that doesn’t yet work. A basic facility with programming syntax (in any language) and computational reasoning is therefore invaluable; the rest you will learn in the course itself. In other words, you don’t have to be a Java whiz, but you do need to have used Python, MATLAB, or R.

This course offers the opportunity to work in groups, remotely, or completely on your own. The choice is yours.

Prof. Nadakuditi is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. He received his Master's and PhD in Electrical Engineering and Computer Science at MIT as part of the MIT/WHOI Joint Program in Ocean Science and Engineering.

In addition to receiving the Jon R. and Beverly S. Holt Award for Excellence in Teaching, Prof. Nadakuditi has received the DARPA Directors Award, DARPA Young Faculty Award, IEEE Signal Processing Society Best Young Author Paper Award, Office of Naval Research Young Investigator Award, and the Air Force Research Laboratory Young Faculty Award.

His graduate-level course, Computational Data Science and Machine Learning, attracts hundreds of students from 80+ disciplines across the University. He loves making machine learning accessible to learners from all disciplines and enjoys seeing how students adapt the underlying ideas and develop creative new applications in their own scientific or engineering area of expertise.

This offering has evolved from many years of the instructor teaching Computational Data Science and Machine Learning at the University of Michigan, MIT Lincoln Laboratory and the Air Force Research Laboratory (AFRL).

The syllabus distills the linear algebra elements necessary to take more advanced courses in computational science and engineering that list linear algebra as a prerequisite.

Over the years of teaching this course at U-M, the instructor has derived tremendous satisfaction from watching students from a wide range of disciplines see how beautiful math leads to beautiful code, and to applications that seem magical the first time the math and code come together to do something remarkable, as in the many applications we will showcase. That's a big part of the fun of the underlying subject matter, and we hope you leave with that sense of wonder, too.

An introduction to the Julia programming language. Introduces students to variables, arrays, functions, and everything else that they need to succeed!

Introduction to matrix math and linear algebra. Learn about vectors, matrices, arrays, and various operations on these objects.

Intro to convolution and expressing convolution as a matrix-vector product.
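To give a taste of the idea, here is a minimal NumPy sketch (the course itself uses Julia; the helper name `conv_matrix` is ours) showing that convolving with a filter h is the same as multiplying by a banded (Toeplitz) matrix:

```python
import numpy as np

def conv_matrix(h, n):
    """Build the (m+n-1) x n Toeplitz matrix T with T @ x == np.convolve(h, x)."""
    m = len(h)
    T = np.zeros((m + n - 1, n))
    for j in range(n):
        T[j:j + m, j] = h   # each column is h, shifted down by one
    return T

h = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0, 7.0])
T = conv_matrix(h, len(x))
assert np.allclose(T @ x, np.convolve(h, x))
```

Each column of T is a shifted copy of h, so the matrix-vector product sums shifted, scaled copies of the filter, which is exactly what convolution does.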

Introduction to normal and non-normal matrices and the spectral theorem for normal matrices. Introduction to the eigenvalue decomposition and the singular value decomposition and their variational characterizations via eigshow and svdshow.
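As a small computational preview (a NumPy sketch, not the course's Julia code): for a symmetric, and hence normal, matrix the spectral theorem gives an orthogonal eigendecomposition, and the singular values are the absolute values of the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = (A + A.T) / 2                      # symmetric, hence normal

evals, V = np.linalg.eigh(S)           # spectral theorem: S = V diag(evals) V^T
U, svals, Vt = np.linalg.svd(S)

# Orthogonal eigenvectors reconstruct S exactly
assert np.allclose(V @ np.diag(evals) @ V.T, S)
# For a normal matrix, singular values = |eigenvalues|
assert np.allclose(np.sort(np.abs(evals))[::-1], svals)
```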

Introduction to vector spaces, subspaces, and the four fundamental subspaces of a matrix. Discussion of basis vectors for subspaces and how the SVD of a matrix reveals these bases. Orthogonal projection matrices, and how to efficiently compute the projection of a vector onto a subspace of a matrix without first computing and storing the associated projection matrix.
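The efficiency point can be sketched in a few lines of NumPy (an illustrative sketch, assuming a tall matrix A): with an orthonormal basis U for the column space, the projection is U (Uᵀ y), computed without ever forming the large projection matrix U Uᵀ:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))
y = rng.standard_normal(100)

# Orthonormal basis for col(A) from the thin SVD
U, _, _ = np.linalg.svd(A, full_matrices=False)
p = U @ (U.T @ y)          # projection, never forming the 100x100 matrix P

P = U @ U.T                # explicit projection matrix, built only to compare
assert np.allclose(p, P @ y)
assert np.allclose(A.T @ (y - p), 0)   # residual is orthogonal to col(A)
```

Grouping the product as U (Uᵀ y) costs O(mk) storage and flops instead of the O(m²) needed to form and apply P.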

First difference matrix construction and the role of the Kronecker product and sparse constructions thereof.
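A minimal NumPy sketch of the construction (the helper `first_diff` is ours; the course builds sparse versions in Julia): the 1-D first-difference matrix, and its 2-D extension via the Kronecker product acting on the vectorized image:

```python
import numpy as np

def first_diff(n):
    """(n-1) x n first-difference matrix: (D @ x)[i] = x[i+1] - x[i]."""
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

n = 4
D = first_diff(n)
x = np.array([1.0, 4.0, 9.0, 16.0])
assert np.allclose(D @ x, np.diff(x))

# 2-D: differences along the rows of an m x n image X act on the
# column-major vec(X) via a Kronecker product, since vec(X D^T) = (D kron I) vec(X)
m = 3
X = np.arange(12.0).reshape(m, n)
Dx = np.kron(D, np.eye(m))
vecX = X.flatten(order="F")
assert np.allclose((Dx @ vecX).reshape(m, n - 1, order="F"), np.diff(X, axis=1))
```

In practice D ⊗ I is extremely sparse, which is why sparse constructions matter at image scale.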

Stochastic gradient descent and Nesterov's accelerated method. Application: photometric stereo reconstruction using these algorithms.
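As a preview of the flavor of these iterations, here is a NumPy sketch of Nesterov-accelerated gradient descent on a small least-squares problem (an illustrative toy, not the photometric stereo application itself):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # reference solution

L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient of 0.5*||Ax-b||^2
x = np.zeros(5)
y = x.copy()
t = 1.0
for _ in range(500):
    grad = A.T @ (A @ y - b)                    # gradient at the look-ahead point
    x_new = y - grad / L                        # gradient step
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2    # Nesterov momentum schedule
    y = x_new + ((t - 1) / t_new) * (x_new - x) # momentum (extrapolation) step
    x, t = x_new, t_new

assert np.allclose(x, x_star, atol=1e-6)
```

The only change from plain gradient descent is the extrapolation step, yet it improves the worst-case convergence rate from O(1/k) to O(1/k²).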

How to set up and solve least squares problems of the form Ax = b. Applications include fitting data with a higher-order polynomial and predicting search-query time series after an appropriate non-linear transformation.
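The polynomial-fitting setup can be sketched in NumPy (an illustration with synthetic data, not the course dataset): build a Vandermonde design matrix and solve the least-squares problem for the coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(-1, 1, 40)
y = 2 - 3 * t + 0.5 * t**3 + 0.01 * rng.standard_normal(40)  # cubic + noise

# Design matrix with columns 1, t, t^2, t^3; solve min_c ||A c - y||
A = np.vander(t, 4, increasing=True)
c, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(c, [2, -3, 0, 0.5], atol=0.05)   # recovers the true coefficients
```

The "non-linear transformation" point is the same trick: the model is non-linear in t but linear in the unknown coefficients, so least squares still applies.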

The Eckart-Young theorem and its consequences. Applications include image compression and image denoising.
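The theorem's statement is easy to check numerically (a NumPy sketch): truncating the SVD after k terms gives the best rank-k approximation, with spectral-norm error equal to the first discarded singular value:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation
assert np.linalg.matrix_rank(A_k) == k
# Eckart-Young: the spectral-norm error is exactly s[k]
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```

Image compression is this computation applied to a matrix of pixel intensities: keeping k singular triplets stores k(m+n+1) numbers instead of mn.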

How and why we regularize the solution of a system of equations of the form Ax = b. Applications include better-conditioned fits of higher-order polynomials to data, and image in-painting/graffiti removal with a first-difference regularizer. Discussion of how the optimal regularization coefficient is selected.
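A minimal NumPy sketch of the simplest version, ridge (Tikhonov) regularization, showing two equivalent ways to compute it (an illustration with a deliberately ill-conditioned A; λ here is chosen arbitrarily, not by the selection methods discussed in the course):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 10))
A[:, 9] = A[:, 8] + 1e-8 * rng.standard_normal(30)  # nearly dependent columns
b = rng.standard_normal(30)

lam = 1e-2
# Ridge as an augmented least-squares problem: [A; sqrt(lam) I] x ~= [b; 0]
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(10)])
b_aug = np.concatenate([b, np.zeros(10)])
x_ridge = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Same solution via the normal equations (A^T A + lam I) x = A^T b
x_ne = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
assert np.allclose(x_ridge, x_ne)
```

The λ I term keeps the system invertible and tames the wild coefficients that exact least squares would produce on nearly dependent columns.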

Learning to recognize sparsity in its canonical and transformed manifestations, and seeing (computationally) how it helps regularize the solution of a system of equations of the form Ax = b in a regime where minimum-norm least squares does not work well. Applications include compressed sensing and image in-painting/graffiti removal with a first-difference regularizer, along with a discussion of how the optimal regularization coefficient is selected.

Convolution plays a role, directly or indirectly, in many data science techniques. Many seemingly complex image filters, for example, may be expressed elegantly using convolution.

Re-factoring matrix multiplication for the setting where the matrices are too huge to fit in memory.
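The core idea can be sketched in NumPy (the helper `blocked_matmul` is ours, and the tiles here live in RAM; in the out-of-core setting each tile would be loaded from disk on demand): compute C = AB one tile at a time, so only a few tiles are resident at once:

```python
import numpy as np

def blocked_matmul(A, B, bs):
    """Compute A @ B in bs x bs tiles; only three tiles are 'active' per step."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(0, m, bs):
        for j in range(0, n, bs):
            for p in range(0, k, bs):
                # Accumulate the (i, j) tile of C from tiles of A and B
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

rng = np.random.default_rng(6)
A = rng.standard_normal((7, 5))
B = rng.standard_normal((5, 6))
assert np.allclose(blocked_matmul(A, B, 3), A @ B)   # edge tiles handled by slicing
```

The loop order and tile size are the knobs: choosing bs so three tiles fit in fast memory is the same reasoning that makes cache-aware BLAS implementations fast.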

There is so much to learn in machine learning and so much about it is fun and exciting! Instructors usually run out of time before they run out of topics. The codices are designed to augment your (the instructor's) voice. Machine learning is tricky because it links math with code. Students need the instructor's and the instructional staff's help when they get stuck -- and they will get stuck, because computational machine learning algorithms require mastery of math and programming, and of linking math to code and vice versa.

The codices are self-contained, and in that sense they are like a textbook. The computational component makes them more than that. They can be used as homework assignments to reinforce concepts in an instructor's lectures/notes or another textbook. We use them at Michigan as a lab component of the course. The instructor has peace of mind knowing that the codices have been tested on over a thousand students -- the platform allows an instructor to scale their reach to hundreds. The backend support for autograding programming assignments lets the instructional staff engage a student at the moment they are stuck, when the learner is eager and ready to learn more. Codices amplify the instructor's voice by linking in-class theory to computational practice.

YES! We built Mynerva to scale our ability to teach students effectively. We've taught 260+ students working on the codex live in class (see below). The codices and platform help amplify an instructor's voice while solving the "assignment grading" problem for computational machine learning, so a class can scale easily into the hundreds.

We are actively recruiting instructors to try the platform and the codices in their own courses as a supplement for existing course material either as homeworks or as in-class computational labs. You can express your interest using this Google form and we will get back to you as soon as possible.

The Mynerva platform is currently in beta. The price for the book includes the cost of cloud-computing, hosting, and storage resources, since every codex involves the learner learning by doing via computing on the platform.

The learner personalizes their book via their answers to the auto-graded programming assignments, the various free-form question prompts, and their ability to take notes within the platform. There is an "expert" mode for each codex that allows the learner to deepen their mastery of a codex they have already solved. Thus learners can continue learning (for a nominal fee) even after the course is completed.

You may apply for the course here

See the Continuum Linear Algebra for Everyone page for more information

There are several additional resources that we recommend. These resources may be used as a companion book or simply to supplement the concepts presented here.

- Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares by Stephen Boyd and Lieven Vandenberghe
- Linear Algebra and Learning from Data by Gilbert Strang
- Matrix Methods in Data Analysis, Signal Processing, and Machine Learning (MIT OCW)
- 3Blue1Brown by Grant Sanderson

There is so much to learn, and we are delighted that there are so many resources that present the material in slightly different ways -- all come together to help a learner form a more complete picture of the material. One can never really stop learning, given how much there is to learn! (That's part of the fun for this author!)

Thanks in particular to Gil Strang for his encouragement, feedback, and support, and for inspiring the idea behind the codices during the very special semester of Spring 2017 when we launched and taught 18.065 at MIT. Multiple thanks to Alan Edelman for years of encouragement and inspiration and for teaching me so much (including Julia). A learner experiencing this book by doing/coding might sometimes recognize their voices in the way I write and speak about the underlying math and code. That's no accident. This course is infused with their DNA and with years of me soaking in their thoughts and ideas on so many matters, particularly on how elegant math produces elegant code and vice versa. All they taught me about how to see math and linear algebra makes me love it, and want to share it with you in the codex way, even more.