# Computational Machine Learning & Data Science

Introduction to computational methods for identifying patterns and outliers in large data sets. Topics include the singular value and eigenvalue decompositions, independent component analysis, graph analysis, clustering, linear, regularized, sparse and non-linear model fitting, and deep, convolutional and recurrent neural networks. The computational textbook teaches the material step by step, by doing: via autograded programming exercises and conceptual multiple-choice quizzes. Every codex contains an application that illustrates the ideas behind the algorithm, and an exploration of why the algorithm works and when it fails (and can or cannot be fixed), so that learners understand, via mathematical principles, the strengths and weaknesses of the algorithms.

Raj is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor.

He received his Masters and PhD in Electrical Engineering and Computer Science at MIT as part of the MIT/WHOI Joint Program in Ocean Science and Engineering. His work is at the interface of statistical signal processing and random matrix theory with applications such as sonar, radar, wireless communications and machine learning in mind. Developing this 'living' computational book and teaching EECS 505 with it has been a career highlight.

## syllabus

Over the past few years of teaching the Computational Machine Learning & Data Science course at the University of Michigan, we've conceived and iterated on over two dozen distinct codices. Each codex is like a chapter of the computational book, and the codices have been organized into several units below. Select codices are available as a demo of the Mynerva platform.

### where has this been used?

The codices for this course were initially developed and used to teach EECS 505 at the University of Michigan. They have also been used to teach classes at MIT and for private training camps across the country.

### acknowledgements

Special thanks to Travis DePrato for building Mynerva from scratch. Thanks to Don Winsor for the introduction, and for thinking (correctly) that Travis was capable of far more than he was doing at the time. This way of writing and publishing a computational book would not have been possible without him.

Thanks to Jonas Kersulis for editing and proof-reading the codices. Thanks to Brian Moore for helping create an early version of the autograder -- it is what gives the assignments in the codices an extra "oomph!". Thanks to Adrian Stoll for creating the API for the autograder -- this was a big step towards Mynerva.

This book would not be possible without them and the various graduate student instructors (Arvind Prasadan, David Hong, Hao Wu, Rishi Sonthalia, Dipak Narayan, Yash Sanjay Bhalgat and Raj Tejas Suryaprakash) who helped edit, test and refine the codices, right before they were about to go live to hundreds of students. Thanks to Simon Danisch for helping start this journey in 2017 by porting my MATLAB demos to Julia, and to David Sanders for the gitter post on a Jupyter forum whose response by Min Ragan-Kelley I used to initiate the first conversation with Travis.

Thanks in particular to Gil Strang for his encouragement, feedback and support, and for inspiring the idea behind the codices during the very special semester of Spring 2017 when we launched and taught 18.065 at MIT. Thanks to Jeremy Kepner for making that semester happen and for the opportunity to teach the course at MIT Lincoln Laboratory -- seeing the excitement there for the ideas and math gave me the impetus to reach out to a more diverse audience than the "typical" grad students who had taken my class. Thanks to Muralidhar Rangaswamy for the opportunity to reprise the course at AFRL Dayton, and to the many scientists and engineers there whose enthusiasm for the material in the codex format gave me hope that this format could succeed beyond Michigan.

Many thanks to Alan Edelman for years of encouragement and inspiration and for teaching me so much (including Julia). A learner experiencing this book by doing/coding might sometimes recognize their voice in the way I write and speak about the underlying math and code. That's no accident. This book is infused with their DNA and with years of me soaking in their thoughts and ideas on so many matters, particularly on how elegant math produces elegant code and vice versa. Everything they taught me about how to see math and linear algebra makes me love it, and makes me want to share it with you, in the codex way, even more.

### supplemental resources

There are several additional resources that we recommend. These resources may be used as a companion book or simply to supplement the concepts presented here.

There is so much to learn, and we are delighted that there are so many resources that present the material in slightly different ways -- together they help a learner form a more complete picture of the material. One can never really stop learning, given how much there is to learn! (That's part of the fun for this author!)