This course is a very useful experience, provided you have a fair amount of background in computer science theory and statistics. Because the lectures can be completed at your own pace, you can review key and/or interesting concepts several times over (which I would recommend). My background is in modeling and adaptive computing, with an interest in statistical analysis. So while I found the content to be educational, I also saw opportunities to extend some of the theory to my own interests.
One thing I noticed was how Andrew kept stressing that the industry people (e.g. Silicon Valley types) he visits on occasion quite often misapply the tools and concepts that make ML such a potentially powerful technique. This is especially interesting in light of two recent blog posts by Cosma Shalizi and Cathy O'Neil on the unintentional misapplication of ML models by many data analysts, and their lack of appreciation for those models' shortcomings. In such cases, the goal is to apply a specific model to a specific problem, but this is often done without a formal consideration of experimental design or of how well the model fits the phenomenon being observed.
The Stanford ML course also addressed the philosophical implications of ML (as opposed to simply teaching the practical aspects). There was a subtle emphasis on why ML techniques are implemented in the way that they are. In particular, the lectures on gradient descent and statistical learning were the most enlightening in this regard. However, I believe there is still a niche for a course on the philosophical implications of machine learning techniques, something that teaches "why" rather than "how" we decide to apply a model to a given problem.
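To make the gradient descent lectures concrete, here is a minimal sketch of batch gradient descent for one-variable linear regression, the canonical example from the course. The data, learning rate, and iteration count below are my own illustrative choices, not taken from the lectures.

```python
def gradient_descent(xs, ys, alpha=0.01, iters=5000):
    """Fit y ~ theta0 + theta1 * x by minimizing mean squared error."""
    theta0, theta1 = 0.0, 0.0
    m = len(xs)
    for _ in range(iters):
        # Residuals of the current hypothesis h(x) = theta0 + theta1 * x
        errs = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        # Partial derivatives of the cost J = (1/2m) * sum(errs^2)
        grad0 = sum(errs) / m
        grad1 = sum(e * x for e, x in zip(errs, xs)) / m
        # Step downhill along the gradient
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Noise-free points lying on y = 1 + 2x; the fit should recover (1, 2).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
t0, t1 = gradient_descent(xs, ys)
```

The same update rule generalizes to many features and to other differentiable cost functions, which is much of why the course leans on it so heavily.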
The course also featured many demos which provided examples of how ML can be applied to statistical analysis and control problems. Autonomous control seemed to be a favorite topic. One assumption I had going into the course was that normally-distributed (e.g. Gaussian) statistical models are required for training and deploying a predictive ML model. However, it was suggested that various classes of Lagrangian model could also be deployed with reasonable rates of learning. This is an area deserving further investigation.
The whole notion of online learning has been the subject of myriad commentary, blog posts, and media speculation. It is currently fashionable to think of online learning as a highly disruptive technology with regard to higher education: ideally, online learning will eliminate the market inefficiencies of the current higher education pricing model. It is of note that many of the most popular online courses (such as Sebastian Thrun's AI course at Udacity/Stanford) are hosted by "elite" universities and taught by the same people who write authoritative textbooks about the field they teach. What is the role of online college-level courses and services such as the Khan Academy? I am a tempered optimist, but it is worth noting that hype always surrounds the emergence of new technologies (or, in this case, new ways of delivering a service).
To cut through the hype, a little perspective is in order. The exposure one gets to the field of ML in a class like the Stanford offering is cursory. The catalog at Coursera (an online course clearinghouse sponsored by major US universities) is likewise made up of introductory offerings. Courses such as these are most useful for continuing education, particularly in a fast-moving field like computational science. I think of the ML course (and others like it) as a distributed digital textbook. These courses are certainly something that can open up professional opportunities and expand the mind, but they are not intended to, and indeed cannot, replace traditional college degree programs.
Rather than putting non-elite CS departments out of business, courses such as this may well create opportunities for niche course offerings. If basic courses could be provided by online services, the resources of local faculty could be spent on more specialized and esoteric courses geared towards the specific strengths of the institution and its faculty.
 Machine Learning is a set of techniques and tools that used to fall under the name Artificial Intelligence (AI). While most people generally associate Artificial Intelligence with GOFAI ("good old-fashioned AI"), Machine Learning, a subfield of AI, is a more limited attempt to apply advanced statistical techniques to classification and inference problems.
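 As a toy illustration of statistical classification in this sense, here is a nearest-centroid rule, one of the simplest statistical classifiers: each class is summarized by the mean of its training points, and a new point is assigned to the class with the closest mean. The data and labels below are made up for the example.

```python
def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    """Assign x the label whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Two made-up clusters of 2-D training points
train = {
    "a": [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3)],
    "b": [(3.0, 3.0), (2.8, 3.1), (3.2, 2.9)],
}
cents = {label: centroid(pts) for label, pts in train.items()}
label = classify((0.5, 0.5), cents)  # a query point near cluster "a"
```

Real ML methods replace the "closest mean" rule with richer statistical models, but the shape of the problem, learning a categorization scheme from labeled data, is the same.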
 here is a link to Andrew's course courtesy of Academic Earth.
 links to Three-Toed Sloth article and Naked Capitalism article.
 since Machine Learning is largely about learning categorization schemes, the "epistemology" of machine learning is a matter of understanding the bases of categorization and of learning itself. This might take inspiration from animal/human models, or perhaps from models of collective behavior, neither of which is stressed in modern approaches to machine learning.
 the Stanford group has built a proof-of-concept autonomous helicopter that has learned to self-operate using reinforcement learning: YouTube video 1, YouTube video 2. For a more general review article (from 2001), please see: "The Roles of Machine Learning in Robust Autonomous Systems" (David Kortenkamp, Proceedings of the AAAI).
 there is no Wikipedia page for "Lagrangian Probability Distributions", but suffice it to say they include various non-uniform distributions such as the Poisson and the Exponential. As a reference for understanding the formalisms and minutiae of Lagrangian distributions, I used the book by Consul and Famoye.
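 As a small sketch of the formalism, here is the pmf of the generalized (Lagrangian) Poisson distribution as given by Consul and Famoye, P(X = k) = θ(θ + kλ)^(k−1) e^(−θ−kλ) / k!; setting λ = 0 recovers the ordinary Poisson, which makes a handy sanity check. The parameter values below are arbitrary.

```python
import math

def gen_poisson_pmf(k, theta, lam):
    """Generalized (Lagrangian) Poisson pmf, per Consul & Famoye."""
    return (theta * (theta + k * lam) ** (k - 1)
            * math.exp(-theta - k * lam) / math.factorial(k))

def poisson_pmf(k, mu):
    """Ordinary Poisson pmf, for comparison."""
    return mu ** k * math.exp(-mu) / math.factorial(k)

# With lam = 0 the generalized form collapses to the standard Poisson.
p_general = gen_poisson_pmf(3, 2.0, 0.0)
p_plain = poisson_pmf(3, 2.0)
```

The extra λ parameter lets the distribution model over- or under-dispersed count data that an ordinary Poisson (whose mean equals its variance) cannot fit.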
 here is an article about the potential of Coursera, here is a skeptical take on the quality of online education from Larry Moran at Sandwalk, here is an article about the role of the Khan Academy in a global society, here is an account of John Hawks' d.i.y. experience in online teaching, and here is an article about Peter Thiel's solution to the higher-education pricing bubble.
 the Coursera catalog can be found here. It includes courses from faculty at Michigan, Stanford, UC Berkeley, Princeton, and other top-tier institutions.
To what does the following picture refer? Hint can be found here.