Differential calculus lies at the heart of virtually every modern machine learning and deep learning algorithm. From training linear models to optimizing deep neural networks, gradients drive learning. This course is designed to build a strong, intuitive, and hands-on understanding of differential calculus, focused specifically on its role in optimization and learning in AI systems.
The course begins with a quick mathematical refresher on functions, limits, and continuity, ensuring that learners from diverse backgrounds can comfortably progress. You will then dive deep into derivatives from first principles, learning how to compute and interpret them as rates of change and sensitivities in real-world problems.
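As a taste of the first-principles view, the derivative can be approximated directly from the limit definition. The sketch below (function names are illustrative, not part of any course material) uses a central-difference quotient:

```python
def central_difference(f, x, h=1e-5):
    """Approximate f'(x) from first principles via (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Example: f(x) = x^2 has analytic derivative 2x, so f'(3) = 6.
f = lambda x: x ** 2
approx = central_difference(f, 3.0)  # ≈ 6.0
```

The central difference converges faster than the one-sided quotient (error shrinks like h² rather than h), which is why numerical differentiation libraries typically prefer it.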
As the course advances, you will explore partial derivatives, multivariable functions, gradients, and Hessians, which form the backbone of back-propagation and optimization in neural networks.
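To illustrate how these multivariable ideas translate to code, here is a minimal sketch (names are illustrative) of a numerical gradient, the finite-difference analogue of the gradients backpropagation computes exactly:

```python
def numerical_gradient(f, x, h=1e-5):
    """Approximate the gradient of f: R^n -> R at point x (a list of floats),
    perturbing one coordinate at a time (a partial derivative per coordinate)."""
    grad = []
    for i in range(len(x)):
        x_plus = list(x);  x_plus[i] += h
        x_minus = list(x); x_minus[i] -= h
        grad.append((f(x_plus) - f(x_minus)) / (2 * h))
    return grad

# Example: f(x, y) = x^2 + 3y has gradient (2x, 3), so at (2, 1) it is (4, 3).
f = lambda v: v[0] ** 2 + 3 * v[1]
g = numerical_gradient(f, [2.0, 1.0])  # ≈ [4.0, 3.0]
```

The same perturb-one-coordinate idea extends to second derivatives, yielding the Hessian entries that describe a function's local curvature.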
A major emphasis of this course is hands-on practice. You will implement derivatives numerically and symbolically, and code optimization algorithms such as gradient descent, stochastic gradient descent, and momentum-based methods. Through practical labs, you will directly connect calculus theory to training real ML and DL models.
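The optimization loops built in the labs follow the pattern below. This is a hedged sketch of gradient descent with momentum on a one-dimensional quadratic (hyperparameter values and names are illustrative assumptions, not the course's code):

```python
def gradient_descent_momentum(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Minimize a function by momentum gradient descent.
    grad maps a point x to the gradient at x; beta controls how much
    past gradients persist in the velocity term."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)  # accumulate an exponentially decaying velocity
        x = x - lr * v          # step against the smoothed gradient
    return x

# Example: f(x) = (x - 5)^2 has gradient 2(x - 5) and its minimum at x = 5.
x_star = gradient_descent_momentum(lambda x: 2 * (x - 5), x0=0.0)  # ≈ 5.0
```

Setting `beta=0` recovers plain gradient descent; stochastic gradient descent follows the same update but evaluates the gradient on a random mini-batch of data at each step.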
By the end of this course, you will not only know how to compute derivatives, but also how to use calculus as a tool to understand, debug, and improve learning algorithms, giving you a decisive edge in advanced AI studies and real-world projects.
After completing this course, learners will be able to: