Neural Networks from Scratch with PyTorch

Abstract: 

There are many good tutorials on neural networks out there. Some dive deep into the code and show how to implement things; others explain what is going on via diagrams or math; but very few bring together all the concepts needed to understand neural networks, showing diagrams, code, and math side by side. In this tutorial, I'll present a clear, step-by-step explanation of neural networks, implementing them from scratch in NumPy, while showing both diagrams that explain how they work and the math that explains why they work. We'll cover standard feedforward neural networks and convolutional neural networks (also from scratch), as well as recurrent neural networks (time permitting). Finally, we'll be sure to leave time to translate what we learn into performant, flexible PyTorch code so you can apply what you've learned to real-world problems.

No background in neural networks is required, but a familiarity with the terminology of supervised learning (e.g. training set vs. testing set, features vs. target) will be helpful.

Bio: 

Brad Heintz is a partner engineer at Facebook working on PyTorch, an open source ML framework for research and production. He spreads PyTorch knowledge to audiences in person and online, and engages with the community to make sure their feedback is heard and incorporated. He is passionate about applying his interdisciplinary background to interesting technology domains, including Facebook's Spark AR augmented reality authoring platform, Big Data analytics using Hadoop and related tools, and haptic interfaces. He's been building software, professionally and for fun, for forty years.