Machine learning with deep neural networks (deep learning) is at "peak hype," according to Gartner. For analytics practitioners, this means two things: we believe deep learning can help us solve problems we have never been able to solve before, and we believe it is time to start developing and deploying deep learning models in our work. However, many data scientists and analytics professionals are still unclear on what deep learning is, how it works, and when to apply it.

In this workshop we provide a rigorous yet accessible introduction to deep learning with multi-layer perceptrons (a class of deep neural networks suited to structured data). In part one we differentiate deep learning from traditional machine learning and offer guidance on when to consider using deep learning. In part two we examine the structure and algorithms of perceptrons and multi-layer perceptrons in detail, covering one-hot encoding, matrix multiplication, activation functions, weight initialization, loss functions, gradient descent and optimizers, back-propagation, and error metrics. In part three we discuss tuning a neural network's hyperparameters to improve model performance and efficiency. In part four we discuss common "pitfalls," or ways in which a neural network can struggle, including vanishing gradients, overfitting, and dead neurons. Finally, we conduct a lab in which we build a deep neural network in code with Keras and TensorFlow on a public machine learning data set.

Note: knowledge of linear algebra, calculus, and statistics is beneficial but not required to benefit significantly from this workshop. At the end of the day, building models is the goal, and Keras makes doing so easy with even a basic understanding of programming.
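To give a flavor of the mechanics covered in part two (matrix multiplication, activation functions, and random weight initialization), here is a minimal NumPy sketch of one forward pass through a small multi-layer perceptron. This is an illustrative toy, not the workshop's lab code; the layer sizes, activations, and random data are assumptions chosen for brevity.

```python
import numpy as np

def relu(z):
    # Hidden-layer activation: max(0, z), applied element-wise
    return np.maximum(0.0, z)

def sigmoid(z):
    # Output activation: squashes values into (0, 1) for binary classification
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

X = rng.normal(size=(4, 3))    # 4 samples, 3 input features (toy data)
W1 = rng.normal(size=(3, 5))   # input -> hidden weights (random initialization)
b1 = np.zeros(5)               # hidden biases
W2 = rng.normal(size=(5, 1))   # hidden -> output weights
b2 = np.zeros(1)               # output bias

hidden = relu(X @ W1 + b1)        # hidden layer: matrix multiply + activation
y_hat = sigmoid(hidden @ W2 + b2) # output layer: one probability per sample

print(y_hat.shape)
```

In training, a loss function would compare `y_hat` to the true labels, and back-propagation with gradient descent would adjust `W1`, `b1`, `W2`, and `b2`; Keras automates all of this.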