Machine Learning Pipelines with Containers: A Hands-on Quick-start

Workshop | Day 1 | 8:30 am | 240 Minute Duration | Grand Gallery C

The three hottest trends in machine learning are containers, containers, and containers! Data science teams are using these lightweight software bundles to customize machine learning environments around their own library preferences, to resolve their unique dependencies, and to prepare those solutions for deployment. The flexibility of containers makes assembling complex, heterogeneous model pipelines relatively easy, and that matters because a model only begins to create value for the business once it is deployed.

This workshop is a hands-on, quick-start introduction to modeling with containers. After this session you'll understand the capabilities of these executable packages and how they can contribute to solving a problem, and you'll have enough hands-on experience to start applying containers to your own opportunities.

During the workshop, we'll provide an environment for building big-data pipelines and walk through examples designed to show the flexibility of containers. Attendees will get a chance to do the following:

  • Create your first container and customize your environment
  • Build your first containerized models, e.g., classification algorithms
  • Serve your models as an ensemble via APIs (a minimal sketch follows this list)
  • Invoke and deploy a complex algorithm within a container, e.g., deep learning for image classification
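
To make the ensemble-serving step concrete, here is a minimal sketch of what exposing two classifiers behind a prediction API from inside a container might look like. The library choices (scikit-learn, Flask), the endpoint name, and the port are illustrative assumptions, not the workshop's required stack.

    # Minimal sketch: serve two scikit-learn classifiers as a soft-voting
    # ensemble behind a REST API. Libraries, endpoint, and port are
    # illustrative assumptions, not the workshop's exact stack.
    import numpy as np
    from flask import Flask, jsonify, request
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    app = Flask(__name__)

    # Train two simple classifiers on a toy dataset at startup.
    X, y = load_iris(return_X_y=True)
    models = [
        LogisticRegression(max_iter=1000).fit(X, y),
        RandomForestClassifier(n_estimators=50).fit(X, y),
    ]

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
        features = np.array(request.get_json()["features"])
        # Average class probabilities from each model (soft voting).
        probs = np.mean([m.predict_proba(features) for m in models], axis=0)
        return jsonify({"predictions": probs.argmax(axis=1).tolist()})

    if __name__ == "__main__":
        # Bind to 0.0.0.0 so the API is reachable from outside the container.
        app.run(host="0.0.0.0", port=5000)

Packaging a script like this is then a matter of copying it into an image with its dependencies and exposing the chosen port, which is exactly the kind of step you'll practice during the session.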

We'll also present bonus material on open-source options, job orchestration, strategies for when your data and algorithms grow in complexity, and further extensions of containerized model pipelines.

Prerequisites:

  • We won't dive into the mathematics behind the algorithms, so don't worry if you're not a PhD-level data scientist.
  • Bring your laptop and be ready to connect to our cluster; we'll take care of the rest.