Productionizing Your Machine Learning Models


You've developed and trained your ML model, and it performs beautifully in your development environment -- but what happens when you move it into production and it suddenly has to scale to massively varying, elastic workloads, compete with other models for memory and processing resources, or mesh with models deployed in other languages and frameworks?

It isn't enough to simply fire up a machine instance, write a Flask wrapper, and call it a day: properly productionizing a model requires a deep understanding of container management, load balancing, CI/CD, dynamic resource allocation, and more. In this talk, we'll look at what your team does and does not need to build in order to move from weeks of deployment time to mere minutes, while preserving elasticity, low latency, and flexibility.


Colin Spikes, Senior Manager of Solution Engineering at Algorithmia, is an experienced solution consultant with an extensive background in all things data. Prior to Algorithmia, Colin managed a team of Data Solution Architects at Socrata, helping cities, states, and federal agencies worldwide unlock the power of data to better understand and communicate conditions in their communities.