Tutorial
Monday, May 20
 

10:30am PDT

Using Amazon SageMaker to Operationalize Machine Learning
In this tutorial, we'll talk about how Amazon Web Services customers are using Amazon SageMaker, a fully managed service for machine learning, to accelerate time-to-production for their ML models. SageMaker covers the entire machine learning workflow: label and prepare your data, choose an algorithm, train it, tune and optimize it for deployment, make predictions, and take action.

Takeaways:
There are multiple aspects to operationalizing ML, including security and standardized environments for data exploration and ingestion, scaling training from small datasets to large datasets, readying models for deployment, and monitoring/managing production deployments. We'll talk about how SageMaker helps with each of these stages.
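The stages listed above can be sketched as a plain-Python skeleton. This is illustrative only: every function name here is hypothetical, and in practice SageMaker provides managed equivalents of each step (labeling, training, hyperparameter tuning, hosted endpoints, and model monitoring).

```python
# Illustrative skeleton of the operational ML stages described above.
# All names are hypothetical stand-ins for SageMaker's managed features.

def prepare_data(raw):
    """Label/clean the raw records (stand-in for data prep and labeling)."""
    return [(x, x > 0) for x in raw]

def train(dataset):
    """'Train' a trivial threshold model (stand-in for a training job)."""
    positives = sum(1 for _, label in dataset if label)
    return {"threshold": 0.0, "positive_rate": positives / len(dataset)}

def tune(model, dataset):
    """Pick the best threshold on held-out data (stand-in for HPO)."""
    best = max((0.0, 1.5),
               key=lambda t: sum((x > t) == label for x, label in dataset))
    model["threshold"] = best
    return model

def deploy(model):
    """Wrap the model as a callable (stand-in for a hosted HTTPS endpoint)."""
    return lambda x: x > model["threshold"]

def monitor(endpoint, inputs):
    """Track prediction traffic (stand-in for production monitoring)."""
    return {"requests": len(inputs), "positives": sum(endpoint(x) for x in inputs)}

dataset = prepare_data([-2.0, -1.0, 1.0, 2.0])
model = tune(train(dataset), dataset)
endpoint = deploy(model)
stats = monitor(endpoint, [3.0, -3.0])
```

The point of the skeleton is the shape of the lifecycle, not the model: each stage hands a well-defined artifact to the next, which is what makes the workflow automatable.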

Prerequisites:
Understanding of the full ML workflow, but otherwise none.

Speakers
KV

Kumar Venkateswar

Amazon
Kumar Venkateswar currently leads the Amazon SageMaker product management team. He came to Amazon to improve the machine learning platform by launching Amazon SageMaker, the Amazon Deep Learning AMI, and several features in Amazon Machine Learning which provided a better user...


Monday May 20, 2019 10:30am - 11:30am PDT
Lawrence/San Tomas/Lafayette Rooms

11:30am PDT

Nvidia “NGC” Deep Learning Containers
This tutorial will cover Nvidia's NGC containers for deep learning, including: which deep learning frameworks and utilities are provided in NGC containers; how to access and use these containers; which GPUs and cloud services can run them; the sample and example code included in NGC containers that implements deep learning models; and the latest features that simplify achieving optimal performance and supporting multi-node training.

Speakers
CT

Chetan Tekur

Nvidia
Chetan Tekur is a field applications engineer and solutions architect at Nvidia, where he's focused on CSP and networking customers. Chetan has 11+ years of experience in the hardware industry, focusing on pre-sales/post-sales support, technology evangelism, and customer management. He has...
FG

Fraser Gardiner

Nvidia
Fraser Gardiner is the Solutions Architecture Director supporting Nvidia's major cloud service provider partners. He has over 25 years of experience with Unix/Linux systems, supporting both mission-critical enterprise applications and industrial/hyperscale use cases. He has...


Monday May 20, 2019 11:30am - 12:30pm PDT
Lawrence/San Tomas/Lafayette Rooms

1:30pm PDT

Deep Learning Lifecycle Management with Kubernetes, REST, and Python
In this tutorial, we will learn the basics of using a trained deep learning model in applications through a REST API. We will deploy a model both with pure Python and with TensorFlow Serving. Each application should be as simple as possible. The most straightforward way to build a REST service around a Keras model is to put the model into a Python web framework. Flask is a minimalistic framework and a good choice for simple applications or an MVP. To support multiple models, or multiple versions of one model at scale, we will use TensorFlow Serving to build a scalable API able to serve hundreds or thousands of requests per second.
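The pure-Python approach described above might look like the following minimal Flask sketch. The `predict` function here is a stand-in for a real Keras model's `model.predict(...)` call, so the example stays self-contained; the route name and payload shape are illustrative choices, not part of any standard.

```python
# Minimal Flask REST wrapper around a model, as described above.
# predict() is a stand-in for a real Keras model's predict call.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    """Stand-in scoring function; replace with model.predict(...)."""
    return sum(features)

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    payload = request.get_json(force=True)
    score = predict(payload["features"])
    return jsonify({"score": score})
```

Run it with `app.run()` for development. This pattern breaks down under heavy load or when serving many model versions, which is exactly where TensorFlow Serving takes over in the second half of the tutorial.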

Prerequisites:
Python3, experience with Docker, and basic knowledge of REST web services.

Speakers

Monday May 20, 2019 1:30pm - 2:30pm PDT
Winchester Room

2:30pm PDT

ModelOps on AWS
In this talk/tutorial we will discuss the principles of DevOps and how they apply to Machine Learning (aka ModelOps) followed by a tutorial on how to apply those principles on AWS using Amazon SageMaker.

Prerequisites: Some ML experience (this is not a Machine Learning 101 talk)

Speakers
JC

John Calhoun

AWS
John Calhoun is a machine learning specialist for AWS Public Sector. He works with customers and partners to provide leadership on machine learning, helping them shorten their time to value when using AWS.


Monday May 20, 2019 2:30pm - 3:30pm PDT
Winchester Room

4:00pm PDT

Accelerating the Machine Learning Lifecycle with MLflow
ML development brings many new complexities beyond traditional software development. Unlike traditional software developers, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce their work. In addition, developers need to use many distinct systems to productionize models. To address these problems, many companies are building custom "ML platforms" that automate this lifecycle, but even these platforms are limited to a few supported algorithms and to each company's internal infrastructure.

In this session, we introduce MLflow, an open source ML platform started by Databricks in 2018 that is designed to integrate easily with arbitrary ML libraries, deployment tools, and workflows. MLflow introduces simple abstractions to package reproducible pipelines, track results, and encapsulate models, streamlining the sharing and productionizing of ML. The project has a fast-growing open source community, with 80 contributors from over 40 companies, and integrations with Python, Java, R, and dozens of ML libraries and services. We show how to set up MLflow and execute various workflows in it based on best practices from current users.

Speakers
AC

Andrew Chen

Databricks


Monday May 20, 2019 4:00pm - 5:00pm PDT
Winchester Room

5:00pm PDT

Consistent Multi-Cloud AI Lifecycle Management with Kubeflow
The AI/ML lifecycle consists of several steps, ranging from accessing data to training models and then deploying them. This process is an involved one and is the subject of rapid engineering (especially in open source) and research (e.g., OpML). In this tutorial, we articulate the technical challenges faced during AI/ML lifecycle management by a variety of personas, ranging from the ML scientist to the ML DevOps engineer. We introduce Kubeflow, a consistent platform across multiple clouds, to help solve the challenges of multi-cloud AI/ML lifecycle management.

Speakers
DD

Debo Dutta

Cisco
Debo is a distinguished engineer at Cisco, where he incubated and now leads an AI/ML systems team. His team's efforts include major contributions to Kubeflow and neural architecture search (AutoML).
XH

Xinyuan Huang

Cisco
Xinyuan Huang is a software engineer at Cisco, where he works on research and development of AI/ML systems. He is an active member in the Kubeflow community and owner of the Kubebench project.


Monday May 20, 2019 5:00pm - 6:00pm PDT
Winchester Room
 