Schedule
08:45 - 09:00 Opening remarks (Sarah Bird)
Session 1 (Chair: Garth Gibson)
09:00 - 09:20 Invited Talk: Ray: A distributed execution engine for emerging AI applications, Ion Stoica, UC Berkeley
09:20 - 09:40 Contributed Talk: The Case for Learning Database Indexes
09:40 - 10:00 Invited Talk: Federated Multi-Task Learning, Virginia Smith, Stanford University
Session 2: Poster Previews (Chair: Joseph Gonzalez)
10:00 - 10:30 1 min lightning talks
10:30 - 11:30 Posters & Coffee
Session 3 (Chair: Erran Li)
11:30 - 11:50 Invited Talk: Accelerating Persistent Neural Networks at Datacenter Scale, Daniel Lo, Microsoft Research
11:50 - 12:10 Contributed Talk: DLVM: A modern compiler framework for neural network DSLs
12:10 - 13:20 Lunch (Optional Vowpal Wabbit tutorial by John Langford from 12:30 - 13:20, hosted at Extreme Classification workshop)
Session 4: ML Systems Updates (Chair: Aparna Lakshmiratan)
13:20 - 14:50 Updates from Current ML Systems: TensorFlow, PyTorch, Caffe2, CNTK, MXNet, TVM, Clipper, MacroBase, ModelDB
Session 5 (Chair: Yangqing Jia)
14:50 - 15:20 Invited Talk: Machine Learning for Systems and Systems for Machine Learning, Jeff Dean, Google Brain
15:20 - 15:40 Invited Talk: Creating an Open and Flexible Ecosystem for AI Models with ONNX, Sarah Bird and Dmytro Dzhulgakov, Facebook Research
15:40 - 16:30 Posters & Coffee
Session 6 (Chair: Dan Crankshaw)
16:30 - 16:50 Contributed Talk: NSML: A Machine Learning Platform That Enables You to Focus on Your Models
16:50 - 17:10 Contributed Talk: DAWNBench: An End-to-End Deep Learning Benchmark and Competition
Session 7 (Chair: Siddhartha Sen)
17:10 - 18:15 Panel on Machine Learning Systems Research: Garth Gibson, Joseph Gonzalez, John Langford, Dawn Song, Yangqing Jia