ML Systems

Workshop on ML Systems

at NIPS 2017

December 8, 2017

TensorFlow-Serving: Flexible, High-Performance ML Serving: Christopher Olston*, Google; Noah Fiedel, Google; Kiril Gorovoy, Google; Jeremiah Harmsen, Google; Li Lao, Google; Fangwei Li, Google; Vinu Rajashekhar, Google; Sukriti Ramesh, Google; Jordan Soyke, Google

Sketchy Inference: Towards Streaming LDA: Jean-Baptiste Tristan*, Oracle Labs; Michael Wick, Oracle Labs; Joseph Tassarotti, Carnegie Mellon University

Towards High-Performance Prediction Serving Systems: Yunseong Lee*, Seoul National University; Alberto Scolari, Politecnico di Milano; Matteo Interlandi, Microsoft; Markus Weimer, Microsoft; Byung-Gon Chun, Seoul National University

CatBoost: gradient boosting with categorical features support: Anna Veronika Dorogush*, Yandex; Vasily Ershov, Yandex; Andrey Gulin, Yandex

UberShuffle: Communication-efficient Data Shuffling for SGD via Coding Theory: Jichan Chung*, EE at KAIST; Kangwook Lee, EE at KAIST; Ramtin Pedarsani, ECE at UC Santa Barbara; Dimitris Papailiopoulos, ECE at University of Wisconsin-Madison; Kannan Ramchandran, EECS at UC Berkeley

Automatically Tracking Metadata and Provenance of Machine Learning Experiments: Sebastian Schelter*, Amazon; Joos-Hendrik Boese, Amazon; Johannes Kirschnick, Amazon; Thoralf Klein, Amazon; Stephan Seufert, Amazon

An Open-Source Recipe for Real-Time, Adaptive Classifiers in a Planetary-Scale Network: Christopher Kappler*, Facebook; Dejan Curcic, Facebook

CuPy: A NumPy-Compatible Library for NVIDIA GPU Calculations: Ryosuke Okuta, Preferred Networks; Yuya Unno, Preferred Networks; Daisuke Nishino, Preferred Networks; Shohei Hido*, Preferred Networks; Crissman Loomis

Training Deeper Models by GPU Memory Optimization on TensorFlow: Chen Meng*, Alibaba Group; Minmin Sun, Alibaba; Jun Yang, Alibaba; Minghui Qiu, Alibaba; Yang Gu, Alibaba Group

The Data Linter: Lightweight, Automated Sanity Checking for ML Data Sets: Michael Terry*, Google; D Sculley, Google; Nick Hynes, UC Berkeley

Realtime Semantic Segmentation Benchmarking Framework: Mennatullah Siam*, University of Alberta; Mostafa Gamal, Faculty of Engineering, Cairo University; Moemen AbdelRazek, Cairo University; Senthil Yogamani, Valeo Vision Systems

Distributed Machine Learning - but at what COST?: Christoph Boden*, TU Berlin; Tilmann Rabl, Technische Universität Berlin; Volker Markl, Technische Universität Berlin

A Machine Learning Approach to Databases Indexes: Alex Beutel*, Google; Tim Kraska, Brown University; Ed Chi, Google; Jeff Dean, Google; Neoklis Polyzotis, Google

DLVM: A modern compiler framework for neural network DSLs: Richard Wei, University of Illinois at Urbana-Champaign; Lane Schwartz*, University of Illinois at Urbana-Champaign; Vikram Adve, University of Illinois at Urbana-Champaign

ChainerMN: Scalable Distributed Deep Learning Framework: Takuya Akiba, Preferred Networks, Inc.; Keisuke Fukuda*, Preferred Networks, Inc.; Shuji Suzuki, Preferred Networks, Inc.

How Much Should We Invest for Network Facility: Quantitative Analysis on Network 'Fatness' and Machine Learning Performance: Hidemoto Nakada*, AIST

DAWNBench: An End-to-End Deep Learning Benchmark and Competition: Cody Coleman*, Stanford; Deepak Narayanan, Stanford; Daniel Kang, Stanford University; Tian Zhao, Stanford University; Jian Zhang, Stanford; Luigi Nardi, Stanford; Peter Bailis, MIT and Stanford; Kunle Olukotun, Stanford University; Christopher Re, Stanford University; Matei Zaharia, Stanford and Databricks

Tangent: Automatic Differentiation Using Source Code Transformation in Python: Bart van Merriënboer, Alexander B Wiltschko*, Dan Moldovan, Google Inc.

Scalable Multi-Framework Multi-Tenant Lifecycle Management of Deep Learning Training Jobs: Scott Boag, Parijat Dube, Benjamin Herta*, Waldemar Hummer, Vatche Ishakian, K. R. Jayaram, Michael Kalantar, Vinod Muthusamy, Priya Nagpurkar, Florian Rosenberg; IBM Research

NSML: A Machine Learning Platform That Enables You to Focus on Your Models: Nako Sung, NAVER; Minkyu Kim, NAVER; Hyunwoo Jo, NAVER; Youngil Yang, NAVER; Jinwoong Kim, NAVER; Leonard Lausen, NAVER & HKUST; Youngkwan Kim, NAVER; Gayoung Lee, NAVER Webtoon; Donghyun Kwak, Search Solution Inc; Jung Woo Ha*, NAVER; Sung Kim, NAVER & HKUST

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training: Yujun Lin*, Tsinghua University; Song Han, Stanford University; Huizi Mao, Stanford University; Yu Wang, Tsinghua University; Bill Dally, Stanford University

Synkhronos: a Multi-GPU Theano Extension for Data Parallelism: Adam Stooke*, UC Berkeley; Pieter Abbeel, UC Berkeley

Cavs: A Vertex-centric Programming Interface for Dynamic Neural Networks: Shizhen Xu, Tsinghua University; Hao Zhang*, Carnegie Mellon University; Graham Neubig, Carnegie Mellon University; Wei Dai, Carnegie Mellon University; Qirong Ho, Institute for Infocomm Research; Guangwen Yang, Tsinghua University; Eric Xing, Petuum Inc.