AISys

Workshop on AI Systems

at the Symposium on Operating Systems Principles (SOSP)

October 28, 2017

Accepted Posters

Experiences of Deep Learning Optimization on PAI

  • Wang Siyu*, Alibaba Group; Jun Yang, Alibaba Group; Li Lanbo, Alibaba Group

Optimization Mapping for Deep Learning

  • Wencong Xiao*, Beihang University; Cheng Chen, Microsoft Research; Youshan Miao, Microsoft Research; Jilong Xue, Microsoft Research; Ming Wu, Microsoft Research

Learning with Less: Can Approximate Storage Systems Save Learning From Drowning in Data?

  • Nitin Agrawal*, Samsung Research; Ashish Vulimiri, Samsung Research

From Cloud to Edge: Enabling Fast Inference Computation for Convolutional Neural Networks on ARM-based Multi-core and Many-core Architectures

  • Haidong Lan, Tencent; Jintao Meng*, Tencent; Fangyun Zhou, Tencent; Chengjun Zhan, Tencent; Minwen Deng, Tencent; Weiguo Liu, Shandong University

Litz: Transparent Elasticity for High-Performance Machine Learning

  • Aurick Qiao*; Abutalib Aghayev; Weiren Yu; Haoyang Chen; Qirong Ho; Garth Gibson; Eric Xing, Petuum, Inc.

Cavs: A Vertex-centric Symbolic Programming Model for Dynamic Dataflow Graphs

  • Shizhen Xu, Carnegie Mellon University; Hao Zhang*, Carnegie Mellon University; Graham Neubig, Carnegie Mellon University; Qirong Ho, Petuum Inc.; Guangwen Yang, Tsinghua University; Eric Xing, Petuum Inc.

High Accuracy Approximation of Secure Multiparty Neural Network Training

  • Daniel Ho*, UC Berkeley; Xin Wang, UC Berkeley; Wenting Zheng, UC Berkeley; Joseph Gonzalez, UC Berkeley; Raluca Popa, UC Berkeley; Ion Stoica, UC Berkeley

DAWNBench: An End-to-End Deep Learning Benchmark and Competition

  • Cody Coleman, Stanford University; Deepak Narayanan*, Stanford University; Daniel Kang, Stanford University; Tian Zhao, Stanford University; Jian Zhang, Stanford University; Luigi Nardi, Stanford University; Peter Bailis, MIT and Stanford University; Kunle Olukotun, Stanford University; Christopher Re, Stanford University; Matei Zaharia, Stanford University

A Reliable, Self-Adaptive Face Identification Framework via Lyapunov Optimization

  • Dohyeon Kim, Chung-Ang University; Joongheon Kim*, Chung-Ang University; Jae young Bang, Kakao Corporation

Auto-Parallelizing Deep Learning for Multi-machine, Multi-GPU Environments

  • Soojeong Kim*, Seoul National University; Eunji Jeong, Seoul National University; Joo Seong Jeong, Seoul National University; Gyeong-In Yu, Seoul National University; Hojin Park, Seoul National University; Byung-Gon Chun, Seoul National University

Decentralized Distributed Deep Learning

  • Wei Zhang*, IBM T.J. Watson; Xiangru Liang, University of Rochester; Ce Zhang, ETH; Ji Liu, University of Rochester

Low Latency Model Serving with Late Batching

  • Pin Gao*, Tsinghua University; Lingfan Yu, New York University; Jinyang Li, New York University; Yongwei Wu, Tsinghua University

HyperDrive: Flexible and Efficient Parallel Hyperparameter Exploration

  • Jeff Rasley*, Brown University; Yuxiong He, Microsoft; Feng Yan, University of Nevada, Reno; Olatunji Ruwase, Microsoft; Rodrigo Fonseca, Brown University

Symphony: Leveraging Probabilistic Graphical Models to Schedule Tasks to Clusters of Heterogeneous Processors

  • Subho Banerjee*, University of Illinois at Urbana-Champaign; Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign; Steve Lumetta, University of Illinois at Urbana-Champaign; Ravishankar Iyer, University of Illinois at Urbana-Champaign

Improving Sample-Based Cluster Scheduling with Learning-Based Private Cluster State

  • Chunliang Hao*; Yinrun Lyu; Jie Shen; Celia Chen; Jian Zhai; Mingshu Li, Institute of Software, Chinese Academy of Sciences