KnowledgeNet: Disaggregated and Distributed Training and Serving of Deep Neural Networks


Deep Neural Networks (DNNs) have a significant impact on numerous applications, such as reinforcement learning, object detection, video processing, and virtual/augmented reality. The ever-changing environment forces DNN models to evolve accordingly. In addition, the transition from a cloud-only to an edge-cloud paradigm has made the deployment and training of these models challenging. Addressing these challenges requires new methods and systems for the continuous training and distribution of these models in a heterogeneous environment. In this paper, we propose KnowledgeNet (KN), a new architectural technique for the simple disaggregation and distribution of neural networks for both training and serving. Using KN, DNNs can be partitioned into multiple small blocks and deployed on a distributed set of computational nodes. KN also uses the knowledge transfer technique to provide small-scale models with high accuracy in edge scenarios with limited resources. Preliminary results show that our method maintains state-of-the-art accuracy for a DNN model while it is disaggregated among multiple workers. In addition, by using knowledge transfer, we can compress the model by 62% for deployment while maintaining the same accuracy.
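The abstract does not give implementation details, but the two ideas it names can be sketched in miniature: partitioning a model's layers into contiguous blocks, one per worker, and knowledge transfer in the style of temperature-scaled distillation, where a small student is trained against a large teacher's softened outputs. All names below are illustrative assumptions, not KnowledgeNet's actual API.

```python
import math

def partition(layers, num_workers):
    # Split a list of layer names into near-equal contiguous blocks,
    # one block per worker (illustrative disaggregation, not KN's scheme).
    size, rem = divmod(len(layers), num_workers)
    blocks, start = [], 0
    for w in range(num_workers):
        end = start + size + (1 if w < rem else 0)
        blocks.append(layers[start:end])
        start = end
    return blocks

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge".
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy of the student's softened predictions against the
    # teacher's softened predictions: the core of knowledge transfer.
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s)
                for t, s in zip(teacher_probs, student_probs))
```

For example, `partition(["conv1", "conv2", "conv3", "fc1", "fc2"], 2)` assigns the three convolutional layers to one worker and the two fully connected layers to another, and `distillation_loss` is minimized exactly when the student reproduces the teacher's softened distribution, which is what lets a much smaller model approach the teacher's accuracy.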


Saman Biookaghazadeh, Arizona State University
Yitao Chen, Arizona State University
Kaiqi Zhao, Arizona State University
Ming Zhao, Arizona State University

Monday May 20, 2019 4:00pm - 4:20pm PDT
Stevens Creek Room
