MTL-Split: Multi-Task Learning for Edge Devices using Split Computing

1 Department of Engineering for Innovation Medicine, University of Verona, Italy 2 Department of Computer Science, The University of North Carolina at Chapel Hill, USA
MTL-Split teaser
The proposed architecture for handling complex inference tasks on edge devices by integrating Split Computing (SC) and Multi-Task Learning (MTL). This architecture consists of two components: i) a shared backbone deployed on the edge device, and ii) a series of task-solving heads on a single or multiple remote devices. Orange trapezoids denote DNN models, with their parameters enclosed in red boxes. The green components on the right-hand side are the loss functions used to update the learnable parameters. A communication network separates the edge and remote devices.

Abstract

Split Computing (SC), where a Deep Neural Network (DNN) is intelligently split, with one part deployed on an edge device and the rest on a remote server, is emerging as a promising approach. It allows the power of DNNs to be leveraged for latency-sensitive applications that neither permit the entire DNN to be deployed remotely nor have sufficient computational capacity available locally. In many such embedded-system scenarios, such as those in the automotive domain, computational resource constraints also necessitate Multi-Task Learning (MTL), where the same DNN is used for multiple inference tasks instead of having a dedicated DNN for each task, which would require more computing resources. However, how to partition such a multi-tasking DNN for deployment within an SC framework has not been sufficiently studied. This paper studies this problem, and MTL-Split, our novel proposed architecture, shows encouraging results on both synthetic and real-world data.
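As a rough illustration of the split described above, the sketch below uses plain NumPy matrices as stand-ins for trained DNN layers: a shared backbone runs on the edge device, and its intermediate features are sent over the network to task-specific heads on the remote side. All layer sizes and function names here are hypothetical, not the actual MTL-Split implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes; a real backbone/head would be a trained DNN.
D_IN, D_FEAT, D_TASK1, D_TASK2 = 32, 16, 10, 4

# --- Edge device: shared backbone (a linear layer + ReLU as a stand-in) ---
W_backbone = rng.standard_normal((D_IN, D_FEAT))

def backbone(x):
    """Compute the shared intermediate representation on the edge device."""
    return np.maximum(x @ W_backbone, 0.0)

# --- Remote server: one head per task, consuming the transmitted features ---
W_head1 = rng.standard_normal((D_FEAT, D_TASK1))  # e.g. a classification head
W_head2 = rng.standard_normal((D_FEAT, D_TASK2))  # e.g. a regression head

def remote_heads(features):
    """Solve both tasks from the same features received over the network."""
    return features @ W_head1, features @ W_head2

x = rng.standard_normal((1, D_IN))
feats = backbone(x)               # computed locally, then transmitted
out1, out2 = remote_heads(feats)  # computed remotely
print(out1.shape, out2.shape)     # (1, 10) (1, 4)
```

The key property this sketch captures is that only the compressed feature tensor (here 16 values instead of the 32-value input) crosses the network, and both task heads reuse the same edge-side computation.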

BibTeX

TBA.