Experimental energy consumption analysis of neural network model compression methods on microcontrollers with applications in bird call classification

Abstract

Running deep neural networks on low-power microcontrollers has become increasingly practical with advancements in algorithm and hardware design. However, hardware resource limitations such as battery capacity remain a bottleneck that prevents the deployment of large classification algorithms. Model compression techniques address this issue through algorithmic means. This work investigates two model compression methods, pruning and quantization, to optimize a ResNet-18 model for bird call classification. The investigation identifies the contribution of quantization resolution and pruning sparsity to energy consumption, power consumption, inference time, classification accuracy, computational complexity, and memory footprint. © 2022 IEEE.
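
As an illustration of the two compression methods investigated, the sketch below applies unstructured L1 pruning and post-training dynamic quantization to a ResNet-18 in PyTorch. The 50% sparsity level, the choice of quantizing only the Linear layers, and the file names are assumptions for demonstration and do not reflect the paper's exact experimental configuration or its microcontroller deployment pipeline.

# Illustrative sketch, not the paper's exact pipeline: pruning and
# dynamic quantization of a ResNet-18 in PyTorch. Sparsity level and
# quantized-layer choices are assumed values for demonstration only.
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18

model = resnet18(weights=None)  # untrained backbone; the paper trains it for bird calls
model.eval()

# Unstructured L1 pruning: zero out 50% of the weights in every conv layer
# (the 50% sparsity here is an arbitrary example value).
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Post-training dynamic quantization: 8-bit weights for Linear layers.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Rough on-disk size comparison of the compressed models.
torch.save(model.state_dict(), "resnet18_pruned.pt")
torch.save(quantized.state_dict(), "resnet18_pruned_int8.pt")

In practice, pruning sparsity and quantization resolution would be swept over a range of values and each resulting model profiled for energy, latency, and accuracy on the target microcontroller, which is the trade-off space the paper characterizes.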

Publication
Proceedings of IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2022
Yuqian Lu
Principal Investigator / Senior Lecturer

My research interests include smart manufacturing systems, industrial AI and robotics.