Abstract: In this study, we propose a structure for reducing model size and speeding up a classifier built on inverted residual blocks. Reducing model size is one of the main techniques for running convolutional neural network computation on embedded systems. To obtain a classifier structure that is both small and fast, we compare and analyze experimental results for different channel expansion parameters in the inverted residual block proposed in MobileNetV2. Experiments were conducted on the CIFAR-10 dataset for training and testing; compared with MobileNetV2, our method achieves a 60% reduction in model size and a 50% reduction in inference time at the cost of a 1.7% drop in accuracy.
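To illustrate why the channel expansion parameter dominates model size, the sketch below counts the (bias- and batch-norm-free) weights in a MobileNetV2-style inverted residual block: a 1x1 expansion convolution, a 3x3 depthwise convolution, and a 1x1 projection. The channel widths and the reduced expansion factor `t=2` are illustrative assumptions, not the paper's reported configuration.

```python
def inverted_residual_params(c_in, c_out, t, k=3):
    """Weight count of a MobileNetV2-style inverted residual block
    (ignoring batch-norm and bias parameters)."""
    c_mid = t * c_in               # expanded channel width
    expand = c_in * c_mid          # 1x1 pointwise expansion conv
    depthwise = k * k * c_mid      # kxk depthwise conv (one filter per channel)
    project = c_mid * c_out        # 1x1 pointwise projection conv
    return expand + depthwise + project

# Parameter count grows roughly linearly with the expansion factor t,
# so shrinking t shrinks the block almost proportionally:
baseline = inverted_residual_params(32, 32, t=6)  # MobileNetV2 default t=6
reduced = inverted_residual_params(32, 32, t=2)   # illustrative smaller expansion
print(baseline, reduced)  # 14016 4672
```

Since the 1x1 expansion and projection convolutions each scale with `t`, lowering the expansion factor is the lever the study varies to trade a small accuracy loss for a large reduction in size and inference time.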
Seong-Kyun Han and Soon-Chul Kwon, 2018. A Study on Channel Expansion Structure for Reducing Model Size and Speeding Up of Classifier Using Inverted Residual Block. Journal of Engineering and Applied Sciences, 13: 8670-8674.