Abstract: In this study, we propose a structure that reduces model size and speeds up a classifier using inverted residual blocks. Model size reduction is one of the main techniques for running convolutional neural network computations on embedded systems. To obtain a classifier that is both small and fast, we compare and analyze experimental results for different settings of the channel expansion parameter in the inverted residual block proposed in MobileNetV2. Experiments were conducted on the CIFAR-10 dataset for training and testing. Compared with MobileNetV2, the proposed method achieves a 60% reduction in model size and a 50% reduction in inference time at the cost of a 1.7% drop in accuracy.
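To illustrate why the channel expansion parameter drives model size, the following sketch (not the authors' code; layer structure and the example channel counts are assumptions based on the standard MobileNetV2 inverted residual block) counts the weight parameters of one block — a 1x1 expansion convolution, a 3x3 depthwise convolution, and a 1x1 projection convolution — as a function of the expansion factor t:

```python
def inverted_residual_params(c_in: int, c_out: int, t: int) -> int:
    """Weight parameter count of one inverted residual block.

    Layers (batch norm and bias parameters ignored for simplicity):
      1x1 expansion conv:  c_in -> t * c_in channels
      3x3 depthwise conv:  t * c_in channels, one 3x3 filter each
      1x1 projection conv: t * c_in -> c_out channels
    """
    expanded = t * c_in
    expand_params = c_in * expanded          # 1x1 pointwise expansion
    depthwise_params = expanded * 3 * 3      # 3x3 depthwise
    project_params = expanded * c_out        # 1x1 pointwise projection
    return expand_params + depthwise_params + project_params


# Hypothetical example: 32 input/output channels.
# MobileNetV2's default expansion factor is t = 6; shrinking it to
# t = 2 cuts this block's parameter count to a third.
print(inverted_residual_params(32, 32, 6))  # → 14016
print(inverted_residual_params(32, 32, 2))  # → 4672
```

Because the expansion factor multiplies the channel count in all three layers, reducing t shrinks both the parameter count and the multiply-accumulate work roughly linearly, which is consistent with the reported model size and inference time reductions.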