The robustness of deep neural networks is an increasingly important issue as they become prevalent in real-world applications such as autonomous vehicles. If traffic signs are perturbed into adversarial examples, an autonomous vehicle may be misled and cause fatal accidents. To improve adversarial robustness, this paper proposes a new cost function for training convolutional recognition networks. Recent work has shown that employing the classifier probabilities of the complement (incorrect) classes, in addition to the ground-truth class, in Softmax Cross Entropy yields better performance on adversarial inputs. In this paper, we show that, beyond the information from the Softmax layer, the features extracted by the convolutional layers also enhance robustness. With our new cost function, Regularized Guided Complement Entropy (RGCE), which decreases the outputs of the convolutional layers' activation functions while also utilizing the Softmax output during training, the model performs better under adversarial attacks. The proposed algorithm is evaluated on the CIFAR-10 and GTSRB datasets, and the performance of different convolutional neural networks on clean and adversarial images is reported and compared with other methods. © 2022 Published by Elsevier B.V.
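The abstract does not state the RGCE formula itself. As a minimal sketch, the following assumes the Guided Complement Entropy formulation of Chen et al. (the ground-truth probability, raised to a power α, scales the negative entropy of the renormalized complement-class distribution) and assumes the "decreasing the output of convolutional layers' activation functions" term is an L2 penalty on activation magnitudes; the function names, `alpha`, and `lam` are illustrative choices, not the paper's specification.

```python
import numpy as np

def guided_complement_entropy(probs, y, alpha=0.2):
    """Guided Complement Entropy (GCE), following Chen et al.

    probs: (N, K) softmax probabilities; y: (N,) ground-truth labels.
    Minimizing this loss flattens the distribution over the
    complement (incorrect) classes, guided by the ground-truth
    probability p_g**alpha.
    """
    n = probs.shape[0]
    eps = 1e-12
    p_g = probs[np.arange(n), y]                     # ground-truth probability
    comp = probs.copy()
    comp[np.arange(n), y] = 0.0                      # drop the true class
    comp = comp / (1.0 - p_g + eps)[:, None]         # renormalize complements
    neg_ent = np.sum(comp * np.log(comp + eps), axis=1)  # negative entropy
    return np.mean(p_g**alpha * neg_ent)

def rgce_loss(probs, y, activations, lam=1e-4, alpha=0.2):
    """Hypothetical RGCE: GCE plus an L2 penalty shrinking the
    convolutional activations (the penalty form is an assumption)."""
    reg = sum(np.mean(a**2) for a in activations)
    return guided_complement_entropy(probs, y, alpha) + lam * reg
```

Because the complement term rewards a flat distribution over wrong classes, a sample whose incorrect-class probabilities are uniform incurs a lower loss than one where a single wrong class dominates; the regularizer then adds a non-negative term that discourages large activation outputs.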