Beautification and augmented reality filters are very popular in applications that use selfie images. However, they can distort or modify biometric features, severely affecting the ability to recognise individuals' identities or even to detect the face. Accordingly, we address the effect of such filters on the accuracy of automated face detection and recognition. The social media image filters studied modify the image contrast or illumination, or occlude parts of the face. We observe that some of these filters are harmful to face detection and identity recognition, especially those that obfuscate the eyes or (to a lesser extent) the nose. To counteract this effect, we develop a method to reverse the applied manipulation with a modified version of the U-NET segmentation network, which is observed to improve face detection and recognition accuracy. From a recognition perspective, we employ distance measures and trained machine learning algorithms applied to features extracted using several CNN backbones. We also evaluate whether incorporating filtered images into the training set of the machine learning approaches is beneficial. Our results show good recognition accuracy when filters do not occlude important landmarks, especially the eyes. The combined effect of the proposed approaches also mitigates the impact of filters that occlude parts of the face. © 2022 The Authors. Published by Elsevier B.V.