Robustness of Deep Convolutional Neural Networks for Image Recognition
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
ORCID iD: 0000-0001-8804-5884
2016 (English). In: Intelligent Computing Systems: First International Symposium, ISICS 2016, Mérida, México, March 16-18, 2016, Proceedings / [ed] Anabel Martin-Gonzalez, Victor Uc-Cetina. Cham: Springer, 2016, Vol. 597, p. 16-30. Conference paper, Published paper (Refereed).
Abstract [en]

Recent research has found deep neural networks to be vulnerable, in terms of prediction error, to images corrupted by small amounts of non-random noise. These images, known as adversarial examples, are created by exploiting the network's input-to-output mapping. Using the MNIST database, this paper examines how well known regularization and robustness methods improve the generalization performance of deep neural networks when classifying adversarial examples and examples perturbed with random noise. These methods are compared with the proposed robustness method, an ensemble of models trained on adversarial examples, which clearly reduces prediction error. In addition to the robustness experiments, human classification accuracy on adversarial examples and on randomly perturbed examples is measured and compared to the accuracy of deep neural networks under the same experimental settings. The results indicate that human performance does not suffer from neural network adversarial noise.
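The record does not state how the adversarial examples were generated. As an illustration only, the sketch below shows the fast gradient sign method (FGSM), a standard technique from this era for crafting adversarial examples on MNIST by exploiting the network's input-to-output mapping. The model, loss, and epsilon value are assumptions for illustration, not the authors' configuration.

```python
# Hedged sketch of FGSM adversarial example generation (assumed method,
# not confirmed by the record). Assumes a TF2 Keras classifier that
# outputs logits and MNIST images scaled to [0, 1].
import tensorflow as tf

def fgsm_adversarial(model, images, labels, epsilon=0.1):
    """Perturb `images` in the direction that increases the loss."""
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)  # track gradients w.r.t. the inputs, not weights
        logits = model(images)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            labels, logits, from_logits=True)
    grad = tape.gradient(loss, images)
    # Shift each pixel by +/- epsilon along the sign of the loss gradient,
    # producing a small non-random perturbation that raises prediction error.
    adv = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adv, 0.0, 1.0)
```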

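The proposed robustness method is described only as an ensemble of models trained on adversarial examples. One common way to combine such an ensemble at prediction time is to average the members' class probabilities; the sketch below assumes that averaging scheme, which is an assumption on our part rather than a detail from the record.

```python
# Hedged sketch: ensemble prediction by probability averaging (assumed
# combination rule). Each model in `models` is presumed to have been
# trained or fine-tuned on adversarially perturbed data.
import tensorflow as tf

def ensemble_predict(models, images):
    """Average softmax outputs across models and return class indices."""
    probs = sum(tf.nn.softmax(m(images)) for m in models) / len(models)
    return tf.argmax(probs, axis=-1)
```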
Place, publisher, year, edition, pages
Cham: Springer, 2016. Vol. 597, p. 16-30
Series
Communications in Computer and Information Science, ISSN 1865-0929
Keywords [en]
Adversarial examples, Deep neural network, Noise robustness
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-31443
DOI: 10.1007/978-3-319-30447-2_2
ISI: 000378489600002
Scopus ID: 2-s2.0-84960448659
ISBN: 978-3-319-30446-5 (print)
ISBN: 978-3-319-30447-2 (print)
OAI: oai:DiVA.org:hh-31443
DiVA, id: diva2:944068
Conference
First International Symposium, ISICS 2016, Mérida, México, March 16-18, 2016
Available from: 2016-06-28. Created: 2016-06-28. Last updated: 2018-03-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Uličný, Matej; Lundström, Jens; Byttner, Stefan

