Robustness of Deep Convolutional Neural Networks for Image Recognition
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centre for Applied Intelligent Systems Research (IS-lab).
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centre for Applied Intelligent Systems Research (IS-lab). ORCID iD: 0000-0001-8804-5884
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centre for Applied Intelligent Systems Research (IS-lab).
2016 (English). In: Intelligent Computing Systems: First International Symposium, ISICS 2016, Mérida, México, March 16-18, 2016, Proceedings / [ed] Anabel Martin-Gonzalez, Victor Uc-Cetina, Cham: Springer, 2016, Vol. 597, p. 16-30. Conference paper, Published paper (Refereed)
Abstract [en]

Recent research has found deep neural networks to be vulnerable, in terms of prediction error, to images corrupted by small amounts of non-random noise. These images, known as adversarial examples, are created by exploiting the input-to-output mapping of the network. In this paper we examine, on the MNIST database, how well known regularization/robustness methods improve the generalization performance of deep neural networks when classifying adversarial examples and examples perturbed with random noise. We compare these methods with our proposed robustness method, an ensemble of models trained on adversarial examples, which clearly reduces prediction error. Beyond the robustness experiments, we also measure human classification accuracy on adversarial examples and examples perturbed with random noise, and compare it to the accuracy of deep neural networks measured in the same experimental settings. The results indicate that human performance does not suffer from neural-network adversarial noise.
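The abstract describes adversarial examples as images created by exploiting the network's input-to-output mapping. A common construction of this kind is the fast gradient sign method (FGSM); whether this specific method is the one used in the paper is an assumption here. The sketch below uses a hypothetical logistic-regression "network" (the paper uses deep CNNs) purely to make the gradient-sign perturbation concrete:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Gradient-sign perturbation: nudge every pixel of x by +/- eps
    in the direction that increases the cross-entropy loss for the
    true label y, then clip back to valid pixel range [0, 1]."""
    p = sigmoid(w @ x + b)        # model's predicted probability of class 1
    grad_x = (p - y) * w          # d(cross-entropy)/dx for this toy model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=784)          # hypothetical weights for 28x28 inputs
b = 0.0
x = rng.uniform(size=784)         # stand-in for a normalized MNIST image
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.1)
```

Each pixel changes by at most eps, so the image looks nearly unchanged to a human, yet the model's confidence in the true class drops; the paper's ensemble defense trains models on such perturbed examples.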

Place, publisher, year, edition, pages
Cham: Springer, 2016. Vol. 597, p. 16-30
Series
Communications in Computer and Information Science, ISSN 1865-0929
Keywords [en]
Adversarial examples, Deep neural network, Noise robustness
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-31443
DOI: 10.1007/978-3-319-30447-2_2
ISI: 000378489600002
Scopus ID: 2-s2.0-84960448659
ISBN: 978-3-319-30446-5 (print)
ISBN: 978-3-319-30447-2 (print)
OAI: oai:DiVA.org:hh-31443
DiVA, id: diva2:944068
Conference
First International Symposium, ISICS 2016, Mérida, México, March 16-18, 2016
Available from: 2016-06-28 Created: 2016-06-28 Last updated: 2018-03-22 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | Scopus

Person records

Uličný, Matej; Lundström, Jens; Byttner, Stefan