Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution
Rothfuss, Jonas. Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany. ORCID iD: 0000-0003-0129-0540
Ferreira, Fabio. Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany. ORCID iD: 0000-0002-0816-2042
Aksoy, Eren. Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research; Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany. ORCID iD: 0000-0002-5712-6777
Zhou, You. Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany. ORCID iD: 0000-0002-2540-1869
Asfour, Tamim. Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany.
2018 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 3, no. 4, p. 4007-4014. Article in journal (Refereed). Published.
Abstract [en]

We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory that facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep episodic memory model works as follows: first, it encodes observed actions in a latent vector space; second, based on this latent encoding, it infers the most similar previously experienced episodes; third, it reconstructs the original episodes; and finally, it predicts future frames, all in an end-to-end fashion. Results show that conceptually similar actions are mapped into the same region of the latent vector space. Based on these results, we introduce an action matching and retrieval mechanism, benchmark its performance on two large-scale action datasets, 20BN-something-something and ActivityNet, and evaluate its generalization capability in a real-world scenario on a humanoid robot.

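As a rough illustration of the matching-and-retrieval idea described in the abstract (not the paper's actual architecture), the Python sketch below encodes short frame sequences into latent vectors and recalls the stored episode with the highest cosine similarity. The EpisodicMemory class, its random-projection encoder, and the example labels are hypothetical stand-ins; the paper itself uses a learned deep encoder in place of the random projection.

    import numpy as np

    rng = np.random.default_rng(0)

    class EpisodicMemory:
        """Toy episodic memory: encode frame sequences to latent vectors
        and recall stored episodes by cosine similarity. The random
        projection below merely stands in for a learned deep encoder."""

        def __init__(self, input_dim: int, latent_dim: int = 64):
            self.proj = rng.standard_normal((input_dim, latent_dim))
            self.keys, self.episodes = [], []

        def encode(self, frames: np.ndarray) -> np.ndarray:
            # frames: (T, input_dim); mean-pool over time, then project.
            return frames.mean(axis=0) @ self.proj

        def store(self, frames: np.ndarray, label: str) -> None:
            self.keys.append(self.encode(frames))
            self.episodes.append(label)

        def recall(self, frames: np.ndarray, k: int = 1):
            # Rank stored episodes by cosine similarity to the query code.
            q = self.encode(frames)
            keys = np.stack(self.keys)
            sims = keys @ q / (np.linalg.norm(keys, axis=1)
                               * np.linalg.norm(q) + 1e-8)
            top = np.argsort(-sims)[:k]
            return [(self.episodes[i], float(sims[i])) for i in top]

    # Usage: store two synthetic "episodes", then query with a clip
    # resembling the first; recall returns ("pouring", similarity ~1.0).
    mem = EpisodicMemory(input_dim=32 * 32)
    mem.store(rng.standard_normal((10, 32 * 32)) + 2.0, "pouring")
    mem.store(rng.standard_normal((10, 32 * 32)) - 2.0, "wiping")
    query = rng.standard_normal((10, 32 * 32)) + 2.0
    print(mem.recall(query))

Swapping the fixed random projection for a trained encoder network is what would produce the semantic clustering of conceptually similar actions that the abstract reports.
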
Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2018. Vol. 3, no. 4, p. 4007-4014
Keywords [en]
Learning and adaptive systems, visual learning, deep learning in robotics and automation
National Category
Robotics
Identifiers
URN: urn:nbn:se:hh:diva-38426
DOI: 10.1109/LRA.2018.2860057
ISI: 000441935900003
Scopus ID: 2-s2.0-85062299146
OAI: oai:DiVA.org:hh-38426
DiVA, id: diva2:1266021
Funder
EU, Horizon 2020, 641100 (TimeStorm)
German Research Foundation (DFG), SPP 1527
Available from: 2018-11-27. Created: 2018-11-27. Last updated: 2024-01-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · Scopus
