We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory that facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep episodic memory model proceeds as follows: first, it encodes observed actions in a latent vector space; second, based on this latent encoding, it infers the most similar previously experienced episodes; third, it reconstructs the original episodes; and finally, it predicts future frames in an end-to-end fashion. Results show that conceptually similar actions are mapped into the same region of the latent vector space. Based on these results, we introduce an action matching and retrieval mechanism, benchmark its performance on two large-scale action datasets, 20BN-something-something and ActivityNet, and evaluate its generalization capability in a real-world scenario on a humanoid robot.
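The retrieval step described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes episodes have already been encoded to fixed-length latent vectors by some trained encoder, and it retrieves the stored episode closest to a query encoding under cosine similarity (the memory contents and vectors below are hypothetical toy values).

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_most_similar(query, memory):
    # Return the label of the stored episode whose latent vector is
    # closest to the query encoding under cosine similarity.
    return max(memory, key=lambda label: cosine_similarity(query, memory[label]))

# Toy latent codes; in the model these would come from the encoder network.
memory = {
    "pouring":  [0.9, 0.1, 0.0],
    "stirring": [0.1, 0.8, 0.2],
    "wiping":   [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]   # latent code of a newly observed action
print(retrieve_most_similar(query, memory))  # → pouring
```

Because conceptually similar actions cluster in the latent space, such a nearest-neighbor lookup suffices to recall a matching prior episode.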