Multi-Task Representation Learning
Bouguelia, Mohamed-Rafik (Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research). ORCID iD: 0000-0002-2859-6155
Pashami, Sepideh (Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research)
Nowaczyk, Sławomir (Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research). ORCID iD: 0000-0002-7796-5201
2017 (English). In: 30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017: May 15–16, 2017, Karlskrona, Sweden / [ed] Niklas Lavesson, Linköping: Linköping University Electronic Press, 2017, p. 53-59. Conference paper, Published paper (Refereed)
Abstract [en]

The majority of existing machine learning algorithms assume that training examples are already represented with sufficiently good features, in practice ones that are designed manually. This traditional way of preprocessing the data is not only tedious and time-consuming, but also insufficient to capture all the different aspects of the available information. With the big data phenomenon, this issue will only grow, as data is rarely collected and analyzed with a specific purpose in mind and is more often re-used to solve different problems. Moreover, the expert knowledge about a problem that allows practitioners to come up with good representations does not necessarily generalize to other tasks. Therefore, much focus has been put on designing methods that can automatically learn features or representations of the data instead of learning from handcrafted features. However, much of this work has relied on ad hoc methods, and theoretical understanding in this area is lacking.
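To make the idea of learning a shared representation across tasks concrete, the following is a minimal, hypothetical NumPy sketch (not taken from the paper) of the common hard-parameter-sharing setup: several synthetic regression tasks share one linear encoder W, while each task keeps its own head, so every gradient update to W aggregates information from all tasks. All names, dimensions, and data in the sketch are made up for illustration.

```python
import numpy as np

# Hypothetical illustration of multi-task representation learning via
# "hard parameter sharing": one shared linear encoder W, one head per task.
rng = np.random.default_rng(0)
n, d, k, n_tasks = 200, 10, 4, 3       # samples per task, input dim, representation dim, tasks

# Synthetic regression tasks that share a common low-dimensional structure.
W_true = rng.normal(size=(d, k))
X = [rng.normal(size=(n, d)) for _ in range(n_tasks)]
Y = [X[t] @ W_true @ rng.normal(size=k) + 0.1 * rng.normal(size=n) for t in range(n_tasks)]

W = 0.1 * rng.normal(size=(d, k))      # shared encoder (the learned representation)
V = 0.1 * rng.normal(size=(n_tasks, k))  # one linear head per task
lr = 1e-3

for step in range(2000):
    grad_W = np.zeros_like(W)
    for t in range(n_tasks):
        h = X[t] @ W                   # shared representation of task t's inputs
        err = h @ V[t] - Y[t]          # residual of task t's prediction
        grad_W += X[t].T @ (err[:, None] * V[t][None, :]) / n
        V[t] -= lr * (h.T @ err) / n   # task-specific update
    W -= lr * grad_W                   # shared update pools gradients from all tasks

print("mean squared error per task:",
      [float(np.mean((X[t] @ W @ V[t] - Y[t]) ** 2)) for t in range(n_tasks)])
```

Hard parameter sharing is only one of several multi-task representation learning strategies; it is used here because it is the simplest to write down, not because the paper prescribes it.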

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2017. p. 53-59
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 137
Keywords [en]
Representation Learning, Multi-Task Learning, Machine Learning, Supervised Learning, Feature Learning
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:hh:diva-36755
ISBN: 978-91-7685-496-9 (print)
OAI: oai:DiVA.org:hh-36755
DiVA, id: diva2:1205474
Conference
30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017, May 15–16, 2017, Karlskrona, Sweden
Available from: 2018-05-14. Created: 2018-05-14. Last updated: 2018-06-12. Bibliographically approved.

Open Access in DiVA

fulltext (336 kB)
File information
File name: FULLTEXT01.pdf
File size: 336 kB
Checksum (SHA-512): f35e1e563e451cf241201f405752d1c838d919a936e0236731825fb2dfeb08cfd47b4008b8ddaec2989546b8202f5ad769f7b67a145bcd435a9529a74197c7a2
Type: fulltext
Mimetype: application/pdf

Other links

Proceeding

Authority records

Bouguelia, Mohamed-Rafik; Pashami, Sepideh; Nowaczyk, Sławomir
