Multi-Task Representation Learning
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centre for Applied Intelligent Systems Research (IS-lab). ORCID iD: 0000-0002-2859-6155
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centre for Applied Intelligent Systems Research (IS-lab). ORCID iD: 0000-0003-3272-4145
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centre for Applied Intelligent Systems Research (IS-lab). ORCID iD: 0000-0002-7796-5201
2017 (English) In: 30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017: May 15–16, 2017, Karlskrona, Sweden / [ed] Niklas Lavesson, Linköping: Linköping University Electronic Press, 2017, pp. 53-59. Conference paper, Published paper (Refereed)
Abstract [en]

The majority of existing machine learning algorithms assume that training examples are already represented with sufficiently good features, in practice designed manually. This traditional way of preprocessing the data is not only tedious and time-consuming, but also insufficient to capture all the different aspects of the available information. With the big data phenomenon, this issue will only grow, as data is rarely collected and analyzed with a specific purpose in mind, and is more often re-used for solving different problems. Moreover, the expert knowledge that allows practitioners to come up with good representations for one problem does not necessarily generalize to other tasks. Therefore, much focus has been put on designing methods that automatically learn features, or representations, of the data instead of relying on handcrafted features. However, much of this work has used ad hoc methods, and theoretical understanding in this area is lacking.
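The abstract motivates learning a shared representation across related tasks instead of handcrafting features. As a minimal illustration only (not the method presented in this paper), the sketch below recovers a shared linear representation for two synthetic regression tasks: fit each task separately, take the SVD of the stacked coefficient vectors to obtain a shared projection, then refit a small per-task head on the shared features. All data, dimensions, and variable names are invented for the example.

```python
# A hedged sketch of linear multi-task representation learning on
# synthetic data; this is an illustration, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two tasks whose targets depend on the same 2-D
# subspace of 10-D inputs (an assumption made for illustration).
n, d, k = 200, 10, 2
U = rng.normal(size=(d, k))                 # true shared subspace
X = rng.normal(size=(n, d))
y1 = X @ U @ np.array([1.0, -1.0]) + 0.1 * rng.normal(size=n)
y2 = X @ U @ np.array([0.5, 2.0]) + 0.1 * rng.normal(size=n)

# Step 1: fit each task independently by ordinary least squares.
b1, *_ = np.linalg.lstsq(X, y1, rcond=None)
b2, *_ = np.linalg.lstsq(X, y2, rcond=None)

# Step 2: extract a shared k-dimensional representation from the
# stacked coefficient vectors via SVD (their common column space).
B = np.column_stack([b1, b2])               # shape (d, 2)
W, _, _ = np.linalg.svd(B, full_matrices=False)
W = W[:, :k]                                # shared projection, (d, k)

# Step 3: refit a small per-task head on the shared features X @ W.
H = X @ W
v1, *_ = np.linalg.lstsq(H, y1, rcond=None)
v2, *_ = np.linalg.lstsq(H, y2, rcond=None)

mse1 = np.mean((H @ v1 - y1) ** 2)
mse2 = np.mean((H @ v2 - y2) ** 2)
```

Because both targets truly lie in the same 2-D subspace, the two-dimensional shared features fit both tasks nearly as well as the full 10-D inputs, which is the payoff the abstract alludes to: one learned representation serving several tasks.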

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2017. pp. 53-59
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 137
Keywords [en]
Representation Learning, Multi-Task Learning, Machine Learning, Supervised Learning, Feature Learning
HSV category
Identifiers
URN: urn:nbn:se:hh:diva-36755
ISBN: 978-91-7685-496-9 (print)
OAI: oai:DiVA.org:hh-36755
DiVA, id: diva2:1205474
Conference
30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017, May 15–16, 2017, Karlskrona, Sweden
Available from: 2018-05-14 Created: 2018-05-14 Last updated: 2019-04-12 Bibliographically approved

Open Access in DiVA

fulltext (336 kB) 20 downloads
File information
File: FULLTEXT01.pdf
File size: 336 kB
Checksum: SHA-512
f35e1e563e451cf241201f405752d1c838d919a936e0236731825fb2dfeb08cfd47b4008b8ddaec2989546b8202f5ad769f7b67a145bcd435a9529a74197c7a2
Type: fulltext
Mimetype: application/pdf

Other links

Proceeding

Person records
Bouguelia, Mohamed-Rafik; Pashami, Sepideh; Nowaczyk, Sławomir

Total: 20 downloads
The number of downloads is the sum of all downloads of all full texts. It may, for example, include earlier versions that are no longer available.
