hh.se Publications
Publications (10 of 50)
Habibovic, A., Andersson, J., Malmsten Lundgren, V., Klingegård, M., Englund, C. & Larsson, S. (2019). External Vehicle Interfaces for Communication with Other Road Users?. In: Gereon Meyer & Sven Beiker (Ed.), Road Vehicle Automation 5: (pp. 91-102). Cham: Springer
External Vehicle Interfaces for Communication with Other Road Users?
2019 (English)In: Road Vehicle Automation 5 / [ed] Gereon Meyer & Sven Beiker, Cham: Springer, 2019, p. 91-102Chapter in book (Refereed)
Abstract [en]

How to ensure trust and societal acceptance of automated vehicles (AVs) is a widely discussed topic today. While trust and acceptance could be influenced by a range of factors, one thing is certain: the ability of AVs to safely and smoothly interact with other road users will play a key role. Based on our experiences from a series of studies, this paper elaborates on issues that AVs may face in interactions with other road users and whether external vehicle interfaces could support these interactions. Our overall conclusion is that such interfaces may be beneficial in situations where negotiation is needed. However, these benefits, and potential drawbacks, need to be further explored to create a common language, or standard, for how AVs should communicate with other road users.

Place, publisher, year, edition, pages
Cham: Springer, 2019
Series
Lecture Notes in Mobility, ISSN 2196-5544, E-ISSN 2196-5552
Keywords
External signaling, Communication of intent, Automated vehicles, Other road users
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:hh:diva-41106 (URN); 10.1007/978-3-319-94896-6_9 (DOI); 978-3-319-94895-9 (ISBN); 978-3-319-94896-6 (ISBN)
Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2019-12-04. Bibliographically approved
Sprei, F., Habibi, S., Englund, C., Pettersson, S., Voronov, A. & Wedlin, J. (2019). Free-floating car-sharing electrification and mode displacement: Travel time and usage patterns from 12 cities in Europe and the United States. Transportation Research Part D: Transport and Environment, 71(SI), 127-140
Free-floating car-sharing electrification and mode displacement: Travel time and usage patterns from 12 cities in Europe and the United States
2019 (English)In: Transportation Research Part D: Transport and Environment, ISSN 1361-9209, E-ISSN 1879-2340, Vol. 71, no SI, p. 127-140Article in journal (Refereed) Published
Abstract [en]

Free-floating car-sharing (FFCS) allows users to book a vehicle through their phone, use it and return it anywhere within a designated area in the city. FFCS has the potential to contribute to a transition to low-carbon mobility if the vehicles are electric, and if the usage does not displace active travel or public transport use. The aim of this paper is to study what travel time and usage patterns of the vehicles among the early adopters of the service reveal about these two issues.

We base our analysis on a dataset containing rentals from 2014 to 2017, for 12 cities in Europe and the United States. For seven of these cities, we have collected travel times for equivalent trips with walking, biking, public transport and private car.

FFCS services are mainly used for shorter trips with a median rental time of 27 min and actual driving time closer to 15 min. When comparing FFCS with other transport modes, we find that rental times are generally shorter than the equivalent walking time but longer than cycling. For public transport, the picture is mixed: for some trips there is no major time gain from taking FFCS, for others it could be up to 30 min.

For electric FFCS vehicles, rental times are shorter and the number of rentals per car and day is slightly lower compared to conventional vehicles. Still, evidence from cities with an all-electric fleet shows that these services can be electrified and reach high levels of utilization. © 2018 The Authors

Place, publisher, year, edition, pages
Oxford: Elsevier, 2019
Keywords
Alternative trips, Electric vehicles, Free-floating car-sharing, Shared mobility, Travel time, Usage patterns
National Category
Transport Systems and Logistics
Identifiers
urn:nbn:se:hh:diva-41099 (URN); 10.1016/j.trd.2018.12.018 (DOI); 000471361300009 (); 2-s2.0-85058802098 (Scopus ID)
Funder
Swedish Energy Agency
Note

Funding details: This research was supported by Swedish Energy Agency and Chalmers Area of Advance of Energy and Transport.

Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2019-12-04. Bibliographically approved
Torstensson, M., Hai Bui, T., Englund, C., Lindström, D. & Duran, B. (2019). In-vehicle Driver and Passenger Activity Recognition. In: : . Paper presented at 37th Annual Swedish Symposium on Image Analysis (SSBA 2019), Gothenburg, Sweden, March 19-20, 2019.
In-vehicle Driver and Passenger Activity Recognition
2019 (English)Conference paper, Published paper (Other academic)
Abstract [en]

Recognition of human behaviour within vehicles is becoming increasingly important. Paradoxically, the more control the car has (i.e., in terms of support systems), the more we need to know about the person behind the wheel [1], especially if he or she is expected to take over control from automation. A lot of focus has been devoted to research on the sensors monitoring the outside surroundings, but sensors on the inside have not received nearly as much attention. In terms of monitoring distractions, what is currently seen as dangerous (e.g., use of mobile phones) can in the future be seen as something good that helps to keep people awake in highly automated vehicles. Another reason for mapping activities inside the car is the often-occurring mismatch between driver expectations and the reality of what today’s automated vehicles are capable of [2]. As long as the automation comes with limitations that impose a need for the driver to take over control at some point, it will be important to know more about what happens inside the vehicle. In this paper we describe the work performed within the ongoing DRAMA project to combine UX research with computer vision and machine learning to gather knowledge about which activities in a cabin can be mapped and how they can be modelled to improve traffic safety and UX functionality.

National Category
Vehicle Engineering
Identifiers
urn:nbn:se:hh:diva-41101 (URN)
Conference
37th Annual Swedish Symposium on Image Analysis (SSBA 2019), Gothenburg, Sweden, March 19-20, 2019
Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2020-01-02
Henriksson, J., Berger, C., Borg, M., Tornberg, L., Sathyamoorthy, S. & Englund, C. (2019). Performance Analysis of Out-of-Distribution Detection on Various Trained Neural Networks. In: Staron, M., Capilla, R. & Skavhaug, A. (Ed.), Proceedings. 45th Euromicro Conference on Software Engineering and Advanced Applications. SEAA 2019: 28 - 30 August 2019 Kallithea, Chalkidiki, Greece. Paper presented at Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Kallithea, Chalkidiki, Greece, August 28-30, 2019. Piscataway: IEEE
Performance Analysis of Out-of-Distribution Detection on Various Trained Neural Networks
2019 (English)In: Proceedings. 45th Euromicro Conference on Software Engineering and Advanced Applications. SEAA 2019: 28 - 30 August 2019 Kallithea, Chalkidiki, Greece / [ed] Staron, M., Capilla, R. & Skavhaug, A., Piscataway: IEEE, 2019Conference paper, Published paper (Refereed)
Abstract [en]

Several areas have been improved with Deep Learning during the past years. For non-safety-related products, adoption of AI and ML is not an issue, whereas in safety-critical applications the robustness of such approaches remains an open question. A common challenge for Deep Neural Networks (DNNs) occurs when they are exposed to previously unseen, out-of-distribution samples, for which they can yield high-confidence predictions despite having no prior knowledge of the input. In this paper we analyse two supervisors on two well-known DNNs with varied setups of training and find that the outlier detection performance improves with the quality of the training procedure. We analyse the performance of the supervisor after each epoch during the training cycle, to investigate supervisor performance as the accuracy converges. Understanding the relationship between training results and supervisor performance is valuable for improving the robustness of the model and indicates where more work has to be done to create generalized models for safety-critical applications. © 2019 IEEE
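The supervisors discussed in the abstract above can be illustrated with the simplest baseline of this kind: thresholding the network's top softmax probability. This is a generic sketch, not the specific supervisors evaluated in the paper; the function names and the threshold value are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw class scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def supervisor_accepts(logits, threshold=0.8):
    """Baseline outlier supervisor: accept the input only if the
    network's top softmax probability clears the threshold; a flat,
    uncertain output is flagged as possibly out-of-distribution."""
    return max(softmax(logits)) >= threshold

# A sharply peaked output passes; a near-uniform one is rejected.
confident = supervisor_accepts([9.0, 1.0, 0.5])
uncertain = supervisor_accepts([1.1, 1.0, 0.9])
```

A real supervisor would be re-evaluated after each training epoch, as the paper does, since the quality of these scores changes as the network's accuracy converges.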

Place, publisher, year, edition, pages
Piscataway: IEEE, 2019
Keywords
deep neural networks, robustness, out-of distribution, automotive perception
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-41096 (URN); 10.1109/SEAA.2019.00026 (DOI); 2-s2.0-85076012153 (Scopus ID); 978-1-7281-3421-5 (ISBN); 978-1-7281-3422-2 (ISBN); 978-1-7281-3285-3 (ISBN)
Conference
Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Kallithea, Chalkidiki, Greece, August 28-30, 2019
Funder
Vinnova; Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

Other funder: Fordonsstrategisk forskning och innovation (FFI) under the grant number: 2017-03066.

Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2019-12-19. Bibliographically approved
Borg, M., Englund, C., Wnuk, K., Durann, B., Lewandowski, C., Gao, S., . . . Törnqvist, J. (2019). Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry. Journal of Automotive Software Engineering, 1(1), 1-19
Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry
2019 (English)In: Journal of Automotive Software Engineering, ISSN 2589-2258, Vol. 1, no 1, p. 1-19Article in journal (Refereed) Published
Abstract [en]

Deep neural networks (DNNs) will emerge as a cornerstone in automotive software engineering. However, developing systems with DNNs introduces novel challenges for safety assessments. This paper reviews the state of the art in verification and validation of safety-critical systems that rely on machine learning. Furthermore, we report on a workshop series on DNNs for perception with automotive experts in Sweden, confirming that ISO 26262 largely contravenes the nature of DNNs. We recommend aerospace-to-automotive knowledge transfer and systems-based safety approaches, for example, safety cage architectures and simulated system test cases. © 2019 The Authors.
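The safety cage architecture recommended above can be summarised as a thin wrapper around the learned component: its output is only used when a runtime supervisor accepts the input. This is a sketch under our own naming, not the paper's implementation.

```python
def safety_cage(model, supervisor, fallback):
    """Only trust the learned component's output when a runtime
    supervisor accepts the input; otherwise act safely. All three
    arguments are plain callables."""
    def guarded(x):
        return model(x) if supervisor(x) else fallback(x)
    return guarded

# Toy instantiation: trust the "model" only inside the input range it
# was trained on; return a safe default action everywhere else.
guarded = safety_cage(
    model=lambda x: 2 * x,               # stand-in for a DNN
    supervisor=lambda x: 0 <= x <= 10,   # in-distribution check
    fallback=lambda x: 0,                # safe default
)
```

The design point is separation of concerns: the cage can be assessed with conventional safety arguments even when the wrapped DNN cannot.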

Place, publisher, year, edition, pages
Paris: Atlantis Press, 2019
Keywords
Deep learning, Safety-critical systems, Machine learning, Verification and validation, ISO 26262
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:hh:diva-41108 (URN); 10.2991/jase.d.190131.001 (DOI)
Funder
Vinnova; Knowledge Foundation
Note

Funding details: Thanks go to all participants in the SMILE workshops, in particular Carl Zandén, Michaël Simoen, and Konstantin Lindström. This work was carried out within the SMILE and SMILE II projects financed by Vinnova, FFI, Fordonsstrategisk forskning och innovation under the grant numbers: 2016-04255 and 2017-03066. We would like to acknowledge that this work was supported by the KKS foundation through the S.E.R.T. Research Profile project at Blekinge Institute of Technology.

Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2019-12-04. Bibliographically approved
Henriksson, J., Berger, C., Borg, M., Tornberg, L., Englund, C., Sathyamoorthy, S. & Ursing, S. (2019). Towards Structured Evaluation of Deep Neural Network Supervisors. In: 2019 IEEE International Conference On Artificial Intelligence Testing (AITest): . Paper presented at IEEE International Conference on Artificial Intelligence Testing (AITest), San Francisco, CA, USA, 4-9 April, 2019 (pp. 27-34). New York: IEEE
Towards Structured Evaluation of Deep Neural Network Supervisors
2019 (English)In: 2019 IEEE International Conference On Artificial Intelligence Testing (AITest), New York: IEEE, 2019, p. 27-34Conference paper, Published paper (Refereed)
Abstract [en]

Deep Neural Networks (DNNs) have improved the quality of several non-safety-related products in the past years. However, before DNNs can be deployed to safety-critical applications, their robustness needs to be systematically analyzed. A common challenge for DNNs occurs when input is dissimilar to the training set, which might lead to high-confidence predictions in the absence of proper knowledge of the input. Several previous studies have proposed complementing DNNs with a supervisor that detects when inputs are outside the scope of the network. Most of these supervisors, however, are developed and tested for a selected scenario using a specific performance metric. In this work, we emphasize the need to assess and compare the performance of supervisors in a structured way. We present a framework constituted by four datasets organized in six test cases combined with seven evaluation metrics. The test cases provide varying complexity and include data from publicly available sources as well as a novel dataset consisting of images from simulated driving scenarios, which we plan to make publicly available. Our framework can be used to support DNN supervisor evaluation, which in turn could be used to motivate the development, validation, and deployment of DNNs in safety-critical applications. © 2019 IEEE.
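One metric commonly found in such supervisor-evaluation suites is AUROC, which for an outlier detector reduces to the probability that a randomly chosen outlier is scored as more anomalous than a randomly chosen inlier. The pair-counting formulation below is a generic sketch; the paper's exact set of seven metrics is given in the paper itself.

```python
def auroc(inlier_scores, outlier_scores):
    """Area under the ROC curve via pair counting: the fraction of
    (outlier, inlier) pairs where the outlier gets the higher anomaly
    score, with ties counted as half a win."""
    wins = 0.0
    for o in outlier_scores:
        for i in inlier_scores:
            if o > i:
                wins += 1.0
            elif o == i:
                wins += 0.5
    return wins / (len(outlier_scores) * len(inlier_scores))

# Perfect separation gives 1.0; chance-level scoring gives 0.5.
score = auroc([0.1, 0.2, 0.3], [0.8, 0.9, 0.25])
```

Threshold-free metrics like this let supervisors be compared across test cases without first fixing an operating point.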

Place, publisher, year, edition, pages
New York: IEEE, 2019
Keywords
deep neural networks, robustness, out-of-distribution, supervisor, automotive perception
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:hh:diva-41102 (URN); 10.1109/AITest.2019.00-12 (DOI); 000470916100005 (); 2-s2.0-85067113703 (Scopus ID); 978-1-7281-0492-8 (ISBN)
Conference
IEEE International Conference on Artificial Intelligence Testing (AITest), San Francisco, CA, USA, 4-9 April, 2019
Funder
Vinnova; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding Agency: Fordonsstrategisk forskning och innovation (FFI) Grant Number: 2017-03066

Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2019-12-19. Bibliographically approved
Torstensson, M., Duran, B. & Englund, C. (2019). Using Recurrent Neural Networks for Action and Intention Recognition of Car Drivers. In: ICPRAM 2019 - Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods: . Paper presented at 8th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2019, Prague, Czech Republic; 19-21 February, 2019 (pp. 232-242). Setúbal
Using Recurrent Neural Networks for Action and Intention Recognition of Car Drivers
2019 (English)In: ICPRAM 2019 - Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, Setúbal, 2019, p. 232-242Conference paper, Published paper (Refereed)
Abstract [en]

Traffic situations leading up to accidents have been shown to be greatly affected by human errors. To reduce these errors, warning systems such as Driver Alert Control, Collision Warning and Lane Departure Warning have been introduced. However, there is still room for improvement, both regarding the timing of when a warning should be given and the time needed to detect a hazardous situation in advance. Two factors that affect when a warning should be given are the environment and the actions of the driver. This study proposes an artificial neural network-based approach consisting of a convolutional neural network and a recurrent neural network with long short-term memory to detect and predict different actions of a driver inside a vehicle. The network achieved an accuracy of 84% while predicting the actions of the driver in the next frame, and an accuracy of 58% 20 frames ahead with a sampling rate of approximately 30 frames per second. © 2019 by SCITEPRESS - Science and Technology Publications, Lda. All rights reserved
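The recurrent half of such an architecture can be illustrated with a minimal Elman-style recurrence (not the paper's CNN + LSTM, and with toy dimensions): a hidden state is updated once per frame, so the final state summarises the clip and can feed an action classifier.

```python
import math

def rnn_summarise(frames, W_xh, W_hh, b_h):
    """Run an Elman recurrence over per-frame feature vectors and
    return the last hidden state. Weights are plain nested lists:
    W_xh maps input to hidden, W_hh maps hidden to hidden."""
    h = [0.0] * len(b_h)
    for x in frames:
        h = [math.tanh(
                 sum(wx * xi for wx, xi in zip(W_xh[j], x)) +
                 sum(wh * hi for wh, hi in zip(W_hh[j], h)) +
                 b_h[j])
             for j in range(len(b_h))]
    return h

# Two 2-d frames through a 2-unit RNN with identity input weights and
# no recurrence: the output then depends only on the final frame.
state = rnn_summarise([[1.0, 0.0], [0.0, 2.0]],
                      W_xh=[[1.0, 0.0], [0.0, 1.0]],
                      W_hh=[[0.0, 0.0], [0.0, 0.0]],
                      b_h=[0.0, 0.0])
```

An LSTM replaces the single tanh update with gated cell-state updates, which is what lets the paper's network look further than one frame ahead.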

Place, publisher, year, edition, pages
Setúbal, 2019
Keywords
CNN, Optical Flow, RNN
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:hh:diva-41105 (URN); 2-s2.0-85064628843 (Scopus ID); 978-9-897-58351-3 (ISBN)
Conference
8th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2019, Prague, Czech Republic; 19-21 February, 2019
Funder
Knowledge Foundation, 20140220
Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2019-12-19. Bibliographically approved
Varytimidis, D., Alonso-Fernandez, F., Englund, C. & Duran, B. (2018). Action and intention recognition of pedestrians in urban traffic. In: Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir (Ed.), 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS): . Paper presented at The 14th International Conference on Signal Image Technology & Internet Based Systems (SITIS), Hotel Reina Isabel, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018 (pp. 676-682). Piscataway, N.J.: IEEE
Action and intention recognition of pedestrians in urban traffic
2018 (English)In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Piscataway, N.J.: IEEE, 2018, p. 676-682Conference paper, Published paper (Refereed)
Abstract [en]

Action and intention recognition of pedestrians in urban settings are challenging problems for Advanced Driver Assistance Systems as well as future autonomous vehicles to maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the action and intention of pedestrians in urban traffic. We focus on the motion and head orientation to predict whether the pedestrian is about to cross the street or not. The work is based on the Joint Attention for Autonomous Driving (JAAD) dataset, which contains 346 video clips of various traffic scenarios captured with cameras mounted in the windshield of a car. An accuracy of 72% for head orientation estimation and 85% for motion detection is obtained in our experiments.
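The classification step over hand-crafted features can be illustrated with the simplest learner of this family: logistic regression on two features. The feature names, toy data, and hyperparameters below are illustrative assumptions, not the JAAD setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_crossing_classifier(X, y, lr=0.5, epochs=500):
    """Stochastic-gradient logistic regression: learn weights over
    features such as head orientation toward traffic and body motion,
    for a crossing / not-crossing decision."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # log-loss gradient
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def will_cross(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Toy data: [looks_at_traffic, is_moving] -> crossed the street?
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 0, 0, 0]
w, b = train_crossing_classifier(X, y)
```

In practice the two feature streams (head orientation, motion) would themselves come from learned estimators, as the reported 72% and 85% accuracies suggest.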

Place, publisher, year, edition, pages
Piscataway, N.J.: IEEE, 2018
Keywords
Action Recognition, Intention Recognition, Pedestrian, Traffic, Driver Assistance
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38504 (URN); 10.1109/SITIS.2018.00109 (DOI); 978-1-5386-9385-8 (ISBN); 978-1-5386-9386-5 (ISBN)
Conference
The 14th International Conference on Signal Image Technology & Internet Based Systems (SITIS), Hotel Reina Isabel, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018
Projects
SIDUS AIR
Funder
Knowledge Foundation, 20140220; Swedish Research Council; Vinnova
Note

Funding: This work is financed by the SIDUS AIR project of the Swedish Knowledge Foundation under the grant agreement number 20140220. Author F. A.-F. also thanks the Swedish Research Council (VR), and the Sweden’s innovation agency (VINNOVA).

Available from: 2018-12-06 Created: 2018-12-06 Last updated: 2019-05-16. Bibliographically approved
Henriksson, J., Borg, M. & Englund, C. (2018). Automotive safety and machine learning: Initial results from a study on how to adapt the ISO 26262 safety standard. In: 2018 IEEE/ACM 1st International Workshop on Software Engineering for AI in Autonomous Systems (SEFAIAS): . Paper presented at 1st ACM/IEEE International Workshop on Software Engineering for AI in Autonomous Systems, SEFAIAS 2018 (ICSE 2018), Gothenburg, Sweden, 28 May, 2018 (pp. 47-49). New York, NY: ACM Publications
Automotive safety and machine learning: Initial results from a study on how to adapt the ISO 26262 safety standard
2018 (English)In: 2018 IEEE/ACM 1st International Workshop on Software Engineering for AI in Autonomous Systems (SEFAIAS), New York, NY: ACM Publications, 2018, p. 47-49Conference paper, Published paper (Refereed)
Abstract [en]

Machine learning (ML) applications generate a continuous stream of success stories from various domains. ML enables many novel applications, also in safety-critical contexts. However, functional safety standards such as ISO 26262 have not evolved to cover ML. We conduct an exploratory study on which parts of ISO 26262 represent the most critical gaps between safety engineering and ML development. While this paper only reports the first steps toward a larger research endeavor, we identify three adaptations that are critically needed to allow ISO 26262-compliant engineering, and related suggestions on how to evolve the standard. © 2018 ACM.

Place, publisher, year, edition, pages
New York, NY: ACM Publications, 2018
Keywords
Computing methodologies, Machine learning, Software and its engineering, Software safety
National Category
Embedded Systems
Identifiers
urn:nbn:se:hh:diva-37754 (URN); 10.1145/3194085.3194090 (DOI); 2-s2.0-85051137851 (Scopus ID); 978-1-4503-5739-5 (ISBN); 978-1-5386-6261-8 (ISBN)
Conference
1st ACM/IEEE International Workshop on Software Engineering for AI in Autonomous Systems, SEFAIAS 2018 (ICSE 2018), Gothenburg, Sweden, 28 May, 2018
Projects
SMILE II
Funder
VINNOVA
Note

Funding: Vinnova/FFI and partially by the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP)

Available from: 2018-08-21 Created: 2018-08-21 Last updated: 2019-01-03. Bibliographically approved
Ploeg, J., Semsar-Kazerooni, E., Morales Medina, A. I., de Jongh, J. F. C., van de Sluis, J., Voronov, A., . . . van de Wouw, N. (2018). Cooperative Automated Maneuvering at the 2016 Grand Cooperative Driving Challenge. IEEE transactions on intelligent transportation systems (Print), 19(4), 1213-1226
Cooperative Automated Maneuvering at the 2016 Grand Cooperative Driving Challenge
2018 (English)In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 4, p. 1213-1226Article in journal (Refereed) Published
Abstract [en]

Cooperative adaptive cruise control and platooning are well-known applications in the field of cooperative automated driving. However, extension toward maneuvering is desired to accommodate common highway maneuvers, such as merging, and to enable urban applications. To this end, a layered control architecture is adopted. In this architecture, the tactical layer hosts the interaction protocols, describing the wireless information exchange to initiate the vehicle maneuvers, supported by a novel wireless message set, whereas the operational layer involves the vehicle controllers to realize the desired maneuvers. This hierarchical approach was the basis for the Grand Cooperative Driving Challenge (GCDC), which was held in May 2016 in The Netherlands. The GCDC provided the opportunity for participating teams to cooperatively execute a highway lane-reduction scenario and an urban intersection-crossing scenario. The GCDC was set up as a competition and hence also involved assessment of the teams' individual performance in a cooperative setting. As a result, the hierarchical architecture proved to be a viable approach, and the GCDC appeared to be an effective instrument to advance the field of cooperative automated driving. © Copyright 2017 IEEE - All rights reserved.
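The operational layer's core, cooperative adaptive cruise control with a constant time-gap spacing policy, can be sketched in a few lines. The gains, time gap, and standstill distance below are illustrative choices, not the GCDC vehicles' parameters.

```python
def desired_gap(ego_speed, standstill=2.0, time_gap=0.6):
    """Constant time-gap policy: the target gap grows with speed."""
    return standstill + time_gap * ego_speed

def cacc_accel(gap, ego_speed, lead_speed, kp=0.2, kv=0.7):
    """Correct both the spacing error and the speed difference to the
    (wirelessly communicated) lead vehicle."""
    return kp * (gap - desired_gap(ego_speed)) + kv * (lead_speed - ego_speed)

# A follower starts 30 m behind a leader cruising at 20 m/s and
# closes in on the desired gap of 2 + 0.6 * 20 = 14 m.
dt, gap, v_ego, v_lead = 0.1, 30.0, 20.0, 20.0
for _ in range(600):
    a = cacc_accel(gap, v_ego, v_lead)
    v_ego += a * dt
    gap += (v_lead - v_ego) * dt
```

The wireless lead-vehicle information is what distinguishes CACC from plain ACC; the tactical layer's interaction protocols decide when this controller is engaged during a maneuver.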

Place, publisher, year, edition, pages
Piscataway: IEEE Press, 2018
Keywords
Cooperative driving, interaction protocol, controller design, vehicle platoons, wireless communications
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-35490 (URN); 10.1109/TITS.2017.2765669 (DOI); 2-s2.0-85035089916 (Scopus ID)
Projects
i-GAME
Available from: 2017-11-27 Created: 2017-11-27 Last updated: 2018-04-17. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1043-8773
