Publications (10 of 67)
Arvidsson, M., Sawirot, S., Englund, C., Alonso-Fernandez, F., Torstensson, M. & Duran, B. (2023). Drone navigation and license plate detection for vehicle location in indoor spaces. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023.
Drone navigation and license plate detection for vehicle location in indoor spaces
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Millions of vehicles are transported every year, tightly parked in vessels or boats. To reduce the risks of associated safety issues such as fires, knowing the location of vehicles is essential, since different vehicles, e.g. electric cars, may need different mitigation measures. This work is aimed at creating a solution based on a nano-drone that navigates across rows of parked vehicles and detects their license plates. We do so via a wall-following algorithm and a CNN trained to detect license plates. All computations are done in real time on the drone, which only sends its position and the detected images, allowing the creation of a 2D map with the position of the plates. Our solution is capable of reading all plates across eight test cases (with several rows of plates, different drone speeds, or low light) by aggregating measurements across several drone journeys.
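Since the record carries no code, here is a minimal sketch of the navigate-and-detect loop the abstract describes: wall-following flight along the parking rows, onboard CNN plate detection, and aggregation of detections over several journeys into a position-keyed 2D map. All interfaces (drone, detector, position_cell, etc.) are hypothetical placeholders, not from the paper.

def survey_parking_rows(drone, detector, journeys=3):
    """Aggregate license plate detections across several drone journeys
    into a 2D map keyed by quantized drone position. Sketch only; the
    drone/detector interfaces are hypothetical."""
    plate_map = {}  # (x, y) grid cell -> list of detected plate crops
    for _ in range(journeys):
        drone.takeoff()
        while not drone.row_finished():
            drone.follow_wall_step()             # wall-following navigation step
            frame = drone.camera.read()
            for crop in detector.detect(frame):  # onboard CNN plate detector
                plate_map.setdefault(drone.position_cell(), []).append(crop)
        drone.land()
    return plate_map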

Keywords
Nano-drone, License plate detection, Vehicle location, UAV
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-51292 (URN); 10.48550/arXiv.2307.10165 (DOI)
Conference
VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023
Funder
Vinnova; Swedish Research Council
Available from: 2023-07-19. Created: 2023-07-19. Last updated: 2023-12-08. Bibliographically approved.
Rosberg, F., Aksoy, E., Alonso-Fernandez, F. & Englund, C. (2023). FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping. In: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023. Paper presented at 23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, Hawaii, USA, 3-7 January 2023 (pp. 3443-3452). Piscataway: IEEE.
FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping
2023 (English). In: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023, Piscataway: IEEE, 2023, p. 3443-3452. Conference paper, Published paper (Refereed)
Abstract [en]

In this work, we present a new single-stage method for subject-agnostic face swapping and identity transfer, named FaceDancer. We make two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR). The AFFA module is embedded in the decoder and adaptively learns to fuse attribute features and features conditioned on identity information without requiring any additional facial segmentation process. In IFSR, we leverage the intermediate features in an identity encoder to preserve important attributes such as head pose, facial expression, lighting, and occlusion in the target face, while still transferring the identity of the source face with high fidelity. We conduct extensive quantitative and qualitative experiments on various datasets and show that the proposed FaceDancer outperforms other state-of-the-art networks in terms of identity transfer, while having significantly better pose preservation than most of the previous methods. © 2023 IEEE.
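As a reading of the AFFA description above (not the authors' published code), the fusion can be sketched as a learned per-location gate that blends decoder attribute features with identity-conditioned features, removing the need for an explicit segmentation mask; the channel size and gate architecture here are assumptions.

import torch
import torch.nn as nn

class AFFASketch(nn.Module):
    """Sketch of an AFFA-style gate: learn blend weights between attribute
    features z_attr and identity-conditioned features z_id. Illustrative
    reading of the abstract, not the published implementation."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel weights in [0, 1]
        )

    def forward(self, z_attr, z_id):
        m = self.gate(torch.cat([z_attr, z_id], dim=1))
        return m * z_id + (1.0 - m) * z_attr  # adaptive feature fusion

# usage: fused = AFFASketch(256)(attr_feats, id_feats)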

Place, publisher, year, edition, pages
Piscataway: IEEE, 2023
Keywords
Algorithms: Machine learning architectures, formulations, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning); Biometrics, face, gesture, body pose
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-48618 (URN); 10.1109/WACV56688.2023.00345 (DOI); 000971500203054 (); 2-s2.0-85149000603 (Scopus ID); 9781665493468 (ISBN)
Conference
23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, Hawaii, USA, 3-7 January 2023
Available from: 2022-11-15. Created: 2022-11-15. Last updated: 2024-03-18. Bibliographically approved.
Rosberg, F., Aksoy, E., Englund, C. & Alonso-Fernandez, F. (2023). FIVA: Facial Image and Video Anonymization and Anonymization Defense. In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Paper presented at 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023 (pp. 362-371). Los Alamitos, CA: IEEE.
FIVA: Facial Image and Video Anonymization and Anonymization Defense
2023 (English). In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Los Alamitos, CA: IEEE, 2023, p. 362-371. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present a new approach for facial anonymization in images and videos, abbreviated as FIVA. Our proposed method is able to maintain the same face anonymization consistently over frames with our suggested identity tracking, and guarantees a strong difference from the original face. FIVA allows for 0 true positives at a false acceptance rate of 0.001. Our work considers the important security issue of reconstruction attacks and investigates adversarial noise, uniform noise, and parameter noise to disrupt such attacks. In this regard, we apply different defense and protection methods against these privacy threats to demonstrate the scalability of FIVA. On top of this, we also show that reconstruction attack models can be used for detection of deep fakes. Last but not least, we provide experimental results showing how FIVA can even enable face swapping when trained on only a single target image. © 2023 IEEE.
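The "0 true positives at a false acceptance rate of 0.001" claim amounts to a simple check: no anonymized face may still verify as its original identity at the threshold calibrated for that FAR. A hedged sketch of that evaluation follows; the identity encoder and the threshold value are placeholders, not from the paper.

import numpy as np

def count_identity_leaks(orig_embs: np.ndarray, anon_embs: np.ndarray,
                         threshold: float) -> int:
    """Count anonymized faces that still verify as their original identity.
    Inputs are row-wise L2-normalized identity embeddings; `threshold` is
    the cosine-similarity operating point at FAR = 0.001 (placeholder).
    FIVA reports zero such matches."""
    sims = np.sum(orig_embs * anon_embs, axis=1)  # cosine similarity per pair
    return int(np.sum(sims >= threshold))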

Place, publisher, year, edition, pages
Los Alamitos, CA: IEEE, 2023
Series
IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9944
Keywords
Anonymization, Deep Fakes, Facial Recognition, Identity Tracking, Reconstruction Attacks
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52592 (URN); 10.1109/ICCVW60793.2023.00043 (DOI); 2-s2.0-85182917356 (Scopus ID); 9798350307443 (ISBN)
Conference
2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023
Available from: 2024-02-08. Created: 2024-02-08. Last updated: 2024-03-18. Bibliographically approved.
Aramrattana, M., Larsson, T., Englund, C., Jansson, J. & Nåbo, A. (2022). A Simulation Study on Effects of Platooning Gaps on Drivers of Conventional Vehicles in Highway Merging Situations. IEEE Transactions on Intelligent Transportation Systems, 23(4), 3790-3796.
A Simulation Study on Effects of Platooning Gaps on Drivers of Conventional Vehicles in Highway Merging Situations
2022 (English). In: IEEE Transactions on Intelligent Transportation Systems, ISSN 1524-9050, E-ISSN 1558-0016, Vol. 23, no. 4, p. 3790-3796. Article in journal (Refereed), Published
Abstract [en]

Platooning refers to a group of vehicles that, enabled by wireless vehicle-to-vehicle (V2V) communication and vehicle automation, drive with short inter-vehicular distances. Before its deployment on public roads, several challenging traffic situations need to be handled. Among the challenges are cut-in situations, where a conventional vehicle (one without automation or V2V communication) changes lane and ends up between vehicles in a platoon. This paper presents results from a simulation study of a scenario where a conventional vehicle, approaching from an on-ramp, merges into a platoon of five cars on a highway. We created the scenario with four platooning gaps: 15, 22.5, 30, and 42.5 meters. During the study, the conventional vehicle was driven by 37 test persons, who experienced all the platooning gaps in a driving simulator. The participants' opinions on safety, comfort, and ease of driving in between the platoon vehicles in each gap setting were also collected through a questionnaire. The results suggest that a 15-meter gap prevents most participants from cutting in, while causing potentially dangerous maneuvers and collisions when cut-ins occur. Platooning gaps of at least 30 meters yielded positive opinions from the participants and facilitated smoother cut-in maneuvers, with fewer collisions observed.
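For intuition, the tested gaps can be converted to time headways. At an assumed highway speed of 25 m/s (about 90 km/h; the speed is our assumption, not taken from the paper), the 15 to 42.5 meter gaps correspond to 0.6 to 1.7 seconds:

SPEED_MPS = 25.0  # assumed highway speed (~90 km/h); not stated in the paper
for gap_m in (15.0, 22.5, 30.0, 42.5):  # platooning gaps from the study
    print(f"{gap_m:4.1f} m gap -> {gap_m / SPEED_MPS:.2f} s time headway")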

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2022
Keywords
Merging, Vehicles, Roads, Automobiles, Vehicular ad hoc networks, Meters, Safety
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-43769 (URN); 10.1109/TITS.2020.3040085 (DOI); 000776187400074 (); 2-s2.0-85098774184 (Scopus ID)
Funder
Vinnova, 2015-04881
Note

Funding: Swedish Government Agency for Innovation Systems (VINNOVA) through NGEA step 2; Vehicle and Traffic Safety Centre at Chalmers (SAFER)

Available from: 2021-01-11. Created: 2021-01-11. Last updated: 2022-05-02. Bibliographically approved.
Svanström, F., Alonso-Fernandez, F. & Englund, C. (2022). Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities. Drones, 6(11), Article ID 317.
Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities
2022 (English). In: Drones, ISSN 2504-446X, Vol. 6, no. 11, article id 317. Article in journal (Refereed), Published
Abstract [en]

Automatic detection of flying drones is a key issue, since their presence, especially if unauthorized, can create risky situations or compromise security. Here, we design and evaluate a multi-sensor drone detection system. In conjunction with standard video cameras and microphone sensors, we explore the use of thermal infrared cameras, pointed out as a feasible and promising solution that is scarcely addressed in the related literature. Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest. The sensing solutions are complemented with an ADS-B receiver, a GPS receiver, and a radar module, although our final deployment did not include the latter due to its limited detection range. The thermal camera is shown to be a solution as feasible as the video camera, even though the camera employed here has a lower resolution. Two other novelties of our work are the creation of a new public dataset of multi-sensor annotated data that expands the number of classes compared to existing ones, and the study of detector performance as a function of the sensor-to-target distance. Sensor fusion is also explored, showing that the system can be made more robust in this way, mitigating false detections of the individual sensors. © 2022 by the authors.
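One simple way to realize the fusion described above is late fusion by voting: declare a drone only when at least two sensors agree, which suppresses single-sensor false alarms. The two-of-n rule and the interface below are illustrative assumptions, not the paper's exact scheme.

def fuse_detections(sensor_outputs: dict, min_votes: int = 2):
    """Late fusion over per-sensor drone detections.
    `sensor_outputs` maps sensor name -> (detected: bool, confidence: float).
    Requiring agreement from at least `min_votes` sensors mitigates false
    detections of individual sensors (illustrative rule)."""
    confident = [conf for detected, conf in sensor_outputs.values() if detected]
    if len(confident) >= min_votes:
        return True, sum(confident) / len(confident)
    return False, 0.0

# e.g. fuse_detections({"thermal": (True, 0.8), "visible": (True, 0.7),
#                       "audio": (False, 0.0)}) -> (True, 0.75)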

Place, publisher, year, edition, pages
Basel: MDPI, 2022
Keywords
anti-drone systems, drone detection, UAV detection
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-48786 (URN); 10.3390/drones6110317 (DOI); 000881010600001 (); 2-s2.0-85141807932 (Scopus ID)
Available from: 2022-12-09. Created: 2022-12-09. Last updated: 2022-12-09. Bibliographically approved.
Svanström, F., Alonso-Fernandez, F. & Englund, C. (2021). A dataset for multi-sensor drone detection. Data in Brief, 39, Article ID 107521.
A dataset for multi-sensor drone detection
2021 (English). In: Data in Brief, E-ISSN 2352-3409, Vol. 39, article id 107521. Article in journal (Refereed), Published
Abstract [en]

The use of small and remotely controlled unmanned aerial vehicles (UAVs), referred to as drones, has increased dramatically in recent years, both for professional and recreational purposes. This goes in parallel with (intentional or unintentional) misuse episodes, with an evident threat to the safety of people or facilities [1]. As a result, the detection of UAVs has also emerged as a research topic [2]. Most of the existing studies on drone detection fail to specify the type of acquisition device, the drone type, the detection range, or the employed dataset. The lack of proper UAV detection studies employing thermal infrared cameras is also acknowledged as an issue, despite their success in detecting other types of targets [2]. Besides, we have not found any previous study that addresses the detection task as a function of distance to the target. Sensor fusion is indicated as an open research issue as well, to achieve better detection results in comparison to a single sensor, although research in this direction is scarce too [3–6]. To help counteract the mentioned issues and allow fundamental studies with a common public benchmark, we contribute an annotated multi-sensor database for drone detection that includes infrared and visible videos and audio files. The database includes three different drones: a small-sized model (Hubsan H107D+), a medium-sized drone (DJI Flame Wheel in quadcopter configuration), and a performance-grade model (DJI Phantom 4 Pro). It also includes other flying objects that can be mistakenly detected as drones, such as birds, airplanes or helicopters. In addition to using several different sensors, the number of classes is higher than in previous studies [4]. The video part contains 650 infrared and visible videos (365 IR and 285 visible) of drones, birds, airplanes and helicopters. Each clip is ten seconds long, resulting in a total of 203,328 annotated frames. The database is complemented with 90 audio files of the classes drones, helicopters and background noise. To allow studies as a function of the sensor-to-target distance, the dataset is divided into three categories (Close, Medium, Distant) according to the industry-standard Detect, Recognize and Identify (DRI) requirements [7], built on the Johnson criteria [8]. Given that the drones must be flown within visual range due to regulations, the largest sensor-to-target distance for a drone in the dataset is 200 m, and acquisitions are made in daylight. The data has been obtained at three airports in Sweden: Halmstad Airport (IATA code: HAD/ICAO code: ESMT), Gothenburg City Airport (GSE/ESGP) and Malmö Airport (MMX/ESMS). The acquisition sensors are mounted on a pan-tilt platform that steers the cameras to the objects of interest. All sensors and the platform are controlled with a standard laptop via a USB hub. © 2021
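A sketch of the Close/Medium/Distant split used in the dataset; the actual category boundaries follow the DRI requirements cited above, so the cut-off distances below are placeholders only.

def dri_category(distance_m: float,
                 close_max: float = 50.0, medium_max: float = 120.0) -> str:
    """Bin a sensor-to-target distance into the dataset's three categories.
    The cut-offs here are placeholders; the dataset derives its boundaries
    from the Detect, Recognize and Identify (DRI) requirements."""
    if distance_m <= close_max:
        return "Close"
    if distance_m <= medium_max:
        return "Medium"
    return "Distant"  # drones appear at up to 200 m (visual-range regulation)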

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2021
Keywords
Anti-drone systems, Drone detection, UAV detection
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-45926 (URN); 10.1016/j.dib.2021.107521 (DOI); 000718134700023 (); 2-s2.0-85118496043 (Scopus ID)
Funder
Swedish Research Council; Vinnova
Available from: 2021-11-23. Created: 2021-11-23. Last updated: 2022-01-31. Bibliographically approved.
Englund, C., Erdal Aksoy, E., Alonso-Fernandez, F., Cooney, M. D., Pashami, S. & Åstrand, B. (2021). AI Perspectives in Smart Cities and Communities to Enable Road Vehicle Automation and Smart Traffic Control. Smart Cities, 4(2), 783-802
AI Perspectives in Smart Cities and Communities to Enable Road Vehicle Automation and Smart Traffic Control
2021 (English). In: Smart Cities, E-ISSN 2624-6511, Vol. 4, no. 2, p. 783-802. Article in journal (Refereed), Published
Abstract [en]

Smart Cities and Communities (SCC) constitute a new paradigm in urban development. SCC envisions a data-centered society, aiming at improving efficiency by automating and optimizing activities and utilities. Information and communication technology, along with the Internet of Things, enables data collection, and with the help of artificial intelligence (AI), situation awareness can be obtained to feed the SCC actors with enriched knowledge. This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control. Perception, smart traffic control and driver modelling are described, along with open research challenges and standardization efforts to help introduce advanced driver assistance systems and automated vehicle functionality in traffic. To fully realize the potential of SCC and create a holistic view on a city level, the availability of data from different stakeholders is needed. Further, though AI technologies provide accurate predictions and classifications, there is an ambiguity regarding the correctness of their outputs, which can make it difficult for the human operator to trust the system. Today there are no methods that can be used to match function requirements with the level of detail in data annotation in order to train an accurate model. Another challenge related to trust is explainability: as long as the models have difficulties explaining how they come to their conclusions, it is difficult for humans to trust them. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Place, publisher, year, edition, pages
Basel: MDPI, 2021
Keywords
smart cities, artificial intelligence, perception, smart traffic control, driver modeling
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-44272 (URN); 10.3390/smartcities4020040 (DOI); 000668714200001 (); 2-s2.0-85119570196 (Scopus ID)
Funder
Vinnova, 2018-05001, 2019-05871; Knowledge Foundation; Swedish Research Council, 2016-03497
Note

Funding: The research leading to these results has partially received funding from the Vinnova FFI project SHARPEN, under grant agreement no. 2018-05001, and the Vinnova FFI project SMILE III, under grant agreement no. 2019-05871. The funding received from the Knowledge Foundation (KKS) in the framework of the “Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart” project (2019–2023) is gratefully acknowledged. Finally, the authors thank the Swedish Research Council (project 2016-03497) for funding their research.

Available from: 2021-05-11. Created: 2021-05-11. Last updated: 2023-06-08. Bibliographically approved.
Rosberg, F. & Englund, C. (2021). Comparing Facial Expressions for Face Swapping Evaluation with Supervised Contrastive Representation Learning. In: Vitomir Štruc; Marija Ivanovska (Ed.), 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021): Proceedings. Paper presented at 16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Virtual, Jodhpur, India, 15-18 December, 2021. Piscataway: IEEE.
Comparing Facial Expressions for Face Swapping Evaluation with Supervised Contrastive Representation Learning
2021 (English). In: 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021): Proceedings / [ed] Vitomir Štruc; Marija Ivanovska, Piscataway: IEEE, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

Measuring and comparing facial expressions has several practical applications. One such application is to compute facial expression embeddings and compare distances between them, in order to determine how well identity- and face-swapping algorithms preserve facial expression information. One useful aspect is to show how well expressions are preserved while anonymizing facial data during privacy-aware data collection. We show that weighted supervised contrastive learning is a strong approach for learning facial expression representation embeddings and dealing with class imbalance bias. By feeding a classifier head with the learned embeddings, we reach competitive state-of-the-art results. Furthermore, we demonstrate the use case of measuring the distance between the expressions of a target face, a source face and the anonymized target face in the facial anonymization context. © 2021 IEEE.
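A minimal sketch of a class-weighted supervised contrastive loss in the spirit of the abstract, following the SupCon formulation of Khosla et al. (2020) with per-class anchor weights to counter class imbalance; the exact weighting scheme is our assumption, not the authors' formulation.

import torch
import torch.nn.functional as F

def weighted_supcon_loss(embeddings, labels, class_weights, tau=0.1):
    """Supervised contrastive loss with per-class anchor weights.
    embeddings: (N, D) float tensor; labels: (N,) long tensor;
    class_weights: (num_classes,) float tensor. Sketch, not the paper's code."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / tau                            # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # mean log-probability of positives per anchor (zero if no positives)
    anchor_loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts

    w = class_weights[labels]                        # weight anchors by class
    return (w * anchor_loss).sum() / w.sum()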

Place, publisher, year, edition, pages
Piscataway: IEEE, 2021
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-46506 (URN); 10.1109/FG52635.2021.9666958 (DOI); 000784811600027 (); 2-s2.0-85125063047 (Scopus ID); 978-1-6654-3176-7 (ISBN)
Conference
16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Virtual, Jodhpur, India, 15-18 December, 2021
Available from: 2022-04-21. Created: 2022-04-21. Last updated: 2024-03-18. Bibliographically approved.
Voronov, A., Andersson, J. & Englund, C. (2021). Cut-ins in Truck Platoons: Modeling Loss of Fuel Savings (1st ed.). In: Umar Zakir Abdul Hamid; Fadi Al-Turjman (Ed.), Towards Connected and Autonomous Vehicle Highways: Technical, Security and Social Challenges (pp. 11-26). Cham: Springer.
Cut-ins in Truck Platoons: Modeling Loss of Fuel Savings
2021 (English). In: Towards Connected and Autonomous Vehicle Highways: Technical, Security and Social Challenges / [ed] Umar Zakir Abdul Hamid; Fadi Al-Turjman, Cham: Springer, 2021, 1st ed., p. 11-26. Chapter in book (Refereed)
Abstract [en]

Reducing fuel consumption is one of the major benefits of platooning. When platooning is introduced in mixed traffic, surrounding traffic will interfere with the platoon, risking a loss of fuel savings. In this work, a method for estimating the potential fuel loss due to cut-ins in platoons is presented. Based on interviews with truck drivers with platooning experience, and naturalistic data from previous research, we estimate the potential loss of fuel savings due to cut-ins and compare two scenarios with different amounts of traffic. The results show that platoons spend as much as 20% of their time in cut-in situations on typical European roads, reducing the fuel savings from platooning from 13% down to 10%. Consequently, avoiding cut-ins has a positive environmental effect worth considering. © 2021, Springer Nature Switzerland AG.
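The headline numbers are consistent with a simple time-weighted expectation: if the platoon saves 13% fuel while intact and, as a simplifying assumption of ours, nothing at all during cut-ins, then spending 20% of the time in cut-ins gives 0.8 × 13% ≈ 10.4%, close to the roughly 10% reported.

INTACT_SAVING = 0.13     # fuel saving with the platoon intact (from the chapter)
CUTIN_TIME_SHARE = 0.20  # share of time spent in cut-in situations (from the chapter)
CUTIN_SAVING = 0.0       # assumption: no saving while a cut-in is present

expected = (1 - CUTIN_TIME_SHARE) * INTACT_SAVING + CUTIN_TIME_SHARE * CUTIN_SAVING
print(f"expected fuel saving: {expected:.1%}")  # -> 10.4%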

Place, publisher, year, edition, pages
Cham: Springer, 2021, Edition: 1
Series
EAI/Springer Innovations in Communication and Computing, ISSN 2522-8595, E-ISSN 2522-8609
National Category
Transport Systems and Logistics
Identifiers
urn:nbn:se:hh:diva-46102 (URN); 10.1007/978-3-030-66042-0_2 (DOI); 2-s2.0-85108413163 (Scopus ID); 978-3-030-66041-3 (ISBN); 978-3-030-66044-4 (ISBN); 978-3-030-66042-0 (ISBN)
Note

Funding: Strategic Innovation Program Drive Sweden

Available from: 2021-12-13. Created: 2021-12-13. Last updated: 2021-12-13. Bibliographically approved.
Svanström, F., Englund, C. & Alonso-Fernandez, F. (2021). Real-Time Drone Detection and Tracking With Visible, Thermal and Acoustic Sensors. In: 2020 25th International Conference on Pattern Recognition (ICPR). Paper presented at International Conference on Pattern Recognition, ICPR, Milan, Italy, 10-15 January, 2021 (pp. 7265-7272). IEEE.
Real-Time Drone Detection and Tracking With Visible, Thermal and Acoustic Sensors
2021 (English). In: 2020 25th International Conference on Pattern Recognition (ICPR), IEEE, 2021, p. 7265-7272. Conference paper, Published paper (Refereed)
Abstract [en]

This paper explores the process of designing an automatic multi-sensor drone detection system. Besides the common video and audio sensors, the system also includes a thermal infrared camera, which is shown to be a feasible solution to the drone detection task. Even with a slightly lower resolution, its performance is as good as that of a camera in the visible range. The detector performance as a function of the sensor-to-target distance is also investigated. In addition, using sensor fusion, the system is made more robust than the individual sensors, helping to reduce false detections. To counteract the lack of public datasets, a novel video dataset containing 650 annotated infrared and visible videos of drones, birds, airplanes and helicopters is also presented (https://github.com/DroneDetectionThesis/Drone-detection-dataset). The database is complemented with an audio dataset of the classes drones, helicopters and background noise. © 2020 IEEE

Place, publisher, year, edition, pages
IEEE, 2021
Series
International Conference on Pattern Recognition, ISSN 1051-4651
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-43312 (URN); 10.1109/ICPR48806.2021.9413241 (DOI); 000678409207054 (); 2-s2.0-85109250336 (Scopus ID); 978-1-7281-8808-9 (ISBN); 978-1-7281-8809-6 (ISBN)
Conference
International Conference on Pattern Recognition, ICPR, Milan, Italy, 10-15 January, 2021
Funder
Swedish Research Council; Vinnova
Available from: 2020-10-19. Created: 2020-10-19. Last updated: 2023-08-21. Bibliographically approved.
Projects
DIFFUSE: Disentanglement of Features For Utilization in Systematic Evaluation [2021-05038_Vinnova]
Organisations
RISE - Research Institutes of Sweden (2017-2019) (Closed down 2019-12-31)
Identifiers
ORCID iD: orcid.org/0000-0002-1043-8773
