hh.se Publications
Publications (10 of 71)
Rosberg, F., Englund, C., Aksoy, E. & Alonso-Fernandez, F. (2025). Adversarial Attacks and Identity Leakage in De-Identification Systems: An Empirical Study. IEEE Transactions on Biomedical Engineering, 1-18
Adversarial Attacks and Identity Leakage in De-Identification Systems: An Empirical Study
2025 (English) In: IEEE Transactions on Biomedical Engineering, ISSN 0018-9294, E-ISSN 1558-2531, p. 1-18. Article in journal (Refereed) Submitted
Abstract [en]

In this paper, we investigate the impact of adversarial attacks on identity encoders within a realistic de-identification framework. Our experiments show that attacks transfer from an external surrogate model to the system model (e.g., CosFace to ArcFace), allowing the adversary to cause identity information to leak in a sufficiently sensitive face recognition system. We present experimental evidence and propose strategies to mitigate this vulnerability. Specifically, we show how fine-tuning on adversarial examples helps to mitigate this effect for distortion-based attacks (e.g., snow, fog), while a simple low-pass filter can attenuate the effect of adversarial noise without affecting the de-identified images. Our mitigation results in a de-identification system that preserves its functionality while being significantly more robust to adversarial noise.
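
As a rough illustration of the low-pass defense mentioned in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes the adversarial perturbation lives mostly in high spatial frequencies and attenuates it with a Gaussian blur before the image reaches the identity encoder. The function name and filter parameters are illustrative.

import cv2
import numpy as np

def low_pass_defense(image: np.ndarray, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    # Attenuate high-frequency adversarial noise with a Gaussian low-pass
    # filter. The 5x5 kernel and sigma=1.0 are hypothetical settings, not
    # the values used in the paper.
    return cv2.GaussianBlur(image, (ksize, ksize), sigma)

# Usage: filter a (possibly adversarial) de-identified image before it is
# passed to the face recognition system.
img = (np.random.rand(112, 112, 3) * 255).astype(np.uint8)  # stand-in for a de-identified face
defended = low_pass_defense(img)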

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2025
Keywords
De-Identification, Adversarial Attacks, Adversarial Transferability
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-55647 (URN)
Available from: 2025-03-18 Created: 2025-03-18 Last updated: 2025-03-18. Bibliographically approved
Perez-Cerrolaza, J., Abella, J., Borg, M., Donzella, C., Cerquides, J., Cazorla, F. J., . . . Flores, J. L. (2024). Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey. ACM Computing Surveys, 56(7), Article ID 176.
Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey
2024 (English) In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 56, no 7, article id 176. Article in journal (Refereed) Published
Abstract [en]

Artificial Intelligence (AI) can enable the development of next-generation autonomous safety-critical systems in which Machine Learning (ML) algorithms learn optimized and safe solutions. AI can also support and assist human safety engineers in developing safety-critical systems. However, reconciling both cutting-edge and state-of-the-art AI technology with safety engineering processes and safety standards is an open challenge that must be addressed before AI can be fully embraced in safety-critical systems. Many works already address this challenge, resulting in a vast and fragmented literature. Focusing on the industrial and transportation domains, this survey structures and analyzes challenges, techniques, and methods for developing AI-based safety-critical systems, from traditional functional safety systems to autonomous systems. AI trustworthiness spans several dimensions, such as engineering, ethics and legal, and this survey focuses on the safety engineering dimension. © 2024 Copyright held by the owner/author(s).

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2024
Keywords
autonomous systems, functional safety
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-52951 (URN)10.1145/3626314 (DOI)001208811000015 ()2-s2.0-85191063705 (Scopus ID)
Available from: 2024-03-22 Created: 2024-03-22 Last updated: 2024-07-09. Bibliographically approved
Arvidsson, M., Sawirot, S., Englund, C., Alonso-Fernandez, F., Torstensson, M. & Duran, B. (2023). Drone navigation and license place detection for vehicle location in indoor spaces. In: Yanio Hernández Heredia; Vladimir Milián Núñez; José Ruiz Shulcloper (Ed.), Progress in Artificial Intelligence and Pattern Recognition. Paper presented at 8th International Congress on Artificial Intelligence and Pattern Recognition, IWAIPR 2023, Varadero, Cuba, September 27–29, 2023 (pp. 362-374). Heidelberg: Springer
Drone navigation and license place detection for vehicle location in indoor spaces
2023 (English) In: Progress in Artificial Intelligence and Pattern Recognition / [ed] Yanio Hernández Heredia; Vladimir Milián Núñez; José Ruiz Shulcloper, Heidelberg: Springer, 2023, p. 362-374. Conference paper, Published paper (Refereed)
Abstract [en]

Millions of vehicles are transported every year, tightly parked in vessels or boats. To reduce the risks of associated safety issues like fires, knowing the location of vehicles is essential, since different vehicles may need different mitigation measures, e.g. electric cars. This work is aimed at creating a solution based on a nano-drone that navigates across rows of parked vehicles and detects their license plates. We do so via a wall-following algorithm and a CNN trained to detect license plates. All computations are done in real-time on the drone, which only sends its position and the detected images, allowing the creation of a 2D map with the positions of the plates. Our solution is capable of reading all plates across eight test cases (with several rows of plates, different drone speeds, or low light) by aggregating measurements across several drone journeys. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
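
The 2D-map step described above, aggregating plate detections across several drone journeys, can be illustrated compactly. This is a minimal sketch under stated assumptions, not the paper's code: each detection is assumed to be a (plate_text, x, y) tuple reported by the drone, and positions of the same plate are averaged across journeys.

from collections import defaultdict

def build_plate_map(journeys):
    # journeys: list of journeys, each a list of (plate, x, y) detections.
    # Returns a dict mapping each plate to its mean 2D position.
    positions = defaultdict(list)
    for journey in journeys:
        for plate, x, y in journey:
            positions[plate].append((x, y))
    return {plate: (sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts))
            for plate, pts in positions.items()}

# Hypothetical detections from two journeys along the same row of vehicles.
journeys = [[("ABC123", 1.9, 0.5), ("XYZ789", 4.1, 0.5)],
            [("ABC123", 2.1, 0.5)]]
print(build_plate_map(journeys))  # {'ABC123': (2.0, 0.5), 'XYZ789': (4.1, 0.5)}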

Place, publisher, year, edition, pages
Heidelberg: Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14335
Keywords
Nano-drone, License plate detection, Vehicle location, UAV
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-51292 (URN)10.1007/978-3-031-49552-6_31 (DOI)2-s2.0-85180752157 (Scopus ID)978-3-031-49551-9 (ISBN)978-3-031-49552-6 (ISBN)
Conference
8th International Congress on Artificial Intelligence and Pattern Recognition, IWAIPR 2023, Varadero, Cuba, September 27–29, 2023
Funder
Vinnova; Swedish Research Council
Available from: 2023-07-19 Created: 2023-07-19 Last updated: 2024-04-04. Bibliographically approved
Rosberg, F., Aksoy, E., Alonso-Fernandez, F. & Englund, C. (2023). FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping. In: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023. Paper presented at 23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, Hawaii, USA, 3-7 January 2023 (pp. 3443-3452). Piscataway: IEEE
FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping
2023 (English) In: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023, Piscataway: IEEE, 2023, p. 3443-3452. Conference paper, Published paper (Refereed)
Abstract [en]

In this work, we present a new single-stage method for subject agnostic face swapping and identity transfer, named FaceDancer. We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR). The AFFA module is embedded in the decoder and adaptively learns to fuse attribute features and features conditioned on identity information without requiring any additional facial segmentation process. In IFSR, we leverage the intermediate features in an identity encoder to preserve important attributes such as head pose, facial expression, lighting, and occlusion in the target face, while still transferring the identity of the source face with high fidelity. We conduct extensive quantitative and qualitative experiments on various datasets and show that the proposed FaceDancer outperforms other state-of-the-art networks in terms of identity transfer, while having significantly better pose preservation than most of the previous methods. © 2023 IEEE.
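
To make the IFSR idea concrete, here is a minimal PyTorch-style sketch, not the paper's implementation: pooled intermediate features from a frozen identity encoder are compared with cosine distance against one margin per block, so the swapped face is only penalized when it drifts too far from the target's attributes. The margin values and feature shapes are illustrative.

import torch
import torch.nn.functional as F

def ifsr_loss(target_feats, swapped_feats, margins):
    # target_feats / swapped_feats: lists of (B, C) pooled features from the
    # same blocks of a frozen identity encoder; margins: one scalar per block.
    loss = 0.0
    for t, s, m in zip(target_feats, swapped_feats, margins):
        dist = 1.0 - F.cosine_similarity(t, s, dim=-1)       # (B,)
        loss = loss + torch.clamp(dist - m, min=0.0).mean()  # hinge per block
    return loss / len(margins)

# Hypothetical pooled features from three encoder blocks.
t_feats = [torch.randn(4, 256) for _ in range(3)]
s_feats = [f + 0.1 * torch.randn_like(f) for f in t_feats]
print(ifsr_loss(t_feats, s_feats, margins=[0.10, 0.15, 0.20]))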

Place, publisher, year, edition, pages
Piscataway: IEEE, 2023
Keywords
Algorithms, Biometrics, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning), body pose, face, formulations, gesture, Machine learning architectures
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-48618 (URN)10.1109/WACV56688.2023.00345 (DOI)000971500203054 ()2-s2.0-85149000603 (Scopus ID)9781665493468 (ISBN)
Conference
23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, Hawaii, USA, 3-7 January 2023
Available from: 2022-11-15 Created: 2022-11-15 Last updated: 2025-03-18. Bibliographically approved
Rosberg, F., Aksoy, E., Englund, C. & Alonso-Fernandez, F. (2023). FIVA: Facial Image and Video Anonymization and Anonymization Defense. In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Paper presented at 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023 (pp. 362-371). Los Alamitos, CA: IEEE
FIVA: Facial Image and Video Anonymization and Anonymization Defense
2023 (English) In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Los Alamitos, CA: IEEE, 2023, p. 362-371. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present a new approach for facial anonymization in images and videos, abbreviated as FIVA. Our proposed method is able to maintain the same face anonymization consistently over frames with our suggested identity-tracking and guarantees a strong difference from the original face. FIVA allows for 0 true positives for a false acceptance rate of 0.001. Our work considers the important security issue of reconstruction attacks and investigates adversarial noise, uniform noise, and parameter noise to disrupt reconstruction attacks. In this regard, we apply different defense and protection methods against these privacy threats to demonstrate the scalability of FIVA. On top of this, we also show that reconstruction attack models can be used for detection of deep fakes. Last but not least, we provide experimental results showing how FIVA can even enable face swapping, which is purely trained on a single target image. © 2023 IEEE.
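
Of the defenses listed above, parameter noise is the simplest to illustrate. The sketch below is a generic illustration under assumed magnitudes, not FIVA's code: small uniform noise is added to the anonymization model's weights, so a reconstruction attack fitted to the original, fixed model degrades. The epsilon value is hypothetical.

import numpy as np

def add_uniform_parameter_noise(params, eps=1e-3, seed=0):
    # params: list of numpy weight arrays of an anonymization model.
    # Returns perturbed copies with i.i.d. uniform noise in [-eps, eps].
    rng = np.random.default_rng(seed)
    return [w + rng.uniform(-eps, eps, size=w.shape) for w in params]

# Hypothetical two-layer weight list.
weights = [np.ones((8, 8)), np.zeros(8)]
noisy = add_uniform_parameter_noise(weights)
print(float(np.abs(noisy[0] - weights[0]).max()) <= 1e-3)  # True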

Place, publisher, year, edition, pages
Los Alamitos, CA: IEEE, 2023
Series
IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9944
Keywords
Anonymization, Deep Fakes, Facial Recognition, Identity Tracking, Reconstruction Attacks
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52592 (URN)10.1109/ICCVW60793.2023.00043 (DOI)2-s2.0-85182917356 (Scopus ID)9798350307443 (ISBN)
Conference
2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023
Available from: 2024-02-08 Created: 2024-02-08 Last updated: 2025-03-18. Bibliographically approved
Abella, J., Perez, J., Englund, C., Zonooz, B., Giordana, G., Donzella, C., . . . Cunial, D. (2023). SAFEXPLAIN: Safe and Explainable Critical Embedded Systems Based on AI. In: DATE 23: Design, Automation And Test In Europe: The European Event For Electronic System Design & Test. Paper presented at The 26th DATE conference, Antwerp, Belgium, 17-19 April, 2023 (pp. 1-6).
SAFEXPLAIN: Safe and Explainable Critical Embedded Systems Based on AI
2023 (English) In: DATE 23: Design, Automation And Test In Europe: The European Event For Electronic System Design & Test, 2023, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. As a matter of fact, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges by providing a flexible approach to allow the certification - hence adoption - of DL-based solutions in CAIS, building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance with certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations, to regain determinism, and probabilistic timing analyses, to handle the remaining non-determinism. © 2023 EDAA.

Series
Design, Automation and Test in Europe (DATE), ISSN 1530-1591, E-ISSN 1558-1101
Keywords
Deep learning, Embedded systems, Product design, Software testing, Stochastic systems
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-52952 (URN)10.23919/date56975.2023.10137128 (DOI)001027444200173 ()2-s2.0-85162662708 (Scopus ID)978-3-9819263-7-8 (ISBN)
Conference
The 26th DATE conference, Antwerp, Belgium, 17-19 April, 2023
Funder
EU, Horizon Europe, 101069595
Note

The research leading to these results has received funding from the Horizon Europe Programme under the SAFEXPLAIN Project (www.safexplain.eu), grant agreement num. 101069595. BSC authors have also been supported by the Spanish Ministry of Science and Innovation under grant PID2019-107255GBC21/AEI/10.13039/501100011033

Available from: 2024-03-22 Created: 2024-03-22 Last updated: 2024-03-26. Bibliographically approved
Aramrattana, M., Larsson, T., Englund, C., Jansson, J. & Nåbo, A. (2022). A Simulation Study on Effects of Platooning Gaps on Drivers of Conventional Vehicles in Highway Merging Situations. IEEE transactions on intelligent transportation systems (Print), 23(4), 3790-3796
A Simulation Study on Effects of Platooning Gaps on Drivers of Conventional Vehicles in Highway Merging Situations
2022 (English) In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 23, no 4, p. 3790-3796. Article in journal (Refereed) Published
Abstract [en]

Platooning refers to a group of vehicles that--enabled by wireless vehicle-to-vehicle (V2V) communication and vehicle automation--drives with short inter-vehicular distances. Before its deployment on public roads, several challenging traffic situations need to be handled. Among the challenges are cut-in situations, where a conventional vehicle--a vehicle that has no automation or V2V communication--changes lane and ends up between vehicles in a platoon. This paper presents results from a simulation study of a scenario where a conventional vehicle, approaching from an on-ramp, merges into a platoon of five cars on a highway. We created the scenario with four platooning gaps: 15, 22.5, 30, and 42.5 meters. During the study, the conventional vehicle was driven by 37 test persons, who experienced all the platooning gaps using a driving simulator. The participants' opinions on safety, comfort, and ease of driving into the platoon in each gap setting were also collected through a questionnaire. The results suggest that a 15-meter gap prevents most participants from cutting in, while causing potentially dangerous maneuvers and collisions when cut-ins occur. A platooning gap of at least 30 meters yields positive opinions from the participants and facilitates smoother cut-in maneuvers, while fewer collisions were observed.

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2022
Keywords
Merging, Vehicles, Roads, Automobiles, Vehicular ad hoc networks, Meters, Safety
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-43769 (URN)10.1109/TITS.2020.3040085 (DOI)000776187400074 ()2-s2.0-85098774184 (Scopus ID)
Funder
Vinnova, 2015-04881
Note

Funding: Swedish Government Agency for Innovation Systems VINNOVA through the NGEA step 2; Vehicle and Traffic Safety Centre at Chalmers SAFER

Available from: 2021-01-11 Created: 2021-01-11 Last updated: 2022-05-02. Bibliographically approved
Svanström, F., Alonso-Fernandez, F. & Englund, C. (2022). Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities. Drones, 6(11), Article ID 317.
Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities
2022 (English) In: Drones, ISSN 2504-446X, Vol. 6, no 11, article id 317. Article in journal (Refereed) Published
Abstract [en]

Automatic detection of flying drones is a key issue, since their presence, especially if unauthorized, can create risky situations or compromise security. Here, we design and evaluate a multi-sensor drone detection system. In conjunction with standard video cameras and microphone sensors, we explore the use of thermal infrared cameras, pointed out as a feasible and promising solution that is scarcely addressed in the related literature. Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest. The sensing solutions are complemented with an ADS-B receiver, a GPS receiver, and a radar module; however, our final deployment did not include the latter due to its limited detection range. The thermal camera is shown to be a feasible solution, performing as well as the video camera, even if the camera employed here has a lower resolution. Two other novelties of our work are the creation of a new public dataset of multi-sensor annotated data that expands the number of classes compared to existing ones, and the study of detector performance as a function of the sensor-to-target distance. Sensor fusion is also explored, showing that the system can be made more robust in this way, mitigating false detections of the individual sensors. © 2022 by the authors.
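
As a rough illustration of why fusing sensors mitigates false detections, the sketch below uses a generic weighted vote, not necessarily the fusion rule of the paper: each sensor reports a confidence in [0, 1] that a drone is present, and the fused score is compared against a threshold. The sensor names, weights, and threshold are hypothetical.

# Hypothetical per-sensor weights for late fusion.
SENSOR_WEIGHTS = {"visible": 0.4, "thermal": 0.4, "audio": 0.2}

def fuse(confidences: dict, threshold: float = 0.5) -> bool:
    # Weighted average of per-sensor confidences against a decision threshold.
    score = sum(SENSOR_WEIGHTS[s] * c for s, c in confidences.items())
    return score >= threshold

# A weak detection from a single sensor is rejected...
print(fuse({"visible": 0.6, "thermal": 0.1, "audio": 0.1}))  # False (0.30)
# ...while agreement between the visible and thermal cameras is accepted.
print(fuse({"visible": 0.7, "thermal": 0.8, "audio": 0.2}))  # True (0.64)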

Place, publisher, year, edition, pages
Basel: MDPI, 2022
Keywords
anti-drone systems, drone detection, UAV detection
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-48786 (URN)10.3390/drones6110317 (DOI)000881010600001 ()2-s2.0-85141807932 (Scopus ID)
Available from: 2022-12-09 Created: 2022-12-09 Last updated: 2022-12-09. Bibliographically approved
Svanström, F., Alonso-Fernandez, F. & Englund, C. (2021). A dataset for multi-sensor drone detection. Data in Brief, 39, Article ID 107521.
A dataset for multi-sensor drone detection
2021 (English) In: Data in Brief, E-ISSN 2352-3409, Vol. 39, article id 107521. Article in journal (Refereed) Published
Abstract [en]

The use of small and remotely controlled unmanned aerial vehicles (UAVs), referred to as drones, has increased dramatically in recent years, both for professional and recreative purposes. This goes in parallel with (intentional or unintentional) misuse episodes, with an evident threat to the safety of people or facilities [1]. As a result, the detection of UAVs has also emerged as a research topic [2]. Most of the existing studies on drone detection fail to specify the type of acquisition device, the drone type, the detection range, or the employed dataset. The lack of proper UAV detection studies employing thermal infrared cameras is also acknowledged as an issue, despite its success in detecting other types of targets [2]. Besides, we have not found any previous study that addresses the detection task as a function of distance to the target. Sensor fusion is indicated as an open research issue as well, to achieve better detection results in comparison to a single sensor, although research in this direction is scarce too [3–6]. To help in counteracting the mentioned issues and allow fundamental studies with a common public benchmark, we contribute with an annotated multi-sensor database for drone detection that includes infrared and visible videos and audio files. The database includes three different drones, a small-sized model (Hubsan H107D+), a medium-sized drone (DJI Flame Wheel in quadcopter configuration), and a performance-grade model (DJI Phantom 4 Pro). It also includes other flying objects that can be mistakenly detected as drones, such as birds, airplanes or helicopters. In addition to the use of several different sensors, the number of classes is higher than in previous studies [4]. The video part contains 650 infrared and visible videos (365 IR and 285 visible) of drones, birds, airplanes and helicopters. Each clip is ten seconds long, resulting in a total of 203,328 annotated frames. The database is complemented with 90 audio files of the classes drones, helicopters and background noise. To allow studies as a function of the sensor-to-target distance, the dataset is divided into three categories (Close, Medium, Distant) according to the industry-standard Detect, Recognize and Identify (DRI) requirements [7], built on the Johnson criteria [8]. Given that the drones must be flown within visual range due to regulations, the largest sensor-to-target distance for a drone in the dataset is 200 m, and acquisitions are made in daylight. The data has been obtained at three airports in Sweden: Halmstad Airport (IATA code: HAD/ICAO code: ESMT), Gothenburg City Airport (GSE/ESGP) and Malmö Airport (MMX/ESMS). The acquisition sensors are mounted on a pan-tilt platform that steers the cameras to the objects of interest. All sensors and the platform are controlled with a standard laptop via a USB hub. © 2021
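
The Close/Medium/Distant split can be expressed as a simple lookup over the sensor-to-target distance. The sketch below is illustrative only: the 40 m and 120 m cut-offs are hypothetical placeholders, not the dataset's actual DRI-derived boundaries, which are documented with the data.

def distance_category(sensor_to_target_m: float) -> str:
    # Map a sensor-to-target distance (<= 200 m in this dataset) to one of
    # the three categories; the cut-offs below are hypothetical.
    if sensor_to_target_m < 40:
        return "Close"
    if sensor_to_target_m < 120:
        return "Medium"
    return "Distant"

print(distance_category(25))   # Close
print(distance_category(180))  # Distant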

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2021
Keywords
Anti-drone systems, Drone detection, UAV detection
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-45926 (URN)10.1016/j.dib.2021.107521 (DOI)000718134700023 ()2-s2.0-85118496043 (Scopus ID)
Funder
Swedish Research Council; Vinnova
Available from: 2021-11-23 Created: 2021-11-23 Last updated: 2022-01-31. Bibliographically approved
Englund, C., Erdal Aksoy, E., Alonso-Fernandez, F., Cooney, M. D., Pashami, S. & Åstrand, B. (2021). AI Perspectives in Smart Cities and Communities to Enable Road Vehicle Automation and Smart Traffic Control. Smart Cities, 4(2), 783-802
AI Perspectives in Smart Cities and Communities to Enable Road Vehicle Automation and Smart Traffic Control
2021 (English) In: Smart Cities, E-ISSN 2624-6511, Vol. 4, no 2, p. 783-802. Article in journal (Refereed) Published
Abstract [en]

Smart Cities and Communities (SCC) constitute a new paradigm in urban development. SCC ideates on a data-centered society aiming at improving efficiency by automating and optimizing activities and utilities. Information and communication technology along with the internet of things enables data collection, and with the help of artificial intelligence (AI), situation awareness can be obtained to feed the SCC actors with enriched knowledge. This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control. Perception, Smart Traffic Control and Driver Modelling are described along with open research challenges and standardization to help introduce advanced driver assistance systems and automated vehicle functionality in traffic. To fully realize the potential of SCC and create a holistic view on a city level, the availability of data from different stakeholders is needed. Further, though AI technologies provide accurate predictions and classifications, there is an ambiguity regarding the correctness of their outputs. This can make it difficult for the human operator to trust the system. Today there are no methods that can be used to match function requirements with the level of detail in data annotation in order to train an accurate model. Another challenge related to trust is explainability: as long as the models have difficulty explaining how they reach their conclusions, it is difficult for humans to trust them. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Place, publisher, year, edition, pages
Basel: MDPI, 2021
Keywords
smart cities, artificial intelligence, perception, smart traffic control, driver modeling
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-44272 (URN)10.3390/smartcities4020040 (DOI)000668714200001 ()2-s2.0-85119570196 (Scopus ID)
Funder
Vinnova, 2018-05001; 2019-05871; Knowledge Foundation; Swedish Research Council, 2016-03497
Note

Funding: The research leading to these results has partially received funding from the Vinnova FFI project SHARPEN, under grant agreement no. 2018-05001 and the Vinnova FFI project SMILE III, under the grant agreement no. 2019-05871. The funding received from the Knowledge Foundation (KKS) in the framework of “Safety of Connected Intelligent Vehicles in Smart Cities–SafeSmart” project (2019–2023) is gratefully acknowledged. Finally, the authors thanks the Swedish Research Council (project 2016-03497) for funding their research.

Available from: 2021-05-11 Created: 2021-05-11 Last updated: 2023-06-08. Bibliographically approved
Projects
DIFFUSE: Disentanglement of Features For Utilization in Systematic Evaluation [2021-05038_Vinnova]; RISE - Research Institutes of Sweden (2017-2019) (Closed down 2019-12-31)
Identifiers
ORCID iD: orcid.org/0000-0002-1043-8773
