hh.se Publications
Åstrand, Björn
Publications (10 of 35)
Muhammad, N., Hedenberg, K. & Åstrand, B. (2021). Adaptive warning fields for warehouse AGVs. In: 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). Paper presented at 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Västerås, Sweden (Online), 7-10 Sept., 2021 (pp. 1-8). IEEE
Adaptive warning fields for warehouse AGVs
2021 (English) In: 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), IEEE, 2021, p. 1-8. Conference paper, Published paper (Refereed)
Abstract [en]

AGV (automated guided vehicle) systems are extensively used in factory and warehouse environments. As most of these environments employ a mix of AGVs, manually driven vehicles, and human workers, safety is an important subject. Current AGV systems employ safety fields and laser scanners to ensure safety in their environments. These fields, however, are often primitive and take into account neither the future AGV trajectory nor the intentions of agents in the vicinity, which results in inefficient operation of such AGVs. We propose a three-layered architecture consisting of safety fields formed around the immediate future trajectory of the AGV as well as around the predicted intention of an agent in its vicinity, resulting in more efficient AGV behaviour. Results are presented using real laser data from a small-sized lab AGV as well as an industrial forklift truck.
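The layered behaviour described in the abstract can be illustrated with a minimal sketch, assuming 2D laser points given in the vehicle frame. Field shapes, growth laws, and thresholds below are illustrative, not the architecture from the paper:

```python
# A minimal sketch of a three-layered warning-field check. All parameters
# and field shapes are invented for illustration.
from dataclasses import dataclass

@dataclass
class Field:
    length: float  # extent ahead of the vehicle [m]
    width: float   # half-width of the field [m]
    action: str    # reaction when the field is violated

def point_in_field(x, y, field):
    """Axis-aligned rectangular field in front of the vehicle."""
    return 0.0 <= x <= field.length and abs(y) <= field.width

def react(scan_points, v, agent_approaching):
    # Layer 1: protective field scaled with current speed (stop).
    # Layer 2: warning field along the immediate future trajectory (slow down).
    # Layer 3: field widened when a nearby agent is predicted to cross (warn).
    layers = [
        Field(length=0.5 + 0.5 * v, width=0.6, action="stop"),
        Field(length=1.0 + 1.5 * v, width=0.8, action="slow"),
        Field(length=1.0 + 1.5 * v,
              width=0.8 + (0.6 if agent_approaching else 0.0), action="warn"),
    ]
    for field in layers:  # most restrictive field first
        if any(point_in_field(x, y, field) for (x, y) in scan_points):
            return field.action
    return "go"

print(react([(1.2, 0.3)], v=1.0, agent_approaching=False))  # -> "slow"
```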

Place, publisher, year, edition, pages
IEEE, 2021
Keywords
AGV, AGV safety, Safety architecture
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-46052 (URN), 10.1109/ETFA45728.2021.9613565 (DOI), 000766992600182 (), 2-s2.0-85122937580 (Scopus ID), 978-1-7281-2989-1 (ISBN), 978-1-7281-2990-7 (ISBN)
Conference
2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Västerås, Sweden (Online), 7-10 Sept., 2021
Projects
CAISR
Funder
Knowledge Foundation
Note

Funding: (i) the Swedish Knowledge Foundation (under the CAISR program); (ii) the European Social Fund via the IT Academy programme; and (iii) the industrial partners Kollmorgen and Toyota Material Handling Europe.

Available from: 2021-12-07 Created: 2021-12-07 Last updated: 2023-10-05. Bibliographically approved.
Englund, C., Erdal Aksoy, E., Alonso-Fernandez, F., Cooney, M. D., Pashami, S. & Åstrand, B. (2021). AI Perspectives in Smart Cities and Communities to Enable Road Vehicle Automation and Smart Traffic Control. Smart Cities, 4(2), 783-802
AI Perspectives in Smart Cities and Communities to Enable Road Vehicle Automation and Smart Traffic Control
2021 (English) In: Smart Cities, E-ISSN 2624-6511, Vol. 4, no 2, p. 783-802. Article in journal (Refereed), Published
Abstract [en]

Smart Cities and Communities (SCC) constitute a new paradigm in urban development. SCC envisions a data-centred society that aims to improve efficiency by automating and optimizing activities and utilities. Information and communication technology, along with the Internet of Things, enables data collection, and with the help of artificial intelligence (AI), situation awareness can be obtained to feed the SCC actors with enriched knowledge. This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control. Perception, smart traffic control, and driver modelling are described, along with open research challenges and standardization, to help introduce advanced driver assistance systems and automated vehicle functionality in traffic. To fully realize the potential of SCC and create a holistic view on a city level, the availability of data from different stakeholders is needed. Further, though AI technologies provide accurate predictions and classifications, there is ambiguity regarding the correctness of their outputs, which can make it difficult for a human operator to trust the system. Today there are no methods that can be used to match function requirements with the level of detail in data annotation needed to train an accurate model. Another challenge related to trust is explainability: as long as the models cannot explain how they arrive at their conclusions, it is difficult for humans to trust them. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Place, publisher, year, edition, pages
Basel: MDPI, 2021
Keywords
smart cities, artificial intelligence, perception, smart traffic control, driver modeling
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-44272 (URN), 10.3390/smartcities4020040 (DOI), 000668714200001 (), 2-s2.0-85119570196 (Scopus ID)
Funder
Vinnova, 2018-05001; 2019-05871; Knowledge Foundation; Swedish Research Council, 2016-03497
Note

Funding: The research leading to these results has partially received funding from the Vinnova FFI project SHARPEN, under grant agreement no. 2018-05001, and the Vinnova FFI project SMILE III, under grant agreement no. 2019-05871. The funding received from the Knowledge Foundation (KKS) in the framework of the “Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart” project (2019–2023) is gratefully acknowledged. Finally, the authors thank the Swedish Research Council (project 2016-03497) for funding their research.

Available from: 2021-05-11 Created: 2021-05-11 Last updated: 2023-06-08. Bibliographically approved.
Muhammad, N. & Åstrand, B. (2019). Predicting Agent Behaviour and State for Applications in a Roundabout-Scenario Autonomous Driving. Sensors, 19(19), Article ID 4279.
Predicting Agent Behaviour and State for Applications in a Roundabout-Scenario Autonomous Driving
2019 (English) In: Sensors, E-ISSN 1424-8220, Vol. 19, no 19, article id 4279. Article in journal (Refereed), Published
Abstract [en]

As human drivers, we instinctively employ our understanding of other road users' behaviour to enhance the efficiency of our drive and the safety of the traffic. In recent years, different aspects of assisted and autonomous driving have received considerable attention from the research and industrial communities, including behaviour modelling and prediction of future state. In this paper, we address the problem of modelling and predicting agent behaviour and state in a roundabout traffic scenario. We present three ways of modelling traffic in a roundabout, based on: (i) the roundabout geometry; (ii) the mean path taken by vehicles inside the roundabout; and (iii) a set of reference trajectories traversed by vehicles inside the roundabout. The roundabout models are compared in terms of exit-direction classification and state (i.e., position inside the roundabout) prediction of query vehicles inside the roundabout. Both tasks are based on a particle-filter classifier algorithm. The results show that the roundabout model based on a set of reference trajectories is better suited for both exit-direction classification and state prediction.
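As a rough illustration of a particle-filter classifier over reference trajectories (the paper's exact formulation is not reproduced here), the following sketch assumes one reference trajectory per exit, given as a list of (x, y) points; names, noise model, and parameters are invented:

```python
# Each particle carries an exit hypothesis and a progress index along that
# exit's reference trajectory; resampling concentrates particles on the
# hypothesis that best explains the observed positions.
import math
import random

def pf_exit_classification(ref_trajs, observations, n=300, sigma=0.8):
    # Initialize particles uniformly over the exit hypotheses at the entry.
    particles = [{"exit": e, "i": 0}
                 for e in ref_trajs for _ in range(n // len(ref_trajs))]
    for z in observations:
        # Predict: advance each particle along its reference trajectory.
        for p in particles:
            p["i"] = min(p["i"] + random.randint(0, 2),
                         len(ref_trajs[p["exit"]]) - 1)
        # Weight: likelihood of the measured position under each particle.
        w = [math.exp(-0.5 * (math.dist(ref_trajs[p["exit"]][p["i"]], z)
                              / sigma) ** 2) + 1e-12 for p in particles]
        # Resample proportional to the weights (multinomial resampling).
        particles = [dict(p) for p in
                     random.choices(particles, weights=w, k=len(particles))]
    counts = {e: 0 for e in ref_trajs}
    for p in particles:
        counts[p["exit"]] += 1
    return max(counts, key=counts.get)  # most supported exit hypothesis
```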

Place, publisher, year, edition, pages
Basel: MDPI, 2019
Keywords
Behaviour modeling, Roundabout
National Category
Signal Processing; Robotics
Identifiers
urn:nbn:se:hh:diva-40662 (URN), 10.3390/s19194279 (DOI), 000494823200223 (), 2-s2.0-85072920077 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2019-10-04 Created: 2019-10-04 Last updated: 2022-02-10. Bibliographically approved.
Ericson, S. K. & Åstrand, B. (2018). Analysis of two visual odometry systems for use in an agricultural field environment. Biosystems Engineering, 166, 116-125
Analysis of two visual odometry systems for use in an agricultural field environment
2018 (English) In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 166, p. 116-125. Article in journal (Refereed), Published
Abstract [en]

This paper analyses two visual odometry systems for use in an agricultural field environment. The impact of various design parameters and camera setups is evaluated in a simulation environment. Four real field experiments were conducted using a mobile robot operating in an agricultural field. The robot was controlled to travel in a regular back-and-forth pattern with headland turns. The experimental runs were 1.8–3.1 km long and consisted of 32–63,000 frames. The results indicate that a camera angle of 75° gives the best results with the least error. An increased camera resolution only improves the result slightly. The algorithm must be able to reduce error accumulation by adapting the frame rate to minimise error. The results also illustrate the difficulties of estimating roll and pitch using a downward-facing camera. The best results for full 6-DOF position estimation were obtained on a 1.8-km run using 6680 frames captured from the forward-facing cameras. The translation error (x, y, z) is 3.76% and the rotational error (i.e., roll, pitch, and yaw) is 0.0482 deg m⁻¹. The main contributions of this paper are an analysis of the impact of design options on visual odometry results and a comparison of two state-of-the-art visual odometry algorithms applied to agricultural field data. © 2017 IAgrE
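The two drift figures quoted (percent translation error and deg m⁻¹ rotation error) can be computed per segment roughly as in this sketch, a simplified stand-in for the paper's evaluation protocol:

```python
# Translational drift as a percentage of distance travelled and rotational
# drift in degrees per metre, accumulated over consecutive pose pairs.
import math

def drift_metrics(est, gt):
    """est, gt: lists of (x, y, z, yaw_deg) poses at matching timestamps."""
    trans_err = rot_err = dist = 0.0
    for k in range(1, len(gt)):
        seg = math.dist(gt[k][:3], gt[k - 1][:3])   # true segment length
        dist += seg
        trans_err += abs(math.dist(est[k][:3], est[k - 1][:3]) - seg)
        rot_err += abs((est[k][3] - est[k - 1][3]) - (gt[k][3] - gt[k - 1][3]))
    return 100.0 * trans_err / dist, rot_err / dist  # [%], [deg/m]
```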

Place, publisher, year, edition, pages
London: Academic Press, 2018
Keywords
Visual odometry, Agricultural field robots, Visual navigation
National Category
Signal Processing; Robotics
Identifiers
urn:nbn:se:hh:diva-35853 (URN), 10.1016/j.biosystemseng.2017.11.009 (DOI), 000424726400009 (), 2-s2.0-85037985130 (Scopus ID)
Available from: 2017-12-14 Created: 2017-12-14 Last updated: 2020-02-03. Bibliographically approved.
Muhammad, N. & Åstrand, B. (2018). Intention Estimation Using Set of Reference Trajectories as Behaviour Model. Sensors, 18(12), Article ID 4423.
Intention Estimation Using Set of Reference Trajectories as Behaviour Model
2018 (English) In: Sensors, E-ISSN 1424-8220, Vol. 18, no 12, article id 4423. Article in journal (Refereed), Published
Abstract [en]

Autonomous robotic systems operating in the vicinity of other agents, such as humans, manually driven vehicles, and other robots, can model the behaviour and estimate the intentions of those agents to enhance the efficiency of their operation while preserving safety. We propose a data-driven approach to modelling the behaviour of other agents, based on a set of trajectories navigated by those agents. To evaluate the proposed behaviour modelling approach, we then propose and compare two methods for agent intention estimation, based on: (i) particle filtering; and (ii) decision trees. The proposed methods were validated using three datasets consisting of real-world bicycle and car trajectories in two different scenarios: a roundabout, and a T-junction with a pedestrian crossing. The results validate the utility of the data-driven behaviour model and show that decision-tree based intention estimation works better on a binary-class problem, whereas the particle-filter based technique performs better on a multi-class problem such as the roundabout, where it yielded an average gain of 14.88 m in correct intention estimation locations compared to the decision-tree based method. © 2018 by the authors
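A hedged sketch of the decision-tree alternative, with hypothetical hand-crafted features (speed, heading, lateral offset); the features and labels used in the paper differ, and the behaviour model there is the trajectory set itself:

```python
# Classify intention from simple summary features of an observed track.
import math
from sklearn.tree import DecisionTreeClassifier

def features(track):
    """track: list of (x, y, t) observations of one agent."""
    (x0, y0, t0), (x1, y1, t1) = track[0], track[-1]
    speed = math.dist((x0, y0), (x1, y1)) / max(t1 - t0, 1e-6)
    heading = math.atan2(y1 - y0, x1 - x0)
    return [speed, heading, y1]  # y1 ~ lateral offset in a road-aligned frame

clf = DecisionTreeClassifier(max_depth=4)
# clf.fit([features(tr) for tr in training_tracks], intention_labels)
# predicted_intention = clf.predict([features(query_track)])
```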

Place, publisher, year, edition, pages
Basel: MDPI, 2018
Keywords
behaviour modelling, intention estimation
National Category
Robotics; Signal Processing
Identifiers
urn:nbn:se:hh:diva-38614 (URN), 10.3390/s18124423 (DOI), 000454817100342 (), 2-s2.0-85058645512 (Scopus ID)
Projects
CAISR/SAS2
Funder
Knowledge Foundation
Available from: 2018-12-14 Created: 2018-12-14 Last updated: 2022-06-07. Bibliographically approved.
Mashad Nemati, H., Gholami Shahbandi, S. & Åstrand, B. (2016). Human Tracking in Occlusion based on Reappearance Event Estimation. In: Oleg Gusikhin, Dimitri Peaucelle & Kurosh Madani (Ed.), ICINCO 2016: 13th International Conference on Informatics in Control, Automation and Robotics: Proceedings, Volume 2. Paper presented at 13th International Conference on Informatics in Control, Automation and Robotics, Lisbon, Portugal, 29-31 July, 2016 (pp. 505-512). SciTePress, 2
Human Tracking in Occlusion based on Reappearance Event Estimation
2016 (English) In: ICINCO 2016: 13th International Conference on Informatics in Control, Automation and Robotics: Proceedings, Volume 2 / [ed] Oleg Gusikhin, Dimitri Peaucelle & Kurosh Madani, SciTePress, 2016, Vol. 2, p. 505-512. Conference paper, Published paper (Refereed)
Abstract [en]

Relying on the commonsense knowledge that the trajectory of any physical entity in the spatio-temporal domain is continuous, we propose a heuristic data association technique. The technique is used in conjunction with an Extended Kalman Filter (EKF) for human tracking under occlusion. Our method is capable of tracking moving objects, maintaining their state hypotheses even during periods of occlusion, and associating targets that reappear from occlusion with the existing hypotheses. The technique relies on estimating the reappearance event in both time and location, accompanied by an alert signal that enables more intelligent behaviour (e.g., in path planning). We implemented the proposed method and evaluated its performance on real-world data. The results validate the expected capabilities, even when tracking multiple humans simultaneously.
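A simplified sketch of reappearance-event estimation under a constant-velocity assumption; the occluder is reduced to an x-interval for brevity, and the association gate is illustrative, not the paper's:

```python
# Propagate the last state estimate through the occluded region and predict
# where and when the target should reappear; gate new detections against it.
def predict_reappearance(x, y, vx, vy, occ_x_min, occ_x_max, dt=0.1, t_max=10.0):
    """Return (t, x, y) of the predicted reappearance event, or None."""
    t = 0.0
    while t < t_max:
        t += dt
        px, py = x + vx * t, y + vy * t
        if not (occ_x_min <= px <= occ_x_max):  # has left the occluded region
            return t, px, py
    return None  # drop the hypothesis if no reappearance within t_max

def associate(detection, predicted, gate=1.5):
    """Associate a fresh detection with the occluded track if inside the gate."""
    if predicted is None:
        return False
    _, px, py = predicted
    return ((detection[0] - px) ** 2 + (detection[1] - py) ** 2) ** 0.5 <= gate

pred = predict_reappearance(0.0, 0.0, vx=1.0, vy=0.0, occ_x_min=-1.0, occ_x_max=2.0)
print(pred, associate((2.2, 0.1), pred))  # reappearance at ~t = 2.1 s, associated
```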

Place, publisher, year, edition, pages
SciTePress, 2016
Keywords
Detection and Tracking Moving Objects, Extended Kalman Filter, Human Tracking, Occlusion, Intelligent Vehicles, Mobile Robots
National Category
Robotics; Signal Processing; Computer Vision and Robotics (Autonomous Systems); Medical Image Processing
Identifiers
urn:nbn:se:hh:diva-31709 (URN), 10.5220/0006006805050512 (DOI), 000392601900061 (), 2-s2.0-85013059501 (Scopus ID), 978-989-758-198-4 (ISBN)
Conference
13th International Conference on Informatics in Control, Automation and Robotics, Lisbon, Portugal, 29-31 July, 2016
Available from: 2016-08-04 Created: 2016-08-04 Last updated: 2022-07-06. Bibliographically approved.
Fan, Y., Aramrattana, M., Shahbandi, S. G., Nemati, H. M. & Åstrand, B. (2016). Infrastructure Mapping in Well-Structured Environments Using MAV. Paper presented at 17th Annual Conference on Towards Autonomous Robotic Systems, TAROS 2016, Sheffield, United Kingdom, 26 June-1 July, 2016. Lecture Notes in Computer Science, 9716, 116-126
Infrastructure Mapping in Well-Structured Environments Using MAV
2016 (English) In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 9716, p. 116-126. Article in journal (Refereed), Published
Abstract [en]

In this paper, we present the design of a surveying system for a warehouse environment using a low-cost quadcopter. The system focuses on mapping the infrastructure of the surveyed environment. As unique and essential parts of the warehouse, the pillars of the storage shelves are chosen as landmark objects for representing the environment. The map is generated by fusing the outputs of two different methods: a point cloud of corner features from the Parallel Tracking and Mapping (PTAM) algorithm, and pillar positions estimated by a multi-stage image analysis method. Localization of the drone relies on the PTAM algorithm. The system is implemented in the Robot Operating System (ROS) and MATLAB, and has been successfully tested in real-world experiments. The resulting map, after scaling, has a metric error of less than 20 cm. © Springer International Publishing Switzerland 2016.
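Monocular PTAM maps are metric only up to an unknown scale, so the scaling step mentioned above needs at least one known real-world distance. A minimal sketch, assuming the shelf-pillar spacing is known; the 2 m value and the detected positions are invented for the example:

```python
# Recover metres-per-PTAM-unit from the average gap between detected pillars.
import math

def estimate_scale(pillars_map, spacing_real):
    """pillars_map: pillar positions in PTAM units, ordered along one shelf."""
    gaps = [math.dist(pillars_map[i], pillars_map[i + 1])
            for i in range(len(pillars_map) - 1)]
    return spacing_real / (sum(gaps) / len(gaps))  # metres per PTAM unit

pillars = [(0.0, 0.0), (0.9, 0.05), (1.85, 0.0)]  # detections in the PTAM frame
s = estimate_scale(pillars, spacing_real=2.0)     # assumed 2 m pillar spacing
scaled = [(s * x, s * y) for (x, y) in pillars]   # map expressed in metres
print(round(s, 3), scaled)
```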

Place, publisher, year, edition, pages
Cham, Switzerland: Springer, 2016
Keywords
Robotic mapping, parallel tracking and mapping, MAV
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-31645 (URN), 10.1007/978-3-319-40379-3_12 (DOI), 000386324700012 (), 2-s2.0-84977496781 (Scopus ID), 978-3-319-40378-6 (ISBN), 978-3-319-40379-3 (ISBN)
Conference
17th Annual Conference on Towards Autonomous Robotic Systems, TAROS 2016, Sheffield, United Kingdom, 26 June-1 July, 2016
Available from: 2016-07-14 Created: 2016-07-14 Last updated: 2021-05-19. Bibliographically approved.
Midtiby, H. S., Åstrand, B., Jørgensen, O. & Jørgensen, R. N. (2016). Upper limit for context-based crop classification in robotic weeding applications. Biosystems Engineering, 146, 183-192
Upper limit for context-based crop classification in robotic weeding applications
2016 (English) In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 146, p. 183-192. Article in journal (Refereed), Published
Abstract [en]

Knowledge of the precise position of crop plants is a prerequisite for effective mechanical weed control in robotic weeding applications, such as in crops like sugar beet that are sensitive to mechanical stress. Visual detection and recognition of crop plants based on their shapes has been described many times in the literature. In this paper, the potential of using knowledge about the crop seed pattern is investigated based on simulated output from a perception system. The reliability of position-based crop plant detection is shown to depend on the weed density (ρ, measured in weed plants per square metre) and the crop plant pattern position uncertainty (σx and σy, measured in metres along and perpendicular to the crop row, respectively). The recognition reliability can be described by the positive predictive value (PPV), which is limited by the seeding pattern uncertainty and the weed density according to the inequality PPV ≤ (1 + 2πρσxσy)⁻¹. This result matches computer simulations of two novel methods for position-based crop recognition as well as earlier reported field trials. © 2016 IAgrE
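In display form the bound reads as below; interpreting 2πρσxσy as the expected number of weeds falling inside the effective matching area of the Gaussian position prior is our gloss on the abstract, not the paper's own derivation:

```latex
% PPV = true positives over all positives; false positives per crop plant
% are bounded by the expected weed count in the effective matching area.
\[
\mathrm{PPV} \;=\; \frac{TP}{TP + FP}
\;\le\; \frac{1}{1 + 2\pi\rho\,\sigma_x\sigma_y}
\]
```

For example, ρ = 50 weeds m⁻² and σx = σy = 0.02 m give 2πρσxσy ≈ 0.126, i.e. PPV ≤ 0.89 at best.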

Place, publisher, year, edition, pages
London: Academic Press, 2016
Keywords
Crop recognition, Row structure, Weeding robots
National Category
Signal Processing; Robotics
Identifiers
urn:nbn:se:hh:diva-31234 (URN), 10.1016/j.biosystemseng.2016.01.012 (DOI), 000378966400014 (), 2-s2.0-84958260860 (Scopus ID)
Note

Special Issue: Advances in Robotic Agriculture for Crops

Available from: 2016-06-17 Created: 2016-06-17 Last updated: 2018-03-22. Bibliographically approved.
Hedenberg, K. & Åstrand, B. (2015). 3D Sensors on Driverless Trucks for Detection of Overhanging Objects in the Pathway. In: Roger Bostelman & Elena Messina (Ed.), Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor. Paper presented at ICRA 2015 Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor, Seattle, WA, USA, 30 May, 2015 (pp. 41-56). Conshohocken: ASTM International
3D Sensors on Driverless Trucks for Detection of Overhanging Objects in the Pathway
2015 (English) In: Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor / [ed] Roger Bostelman & Elena Messina, Conshohocken: ASTM International, 2015, p. 41-56. Chapter in book (Refereed)
Abstract [en]

Human-operated and driverless trucks often collaborate in a mixed workspace in industries and warehouses. This is more efficient and flexible than using only one kind of truck. However, since driverless trucks need to give way to human-operated trucks, a reliable detection system is required. Several challenges exist in the development of an obstacle detection system in an industrial setting. The first is to select interesting situations and objects. Overhanging objects, e.g. the tines of a forklift, are often found in industrial environments. The second is to choose a detection system that has the ability to detect those situations. A traditional laser scanner situated two decimetres above the floor does not detect overhanging objects. The third is to ensure that the perception system is reliable. A solution used on trucks today is to mount a 2D laser scanner on the top of the truck and tilt the scanner towards the floor. However, objects at the top of the truck will be detected too late and a collision cannot always be avoided. Our aim is to replace the upper 2D laser scanner with a 3D camera, either a structured-light or a time-of-flight (TOF) camera. It is important to maximize the field of view in the desired detection volume; hence, the placement of the sensor is important. We conducted laboratory experiments to check and compare the various sensors' capabilities for different colors, using real tines and a model of a tine in a controlled industrial environment. We also conducted field experiments in a warehouse. The conclusion is that both the tested structured-light and TOF sensors have problems detecting black items that are non-perpendicular to the sensor at the distance of interest. It is important to optimize the light economy, meaning the illumination power, field of view, and exposure time, in order to detect as many different objects as possible. Copyright © 2016 by ASTM International
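A back-of-the-envelope geometry check of the placement trade-off described above (a tilted top-mounted sensor misses high objects until they are close); all mounting numbers are invented for the illustration:

```python
# For a sensor at height h_s, tilted down by tilt_deg, with vertical field of
# view vfov_deg, test whether a point at height h and forward distance d is
# inside the vertical FOV.
import math

def in_vertical_fov(h_s, tilt_deg, vfov_deg, d, h):
    angle = math.degrees(math.atan2(h - h_s, d))  # elevation of the target
    lo = -tilt_deg - vfov_deg / 2.0               # FOV edges vs. horizontal
    hi = -tilt_deg + vfov_deg / 2.0
    return lo <= angle <= hi

# Sensor at 2.0 m, tilted 30 deg down, 40 deg vertical FOV:
print(in_vertical_fov(2.0, 30, 40, d=3.0, h=1.8))  # False: overhanging tine missed
print(in_vertical_fov(2.0, 30, 40, d=3.0, h=0.5))  # True: low obstacle covered
```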

Place, publisher, year, edition, pages
Conshohocken: ASTM International, 2015
Series
ASTM Special Technical Publication, ISSN 0066-0558 ; 1594
Keywords
mobile robots, safety, obstacle detection
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-29358 (URN), 10.1520/STP159420150051 (DOI), 000380525000003 (), 2-s2.0-84978164198 (Scopus ID), 9780803176331 (ISBN), 9780803176348 (ISBN)
Conference
ICRA 2015 Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor, Seattle, WA, USA, 30 May, 2015
Projects
AIMS
Funder
Knowledge Foundation
Note

Conference: Workshop on Autonomous Industrial Vehicles - from Laboratory to the Factory Floor, Seattle, WA, United States, May 26-30, 2015

Available from: 2015-09-02 Created: 2015-09-02 Last updated: 2018-03-22. Bibliographically approved.
Gholami Shahbandi, S., Åstrand, B. & Philippsen, R. (2015). Semi-Supervised Semantic Labeling of Adaptive Cell Decomposition Maps in Well-Structured Environments. In: 2015 European Conference on Mobile Robots (ECMR). Paper presented at 7th European Conference on Mobile Robots 2015, Lincoln, United Kingdom, 2-4 September, 2015. Piscataway, NJ: IEEE Press, Article ID 7324207.
Semi-Supervised Semantic Labeling of Adaptive Cell Decomposition Maps in Well-Structured Environments
2015 (English) In: 2015 European Conference on Mobile Robots (ECMR), Piscataway, NJ: IEEE Press, 2015, article id 7324207. Conference paper, Published paper (Refereed)
Abstract [en]

We present a semi-supervised approach for semantic mapping, introducing human knowledge after unsupervised place categorization has been combined with an adaptive cell decomposition of an occupancy map. Place categorization is based on clustering features extracted from raycasting in the occupancy map. The cell decomposition is provided by work we published previously, which is effective for maps that can be abstracted by straight lines. Compared to related methods, our approach obviates the need for a low-level link between human knowledge and the perception and mapping sub-system, or the onerous preparation of training data for supervised learning. Application scenarios include intelligent warehouse robots, which need heightened awareness in order to operate with a higher degree of autonomy and flexibility and to integrate more fully with inventory management systems. The approach is shown to be robust and flexible with respect to different types of environments and sensor setups. © 2015 IEEE
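A hypothetical sketch of the place-categorization step: cast rays from free cells of an occupancy grid, use the ordered range profile as a feature vector, and cluster the profiles without supervision. The toy map, ray count, and cluster count are illustrative, not the paper's setup:

```python
# Raycast range profiles from grid cells, then cluster them with k-means.
import numpy as np
from sklearn.cluster import KMeans

def raycast_profile(grid, cell, n_rays=16, max_range=50):
    """grid: 2D bool array, True = occupied. Range profile around `cell`."""
    r, c = cell
    ranges = []
    for a in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(a), np.cos(a)
        dist = max_range
        for step in range(1, max_range):
            rr, cc = int(r + dr * step), int(c + dc * step)
            if not (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]) or grid[rr, cc]:
                dist = step
                break
        ranges.append(dist)
    return ranges

grid = np.zeros((60, 60), dtype=bool)
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = True  # surrounding walls
cells = [(i, j) for i in range(5, 55, 5) for j in range(5, 55, 5)]
X = np.array([raycast_profile(grid, c) for c in cells])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)     # place categories
```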

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE Press, 2015
Keywords
Continuous wavelet transforms, Feature extraction, Labeling, Robot sensing systems, Robustness, Semantics
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-29343 (URN), 10.1109/ECMR.2015.7324207 (DOI), 000380213600041 (), 2-s2.0-84962293280 (Scopus ID)
Conference
7th European Conference on Mobile Robots 2015, Lincoln, United Kingdom, 2-4 September, 2015
Projects
AIMS
Funder
Knowledge Foundation
Note

This work was supported by the Swedish Knowledge Foundation and industry partners Kollmorgen, Optronic, and Toyota Material Handling Europe.

Available from: 2015-09-01 Created: 2015-09-01 Last updated: 2022-09-21. Bibliographically approved.
Projects
Safe and Efficient EV Battery Recycling - Pre-study [48211-1]