Åstrand, Björn
Publications (10 of 32)
Ericson, S. K. & Åstrand, B. (2018). Analysis of two visual odometry systems for use in an agricultural field environment. Biosystems Engineering, 166, 116-125
2018 (English). In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 166, p. 116-125. Article in journal (Refereed). Published.
Abstract [en]

This paper analyses two visual odometry systems for use in an agricultural field environment. The impact of various design parameters and camera setups is evaluated in a simulation environment. Four real field experiments were conducted using a mobile robot operating in an agricultural field. The robot was controlled to travel in a regular back-and-forth pattern with headland turns. The experimental runs were 1.8–3.1 km long and consisted of 32,000–63,000 frames. The results indicate that a camera angle of 75° gives the best results with the least error. An increased camera resolution only improves the result slightly. The algorithm must be able to reduce error accumulation by adapting the frame rate. The results also illustrate the difficulties of estimating roll and pitch using a downward-facing camera. The best results for full 6-DOF position estimation were obtained on a 1.8-km run using 6680 frames captured from the forward-facing cameras: a translation error (x, y, z) of 3.76% and a rotational error (roll, pitch and yaw) of 0.0482 deg m⁻¹. The main contributions of this paper are an analysis of the impact of design options on visual odometry results and a comparison of two state-of-the-art visual odometry algorithms applied to agricultural field data. © 2017 IAgrE
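
[Editor's illustration] As a rough sketch (not code from the paper) of how endpoint drift figures such as 3.76% and 0.0482 deg m⁻¹ can be computed, the Python fragment below measures translation drift as a percentage of distance travelled and rotational drift in degrees per metre; the paper's exact protocol (e.g. averaging over sub-segments) may differ.

import numpy as np

def vo_drift(gt_xyz, est_xyz, gt_rpy, est_rpy):
    # gt_xyz, est_xyz: (N, 3) positions; gt_rpy, est_rpy: (N, 3) Euler angles [rad].
    # Distance travelled along the ground-truth trajectory.
    dist = np.sum(np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1))
    # Translation drift at the endpoint, as a percentage of distance travelled.
    t_err = 100.0 * np.linalg.norm(est_xyz[-1] - gt_xyz[-1]) / dist
    # Rotational drift (roll, pitch, yaw combined) in degrees per metre.
    r_err = np.degrees(np.linalg.norm(est_rpy[-1] - gt_rpy[-1])) / dist
    return t_err, r_err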

Place, publisher, year, edition, pages
London: Academic Press, 2018
Keywords
Visual odometry, Agricultural field robots, Visual navigation
National Category
Signal Processing Robotics
Identifiers
urn:nbn:se:hh:diva-35853 (URN), 10.1016/j.biosystemseng.2017.11.009 (DOI), 2-s2.0-85037985130 (Scopus ID)
Available from: 2017-12-14. Created: 2017-12-14. Last updated: 2018-01-09. Bibliographically approved.
Muhammad, N. & Åstrand, B. (2018). Intention Estimation Using Set of Reference Trajectories as Behaviour Model. Sensors, 18(12), Article ID 4423.
2018 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no. 12, article id 4423. Article in journal (Refereed). Published.
Abstract [en]

Autonomous robotic systems operating in the vicinity of other agents, such as humans, manually driven vehicles and other robots, can model the behaviour and estimate the intentions of those agents to enhance the efficiency of their operation while preserving safety. We propose a data-driven approach to modelling the behaviour of other agents, based on a set of trajectories navigated by those agents. To evaluate the proposed behaviour modelling approach, we then propose and compare two methods for agent intention estimation, based on (i) particle filtering and (ii) decision trees. The proposed methods were validated using three datasets consisting of real-world bicycle and car trajectories in two different scenarios: a roundabout and a T-junction with a pedestrian crossing. The results validate the utility of the data-driven behaviour model and show that decision-tree-based intention estimation works better on a binary-class problem, whereas the particle-filter-based technique performs better on a multi-class problem such as the roundabout, where it yielded an average gain of 14.88 m in the location of correct intention estimation compared to the decision-tree-based method. © 2018 by the authors
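
[Editor's illustration] A minimal sketch of the particle-filter variant as we read the abstract (an assumption, not the authors' implementation): each particle hypothesizes one reference trajectory and a progress index along it, observations reweight the particles, and the posterior mass per intention class is the estimate. All names and parameters below are illustrative.

import numpy as np

def intention_posterior(observations, references, n_particles=500, sigma=0.5):
    # references: list of (intention_label, (M, 2) waypoint array).
    rng = np.random.default_rng(0)
    ref_idx = rng.integers(len(references), size=n_particles)
    progress = np.zeros(n_particles, dtype=int)
    weights = np.ones(n_particles) / n_particles
    for z in observations:                       # z: observed (x, y) position
        expected = np.array([references[r][1][min(p, len(references[r][1]) - 1)]
                             for r, p in zip(ref_idx, progress)])
        d2 = np.sum((expected - z) ** 2, axis=1)
        weights *= np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian likelihood
        weights /= weights.sum() + 1e-300
        progress += 1                            # advance along each reference
    labels = np.array([references[r][0] for r in ref_idx])
    return {c: float(weights[labels == c].sum()) for c in set(labels.tolist())}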

Place, publisher, year, edition, pages
Basel: MDPI, 2018
Keywords
behaviour modelling, intention estimation
National Category
Robotics Signal Processing
Identifiers
urn:nbn:se:hh:diva-38614 (URN), 10.3390/s18124423 (DOI), 2-s2.0-85058645512 (Scopus ID)
Projects
CAISR/SAS2
Funder
Knowledge Foundation
Available from: 2018-12-14. Created: 2018-12-14. Last updated: 2019-01-02. Bibliographically approved.
Mashad Nemati, H., Gholami Shahbandi, S. & Åstrand, B. (2016). Human Tracking in Occlusion based on Reappearance Event Estimation. In: Oleg Gusikhin, Dimitri Peaucelle & Kurosh Madani (Ed.), ICINCO 2016: 13th International Conference on Informatics in Control, Automation and Robotics: Proceedings, Volume 2. Paper presented at 13th International Conference on Informatics in Control, Automation and Robotics, Lisbon, Portugal, 29-31 July, 2016 (pp. 505-512). SciTePress, Vol. 2.
2016 (English). In: ICINCO 2016: 13th International Conference on Informatics in Control, Automation and Robotics: Proceedings, Volume 2 / [ed] Oleg Gusikhin, Dimitri Peaucelle & Kurosh Madani, SciTePress, 2016, Vol. 2, p. 505-512. Conference paper, Published paper (Refereed).
Abstract [en]

Relying on the commonsense knowledge that the trajectory of any physical entity in the spatio-temporal domain is continuous, we propose a heuristic data association technique. The technique is used in conjunction with an Extended Kalman Filter (EKF) for human tracking under occlusion. Our method is capable of tracking moving objects, maintaining their state hypotheses even during periods of occlusion, and associating targets that reappear from occlusion with the existing hypotheses. The technique relies on estimating the reappearance event in both time and location, accompanied by an alert signal that would enable more intelligent behavior (e.g. in path planning). We implemented the proposed method and evaluated its performance on real-world data. The results validate the expected capabilities, even when tracking multiple humans simultaneously.
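
[Editor's illustration] The sketch below shows the core bookkeeping idea as we read it: a (here linear, constant-velocity) Kalman filter keeps predicting through the occlusion, and a chi-square gate on the predicted position decides whether a reappearing detection is associated with the existing hypothesis. The paper uses an EKF and a richer reappearance model; this simplification and all names are ours.

import numpy as np

class OccludedTargetTracker:
    def __init__(self, x0, dt=0.1):
        self.x = np.asarray(x0, dtype=float)      # state: [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.Q = 0.05 * np.eye(4)                 # process noise
        self.H = np.eye(2, 4)                     # we observe position only
        self.R = 0.10 * np.eye(2)                 # measurement noise

    def predict(self):
        # Called every step, whether or not the target is visible.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def gate(self, z, threshold=9.21):            # chi-square 99%, 2 dof
        # True if a candidate reappearance is consistent with the hypothesis.
        S = self.H @ self.P @ self.H.T + self.R
        v = np.asarray(z) - self.H @ self.x
        return float(v @ np.linalg.solve(S, v)) < threshold

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P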

Place, publisher, year, edition, pages
SciTePress, 2016
Keywords
Detection and Tracking Moving Objects, Extended Kalman Filter, Human Tracking, Occlusion, Intelligent Vehicles, Mobile Robots
National Category
Robotics Signal Processing Computer Vision and Robotics (Autonomous Systems) Medical Image Processing
Identifiers
urn:nbn:se:hh:diva-31709 (URN), 10.5220/0006006805050512 (DOI), 000392601900061 (ISI), 2-s2.0-85013059501 (Scopus ID), 978-989-758-198-4 (ISBN)
Conference
13th International Conference on Informatics in Control, Automation and Robotics, Lisbon, Portugal, 29-31 July, 2016
Available from: 2016-08-04. Created: 2016-08-04. Last updated: 2018-01-10. Bibliographically approved.
Fan, Y., Aramrattana, M., Shahbandi, S. G., Nemati, H. M. & Åstrand, B. (2016). Infrastructure Mapping in Well-Structured Environments Using MAV. Paper presented at 17th Annual Conference on Towards Autonomous Robotic Systems, TAROS 2016, Sheffield, United Kingdom, 26 June-1 July, 2016. Lecture Notes in Computer Science, 9716, 116-126
2016 (English). In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 9716, p. 116-126. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we present the design of a surveying system for warehouse environments using a low-cost quadcopter. The system focuses on mapping the infrastructure of the surveyed environment. As unique and essential parts of the warehouse, the pillars of the storage shelves are chosen as landmark objects for representing the environment. The map is generated by fusing the outputs of two different methods: a point cloud of corner features from the Parallel Tracking and Mapping (PTAM) algorithm, and pillar positions estimated by a multi-stage image analysis method. Localization of the drone relies on the PTAM algorithm. The system is implemented in the Robot Operating System (ROS) and MATLAB, and has been successfully tested in real-world experiments. The resulting map, after scaling, has a metric error of less than 20 cm. © Springer International Publishing Switzerland 2016.
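
[Editor's illustration] Since PTAM is monocular and yields an up-to-scale map, a metric scale must be recovered before the sub-20 cm error can be measured. One plausible way (our illustration, not necessarily the paper's method) is to compare pairwise distances between matched landmarks, e.g. pillars with known positions:

import numpy as np

def estimate_map_scale(map_pts, metric_pts):
    # map_pts, metric_pts: (N, 2) matched landmarks, map frame vs. metric frame.
    d_map = np.linalg.norm(map_pts[:, None, :] - map_pts[None, :, :], axis=-1)
    d_met = np.linalg.norm(metric_pts[:, None, :] - metric_pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(map_pts), k=1)          # unique pairs only
    return float(np.median(d_met[iu] / d_map[iu]))   # robust ratio estimate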

Place, publisher, year, edition, pages
Cham, Switzerland: Springer, 2016
Keywords
Robotic mapping, parallel tracking and mapping, MAV
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-31645 (URN), 10.1007/978-3-319-40379-3_12 (DOI), 000386324700012 (ISI), 2-s2.0-84977496781 (Scopus ID), 978-3-319-40378-6 (ISBN), 978-3-319-40379-3 (ISBN)
Conference
17th Annual Conference on Towards Autonomous Robotic Systems, TAROS 2016, Sheffield, United Kingdom, 26 June-1 July, 2016
Available from: 2016-07-14. Created: 2016-07-14. Last updated: 2018-03-22. Bibliographically approved.
Midtiby, H. S., Åstrand, B., Jørgensen, O. & Jørgensen, R. N. (2016). Upper limit for context-based crop classification in robotic weeding applications. Biosystems Engineering, 146, 183-192
2016 (English). In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 146, p. 183-192. Article in journal (Refereed). Published.
Abstract [en]

Knowledge of the precise position of crop plants is a prerequisite for effective mechanical weed control in robotic weeding applications, such as in crops like sugar beets which are sensitive to mechanical stress. Visual detection and recognition of crop plants based on their shapes has been described many times in the literature. In this paper the potential of using knowledge about the crop seed pattern is investigated, based on simulated output from a perception system. The reliability of position-based crop plant detection is shown to depend on the weed density (ρ, measured in weed plants per square metre) and the crop plant pattern position uncertainty (σx and σy, measured in metres along and perpendicular to the crop row, respectively). The recognition reliability can be described by the positive predictive value (PPV), which is limited by the seeding pattern uncertainty and the weed density according to the inequality PPV ≤ 1/(1 + 2πρσxσy). This result matches computer simulations of two novel methods for position-based crop recognition as well as earlier reported field-based trials. © 2016 IAgrE
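
[Editor's illustration] The inequality can be evaluated directly. A small worked example with hypothetical numbers (20 weed plants per square metre, 2 cm pattern uncertainty in both directions) gives an upper bound of about 0.95:

import math

rho = 20.0                 # weed density [plants/m^2] (hypothetical)
sigma_x = sigma_y = 0.02   # seed-pattern position uncertainty [m] (hypothetical)
ppv_bound = 1.0 / (1.0 + 2.0 * math.pi * rho * sigma_x * sigma_y)
print(f"PPV <= {ppv_bound:.3f}")   # PPV <= 0.952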

Place, publisher, year, edition, pages
London: Academic Press, 2016
Keywords
Crop recognition, Row structure, Weeding robots
National Category
Signal Processing Robotics
Identifiers
urn:nbn:se:hh:diva-31234 (URN), 10.1016/j.biosystemseng.2016.01.012 (DOI), 000378966400014 (ISI), 2-s2.0-84958260860 (Scopus ID)
Note

Special Issue: Advances in Robotic Agriculture for Crops

Available from: 2016-06-17. Created: 2016-06-17. Last updated: 2018-03-22. Bibliographically approved.
Hedenberg, K. & Åstrand, B. (2015). 3D Sensors on Driverless Trucks for Detection of Overhanging Objects in the Pathway. In: Roger Bostelman & Elena Messina (Ed.), Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor. Paper presented at ICRA 2015 Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor, Seattle, WA, USA, 30 May, 2015 (pp. 41-56). Conshohocken: ASTM International
2015 (English). In: Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor / [ed] Roger Bostelman & Elena Messina, Conshohocken: ASTM International, 2015, p. 41-56. Chapter in book (Refereed).
Abstract [en]

Human-operated and driverless trucks often collaborate in a mixed workspace in industry and warehouses. This is more efficient and flexible than using only one kind of truck. However, since driverless trucks need to give way to human-operated trucks, a reliable detection system is required. Several challenges exist in the development of an obstacle detection system in an industrial setting. The first is to select interesting situations and objects. Overhanging objects, e.g. the tines of a forklift, are often found in industrial environments. The second is to choose a detection system that is able to detect those situations. A traditional laser scanner mounted two decimetres above the floor does not detect overhanging objects. The third is to ensure that the perception system is reliable. A solution used on trucks today is to mount a 2D laser scanner on top of the truck and tilt the scanner towards the floor. However, objects at the height of the top of the truck are then detected too late, and a collision cannot always be avoided. Our aim is to replace the upper 2D laser scanner with a 3D camera: a structured-light or time-of-flight (TOF) camera. It is important to maximize the field of view in the desired detection volume; hence, the placement of the sensor is important. We conducted laboratory experiments to check and compare the sensors' capabilities for different colors, using real tines and a model of a tine in a controlled industrial environment. We also conducted field experiments in a warehouse. The conclusion is that both the tested structured-light and TOF sensors have problems detecting black items that are not perpendicular to the sensor at the distances of interest. It is important to optimize the light economy, meaning the illumination power, field of view and exposure time, in order to detect as many different objects as possible. Copyright © 2016 by ASTM International

Place, publisher, year, edition, pages
Conshohocken: ASTM International, 2015
Series
ASTM Special Technical Publication, ISSN 0066-0558 ; 1594
Keywords
mobile robots, safety, obstacle detection
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-29358 (URN), 10.1520/STP159420150051 (DOI), 000380525000003 (ISI), 2-s2.0-84978164198 (Scopus ID), 9780803176331 (ISBN), 9780803176348 (ISBN)
Conference
ICRA 2015 Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor, Seattle, WA, USA, 30 May, 2015
Projects
AIMS
Funder
Knowledge Foundation
Note

Conference: Workshop on Autonomous Industrial Vehicles - from Laboratory to the Factory Floor, Seattle, WA, United States, May 26-30, 2015

Available from: 2015-09-02. Created: 2015-09-02. Last updated: 2018-03-22. Bibliographically approved.
Gholami Shahbandi, S., Åstrand, B. & Philippsen, R. (2015). Semi-Supervised Semantic Labeling of Adaptive Cell Decomposition Maps in Well-Structured Environments. In: 2015 European Conference on Mobile Robots (ECMR): . Paper presented at 7th European Conference on Mobile Robots 2015, Lincoln, United Kingdom, 2-4 September, 2015. Piscataway, NJ: IEEE Press, Article ID 7324207.
2015 (English). In: 2015 European Conference on Mobile Robots (ECMR), Piscataway, NJ: IEEE Press, 2015, article id 7324207. Conference paper, Published paper (Refereed).
Abstract [en]

We present a semi-supervised approach to semantic mapping, introducing human knowledge after unsupervised place categorization has been combined with an adaptive cell decomposition of an occupancy map. Place categorization is based on clustering features extracted by raycasting in the occupancy map. The cell decomposition is provided by work we published previously, which is effective for maps that can be abstracted by straight lines. Compared to related methods, our approach obviates the need for a low-level link between human knowledge and the perception and mapping sub-system, or the onerous preparation of training data for supervised learning. Application scenarios include intelligent warehouse robots, which need a heightened awareness in order to operate with a higher degree of autonomy and flexibility and to integrate more fully with inventory management systems. The approach is shown to be robust and flexible with respect to different types of environments and sensor setups. © 2015 IEEE
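
[Editor's illustration] A minimal sketch of the raycasting step (our reading of the abstract; names and parameters are illustrative): cast rays from a free cell of the occupancy grid and use the hit distances as that cell's feature vector, which can then be clustered, e.g. with k-means, into place categories.

import numpy as np

def raycast_features(grid, cell, n_rays=36, max_range=100):
    # grid: 2D occupancy array (1 = occupied); cell: (x, y) of a free cell.
    h, w = grid.shape
    feats = []
    for a in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(a), np.sin(a)
        r = max_range
        for step in range(1, max_range + 1):
            x, y = int(cell[0] + step * dx), int(cell[1] + step * dy)
            if not (0 <= x < w and 0 <= y < h) or grid[y, x]:
                r = step                      # ray hit an obstacle or the border
                break
        feats.append(r)
    return np.array(feats)                    # one distance per ray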

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE Press, 2015
Keywords
Continuous wavelet transforms, Feature extraction, Labeling, Robot sensing systems, Robustness, Semantics
National Category
Robotics
Identifiers
urn:nbn:se:hh:diva-29343 (URN), 10.1109/ECMR.2015.7324207 (DOI), 000380213600041 (ISI), 2-s2.0-84962293280 (Scopus ID), 978-1-4673-9163-4 (ISBN), 978-1-4673-9163-15 (ISBN)
Conference
7th European Conference on Mobile Robots 2015, Lincoln, United Kingdom, 2-4 September, 2015
Projects
AIMS
Funder
Knowledge Foundation
Note

This work was supported by the Swedish Knowledge Foundation and industry partners Kollmorgen, Optronic, and Toyota Material Handling Europe.

Available from: 2015-09-01. Created: 2015-09-01. Last updated: 2018-05-02. Bibliographically approved.
Andreasson, H., Bouguerra, A., Åstrand, B. & Rögnvaldsson, T. (2014). Gold-fish SLAM: An application of SLAM to localize AGVs. In: Kazuya Yoshida & Satoshi Tadokoro (Ed.), Field and Service Robotics: Results of the 8th International Conference. Paper presented at The 8th International Conference on Field and Service Robotics, Matsushima, Miyagi, Japan, July 16-19, 2012 (pp. 585-598). Heidelberg: Springer
2014 (English). In: Field and Service Robotics: Results of the 8th International Conference / [ed] Kazuya Yoshida & Satoshi Tadokoro, Heidelberg: Springer, 2014, p. 585-598. Conference paper, Published paper (Refereed).
Abstract [en]

The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where static landmarks are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to treat the goods that enter and leave the environment as temporary landmarks, which can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution was tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system, running at speeds up to 3 m/s. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs. © Springer-Verlag Berlin Heidelberg 2014.
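
[Editor's illustration] The temporary-landmark idea can be pictured as simple map bookkeeping (our illustration, not the paper's estimator): static landmarks persist indefinitely, while goods are kept only while recently observed.

class LandmarkMap:
    def __init__(self, ttl=600.0):
        self.ttl = ttl           # seconds a temporary landmark survives unseen
        self.landmarks = {}      # id -> {"pos": (x, y), "static": bool, "last_seen": t}

    def observe(self, lm_id, pos, t, static=False):
        self.landmarks[lm_id] = {"pos": pos, "static": static, "last_seen": t}

    def prune(self, t):
        # Drop temporary landmarks (goods) that have left the environment.
        self.landmarks = {k: v for k, v in self.landmarks.items()
                          if v["static"] or t - v["last_seen"] < self.ttl}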

Place, publisher, year, edition, pages
Heidelberg: Springer, 2014
Series
Springer Tracts in Advanced Robotics, ISSN 1610-7438 ; 92
National Category
Robotics Signal Processing Computer Systems
Identifiers
urn:nbn:se:hh:diva-19473 (URN), 10.1007/978-3-642-40686-7_39 (DOI), 2-s2.0-84897721700 (Scopus ID), 978-3-642-40685-0 (ISBN), 978-3-642-40686-7 (ISBN)
Conference
The 8th International Conference on Field and Service Robotics, Matsushima, Miyagi, Japan, July 16-19, 2012
Projects
MALTA
Funder
Knowledge Foundation
Available from: 2012-09-05. Created: 2012-09-05. Last updated: 2018-03-22. Bibliographically approved.
Gholami Shahbandi, S. & Åstrand, B. (2014). Modeling of a Large Structured Environment: With a Repetitive Canonical Geometric-Semantic Model. In: Michael Mistry, Aleš Leonardis, Mark Witkowski & Chris Melhuish (Ed.), Advances in Autonomous Robotics Systems: 15th Annual Conference, TAROS 2014, Birmingham, UK, September 1-3, 2014. Proceedings. Paper presented at 15th Annual Conference, TAROS (Towards Autonomous Robotic Systems) 2014, Birmingham, United Kingdom, September 1-3, 2014 (pp. 1-12). Heidelberg: Springer, Vol. 8717.
2014 (English). In: Advances in Autonomous Robotics Systems: 15th Annual Conference, TAROS 2014, Birmingham, UK, September 1-3, 2014. Proceedings / [ed] Michael Mistry, Aleš Leonardis, Mark Witkowski & Chris Melhuish, Heidelberg: Springer, 2014, Vol. 8717, p. 1-12. Conference paper, Published paper (Refereed).
Abstract [en]

The AIMS project attempts to link the logistic requirements of an intelligent warehouse with state-of-the-art core technologies of automation, by providing the autonomous systems with an awareness of the environment and vice versa. In this work we investigate a solution for modeling the infrastructure of a structured environment, such as a warehouse, by means of a vision sensor. The model is based on the expected pattern of the infrastructure, generated from and matched to the map. Generation of the model relies on a set of tools such as the closed-form Hough transform, the DBSCAN clustering algorithm, the Fourier transform and optimization techniques. The performance evaluation of the proposed method is accompanied by a real-world experiment. © 2014 Springer International Publishing.
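
[Editor's illustration] As one plausible fragment of such a pipeline (illustrative, not the authors' code), repeated pillar detections can be grouped into pillar centroids with DBSCAN, and the grid spacing of the resulting centroids estimated robustly:

import numpy as np
from sklearn.cluster import DBSCAN

def pillar_centroids(points, eps=0.3, min_samples=3):
    # points: (N, 2) pillar detections in map coordinates.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return np.array([points[labels == k].mean(axis=0)
                     for k in sorted(set(labels)) if k != -1])

def grid_spacing(centroids):
    # Median nearest-neighbour distance as a robust spacing estimate.
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(np.median(d.min(axis=1)))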

Place, publisher, year, edition, pages
Heidelberg: Springer, 2014
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8717
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-26316 (URN), 10.1007/978-3-319-10401-0_1 (DOI), 2-s2.0-84906729072 (Scopus ID), 978-3-319-10400-3 (ISBN), 978-3-319-10401-0 (ISBN)
Conference
15th Annual Conference, TAROS (Towards Autonomous Robotic Systems) 2014, Birmingham, United Kingdom, September 1-3, 2014
Projects
AIMS
Funder
Knowledge Foundation
Note

This work as a part of AIMS project, is supported by the Swedish Knowledge Foundation and industry partners Kollmorgen, Optronic, and Toyota Material Handling Europe.

Available from: 2014-08-28. Created: 2014-08-28. Last updated: 2018-05-02. Bibliographically approved.
Gholami Shahbandi, S., Åstrand, B. & Philippsen, R. (2014). Sensor Based Adaptive Metric-Topological Cell Decomposition Method for Semantic Annotation of Structured Environments. In: 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV): . Paper presented at 13th International Conference on Control, Automation, Robotics and Vision, ICARCV 2014, Marina Bay Sands, Singapore, December 10-12, 2014 (pp. 1771-1777). Piscataway, NJ: IEEE Press, Article ID 7064584.
2014 (English). In: 2014 13th International Conference on Control Automation Robotics & Vision (ICARCV), Piscataway, NJ: IEEE Press, 2014, p. 1771-1777, article id 7064584. Conference paper, Published paper (Refereed).
Abstract [en]

A fundamental ingredient of semantic labeling is a reliable method for determining and representing the relevant spatial features of an environment. We address this challenge for planar metric-topological maps based on occupancy grids. Our method detects arbitrary dominant orientations in the presence of significant clutter, fits corresponding line features with tunable resolution, and extracts topological information by polygonal cell decomposition. Real-world case studies taken from the target application domain (autonomous forklift trucks in warehouses) demonstrate the performance and robustness of our method, while results from a preliminary algorithm for extracting corridors and junctions demonstrate its expressiveness. The contribution of this work starts with the formulation of metric-topological surveying of an environment, and a generic n-direction planar representation accompanied by a general method for extracting it from an occupancy map. The implementation also includes some semantic labels specific to warehouse-like environments. © 2014 IEEE.
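
[Editor's illustration] A compact sketch of dominant-orientation detection (one way to realize the first step; the details are our assumptions): histogram the gradient directions of the occupancy grid, weighted by gradient magnitude, and take the strongest bins as wall orientations.

import numpy as np

def dominant_orientations(occ, n_bins=180, n_peaks=2):
    # occ: 2D occupancy array; returns up to n_peaks angles in [0, pi).
    gy, gx = np.gradient(occ.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # fold opposite directions together
    hist, edges = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    peaks = np.argsort(hist)[-n_peaks:]
    return np.sort((edges[peaks] + edges[peaks + 1]) / 2.0)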

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE Press, 2014
National Category
Signal Processing Robotics
Identifiers
urn:nbn:se:hh:diva-26597 (URN), 10.1109/ICARCV.2014.7064584 (DOI), 000393395800306 (ISI), 2-s2.0-84949925965 (Scopus ID), 978-1-4799-5199-4 (ISBN)
Conference
13th International Conference on Control, Automation, Robotics and Vision, ICARCV 2014, Marina Bay Sands, Singapore, December 10-12, 2014
Funder
Knowledge Foundation
Note

This work was supported by the Swedish Knowledge Foundation and industry partners Kollmorgen, Optronic, and Toyota Material Handling Europe.

Available from: 2014-09-26. Created: 2014-09-26. Last updated: 2018-05-02. Bibliographically approved.