The vehicle-to-vehicle (V2V) propagation channel has significant implications for the design and performance of novel communication protocols for vehicular ad hoc networks (VANETs). Extensive research efforts have been made to develop V2V channel models that can be implemented in advanced VANET system simulators for performance evaluation. However, the impact of shadowing caused by other vehicles has largely been neglected in most of these models, as well as in system simulations. In this paper we present a shadow fading model targeting system simulations, based on real measurements performed in urban and highway scenarios. With the help of video information recorded during the measurements, the measurement data is separated into three categories: line-of-sight (LOS), obstructed line-of-sight (OLOS) by vehicles, and non-line-of-sight due to buildings. We observe that vehicles obstructing the LOS induce an additional average attenuation of about 10 dB in the received signal power. An approach to incorporating the LOS/OLOS model into existing VANET simulators is also provided. Finally, system-level VANET simulation results are presented, showing the difference between the LOS/OLOS model and a channel model based on Nakagami-m fading.
Water is the source of all life, but unfortunately water quality is steadily deteriorating due to many factors such as overuse, contamination, indifference, and even nature itself. Identifying a problem is the first step toward solving it, which is why an intelligent water quality device is needed to examine water and detect impurities within it. In this project, we develop a device that uses a new method to measure water quality. Although the theory behind the device is advanced, the device itself is still primitive in its functions and needs further development to increase the usefulness and accuracy of its measurements.
Deep learning and computer vision are becoming part of everyday objects and machines. The involvement of artificial intelligence in daily life opens doors to new opportunities and research. This motivates improving upon existing research on spatial relations with a more generic and robust algorithm that derives 2-D and 3-D spatial relations from RGB and RGB-D images, including a few complex relations such as 'on' and 'in'. The suggested methods are tested on a dataset with animated and real objects, where the number of objects per image varies from 4 to 10, and the size and orientation of the objects differ in every image.
Information security is an important aspect of running a business. Previously, information security was relegated to the IT side of the business, but lately the issue has broadened into an important part of business activity, resulting in growing interest among business leaders. Literature on information security mainly focuses on how organizations maintain safe systems and protect themselves from cyber-attacks and information infringements. Existing literature identifies new security threats that have emerged with advances in internet technology, but little is known about how these threats can be managed. Researchers have called for studies on how cooperation in supply chains poses risks to secure information management. Logistics companies provide customers with logistics services such as warehouse management, transport, order processing and packaging, and form central nodes in supply chains, often participating in several supply chains across different industries. This extensive interconnection of companies poses a security risk and makes logistics companies attractive targets for cyber-attacks. The purpose of this study has therefore been to create an understanding of the challenges logistics companies face in managing information security in the supply chain.
The research question has been answered by interviewing representatives from logistics organizations, and the empirical data has undergone a thematic analysis. The results show that the management of information security varies between companies. The study concludes with recommendations describing how logistics companies can manage information security in the supply chain.
High load and latency in a CAN bus network can cause a given message to miss its deadline. This disturbs the continuity of the required service and activates fault codes due to delayed message delivery, which might lead to system failure.
The goal of this thesis is to research and formulate methods to determine and model busload and latencies by determining parameters such as alpha and breakdown utilization. These parameters indicate the onset of network breakdown, the point at which a given message in a dataset starts to incur latency by crossing its deadline, which is strictly prohibited in critical real-time communications.
The final goal of this master's thesis is to develop a tool for calculating, modelling, determining and visualizing worst-case busload, throughput, network breakdown points and worst-case latency in Scania CAN bus networks, which are based on the J1939 protocol.
SCANLA (the CAN busload analyzer tool developed in this thesis) runs as an executable application with a graphical user interface, i.e., a way for humans to interact with the tool through windows, icons and menus that can be manipulated with a mouse.
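The core busload calculation such a tool performs can be illustrated with a small sketch. The worst-case frame-length bound below is the standard Tindell/Burns-style formula for 29-bit extended CAN frames (maximum bit stuffing included); the message set and 250 kbit/s bit rate are hypothetical examples, not Scania data:

```python
def worst_case_frame_time(payload_bytes, bit_rate):
    """Worst-case transmission time of a 29-bit (extended) CAN frame,
    including maximum bit stuffing (Tindell/Burns-style bound)."""
    data_bits = 8 * payload_bytes
    stuff_bits = (54 + data_bits - 1) // 4    # worst-case stuff bits
    total_bits = 67 + data_bits + stuff_bits  # overhead + data + stuffing
    return total_bits / bit_rate

def bus_utilization(messages, bit_rate):
    """Worst-case busload: sum of frame time / period over all messages.
    messages = [(payload_bytes, period_s), ...]"""
    return sum(worst_case_frame_time(n, bit_rate) / t for n, t in messages)

# hypothetical J1939-like message set at 250 kbit/s
msgs = [(8, 0.010), (8, 0.020), (4, 0.100)]
print(f"busload: {bus_utilization(msgs, 250_000):.1%}")
```

Breakdown utilization can then be probed by scaling the message periods until a deadline is first missed.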
This thesis work provides the implementation of 3D structure tensor on a Massively Parallel Processor Array (MPPA), Ambric 2045.
The 3D structure tensor algorithm is often used in image processing applications to compute the optical flow or to detect local 3D structures and their directions. The 3D structure tensor algorithm (3D-STA) consists of three main parts: gradient, tensor and smoothing. The algorithm is computationally expensive due to the many multiplications and additions required to calculate the gradient (edge), compute the tensor, and smooth every pixel of the image. As a result, it runs very slowly on a single processor, and it is therefore important to parallelize it for high-performance computation.
This thesis provides two parallel implementations of 3D-STA: coarse-grained parallelism and fine-grained parallelism. Of Ambric's 336 processors, 49 are used in the coarse-grained implementation and 165 in the fine-grained implementation. The performance of the two implementations is measured using a video stream input consisting of a sequence of images of size 20x256x256. The coarse-grained implementation achieves 25 frames per second (fps) and the fine-grained implementation 100 fps; the fine-grained version is thus four times faster than the coarse-grained one.
Additionally, the results are compared with those of a Matlab implementation running on an Intel(R) Core 2 Duo @ 2.10 GHz processor, and with another parallel optical flow implementation, in terms of speed and efficiency. The coarse-grained implementation is 58 times faster than the Matlab implementation and achieves approximately half the performance of the other parallel optical flow implementation. The fine-grained implementation is 230 times faster than the Matlab implementation and more than twice as fast (100/43) as the other parallel optical flow implementation.
These performance results are satisfactory and show that our parallel implementations can be considered for real-time applications.
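The three stages of 3D-STA described above (gradient, tensor products, smoothing) can be sketched as a plain sequential NumPy reference. This is an illustrative single-processor version, not the Ambric implementation, and the 3x3x3 box filter stands in for whatever smoothing kernel was actually used:

```python
import numpy as np

def box3(a):
    """3x3x3 box filter (a simple stand-in for the smoothing stage)."""
    p = np.pad(a, 1, mode='edge')
    z, y, x = a.shape
    acc = np.zeros_like(a)
    for dz in range(3):
        for dy in range(3):
            for dx in range(3):
                acc += p[dz:dz + z, dy:dy + y, dx:dx + x]
    return acc / 27.0

def structure_tensor_3d(vol):
    """The three 3D-STA stages on a (time, y, x) volume:
    gradient -> per-voxel tensor products -> smoothing."""
    gt, gy, gx = np.gradient(vol.astype(float))         # stage 1: gradient
    products = {'tt': gt*gt, 'ty': gt*gy, 'tx': gt*gx,  # stage 2: tensor
                'yy': gy*gy, 'yx': gy*gx, 'xx': gx*gx}
    return {k: box3(v) for k, v in products.items()}    # stage 3: smoothing
```

The six smoothed products are the unique entries of the symmetric 3x3 tensor at each voxel; every stage touches every pixel, which is why the algorithm maps naturally onto an array of processors.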
The three-dimensional structure tensor algorithm (3D-STA) is often used in image processing applications to compute the optical flow or to detect local 3D structures and their directions. This algorithm is computationally expensive due to the many computations required to calculate the gradient, compute the tensor, and smooth every pixel of the image frames. Therefore, it is important to parallelize the implementation to achieve high performance. In this paper we present two parallel implementations of 3D-STA, namely a moderately parallelized and a highly parallelized implementation, on a massively parallel reconfigurable array. Finally, we evaluate the performance of the generated code and compare the results with another optical flow implementation. The throughput achieved by the moderately parallelized implementation is approximately half the throughput of the optical flow implementation, whereas the highly parallelized implementation results in a 2x gain in throughput compared to the optical flow implementation. © 2012 IEEE.
The acquisition of data from mobile phones has been a mainstay of criminal digital forensics for a number of years. However, forensic acquisition is becoming more and more difficult with the increasing security level and complexity of mobile phones (and other embedded devices). In addition, it is often difficult or impossible to get access to design specifications, documentation and source code. As a result, forensic acquisition methods are also increasing in complexity, requiring an ever deeper understanding of the underlying technology and its security mechanisms, and are turning to more offensive solutions that bypass security mechanisms through security vulnerabilities. Common Criteria mode is a security feature that increases the security level of Samsung devices and thus makes forensic acquisition more difficult for law enforcement. With no access to design documents or source code, we have reverse engineered how Common Criteria mode is actually implemented and protected by Samsung's secure bootloader. We present how this security mode is enforced, the security vulnerabilities therein, and how the discovered vulnerabilities can be used to circumvent Common Criteria mode for further forensic acquisition. © 2018 The Author(s). Published by Elsevier Ltd on behalf of DFRWS.
Capturing physical phenomena such as node mobility or wave propagation is challenging in current network simulators, and is mostly achieved through crude abstractions. Despite being operationally efficient, such abstractions adversely affect simulation credibility. To realize more accurate modeling, we are currently developing a simulation environment integrating a hybrid modeling language into a mainstream network simulator. This paper gives a preliminary overview of our efforts. For illustration, an example simulation scenario with some basic mobility is described. © 2014 IEEE.
Smart keys are increasing in popularity due to the many benefits they bring: access control and overview have never been more efficient than they are today. This thesis project automates the digital production of a new line of keys, improving the scalability, reliability, and efficiency of the production process. This report includes background research on critical components, methodologies to solve the presented subproblems, the results of the project, and a discussion providing insight into the possible benefits of using an automated development line. The core elements of the automation are an integrated circuit holding a microcontroller, hardware components, and a graphical user interface. The project results in an automated production process capable of producing smart keys more efficiently than today, together with a report on the most common errors encountered in this production process and suggestions to further improve scalability, reliability, and efficiency.
Automotive radar is an emerging field of research and development. Technological advancements in this field will improve safety for vehicles, pedestrians, and bicyclists, and enable the development of autonomous vehicles. The use of automotive radar is expanding in cars and on roads to reduce collisions and accidents. Automotive radar developers face a problem when testing their radar sensors on the street, since there are many interfering signals, much noise, and unpredictable situations. This thesis provides part of a solution to this problem by designing a device that can emulate different speeds. The device helps developers test their radar sensors inside an anechoic chamber, which provides accurate control of the environmental conditions. This report shows, step by step, how to build a measurement setup that emulates the speeds of people and vehicles on the street by producing Doppler shifts with a rotating wheel, for millimetre-wave FMCW radar. A linear-speed system needs a large space for testing, but the rotating wheel allows the developer to test the radar sensor in a small area. The report begins with the wheel design specifications and the relation between the rotational speed (RPM) of the wheel and the Doppler frequency; the Doppler frequency is changed by varying the speed of the wheel. A control and power circuit was carefully designed to control the wheel speed accurately, and all parts of the measurement setup were assembled in one box. Signal processing was done in MATLAB to measure the Doppler frequency using a millimetre-wave FMCW radar sensor. The setup was tested in the anechoic chamber at different speeds, and both manual and automatic tests show good results, measuring the different wheel speeds with high accuracy.
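The relation between wheel RPM and Doppler frequency that the design is built around can be sketched as follows; the 77 GHz carrier and the wheel radius are assumed example values, not the thesis's actual parameters:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_from_rpm(rpm, wheel_radius_m, carrier_hz=77e9):
    """Two-way Doppler shift produced by the rim of a spinning wheel,
    assuming the radar sees the tangential rim speed head-on."""
    rim_speed = 2 * math.pi * wheel_radius_m * rpm / 60.0  # m/s
    wavelength = C / carrier_hz                            # m
    return 2 * rim_speed / wavelength                      # Hz

# e.g. a 0.1 m wheel at 955 RPM gives a rim speed of about 10 m/s
print(f"{doppler_from_rpm(955, 0.1):.0f} Hz")
```

Inverting the same relation gives the RPM required to emulate a target street speed, which is what the control circuit regulates.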
Reliability concerns of embedded systems are traditionally resolved by software-based control flow checking (CFC) methods, in which the execution flow of the processor is monitored to detect and compensate for flow violations. Traditional CFC methods may lose their efficiency when it comes to multiprocessing embedded systems. In this paper, we introduce and validate a novel flow error model for multiprocessing embedded systems. Further, we propose a holistic CFC system which performs flow checking of the processes of interest. The proposed CFC introduces the concept of a single monitoring process intended to check the execution flow of as many processes as desired within a multiprocessing embedded system. The proposed solution does not introduce any substantial overhead in performance or memory consumption; even more important is the method's insensitivity to the number of checked processes. Our extensive evaluations show an average performance overhead of 13.77%, an average code-size overhead of 51.71%, and an average memory overhead of 1.95% on the MiBench benchmark suite. Results of fault injections confirm that the proposed CFC method successfully detects more than 95% of flow errors, including our newly defined error model. © 2023 IEEE.
In the Internet of Things (IoT) era, the MQTT protocol has played a big part in enabling uninterrupted communication between connected devices. With its publish/subscribe messaging system and central broker framework, MQTT, given its lightweight functionality, plays a vital role in IoT connectivity. Nonetheless, there are challenges ahead, especially in energy consumption, because the majority of IoT devices operate under constrained power sources. In line with this, our research suggests how the MQTT broker can make intelligent decisions using an intelligent algorithm. The algorithm derives ideal wake-up times for subscriber clients from historical data, using machine learning (ML) regression techniques in the background to produce substantial energy savings. The study combines regression machine learning approaches with the incorporation of quality-of-service levels into the decision framework through the introduction of operational modes designed for effective client management. The research thus aims to enhance the efficiency of MQTT, making it applicable across diverse IoT applications by simultaneously addressing both the broker and the client sides. This versatile approach improves the performance and sustainability of MQTT, strengthening its role as a building block for energy-efficient and responsive communication in the IoT. Deep learning approaches following the regression work could be the next leap in optimizing energy consumption and resource allocation within IoT networks, unlocking new frontiers of efficiency for a sustainable connected future.
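As a sketch of the general idea (not the thesis's actual algorithm), the broker could fit a regression line to past publish timestamps and schedule a subscriber's wake-up at the extrapolated next publish time:

```python
def predict_next_publish(timestamps):
    """Least-squares linear fit t_i = a*i + b over past publish times,
    extrapolated one step ahead (stdlib only)."""
    n = len(timestamps)
    mean_x = (n - 1) / 2.0
    mean_y = sum(timestamps) / n
    num = sum((i - mean_x) * (t - mean_y) for i, t in enumerate(timestamps))
    den = sum((i - mean_x) ** 2 for i in range(n))
    slope = num / den
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # predicted time of publish number n

# a client publishing roughly every 10 s should wake up near t = 40
print(predict_next_publish([0.0, 10.1, 19.9, 30.0]))
```

A client that sleeps until just before the predicted time avoids idle listening between publishes, which is where the energy saving comes from.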
Wireless technology supporting vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication allows vehicles and infrastructure to exchange information and cooperate. Cooperation among the actors in an intelligent transport system (ITS) can bring several benefits, for instance increased safety, comfort, and efficiency. Automation has also evolved in vehicle control and active safety functions. Combining cooperation and automation would enable more advanced functions such as automated highway merging and negotiating right-of-way at a cooperative intersection. However, the combination influences the structure of the overall transport system as well as its behaviour. In order to provide a common understanding of such systems, this paper presents an analysis of cooperative ITS (C-ITS) with regard to dimensions of cooperation. It also presents possible influences on driving behaviour and challenges in the deployment and automation of C-ITS.
Conformance testing is a formal and structured approach to verifying system correctness. We propose a conformance testing algorithm for cyber-physical systems, based on the notion of hybrid conformance by Abbas and Fainekos. We show how the dynamics of the system specification and the sampling rate play an essential role in making sound verdicts. We specify and prove error bounds that lead to sound test suites for a given specification and a given sampling rate. We use reachability analysis to find such bounds and implement the proposed approach using the CORA toolbox in Matlab. We apply the implemented approach to a case study from the automotive domain. © 2017 The Author(s).
Embedded systems are utilized in various modern-day applications to ease routine tasks that are otherwise demanding in manpower; an example is data sensing and acquisition. Modern sensor applications usually involve one or more embedded microcomputer systems with inputs from various sensors and a means of transmitting the acquired data to its point of interest. While many sensing and acquisition applications and standards exist for industrial use, the consumer/small-business market for such devices is fairly bleak in comparison. In this project, various data communication standards for transmitting information between sensors and acquisition devices are investigated. Wired protocols, such as RS232, I²C, SPI and 1-Wire, as well as wireless protocols like Wi-Fi and ZigBee, were studied. Then, a generic wireless data acquisition platform is designed and realized. The reference implementation connects to a wireless LAN network over Wi-Fi, starts a web server, and serves HTML5 web pages and a CSS3 style sheet from SPI flash memory. The sensor data is collected from integrated sensor circuits over an I²C link. The device can run on 3 AA batteries for a combined uptime of at least 3 weeks. Lastly, possible expansion options for the implemented device are discussed and supplemented with real-world examples where possible, and feasible consumer applications for such a platform are given to conclude.
With the rapid growth of the automotive industry, vehicles have become more complex and sophisticated. Vehicle development today involves the integration of both electrical and mechanical systems, and their design and production are typically time and cost critical. To complement and support the process of vehicle development and design, the majority of the automotive industry uses modelling and simulation for testing automotive applications, vehicle subsystems, or the vehicle behaviour in its entirety.
For traffic simulations, where a large number of vehicles and other elements of the road network are simulated, implementing a highly complex vehicle model would greatly affect the performance of the simulation. The complexity of the vehicle model entails a higher computation time, making it unsuitable for any real-time application. Therein lies the trade-off in designing a model that is both fast and accurate. The majority of existing vehicle models are either domain-specific, highly complex, or over-generalized. Thus, in this thesis, two class-specific vehicle kinematic models with good accuracy and low computation time are presented.
Two different modelling paradigms have been adopted to design and test these models. The results, challenges and limitations that pertain to these paradigms are also presented and discussed. The results show the feasibility of the proposed kinematic models.
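Class-specific kinematic vehicle models of this kind typically build on the standard kinematic bicycle model; a minimal Euler-integrated step is sketched below as background, and is not one of the thesis's two models:

```python
import math

def bicycle_step(x, y, heading, speed, steer, wheelbase, dt):
    """One Euler step of the standard kinematic bicycle model.
    steer is the front-wheel steering angle in radians."""
    x += speed * math.cos(heading) * dt       # advance along heading
    y += speed * math.sin(heading) * dt
    heading += (speed / wheelbase) * math.tan(steer) * dt  # yaw rate
    return x, y, heading
```

The appeal for traffic simulation is that each vehicle update costs only a few arithmetic operations, keeping large scenarios real-time capable.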
The importance of managing one's use of natural resources and energy consumption is increasing. Water is one of the most important resources there is, and the UN has set global goals to increase access to clean water for all mankind. In Sweden, it is not unheard of for water to be wasted for the sake of comfort and convenience; one example is letting the water in the shower run before entering, to make sure the temperature is hot and even. This thesis was carried out together with the company CTC and deals with hot water production in their product EcoZenith i360. The process to be controlled is the heat exchange between two fluids in the product's plate heat exchanger.
The purpose is to improve the control of the hot water temperature so that the produced water temperature quickly stabilizes from the start. The goals are to decrease the settling time, overshoot, and undershoot of the process by adding information about the actual water flow to the controller and adjusting the fixed pump speed. A system needs to be created to handle the communication between the module and other devices, as well as to control the pump speed. One constraint is to use the Modbus protocol when implementing the communication.
To achieve the thesis' purpose, the current module was mapped out to identify which factors most affect the controller. Since the dynamic range was the most significant factor, gain scheduling with fitted PID controller settings was chosen. A program implementing the new control was written to handle communication with two slaves over the same line, and error detection had to be implemented and handled to ensure the system would not stop running.
The results of the project showed that the undershoot decreased in all cases, while the overshoot and settling time showed mixed results. The undershoot became a prioritized goal since it most affected the customer experience. A calculated example shows a potential decrease of 14% in a user's daily water usage if the new regulation is used.
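The gain-scheduled control described above can be sketched as follows; the flow breakpoints and PID gains here are illustrative placeholders, not CTC's tuned values:

```python
class ScheduledPID:
    """Gain-scheduled PID: picks a gain set from the measured water
    flow (hypothetical breakpoints and gains, for illustration only)."""
    # (flow upper bound in l/min, Kp, Ki, Kd)
    SCHEDULE = [(4.0, 8.0, 0.5, 0.0),
                (8.0, 5.0, 0.3, 0.0),
                (float('inf'), 3.0, 0.2, 0.0)]

    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured, flow):
        # select gains from the current flow regime
        kp, ki, kd = next(g[1:] for g in self.SCHEDULE if flow <= g[0])
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return kp * err + ki * self.integral + kd * deriv
```

Scheduling on flow addresses the wide dynamic range directly: each regime gets gains fitted to its own process dynamics instead of one compromise tuning.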
High-performance computational platforms are required by industries that use automatic methods to manage modern machines, which are mostly controlled by high-performance application-specific hardware with processing capabilities. Such hardware usually works together with CPUs, forming a powerful execution platform. On an industrial production line, distinct tasks can be assigned for processing by different machines depending on certain conditions and production parameters. However, these conditions can change at run time, influenced mainly by machine failure and maintenance, changing priorities, and possible better task distributions. Therefore, self-adaptive computing is a promising paradigm, as it can provide the flexibility to exploit machine resources and improve performance in different execution scenarios of the production line. One approach is to explore scheduling and run-time task migration among the machines' hardware towards a balanced task load, aiming at performance and production gains. In this context, the monitoring of timing requirements and their crosscutting behaviour plays an important role in task (re)allocation decisions. This paper introduces the use of aspect-oriented software paradigms to perform machine monitoring and a self-rescheduling strategy for tasks that addresses non-functional timing constraints. As a case study, tasks for a production line of aluminium ingots are designed. © 2009 IFAC.
This report presents the design of a conceptual prototype aimed at identifying keys in specific positions inside a cabinet utilizing radio-frequency identification (RFID) technology. The prototype integrates RFID readers, managed by a microcontroller unit (MCU), establishing a backend peripheral system. The cabinet is made of steel, and given RFID's sensitivity to nearby metal, experimentation was conducted to evaluate the impact of metal proximity on the reading range. Experimental results reveal a reduction in the reading range of 15 mm (43%) with one metal sheet and 26 mm (74%) with two metal sheets present, highlighting the relation between RFID technology and metallic environments. Additionally, the finished prototype is also presented in the Results and Discussion section, giving a more detailed insight into its practical implementation. This project demonstrates the viability of item-level identification through the utilization of low-frequency readers. Particularly relevant for positional identification, the short reading range of a low-frequency reader offers precision by limiting the area in which a detected transponder may be located.
Ultrasonic Additive Manufacturing (UAM) is a hybrid Additive Manufacturing (AM) process that involves layer-by-layer ultrasonic welding of metal foils and periodic machining to achieve the desired shape. Prior investigative research has demonstrated the potential of UAM for the embedding of electronic circuits inside a metal matrix. In this paper, a new approach for the fabrication of an insulating layer between an aluminium (Al) matrix and embedded electronic interconnections is presented. First, an Anodic Aluminium Oxide (AAO) layer is selectively grown onto the surface of Al foils prior to bonding. The pre-treated foils are then welded onto a UAM fabricated aluminium substrate. The bonding step can be repeated for the full encapsulation of the electronic interconnections or components. This ceramic AAO insulating layer provides several advantages over the alternative organic materials used in previous works.
Ultrasonic Additive Manufacturing (UAM) is an advanced manufacturing technique, which enables the embedding of electronic components and interconnections within solid aluminium structures, due to the low temperature encountered during material bonding. In this study, the effects of ultrasonic excitation, caused by the UAM process, on the electrical properties and the microstructure of thermally cured screen printed silver conductive inks were investigated. The electrical resistance and the dimensions of the samples were measured and compared before and after the ultrasonic excitation. The microstructure of excited and unexcited samples was examined using combined Focused Ion Beam and Scanning Electron Microscopy (FIB/SEM) and optical microscopy. The results showed an increase in the resistivity of the silver tracks after the ultrasonic excitation, which was correlated with a change in the microstructure: the size of the silver particles increased after the excitation, suggesting that inter-particle bonding has occurred. The study also highlighted issues with short circuiting between the conductive tracks and the aluminium substrate, which were attributed to the properties of the insulating layer and the inherent roughness of the UAM substrate. However, the reduction in conductivity and observed short circuiting were sufficiently small and rare, which leads to the conclusion that printed conductive tracks can function as interconnects in conjunction with UAM, for the fabrication of novel smart metal components.
This thesis presents a comparison of a GPU implementation of the Conjugate Residual method as a sequence of generic library kernels against implementations of the method with custom kernels, to expose the performance gains of a key optimization strategy for memory-bound operations: kernel fusion, which makes efficient reuse of the processed data.
For massive MIMO, the iterative solver is to be employed at the linear detection stage to overcome the computational bottleneck of the matrix inversion required in the equalization process, which is O(n³) for direct solvers. A detailed analysis is given of how one of the Krylov subspace methods feasible for massive MIMO can be implemented on a GPU as a unified kernel.
Further, we show that kernel fusion can improve execution performance not only when the input data is large matrices and vectors, as in scientific computing, but also in the case of massive MIMO, and possibly in similar cases where the input data is a large number of small matrices and vectors that must be processed in parallel. In more detail, exploiting the small number of iterations required for the solver to achieve a close enough approximation of the exact solution in the massive MIMO case, and the case where the number of users matches the size of a warp, two different approaches are proposed and tested that allow the algorithm to be fully unrolled and the separate kernels to be gradually fused into a single one, up to a top-down hard-coded implementation.
To overcome the algorithm's main computational burden, the matrix-vector product, further optimization techniques are proposed and tested to achieve high efficiency and high parallelism: two ways to utilize the fast on-chip memories, namely preloading the matrix into shared memory and preloading the vector into shared memory.
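For reference, the Conjugate Residual iteration that these kernels implement can be written as a dense NumPy sketch; on the GPU, the per-iteration operations below are exactly what kernel fusion merges into a single kernel:

```python
import numpy as np

def conjugate_residual(A, b, iters=20, tol=1e-10):
    """Conjugate Residual iteration for a Hermitian system A x = b
    (dense single-system reference; massive MIMO runs many small
    systems like this in parallel)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = A @ p
    rAr = r @ Ar
    for _ in range(iters):
        alpha = rAr / (Ap @ Ap)     # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r                  # the dominant matrix-vector product
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap         # avoids a second A @ p product
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_residual(A, b)
```

Note how each iteration is a short chain of vector updates around one matrix-vector product; fusing them removes the intermediate global-memory traffic that separate library kernels would incur.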
In modern vehicles, data from the user is often stored when a mobile phone or other device is paired through a Bluetooth or USB connection. When this data contains personal data, it may be of interest in an investigation and may be worth protecting from a privacy perspective. What happens to this data when the car is scrapped?
When a car is scrapped, it is dismantled and the parts that can be sold for profit are sold by the scrap company. These can be anything from shock absorbers, wheels and steering wheels to electronic components and infotainment devices. In this report, personal data was extracted from three such infotainment devices purchased from scrap companies.
The most successful method was to remove the relevant storage circuit from the infotainment device's circuit board and extract its data by direct connection. In all cases, the information was structured in a familiar file system that could be mounted.
In all three investigated infotainment devices, personal data were extracted. The result shows that there are deficiencies in the handling of personal data when a car is scrapped.
The technology for realizing wireless sensors has been available for a long time, but thanks to progress in electrical engineering such sensors can nowadays be manufactured cost-effectively and in large numbers. This availability, and the possibility of creating cooperating wireless networks consisting of such sensor nodes, has led to the rapidly growing popularity of a technology named Wireless Sensor Networks (WSN). Its disadvantage is the high complexity of programming WSN-based applications, a result of their distributed and embedded characteristics. To overcome this shortcoming, software agents have been identified as a suitable programming paradigm. The agent-based approach commonly uses a middleware for the execution of the software agents. This thesis compares such agent middleware with respect to performance in the WSN domain. To this end, two prototype applications based on different agent models are implemented for a given set of middleware. After the implementation, measurements are taken in various experiments, which give information about the runtime performance of every middleware in the test set. In the subsequent analysis, it is examined whether each middleware under test is suited for the implemented WSN applications. Thereupon, the results are discussed and compared with the author's expectations. Finally, a short outlook on further possible developments and improvements is presented.
Automotive radars are subject to interference in spectrally congested environments. To mitigate this interference, various waveforms have been proposed. We compare two waveforms (FMCW and OFDM) in terms of their radar performance and robustness to interference, under similar parameter settings. Our results indicate that under proper windowing both waveforms can achieve similar performance, but OFDM is more sensitive to interference. ©2020 IEEE
Ultra-wideband (UWB) radar technology shows great promise for non-contact remote monitoring of vital signs such as respiration and heart rate. Previous studies demonstrate the usefulness of UWB-based radar for breathing and heart rate estimation. The obstacle-penetration capabilities of UWB radar make it appropriate for applications such as human monitoring, detection of people and parameters of their motion inside buildings, and remote diagnosis of a person's emotional state. The use of UWB radars for vital sign monitoring presents some challenges:
• Small torso movements during the measurement may compromise results.
• The respiratory signal may overshadow the heartbeat.
• Bio-signals are not stationary.
• UWB uses very low-power signals that are easily overpowered by noise.
Knowing these problems, this thesis investigates signal processing techniques to overcome these challenges and detect heart rate. In particular, it investigates a new type of UWB radar which has not been considered in previous publications. The proposed methods are tested under several different experimental conditions and with several different subjects. Results indicate that this type of UWB radar can be successfully used for breathing and heart rate detection.
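A common building block for this kind of vital sign extraction is a band-limited spectral peak search on the slow-time signal. The sketch below is an illustration of that general idea only, not the thesis's actual method; the signal, sampling rate and band limits are hypothetical.

```python
import numpy as np

def estimate_rate(signal, fs, f_min=0.1, f_max=0.7):
    """Estimate a periodic rate (Hz) by locating the strongest spectral
    peak inside a physiological frequency band."""
    signal = np.asarray(signal, float) - np.mean(signal)  # remove static clutter (DC)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_min) & (freqs <= f_max)
    return float(freqs[band][np.argmax(spectrum[band])])

# Hypothetical slow-time signal: 0.25 Hz breathing (15 breaths/min) plus noise
rng = np.random.default_rng(0)
fs = 20.0                                   # slow-time sampling rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
chest = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)
breaths_per_min = 60.0 * estimate_rate(chest, fs)
```

Heart rate extraction would use a higher band (roughly 0.8 to 2.5 Hz) and, as the abstract notes, must first suppress the much stronger respiratory component.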
This thesis presents a prediction model for e-bikes and e-scooters, aimed at enhancing traffic safety and efficiency by sharing their intentions of future possible positions among road users. The research addresses the fact that current automated vehicle technologies lack communication between road users. The prediction model is based on, and tested with, a mobility model adapted for e-bikes and e-scooters in a simulator program primarily used for pedestrians. This implementation has produced the ability to predict future positions and furthered the development of intention-sharing capabilities in urban traffic scenarios. The model is built upon physical parameters and mathematical models for a controlled and regulated design. Polynomial regression was applied to predict positions based on historical data, and the results were evaluated with RMSE metrics, demonstrating the prediction accuracy in different scenarios. The thesis also includes the integration of the prediction model into a hardware setup, a Raspberry Pi, demonstrating the practical application and retaining the effectiveness of the model in a real-time environment. The results show that the model can reserve a predicted area every second, but it can also operate at faster or slower intervals, depending on the hardware running the model in the protocol. With this, the research highlights the possibility of implementing the approach in CCAM systems. The results show promising accuracy with a simple controlled model using as little data as necessary. The project contributes to the field of intelligent transport systems by providing a scalable solution to enhance the interaction between VRUs and vehicles, taking a step closer to achieving the Vision Zero goal of zero traffic-related accidents or fatalities.
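The core prediction step can be sketched as a polynomial fit on recent positions, extrapolated to a future time and scored with RMSE. This is a minimal illustration, not the thesis's implementation; the degree, horizon and sample data below are assumptions.

```python
import numpy as np

def predict_position(times, positions, t_future, degree=2):
    """Fit a polynomial to historical positions and extrapolate to t_future."""
    coeffs = np.polyfit(times, positions, degree)
    return float(np.polyval(coeffs, t_future))

def rmse(predicted, actual):
    """Root-mean-square error between predicted and observed positions."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# Hypothetical sample: an e-scooter moving at a roughly constant 2 m/s
t_hist = np.array([0.0, 0.5, 1.0, 1.5])    # seconds
x_hist = 2.0 * t_hist                      # metres travelled so far
x_pred = predict_position(t_hist, x_hist, t_future=2.0)
```

In a full model the same fit would be applied per coordinate (x, y), and the predicted point expanded into the reserved area that the protocol transmits.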
The vehicle plays an important role in people's lives in modern times. A vehicle's behaviour is a complex and detailed subject, which requires knowledge of mathematics and physics. Meanwhile, vehicle behaviour is affected by many different conditions, such as the driver and the environment. For the purpose of traffic safety, simulation is required to analyze vehicle behaviour. A variety of behaviour models, based on different levels (macroscopic, mesoscopic and microscopic), have been presented. Vehicles are able to interact with each other through the Vehicular Ad Hoc Network (VANET). It is worthwhile to simulate how behaviour is affected by an exchange of kinematic data.
This thesis presents a new simulator, designed at the microscopic level and based on graph theory. Not only different vehicle behaviours, but also cooperation between vehicles, can be implemented in the simulator. A new collision-avoidance model is created, incorporating the concepts of kinematics and human emulation. A car-following model is also implemented for the formation of traffic flow. Overall, the modeling in the simulator is simplified by ignoring network disturbances. The data collected from the simulation results is used to display a scenario as a visualization of a vehicle's behaviour.
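Microscopic car-following models of the kind mentioned above compute a follower's acceleration from its speed, the gap to the leader, and the approach rate. As one well-known example (not necessarily the model used in this thesis), here is the Intelligent Driver Model with typical textbook parameter values:

```python
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model acceleration for a following vehicle.
    v: own speed (m/s), gap: bumper-to-bumper distance to the leader (m),
    dv: approach rate v - v_leader (m/s). Parameters: desired speed v0,
    time headway T, max acceleration a, comfortable braking b, jam distance s0."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

On a free road the model accelerates towards the desired speed; when tailgating, the desired-gap term dominates and the model brakes, which is what produces realistic traffic flow when many such vehicles are chained together.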
This paper presents an effort to support emerging Wireless Sensor Network applications composed of different types of sensor nodes. The work consists of two parts: the first is dedicated to providing cooperation abilities to sensor nodes, while the second is a customizable hardware platform intended to provide different types of sensor nodes, from the more resource-constrained up to the resource-rich ones. A description of a testbed demonstrator of the proposed system is provided, and comparisons with previously published simulation results indicate the feasibility of the proposal.
Advances in vehicle intelligence technology are enabling the development of systems composed of unmanned vehicles, which are able to interact with devices spread throughout the environment in order to take decisions related to their movements. Sensor networks represent an area that can profit greatly from these new possibilities, as autonomous vehicles can be used to carry sensor devices which, by interacting with static sensor nodes, can enhance the results provided by the overall system. However, some problems arise in application development for such systems due to the heterogeneity of the network nodes and the dynamicity of the environment in which they are deployed, which changes constantly. Thus, new platform solutions are necessary to handle the heterogeneous node capabilities in order to facilitate coordination and integration among them. This paper proposes a supporting infrastructure to address these problems, composed of an adaptive middleware and a customizable sensor node platform. The goal is to support cooperation in heterogeneous sensor networks composed of static and mobile nodes with different capabilities. The middleware adapts itself in order to manage the very distinct computing resources of the nodes, as well as changes in the environment and in the application demands. The customizable sensor node platform allows optimizations in hw/sw modules to meet specific application requirements, allowing the creation of low-end and resource-rich nodes that work in an integrated network. In order to illustrate the proposed approach, a system for military surveillance applications is presented as a case study.
The use of Unmanned Aerial Vehicles is increasing in the field of area patrolling and surveillance. A great issue that emerges in designing such systems is the distribution of the target workload over a fleet of UAVs, which generally have different sensing capabilities and computing power. Targets should be assigned to the most suitable UAVs in order to efficiently perform the end-user initiated missions. To perform these missions, the UAVs require powerful high-performance platforms to handle the many different algorithms that make use of massive calculations. The use of COTS hardware (e.g., GPUs) presents an interesting low-cost alternative for composing the required platform. However, in order to efficiently use these heterogeneous platforms in a dynamic scenario, such as in surveillance systems, runtime reconfiguration strategies must be provided. This paper presents a dynamic approach to distributing the handling of targets among the UAVs, and a heuristic method for the efficient use of the heterogeneous hardware that equips these UAVs, with the goal of also meeting mission timing requirements. Preliminary simulation results of the proposed heuristics are also provided.
Advances in wireless communication and sensor systems have enabled the growing usage of Wireless Sensor Networks. This kind of network is being used to support a number of new emerging applications, hence the importance of studying the efficiency of new approaches to programming them. This paper proposes a performance study of an application using a high-level mobile agent model for Wireless Sensor Networks. The analysis is based on a mobile object tracking system, a classical WSN application. It is assumed that the sensor nodes are static, while the developed software is implemented as mobile agents using the AFME framework. The presented project follows a Model-Driven Development (MDD) methodology using UML (Unified Modeling Language) models. Metrics related to dynamic features of the implemented solution are extracted from the deployed application, allowing a design space exploration in terms of metrics such as performance, memory and energy consumption. © Springer-Verlag Berlin Heidelberg 2011.
With the increasing level of automation in road vehicles, the traditional workhorse of safety assessment, namely, physical testing, is no longer adequate as the sole means of ensuring safety. A standard safety assessment benchmark is to evaluate the behavior of a new design in the context of a risk-exposing test scenario. Manual or computerized analysis of the behavior of such systems is challenging because of the presence of non-linear physical dynamics, computational components, and impacts. In this paper, we study the utility of a new technology called rigorous simulation for addressing this problem. Rigorous simulation aims to combine some of the benefits of traditional simulation methods with those of traditional analytical methods such as symbolic algebra. We develop and analyze in detail a case study involving an Intersection Collision Avoidance (ICA) test scenario using the hazard analysis techniques prescribed in the ISO 26262 functional safety standard. We show that it is possible to formally model and rigorously simulate the test scenario to produce informative results about the severity of collisions. The work presented in this paper demonstrates that rigorous simulation can handle models of non-trivial complexity. The work also highlights the practical challenges encountered in using it. © 2020, Springer Nature Switzerland AG.
Hybrid systems are a powerful formalism for modeling cyber-physical systems. Reachability analysis is a general method for checking safety properties, especially in the presence of uncertainty and non-determinism. Rigorous simulation is a convenient tool for reachability analysis of hybrid systems. However, to serve as proof tool, a rigorous simulator must be correct w.r.t. a clearly defined notion of reachability, which captures what is intuitively reachable in finite time. As a step towards addressing this challenge, this paper presents a rigorous simulator in the form of an operational semantics and a specification in the form of a denotational semantics. We show that, under certain conditions about the representation of enclosures, the rigorous simulator is correct. We also show that finding a representation satisfying these assumptions is non-trivial. © 2018, Springer International Publishing AG, part of Springer Nature.
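The enclosure idea behind rigorous simulation can be illustrated with a toy interval integrator for a linear system. This is only an illustration of the bookkeeping, not the paper's operational semantics: a real rigorous simulator must also bound the truncation error of each step, which this sketch omits.

```python
def interval_euler_step(box, a, dt):
    """One interval Euler step for the linear system x' = a*x: maps the
    interval box = (lo, hi) to an interval containing every pointwise
    Euler update x + dt*a*x for x in the box. (Truncation error of the
    Euler scheme is deliberately ignored in this toy.)"""
    lo, hi = box
    images = [x * (1.0 + dt * a) for x in (lo, hi)]
    return (min(images), max(images))    # min/max handles a sign flip of the factor

def reach(box, a, dt, steps):
    """Iterate the step, returning an enclosure of the state at each
    discrete time, starting from the initial uncertainty box."""
    trace = [box]
    for _ in range(steps):
        box = interval_euler_step(box, a, dt)
        trace.append(box)
    return trace
```

For a stable system (a < 0) the enclosures contract over time; checking a safety property then amounts to verifying that no interval in the trace intersects the unsafe set.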
This thesis is about implementing an Internet of Things system for measuring water quality in rivers and other aquatic environments with an autonomous water drone, where the data from various components are collected and sent wirelessly to a database in real time. A Raspberry Pi is connected to the internet through a 4G modem, and a satellite communication link called RockBlock is used for emergency calls and notifications. In addition, a sonar is implemented to collect data for the unmanned surface vehicle's (USV) collision avoidance. Finally, batteries are connected to solar panels to generate energy and provide the USV with the current and voltage it requires. Four parameters are considered the minimum needed to assess water quality: potential hydrogen, dissolved oxygen, nitrates, and colored dissolved organic matter. The system in this thesis measures these four parameters plus turbidity and temperature, since the interconnected sensors can also measure those. Optical sensors were chosen because of their exceptional accuracy and precision when measuring water quality. The environment, mainly the aquatic one, will benefit from this project and improve over time.
Despite a steady decrease in fatality rates across European nations, Vulnerable Road Users (VRUs) continue to face significant risks in traffic incidents. Pedestrians, cyclists, and motorcyclists make up the majority of fatalities. Vision Zero sets a global standard for road safety, and its implementation complements broader European Union (EU) initiatives in prioritizing a zero-tolerance stance on road fatalities. This thesis underscores the transformative potential of Vehicular Ad hoc Networks (VANETs) by introducing a protocol to enhance road safety through intention sharing, particularly for micro-mobility vehicles such as e-bikes and e-scooters. By integrating this protocol with Vehicle-to-Anything (V2X) technology, this thesis aims to redefine VRUs' role in traffic safety from passive to active participants. The protocol aims to improve the quality of information and energy efficiency while maintaining safety metrics for cooperative transportation systems. We achieved this by transmitting a reserved area where the rider intends to be in the near future. Results from simulations demonstrate the efficacy of intention sharing in improving message reliability and efficiency compared to intention detection methods. While preliminary results show promise, further research is necessary to fully validate real-world applicability. Nonetheless, this thesis contributes to the ongoing efforts to achieve Vision Zero by harnessing technological innovations to protect VRUs and create safer road environments.
This paper describes a project exploring the possibility of assessing paper core reusability by measuring chuck damage with a 3D sensor and using machine learning to classify reusability. The paper cores are part of a rolling/unrolling system at a paper mill, where a chuck is used to slow and eventually stop the revolving paper core; this creates damage that at a certain point becomes too severe for reuse. The 3D sensor used is a TriSpector1008 from SICK, based on active triangulation through laser line projection and optical sensing. A number of paper cores with damage of varying severity, labeled approved or unapproved for further use, was provided. Supervised learning in the form of K-NN, Support Vector Machines, Decision Trees and Random Forests was used for binary classification of the dataset based on readings from the sensor. Features were extracted from these readings based on the spatial and frequency domains of each reading in an experimental way. Classification of reusability was previously done through thresholding on internal features in the sensor software. The goal of the project is to unify the decision-making protocol/system, with economic, environmental and sustainable waste-management benefits. K-NN was found to be best suited in our case. Features based on the standard deviation of the calculated depth obtained from the readings performed best and led to a zero false positive rate and a recall score of 99.14%, outperforming the compared threshold system.
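The winning combination reported above, a depth standard-deviation feature fed to a K-NN classifier, can be sketched in a few lines. The toy data and the one-dimensional feature space below are hypothetical, meant only to show the shape of the pipeline, not the paper's actual dataset or feature set.

```python
import numpy as np
from collections import Counter

def depth_std_feature(profile):
    """A candidate feature: standard deviation of the measured depth along
    a scanned profile (more severe chuck damage -> larger spread)."""
    return float(np.std(np.asarray(profile, float)))

def knn_classify(train_feats, train_labels, query, k=3):
    """Plain k-nearest-neighbour majority vote on Euclidean distance."""
    dists = np.linalg.norm(np.asarray(train_feats, float) - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical training set: undamaged cores have small depth spread
feats = np.array([[0.10], [0.15], [0.20], [1.10], [1.20], [1.30]])
labels = ["approved"] * 3 + ["unapproved"] * 3
verdict = knn_classify(feats, labels, np.array([0.12]))
```

In practice each core would contribute several spatial- and frequency-domain features, and the classifier would be validated with a held-out test split before comparing against the thresholding baseline.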
Situations in which it is not practical to stand beside a video camera can be resolved with a remote-controlled camera mount. Examples include lack of space at a concert, a solemn ceremony where someone would have to stand awkwardly to get a good image, filming wildlife without frightening the animals away, or staff shortages and a tight budget that require controlling multiple cameras simultaneously. The systems available on the amateur market today have different functionality and either do not fit the problems above or have very limited range and cannot connect to a mobile application.
This project aims to develop a cost-effective and customized solution to the above problem by developing a remote-controlled camera mount in the semi-professional segment, where the camera angle and rotation (tilt and pan) are controlled through a hand-held controller and, at a later stage, a mobile application.
The difference between this and existing solutions is that, by using Bluetooth technology, the mount can be controlled both from the controller and, later, also from a mobile application. The reason it must support both is that mobile phones are not always allowed in the environments where one wants to film; supporting both therefore gives the camera mount a wider range of applications.
Central authentication is a long-established method for managing users' access to various network resources, such as computers, printers, and servers. At a time when many industries are upgrading and expanding to meet new requirements for worldwide access, many systems need to be rebuilt. The work will be done together with HMS Industrial Networks AB and will investigate the possibility of authenticating users against an embedded controller centrally instead of locally, as is done today. Theory will be combined with experiments on possible implementations, after which everything will be evaluated and a conclusion presented.
Remote control of a heat pump makes it possible to control the indoor climate even when you are not at home. This is very useful in, for example, holiday homes, where you can easily raise the temperature before arriving and thus enjoy a comfortable indoor climate right away.
Today's remote control of Daikin heat pumps has two problems. The system uses the GSM network, and the user must know the various SMS codes by heart. Remote control is most often used in holiday homes in rural areas, and in many such places the 3G network today has better coverage than the GSM network.
The goal of this thesis project is to develop a system that solves these problems.
We begin by producing a functional model to work from, continue with the choice of hardware for the module, and then move on to software development.
The result of this thesis project is a system for remote control of Daikin heat pumps that uses the 3G network and can easily be controlled via an Android application.
There is increased interest in contact-less vital sign monitoring methods, as they offer higher flexibility to the individual being observed. Recent industrial developments have enabled radar functionality to be packed into single-chip solutions, decreasing application complexity and speeding up designs. In this paper, a vital sign radar has been developed utilizing a recently released 60 GHz frequency-modulated continuous-wave single-chip radar in combination with 3D-printed quasi-optics. The electronics development has focused on compactness and high system integration using a low-cost design process. The final experiments show that the radar is capable of tracking human respiration rate and heartbeat at the same time from a distance of 1 m.
Embedded DSP computing is currently shifting towards manycore architectures in order to cope with the ever-growing computational demands. Actor-based dataflow languages are being considered as a programming model. In this paper we present a code generator for CAL, one such dataflow language. We propose to use a compilation tool with two intermediate representations. We start from a machine model of the actors that provides an ordering for the testing of conditions and firing of actions. We then generate an Action Execution Intermediate Representation that is closer to a sequential imperative language such as C or Java. We describe our two intermediate representations and show the feasibility and portability of our approach by compiling a CAL implementation of the Two-Dimensional Inverse Discrete Cosine Transform on a general purpose processor, on the Epiphany manycore architecture and on the Ambric massively parallel processor array. © 2014 IEEE.
This thesis project was to design, construct, and program a test rig to be used by HMS when checking for faulty connection pins on their testing platforms. The test rig is physically attached to a platform computer at HMS via the pins of the ports on the platform. Our test rig automatically tests the pins of the separate ports in order to analyze and review their parameters and confirm that they are functioning correctly. We developed and designed hardware and software to test two of the five different ports used by the platforms at HMS: the DAQ port and the relay port. Hardware development involved designing couplings, building them, and testing them with preinstalled programs at HMS. The software development involved designing code in LabVIEW that could automatically run tests of the pins of the platform with the help of the couplings we had built. The complete test rig development was successfully completed and proven to work as intended.
Nowadays, different companies use Ethernet for various industrial applications. Industrial Ethernet has some specific requirements, due to its specific applications and environmental conditions, which make it different from corporate LANs. Real-time guarantees, which require precise synchronization between all communication devices, as well as reliability, are key in the performance evaluation of different methods [1]. High bandwidth, high availability, reduced cost, support for open infrastructure, and a deterministic architecture make packet-switched networks suitable for a variety of industrial distributed hard real-time applications. Although research on guaranteeing timing requirements in packet-switched networks has been done, communication reliability is still an open problem for hard real-time applications.
In this thesis report, a framework for enhancing the reliability in multihop packet-switched networks is presented. Moreover, a novel admission control mechanism using a real-time analysis is suggested to provide deadline guarantees for hard real-time traffic. A generic and flexible simulator has been implemented for the purpose of this research study to measure different defined performance metrics. This simulator can also be used for future research due to its flexibility. The performance evaluation of the proposed solution shows a possible enhancement of the message error rate by several orders of magnitude, while the decrease in network utilization stays at a reasonable level.
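To make the admission-control idea concrete, here is a deliberately simplified sketch: a single-link test that admits a new periodic flow only if total bandwidth demand stays within capacity. The thesis's actual mechanism uses a real-time (deadline) analysis over multihop paths; this toy, with hypothetical flow tuples, shows only the accept/reject structure.

```python
def admit(admitted, candidate, capacity_bps):
    """Toy utilization-based admission test for periodic flows on one link.
    Each flow is (message_size_bits, period_s). The candidate is admitted
    only if the total bandwidth demand stays within the link capacity."""
    demand = sum(size / period for size, period in admitted + [candidate])
    return demand <= capacity_bps
```

A deadline-aware test would additionally bound the worst-case queuing delay of every admitted flow on every hop, which is what turns a bandwidth check into a hard real-time guarantee.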
The arrival of manycore systems enforces new approaches for developing applications in order to exploit the available hardware resources. Developing applications for manycores requires programmers to partition the application into subtasks, consider the dependence between the subtasks, understand the underlying hardware and select an appropriate programming model. This is complex, time-consuming and prone to error.
In this thesis, we identify and implement abstraction layers in compilation tools to decrease the burden of the programmer, increase programming productivity and program portability for manycores and to analyze their impact on performance and efficiency. We present compilation frameworks for two concurrent programming languages, occam-pi and CAL Actor Language, and demonstrate the applicability of the approach with application case-studies targeting these different manycore architectures: STHorm, Epiphany and Ambric.
For occam-pi, we have extended the Tock compiler and added a backend for STHorm. We evaluate the approach using a fault tolerance model for a four stage 1D-DCT algorithm implemented by using occam-pi’s constructs for dynamic reconfiguration, and the FAST corner detection algorithm which demonstrates the suitability of occam-pi and the compilation framework for data-intensive applications. We also present a new CAL compilation framework which has a front end, two intermediate representations and three backends: for a uniprocessor, Epiphany, and Ambric. We show the feasibility of our approach by compiling a CAL implementation of the 2D-IDCT for the three backends. We also present an evaluation and optimization of code generation for Epiphany by comparing the code generated from CAL with a hand-written C code implementation of 2D-IDCT.
The arrival of manycore systems enforces new approaches for developing applications in order to exploit the available hardware resources. Developing applications for manycores requires programmers to partition the application into subtasks, consider the dependences between the subtasks, understand the underlying hardware and select an appropriate programming model. This is complex, time-consuming and prone to error. In this thesis, we identify and implement abstraction layers in compilation tools to decrease the burden on the programmer, increase program portability and scalability, and increase the retargetability of the compilation framework. We present compilation frameworks for two concurrent programming languages, occam-pi and CAL Actor Language, and demonstrate the applicability of the approach with application case studies targeting these different manycore architectures: STHorm, Epiphany, Ambric, EIT, and ePUMA. For occam-pi, we have extended the Tock compiler and added a backend for STHorm. We evaluate the approach using a fault tolerance model for a four-stage 1D-DCT algorithm implemented using occam-pi's constructs for dynamic reconfiguration, and the FAST corner detection algorithm, which demonstrates the suitability of occam-pi and the compilation framework for data-intensive applications. For CAL, we have developed a new compilation framework, namely Cal2Many. The Cal2Many framework has a front end, two intermediate representations and four backends: for a uniprocessor, Epiphany, Ambric, and SIMD-based architectures. Also, we have identified and implemented CAL actor fusion and fission methodologies for efficiently mapping CAL applications. We have used QRD, FAST corner detection, 2D-IDCT, and MPEG applications to evaluate our compilation process and to analyze the limitations of the hardware.