Aiming at the weak yaw controllability of solar-powered Unmanned Aerial Vehicles (UAVs) with a low-speed, high-aspect-ratio Blended Wing Body (BWB) configuration, and drawing on the large-scale aerodynamic characteristics of the biomimetic swallow tail, the influence of different opening methods and angles of the swallow tail on the aerodynamic characteristics, stability, and controllability of a UAV was derived. The mechanism behind the higher yaw control efficiency of differential throttle compared with a conventional rudder is analyzed, noting that this efficiency decreases significantly at small throttle settings. We propose a novel yaw control method that opens the swallow tail to a state of moderate local airflow separation, and we validate it on the landing control of a solar-powered UAV. The flight dynamics mode characteristics show that this control method has little impact on the lateral-directional stability of the UAV and that the control efficiency can be kept at the same level as in the cruise state. The cruise-state control law can therefore be applied directly to the landing state with the swallow tail open, which enhances the lateral-directional control ability of the UAV in a simple but efficient way. Flight simulation and flight test results show that the proposed bionic control method, combining the swallow tail with differential throttle, can effectively enhance the landing control ability of the UAV, improve the response speed, and reduce the trajectory error.
The use of automation in mining can be found at all stages of the mining process, covering exploration, excavation, loading, transportation, mineral processing, and search and rescue missions in emergency situations. Due to the harsh conditions during an underground mine disaster, robots can be of great assistance to rescue teams by entering areas that are unsafe for human rescuers, locating trapped workers, and collecting valuable data. The design and implementation of mine rescue robots is highly complex, since it encompasses a great range of components and requirements. However, comprehensive guidelines for the design and deployment of robots in harsh underground environments have not been established. As a result, the developers of mine rescue robots (MRRs) must exercise their own judgment in developing and evaluating platform functionalities. This review attempts an extensive discussion of design and functionality requirements based on common practices and lessons learned from existing MRRs. Common considerations, as well as current and future trends, are identified with the aim of facilitating future research and development on mine search and rescue robots.
Plastering remains a predominantly manual task, with low levels of automation and inconsistent finish quality. A comprehensive literature review indicates that existing plastering robots perform poorly when rectifying defects in transition areas; in particular, they lack the capacity to independently evaluate the quality of their work or to perform remedial plastering procedures. To address this issue, this research describes the system design of the Puttybot and a plastering paradigm that solves the stated problems. The Puttybot consists of a mobile chassis, a lift platform, and a macro/micro manipulator. The parameters of the force-controlled scraper have been calibrated to dynamically modify its rigidity in response to the applied putty. The strategy uses Convolutional Neural Networks to identify plastering defects and executes the plastering operation with force feedback. The paradigm's effectiveness was validated in an autonomous plastering trial in which a large-scale wall was processed without human involvement.
Agricultural robots offer a viable solution to the critical productivity and sustainability challenges of modern agriculture. The widespread deployment of agricultural robotic fleets, however, is still hindered by the overall system's complexity, which requires not only the integration of several non-trivial components for the operation of each robot but also the orchestration of robots working with each other and with human workers. This paper proposes a topological map as the unifying representation and computational model to facilitate the smooth deployment of robotic fleets in agriculture. This topological abstraction of the system state not only yields an efficient representation of large-scale environments, but also enables scalable and efficient operation of the entire fleet and allows for ex-situ modelling and analysis of operations. The practical use of the proposed framework is demonstrated in a horticultural use case with a fleet of robots supporting the work of human fruit pickers. The critical components of the system are analysed and evaluated in deployment, in both a realistic digital twin and real-life soft fruit farms of different scales, demonstrating the scalability and effectiveness of the proposed framework. The presented framework is general and should be easy to adopt in other multi-robot/multi-human scenarios such as warehouse logistics and the cleaning and maintenance of public spaces.
When serpentine robots move in complex, unstructured environments, collisions with obstacles often impede their movement. In this paper, a collision-feedback-based lateral undulation gait is proposed, which either uses collisions between the serpentine robot and obstacles to change the robot's trajectory or relies on obstacles to guide its locomotion. When the robot's head collides with an obstacle and the direction of motion changes, the robot performs global planning of the motion gait through steering control. For local gait planning, an adaptive angle control based on load feedback is proposed: when a joint load exceeds a set threshold, the joint angle is adjusted in time to adapt to terrain changes. Experiments are conducted crossing different obstacles with different control methods, and comparisons are made in terms of energy loss and motion rate. The experimental results show that the proposed gait has the lowest energy consumption while maintaining the robot's motion rate.
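The load-feedback angle adaptation described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the serpenoid sine-wave gait, the linear relief law, and all names and parameters are assumptions.

```python
import math

def lateral_undulation_angle(joint_idx, t, amplitude, omega, phase_lag):
    """Nominal lateral-undulation joint angle (serpenoid-style sine wave)."""
    return amplitude * math.sin(omega * t + joint_idx * phase_lag)

def adapt_angle(nominal_angle, joint_load, load_threshold, relief_gain=0.5):
    """Scale back the commanded joint angle when the measured joint load
    exceeds the threshold, letting the joint yield to the obstacle.
    Below the threshold, the nominal gait angle passes through unchanged."""
    if abs(joint_load) <= load_threshold:
        return nominal_angle
    excess = abs(joint_load) - load_threshold
    # linear relief: the further past the threshold, the more the angle shrinks
    scale = max(0.0, 1.0 - relief_gain * excess / load_threshold)
    return nominal_angle * scale
```

In this sketch the adaptation acts per joint, so one overloaded joint relaxes locally while the rest of the body continues the global undulation pattern.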
Modern mobile robots require precise and robust localization and navigation systems to achieve mission tasks correctly. In particular, in the underwater environment, where Global Navigation Satellite Systems (GNSSs) cannot be exploited, the development of localization and navigation strategies becomes more challenging. Maximum A Posteriori (MAP) strategies have been analyzed and tested to increase navigation accuracy and to take into account the entire history of the system state. In particular, a sensor fusion algorithm relying on a MAP technique for Simultaneous Localization and Mapping (SLAM) has been developed to fuse information coming from a monocular camera and a Doppler Velocity Log (DVL) and to include landmark points in the navigation framework. The proposed approach simultaneously localizes the vehicle, thanks to the onboard sensors, and maps the surrounding environment using information extracted from the images acquired by a bottom-looking optical camera. The optical sensor provides constraints between the vehicle poses and the landmarks of the observed scene, while the DVL measurements are employed to resolve the unknown scale factor and to guarantee correct vehicle localization even in the absence of visual features. After validating the solution through realistic simulations, an experimental campaign was conducted at sea off Stromboli Island (Messina), Italy. Finally, a mesh-creation algorithm based on the Poisson surface reconstruction method has been developed to obtain a smooth seabed surface.
The nuclear energy sector can benefit from mobile robots for remote inspection and handling, reducing human exposure to radiation. Advances in cyber-physical systems have improved robotic platforms in this sector through digital twin technology. Digital twins enhance situational awareness for robot operators, which is crucial for safety in the nuclear energy sector, and their value is anticipated to increase with the growing complexity of cyber-physical systems. We propose a multimodal immersive digital twin platform for cyber-physical robot fleets based on ROS-Unity3D. The system design integrates building information models, displays mission parameters, visualizes robot sensor data, and provides multimodal user interaction through traditional and virtual reality interfaces. This enables real-time monitoring and management of fleets with heterogeneous robotic platforms. We performed a heuristic evaluation of our digital twin interface with robot operators from leading nuclear research institutions, Sellafield Ltd and the Japan Atomic Energy Agency, who performed a simulated robot inspection mission. The three usability themes that emerged from this evaluation and informed our design recommendations are the flexibility of the interface, the individuality of each robot, and adaptation to expanding sensor visualization capabilities.
This paper presents the development and implementation of a multi-UAV system focused on coverage path planning over multiple separated areas, capable of re-planning the collective mission in case of unexpected events. For this purpose, we present a distributed-centralized architecture that uses heuristic and computationally efficient methods to perform the planning/re-planning and decision-making tasks during control of the mission execution. We performed a computational evaluation of the algorithms, comparing them with other proposals, together with experiments in simulated and real flights. The results show that the system can distribute tasks equitably and efficiently among the aircraft, even in mid-flight when facing unexpected events, and that it is more computationally efficient than multiple proposals in the state of the art.
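Equitable task distribution of the kind described in this abstract can be illustrated with a simple longest-processing-time greedy heuristic: large areas are placed first, each onto the currently least-loaded aircraft. This is a hypothetical sketch, not the paper's algorithm; the (name, cost) area representation is an assumption.

```python
def distribute_areas(areas, num_uavs):
    """Greedy longest-processing-time assignment of coverage areas to UAVs.

    areas: list of (name, estimated_coverage_cost) tuples.
    Returns (assignment, loads): per-UAV area lists and total costs."""
    loads = [0.0] * num_uavs
    assignment = [[] for _ in range(num_uavs)]
    # place the largest areas first, each on the least-loaded UAV so far
    for name, cost in sorted(areas, key=lambda a: -a[1]):
        k = loads.index(min(loads))
        assignment[k].append(name)
        loads[k] += cost
    return assignment, loads
```

Because the routine is cheap, it could in principle be re-run mid-flight over the remaining (unfinished) areas when an unexpected event removes an aircraft, which matches the re-planning spirit of the abstract.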
Point cloud registration is a vital task in 3D perception, with several different applications in robotics. Recent advancements have introduced neural-based techniques that promise enhanced accuracy and robustness. In this paper, we thoroughly evaluate well-known neural-based point cloud registration methods using the Point Clouds Registration Benchmark, which was developed to cover a large variety of use cases. Our evaluation focuses on the performance of these techniques when applied to real-complex data, which presents a more challenging and realistic scenario than the simpler experiments typically conducted by the original authors. With few exceptions, the results reveal a significant performance gap, with most neural-based methods performing poorly on real-complex data. However, amidst the underwhelming results, 3DSmoothNet stands out as an exception, demonstrating excellent performance across the majority of benchmark sequences. Remarkably, it outperforms a widely used handcrafted feature extractor, namely FPFH, showcasing its effectiveness in capturing informative and discriminative features from point clouds. Given these results, we assert that while neural-based techniques could have advantages over conventional methods, further research is necessary to fully unlock their practical utility. We advocate for more extensive studies employing unbiased and realistic testing procedures, akin to the rigorous evaluation framework employed in this work. This paper represents a crucial step towards establishing a more robust literature in the field of point cloud registration. By exposing the limitations and advantages of existing neural-based methods, we aim to inspire future research and drive the development of more effective techniques for real-world applications.
Unmanned aerial vehicles (UAVs) offer many advantages over ground vehicles, including quadruped robots, owing to their high maneuverability when exploring complex and unknown environments. However, due to their limited computational capability, UAVs require lightweight but accurate state estimation algorithms for reliable exploration. In this paper, we propose a segmented-map-based exploration system built on LiDAR-based state estimation for UAVs. The proposed system includes exploration, obstacle avoidance, and object detection with localization, using 3D dense maps generated by a tightly coupled LiDAR-Inertial Odometry (LIO). It is a hybrid system that can switch between guided and exploration modes, making it practical for search and rescue missions in disaster scenarios. The proposed LIO algorithm adapts to its surroundings, allowing fast and accurate state estimation in complex environments. The proposed exploration algorithm is designed to cover specific regions in the 3D dense map generated by the LIO, with the UAV determining whether map points fall within the coverage area. We tested the system in both simulation and real-world environments and validated that it outperforms state-of-the-art algorithms in localization accuracy and exploration efficiency in complex environments.
Multi-sensor fusion-based localization technology has achieved high accuracy in autonomous systems, and improving its robustness is now the main challenge. The most commonly used sensors, LiDAR and cameras, are weather-sensitive, while FMCW Radar has strong adaptability but suffers from noise and ghost effects. In this paper, we propose a heterogeneous localization method called Radar on LiDAR Map (RoLM), which aims to enhance localization accuracy without relying on loop closures by mitigating the accumulated error of Radar odometry in real time. Our approach embeds the data from both Radar and LiDAR sensors into a density map. We calculate the spatial vector similarity with an offset to determine the corresponding place index within the candidate map and to estimate the rotation and translation. To refine the alignment, we use the Iterative Closest Point (ICP) algorithm to achieve optimal matching on the LiDAR submap. Extensive experiments on the MulRan Radar Dataset, the Oxford Radar RobotCar Dataset, and our own dataset demonstrate the feasibility and effectiveness of the proposed approach.
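The "spatial vector similarity with an offset" step can be illustrated with a brute-force comparison of 1-D descriptors under all circular shifts, where the best shift acts as a coarse rotation estimate. This sketch is not the authors' RoLM implementation; how the descriptor vector is built from the density map is assumed.

```python
import numpy as np

def best_offset_similarity(query, candidate):
    """Compare two 1-D place descriptors under every circular shift and
    return (best_shift, best_cosine_similarity). The shift with the
    highest similarity serves as a coarse rotational offset estimate."""
    q = query / (np.linalg.norm(query) + 1e-12)
    best_shift, best_sim = 0, -1.0
    for s in range(len(candidate)):
        c = np.roll(candidate, s)
        c = c / (np.linalg.norm(c) + 1e-12)
        sim = float(np.dot(q, c))
        if sim > best_sim:
            best_shift, best_sim = s, sim
    return best_shift, best_sim
```

In a full pipeline, the candidate with the highest similarity over the map would give the place index, and the recovered offset would seed a fine alignment step such as ICP.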
This article presents a study on the world's first unmanned machine designed for autonomous forestry operations. In response to the challenges associated with traditional forestry operations, we developed a platform equipped with essential hardware components necessary for performing autonomous forwarding tasks. Through the use of computer vision, autonomous navigation, and manipulator control algorithms, the machine is able to pick up logs from the ground and manoeuvre through a range of forest terrains without the need for human intervention. Our initial results demonstrate the potential for safe and efficient autonomous extraction of logs in the cut-to-length harvesting process. We achieved a high level of accuracy in our computer vision system, and our autonomous navigation system proved to be highly efficient. This research represents a significant milestone in the field of autonomous outdoor robotics, with far-reaching implications for the future of forestry operations. By reducing the need for human labour, autonomous machines have the potential to increase productivity and reduce labour costs, while also minimizing the environmental impact of timber harvesting. The success of our study highlights the potential for further development and optimization of autonomous machines in the forestry industry.
Projections for future air mobility envisage intensely utilised airspace that does not simply scale up from existing systems with centralised air traffic control. This paper considers the implementation and test of a software and hardware framework for decentralised control of aerial vehicles within intensely used airspace. Up to 10 rotary-wing vehicles with a maximum all-up mass of 1 kg are flown in an outdoor volume with a length scale of 100 m, with GPS and WiFi connectivity. Flight control is implemented using a Pixhawk 4 flight controller running the PX4 firmware, with guidance algorithms run on a separate onboard companion computer. Deconfliction is implemented using a simple elastic repulsion model with a guidance update rate of 10 Hz. Traffic structures are constructed from a path of directed waypoints and an associated cross-sectional geometry. Junctions are implemented when two paths converge into one or when one path diverges into two. Agents engage with structures through the execution of flow, merge, and swirl velocity rules. Calibration experiments showed that the worst-case latency in agents sharing position information was of the order of 0.5 s, made up of delays due to the finite guidance update rate, WiFi processing, and centralised message processing. A vehicle cruise speed of 2 m/s and a conflict radius of 2.5 m provided an acceptable compromise between experiment time efficiency (speed) and spatial efficiency (resolution) within the test volume. Results from recirculating junction experiments show that peak deconfliction activity occurs at the junction node; however, the biased distribution of agents within a corridor means the peak intensity is pushed ahead of the node. Use of meshed helical junction structures significantly reduces the intensity of conflict at the expense of reduced junction time efficiency.
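An elastic repulsion deconfliction rule of the kind described in this abstract can be sketched in a few lines: each neighbour inside the conflict radius contributes a spring-like push away, proportional to the penetration depth, and neighbours outside contribute nothing. The gain and the exact repulsion law here are assumptions, not the paper's model.

```python
import numpy as np

def repulsion_velocity(own_pos, neighbor_positions, conflict_radius, gain=1.0):
    """Sum spring-like repulsion from every neighbour inside the conflict
    radius (elastic model); zero contribution outside the radius."""
    v = np.zeros(2)
    for p in neighbor_positions:
        d = own_pos - p
        dist = np.linalg.norm(d)
        if 0.0 < dist < conflict_radius:
            # push away along the separation direction,
            # scaled by how deep the neighbour has penetrated the radius
            v += gain * (conflict_radius - dist) * d / dist
    return v
```

At each guidance update (10 Hz in the paper), a correction of this form would be added to the agent's nominal flow/merge/swirl velocity command.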
This paper presents the design and development of a funnel-shaped Sparus Docking Station (SDS) intended for the non-holonomic, torpedo-shaped Sparus II Autonomous Underwater Vehicle (AUV). The SDS is equipped with sensors and batteries, allowing for stand-alone long-term deployment of the AUV. An inverted Ultra-Short BaseLine (USBL) system is used to locate the Docking Station (DS) as well as to provide long-term, drift-less AUV navigation. The SDS observes the ocean currents using a Doppler Velocity Log (DVL) and is motorized to allow self-alignment with the current. Moreover, a docking algorithm that accounts for the current is used to guide the robot during the docking maneuver. The paper reports experimental results of the docking maneuver in sea trials.
This paper fuses ideas from Reinforcement Learning (RL), Learning from Demonstration (LfD), and Ensemble Learning into a single paradigm. Knowledge from a mixture of control algorithms (experts) is used to constrain the action space of the agent, enabling faster RL refinement of a control policy by avoiding unnecessary exploratory actions. The domain-specific knowledge of each expert is exploited; however, the resulting policy is robust against errors of individual experts, since it is refined by an RL reward function without copying any particular demonstration. Our method has the potential to supplement existing RLfD methods when multiple algorithmic approaches are available to function as experts. We illustrate our method in the context of a Visual Servoing (VS) task, in which a 7-DoF robot arm is controlled to maintain a desired pose relative to a target object. We explore four methods for bounding the actions of the RL agent during training: using a hypercube and a convex hull with modified loss functions, ignoring actions outside the convex hull, and projecting actions onto the convex hull. We compare the training progress of each method with and without the expert demonstrators. Our experiments show that using the convex hull with a modified loss function significantly improves training progress. Furthermore, we demonstrate faster VS error convergence while maintaining higher manipulability of the arm, compared to classical image-based VS, position-based VS, and hybrid-decoupled VS.
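Projecting an action onto the convex hull of expert actions, one of the bounding methods listed above, can be posed as a small quadratic program over convex-combination weights: minimize ||E^T w - a||^2 subject to w >= 0 and sum(w) = 1. The sketch below is illustrative only; the solver choice and tolerances are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def project_to_hull(action, expert_actions):
    """Project an RL action onto the convex hull of the expert actions
    by solving min_w ||E^T w - a||^2  s.t.  w >= 0, sum(w) = 1.
    Actions already inside the hull are returned (numerically) unchanged."""
    E = np.asarray(expert_actions, dtype=float)   # (n_experts, action_dim)
    a = np.asarray(action, dtype=float)
    n = E.shape[0]
    w0 = np.full(n, 1.0 / n)                      # start at the hull centroid
    res = minimize(
        lambda w: np.sum((E.T @ w - a) ** 2),
        w0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return E.T @ res.x
```

Applied at every training step, this keeps the exploratory policy inside the region the experts jointly consider reasonable while still letting the RL reward shape the final behaviour.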
A new form of waypoint navigation controller for a skid-steer vehicle is presented, consisting of a multiple-input, single-output nonlinear fuzzy angular velocity controller. The membership functions of the fuzzy controller employ a trapezoidal structure with a completely symmetric rule-base. Notably, Hierarchical Rule-Base Reduction (HRBR) was incorporated into the controller to select only the rules most influential on state errors. This was done by selecting inputs/outputs and generating a hierarchy of inputs using a Fuzzy Relations Control Strategy (FRCS). Like some traditional fuzzy controllers, the system provides coverage of the global operating environment; however, a rule for every possible combination of variables and states is no longer necessary. Consequently, HRBR fuzzy controllers can increase both the number of inputs and their associated fidelity without the rule-base growing dramatically. To contextualize the controller's performance, a background on vehicle dynamic modeling methodologies and an in-depth explanation of the related simulation model are provided. The proposed controller is then examined on test courses that probe the effects of steering disturbance, phase lag, and overshoot, expressed as Root Mean Square Error (RMSE), Max Error (ME), and Course Completion Time (CCT). Finally, simulation and experimental results for the controller's performance were compared with a state-of-the-art waypoint navigation vehicle controller, geometric pure pursuit. The fuzzy controller was found to outperform pure pursuit experimentally by 52.1 percent in RMSE, 26.8 percent in ME, and 1.07 percent in CCT on average, validating the viability of the controller.
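A trapezoidal membership function of the kind this controller employs can be written in a few lines; the breakpoint names (a, b, c, d) are the standard convention and the example values below are illustrative, not taken from the paper.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises linearly on [a, b], holds at
    1 on [b, c], falls linearly on [c, d], and is 0 elsewhere."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge
```

In a symmetric rule-base, each input (for example, heading error) would be covered by a family of such trapezoids mirrored about zero, and the HRBR step would then prune the rules that contribute least to the state errors.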