
Rotorcraft aerial vehicle’s contact-based landing and vision-based localization research

Published online by Cambridge University Press:  08 November 2022

Xiangdong Meng*
Affiliation:
Microsystem Research Center, the 58th Research Institute of China Electronics Technology Group Corporation, Wuxi, China School of Instrument Science and Engineering, Southeast University, Nanjing, China
Haoyang Xi
Affiliation:
Microsystem Research Center, the 58th Research Institute of China Electronics Technology Group Corporation, Wuxi, China
Jinghe Wei
Affiliation:
Microsystem Research Center, the 58th Research Institute of China Electronics Technology Group Corporation, Wuxi, China
Yuqing He
Affiliation:
Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
Jianda Han
Affiliation:
College of Artificial Intelligence, Nankai University, Tianjin, China
Aiguo Song
Affiliation:
School of Instrument Science and Engineering, Southeast University, Nanjing, China
*
*Corresponding author. E-mail: xdmeng09@gmail.com

Abstract

A novel concept, contact-based landing on a mobile platform, is proposed in this paper. An adaptive backstepping controller is designed to deal with the unknown disturbances in the interactive process, and the contact-based landing mission is implemented under a hybrid force/motion control framework. A rotorcraft aerial vehicle system and a ground mobile platform are designed to conduct flight experiments, evaluating the feasibility of the proposed landing scheme and control strategy. To the best of our knowledge, this is the first time a rotorcraft unmanned aerial vehicle has been used to conduct a contact-based landing. To improve system autonomy in future applications, vision-based recognition and localization methods are studied, enabling detection of a cooperative target even when it is partially occluded or viewed at close range. The proposed recognition algorithms are tested on a ground platform and evaluated in several simulated scenarios, demonstrating their effectiveness.

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

Unmanned aerial vehicles (UAVs) have been developed for several decades and can achieve vertical take-off and landing, hovering, and low-altitude flight. They are widely applied to sensing, transportation, and industrial scenarios, such as aerial photography [1], delivery services [2], and pesticide spraying [3]. Although both take-off and landing abilities are critical for UAVs, the landing stage is more complicated, especially landing on a mobile platform, whereas a UAV can leave the ground almost instantly during take-off [4]. The usual landing process is as follows: the UAV follows the moving vehicle within a limited position error and then descends to an appropriate height to get ready for landing. After rapid evaluation and a quick decision, the UAV stops its propellers and finally makes a touchdown. Even a small velocity or position error, or a moment of hesitation, may lead to a failed landing, usually making this an inefficient process.

Early research on rotorcraft landing missions focused on navy ships. The Northrop Grumman MQ-8 and Schweizer RQ-8 Fire Scout were early unmanned shipborne rotorcraft [5, 6] that could vertically take off and land on ship decks. Other relevant projects include SHARC [7], RemoH-M-100 [8], and Skeldar V-200 [9]. Nowadays, in civil fields, the research focus is landing on a ground mobile platform. The Mohamed Bin Zayed International Robotics Challenge [10] arranged a targeted competition item: controlling a UAV to land on a moving vehicle [11–13]. Various strategies proposed for tracking and planning were tested in autonomous landing missions. Refs. [13, 14] applied model predictive methods to UAVs to realize motion estimation and tracking. Vision-based recognition and localization techniques were used to realize precise landing in GPS-denied (global positioning system) environments [15–17]. Driven by artificial intelligence technology, some learning-based control and planning methods were studied and facilitated autonomous landing tasks [18, 19]. Additionally, to achieve landing on uneven or inclined surfaces, UAVs with multiple onboard manipulators were designed and tested in flight experiments [20]. Such structures are known as robotic landing gear. The above analysis shows that almost all existing UAV landing research is conducted while the UAV is in free flight. Low efficiency, high risk, and harsh implementation requirements are the apparent drawbacks of the traditional landing approach.

A novel approach, contact-based landing, is proposed here for the first time based on the next-generation UAV (i.e., the aerial manipulator, AM [21]), and the concept drawing is presented in Fig. 1. An AM system, usually composed of a rotorcraft UAV and manipulator(s), can physically interact with the external environment through the onboard manipulator [22]. The contact-based landing process is as follows: the AM approaches the mobile platform, establishes a steady contact, follows the platform's motion in synchronization, descends, and finally accomplishes the touchdown. Notably, the contact-based following stage is the critical issue: the AM needs to follow the ground mobile platform while maintaining sustained contact in flight. In this circumstance, the UAV's motion is restricted [23], usually leading to the loss of some DOFs (degrees of freedom) [24]. Meanwhile, the platform rarely remains at rest or in long-time uniform motion and may change speed or direction, so this uncertain motion easily disturbs the AM's flight state.

Figure 1. The contact-based landing concept drawing.

In this paper, the impact forces in the contact-based landing process are considered disturbances and eliminated in the controller design. An AM and a mobile platform are designed, and landing flight experiments are conducted to evaluate the method. Note that the ultimate goal of the proposed scheme is autonomous landing in an outdoor environment. Vision-based localization methods are therefore studied and tested on a simulated ground platform to improve position accuracy, laying the foundation for future applications. This paper's main contributions are as follows: it proposes an innovative landing approach for aerial vehicles and accomplishes the first contact-based landing experiment on a mobile platform.

The remainder of this paper is organized as follows. Section 2 introduces the system dynamics and the contact features. Section 3 presents a control framework for the contact-based landing mission. An AM system and a ground mobile platform are described in Section 4, and the contact-based landing experiment process is shown. Next, in Section 5, a cooperative target is designed for the landing detection task. The vision-based recognition and localization methods are studied and evaluated in several experimental scenarios. The conclusion and further work are in Section 6.

2. System dynamics and contact property

2.1. AM dynamics

A single-arm AM is studied, as depicted in Fig. 2. Define the aircraft body-fixed frame {B} (forward, right, and downward) and the earth inertial frame {I} (north, east, and downward). ${{\boldsymbol{{p}}}}={ [{x,y,z} ]^{\text{T}}}$ and $\boldsymbol{\varPhi }={[\phi,\theta,\psi ]^{\text{T}}}$ represent AM’s position and attitude states. $l_a$ is the onboard robot arm length, and the arm is connected to the aircraft body center by a hinge, driven by a motor to realize a lateral contact with the environmental surface.

Figure 2. Frames of the AM system.

The AM system dynamics comprises three parts: UAV position, attitude, and robot arm dynamics. They are

(1) \begin{equation}{\ddot{\boldsymbol{{p}}}} = -{1/ m}{{\boldsymbol{{R}}}_{\varPhi }}{{\boldsymbol{{u}}}_{{1}}} + {{\boldsymbol{{F\!}}}_{{g}}}-{1/ m}{{\boldsymbol{{F\!}}}_{{c}}}({\boldsymbol{\varPhi }},{a_{{p}}}) \end{equation}
(2) \begin{equation}{\ddot{\boldsymbol\varPhi }} = -{{\boldsymbol{{I}}}^{-1}}{\dot{\boldsymbol\varPhi }} \times \boldsymbol{{I}}{{ \dot{\boldsymbol \varPhi }}} + {{\boldsymbol{{I}}}^{-{\rm{1}}}}{\boldsymbol{{B}}\boldsymbol{\tau }} + {{\boldsymbol{\tau }}_m}({\tau _a},{l_a},{m_a}) \end{equation}
(3) \begin{equation}{J_a}{\ddot \theta _a} = {\tau _a} +{\tau _{r}} \end{equation}

where ${\boldsymbol{{u}}_1}={ [{0,{\rm{ }}0,{u_1}} ]^{\text{T}}}$ and $\boldsymbol{\tau }={ [{{u_2},{u_3},{u_4}} ]^{\text{T}}}$ are the UAV thrust and attitude torque vectors, and $\boldsymbol{{R}}_\varPhi$ is the rotation matrix from {B} to {I}. $\boldsymbol{{F}}_{\!g}={ [{0,{\rm{ }}0,mg} ]^{\text{T}}}$ is the system gravity vector, $\boldsymbol{{I}}=\text{diag} \{{{I_x},{I_y},{I_z}} \}$ is the inertia matrix, and $\boldsymbol{{B}}=\text{diag} \{{l,l,{\rm{ }}1} \}$ is the coefficient matrix. ${m_a},{\theta _a},{\tau _a},{\rm{and}}\,\,{J_a}$ are the robot arm's mass, joint angle, motor torque, and rotational inertia. $\tau _r$ is the reaction torque generated by UAV attitude control. $\boldsymbol{{F}}_{\!c}$ is the force exerted on the AM during the contact-based landing process, and $\boldsymbol{{\tau }}_{m}$ is the torque effect generated by robot arm joint control.
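As a sanity check on the translational dynamics (1), the following Python sketch evaluates $\ddot{\boldsymbol{{p}}}$ for given inputs. The ZYX rotation convention and all numeric values are illustrative assumptions, not parameters of the real system; at hover thrust $u_1 = mg$ with level attitude and no contact force, the acceleration should vanish.

```python
import math

def rot_matrix(phi, theta, psi):
    """ZYX rotation from body frame {B} to inertial frame {I} (assumed convention)."""
    cph, sph = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    cps, sps = math.cos(psi), math.sin(psi)
    return [
        [cps*cth, cps*sth*sph - sps*cph, cps*sth*cph + sps*sph],
        [sps*cth, sps*sth*sph + cps*cph, sps*sth*cph - cps*sph],
        [-sth,    cth*sph,               cth*cph],
    ]

def translational_accel(m, u1, phi, theta, psi, f_c, g=9.81):
    """Eq. (1): p_ddot = -(1/m) R_Phi [0,0,u1]^T + [0,0,g]^T - (1/m) F_c (z down)."""
    R = rot_matrix(phi, theta, psi)
    thrust_I = [R[i][2] * u1 for i in range(3)]  # R_Phi * [0, 0, u1]^T
    return [-thrust_I[i] / m + (g if i == 2 else 0.0) - f_c[i] / m
            for i in range(3)]
```

For example, `translational_accel(2.0, 2.0*9.81, 0, 0, 0, [0, 0, 0])` returns a near-zero vector (hover), while a nonzero pitch tilts the thrust and produces a horizontal acceleration component.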

2.2. Contact feature

The UAV platform has two features: it is not self-stabilizing and is under-actuated. Thus, a closed-loop control structure is usually applied to a UAV, making it behave as a spring-mass-damper-like system [25]. The following equation expresses the relationship between the UAV position response $\boldsymbol{{X}}$ and the exerted external force $\boldsymbol{{F}}^{\text{ext}}$ :

(4) \begin{equation} \begin{array}{l}{\boldsymbol{{X}}}(s) = \cfrac{1}{{{s^2}m + sK_2^i + K_1^i}}{{\boldsymbol{{F}}}^{{\rm{ext}}}}(s), \quad (K_2^i = k_2^im,K_{\rm{1}}^i = k_1^im, \,i = {\rm{{\textit{x,y,z}}}}) \end{array} \end{equation}

where $\boldsymbol{{X}}={ [{x,y,z} ]^{\text{T}}}$ represents the position change and $k^i_{1,2}(i= x,{\rm{ }}y,{\rm{ }}z)$ are position controller gains. The mass-weighted parameters $K_1^i$ can be treated as the system stiffness and $K_2^i$ as the damping, with the AM's mass $m$ acting as the spring-mass-damper system mass.

In addition, owing to UAV’s under-actuation property, when an AM exerts a horizontal contact force on the environment, the AM needs to maintain a tilt angle. The heading $\psi$ is set at 0 for the forward contact-based landing, and the contact force $F_x$ can be derived and expressed in terms of the AM attitude angle $\theta$ and system gravity $mg$ :

(5) \begin{equation}{F_x}=-mg\tan \theta \end{equation}
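Relations (4) and (5) directly give two design rules: the steady-state displacement under a constant force, and the pitch angle needed for a desired horizontal contact force. A minimal sketch (the numeric values in the usage note are illustrative assumptions):

```python
import math

def pitch_for_contact_force(f_x, m, g=9.81):
    """Invert Eq. (5): F_x = -m g tan(theta)  =>  theta = -atan(F_x / (m g))."""
    return -math.atan(f_x / (m * g))

def steady_state_displacement(f_ext, k1):
    """Eq. (4) as s -> 0: X = F_ext / K_1, i.e. K_1 acts as a closed-loop stiffness."""
    return f_ext / k1
```

For a hypothetical 2 kg AM pushing with 3 N, the required pitch is about $-0.15$ rad; with an assumed stiffness of 150 N/m, the same force deflects the hover position by 2 cm.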

3. Control system design

3.1. Control issue

While an AM conducts the contact-based landing on a mobile platform, it faces several control issues: the first is to maintain the AM's steady flight, the second is to maintain sustained contact with the target, and the last is to realize robust contact-based following. In this case, the interactive force and torque produced by the relative motion of the mobile platform and the robot arm can be treated as external disturbances for the aircraft and will be eliminated by a robust controller. The steady flight and contact-based following issues can be addressed using a hybrid force and motion control framework [26].

3.2. Controller design

An adaptive backstepping controller is designed for the AM system to deal with the unknown bounded disturbance. Firstly, variable substitution and feedback linearization are applied to (1)−(3), and a unified model is obtained. Next, the controller is designed in two steps: the basic controller structure is based on the backstepping method; the disturbance is estimated and eliminated by the sliding mode strategy.

Define the intermediate variable ${\boldsymbol{{h}}}_1={[{{h_{1,x}},{h_{1,y}},{h_{1,z}}} ]^{\text{T}}}$ , and the conversion relationship is:

(6) \begin{equation} \left \{ \begin{array}{l}{h_{ 1,{x}}} = -{{{u_1} \cos \phi \sin \theta }/ m}\\[5pt]{h_{ 1,{y}}} = {{{u_1} \sin \phi }/ m}\\[5pt]{h_{ 1,{z}}} = -{{{u_1} \cos \phi \cos \theta }/ m} + g \end{array} \right. \end{equation}

Thus, (1) can be rewritten as:

(7) \begin{equation}{\ddot{\boldsymbol{{p}}}} = {{\boldsymbol{{h}}}_1}-{1/ m}{{\boldsymbol{{F\!}}}_{{c}}} \end{equation}
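In implementation, (6) is typically used in the opposite direction: given a desired virtual acceleration $\boldsymbol{{h}}_1$ from the position loop, the thrust $u_1$ and attitude references $(\phi, \theta)$ are recovered. The following inversion is an illustrative sketch (assuming $\psi = 0$ and $|\phi|, |\theta| < \pi/2$), not the paper's code:

```python
import math

def attitude_thrust_from_h1(h1, m, g=9.81):
    """Recover (u1, phi, theta) from h1 = (h_x, h_y, h_z) by inverting Eq. (6)."""
    hx, hy, hz = h1
    a = math.sqrt(hx**2 + hy**2 + (hz - g)**2)   # magnitude equals u1 / m
    u1 = m * a
    phi = math.asin(hy / a)                      # from h_y = (u1/m) sin(phi)
    theta = math.atan2(-hx, g - hz)              # ratio of the h_x and h_z terms
    return u1, phi, theta
```

A round trip through (6) and this inversion recovers the original thrust and angles, which is a quick way to unit-test the conversion.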

Applying feedback linearization to (2), the relationship between the virtual input $\boldsymbol{\tau }^*=[ u_2^*,u_3^*,u_4^*]^{\text{T}}$ and the original input $\boldsymbol{\tau }={ [{{u_2},{u_3},{u_4}} ]^{\text{T}}}$ is

(8) \begin{equation} \left \{ \begin{array}{l}{u_2} = -{{({I_y}-{I_z})\dot \theta \dot \psi }/ l} + u_2^ * \\[5pt]{u_3}= - ({I_z}-{I_x}){{\dot \phi \dot \psi }/ l} + u_3^ * \\[5pt]{u_4}= - ({I_x}-{I_y})\dot \phi \dot \theta + u_4^* \end{array} \right. \end{equation}

Let ${\boldsymbol{{B}}}_{\tau }={\boldsymbol{{I}}^{-1}}\boldsymbol{{B}}$ in (2) and define the new variable ${\boldsymbol{\tau }}_v^*={\boldsymbol{{B}}}_{\tau }{{\boldsymbol{\tau }}^*}$ . Hence, (2) is expressed by the equation:

(9) \begin{equation} \ddot{{{\boldsymbol\varPhi}}} ={\boldsymbol{\tau }}_v^* +{\boldsymbol{\tau }}_{m} \end{equation}

From the above analysis, AM dynamics (1)−(3) can be expressed in a unified form:

(10) \begin{equation} \ddot{\boldsymbol{{x}}}=\boldsymbol{\rho }+\boldsymbol{{d}} \end{equation}

where $\boldsymbol{{x}}$ is the state vector of each sub-system (to distinguish: the scalar $x$ defined in Section 2 denotes the position component), $\boldsymbol{\rho }$ is the system input vector, and $\boldsymbol{{d}}$ is the disturbance vector.

Define state variables

(11) \begin{equation} \begin{array}{l}{{\dot{\boldsymbol{{x}}}}_1} = {{\boldsymbol{{x}}}_2}\\[5pt]{{\dot{\boldsymbol{{x}}}}_2} = {\boldsymbol{\rho }} + \boldsymbol{{d}} \end{array} \end{equation}

The variable errors are

(12) \begin{equation} \begin{array}{l} \Delta{{\boldsymbol{{x}}}_{1}} = {{\boldsymbol{{x}}}_{1}}-{{\boldsymbol{{x}}}_{1{\rm{ref}}}}\\[5pt] \Delta{{\boldsymbol{{x}}}_{2}} = {{\boldsymbol{{x}}}_{2}}-{{\boldsymbol{{x}}}_{2{\rm{ref}}}} \end{array} \end{equation}

Based on (11) and (12), the following relationships can be obtained

(13) \begin{equation} \Delta{{\dot{\boldsymbol{{x}}}}_{1}}={{\dot{\boldsymbol{{x}}}}_{1}}-{{\dot{\boldsymbol{{x}}}}_{1{\rm{ref}}}} ={{\boldsymbol{{x}}}_{2}}-{{\dot{\boldsymbol{{x}}}}_{1{\rm{ref}}}} \end{equation}
(14) \begin{equation} \Delta{{\dot{\boldsymbol{{x}}}}_{1}}={{\boldsymbol{{x}}}_{2{\rm{ref}}}}+({{\boldsymbol{{x}}}_{2}}-{{\boldsymbol{{x}}}_{2{\rm{ref}}}})-{{\dot{\boldsymbol{{x}}}}_{1{\rm{ref}}}} \end{equation}

Then, according to the backstepping control, the virtual control input is designed:

(15) \begin{equation}{{\boldsymbol{{x}}}_{2{\rm{ref}}}}= -{{\boldsymbol{{K}}}_1}\cdot \Delta{{\boldsymbol{{x}}}_{1}}+{{\dot{\boldsymbol{{x}}}}_{1{\rm{ref}}}} \end{equation}

Thus, based on (14), (15), and (11), the error state space equations are expressed as:

(16) \begin{equation} \begin{array}{l} \Delta{{{\dot{\boldsymbol{{x}}}}}_1}= -{{\boldsymbol{{K}}}_1}\cdot \Delta{{\boldsymbol{{x}}}_{1}}+\Delta{{\boldsymbol{{x}}}_2}\\[5pt] \Delta{{{\dot{\boldsymbol{{x}}}}}_2}={{{\dot{\boldsymbol{{x}}}}}_2}-{{{\dot{\boldsymbol{{x}}}}}_{2{\rm{ref}}}}={\boldsymbol{\rho }}+{\boldsymbol{{d}}}-{{{\dot{\boldsymbol{{x}}}}}_{2{\rm{ref}}}} \end{array} \end{equation}

The system control law is designed:

(17) \begin{equation}{\boldsymbol{\rho }}=-{{\boldsymbol{{K}}}_2}\cdot \Delta{{\boldsymbol{{x}}}_{2}}- \Delta{{\boldsymbol{{x}}}_1} +{{\dot{\boldsymbol{{x}}}}_{2{\rm{ref}}}} \end{equation}

Assume the unknown disturbance $\boldsymbol{{d}}$ is bounded, $ \| \boldsymbol{{d}} \| \le \bar d$, where $\bar d$ is an unknown constant. Design an adaptive law for $\bar d$ to estimate this bound:

(18) \begin{equation} \dot{\hat{{\bar d}}}={k_{d}}\cdot \left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \| \end{equation}

where the initialization and gain conditions are

(19) \begin{equation} \left \{ \begin{array}{l} \hat{{\bar d}}(0) = 0\\[5pt]{k_{d}} \gt 0 \end{array} \right. \end{equation}

A sliding mode term is introduced in control law to address the disturbance impact $\boldsymbol{{d}}$ in (10). Thus, the robust control law is designed:

(20) \begin{equation}{\boldsymbol{\rho }} =-{{\boldsymbol{{K}}}_2}\cdot \Delta{{\boldsymbol{{x}}}_{2}}-\Delta{{\boldsymbol{{x}}}_1} +{{\dot{\boldsymbol{{x}}}}_{2{\rm{ref}}}}-{\frac{{\Delta{{\bf{x}}_2}}}{{\left \|{\Delta{{\bf{x}}_2}} \right \|}}}\cdot{\hat{\bar d}} \end{equation}

From (16) and (20), the closed-loop system is expressed:

(21) \begin{equation} \begin{array}{l} \Delta{{{\dot{\boldsymbol{{x}}}}}_1}=-{{\boldsymbol{{K}}}_1}\cdot \Delta{{\boldsymbol{{x}}}_{1}}+\Delta{{\boldsymbol{{x}}}_2}\\[5pt] \Delta{{{\dot{\boldsymbol{{x}}}}}_2}=-{{\boldsymbol{{K}}}_2}\cdot \Delta{{\boldsymbol{{x}}}_{2}}-\Delta{{\boldsymbol{{x}}}_1}-\dfrac{{\Delta{{\bf{x}}_2}}}{{\left \|{\Delta{{\bf{x}}_2}} \right \|}}\cdot{\hat{\bar d}}+{\boldsymbol{{d}}} \end{array} \end{equation}

where ${\boldsymbol{{K}}}_1$ and ${\boldsymbol{{K}}}_2$ are positive diagonal matrices.

The Lyapunov function is defined:

(22) \begin{equation} \begin{array}{l} V(\Delta{{\boldsymbol{{x}}}_1},\Delta{{\boldsymbol{{x}}}_2},\hat{\bar{d}} - \bar d)\\[5pt] ={\dfrac{1}{2}}\Delta{\boldsymbol{{x}}}_1^{\rm{T}}\Delta{{\boldsymbol{{x}}}_1}+{{1 \over 2}}\Delta{\boldsymbol{{x}}}_2^{\rm{T}}\Delta{{\boldsymbol{{x}}}_2} +{\dfrac{1}{{2{k_{d}}}}}{(\hat{\bar{d}}- \bar d)^2} \end{array} \end{equation}

Substitute (18) and (21) into (22). The derivative is

(23) \begin{align} \dot V &= \Delta{\boldsymbol{{x}}}_1^{\rm{T}}\Delta{{{\dot{\boldsymbol{{x}}}}}_1} + \Delta{\boldsymbol{{x}}}_2^{\rm{T}}\Delta{{{\dot{\boldsymbol{{x}}}}}_2} + {\dfrac{1}{{{k_{d}}}}}(\hat{\bar{d}}- \bar d)\dot{\hat{\bar{d}}} \nonumber \\[5pt] &= \Delta{\boldsymbol{{x}}}_1^{\rm{T}}( -{{\boldsymbol{{K}}}_1}\Delta{{\boldsymbol{{x}}}_1} + \Delta{{\boldsymbol{{x}}}_2}) + \Delta{\boldsymbol{{x}}}_2^{\rm{T}}( -{{\boldsymbol{{K}}}_2}\Delta{{\boldsymbol{{x}}}_2} - \Delta{{\boldsymbol{{x}}}_1}-{\dfrac{{\Delta{{\bf{x}}_2}} }{{\left \|{\Delta{{\bf{x}}_2}} \right \|}}} \hat{\bar{d}} + {\boldsymbol{{d}}}) + {\dfrac{1}{{{k_{d}}}}}(\hat{\bar{d}} - \bar d){k_{d}}\left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \|\\[5pt] &= -{{\boldsymbol{{K}}}_1}\Delta{\boldsymbol{{x}}}_1^{\rm{T}}\Delta{{\boldsymbol{{x}}}_1} -{{\boldsymbol{{K}}}_2}\Delta{\boldsymbol{{x}}}_2^{\rm{T}}\Delta{{\boldsymbol{{x}}}_2} - \left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \|\hat{\bar{d}} + \Delta{\boldsymbol{{x}}}_2^{\rm{T}}{\boldsymbol{{d}}}{\rm{\,+\,(}}\hat{\bar{d}}- \bar d{\rm{)}}\left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \| \nonumber \end{align}

For the item $\Delta{\boldsymbol{{x}}}_2^{\rm{T}}{\boldsymbol{{d}}}$ and $ \|{\boldsymbol{{d}}} \|\le{\bar d}$ , the relationship $\Delta{\boldsymbol{{x}}}_2^{\rm{T}}{\boldsymbol{{d}}}\le \|{\Delta{{\boldsymbol{{x}}}_2}} \|\bar d$ can be obtained. Thus, (23) can be rewritten:

(24) \begin{align} \dot V & \le -{{\boldsymbol{{K}}}_1}\Delta{\boldsymbol{{x}}}_1^{\rm{T}}\Delta{{\boldsymbol{{x}}}_1} -{{\boldsymbol{{K}}}_2}\Delta{\boldsymbol{{x}}}_2^{\rm{T}}\Delta{{\boldsymbol{{x}}}_2} - \left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \|\hat{\bar{d}} + \left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \|\bar d + (\hat{\bar{d}}- \bar d)\left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \| \nonumber\\[5pt] & = -{{\boldsymbol{{K}}}_1}\Delta{\boldsymbol{{x}}}_1^{\rm{T}}\Delta{{\boldsymbol{{x}}}_1} -{{\boldsymbol{{K}}}_2}\Delta{\boldsymbol{{x}}}_2^{\rm{T}}\Delta{{\boldsymbol{{x}}}_2} - \left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \|(\hat{\bar{d}} - \bar d) + (\hat{\bar{d}}- \bar d)\left \|{\Delta{{\boldsymbol{{x}}}_2}} \right \| \nonumber\\[5pt] & = -{{\boldsymbol{{K}}}_1}\Delta{\boldsymbol{{x}}}_1^{\rm{T}}\Delta{{\boldsymbol{{x}}}_1} -{{\boldsymbol{{K}}}_2}\Delta{\boldsymbol{{x}}}_2^{\rm{T}}\Delta{{\boldsymbol{{x}}}_2} \\[5pt] & \lt 0 \nonumber \end{align}

The system is asymptotically stable. The state ${\boldsymbol{{x}}}_1$ will asymptotically converge to the reference ${\boldsymbol{{x}}}_{1{\rm{ref}}}$, and in the process, the adaptive strategy eliminates the disturbance impact.
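A scalar numerical sketch of the closed loop formed by (15), (18), and (20) illustrates the convergence argument. The set-point, gains, disturbance value, and the small smoothing constant added to the sliding-mode term (to avoid division by zero at $\Delta x_2 = 0$) are all illustrative assumptions:

```python
def simulate(t_end=10.0, dt=1e-3, k1=2.0, k2=2.0, kd=1.0, d=0.5,
             x1ref=1.0, eps=1e-3):
    """Scalar sketch of Eqs. (15), (18), (20) on x1_ddot = rho + d.
    x1ref is a constant set-point, so its derivative is zero; eps smooths
    the sliding-mode sign term (an added implementation detail)."""
    x1, x2, dhat = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dx1 = x1 - x1ref                      # Eq. (12)
        x2ref = -k1 * dx1                     # Eq. (15), with x1ref_dot = 0
        dx2 = x2 - x2ref
        x2ref_dot = -k1 * x2                  # time derivative of Eq. (15)
        sgn = dx2 / (abs(dx2) + eps)          # smoothed sliding-mode term
        rho = -k2 * dx2 - dx1 + x2ref_dot - sgn * dhat   # Eq. (20)
        dhat += kd * abs(dx2) * dt            # adaptive law, Eq. (18)
        x1 += x2 * dt                         # Euler integration of Eq. (11)
        x2 += (rho + d) * dt
    return x1, dhat
```

Running the simulation, the state settles near the set-point despite the constant disturbance, while the disturbance-bound estimate grows to a positive value that covers it.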

3.3. Control framework implementation

The system control framework is depicted in Fig. 3. While an AM performs the contact-based landing task in forward flight, the workspace can be divided into two orthogonally decoupled subspaces: the constrained space and the free-flight space. The former lies along the normal direction of the interaction target, and the latter is the descent space along the surface. In the free-flight space, the proposed backstepping controller ensures steady flight and is implemented under the hierarchical inner-outer loop structure. In the constrained space, contact control is transformed into position control based on the relationship (4). Meanwhile, an attitude feed-forward strategy is designed based on (5). This process applies the hybrid force and motion control framework, which assists AMs in maintaining reliable and sustained contact in flight. The onboard robot manipulator is individually controlled based on arm kinematics, maintaining horizontal contact with the target. The trajectory generator serves two purposes: it generates safe and steady descending trajectory points for the contact-flying AM, and it outputs the desired horizontal pose set-point for the AM's end-effector. The end height of the descent motion is set at about 10 cm above the touchdown region and can be calculated from the relative height in the visual recognition process. For the trajectory planning issue, the start and end positions are known, so common methods can be applied, such as the Bezier function method [27].
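The descending trajectory points can be generated, for instance, by a cubic Bezier height profile in which the endpoint control points are repeated so that the vertical rate is zero at both ends. This is an illustrative sketch of the generator (the heights match the experiment in Section 4, but are otherwise assumptions), not the paper's implementation:

```python
def cubic_bezier_descent(z_start, z_end, n=50):
    """Cubic Bezier height profile with zero start/end rates (z-axis down)."""
    # Repeating each endpoint as its neighbouring control point gives
    # zero velocity at t = 0 and t = 1.
    p0, p1, p2, p3 = z_start, z_start, z_end, z_end
    pts = []
    for i in range(n + 1):
        t = i / n
        z = ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
             + 3 * (1 - t) * t**2 * p2 + t**3 * p3)
        pts.append(z)
    return pts
```

With these control points, the profile reduces to $z(t) = z_{\rm start} + (z_{\rm end}-z_{\rm start})(3t^2-2t^3)$, which is monotone between the endpoints.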

Figure 3. The control framework for contact-based landing.

4. System description and experiment

The experimental setup consists of two sub-systems: an AM system and a ground mobile platform. In the following experiment, the mobile platform is controlled to move forward, and meanwhile, the AM is controlled to conduct the contact-based landing mission. The result and analysis are presented.

4.1. AM system

A hexa-rotor aircraft is designed as the aerial platform for its larger load capacity, and a one-DOF robot arm is installed, forming an AM system, as shown in Fig. 4. A roller-type end-effector is designed for sliding motion along the contact surface, as depicted in Fig. 5, and it is connected to the hinge by a carbon fiber link. A single-axis force sensor measures the force in the contact process. The battery placement balances the AM system's center of gravity, contributing to steady flight in the take-off and following stages. The AM's other parameters are presented in Table I.

Figure 4. AM system structure.

Figure 5. The roller-type end-effector design drawing.

Table I. AM parameters.

The roller is made of aluminum and carbon fiber to reduce weight, as shown in Fig. 5. A switch-lock structure, driven by a mini servo, is designed to enable or disable the roller’s spin and stop. The force sensor is installed at the arm link end, on which the roller is fixed.

4.2. Mobile platform

A ground mobile platform is designed, as depicted in Fig. 6. A slide rail is mounted on the support frame. A rectangular deck is fixed on the slider and driven by an end motor, enabling movement along the rail. A vertical board (i.e., the contact board) is installed on the deck platform for the contact-based landing task, providing sufficient support for sustained contact. The mobile platform’s velocity can be set freely according to different motion scenes. Two limit switches are installed on both ends of the slide rail to ensure safe movement. The mobile platform structural parameters are in Table II.

Figure 6. The ground mobile platform.

4.3. Contact-based landing experiment and analysis

The experiment is conducted in a motion capture environment, and the experimental setting is shown in Table III.

The contact-based landing mission consists of the following steps: approach, contact, descent, and touchdown, as illustrated in Fig. 7 and the attached video (Video #1). The detailed process is as follows: the AM starts moving from the position [−1.0 m, −0.5 m, −1.85 m] $^{\text{T}}$ and contacts the platform's vertical board at the position [0.0 m, −0.5 m, −1.85 m] $^{\text{T}}$. The ground platform remains stationary at first and then starts to move at a certain speed. Meanwhile, the AM slides down to a safe free-fall height and stops the propellers. Figures 8–12 show the AM's state curves in the contact-based landing process, which lasts 35 s:

Table II. Mobile platform parameters.

Table III. The experimental setting.

$^*$ The value can be set artificially. Others are programmed settings.

Figure 7. The contact-based landing on a mobile platform.

Figure 8. The position changes in the landing process.

  • Contact stage, 0–17.0 s;

  • Descent motion, 17.0–25.2 s;

  • Complete landing, 25.2–35.0 s.

Contact stage: the AM approaches forward and maintains sustained contact with the mobile platform while flying in the air. Descent motion: the AM descends toward the platform deck until a safe set-point height while maintaining sustained contact. Complete landing: the AM stops its propellers and achieves a final free-fall onto the mobile platform.

In these figures, subscript $r$ denotes the real value and $d$ the desired value or set-point. In Fig. 8, there is an initial error in position $x$ . Before the contact stage, the AM is in free-flight mode and approaches the mobile platform in the forward direction; in this trajectory tracking process, some position error is inevitable. The first stage (the yellow part) is the static contact stage, so the three-dimensional position ( $x_r$ , $y_r$ , and $z_r$ ) stays constant at [0.0 m, $-$ 0.5 m, $-$ 1.85 m] $^{\text{T}}$. Then, the platform moves forward (the green part in Fig. 8). Over these two stages (0 to 25.2 s), the AM maintains sustained contact with the mobile platform, showing that the controller ensures steady contact and tracking flight. Meanwhile, the AM is commanded to descend from $-$ 1.85 to $-$ 1.45 m (the z-axis position curve in Fig. 8), bringing it very close to the platform deck (the relative distance can be seen in Fig. 7(b)), which is safe for touchdown. Therefore, at 25.2 s, the AM stops its propellers and successfully reaches the ground platform, completing the contact-based landing mission. In the remaining time (the red part, from 25.2 s to the end at 35 s), the AM rests on the platform. The $x$ trajectory still has a target value in the last stage: the desired value $x_d$ is generated from the AM's current position $x_r$ and the position error $\Delta x$ calculated by (4).

The position error curves are shown in Fig. 9, and the mean and variance are calculated in Table IV. The range 6–17 s is the constant-contact-force stage, and 17–25 s is the descent motion. The means and variances of $\Delta x$ and $\Delta y$ show no significant differences. However, the absolute value of the $\Delta z$ mean is larger, especially in the range 17–25 s, and the same trend can be observed for the variance. This is because the AM's height is actively controlled to realize the landing, and some tracking error is inevitable.

Figure 9. The position error curve.

Table IV. Mean and variance of position.

The AM's attitude angles are recorded in the landing process, and two experimental results are shown in Fig. 10. From 0 to 17 s (the yellow part), the roll and yaw angle errors are in the range $\pm$ 0.03 rad ( $\lt 2^{\circ }$ ), the same precision as in regular free flight. Due to the aircraft's under-actuation feature, the AM relies on pitch adjustment to generate a forward contact force. Contact begins at 3 s, and the desired pitch angle is 0.12 rad. In this case, the pitch error still meets the criterion of $\pm$ 0.03 rad, which means the AM attitude control loop is effective. In the descent stage (the green part), the real roll and pitch angle curves track the desired values and are not obviously affected by the descending motion. The yaw angle fluctuates due to the heading control error in the contact-based landing process.

Figure 10. AM attitude angles in the landing process.

The attitude error curves are shown in Fig. 11. For each metric, the larger mean and variance of the two experimental results are listed in Table V. The attitude variances show no significant differences numerically. The error mean of $\Delta \psi$ in the range 17–25 s is larger than the others, which shows that the physical contact affects the AM's heading control more than the roll and pitch angles.

Figure 11. The attitude error curve.

The onboard force sensor measures the real-time contact force during flight, and the data are plotted in Fig. 12, which also presents two experimental results. The desired force for the contact-based landing mission $F_d$ is 3 N (the red dashed line). From 0 to 3 s, the AM is in the transitional period, preparing for the contact motion. From 3 s to 25.2 s, the force $F_r$ remains above 0 N, showing that sustained contact is achieved. For each metric, the larger mean and variance of the contact force error in the two experimental results are listed in Table VI. The mean in the range 17–25 s has a greater absolute value than in 6–17 s. The error of $F_r$ may seem large compared with conventional robots such as industrial manipulators. This is mainly because the AM's characteristics are different: the AM's base is an under-actuated aircraft platform, a floating robot, whose position precision can only reach $\pm$ 2 cm in a motion capture environment, whereas ground mobile or industrial manipulators have heavy or fixed bases and position precision below one millimeter. In this work, the AM's contact force is regulated through position control, so the force precision is lower than that of other manipulators. Nevertheless, the force enables the AM to maintain steady contact with the external environment to fulfill complex missions such as contact-based landing. From this point of view, the proposed method shows its effectiveness in the coordinated control of contact and motion for AMs.

5. Vision-based localization research

An AM is expected to implement the autonomous contact-based landing outdoors or in a GPS-denied environment, where vision-based strategies are usually applied. Thus, this section studies the vision-based recognition and localization issues based on a cooperative target. The methods are tested on a ground platform in a laboratory environment.

5.1. Cooperative target design

For the AM's landing task, the onboard camera needs to recognize specific markers to obtain relative position and attitude information [28], and an artificial marker (i.e., a cooperative target) is usually used. The AM's height decreases from several meters (the hovering stage) to a few centimeters (the touchdown stage). Thus, the marker design should be suitable for far-distance recognition, close-range recognition, and even the partially occluded case. Existing markers (such as AprilTag [29]) are usually made very large for far-distance recognition and surrounded by several small markers for the close-range case, which introduces larger errors into the relative pose calculation. In this paper, a concentric and nested circle target is designed, as depicted in Fig. 13. The marker consists of $2i+1$ black-and-white circles, where $i$ can be set to 2−4 in regular scenarios. Two circles (main $M_i$ and auxiliary $N_i$) are placed in each layer. All $M_i$ circles are concentric, the $N_i$ circles are placed in the intervals between them, and their centers are collinear. The circle radii are designed in fixed proportions for quick calculation. For the landing and recognition task, the marker provides relative position and heading angle for the landing AM.
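The nested layout can be sketched as follows. This is a hypothetical rendering of the concentric main circles $M_i$ only; the auxiliary $N_i$ circles, the image size, and the particular radii are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def draw_marker(size=256, radii=(120, 90, 60, 40, 20)):
    """Sketch of a concentric black-and-white target: filled discs are
    drawn from the outermost radius inward with alternating colour, so
    each layer leaves a visible ring behind it."""
    yy, xx = np.mgrid[:size, :size]
    dist = np.hypot(yy - size / 2, xx - size / 2)  # distance to marker center
    img = np.full((size, size), 255, np.uint8)     # white background
    for k, r in enumerate(radii):                  # radii in descending order
        img[dist <= r] = 0 if k % 2 == 0 else 255  # alternate black/white
    return img

marker = draw_marker()
```

Because the discs are drawn outermost-first, an occluded or cropped view still contains complete inner rings, which is the property the close-range case relies on.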

5.2. Vision-based recognition and localization

The cooperative target recognition relies on image edge detection, for which a bilateral filtering algorithm is applied [30]. Compared with a regular Gaussian filter, it replaces the intensity of each pixel with a weighted average of intensity values from nearby pixels, where the weights depend on intensity similarity as well as spatial distance. In this paper, the range parameter matters more than the domain parameter for obtaining a clear edge of the cooperative target (as shown in Fig. 13).
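To illustrate why the range parameter dominates edge preservation, here is a minimal, naive bilateral filter. It is a direct implementation of the Tomasi–Manduchi weighting for a single-channel image, not the paper's OpenCV-based code; a small `sigma_range` suppresses averaging across an intensity step, keeping the target's edges sharp.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_space=2.0, sigma_range=0.1):
    """Naive bilateral filter: each output pixel is a weighted average of
    its neighbours, where the weight combines spatial closeness (domain
    kernel) and intensity similarity (range kernel)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_space**2))  # domain kernel
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_range**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# A black/white step edge, like a marker boundary: with the small
# sigma_range, neighbours across the step get near-zero weight.
step = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
filtered = bilateral_filter(step)
```

With `sigma_range = 0.1`, a neighbour differing by the full step of 1.0 receives a weight of roughly $e^{-50}$, so pixels on either side of the edge stay essentially unchanged.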

A contour extraction method is applied to acquire the nesting information [31]. Circle detection is based on the circumference-area ratio:

(25) \begin{equation}{{{L^2}}/ S} = {{{{(2\pi r)}^2}}/{(\pi{r^2})}} = 4\pi \end{equation}
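Eq. (25) can be checked on polygonal contours directly. The sketch below is an illustration, not the paper's implementation (which, on OpenCV, would presumably use `cv2.arcLength` and `cv2.contourArea`): it computes $L^2/S$ with the shoelace formula and accepts contours whose ratio is close to $4\pi$.

```python
import math

def circularity(points):
    """Perimeter squared over enclosed area for a closed contour given
    as (x, y) vertices; the ratio approaches 4*pi for a circle and is
    strictly larger for any other shape (isoperimetric inequality)."""
    n = len(points)
    perimeter = sum(math.dist(points[k], points[(k + 1) % n]) for k in range(n))
    # Shoelace formula for the polygon area.
    area = 0.5 * abs(sum(points[k][0] * points[(k + 1) % n][1]
                         - points[(k + 1) % n][0] * points[k][1]
                         for k in range(n)))
    return perimeter ** 2 / area

def is_circle(points, tol=0.05):
    """Accept a contour whose L^2/S ratio is within tol of 4*pi."""
    return abs(circularity(points) - 4 * math.pi) / (4 * math.pi) < tol

circle = [(math.cos(2 * math.pi * k / 360), math.sin(2 * math.pi * k / 360))
          for k in range(360)]
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

A square, for comparison, gives $L^2/S = 16 > 4\pi \approx 12.57$, so a modest tolerance cleanly separates circular contours from other shapes.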

Table V. Mean and variance of attitude.

Figure 12. The contact force curve.

Algorithm 1: Target detection and state calculation

In this way, it costs less time than the Hough circle transform and is effective for detecting all circles in the cooperative target. The outermost visible circle pair is selected as the extracted feature according to the descent height, and its center and radius are determined by the minimum enclosing circle method. The relative height $h$ and heading angle $\psi$ are calculated as follows:

(26) \begin{equation} h = {{({h_1} + {h_2})}/ 2} = {{\left({\frac{{f \cdot{R_1}} }{{{r_1}}}} + {\frac{{f \cdot{R_2}}}{{{r_2}}}}\right)}/ 2} \end{equation}
(27) \begin{equation} \psi = \left \{{\begin{array}{*{20}{c}}{\arctan \left({\dfrac{{{x_2} -{x_1}}}{{{y_2} -{y_1}}}}\right),\quad \quad \;\; \;\;{y_2} \ge{y_1}}\\[14pt]{\pi + \arctan \left({\dfrac{{{x_2} -{x_1}}}{{{y_2} -{y_1}}}}\right),\quad \;{y_2} \lt{y_1}}\\[14pt]{\psi - 2\pi,\quad \quad \quad \qquad \quad \quad \psi \gt \pi } \end{array}} \right. \end{equation}

where $f$ is the focal length and the subscript $i \in \{1,2\}$ indexes the circle pair. $R_1$ and $R_2$ are the real radii of the cooperative target, $r_1$ and $r_2$ are the corresponding radii on the image plane, and $x_i$ and $y_i$ are the pixel positions of the circle centers. Algorithm 1 presents the circle detection process and the state calculation.
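Eqs. (26)–(27) translate directly into code. The sketch below is a straightforward implementation under the pinhole model; note that the branch structure of Eq. (27) is equivalent to `atan2(x2 - x1, y2 - y1)`, which already returns a value in $(-\pi, \pi]$.

```python
import math

def relative_height(f, R1, R2, r1, r2):
    """Eq. (26): average the pinhole-model heights h1 = f*R1/r1 and
    h2 = f*R2/r2 obtained from the two circles of the detected pair."""
    return 0.5 * (f * R1 / r1 + f * R2 / r2)

def heading(x1, y1, x2, y2):
    """Eq. (27): heading angle of the line through the two circle
    centers; atan2 covers both the y2 >= y1 and y2 < y1 branches as
    well as the final wrap into (-pi, pi]."""
    return math.atan2(x2 - x1, y2 - y1)
```

For instance, with a focal length of 500 px and real radii of 0.5 m and 0.25 m imaged at 100 px and 50 px, both circles agree on a height of 2.5 m (these numbers are illustrative, not the paper's calibration).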

5.3. Method evaluation

The proposed algorithm is tested on the ground platform using a mini PC (Intel NUC) with an Intel Core i5-6260 CPU, 8 GB RAM, and a 128 GB SSD. The image processing algorithms run on OpenCV. The start height, end height, and descent speed simulate the AM's states in the landing descent stage: the camera starts to move at 1.85 m and descends to a height of 0.10 m. Experiments are conducted to simulate several scenarios, and the real-time recognition results are presented in Figs. 14−17 and in the attached video (Video #2−#5). In the following figures, the largest red circle is used to calculate the relative height, and the circle pair (i.e., the same-color circles) is used to calculate the heading angle, indicated by a green arrow.

Table VI. Mean and variance of force.

Figure 13. The cooperative target design.

Figure 14. Target detection in Case A.

5.3.1. Case A: Landing on a static platform

The target remains stationary on the ground, and the camera descends. The proposed method can detect the landing target in the descent process, as depicted in Fig. 14. This is a regular and easy scenario for target recognition.

5.3.2. Case B: Landing on a moving platform

The target is placed on the ground and rotated with a stick, as shown in Fig. 15(a), while the camera descends. Figure 15(b) shows clockwise rotation and Fig. 15(c) counter-clockwise rotation, at a speed of about two revolutions per second; even in this case, the real-time heading angle can still be captured.

Figure 15. Target detection in Case B.

Figure 16. Target detection in Case C.

Figure 17. Target detection in Case D.

Figure 18. The height information.

5.3.3. Case C: Landing on an inclined platform

Landing on an inclined surface is required in actual applications. Thus, the target is swayed in the air to simulate an inclined platform while the camera descends. The target's tilt to either side can exceed ${45}^{\circ }$, as shown in Fig. 16, and the real-time height and heading angle are still output.

5.3.4. Case D: Landing at a close range

The camera cannot capture the full target image when the AM descends to a low height, and here the nested cooperative target shows its advantage, as depicted in Fig. 17. The target can be detected even when partially occluded (10$\%$−40$\%$ occlusion). If most of the outer circle cannot be captured, the inner circle is detected instead to calculate the height and heading angle, as shown in Fig. 17(c).

In Fig. 18, the real-time height and heading angle information are presented for Case A. In Fig. 18(a), the blue curve is the height calculated from image recognition, and the red curve is the ultrasonic sensor measurement. Figure 18(b) shows that the height error $\Delta h=h_c-h_u$ during the descent stays within −2 to 2 cm, indicating that the vision-based height measurement is reliable. In Fig. 18(c), the marker's heading $\psi$ on the ground is set to 0 rad, and the maximum heading error of the vision-based estimate is $\pm$0.02 rad, which is accurate enough for the AM's heading fusion and estimation in actual flight.

6. Conclusion

This paper first introduces the concept of contact-based landing on a mobile platform and provides the implementation of an AM system and flight experiments.

An adaptive backstepping controller is designed to deal with the impact of unknown bounded disturbances. The contact-based landing control is implemented under the hybrid force and motion control framework. An AM system based on a multi-rotor UAV is designed to conduct actual contact-based landing flight experiments. To improve system autonomy, vision-based recognition and localization methods are studied and tested on a ground platform in a laboratory environment. The recognition results, evaluated in several simulated scenarios, indicate the algorithm's reliability.

Future work will focus on the vision method evaluation on AM systems and the outdoor experiment of contact-based landing.

Acknowledgements

The authors thank Lingyi Meng and Lingchong Meng for their support in the experiments.

Author contributions

Xiangdong Meng and Yuqing He designed the study and wrote the article. Haoyang Xi fulfilled the vision-based recognition method. Jinghe Wei performed statistical analyses. Jianda Han and Aiguo Song conceived the study and the article.

Financial support

This work is supported by the National Natural Science Foundation of China (Grant No. 62103388), China Postdoctoral Science Foundation (Grant No. 2022M712976), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20210070), Joint Fund of Science & Technology Department of Liaoning Province and State Key Laboratory of Robotics, China (Grant No. 2021KF2205), and Jiangsu Shuangchuang Project (Grant No. JSSCBS20211448).

Conflicts of interest

The authors declare that no conflicts of interest exist.

Ethical approval

Not applicable.

References

Eker, R., Aydın, A. and Hubl, J., “Unmanned aerial vehicle (UAV)-based monitoring of a landslide: Gallenzerkogel landslide (Ybbs-Lower Austria) case study,” Environ. Monit. Assess. 190(1), 1–14 (2018).
Oishi, K. and Jimbo, T., “Autonomous Cooperative Transportation System Involving Multi-Aerial Robots with Variable Attachment Mechanism,” In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2021) pp. 6322–6328.
Lan, Y. and Chen, S., “Current status and trends of plant protection UAV and its spraying technology in China,” Int. J. Precis. Agricultural Aviation 1(1), 1–9 (2018).
Polvara, R., Sharma, S., Wan, J., Manning, A. and Sutton, R., “Autonomous vehicular landings on the deck of an unmanned surface vehicle using deep reinforcement learning,” Robotica 37(11), 1867–1882 (2019).
Duranti, S. and Aerosystems, S., “Autonomous Take-Off and Landing of the SHARC Technology Demonstrator,” In: 25th International Congress of the Aeronautical Sciences (2006) pp. 1–10.
Quenzel, J., Splietker, M., Pavlichenko, D., Schleich, D., Lenz, C., Schwarz, M., Schreiber, M., Beul, M. and Behnke, S., “Autonomous Firefighting with a UAV-UGV Team at MBZIRC 2020,” In: 2021 International Conference on Unmanned Aircraft Systems (ICUAS) (IEEE, 2021) pp. 934–941.
Baca, T., Stepan, P., Spurny, V., Hert, D., Penicka, R., Saska, M., Thomas, J., Loianno, G. and Kumar, V., “Autonomous landing on a moving vehicle with an unmanned aerial vehicle,” J. Field Robot. 36(5), 874–891 (2019).
Beul, M., Nieuwenhuisen, M., Quenzel, J., Rosu, R. A., Horn, J., Pavlichenko, D., Houben, S. and Behnke, S., “Team NimbRo at MBZIRC 2017: Fast landing on a moving target and treasure hunting with a team of micro aerial vehicles,” J. Field Robot. 36(1), 204–229 (2019).
Jin, R., Owais, H. M., Lin, D., Song, T. and Yuan, Y., “Ellipse proposal and convolutional neural network discriminant for autonomous landing marker detection,” J. Field Robot. 36(1), 6–16 (2019).
Li, Z., Meng, C., Zhou, F., Ding, X., Wang, X., Zhang, H., Guo, P. and Meng, X., “Fast vision based autonomous detection of moving cooperative target for unmanned aerial vehicle landing,” J. Field Robot. 36(1), 34–48 (2019).
Kumar, A., Vohra, M., Prakash, R. and Behera, L., “Towards Deep Learning Assisted Autonomous UAVs for Manipulation Tasks in GPS-Denied Environments,” In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2020) pp. 1613–1620.
Paris, A., Lopez, B. T. and How, J. P., “Dynamic Landing of an Autonomous Quadrotor on a Moving Platform in Turbulent Wind Conditions,” In: 2020 IEEE International Conference on Robotics and Automation (ICRA) (IEEE, 2020) pp. 9577–9583.
Zhang, X., Fang, Y., Zhang, X., Jiang, J. and Chen, X., “A novel geometric hierarchical approach for dynamic visual servoing of quadrotors,” IEEE Trans. Ind. Electron. 67(5), 3840–3849 (2019).
Rodriguez-Ramos, A., Sampedro, C., Bavle, H., Puente, P. D. L. and Campoy, P., “A deep reinforcement learning strategy for UAV autonomous landing on a moving platform,” J. Intell. Robot. Syst. 93(1), 351–366 (2019).
Kooi, J. E. and Babuska, R., “Inclined Quadrotor Landing Using Deep Reinforcement Learning,” In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2021) pp. 2361–2368.
Paul, H., Miyazaki, R., Ladig, R. and Shimonomura, K., “Landing of a Multirotor Aerial Vehicle on an Uneven Surface Using Multiple On-Board Manipulators,” In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2019) pp. 1926–1933.
Meng, X., He, Y. and Han, J., “Survey on aerial manipulator: System, modeling, and control,” Robotica 38(7), 1288–1317 (2020).
Ollero, A., Heredia, G., Franchi, A., Antonelli, G., Kondak, K., Sanfeliu, A., Viguria, A., Martinez-de Dios, J. R., Pierri, F., Cortes, J., Santamaria-Navarro, A., Trujillo, M. A., Balachandran, R., Andrade-Cetto, J. and Rodriguez, A., “The AEROARMS project: Aerial robots with advanced manipulation capabilities for inspection and maintenance,” IEEE Robot. Autom. Mag. 25(4), 12–23 (2018).
Ryll, M., Muscio, G., Pierri, F., Cataldi, E., Antonelli, G., Caccavale, F., Bicego, D. and Franchi, A., “6D interaction control with aerial robots: The flying end-effector paradigm,” Int. J. Robot. Res. 38(9), 1045–1062 (2019).
Brunner, M., Giacomini, L., Siegwart, R. and Tognon, M., “Energy tank-based policies for robust aerial physical interaction with moving objects,” arXiv preprint arXiv:2202.06755 (2022).
Meng, X., He, Y., Li, Q., Gu, F., Yang, L., Yan, T. and Han, J., “Contact Force Control of an Aerial Manipulator in Pressing an Emergency Switch Process,” In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2018) pp. 2107–2113.
Meng, X., He, Y. and Han, J., “Design and Implementation of a Contact Aerial Manipulator System for Glass-Wall Inspection Tasks,” In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2019) pp. 215–220.
Faigl, J. and Vana, P., “Surveillance planning with Bezier curves,” IEEE Robot. Autom. Lett. 3(2), 750–757 (2018).
Zhang, X., Fang, Y., Zhang, X., Jiang, J. and Chen, X., “Dynamic image-based output feedback control for visual servoing of multirotors,” IEEE Trans. Ind. Inform. 16(12), 7624–7636 (2020).
Borowczyk, A., Nguyen, D. T., Phu-Van Nguyen, A., Nguyen, D. Q., Saussie, D. and Ny, J. L., “Autonomous landing of a multirotor micro air vehicle on a high velocity ground vehicle,” IFAC-PapersOnLine 50(1), 10488–10494 (2017).
Tomasi, C. and Manduchi, R., “Bilateral Filtering for Gray and Color Images,” In: Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271) (IEEE, 1998) pp. 839–846.
Suzuki, S., “Topological structural analysis of digitized binary images by border following,” Comput. Vis. Graph. Image Process. 30(1), 32–46 (1985).