[ Article ]
JOURNAL OF SENSOR SCIENCE AND TECHNOLOGY - Vol. 34, No. 3, pp.180-188
ISSN: 1225-5475 (Print) 2093-7563 (Online)
Print publication date 31 May 2025
Received 23 Apr 2025 Revised 28 Apr 2025 Accepted 03 May 2025
DOI: https://doi.org/10.46670/JSST.2025.34.3.180

High-Precision Underwater Image Sonar Mapping in Aquatic Structural Environment using LiDAR Odometry and Mapping for Safety Inspection

Sehwan Rho1 ; Bonchul Ku1 ; Byeongjin Kim2 ; Minsung Sung3 ; Son-Cheol Yu1, +
1Dept. of Convergence IT Engineering, Pohang University of Science and Technology (POSTECH), 37673, Pohang, South Korea
2Department of Industrial Machinery DX, Korea Institute of Machinery & Materials (KIMM), 34103, Daejeon, South Korea
3Fraunhofer Institute for Factory Operation and Automation IFF, 39106 Magdeburg, Germany

Correspondence to: + sncyu@postech.ac.kr

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License(https://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study proposes a method for underwater sonar image mapping in aquatic structural environments. Sonar is commonly used in autonomous maritime vehicles (AMVs) owing to the limitations of optical sensors; however, its low resolution and noise make pose estimation difficult, requiring support from global positioning system (GPS) or Doppler velocity log (DVL)-based navigation. GPS is not appropriate for all cases owing to jamming and interference in structural environments such as floating infrastructures, dams, and power plants, and underwater navigation sensors are vulnerable to shallow water, interference from underwater structures, and unstable signal processing. Consequently, for aquatic structural areas, we used the results of light detection and ranging (LiDAR) odometry and mapping on the water surface to improve the accuracy of underwater sonar mapping. Compared to using only sonar data, the proposed method obtains navigation data for AMVs easily and accurately. We verified the proposed method in field tests, and the results demonstrated high-precision sonar image mapping in a turbid, shallow underwater environment. The proposed LiDAR-based alignment enabled mosaic mapping with better continuity and fewer distortions, as well as more consistent localization trends in the field tests.

Keywords:

Underwater sonar image mapping, LiDAR odometry and mapping, Turbid environment, Aquatic structural environment

1. INTRODUCTION

Autonomous maritime vehicles (AMVs) perform various missions, including underwater mapping, underwater environment investigation, and underwater operations. For the safety and accuracy of these missions, an AMV must know its location in the work environment. One of the most widely used localization methods is simultaneous localization and mapping (SLAM) [1]. However, underwater SLAM remains challenging because underwater conditions are considerably harsher than those on land.

A typical problem in underwater localization is that positioning sensors such as the global positioning system (GPS) are unavailable below the surface. To compensate, underwater navigation sensors such as ultra-short baseline (USBL) and long baseline (LBL) systems have been used to collect navigation data from acoustic pings. These devices can operate in underwater environments, but they are vulnerable to multipath propagation, echoes, and refraction at water layers [2,3].

These sensor limitations affect not only positioning but also underwater visualization. AMVs can perceive their surroundings primarily using optical or acoustic vision. Because optical cameras produce high-resolution images, researchers have focused on mapping underwater environments using optical vision [4-6]. However, owing to water turbidity, low visibility, and saturation distortion, optical vision can only be used at short range in clear water. Consequently, acoustic vision has been used to replace optical vision in underwater mapping because acoustic signals travel longer distances even in turbid and dark conditions [7,8]. Studies have investigated object recognition and mapping using imaging sonars operating across different frequency bands [9-11].

However, despite its advantages in underwater environments, imaging sonar produces lower-resolution images than an optical camera, which degrades both visibility and the position estimation of AMVs. Multi-sensor systems have been proposed to improve sonar image quality and localization accuracy [12-17]. However, these methods are better suited to fast scanning of large areas than to shallow, narrow environments.

Moreover, the characteristics of underwater environments, like the aforementioned sensor constraints, adversely affect underwater SLAM. In contrast to land environments, where many landmarks exist and a robot easily recognizes features in its surroundings, underwater environments exhibit relatively monotonous and repetitive patterns. These differences cause AMVs to recognize distinct landmarks as identical or to fail to identify features in landmark-free areas, resulting in localization failure. Therefore, researchers have proposed scanning artificial underwater structures or deliberately installing known landmarks to solve these landmark problems [18-24].

In this study, we propose a method for performing high-precision underwater sonar image mapping using a water surface localization method in aquatic structural environments such as nearshore structural areas, dams, and power plants. We obtained location information using light detection and ranging (LiDAR), which provides high accuracy in water surface SLAM, as verified in previous studies [25-27]. Finally, by applying location information obtained on the water surface to underwater mapping, the proposed method allows AMVs to perform underwater mapping even in areas where sonar imaging is degraded owing to a lack of distinctive features or sonar-reflection issues and where underwater navigation sensors fail.

The paper is organized as follows. Section II describes the characteristics of underwater sonar imaging. Section III introduces a method to obtain location information through LiDAR and generate a sonar image mosaic map. Section IV presents the field test and its results. Finally, Section V presents the conclusion.


2. CHARACTERISTICS OF UNDERWATER SONAR IMAGING

The imaging sonar produces two-dimensional (2D) sonar images by insonifying the scene with multiple acoustic beams and measuring the intensity and time-of-flight (TOF) of the returned beams. The beams are spread in the azimuth direction (θ) and elevation direction (ϕ), as shown in Fig. 2, giving the imaging sonar a field of view (FOV) in the azimuth direction and a spreading angle in the elevation direction. Because the acoustic beams are spread in the azimuth direction, the imaging sonar generates fan-shaped images of the observed area after interpolation, as shown in Fig. 2, with the intensity of the generated images determined by the range value (r), the TOF, and the sound velocity.

Fig. 1.

Schematic view of proposed method.

Fig. 2.

Configuration of imaging sonar.

The three-dimensional point P3D of the observed area, given in spherical coordinates, is expressed in Cartesian coordinates as follows:

$$P_{3D} = \begin{bmatrix} X_{3D} \\ Y_{3D} \\ Z_{3D} \end{bmatrix} = \begin{bmatrix} r\cos\theta\cos\phi \\ r\sin\theta\cos\phi \\ r\sin\phi \end{bmatrix} \qquad (1)$$

While generating sonar images, the 3D points are projected onto the sonar image plane and converted to 2D points, P2D, as follows:

$$P_{2D} = \begin{bmatrix} X_{2D} \\ Y_{2D} \end{bmatrix} = \frac{1}{\cos\phi}\begin{bmatrix} X_{3D} \\ Y_{3D} \end{bmatrix} = \begin{bmatrix} r\cos\theta \\ r\sin\theta \end{bmatrix} \qquad (2)$$

Therefore, because the resulting sonar image lacks elevation information (ϕ), it is difficult to estimate the height using only a single sonar image, which might be detrimental to several underwater missions.
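
To make the projection in Eqs. (1) and (2) concrete, the following minimal Python sketch evaluates both equations for a single sonar return; the function and variable names are ours and are not taken from the paper.

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Eq. (1): convert a sonar return at range r [m], azimuth theta [rad],
    and elevation phi [rad] into Cartesian coordinates."""
    return np.array([
        r * np.cos(theta) * np.cos(phi),
        r * np.sin(theta) * np.cos(phi),
        r * np.sin(phi),
    ])

def project_to_sonar_image(p3d, phi):
    """Eq. (2): project the 3D point onto the 2D sonar image plane.
    The elevation angle phi is lost in the resulting image."""
    return p3d[:2] / np.cos(phi)  # equals [r*cos(theta), r*sin(theta)]

# Example: a return at 5 m range, 10 deg azimuth, 7 deg elevation
p3d = spherical_to_cartesian(5.0, np.deg2rad(10.0), np.deg2rad(7.0))
p2d = project_to_sonar_image(p3d, np.deg2rad(7.0))
print(p3d, p2d)  # the projected 2D point no longer carries height information
```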

The disadvantages of imaging sonar can be overcome in two ways: using navigation data from other sensors or using a height-estimation method based on sequential 2D sonar images. Previous studies showed that high-precision 3D reconstruction and mapping could be performed using the highlight regions of sonar images [18,31-33]. Furthermore, it was verified that image mosaicking using sequential 2D sonar images generated a higher-precision sonar image map than using navigation data [33]. That method registered successive sonar images with a frequency-analysis-based approach rather than a feature-based one because sonar images contain fewer features than optical images. However, mapping methods that rely solely on sonar images perform poorly when the scanned sonar images contain excessive noise. For example, when an imaging sonar scans a shallow-water region, such as a nearshore area that includes structures such as dam outlets or cooling-water intake facilities, the 2D sonar images are difficult to match because they lack effective regions, as shown in Fig. 3.
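
As an illustration of the frequency-analysis-based registration idea, the sketch below estimates a pure translation between two equally sized sonar frames using phase correlation; it is a simplified toy under our own assumptions, not the registration used in the cited work [33].

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the (dy, dx) translation aligning img_b to img_a using
    phase correlation, a frequency-analysis-based registration."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12     # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > shape / 2] -= shape[shift > shape / 2]  # wrap to signed offsets
    return shift

# Frames without effective regions (Fig. 3 (b)) produce a flat correlation
# surface, so the peak, and hence the estimated motion, becomes unreliable.
```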

Fig. 3.

Comparison of sonar images; (a) Valid image of object, (b) invalid image.


3. PROPOSED METHOD

In this study, we propose a sonar-based mapping method that uses localization information from LiDAR SLAM in an aquatic structural environment. The proposed method involves three main processes, as shown in Fig. 1: LiDAR SLAM on the water surface, transformation of the water surface localization data and underwater navigation data, and sonar image mapping using the processed localization data. Because transmitting and receiving underwater navigation data is difficult, obtaining position data at the water surface is more convenient. Therefore, in this study, we first performed localization and mapping in the water surface environment and then applied the results to underwater sonar image mapping.

3.1 Localization in water surface environments

We used LiDAR odometry and mapping on the water surface to scan landmarks such as nearshore structures, dams, and power plants. Several LiDAR SLAM methods have been investigated, with low-drift and real-time LiDAR odometry and mapping (LOAM) [28,29] being a representative example. LOAM extracts edge and planar features from LiDAR point clouds, estimates odometry by matching these features, and builds a map; however, it has a long computation time and reduced performance on resource-limited platforms. To address these issues, LeGO-LOAM [30] was proposed, which increases efficiency by segmenting ground points prior to feature extraction and employs a two-step Levenberg–Marquardt optimization to reduce the computation time of the 6D transformation.
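
For intuition only, the sketch below labels points on a single LiDAR scan ring as edge or planar features using a LOAM-style smoothness measure; the window size and thresholds are arbitrary assumptions and do not reproduce the exact criteria of LOAM or LeGO-LOAM.

```python
import numpy as np

def label_scan_features(ranges, k=5, edge_thresh=0.2, plane_thresh=0.005):
    """Toy LOAM-style feature labeling for one scan ring.
    ranges: 1D array of range measurements ordered by azimuth."""
    n = len(ranges)
    labels = np.full(n, "none", dtype=object)
    for i in range(k, n - k):
        window = ranges[i - k:i + k + 1]
        # Smoothness: deviation of the point from its local neighborhood
        c = abs(window.sum() - (2 * k + 1) * ranges[i]) / (2 * k * abs(ranges[i]) + 1e-9)
        if c > edge_thresh:
            labels[i] = "edge"    # sharp discontinuity, e.g., a structure corner
        elif c < plane_thresh:
            labels[i] = "planar"  # smooth surface, e.g., a quay wall
    return labels
```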

We used LeGO-LOAM in the proposed method because of its short computation time, which is important for localizing robots operating in field environments. Although the proposed method targets water-surface and underwater areas that include structures, whereas the algorithm is ground-optimized, the extraction of planar features is not significantly affected by this change of target area.

3.2 Application of LiDAR SLAM data for high-precision underwater image sonar mapping

To address the limitations of sonar-only underwater image mapping, we propose a method that uses localization data acquired from LiDAR odometry and mapping on the water surface. We extract 3D position data (x, y, z) from the LiDAR odometry and mapping result, integrate them with navigation data from the pressure sensor and Doppler velocity log (DVL), and generate optimized localization information for sonar image mapping.

To address the sensor synchronization issue, we used the imaging sonar timestamp as a synchronization reference because it operates at the lowest frequency among the sensors. To ensure temporal alignment, LiDAR and DVL system data closest in time to each sonar frame were selected and stored.
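
A minimal sketch of this nearest-timestamp selection is shown below, assuming each sensor stream is available as sorted arrays of timestamps and samples; the names are ours, not from the paper.

```python
import numpy as np

def nearest_samples(sonar_times, sensor_times, sensor_values):
    """For each sonar frame timestamp (the lowest-rate stream, used as the
    reference), pick the LiDAR or DVL-system sample closest in time."""
    idx = np.searchsorted(sensor_times, sonar_times)   # first sample at or after each sonar time
    idx = np.clip(idx, 1, len(sensor_times) - 1)
    take_left = (sonar_times - sensor_times[idx - 1]) < (sensor_times[idx] - sonar_times)
    idx = idx - take_left.astype(int)
    return sensor_values[idx]

# Example usage: lidar_xy_at_sonar = nearest_samples(t_sonar, t_lidar, lidar_xy)
```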

Considering that the proposed method's main purpose is underwater mapping, applying the entire LiDAR odometry and mapping results is inappropriate. For example, z data from LiDAR indicate the range in the z-direction from LiDAR to the water surface, which is not required for sonar image mapping. Therefore, we used x and y data in the Cartesian coordinates from the LiDAR mapping result, depth data from the pressure sensor, and roll, pitch, and yaw data from the DVL data to generate localization data for the proposed method.

Assuming that the DVL and pressure sensor share the same x- and y-coordinate frames because they are located at the same position, the two sensors can be regarded as one system, that is, the "DVL system." When integrating the two data streams, that is, data from the LiDAR and the DVL system, we transform the LiDAR mapping data to the expected DVL-system data at the corresponding time.

Fig. 4 (a) shows a simple model of the AMV used for converting the LiDAR mapping data, and the coordinate frame of each sensor is shown in Fig. 4 (b). S_n is the position vector from the DVL system to the LiDAR at the nth recording, expressed in the global coordinate frame. Therefore, the initial vector S_0 represents the offset between the two sensors prior to AMV operation, and the vector relationship can be expressed as follows:

Fig. 4.

Coordinates of the sensors' positions; (a) Modeling and coordinate frame of the Autonomous Maritime Vehicle (AMV), (b) Coordinate frames of the light detection and ranging (LiDAR) sensor and Doppler velocity log (DVL).

$$S_n = P_{LiDAR} - P_{DVL} = \begin{bmatrix} x_{LiDAR,n} - x_{DVL,n} \\ y_{LiDAR,n} - y_{DVL,n} \\ z_{LiDAR,n} - z_{DVL,n} \end{bmatrix} = R_n S_0 = R_n \begin{bmatrix} x_{LiDAR,0} - x_{DVL,0} \\ y_{LiDAR,0} - y_{DVL,0} \\ z_{LiDAR,0} - z_{DVL,0} \end{bmatrix} \qquad (3)$$

where R_n is the 3D rotation matrix determined by the roll, pitch, and yaw angles of the AMV measured by the DVL.
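
The following sketch implements Eq. (3), assuming a Z-Y-X (yaw-pitch-roll) Euler convention for R_n, which the paper does not state explicitly; the function and variable names are ours.

```python
import numpy as np

def rotation_rpy(roll, pitch, yaw):
    """Rotation matrix R_n from the roll, pitch, and yaw reported by the DVL
    (Z-Y-X Euler convention assumed here)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def expected_dvl_position(p_lidar_n, s0, roll, pitch, yaw):
    """Eq. (3): S_n = R_n S_0, so the expected DVL-system position equals the
    LiDAR position minus the rotated initial lever arm S_0."""
    return p_lidar_n - rotation_rpy(roll, pitch, yaw) @ s0
```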

Because the proposed method targets aquatic structural environments or nearshore shallow water with artificial structures rather than the open sea, we assume that wave-induced disturbance is not large enough to affect the control of the AMV. Therefore, the AMV is expected to experience limited roll and pitch variation, and the data are aligned for sonar image mapping using the rotation transformation in Eq. (3).

Provided that the roll, pitch, and yaw angles from the DVL and the initial position offset between the two sensors are known, the expected DVL-system data x_DVL,n, y_DVL,n, and z_DVL,n can be calculated. Finally, using the x, y, and heading information from the integrated transformation data, a high-precision 2D mosaic sonar map can be generated.
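
To show how the fused x, y, and heading values can drive mosaicking, the sketch below projects the pixels of one sonar frame (expressed in metres in the sonar frame) into a global 2D grid; the grid layout and the maximum-intensity fusion rule are our assumptions and not details taken from the paper.

```python
import numpy as np

def splat_sonar_frame(mosaic, resolution, origin, pixels_xy, intensities, pose):
    """Place one sonar frame into the global mosaic grid.
    mosaic: 2D array (rows = y cells, cols = x cells)
    resolution: cell size [m]; origin: world coordinates (x0, y0) of cell (0, 0)
    pixels_xy: (N, 2) pixel positions in the sonar frame [m]
    pose: fused (x, y, heading) of the sonar for this frame."""
    x, y, heading = pose
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])
    world = pixels_xy @ R.T + np.array([x, y])            # rotate, then translate
    cells = np.floor((world - np.array(origin)) / resolution).astype(int)
    inside = (cells[:, 0] >= 0) & (cells[:, 0] < mosaic.shape[1]) & \
             (cells[:, 1] >= 0) & (cells[:, 1] < mosaic.shape[0])
    rows, cols = cells[inside, 1], cells[inside, 0]
    # Keep the strongest return where successive frames overlap
    np.maximum.at(mosaic, (rows, cols), intensities[inside])
    return mosaic
```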


4. EXPERIMENT & RESULT

4.1 Experimental setup

We used a hovering-type AMV, Cyclops, developed at Pohang University of Science and Technology (POSTECH) [35], as shown in Fig. 5 (a). Using eight attached thrusters, this AMV can move in the surge and sway directions along a lawnmower trajectory without changing its yaw angle, and it can perform several underwater missions with sensors such as a DVL, pressure sensor, fiber-optic gyro, and imaging sonar. Furthermore, we installed a LiDAR measurement system on the upper part of Cyclops. The specifications of Cyclops are listed in Table 1.

Fig. 5.

AMV based sea-trial; (a) Hovering type AMV “Cyclops”, (b) Field test operation, (c) Field test environment.


The LiDAR has a maximum measurement range of 240 m with a false-positive rate of 1/10,000, specified under conditions of Lambertian reflectivity, sunlight, and detection probability. It has a vertical FOV of 22.5° (±11.25°) with 64 channels, a horizontal FOV of 360° with a horizontal resolution of 1024, and a scan rate of 10 Hz.

4.2 Field test

We conducted a field test in the nearshore area of Janggil Bay, Pohang, Republic of Korea. Cyclops scanned a 14 × 17 m seabed area along a lawnmower trajectory, with depths ranging from 0.7 to 1.9 m and an average of 1.5 m. Cyclops is generally fully submerged for underwater missions; however, in this study, it was operated with the LiDAR system partially above the water surface while the rest of the vehicle remained submerged, as shown in Fig. 5 (b), because the proposed method requires scanning aquatic facilities with LiDAR as well as scanning the seabed using imaging sonar.

Seven artificial landmarks were used for underwater scanning with the imaging sonar: an aluminum cylinder, a tire, a sphere-shaped plastic container, three designed landmarks made of bricks (arrow, C, and F shapes), and a red toy car. During the experiment, we controlled the speed of Cyclops to maintain adequate overlap between sonar images. The detailed specifications of the imaging sonar and LiDAR used in the experiment are listed in Table 2.


The nearshore environment with various artificial structures scanned by LiDAR in the field test is shown in Fig. 5 (c). LiDAR data were acquired using an Nvidia Jetson TX2, and the LeGO-LOAM-based processing of the proposed method was implemented in C++ on a laptop with an i7-6900K. Both processes were executed using the robot operating system (ROS) on Ubuntu 18.04 Linux.

4.3 Experimental results

In total, the sonar images and DVL system data recorded in the experiment contained 21,638 frames, and the trajectory data acquired from LeGO-LOAM contained 535 points. After excluding excessively overlapping frames and trajectory points with little movement, 409 datasets were used.

The point cloud mapping and trajectory results of LiDAR odometry and mapping are shown in Fig. 6. Furthermore, we compared the results obtained with different ground-estimation settings in the LeGO-LOAM algorithm, as shown in Fig. 7. The groundScan factor in Fig. 7 indicates the approximate number of scans considered to represent the ground, and we set this factor to 1, 10, and 15 (default). The resulting trajectories showed average errors in the x and y coordinates of approximately 7 cm, indicating that applying a ground-based LiDAR odometry and mapping algorithm to the water surface environment can provide sufficient accuracy when adequate landmarks are available.
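
The paper does not state exactly how the average x and y errors were computed; one plausible comparison, shown here purely for illustration, associates each point of one trajectory with its nearest neighbor in the other and averages the per-axis differences.

```python
import numpy as np

def mean_xy_error(traj_ref, traj_test):
    """Mean absolute per-axis error between two 2D trajectories (N x 2 arrays)
    after nearest-point association."""
    errors = []
    for p in traj_test:
        d = np.linalg.norm(traj_ref - p, axis=1)
        q = traj_ref[np.argmin(d)]
        errors.append(np.abs(p - q))
    return np.mean(errors, axis=0)  # (mean |dx|, mean |dy|) in metres

# Example: mean_xy_error(traj_groundscan15, traj_groundscan1) for the runs in Fig. 7
```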

Fig. 6.

Point cloud mapping with LiDAR and trajectory. The upper right shows a satellite map [34] for comparison.

Fig. 7.

Trajectory comparison of LiDAR mapping based on the ground-estimation factor.

Based on the processed localization data, we generated sonar image maps and trajectories for the three cases to compare, as shown in Figs. 8 and 11. Fig. 9 shows detailed sonar mosaic images of all landmarks used in the experiment, generated using the proposed method and arranged alphabetically according to the scanning sequence of the experiments shown in Fig. 8 (b) and (c). In comparison with the other cases, using sonar images alone made it impossible to generate a sonar image map because image mosaicking collapsed onto the same location owing to the several shallow and monotonously patterned areas in the experiment. Because the sonar-image-only mapping method generates a trajectory only after matching sequential sonar images, its trajectory was less accurate than those of the other cases.

Fig. 8.

Comparison of 2D mosaic sonar maps of the nearshore seabed area (descriptions of the objects labeled a–g are provided in Fig. 9): (a) Sonar images only, (b) DVL data only, (c) Proposed method.

Fig. 9.

Detailed sonar mosaic images of landmarks through proposed method: (a) Aluminum Cylinder, (b) Arrow shape, (c) Sphere-shaped plastic container, (d) F-shape, (e) C-shape, (f) Toy car, (g) Tire.

The trajectory and mapping results obtained using only DVL system data were similar to those of the proposed method. However, a significant error occurred in the middle section when only DVL system data were used. This difference shows that the DVL system alone may malfunction in specific situations, such as drift currents, communication errors, and the accumulation of calculation errors during navigation, whereas LiDAR can collect precise data. Comparing the two cases, the first four installed landmarks, namely an aluminum cylinder, arrow-shaped and F-shaped artificial landmarks, and a sphere-shaped plastic container, showed similar results. However, the results for the other three landmarks, a C-shaped artificial landmark, a toy car, and a tire, showed clear differences between the two cases. Owing to trajectory errors in the DVL-only method, images were projected onto incorrect locations even though the corresponding frames did not actually overlap. Therefore, the DVL-only method generated partially interfered mosaic images of the C-shaped artificial landmark and the toy car, whereas the proposed method generated accurate results consistent with the recorded sonar images, as shown in Fig. 10 (a) and (b). In the case of the tire, only a partial image was obtained because it was located slightly outside the planned scanning trajectory. The proposed method generated a mosaic sonar map in which the observed part of the tire could be found, as shown in Fig. 9 (g). In contrast, with the DVL-only method, the tire could not be found in the mosaic sonar map, indicating a clear difference between the two methods, as shown in Fig. 10 (c).

Fig. 10.

Comparison of detailed results for the landmarks that showed clear differences between the two methods; (a) C-shaped landmark, (b) Toy car, (c) Tire.

Fig. 11.

Trajectory comparison of sonar image mapping.


5. CONCLUSIONS

We proposed a method for high-precision imaging sonar mapping in aquatic environments with artificial structures, based on LiDAR odometry and mapping on the water surface. The LiDAR trajectory was combined with DVL system data to produce optimized localization data for underwater sonar image mapping. Field tests verified that the proposed method can generate a mosaic sonar image map with higher precision than using the sonar sensor or the DVL system alone. Furthermore, the method may be less affected by drift currents and the errors of underwater navigation sensors.

Future research can improve AMV localization and mapping by developing systems that generate accurate 3D maps of underwater structures using high-precision localization. These maps can be used to detect and visualize defects such as cracks, corrosion, or damage on a structural model, allowing more accurate inspection and management. Furthermore, the method can be extended to more challenging environments, such as those with sparse landmarks or floating debris on the water surface that may interfere with LiDAR scans. Considering that the validation of the proposed method was limited by practical constraints, future research can also focus on more robust evaluation methods. With such capabilities, autonomous robots are expected to be deployed in confined or extreme environments with limited human access.

Acknowledgments

This study was supported by KOREA HYDRO & NUCLEAR POWER CO., LTD (No. 2024-로봇기술-001).

References

  • H. Durrant-Whyte, T. Bailey, Simultaneous localization and mapping: Part I, IEEE Robot. Autom. Mag. 13 (2006) 99– 110. [https://doi.org/10.1109/MRA.2006.1638022]
  • K. Köser, U. Frese, Challenges in underwater visual navigation and SLAM, In: Y. Petillot (Ed.), AI Technology for Underwater Robots, Springer, Berlin, 2019, pp. 125–135. [https://doi.org/10.1007/978-3-030-30683-0_11]
  • W.L. Zhao, T. He, A.Y.M. Sani, T.T. Yao, Review of SLAM techniques for autonomous underwater vehicles, Proceedings of the 2019 Int. Conf. Robot., Intell. Control Artif. Intell., Beijing, China, 2019, pp. 384–389. [https://doi.org/10.1145/3366194.3366262]
  • R. García, J. Batlle, X. Cufí, J. Amat, Positioning an underwater vehicle through image mosaicking, Proceedings of the 2001 ICRA. IEEE Int. Conf. Robot. Autom., Seoul, Korea, 2001, pp. 2779–2784. [https://doi.org/10.1109/ROBOT.2001.933043]
  • S. Hong, J. Kim, Three-dimensional visual mapping of underwater ship-hull surface using piecewise-planar SLAM, Int. J. Control Autom. Syst. 18 (2020) 564–574. [https://doi.org/10.1007/s12555-019-0646-8]
  • A. Kim, R.M. Eustice, Real-time visual SLAM for autonomous underwater hull inspection using visual saliency, IEEE Trans. Robot. 29 (2013) 719–733. [https://doi.org/10.1109/TRO.2012.2235699]
  • S. Yu, T. Kim, A. Asada, S. Weatherwax, B. Collins, J. Yuh, Development of high-resolution acoustic-camera-based real-time object-recognition system by AUVs, Proceedings of the OCEANS 2006, Boston, USA, 2006, pp. 1–6. [https://doi.org/10.1109/OCEANS.2006.307011]
  • M. Sung, J. Kim, M. Lee, B. Kim, T. Kim, J. Kim, et al., Realistic sonar image simulation using deep learning for underwater object detection, Int. J. Control Autom. Syst. 18 (2020) 523–534. [https://doi.org/10.1007/s12555-019-0691-3]
  • S. Reed, I.T. Ruiz, C. Capus, Y. Petillot, The fusion of large-scale classified side-scan sonar image mosaics, IEEE Trans. Image Process. 15 (2006) 2049–2060. [https://doi.org/10.1109/TIP.2006.873448]
  • M. Ho, S. El-Borgi, D. Patil, G. Song, Inspection and monitoring systems of subsea pipelines: A review, Struct. Health Monit. 19 (2019) 606–645. [https://doi.org/10.1177/1475921719837718]
  • P.V. Teixeira, F.S. Hover, J.J. Leonard, M. Kaess, Multibeam data processing for underwater mapping, Proceedings of the 2018 IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Madrid, Spain, 2018, pp. 1877–1884. [https://doi.org/10.1109/IROS.2018.8594128]
  • Y. Yeu, J.-J. Yee, H. Yun, K. Kim, Evaluation of the accuracy of bathymetry on the nearshore coastlines of western Korea from satellite altimetry, multi-beam, and airborne bathymetric LiDAR, Sensors 18 (2018) 2926. [https://doi.org/10.3390/s18092926]
  • J.D. Do, J.-Y. Jin, C.H. Kim, W.-H. Kim, B.-G. Lee, G.J. Wie, et al., Measurement of nearshore seabed bathymetry using airborne/mobile LiDAR and multibeam sonar at Hujeong Beach, Korea, J. Coast. Res. 95 (2020) 1067–1071. [https://doi.org/10.2112/SI95-208.1]
  • G. Popescu, D. Iordan, An overall view of LiDAR and sonar systems used in geomatics applications for hydrology, Proceedings of Int. Conf. Agric. Life Life Agric., Bucharest, Romania, 2018, pp. 174–181.
  • D.F. Campos, A. Matos, A.M. Pinto, Multi-domain inspection of offshore wind farms using an autonomous surface vehicle, SN Appl. Sci. 3 (2021) 455. [https://doi.org/10.1007/s42452-021-04451-5]
  • A. Stateczny, M. Wlodarczyk-Sielicka, D. Gronska, W. Motyl, Multibeam echosounder and LiDAR in process of 360-degree numerical map production for restricted waters with HydroDron, Proceedings of the Baltic Geodetic Congr. (BGC Geomatics), Olsztyn, Poland, 2018, pp. 1–6. [https://doi.org/10.1109/BGC-Geomatics.2018.00061]
  • J. Han, J. Kim, Three-dimensional reconstruction of a marine floating structure with an unmanned surface vessel, IEEE J. Ocean. Eng. 44 (2019) 984–996. [https://doi.org/10.1109/JOE.2018.2862618]
  • H. Joe, J. Kim, S.-C. Yu, 3-D reconstruction using two sonar devices in a Monte-Carlo approach for AUV application, Int. J. Control Autom. Syst. 18 (2020) 587–596. [https://doi.org/10.1007/s12555-019-0692-2]
  • H. Lin, H. Zhang, Y. Li, H. Wang, J. Li, S. Wang, 3-D point-cloud capture method for underwater structures in a turbid environment, Meas. Sci. Technol. 32 (2020) 025106. [https://doi.org/10.1088/1361-6501/abba4a]
  • T. Guerneve, K. Subr, Y. Petillot, Underwater 3-D structures as semantic landmarks in sonar mapping, Proceedings of the 2017 IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Vancouver, Canada, 2017, pp. 614–619. [https://doi.org/10.1109/IROS.2017.8202215]
  • A. Cardaillac, M. Ludvigsen, Camera-sonar combination for improved underwater localization and mapping, IEEE Access 11 (2023) 123070–123079. [https://doi.org/10.1109/ACCESS.2023.3329834]
  • H. Du, K. Wang, Z. Liu, G. Li, H. Jing, Sea trial experiment of AUV obstacle recognition and localization based on multi-beam sonar, Proceedings of the 2024 4th Int. Conf. Electron. Inf. Eng. Comput. Commun. (EIECC), Wuhan, China, 2024, pp. 409–413. [https://doi.org/10.1109/EIECC64539.2024.10929249]
  • B. Kim, J. Kim, H. Cho, J. Kim, S.-C. Yu, AUV-based multi-view scanning method for 3-D reconstruction of underwater objects using forward-scan sonar, IEEE Sens. J. 20 (2020) 1592–1606. [https://doi.org/10.1109/JSEN.2019.2946587]
  • J. Lee, K. Lee, Y. Myung, LiDAR-aided SLAM in underground and GPS-denied environments: A hybrid approach using vision and ranging sensors, IEEE Access 11 (2023) 74532–74544.
  • J. Han, Y. Cho, J. Kim, Coastal SLAM with marine radar for USV operation in GPS-restricted situations, IEEE J. Ocean. Eng. 44 (2019) 300–309. [https://doi.org/10.1109/JOE.2018.2883887]
  • J. Villa, J. Aaltonen, K.T. Koskinen, Path-following with LiDAR-based obstacle avoidance of an unmanned surface vehicle in harbor conditions, IEEE/ASME Trans. Mechatron. 25 (2020) 1812–1820. [https://doi.org/10.1109/TMECH.2020.2997970]
  • D. Thompson, E. Coyle, J. Brown, Efficient LiDAR-based object segmentation and mapping for maritime environments, IEEE J. Ocean. Eng. 44 (2019) 352–362. [https://doi.org/10.1109/JOE.2019.2898762]
  • J. Zhang, S. Singh, LOAM: Lidar odometry and mapping in real time, Proceedings of Robot. Sci. Syst. (RSS'14), Berkeley, USA, 2014, 1–9. [https://doi.org/10.15607/RSS.2014.X.007]
  • J. Zhang, S. Singh, Low-drift and real-time lidar odometry and mapping, Auton. Robots 41 (2016) 401–416. [https://doi.org/10.1007/s10514-016-9548-2]
  • T. Shan, B. Englot, LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain, Proceedings of the 2018 IEEE/RSJ Int. Conf. Intell. Robots Syst., Madrid, Spain, 2018, pp. 4758–4765. [https://doi.org/10.1109/IROS.2018.8594299]
  • H. Cho, B. Kim, S.-C. Yu, AUV-based underwater 3-D point-cloud generation using acoustic-lens-based multibeam sonar, IEEE J. Ocean. Eng. 43 (2018) 856–872. [https://doi.org/10.1109/JOE.2017.2751139]
  • H. Joe, H. Cho, B. Kim, J. Pyo, S.-C. Yu, Profiling and imaging sonar fusion-based 3-D normal-distribution transform mapping for AUV application, Proceedings of the 2018 OCEANS-MTS/IEEE Kobe Techno-Oceans (OTO), Kobe, Japan, 2018, pp. 1–6. [https://doi.org/10.1109/OCEANSKOBE.2018.8559337]
  • B. Kim, H. Joe, S.-C. Yu, High-precision underwater 3-D mapping using imaging sonar for navigation of autonomous underwater vehicle, Int. J. Control Autom. Syst. 19 (2021) 3199–3208. [https://doi.org/10.1007/s12555-020-0581-8]
  • https://map.naver.com/, (accessed 12 May 2025).
  • J. Pyo, H. Cho, H. Joe, T. Ura, S.-C. Yu, Development of hovering-type AUV "Cyclops" and its performance evaluation using image mosaicing, Ocean Eng. 109 (2015) 517–530. [https://doi.org/10.1016/j.oceaneng.2015.09.023]


Table 1.

Specification of hovering-type AMV CYCLOPS in this study

Weight 220 kg in air
Dimensions 900 mm × 1500 mm × 900 mm
Depth rating 100 m
Propulsion 8 thrusters (475 W)
Max. speed 2 knots
Power source 24 VDC (600 Wh with 2 Li-Po batteries)
Computer system PC-104 × 2 (Windows), Jetson TX2 (Linux)
Sensors Imaging sonar (1.1 MHz/1.8 MHz)
Digital pressure transducer
Doppler velocity log (1.2 MHz)
Strobe light
Fiber-optic gyro
3D LiDAR (OS2-64)

Table 2.

Specifications of imaging sonar (DIDSON) used in experiment

Operating frequency 1.1/1.8 MHz
Number of acoustic beams 96
Tilt angle 30°
Spreading angle 14°
Range 0.5–10 m
Frame rate 5–20 fps