Monocular 3D Reconstruction in Poorly Visible Environments

Authors

  • Nivinya Samarutilake, University of Moratuwa
  • Tharusha Lekamge, University of Moratuwa
  • Thilina Ilesinghe, University of Moratuwa

Keywords

monocular 3D reconstruction, domain adaptation, poor visibility conditions, GAN

Abstract

3D reconstruction of real physical environments is a challenging task, often requiring depth sensors such as LiDAR or RGB-D cameras to capture the necessary depth information. However, such hardware is resource-intensive and expensive. To counter this problem, monocular 3D reconstruction has emerged as a research area of interest, leveraging deep learning techniques to reconstruct 3D environments from sequences of RGB images alone, thus reducing the need for specialized hardware. Existing research has primarily focused on well-lit environments, leaving a gap for environments with poor visibility. In response, we propose a solution that addresses this limitation by enhancing the visibility of images captured in poorly visible environments. The enhanced images are then used for 3D reconstruction, allowing more features to be extracted and producing a 3D mesh with improved visibility. Our solution employs a Generative Adversarial Network (GAN) to enhance the images and provides a complete pipeline from poorly visible input images to an output mesh file. Visualizing these mesh files, we observe that our solution improves the apparent lighting of the environment, resulting in a more detailed and readable 3D reconstruction.
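
The abstract describes a two-stage pipeline: a GAN generator first translates poorly visible RGB frames into enhanced frames, which are then handed to a monocular 3D reconstruction backend that produces the output mesh. Below is a minimal PyTorch sketch of the enhancement stage only; the generator architecture, tensor shapes, and function names here are illustrative assumptions, not the authors' implementation, which would use a trained GAN generator and a full reconstruction system.

import torch
import torch.nn as nn

class EnhancementGenerator(nn.Module):
    """Placeholder generator (hypothetical): maps a low-visibility RGB image
    to an enhanced RGB image of the same size. A trained CycleGAN/ForkGAN-style
    generator would take this role in the actual pipeline."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], a common GAN image convention
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def enhance_sequence(frames: torch.Tensor, generator: nn.Module) -> torch.Tensor:
    """Enhance a batch of RGB frames of shape (N, 3, H, W), scaled to [-1, 1]."""
    generator.eval()
    return generator(frames)

if __name__ == "__main__":
    gen = EnhancementGenerator()
    # In the real pipeline these would be consecutive frames of a poorly lit
    # scene; random tensors are used here only to exercise the code.
    low_visibility = torch.rand(4, 3, 256, 320) * 2 - 1
    enhanced = enhance_sequence(low_visibility, gen)
    print(enhanced.shape)  # (4, 3, 256, 320): ready for the reconstruction stage

The enhanced frames would then be fed to a monocular reconstruction backend that estimates depth and fuses it into the final 3D mesh.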

Link: https://www.ijrcom.org/download/issues/v3i1/IJRC31_04.pdf


Published

07/17/2024

How to Cite

Samarutilake, N., Lekamge, T., & Ilesinghe, T. (2024). Monocular 3D Reconstruction in Poorly Visible Environments. International Journal of Research in Computing, 3(1), 27–34. Retrieved from http://ijrcom.org/index.php/ijrc/article/view/134