Formation of a Three-dimensional Relief Model Based on a 2D Image

Authors

DOI:

https://doi.org/10.31861/sisiot2025.2.02011

Keywords:

bas-relief, 3D modeling, depth estimation, neural networks, STL

Abstract

This paper introduces an automated pipeline for generating high-quality 3D bas-relief models directly from single 2D images. Our method replaces laborious manual height-map editing with ZoeDepth, a state-of-the-art neural network that outputs dense depth maps without camera calibration or manual annotations. Input photographs, whether of architectural facades, artwork reproductions, or industrial scenes, are first contrast-normalized and preprocessed prior to depth estimation. The raw depth output then undergoes metric-attractor correction, which refines depth-bin centers using multiple predicted attractor points per pixel to improve continuity and reduce quantization artifacts. A custom depth-aware triangulation algorithm subsequently converts the refined depth field into a surface mesh, with user-controlled parameters for real-world scale, spatial resolution, and triangulation density. The final mesh is exported as an STL file, enabling immediate compatibility with CAD software and 3D printers. The performance, robustness, and fidelity of the proposed pipeline were evaluated on a quad-core Intel i5 CPU across a variety of image domains. Depth inference for an 800 × 800 pixel image took approximately 120 seconds, while mesh generation and STL export took approximately 110 seconds; both times scale linearly with image resolution. Quantitative assessment yields a mean relative error below 7.7 % and threshold accuracy above 95.3 %, indicating that over 95 % of pixel depth estimates fall within 25 % of true values. Qualitative inspection confirms that the generated reliefs preserve critical geometric details and maintain surface smoothness, even on previously unseen inputs. Comparative analysis highlights significant reductions in manual effort and total modeling time versus traditional Blender-based sculpting workflows, without sacrificing mesh quality.
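The paper's own implementation (ZoeDepth inference, metric-attractor refinement, depth-aware triangulation) is not reproduced here. The following standard-library Python sketch illustrates only the general idea of the final pipeline stage described in the abstract: turning a height field into a triangle mesh and writing it out as STL. The helper names `depth_to_triangles` and `write_binary_stl` are hypothetical, and a uniform two-triangles-per-cell split stands in for the authors' depth-aware triangulation.

```python
import struct

def depth_to_triangles(depth, scale_xy=1.0, scale_z=1.0):
    """Convert a 2D depth grid (list of lists) into a list of triangles.

    Each 2x2 cell of the grid is split into two triangles, producing a
    simple height-field surface mesh suitable for STL export.
    """
    rows, cols = len(depth), len(depth[0])
    vertex = lambda r, c: (c * scale_xy, r * scale_xy, depth[r][c] * scale_z)
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            a, b = vertex(r, c), vertex(r, c + 1)
            d, e = vertex(r + 1, c), vertex(r + 1, c + 1)
            triangles.append((a, b, d))
            triangles.append((b, e, d))
    return triangles

def write_binary_stl(path, triangles):
    """Write triangles as binary STL: 80-byte header, facet count,
    then one 50-byte record per facet (normal, 3 vertices, attributes)."""
    with open(path, "wb") as f:
        f.write(b"\x00" * 80)                       # unused header
        f.write(struct.pack("<I", len(triangles)))  # facet count
        for tri in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))  # zero normal; most tools recompute it
            for vx, vy, vz in tri:
                f.write(struct.pack("<3f", vx, vy, vz))
            f.write(struct.pack("<H", 0))               # attribute byte count
```

For an H × W depth grid this produces 2 (H − 1) (W − 1) triangles, which is why mesh generation time scales linearly with image resolution, as reported in the abstract.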


Author Biographies

  • Vitalii Ariichuk, Yuriy Fedkovych Chernivtsi National University

    PhD student in Computer Engineering at the Department of Computer Systems and Networks, Yuriy Fedkovych Chernivtsi National University, Ukraine. Software Development Engineer. Interests and expertise: applied geometry, computational geometry, feature recognition, CAD systems. His work focuses on 3D bas-relief modeling with neural networks and Python.

  • Yuliya Tanasyuk, Yuriy Fedkovych Chernivtsi National University

    PhD, Associate Professor at the Department of Computer Systems and Networks, Institute of Physical, Technical and Computer Sciences, Yuriy Fedkovych Chernivtsi National University, Ukraine. Research interests and academic activities: programming, network information technologies, cybersecurity, IoT, software engineering.

References

M. M. Glybovets and O. V. Oletsky, Artificial intelligence: a textbook [for students of higher education]. Kyiv: KM Academy, 2002, 336 p.

N. B. Shakhovska, R. M. Kaminsky, and O. B. Vovk, Artificial intelligence systems: a textbook. Lviv: Lviv Polytechnic Publishing House, 2018, 392 p.

DaeEun Kim and Dosik Hwang, Eds., Intelligent Imaging and Analysis. Basel, Switzerland: MDPI, 2020, 492 p. [Online]. Available: https://mdpi.com/books/pdfview/book/2059. DOI: 10.3390/books978-3-03921-921-6.

L. Zhu and P. Spachos, “Towards Image Classification with Machine Learning Methodologies for Smartphones,” Machine Learning and Knowledge Extraction, vol. 1, no. 4, pp. 1039–1057, 2019.

S. D. Shtovba and V. V. Mazurenko, Intelligent technologies for identifying dependencies. Laboratory practical: electronic textbook. Vinnytsia: VNTU, 2014, 113 p.

R. Gonzalez and R. Woods, Digital Image Processing, 4th ed. New York: Pearson/Prentice Hall, 2018, 1192 p.

S. Krig, Computer Vision Metrics: Survey, Taxonomy, and Analysis. Apress Open, 2014, 498 p.

J. C. Russ, The Image Processing Handbook, 6th ed. Taylor & Francis Group, LLC, 2011, 853 p.

J. L. Schonberger and J. M. Frahm, “Structure-from-Motion Revisited,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 27–30 June 2016, pp. 4104–4113.

“COLMAP Documentation.” [Online]. Available: https://colmap.github.io/.

“OpenMVG Documentation.” [Online]. Available: https://github.com/openMVG/openMVG/wiki.

“OpenMVS Documentation.” [Online]. Available: https://github.com/cdcseacave/openMVS/wiki.

K. N. Kutulakos and S. M. Seitz, “A Theory of Shape by Space Carving,” International Journal of Computer Vision, vol. 38, no. 3, pp. 199–218, 2000.

“OpenCV Documentation.” [Online]. Available: https://docs.opencv.org/4.x/.

R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun, “Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer.” [Online]. Available: https://arxiv.org/pdf/1907.01341v3.

S. F. Bhat, R. Birkl, D. Wofk, P. Wonka, and M. Müller, “ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth.” [Online]. Available: https://arxiv.org/pdf/2302.12288.

Blender Foundation, Blender – Open Source 3D Creation Software. [Online]. Available: https://www.blender.org/.

“Python Documentation.” [Online]. Available: https://docs.python.org/3/.

“Depth Maps: How Software Encodes 3D Space.” [Online]. Available: https://blog.lookingglassfactory.com/depth-maps-how-software-encodes-3d-space/.

“Image to Lithophane.” [Online]. Available: https://3dp.rocks/lithophane/.

Published

2025-12-30

Issue

Section

Articles

How to Cite

[1]
V. Ariichuk and Y. Tanasyuk, “Formation of a Three-dimensional Relief Model Based on a 2D Image”, SISIOT, vol. 3, no. 2, p. 02011, Dec. 2025, doi: 10.31861/sisiot2025.2.02011.
