Integration of YOLOv9 and a Fine-Tuned Segment Anything Model for Analog Gauge Component Recognition

Diah Asmawati
Chastine Fathichah

Abstract

Automating the reading of analog measuring instruments is a significant challenge in industrial environments, particularly because gauge images are often captured under varying lighting conditions and visual quality. This study proposes a hybrid computer-vision approach that integrates the YOLOv9 detection model with a Segment Anything Model (SAM) fine-tuned on a dataset of analog gauge images. YOLOv9 detects key component regions such as the needle and scale marks, producing bounding boxes that serve as prompt inputs for SAM. SAM then performs high-precision segmentation within those regions to separate the main objects from the image background. The training dataset consists of 1010 images with binary mask annotations. Evaluation shows that the fine-tuned SAM ViT-B model achieves a Precision of 0.81, a Recall of 0.946, and an IoU of 0.778, surpassing both the base SAM model and YOLOv9 Segmentation. Needle segmentation also improves markedly, reaching an IoU of 0.837. These results demonstrate the effectiveness of integrating transformer-based detection and segmentation to build a precise, efficient, and adaptive analog gauge component recognition system for automatic meter-reading applications in industry.
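As a minimal sketch (not taken from the paper's code), the Precision, Recall, and IoU figures reported above can be computed pixel-wise from a predicted binary mask and its ground-truth annotation; the function name `mask_metrics` and the toy 4×4 masks below are illustrative assumptions:

```python
import numpy as np

def mask_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-wise Precision, Recall, and IoU for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # pixels correctly marked foreground
    fp = np.logical_and(pred, ~gt).sum()   # predicted foreground, actually background
    fn = np.logical_and(~pred, gt).sum()   # missed foreground pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "iou": iou}

# Toy example: the prediction covers part of the ground-truth region.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 1, 0],
               [1, 1, 1, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
m = mask_metrics(pred, gt)
```

In the paper's pipeline these masks would come from SAM prompted with YOLOv9 bounding boxes, evaluated against the binary mask annotations of the 1010-image dataset.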

Article Details

How to Cite
Asmawati, D., & Fathichah, C. (2026). Integrasi YOLOv9 Dan Fine-Tuned Segment Anything Model Untuk Pengenalan Komponen Alat Pengukur Analog. TELKA - Telekomunikasi, Elektronika, Komputasi Dan Kontrol, 12(1), 55–65. https://doi.org/10.15575/telka.v12n1.55-65
Section
Articles
