Prof. Dr. Michal Haindl | Visual Texture Inpainting | Best Researcher Award

Institute of Information Theory and Automation of the Czech Academy of Sciences | Czech Republic

Prof. Dr. Michal Haindl is a leading Czech researcher recognized internationally for his extensive contributions to pattern recognition, texture analysis, material appearance modeling, and computational imaging, with a body of work spanning more than two decades and over 130 publications. As a pioneer in advanced texture modeling, he has developed influential methods such as Bidirectional Texture Function (BTF) models, multispectral and 3D causal random field texture representations, anisotropic BRDF modeling, and rotationally invariant textural features that have shaped modern approaches in computer vision.

His research addresses fundamental challenges in texture fidelity, similarity criteria, segmentation, scale and illumination invariance, and material recognition, providing robust frameworks widely applied in remote sensing, biomedical imaging, cultural heritage restoration, forestry classification, and disease detection. Prof. Dr. Haindl has also significantly advanced unsupervised learning and benchmarking for image segmentation, contributing datasets, evaluation metrics, and criteria that have become reference standards in the field.

His work on medical imaging, including mammogram enhancement, melanoma recognition, and disease survival modeling, reflects his interdisciplinary impact across health analytics and AI-driven diagnostic support. He has additionally contributed computational methods for evaluating physical and rendered materials, transfer learning for texture models, and structural detection in archeology. Through sustained innovation, extensive collaborations, and consistent publication in high-impact journals and conferences, Prof. Dr. Michal Haindl has established himself as a foundational figure in texture-based pattern recognition and material appearance research, continuously driving forward the scientific understanding and practical applications of computational vision.

Profiles: ORCID | Scopus

Featured Publications

  • Haindl, M., & Mikes, S. (2023). Optimal activation function for anisotropic BRDF modeling. Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), 1–9.

  • Mikes, S., & Haindl, M. (2022). Texture segmentation benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12), 1–15.

  • Vacha, P., & Haindl, M. (2023). Texture recognition under scale and illumination variations. Journal of Information and Telecommunication, 7(4), 1–14.

  • Remes, V., & Haindl, M. (2019). Bark recognition using novel rotationally invariant multispectral textural features. Pattern Recognition Letters, 128, 1–8.

  • Haindl, M. (2022). Bidirectional texture function modeling. In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging (pp. 1–15). Springer.

Dr. Dhruv Sharma | Computer Vision | Best Researcher Award

Amity University | India

Dr. Dhruv Sharma has made extensive contributions to the domains of artificial intelligence, deep learning, and multimodal systems through a wide range of impactful publications. His research encompasses visual data captioning, adaptive attention mechanisms, and transformer-based models that enhance image understanding and description generation. Notable works include Evolution of Visual Data Captioning Methods, Datasets, and Evaluation Metrics: A Comprehensive Survey; Automated Image Caption Generation Framework using Adaptive Attention and Bi-LSTM; and XGL-T Transformer Model for Intelligent Image Captioning, which collectively advance the field of vision-language integration.

His studies such as Lightweight Transformer with GRU Integrated Decoder for Image Captioning and Control With Style: Style Embedding-based Variational Autoencoder for Controlled Stylized Caption Generation Framework propose innovative architectures for stylistic and efficient captioning. In addition, he has developed frameworks such as FDT–Dr2T: A Unified Dense Radiology Report Generation Transformer Framework for X-ray Images and Unma-Capsumt: Unified and Multi-Head Attention-Driven Caption Summarization Transformer, highlighting his interest in medical AI and caption summarization.

His earlier works, including Memory-Based FIR Digital Filter using Modified OMS-LUT Design and Modified Efficient OMS LUT-Design for Memory-Based Multiplication, show his foundational expertise in signal processing and hardware-efficient algorithms. His contributions such as Obscenity Detection Transformer and DVRGNet also reflect his commitment to developing socially responsible AI for content moderation. Overall, Dr. Sharma’s scholarly output demonstrates a consistent trajectory from traditional signal processing to cutting-edge multimodal AI, bridging research innovation with practical applications in intelligent computing and human-centered artificial intelligence.

Profile: Google Scholar

Featured Publications

  • Sharma, D., Dhiman, C., & Kumar, D. (2023). Evolution of visual data captioning methods, datasets, and evaluation metrics: A comprehensive survey. Expert Systems with Applications, 221, 119773.

  • Sharma, D., Dhiman, C., & Kumar, D. (2024). XGL-T transformer model for intelligent image captioning. Multimedia Tools and Applications, 83(2), 4219–4240.

  • Sharma, D., Dhiman, C., & Kumar, D. (2024). Control with style: Style embedding-based variational autoencoder for controlled stylized caption generation framework. IEEE Transactions on Cognitive and Developmental Systems, 1–11.

  • Sharma, D., Dhiman, C., & Kumar, D. (2024). FDT–Dr2T: A unified dense radiology report generation transformer framework for X-ray images. Machine Vision and Applications, 35, 1–13.

  • Sharma, D., Dhiman, C., & Kumar, D. (2022). Automated image caption generation framework using adaptive attention and Bi-LSTM. In 2022 IEEE Delhi Section Conference (DELCON) (pp. 1–5). IEEE.