Assoc. Prof. Dr. Sungeun Suh | Human–Computer Interaction | Research Excellence Award

Gachon University | South Korea

Sungeun Suh is a leading researcher and educator in fashion design, recognized for her innovative integration of digital technologies, artificial intelligence, and sustainable practices into contemporary and traditional fashion contexts. Her scholarship focuses on the intersection of technology, culture, and design, exploring how digital platforms such as Instagram and YouTube influence personal style, fashion consumption, and cultural representation. She has conducted extensive research using big data and text mining to decode global and Korean fashion trends, and has applied generative AI to enhance creativity in textile and fashion design, including the development of customized designs based on traditional Korean patterns.

Suh’s work emphasizes sustainability, addressing circular fashion, the eco-conscious practices of Generation Z, and the creation of performance-oriented ‘New Hanbok’ using traditional craft techniques such as Najeon and Hanji, bridging heritage with contemporary design innovation. Her research also investigates historical and subcultural fashion phenomena, from 1970s–1980s casual subculture and British soccer hooliganism to Yosemite rock-climbing communities, highlighting the sociocultural significance of fashion across contexts. She has published widely in top-tier journals including Sustainability, Fashion and Textiles, Journal of Fashion Design, and Humanities and Social Sciences Communications, reflecting her interdisciplinary approach spanning design, technology, and cultural studies.

Beyond academia, Suh has actively translated her research into applied projects, including AI-based fashion pattern datasets (K-Deep Fashion), smart fashion product development supported by the Korean Ministry of Science and ICT, and practical design contributions such as the Pyeongchang Olympic private security uniforms and government facility uniforms.
She has been recognized with multiple awards for excellence in research and presentation, including Best Reviewer Awards and numerous Awards of Excellence for poster and oral presentations at national and international conferences. Suh has also participated in prominent exhibitions, including the International Fashion Art Biennale in Busan (2022) and KSFD Fashion Exhibition (2021), demonstrating her ability to engage both scholarly and public audiences. Overall, her work represents a unique convergence of fashion, technology, and sustainability, positioning her as a leading figure in contemporary fashion research and education, whose contributions consistently bridge theoretical insight with practical, industry-relevant innovation.

Profile: Google Scholar

Featured Publications

Lee, N., & Suh, S. (2025). Decoding Korean men’s fashion trends: A text mining analysis of YouTube content. Humanities and Social Sciences Communications, 12(1), 1–17.

Lee, N., & Suh, S. (2024). How does digital technology inspire global fashion design trends? Big data analysis on design elements. Applied Sciences, 14(13), 5693.

Jung, D., & Suh, S. (2024). Enhancing soft skills through generative AI in sustainable fashion textile design education. Sustainability, 16(16), 6973.

Lee, J., & Suh, S. (2024). AI technology integrated education model for empowering fashion design ideation. Sustainability, 16(17), 7262.

Mr. Zeyu Peng | Human–Computer Interaction | Research Excellence Award

Migu Culture Technology Co., Ltd. | China

Zeyu Peng is an accomplished Graphics AI Algorithm Engineer based in Shenzhen, Guangdong, with a strong academic foundation in Mathematics and Applied Mathematics. He earned an MSc from Wuhan University (2015–2018), specializing in optimization theory, algorithms, and applications, and a BSc from Central South University (2011–2015). Professionally, Zeyu has extensive experience in AI-driven graphics and operations research: he has been a Graphics AI Algorithm Engineer at Migu Culture Technology since 2021, and previously served as an Algorithm Engineer at the China Southern Airlines IT Research Institute (2018–2021).

His notable projects include developing a speech-to-3D facial animation generation engine, creating multi-style performance rule sets, and implementing both combinatorial-optimization and diffusion-based facial animation synthesis frameworks for digital humans. He has also engineered a co-speech gesture generation system that leverages language models, inverse kinematics, and diffusion models to produce contextually appropriate gestures synchronized with speech. Additionally, Zeyu has worked on FlowMatch- and FastPitch-based AI voice conversion models for speech and singing applications, and has contributed to large-scale aviation optimization projects such as flight pairing and crew scheduling, involving graph-based algorithms, branch-and-bound frameworks, and operations research algorithm libraries. He has developed a facial detection SDK based on MTCNN for autonomous airport check-in systems, as well as a luggage damage detection system using object detection and classification techniques. Zeyu’s technical expertise spans C++, Python, and Rust, along with frameworks such as PyTorch and ONNX, complemented by proficiency in LaTeX, Maya, and Blender and certifications including CET-6 and TOEFL.
His research contributions are evidenced by publications in high-impact venues such as The Journal of Supercomputing and CGI 2025, along with multiple patents in facial animation synthesis, video generation, and digital human animation technologies. Overall, Zeyu demonstrates a rare combination of deep theoretical knowledge, practical algorithmic implementation, and impactful contributions to AI-driven virtual reality, digital human animation, and optimization systems, establishing him as a leading engineer and innovator in graphics AI and virtual human technologies.

Profile: Google Scholar

Featured Publications

Peng, Z., Ma, D., & Lv, X. (2025, July). A two-stage co-speech gesture generation method. Oral presentation at CGI 2025.

Peng, Z., Li, H., & Wang, S. (2025, April). FaceImitate: Speech-driven 3D facial animation synthesis from imitation. The Journal of Supercomputing.

Peng, Z., & Wang, S. (2024, October 11). A video synthesis method, device, storage medium, and program product based on a classification model [Patent No. CN118764575A]. Migu Culture Technology Co., Ltd.; China Mobile Communications Group Co., Ltd.

Peng, Z., Wang, S., & Li, H. (2024, July 9). Facial motion prediction methods, devices, media, and computer program products [Patent No. CN118314614A]. Migu Culture Technology Co., Ltd.; China Mobile Communications Group Co., Ltd.