Sang Ho Yoon

Toward the Future of Generative Haptics: Insights from HapticGen
Generative AI offers new opportunities to make haptic design more accessible, scalable, and expressive. In this workshop, we explore future directions in generative haptics through the development of HapticGen, a text-to-vibration model that streamlines the creation of vibrotactile effects. We began with a formative workshop to identify key requirements for AI-driven haptic generation. To address the limitations of existing haptic datasets, we trained HapticGen on a large-scale dataset of 335,000 labeled audio samples using an automated audio-to-haptic conversion pipeline. Expert haptic designers then interacted with the model via an integrated prompting interface, providing signal ratings that enabled fine-tuning through Direct Preference Optimization (DPO). We evaluated the fine-tuned model with 32 users in an A/B comparison against a baseline text-to-audio-to-haptic approach. Results demonstrate significant improvements across five dimensions of haptic experience (e.g., realism, nuance) and in system usability (e.g., intent to reuse). These findings highlight how generative models like HapticGen can support future haptic workflows, offering insights into scalable, personalized, and intuitive haptic authoring.