In a novel approach to dataset creation for segmentation models, researchers turned to Blender, the open-source 3D creation suite, to automate the process. Traditionally, the bottleneck in training segmentation models has been the labor-intensive task of image annotation. By leveraging Blender's capabilities, the team generated a diverse dataset of hand images with automatic pixel-level labels, bypassing the need for manual annotation. They randomized parameters such as finger positions, skin tones, and camera angles to ensure variety. The method uses a freely available 3D hand model, adjusts its features through Python scripting in Blender, and composites the rendered hands onto varied background images from Unsplash. This approach yielded a measurable improvement in model performance, raising the Intersection over Union (IoU) score on the EgoHands dataset from 0.72 to 0.76. The full pipeline and code are shared on GitHub, offering a scalable solution for similar segmentation tasks.
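The key trick behind the "automatic labeling" step is that a rendered 3D hand comes with an alpha channel, so the segmentation mask falls out of the compositing for free. The sketch below illustrates the idea in pure Python; the function name and list-of-tuples image representation are illustrative stand-ins for whatever image library the authors actually used, not their code.

```python
# Illustrative sketch (not the authors' code): alpha-blend a rendered
# RGBA hand onto an RGB background and derive the segmentation label
# directly from the alpha channel, so no manual annotation is needed.

def composite_with_mask(hand_rgba, background_rgb):
    """Blend a rendered hand onto a background image.

    hand_rgba      -- 2D list of (r, g, b, a) tuples, 0-255
    background_rgb -- 2D list of (r, g, b) tuples, same size
    Returns (composite image, binary mask); the mask is the
    pixel-accurate label produced "for free" by the render.
    """
    h, w = len(hand_rgba), len(hand_rgba[0])
    composite = [[None] * w for _ in range(h)]
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b, a = hand_rgba[y][x]
            br, bgr, bb = background_rgb[y][x]
            alpha = a / 255.0
            composite[y][x] = (
                round(r * alpha + br * (1 - alpha)),
                round(g * alpha + bgr * (1 - alpha)),
                round(b * alpha + bb * (1 - alpha)),
            )
            mask[y][x] = 1 if a > 0 else 0
    return composite, mask
```

In practice this would be done with an image library on full-resolution renders, but the principle is the same: wherever the rendered hand has nonzero alpha, the label is "hand".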
Source: towardsdatascience.com
