Singapore Sign Language Detection with Deep Learning
Developed a deep learning model to recognize Singapore Sign Language (SgSL) gestures in real time using computer vision. This project aims to bridge communication gaps between the Deaf community and the general public in Singapore, with support for signers both with and without face masks.
Task
Build a real-time Singapore Sign Language (SgSL) detection system using deep learning.
-
Role
Machine Learning Engineer
-
Organization
Personal Project
-
Date
2023
-
Tools
Python, OpenCV, MediaPipe, TensorFlow
-
Source Code

TECHNICAL ARCHITECTURE
Implementation
- Implemented real-time pose detection using MediaPipe Holistic (see the extraction sketch after this list)
- Built LSTM neural network for sequential gesture recognition
- Optimized model performance using Keras Tuner
- Integrated OpenCV for video capture and processing
- Developed custom data pipeline for gesture sequence processing
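The capture and keypoint-extraction step can be sketched as follows, assuming the standard MediaPipe Holistic and OpenCV APIs; the helper name `extract_keypoints` and the webcam index are illustrative rather than taken from the project's source:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic


def extract_keypoints(results):
    """Flatten pose, face, and both hand landmarks into one feature vector."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])  # 1662 values per frame


cap = cv2.VideoCapture(0)  # default webcam
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR frames
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)  # one frame of features
        cv2.imshow("SgSL capture", frame)
        if cv2.waitKey(10) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

Each frame yields a 1662-value vector (33 pose, 468 face, and 2 × 21 hand landmarks); the data pipeline stacks these into fixed-length sequences for the LSTM.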
DEVELOPMENT PROCESS
Methodology
- Data Collection: Captured diverse sign language gestures with both masked and unmasked subjects
- Feature Engineering: Extracted keypoints from face, pose, and hand landmarks using MediaPipe
- Model Architecture: Designed LSTM network for temporal sequence recognition
- Hyperparameter Optimization: Used Keras Tuner to find optimal model parameters (see the tuning sketch after this list)
- Performance Testing: Compared model performance in CPU and GPU environments
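A hedged sketch of how the LSTM architecture and the Keras Tuner search might be wired together, assuming 30-frame sequences of the 1662-value keypoint vectors above; the search ranges, `NUM_CLASSES`, and the Hyperband settings are placeholders, not the project's tuned values:

```python
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

SEQUENCE_LENGTH = 30  # frames per gesture clip (assumption)
NUM_FEATURES = 1662   # keypoint values per frame from MediaPipe Holistic
NUM_CLASSES = 10      # number of SgSL signs (placeholder)


def build_model(hp):
    """Keras Tuner model-builder: stacked LSTMs over keypoint sequences."""
    model = tf.keras.Sequential([
        layers.Input(shape=(SEQUENCE_LENGTH, NUM_FEATURES)),
        layers.LSTM(hp.Int("lstm_units_1", 32, 128, step=32),
                    return_sequences=True),
        layers.LSTM(hp.Int("lstm_units_2", 32, 128, step=32)),
        layers.Dense(hp.Int("dense_units", 32, 64, step=32), activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="categorical_crossentropy",
        metrics=["categorical_accuracy"],
    )
    return model


tuner = kt.Hyperband(build_model,
                     objective="val_categorical_accuracy",
                     max_epochs=50,
                     directory="tuning",
                     project_name="sgsl_lstm")
# tuner.search(X_train, y_train, validation_data=(X_val, y_val))
# best_model = tuner.get_best_models(num_models=1)[0]
```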
CHALLENGES
Key Challenges
- Frame Rate Optimization: Implemented an efficient processing pipeline to improve FPS
- Mask Compatibility: Enhanced model robustness when the signer's face is partially covered by a mask
- Resource Constraints: Optimized model for CPU performance while maintaining accuracy
- Sequence Recognition: Developed custom algorithms for continuous gesture detection (see the sketch after this list)
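One way to realise the continuous detection described above is a sliding-window smoother over per-frame keypoints; this is a sketch of the idea rather than the project's exact algorithm, and the class name, thresholds, labels, and the trained `model` from the earlier snippet are all assumptions:

```python
from collections import deque

import numpy as np


class GestureStream:
    """Sliding-window smoother over per-frame keypoints for continuous detection."""

    def __init__(self, model, actions, seq_len=30, threshold=0.8, smooth=10):
        self.model = model                   # trained LSTM (e.g. from the sketch above)
        self.actions = actions               # list of SgSL label strings
        self.threshold = threshold           # minimum confidence to commit a sign
        self.window = deque(maxlen=seq_len)  # rolling buffer of keypoint vectors
        self.recent = deque(maxlen=smooth)   # recent argmax predictions for smoothing
        self.sentence = []                   # running transcript of detected signs

    def update(self, keypoints):
        """Feed one frame's keypoints; return the running transcript."""
        self.window.append(keypoints)
        if len(self.window) < self.window.maxlen:
            return self.sentence  # not enough frames buffered yet
        probs = self.model.predict(
            np.expand_dims(np.array(self.window), axis=0), verbose=0)[0]
        pred = int(np.argmax(probs))
        self.recent.append(pred)
        # Commit a sign only when the same class wins the whole smoothing
        # window with high confidence, and avoid repeating the last sign.
        if (len(self.recent) == self.recent.maxlen
                and len(set(self.recent)) == 1
                and probs[pred] > self.threshold
                and (not self.sentence or self.sentence[-1] != self.actions[pred])):
            self.sentence.append(self.actions[pred])
        return self.sentence
```

Inside the capture loop, each new frame would be fed with something like `stream.update(extract_keypoints(results))`, and the returned transcript can be drawn onto the frame with `cv2.putText`.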
IMPACT
Learnings
This project provided valuable insights into:
- Real-world applications of computer vision in accessibility
- Deep learning model optimization techniques
- Hardware considerations in ML deployment
- Importance of inclusive design in technology