Sign language detection on Cyclone V FPGA using the OV7670 camera
Has anyone done a sign language detection system using FPGAs? I feel like I'm just copying what Claude tells me without understanding and I don't know if what I'm doing is right. I'm aiming to make the system detect 1,4,5,8,A,B,C,F, W,V,Y,G, I, L, and O since they have different gestures that makes the classifying easier. Moreover, what I understand from Claude is that we will collect 1-bit silhouette pictures where the hand is white and the background is black so that the trained model has less weights when we want to apply the model on the FPGA board since it has only 4mb memory. I collected 150 pictures for each class then trained the model to classify, then tested it on my laptop but it doesn't classify well. I don't know where to continue, where to stop, or what to do. Can anyone help or give steps for where to go? I'd appreciate it