Since the last post I've been working on the pattern recogniser. The initial one shown in the previous post was a simple slapped-together job.
The trouble I had with the new one included customisable breakpoints.
Deciding when a gesture is running, and sensitivity, were the main concerns.
For reference, the acceleration range is about -150 to 150.
I looked at two different ways of deciding when a gesture had started. The first version used a level of acceleration (usually 20, 30, or 40). This caused problems because the motion had to be jerked to reach the higher values, while the lower values were too easy to hit. Hence poor sensitivity.
The second version uses the rate of change in acceleration (a jerk, I think, is its real name). Its threshold value is actually only 1. It has to be low, otherwise the speed increase needed is insane. I had trouble keeping the motion going on only one axis, but as soon as all three were included it wasn't hard to create a fluid motion that keeps picking up values. Since a change is only needed on one axis to record all the values, and the change can be positive or negative.
I think the wrist itself actually helps this by giving minor rotations around the other axes.
Another problem was reducing the numbers. Previous literature talks about normalising the values; they used all the data and normalised properly. Both of my versions just took the average of set portions of the data (i.e. quartiles if using 4 as the split_size). In the second version the split value can be changed easily in the code with a "#define".
The new version doesn't actually look for patterns. Instead it stores each motion, and each of the split values (X1, X2, X3, Y1, Y2, ...) can be averaged. This works as a simple normalisation.
Since it's simple integer division, the remainder data values are ignored. I think this is acceptable because simple gestures such as a swipe only need the data split into 2 to see the pattern, and for more complex actions the data size increases, so the remainder becomes less important.
The new version can reset the trainer and read the averages so far. It doesn't save values, so it must be used in combination with writing the recogniser. Also, it can't remove a bad motion, i.e. a mistaken action.
I might work on adding these things and then test it with a few people to get the actual values to be used. Or, aiming higher, include a training program in the final product. That sounds good, but the nuances of the motions are easily confused if you don't understand how the values are being read.
I found it particularly difficult to explain a jerking motion (for the initial program) to a friend from work. On the upside, once they had it, they understood the entire idea behind the gesture part of the thesis. This finding from a hands-on approach might be useful later on.
