Friday, October 12, 2007

Gestures Hard Coded

The gestures have finally been hard coded.

Each gesture variable (x, y, z, roll, pitch, orientation) is split into 5 values. Each value is the average of a chunk of the raw data. The first 4 are the quartiles of the data values that are over/under the breakpoint. The 5th value isn't dependent on the breakpoint, but its size is set to the same as that of the quartiles. It's most useful for working out the final orientation when the remote is held still (since the pitch and roll values aren't accurate while in motion). Turns out the easiest gestures to recognise are the mode gestures, such as picture and video mode, since their orientation is really different from anything else.
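
For the curious, the extraction boils down to something like this (a simplified Python sketch, not the actual project code; exactly how "over/under the breakpoint" is decided is fudged here with a simple threshold):

    def extract_features(samples, breakpoint):
        """Collapse one gesture variable's raw samples into 5 values.

        Values 1-4: the averages of the four quartiles of the samples
        that are past the breakpoint. Value 5: the average of the last
        quartile-sized chunk of ALL samples, breakpoint ignored -- the
        best guess at final orientation once the remote is held still.
        """
        # Keep the samples past the breakpoint (the interesting motion).
        active = [s for s in samples if abs(s) > breakpoint]
        q = max(1, len(active) // 4)
        # Values 1-4: the average of each quartile of the active samples.
        quartiles = [sum(active[i*q:(i+1)*q]) / q for i in range(4)]
        # Value 5: same chunk size, but taken from the raw tail instead,
        # so it still works when the remote never crosses the breakpoint.
        tail = samples[-q:]
        return quartiles + [sum(tail) / len(tail)]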

The hard-coded values are based on two things: 1) experience (doing the gesture over and over and over...) and looking for the recurring relationships between the values, and 2) an Excel spreadsheet with a whole bunch of pretty graphs. Excel was used as a visual aid for the gestures, though it's not entirely useful because of the amount of data being compared. The data works out to be 5 values per variable, across 6 variables, for 15 gestures: 450 values.
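
In the end the hard coding is just one big lookup table. Something like this shape (placeholder numbers only, to show the layout, definitely not the real tuned values):

    # One entry per gesture: 6 variables x 5 values = 30 numbers each.
    # The numbers below are illustrative placeholders.
    GESTURE_TEMPLATES = {
        "picture_mode": {
            "x":           [0.0, 0.1, 0.1, 0.0, 0.0],
            "y":           [0.2, 0.5, 0.4, 0.1, 0.0],
            "z":           [0.9, 1.0, 0.9, 0.8, 0.8],
            "roll":        [0.0, 0.0, 0.1, 0.0, 0.0],
            "pitch":       [0.1, 0.6, 0.9, 1.0, 1.0],
            "orientation": [0.0, 0.0, 0.5, 1.0, 1.0],
        },
        "video_mode": {
            "x":           [0.0, -0.1, -0.1, 0.0, 0.0],
            "y":           [0.2, 0.4, 0.3, 0.1, 0.0],
            "z":           [0.9, 0.8, 0.8, 0.9, 0.9],
            "roll":        [0.0, 0.5, 0.9, 1.0, 1.0],
            "pitch":       [0.1, 0.2, 0.1, 0.0, 0.0],
            "orientation": [0.0, 0.5, 1.0, 1.0, 1.0],
        },
        # ...13 more gestures in the real table.
    }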


The gestures still need to be classified further to determine the decision order and/or stronger or weaker constraints. So glad I added the buttons to it, just in case I fall face first during the demo, because a lot of the gestures rely on smooth fluid motion, not "oh crap it's not working and I'm standing in front of a room full of people" type motion. I can already say that the system is going to break down because of the nature of recognition technology, and that more feedback is required.
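
What I mean by decision order and constraints, roughly: try the templates in a sensible order with some per-gesture slack, and the first one within tolerance wins. A sketch of the idea (the tolerance value and names are made up):

    def recognise(features, templates, tolerance=0.3):
        """Return the first gesture whose template matches within tolerance.

        The iteration order of `templates` IS the decision order: the
        distinctive gestures (like the mode gestures) go first so they
        win before the sloppier ones get a look in. A tighter or looser
        tolerance per gesture would give the stronger/weaker constraints.
        """
        for name, template in templates.items():
            matches = all(
                abs(f - t) <= tolerance
                for var, values in template.items()
                for f, t in zip(features[var], values)
            )
            if matches:
                return name
        return None  # no match: fall back to the buttons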

Planning to start using it around the house this weekend. Finally I'll have a remote back. All it's taken is 9 months, lots of stress and WAY too much Red Bull.

1 comment:

Stephen said...

Looking good Simon.

I've emailed you about a time for your demo.

--Stephen.