Friday, October 12, 2007

Gestures Hard Coded

The gestures have finally been hard coded.

Each gesture variable (x, y, z, roll, pitch, orientation) is split into 5 values. Each value is an average of the raw data. The first 4 are the quartiles of the data values that are over/under the breakpoint. The 5th value isn't dependent on the breakpoint, but its size is set the same as the quartiles. It's most useful for working out final orientation when the remote is held still (since the pitch and roll values aren't accurate while in motion). Turns out the easiest gestures to recognise are the mode gestures, such as picture and video mode, since their orientation is really different from anything else.
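Roughly, the reduction from raw samples down to the 5 values looks like this (a simplified sketch only, not the actual Wiidia source; the breakpoint filter and tail window are my reading of the description above):

```cpp
#include <cstddef>
#include <cmath>
#include <vector>

// Average of the slice [begin, end) of samples.
double sliceAverage(const std::vector<double>& v, std::size_t begin, std::size_t end) {
    double sum = 0.0;
    for (std::size_t i = begin; i < end; ++i) sum += v[i];
    return (end > begin) ? sum / double(end - begin) : 0.0;
}

// Reduce one gesture variable (x, y, z, roll, pitch or orientation) to
// five values: the averages of the four quartiles of the samples whose
// magnitude exceeds the breakpoint, plus the average of the final
// quartile-sized window of ALL samples (breakpoint ignored), which
// captures the resting orientation at the end of the gesture.
std::vector<double> fiveValues(const std::vector<double>& raw, double breakpoint) {
    std::vector<double> active;
    for (double s : raw)
        if (std::fabs(s) > breakpoint) active.push_back(s);

    std::vector<double> out;
    std::size_t q = active.size() / 4;  // quartile size
    for (int i = 0; i < 4; ++i)
        out.push_back(sliceAverage(active, i * q,
                                   (i == 3) ? active.size() : (i + 1) * q));

    // 5th value: same window size as a quartile, taken from the tail of
    // the raw stream, independent of the breakpoint.
    std::size_t tail = (q > 0 && q < raw.size()) ? q : raw.size();
    out.push_back(sliceAverage(raw, raw.size() - tail, raw.size()));
    return out;
}
```

The 5th value is what makes the mode gestures easy: with the remote held still at the end, the tail average of pitch/roll/orientation is stable.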

The hard coded values are based on two things: 1) experience (doing the gesture over and over and over...) and looking for the recurrent relationships between the values, and 2) an Excel spreadsheet with a whole bunch of pretty graphs. Excel is used as a visual aid for the gestures, though it's not entirely useful because of the amount of data being compared. The data works out to be 5 values per 6 variables for 15 gestures: 450 values.


The gestures still need to be classified further to determine the decision order and/or stronger or weaker restraints. So glad I added the buttons, just in case I fall face first during the demo, because a lot of the gestures rely on smooth fluid motion, not "oh crap it's not working and I'm standing in front of a room full of people" type motion. I can already say that the system is going to break down because of the nature of recognition technology, and that more feedback is required.

Planning to start using it around the house this weekend. Finally I'll have a remote back. All it's taken is 9 months, lots of stress and WAY too much Red Bull.

Tuesday, October 9, 2007

Fully functioning

It's early, but all functionality is in Wiidia. Well, almost. I implemented the buttons for testing purposes and the program runs with full capabilities using them.

Control of Winamp, Windows Media Player, and Windows Picture and Fax Viewer all runs smoothly. The task list is restricted to basically iPod controls (play, pause, fast forward, rewind, skip, volume, zoom, random, repeat). I'm pretty sure I'll implement TV control, but probably not for the thesis since it doesn't add anything and there's no way to demo it.

The mode is controlled 2 ways: a charades version to select a specific mode, and a mode control switch to define either 1 mode or two. If video is running, the mode is considered to be singular.
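The singular-video rule in code form (illustrative only; `ModeState` and its members aren't the real class names):

```cpp
#include <cstddef>
#include <set>

enum Mode { Audio, Video, Picture, TV };

// Sketch of the mode-switch rule: up to two modes can run concurrently,
// except Video, which always runs alone.
class ModeState {
    std::set<Mode> active;
public:
    void select(Mode m) {                  // "charades" gesture picks a mode
        if (m == Video) active.clear();    // video is always singular
        else if (active.count(Video)) active.clear();
        if (active.size() >= 2) active.clear();  // room for at most two
        active.insert(m);
    }
    bool isActive(Mode m) const { return active.count(m) != 0; }
    std::size_t count() const { return active.size(); }
};
```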

Work that needs to be done:
1. Get the gesture breakpoints and settings
This will take a while, but some thought should solve it. I think I'll try doing it the opposite way to how I've been doing it: start with all gestures and remove the ones that don't work. While testing I've been playing around with simple maths (ie try using x, try using x * 10, try using x * x) for the values. It shows some interesting things about some of the gestures. For example, Stop is quick and sharp whereas Go is a lot more flowing.

2. Get some "randomly selected users" round for drinks and work out the best ways to implement the mode, and discuss how it would be represented on the Wiimote. (Got the lights to represent mode.)


On Document Side of things:
Sat down for a couple of hours last night and went through ACM and ITEE for papers again. Ripped a bunch but need to go through them. In regard to modality: the Wiidia is a "multidevice" system, not a multimodal system.


Stephen, I'm pretty sure I'll forget, but do I include the building experience in the thesis? Certain programs were invaluable while making this, not just the media player applications but programs like Spy++.

Monday, October 1, 2007

Abstract Version I

Getting really stressed about the actual build (the gesture recognizer is killing me) and limiting the scope. Not implementing the menu display unless I have time. Need to have a chat about this tomorrow.

Here's a version of the abstract below. The blue bits are just some guidelines for me.
Questions relating to it:
  1. Not sure about the tense needed. Written this in future tense because I assume the reader reads it BEFORE the document.
  2. A conclusion before the document is written?

------------------8<-----------------8<------------------

Wiidia – A Gestural Media Interface

Abstract:

Purpose:
To develop a 3-dimensional control interface for systems that are controlled by either 1- or 2-dimensional devices. The aim is control of multiple media systems which can run concurrently, using a ‘Nintendo Wii Remote Control’.

Scope
The control device is restricted to the device’s gestural control abilities. The single dimensional capacities of the remote are implemented for control device comparison.

The systems included are media player applications. Functional control for the interface is limited to tasks considered common and popular to the media systems.

The interface includes the ability to control multiple media systems concurrently. The interface itself is multimodal, while the systems it controls are to be considered single mode. The interface provides a solution to the problems associated with mode error which arise from this.

Method

The control method is reached through analysis of current input devices related to the systems, the capabilities of the selected controller and the issues related to gestural interfaces.

The gestures are developed and explained through examination of gestural notations, movement analysis of proposed gestures and a gestural review with users.

Reviews of current and popular media player interfaces are performed and include examination of their menu structures and the functionality provided. Furthermore, important functionality related to media players is examined through user interviews. The results of these reviews and interviews are developed into the working functionality provided by the system.

Results

Recommendation

Conclusions

The interface provides a unique control method for media players. It provides a fun and intuitive way of controlling media devices; expert users will find the interface useful only in specific contexts.

Friday, September 28, 2007

Quick Review

Been coding 24/7 on this lately and getting nowhere fast. Been a bit stressed because of the lack of progress.

Major updates are:
Changed the way the Wiimote reads values: now using the difference between consecutive values instead of the difference from calibrated values. Finally worked out how to get the class name for other windows. This is a follow-on from deciding to use other programs and not the skin.

Now I have to use this window name and add the second/third/fourth app to the program. Once I get the second app done, the abstract will be up here. Without the second app the modal portion is missing.
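For reference, the window-name trick boils down to a FindWindow call on the target's class name plus WM_COMMAND messages. A sketch using Winamp's class name and transport command IDs from its SDK (the helper function itself is illustrative, not existing Wiidia code):

```cpp
// Winamp's classic window class is "Winamp v1.x"; transport commands are
// plain WM_COMMAND messages with well-known IDs (from the Winamp SDK).
const int WINAMP_PREV  = 40044;
const int WINAMP_PLAY  = 40045;
const int WINAMP_PAUSE = 40046;
const int WINAMP_STOP  = 40047;
const int WINAMP_NEXT  = 40048;

#ifdef _WIN32
#include <windows.h>
// Find the player by class name and throw a command at it.
bool sendWinampCommand(int cmd) {
    HWND hwnd = FindWindowA("Winamp v1.x", NULL);
    if (!hwnd) return false;  // Winamp not running
    SendMessageA(hwnd, WM_COMMAND, cmd, 0);
    return true;
}
#endif
```

The same pattern works for the other apps once their class names are known (which is where Spy++ comes in handy).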

-SRE

Quick Abstract:

Using the Wiimote to control media on a computer.
Not using the buttons, only gestures.

The goal is to give most people the Wiimote and have them able to control a media player without assistance for the most commonly used tasks.
The application helps resolve mode error.

Furthermore other gestures should be easily understood and memorable.

Designed for multiple applications, though not implemented.
Designed for multiple control methods, though not implemented (functionality for the Wiimote buttons is there, but not mapped to anything).

Sunday, September 23, 2007

Hit a wall

Not sure where to go now. The two possible ways are:

1. To write a skin and plug-in for Media Portal.
Been skinning the last few days (learning XML and C#). Realized that Media Portal doesn't have a decent SDK to get to the players themselves.

To get to them I'm sure I'm going to have to make use of the GPL and use their own code for the implementations of their media players. I don't think this is plagiarism due to the GPL. I know it technically doesn't show any academic merit, but I'm not building a media player, I'm building a way to control it.

2. Treat the remote as a keyboard with custom keys (I.E. volume button, play/pause).
This allows a more global approach; for example, right now I'm wanting to reach for the Wiimote to turn the sound up. The implication of this is letting Windows do the work and giving the playlist creation controls to Windows Explorer.
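A sketch of what option 2 would look like with the standard Windows multimedia virtual keys (the constants are the documented VK codes; `tapKey` is an illustrative helper, not existing code):

```cpp
// Option 2 sketch: map gestures to the standard multimedia virtual keys
// so Windows itself routes them to whatever player has media focus.
const unsigned short KEY_VOL_UP     = 0xAF;  // VK_VOLUME_UP
const unsigned short KEY_VOL_DOWN   = 0xAE;  // VK_VOLUME_DOWN
const unsigned short KEY_PLAY_PAUSE = 0xB3;  // VK_MEDIA_PLAY_PAUSE
const unsigned short KEY_NEXT_TRACK = 0xB0;  // VK_MEDIA_NEXT_TRACK

#ifdef _WIN32
#include <windows.h>
// Synthesise a key press: one key-down event followed by one key-up.
void tapKey(WORD vk) {
    INPUT in[2] = {};
    in[0].type = INPUT_KEYBOARD;
    in[0].ki.wVk = vk;
    in[1].type = INPUT_KEYBOARD;
    in[1].ki.wVk = vk;
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(2, in, sizeof(INPUT));
}
#endif
```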

These arbitrary gestures, I think, better reflect the spirit of this project, controlling the 2D space with 3D control, and they pretty much already exist from previous development cycles (there are arbitrary gestures in the skinned version).


To do this, the design change needed is one gesture to represent entering the volume level selection (currently it clashes with Audio mode).

Friday, September 21, 2007

Well that didn't take long


A little less stressed. Turns out my friend drinks too much and can't anymore. The project has almost caught up to the hackt version. Just a matter of determining the best break levels, i.e. when it starts and when high, low and no values are recorded.
Now for the gestures.

Thursday, September 20, 2007

Not at Ipswich

Stephen,

You won't have seen me at the lecture today. I've stayed in Bris. Left a message on Facebook for you with the reason.

In respect to the thesis,

Quietly starting to freak out myself. The bugs lately have been eating me alive. They're in places in the code which are obfuscated to the nth degree, hence I just can't sit down and start. It's been taking about half an hour to remember the place I was at. On top of that, the Internet is running at a snail's pace.

Things remaining:
Thesis
(Done by end of Holidays, 8 days remaining)
1. Parse the Input correctly (2 days)
The normalized values are driving me nuts. I had to convert the raw values into a float[] held by a generic gesture. Hence the normalized value is an array of length 150+ being split into 35-ish average values. Long story short, it's slow to work out which values are actually being used. Not enough time now to convert the program back to the Wii-specific version.
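The 150+ to ~35 split is just bin averaging. Something like this (a simplified sketch, not the actual parsing code):

```cpp
#include <cstddef>
#include <vector>

// Normalisation sketch: collapse a raw gesture trace (150+ floats) into
// `bins` averaged values so every recording ends up the same length.
std::vector<float> normalise(const std::vector<float>& raw, std::size_t bins) {
    std::vector<float> out;
    if (raw.empty() || bins == 0) return out;
    for (std::size_t b = 0; b < bins; ++b) {
        // Proportional slice of the raw trace for this bin.
        std::size_t begin = b * raw.size() / bins;
        std::size_t end   = (b + 1) * raw.size() / bins;
        if (end == begin) end = begin + 1;  // guard for tiny inputs
        float sum = 0.0f;
        for (std::size_t i = begin; i < end; ++i) sum += raw[i];
        out.push_back(sum / float(end - begin));
    }
    return out;
}
```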

2. Calculate the gesture vals based on the parsed input (dependent on 1.) (1 day)
Once the normalized values are working this is just going through the motions. Perform a gesture 20 times, see the identifying values. Hard-code the values. Perform each gesture and look for common clashes. If a clash occurs, restrict/increase the identifying values.
Ideally (i.e. if there were enough time left) I'd get other people to perform the gestures and use their mannerisms as the identifiers.

3. Create the skin for media portal (2 days)
Currently working through tutorials on the web; the language is C#. This may cause problems, but the workaround if it does is explained below in 4.

4. Throw the gesture calls to the media portal skin. (1 days)
Ideally the calls are from C++ (Wiidia) to C# (Wiidia skin). Plan to get this done through external WindowCommand calls. Failing this, Media Portal allows keyboard mappings to be allocated; the workaround is having the C++ side throw event handlers for the logical keyboard mappings. I think this keyboard mapping lends itself to a nice comparison in the thesis.

Assessment
1. Poster (14 days remain)
Thinking background pictures of the program superimposed with explanations of the design reasoning and features will be a good layout. Invites trouble with the logical flow of the poster though.

2. Abstract (14 days remain)
Going to get a version of this up here really soon. Begin with a version of the elevator pitch. This is dependent on the skin for the program. Ultimately the skin shows the features.

3. Thesis Document (32 days remain)
Drawn out from abstract. Really need to start writing.

On a personal note,
The bugs in the program are really starting to stress me out. Work is killing me, getting as many shifts as the full-timers, but I think I have a solution for that. Sooooo many group assignments for uni as well; I always stress when others are involved and 4 group assignments is just insane.
Hopefully the reason for staying in Bris today will turn out to be nothing, and I'll get the input parsing correctly before work tonight.

-SRE

Wednesday, September 12, 2007

Found out how to demo.

Spent most of the day installing Windows onto my laptop. Finally got the Bluetooth connection working. It runs really slowly but it shows the point. I'll try and bring it in Friday and have a bit of show and tell. Had to install VS 2005 on the laptop.

Transferring the files over was a whole other kettle of fish. I'm always surprised how well MS manages to upgrade the dependencies in their software, then hide the setting 4 deep in the menus to change it back, behind some obscure reference, for what is a simple builder for me anyway.

Going to get the old version onto the laptop, with Winamp. Looking at the speed now, I think the program will be too slow to read the gestures properly.

-SRE

EDIT: The laptop just needed drivers, and it's running fine now. Of course I had no idea where they were. Who uses CDs, seriously?

Saturday, September 8, 2007

Life 2.0: Systemless and Directionless systems

Another interesting thought about 3D space came while I've been using the remote. I kept turning to face the computer to increase the volume yesterday. Took me an hour to remember that Bluetooth is non-directional, and furthermore music is a completely directionless medium. Got me thinking about how people are going to control systems in the future. With displays getting bigger and systems becoming smaller, their actual pervasiveness/obviousness in homes will eventually disappear. I.E. no system apparent and displays on every surface (think those smart LCD(?) screens that are mirrors when turned off).

Two points come from this musing.
1. Some media and communication don't need a point of physical reference; how then can the physical space not being used be improved upon/added to?

2. I feel that Web 2.0 only provides some physical reference point that happens to be in the same place at the same time. The control of the applications is defined by the input device. Give an application directionless and systemless control and everything can be used for input. Assign arbitrary objects in space a virtual reflection (I.E. the pen in my pocket is an input device and can write on a wall in public; the milk in the fridge is email, check email in the morning while making coffee). Add systemless and directionless control to the applications and the only restriction on the system is the display device.

Unfortunately I think this will be the last thing to be brought into the new millennium. Where are the contact lens displays already? ...or failing that, wearable projectors.

Updated Gesture Module


Just a quick update.
Got the gesture module receiving from the Wii in the new format. This is going to be so much easier to work with. Coded the print functions in this time to save some trouble. It also picks up the buttons (not shown).

Also, I found the notes about the classification of gestures I asked you about. It was neural mappings vs cognitive functions, IE patterns vs purpose. In this respect the orientation value is going to be gold. I had a lot of trouble in the initial version differentiating the cognitive gestures such as pointing to your ear; now it's no trouble at all because the orientation is really strange. The actual direction the top of the Wiimote points is forwards and down, which is impossible to reach by accident with neural mappings, unless you're Steady Eddie of course.

Also been playing around with the Media Portal media center and getting some strong design directions from it. I'm going to have a play around and see if I can find an API or the Window Commands to throw at it to get it to do what I want. The program also has the ability to add external players (such as Winamp). I'm thinking that developing the program for it will reach my initial goal of developing a program that I will use around the house.

Thursday, September 6, 2007

Media Portal

Just found out a new Version of Media Portal is out...
http://www.team-mediaportal.com/

it works with my TV card...
looking through the list of features (I think it's all there)...
it's open source...

...Just when I had Winamp starting to work, too.

Need to think about it, but it could reduce the programming a bit and allow a focus on users' gestures, which can be much more complex with the new gesture system I'm in the process of debugging.

-SRE

Tuesday, September 4, 2007

Review of Project Plan

Finally found a decent open version of MS Project called OpenProj. Still relatively new, but it imports projects fine for my purposes. Good to have something to map the remainder out on. Funnily enough, according to the old project plan I finished the system 2 weeks ago and have just started writing the final document. Time to start getting really serious with this.

Also had a look at the Student Night rules. Looks like fun. Also throws down a good but tough deadline. Curious if I qualify, because I won't graduate till next sem because of the work experience requirement, but it is technically my final year. I think it would be great experience for the actual demo in respect to the theory behind it, since demoing it isn't actually allowed.

-SRE

On a personal note:

Finally got around to cleaning out Start and "My Documents"; nice and organized. Was looking for a zoom app like the Macs at uni have and found a nice app called WinSplit, which basically puts windows into quadrants/halves of the screen evenly. Brilliant, because I've had trouble with that ever since I got the widescreen.

Monday, September 3, 2007

Kicking ass and taking Names

Got all the basic Winamp functions working... and it's kinda modular. Using virtual functions which pass perform() down the line from mode to device to task. Need to test it with other modes but audio is working. The setup for the Wii buttons is implemented, but receiving from the actual Wii still needs to be done. This is the next step.
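The shape of the virtual-function chain, stripped right down (illustrative names, not the real Wiidia classes):

```cpp
#include <string>

// A command is handed from the active Mode down to its Device, which
// resolves it to a concrete Task. Each layer can be swapped out.
struct Task {
    virtual std::string perform() = 0;
    virtual ~Task() {}
};

struct PlayTask : Task {
    std::string perform() override { return "play"; }  // e.g. Winamp play
};

struct Device {
    Task* task;
    explicit Device(Task* t) : task(t) {}
    virtual std::string perform() { return task->perform(); }
    virtual ~Device() {}
};

struct ModeLayer {
    Device* device;
    explicit ModeLayer(Device* d) : device(d) {}
    virtual std::string perform() { return device->perform(); }
    virtual ~ModeLayer() {}
};
```

Adding video or picture mode then means subclassing Device/Task, not touching the mode layer.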

Friday, August 31, 2007

Thesis, Twitter, Bugzilla

Thinking about the mash-up I've got to do for CSCW. Need an archive to record the coding process. Using Twitter as an updater. Planning on using it like a private Bugzilla.

Sorry to the person following.

-SRE

Break up and Build Order

  1. Main Screen and Settings with Winamp using arbitrary input
    1. Input is the buttons of the Wiimote. This removes the gesture significance and allows the base system to be built around easily defined input and output.
  2. Main Screen and Settings with Winamp using gesture input
    1. Include gestures into the control method. This adds the gesture significance of both arbitrary gesture control (I.E. volume/position setting) and cognitive gesture control (I.E. gesture representing what the user wants to do). Also demonstrates the modular nature of the program.
  3. Picture mode
    1. Add multimodal interaction using the gestures
  4. GUI Interface
    1. This is either adding a wrapper for the Main and Setting screens or adding the playlist control screen
  5. TV Mode
    1. Adds multi-mode with a different control method
  6. Video Mode
    1. Highlights some of the other problems in multimodal interaction, such as mode dominance

Well, that's the build process. Looking at it there's a lot to get done, so I'm getting started.

-SRE

Wednesday, August 29, 2007

Latest UML Design

Still haven't started the conversion. Been working on the classes in UML. Been having trouble trying to link the modular gestures to actions, but I think I have it now. Redesigned the gesture recogniser. Going to have to recode and test all the gestures... sigh. But since the gestures I've got now work OK, the new-gen gesture recogniser should be much better.

Even though the links aren't there, this shows the separation away from device-specific input and output. Going to run into trouble when I add a new mode.

-SRE

Friday, August 24, 2007

Hackt Build Evaluation

Got the Hackt version running. The actual gestures require some work. The order of comparison is going to be important. Think I need something better than an if statement. Not looking forward to that.

Biggest noticeable stuff:
  1. Stop needs the most work; the gesture's too subtle.
  2. Make it so/Play ball; roll out still makes sense for play, got to check how people think of it for representation.
  3. Shuffle on and off as separate gestures is actually how Winamp implements shuffle. I.E. it's not a toggle function.
  4. Shuffle is going to need an on-screen prompt if people are going to use it for the first time. Alternately, skip and volume are performed by people as expected. Justification: gave someone the remote and said go to the next track, go to the previous track, and turn the volume up and down, and got the right actions.
  5. Down and up are not as they seem. Down is actually more natural as a swipe left because of the way the Wiimote is held. (This prompts the need for orientation when doing the clean gesture module.)
  6. No fast forward in Winamp, only skip by a set amount. Could prove difficult for people, especially when looking for a point in a song. I.E. people would possibly use the continuous setting more.
  7. On the coding side the gesture needs a bigger array. An overflow error occurs each time the Wiimote is held in constant motion. Might be fixable with higher break points.
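One way to fix item 7 without raising the break points is to swap the fixed array for a ring buffer, so constant motion just discards the oldest samples instead of overflowing. A sketch (illustrative, not the current code):

```cpp
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer for accelerometer samples: pushing past
// capacity overwrites the oldest sample rather than overflowing.
class SampleRing {
    std::vector<float> buf;
    std::size_t head;
    std::size_t used;
public:
    explicit SampleRing(std::size_t capacity)
        : buf(capacity), head(0), used(0) {}
    void push(float s) {
        buf[head] = s;
        head = (head + 1) % buf.size();   // wrap around
        if (used < buf.size()) ++used;
    }
    std::size_t size() const { return used; }
    std::size_t capacity() const { return buf.size(); }
};
```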
Really need a gesture to auto-play something, or an alarm to use around the house. Currently not included in the thesis but jeez that would be good. But I think I'm going to run into problems concerning the battery power of the Wiimote. Have the program running with the Wiimote connected, say overnight, and the program is constantly pinging the Wiimote and therefore draining the battery. The problem is strongly related to the gesture module and how to know when a gesture is being read. Ideally, I want a sleep function on the remote itself; maybe better control of the heartbeat is possible. For the final project, I might be able to tie the ping to the refresh function in GLUT, if I end up going that way.

On a personal note,
This kicks ass, seriously.

Cleaning the Hackt version in UML


The code as it stands is sloppy. This is a UML to clean it up. It's just been so long since I've done any UML. Having a bit of trouble separating the input/output device from the gestures and tasks respectively.

Major differences are the abstraction towards the screen state and the modularization of the input/output devices. Any hints welcome. Going to make the hackt version represent the gestures from user studies (where the gestures are already resolved, IE no shuffle/repeat). Then convert the whole thing to this map if no comments appear.

Tuesday, August 21, 2007

The break down of parts

I'm pretty sure there is a better way of doing this, but at this time of night this will do. It needs to be redone using the topic focus of the parts. I.E. multi-mode, gestural, gestures based on the tasks vs gestures based on design, etc...

  1. Single Mode - Main Screen - No Gestures - Text Interface
  2. Single Mode - Setting Screens - No Gestures - Text Interface
  3. Single Mode - Main Screen - Basic Gesture Control - Text Interface
  4. Single Mode - Setting Screens - All Gestures - Text Interface
  5. Single Mode - Main Screen - Most Gestures (no link) - Text Interface
  6. Single Mode - PlayList Display - Minimal Gesture - Text Interface
  7. Single Mode - Gesture Links - There and return - Text Interface
  8. Single Mode - Mode Display - Add from Single List - Text Interface
  9. Single Mode - Mode Display - Add Item Sorting and Gestures - Text interface
  10. Dual Mode - Main Screen - Basic Mode Gestures - Text Interface
  11. Dual Mode - Setting Screens - All Gestures - Text Interface
  12. Dual Mode - PlayList Display - Minimal Gesture - Text Interface
  13. Dual Mode - Gesture Links - There and Return and Swap - Text Interface
  14. Dual Mode - Mode Display - Add from Single Lists - Text Interface
  15. Dual Mode - Mode Display - Add Item Sorting and Gestures - Text interface
  16. TV Mode - Main Screen - Gesture Mapping - Text Interface
  17. Multi Mode - Main Screen - All Gestures - Text Interface
  18. Multi Mode - Setting Screens - All Gestures - Text Interface
  19. Multi Mode - PlayList Display - Minimal Gesture - Text Interface
  20. Multi Mode - Gesture Links - There and Return and Swap - Text Interface
  21. Multi Mode - Mode Display - Add from Single Lists - Text Interface
  22. Multi Mode - Mode Display - Add Item Sorting and Gestures - Text interface
  23. Basic Gesture Creation - Main Screen - Play, Pause, FF, Rewind, Skip, Volume, Zoom (Keep gestures separate, keep it modular)
  24. Advanced Gesture Creation - Mode representations, Swap Mode, Repeat, Random On, Random Off
  25. Subtle Gesture Creation - Open Drawer, Close Drawer, Reset, Next Item, Prev Item, Add Item, Remove Item
  26. Basic Wrapper for Text Interface
  27. Advanced Wrapper for Text Interface (Use jpg textures, keep it skinable)
  28. Create a Library sorting/addition Tool to go with

The Design for Online Access

Looks like the photos aren't going to upload. Oh well, I'll fix it soon. As an aside, got the new Bluetooth dongle.

Working like a dream. Got the Wiimote back up. Threw some vids of the test programs onto F8.

Not to speak too soon, but 2 weeks seems do-able. Though the final gestures are going to be death to program, and since the program runs around them the break-up of modules will have to be dependent on them. The problem is going to come when adding extra gestures. The subtle gestures only occur in a few places and aren't needed with the gross gestures. I hope this is going to help in differentiating them during coding. I stress hope.

-SRE

Note from Aug 21st

These notes are getting a bit abstract.

Fodder
Understand how the TV is, and understand that the TV is not how it used to be

Understanding the Thesis Document.
  • Given the problem space, with enough levels of abstraction the main topics/chapters for the document can emerge.
  • All Information must back up the Design
  • Don't make the same mistakes as others, I.E. Use previous research and understanding as a guide and justification method
  • Use the document to communicate what you have learnt
  • "Standing on the shoulders of giants" - Take what is known and extend it, or take what is already known and prove the inherent problems with it, or test the inherent problems if they can be tested
Basic outline from other theses
  • Background/Background --> Factors affecting the links between the background (what makes up the connections) --> How can these factors be addressed to create a solution --> What is the solution (design, implementation, evaluation, conclusion)
  • Concurrent development of Ideas may lend itself towards a map of the document, either a classical text or pictorial view. As long as the representation aids in the readers understanding of the document.
  • For mine, Media/Control Interaction --> Remote Slatings --> Different devices --> Wiimote
  • The poster for demo day is aimed at 3 people. Title --> abstract --> content. Hook: get people curious about the topic; Line: explain the concepts involved; Sinker: tell them what happened. Not using too many words but not too few.
Task for the Week
1. Break up the specs into logical build-order pieces that expand incrementally on the scope of the project. IE. One Mode (Simple Menu Design) --> One Mode with Gestures (Gestural Control) --> Two Modes (Mode Error) etc.
2. Build half so the other half can be built next week.

Simple

Thursday, August 16, 2007

Social Design - The servient developer

Note to self.
Remember to ask Stephen about the place of Social Design. It seems to be right in the middle of the subject, but not really in either.

I'm sure there is a movie being produced (or a script at least) using only user contributions. Thinking it could be an interesting project to look at and see what comes from it. The link on f8 is yielding little interest, but I think if users could be engaged some really amazing apps could be created. The idea of involving users in the development process is always problematic because of their lack of experience, and difficult because of numbers. Using social tools to develop something would increase the timeframe, but ...

if done correctly could remove the designer from the process and focus purely on the user. Problems of course are going to come from the moderation of how the development progresses.

The set of users would strongly influence the application developed. f8 took off because it suited the needs of the users and changed to reflect this. It changes "have an idea, then have users, then use the users to aid development" to "let the users have an idea and then aid the users in development". Sounds similar to the idea of servient leadership, basically a lead-from-behind approach.

From the Web 2.0 group on f8: "Your idea is not original". If I can't think of something new, find the non-original ideas people have and help them make them happen. It's like Joe Everyman coming to a software developer and saying here's my idea, except there could be hundreds of Joe Everymans saying, but what if it did this.

Where I am

Not going to make it out to Ipswich today. Woke up too late this morning after a late night at work. I will definitely be enrolled in both subjects this semester. Sitting here doing the social assignment.

Plans for the weekend are:
1. Get the Wiimote working again on my PC
2. Get the Outline into something usable (I'll need you to check because I'm still not sure about the topics to include in a thesis)
3. Build the specs for the non-gesture bits of the project. I'm pretty sure the gesture specs will have to be reverse engineered, because I have no clue of the levels needed for the new gestures.

-SRE

Wednesday, August 15, 2007

Blogger Topics / Thesis Overview

I know this is looking very sloppy. Got a headache and am going to bed. I'll sort it out tomorrow.


  • Possible Directions
    • Critical view of 2D vs. 3D mismatch
    • Using New technology in old format
  • Media Player Examples
  • The Wii Menu Structure
  • Exploring the Problem Space
    • Wii Characteristic View
    • Current System Characteristic
  • Problem Space and Description
    • Effect of Wiimote on Menu Design
    • Modification of Current menu to use Wiimote
  • Media Center Reviews
  • User centered design based around a Media Center
  • Methods of Prototyping and performing surveys
    • I.E. Flash, sketching, Internet survey
  • Other examples of Wiimote usage
  • How to build a gesture Capture device
  • Definitions of 'word' to be used for a gestural syntax
    • Movement Classification
  • Task Definitions
  • Mode Error
    • Possible Solutions
  • Physically possible Gesture Set Definition
  • Task List Classification
  • Basic UML Overview
  • Similar Thesis Project
  • Project Plan
  • Usability Test 1
  • Menu Idea 1
  • Which came first mode or Task
  • Interaction as a conversation
  • Gesture Set Usable basic list
  • Mapping Actions to Display
  • Menu Hierarchy
  • Gesture Notation
    • Labanian
    • Baineriff
    • Movement Analysis
  • Gesture Applicability with Sal
  • Gesture Pattern Recognizer
  • Application Wrapper
    • Other Application Commands (Win amp command calls)
  • User interviews and gesture suggestions
  • Wiimote Ergonomics (Alternative Grip)
  • Participatory Design
  • Aesthetic Usability Effects

Tuesday, August 14, 2007

Concerns about not actually doing new stuff

Not really concerns, but an idea to keep me doing new and different stuff concerning usability and design. From the UCD Facebook group there is a topic about the Aesthetic Usability Effect. This may be added as the system is designed. Building a text version of the system then adding the graphics gives the opportunity to examine the phenomenon.

This gives usability stuff I never thought about to the thesis, and the opportunity for user evaluations.

Friday, August 10, 2007

Notes from Matt

Problems
1. Does Shuffle shuffle both modes? If it cycles, what order?
There will be a dominant mode. Music will usually be dominant over pictures, whose order rarely needs to be changed, and videos less so.

2. Does play begin both modes or 1?
Both

3. Does zoom remain constant or reset for each picture
Remain constant

4. If seeking is the system Paused?
Music = Playing
Video = Paused
Pictures= Paused

5. How is the dominant mode shown?
No Idea, Some form of on-screen icon.

6. How do people sort their media?
Music = Artist, Albums, Singles
Video = Genre, Actors
Pictures = Location, Chronologically, Who, When

New Gestures:
1. Repeat Cycle works around the vertical plane
2. Play/Go = Create a unique gesture to represent the system (i.e. branding; ref's whistle still unknown) or Go by pointing forward
3. Playlist = Since everyone said that skip was left and right, this implies the playlist should be represented in a horizontal fashion. Therefore the gesture decided on was a pulling-up action that lifts the playlist from lying down
4. HUD = the same as the Playlist. No Need to have Playlist without the volume and seek
5. Hide = the opposite action from pulling the HUD up was to push it down; this can be used for volume, seek, and the HUD
6. Swap = the word swap implies 2 modes; other words were exchange, alternate, cycle. The gestures suggested were folding a corner of the page over, spinning a coin around, rotation, and Duffman (participant has had previous use of the system), which represent flipping the page over and turning a page over like a book
7. Go To Playlist creation = Creation of playlist = "Something magical", DaDa (?), using a wand. Settled on waving (which represents saying hello let's begin, wiping the slate clean, or goodbye to the current playlist)
8. The gestures to represent the modes were thus:
8.1 TV -> point a remote
8.2 Video -> Charades Video
8.3 Camera -> Hands holding a camera and pumping down representing the button press
8.4 Music -> Cupping the Ear
NOTE: Because Cupping the Ear is the same as the volume control, there needs to be a gesture for bringing up the Playlist menu. Going to try and find a new gesture for volume or music so I can remove the dependence on an arbitrary action to access the mode/Playlist menu
8.5 Add = Come Here, Bringing hands towards yourself
8.6 Remove = Go Away, Flicking hands away from yourself

Photo 1. Development of the Playlist Screen with Matt.
The idea is to pull CDs/videos/picture albums out of a drawer and onto the floor.



Photo 2. Playlist Screen Development 2 with Matt
Similar to the bottom right of the previous photo. Drawers at the top of the screen to flick through; Come brings the current selection down screen into the playlist. A Playlist gesture is needed to select the Playlist to remove items; this may be solved by an undo function. The drawers to the sides at the top sort the media in different fashions depending on the media type, i.e. movies are Actors, Titles, Genres etc. The bottom right shows the available gestures for a beginner user. The Playlist is sorted by the media which is added to it.


Photo 3. Beginner User Gesture Icons with Matt
Wanted to do some participatory design with someone. These icons are based roughly on my original headphones, camera, video recorder and TV. Each icon is superimposed with a representation of the action needed to be performed with the Wiimote.
Ideas directly taken from Matt include the axis of the video camera and the hands to represent the Wiimote (and then the Wiimote to represent the Wiimote). I was using arrows; Matt wants animations.

A set of these icons for each of the other actions needs to be developed and tested.

Exception to the rule

Yah for Eames (/Eames rolls his Eyes),
I just remembered the exception to the rule for the problem of a DVD playing with music playing at the same time. I've been working off the assumption that mute is not needed and that no-one would ever want music playing while a video is playing.

I remember them playing Floyd in a chill-out section of Splendour and recalled the interesting concept of watching "The Wizard of Oz" with "The Dark Side of the Moon" playing. Every time I tried doing it, syncing became a problem. This media player could solve that through connecting play to the same action, but thanks to user studies mute has been removed.

Brilliant. Oh well... who would ever want to watch the cash register of "Money" kick in as the movie goes from black and white to color? Might plan for at least one "undocumented feature".

Alternative grip

To preface this I got a digital camera yesterday and have been playing around with it.

I remember reading a web page about token usability design awards. I'm trying to find the page, but one award was for a kids' camera which had plastic bumps on it. The caption was something like "It looks like it wants to be held." These bumps are common on most cameras you can find today, i.e. where your right hand curls around the side.

Tried to test some gestures with mum yesterday, but she kept pressing the buttons, saying "I would point it and press up." Not too helpful but...
I took the Wiimote and turned it around so she couldn't press the buttons. BAM, the depression where the B button is melded into her palm.

Two side effects from this: she stopped pressing the buttons, and she also grips the remote when performing most gestures. The grip presses the B button automatically. What a sweet and simple way to tell some gestures are being performed.

The Alternative grip


The Depression into the hand.

Sorry about the darkness (I still can't find the flash).

People are also putting these ideas into remotes being designed now. Just wish I could find the camera example.
http://uselog.blogspot.com/2007/06/i-love-my-remote-control.html

Tuesday, August 7, 2007

Let's get started again.

Coming back to Bris tonight. Time to get back into gear. In respect to thesis fodder, I saw the ad for Mario Party and also a Wario-something game on the Wii. What struck me as curious is that my friends were surprised at all the gestures you could do. While it seems like a lot, the reason it picks them up is that it only needs to recognise one at a time. I think a section looking into the difficulties around differentiating between gestures for the Wiimote will be good fodder, because previous devices will be different from the Wiimote. Even a look into the actions used in the games compared to each other might be appropriate.

Sunday, July 29, 2007

Organise a Meeting

Stephen, I'll see you at Ipswich tomorrow but need to organize a meeting with you before Friday. On Friday I will be going on holidays for a week down south and won't have a chance to see you next week.

Also, I haven't completed the sign-on tutorial for Social and Mobile Computing because I might be dropping the subject, pending a graduation check, and I don't like having ghost accounts on the web. I feel it is going to be a shame if I have to drop the subject, because I'm doing Service Oriented Architecture as well this semester and the lecturer has already suggested the assignment will be to design a social web page using web services.

See you tomorrow

-SRE

Another Menu Hierarchy Based on Interviews

After looking at the interview information and using informed design, the new menu hierarchy is thus;

Mode Select Screen
Select TV
Select Picture/Video/Audio Drawer
Open Picture/Video/Audio Drawer
Close Picture/Video/Audio Drawer
Turn on TV
Play Playlist

Add Drawer to Playlist
Add Album/Song/Movie to Playlist
Remove Album/Song/Movie From Playlist
Clear Playlist
Select Item in Playlist


Display Screen
Play
Pause
FF/Rewind
Skip
Skip Other Mode
Volume Up/Down
Zoom In/Out
Reset
Random
Repeat
Equalizer
Snap Shot
Full screen
Partial Screen

Access To
Volume Setting
Zoom Setting
Seek
Mode Priority

I have a set of gestures for the different functions, and there is some extra functionality based on the interviews, such as record and pirate (as in pirate a CD). I have only briefly gone through the gestures from the interviews, so I'm working off the assumption I can differentiate between them effectively.

The drawer analogue was one suggested in the interviews and fixes the problem of TV not needing a playlist. As it stands, visually the system will be limited to supplying information to the user about the current state and not the available movements.

I'm thinking a beginner-user representation showing hints towards the gestures, which can be turned off when not needed, will supply the information about available gestures and let the user guess the right gesture. This comes back to the problem of making the gestures easy to recognize, so the user quickly knows what the gestures are.

I've got some sketches of the system, but since the gestures actually have nothing on screen they are a bit simplistic.

Saturday, July 28, 2007

Gesture Plan and update

Been a while. Here are the results from the second gesture interviews

P1.
Play,
Pause
Stop
FF/Rewind
Skip
Physical Eject
HUD
Mute
Volume

P2.
Play
Pause
Volume
Random
Burn
Next/Previous
Repeat
Pause/Play
Search/Seek
FF/Rewind
HUD
Visualisation
Equalizer
Volume
Mute
Focus
Reset

The Gestures suggested
Play:
Imitate Remote
Flag Dropping
Ref Blowing Whistle
Bounce a Ball
Putting a needle onto vinyl
Jumping
Clapping
Running In Circles
Pause
Time Out
Stop like a cop
Wheres the Remote
Pick Up
Random Movement
Spirit Fingers
Bounce a Ball
Imitate Remote
Mute
Shhh
Safe (base-ball)
kill it
Pledge
Cover ears
Next
Skip Rope
Swipe
Jump forward
Cast
Previous
Opposite of next
Seek
Rolling
Point to a particular point
Magnifying Glass
Scan Horizon
Fast Forward
Rolling (hurry Up)
Cycle Swipe
Rewind
Car Slow Down
Opposite action of Fast forward
Repeat All (group action commonly above head; singular usually below eye level)
Loop
Lasso (but throw at what)
Flight gear stick
Loop-de-loop
Wave Hands In air
Repeat 1
Point to a song
Attract attention (Hey you)
single action compared to double for all
Volume
Up/Down
Mix Vinyl
Cupping Ear
Turn Knob
Point to ear
Visualisation
Album View -> Albums in a rack and sorted into draws
Flicking through CDs
Opening a book
Grabbing from a shelf
Equalizer
Instead of changing the levels, change based on instrument
Bass - pump
Treble - tweeters
Microphone
Guitar
Piano
Drum
Trombone
Focus
Small circles
Twist camera
Twist microscope
Random On
Shuffle Cards
Stir in a bowl
Scrumple
Random Off
Iron
Lay out cards
Flatten
Reset
Blank/Wipe screen
Wipe Slate Clean
Scrumple then throw away
Clear Desk
Burn
Lighter
Light match
Start fire
Rip paper
Eye patch (pirate)
Arghhh(pirate)

Friday, July 13, 2007

The Menu Hierarchy

The more I work on this the more I realize I actually need expert users to examine the design choices needed and the more I realize non-expert users are going to make the design more convoluted.

I am still going to have a chat with some people about the menu design but don't expect much to come of it. I plan to talk to some people more about how to represent the movement on the screen. This stuff I just don't have a clue about, because I've been doing the actions too many times while testing the gesture code. This proximity has made it like saying a word too many times until it loses its meaning. Their fresh viewpoint should be helpful. From them I need stuff like: if I "Reel It In" like fishing, and I have curved edges to the screen, would the curve be on the bottom (implying the screen is pulled up from the bottom) or curved at the top (implying the screen is picked up at the top of the motion)? Furthermore, does the object move (i.e. Apple) or the scene move (3D desktop)?

Righto, that's enough. I think I might do up another version of the previous GLUT menu to help explain this stuff to the people I am going to ask.

This is a version of the menu hierarchy based purely on what I think would work. It also includes running 2 modes at once; this goal seems to have drifted out of scope as of late.

Important design choices made are:
  1. The system will somehow show the arbitrary movements (Duffman in particular) to reveal the new menus.
  2. The system will be able to guess the best course of action for some actions, i.e. it can tell when to pause and play on the Make It So action.
  3. The double swipe action initiates the second mode version of the single swipe action.
  4. The modes are represented as layers on top of each other.
  5. The actions being performed will depend on the state of the remote. This is a problem because it introduces more mode problems, but they are one-off modes. For example, as it stands, performing Duffman and holding the remote pointing high initiates the mode selection menu. Then the roll functions can be used to represent horizontal rotation instead of swiping, since in that position the remote won't pick up swiping because it's rotation around the y axis. But it might work.
    1. As a side note, I don't want to recreate Apple's album chooser where it shows the modes. I would like something else, but we'll see.
  6. Even though this menu hierarchy is based on multiple modes, I would like to make a single-mode version. One that makes mode swapping simple but keeps the modes separate from each other.
    1. This version of the menu allows access to the basic functions of the second mode, but a version without global access to the second mode's functions might be worth testing. Also coding this shouldn't be too hard, since mode swapping is an arbitrary function to a second menu; therefore it should just require a new set of links and allocation of screen icons.
    2. A single-mode version would be interesting to contrast because it could also allow access to a couple more functions of the single mode on the main screen, such as skip 5 sec, which aren't included in the multi-modal version.
  • Top Level
    • Front Layer Next - Swipe Right
    • Front Layer Previous - Swipe Left
    • Volume Up / Zoom Out - Double Pitch Down
    • Volume Down / Zoom In - Double Pitch Up
    • Play / Pause / Choice Prompt Link - Make it so
    • Repeat - Roll Left
    • Random - Roll Right
    • Increase Play Speed / Fast Forward - Double Right Hold
    • Slow Play Speed / Rewind - Double Left Hold
    • Continuous Menu Link - Reel
    • Mode Select Link - Duffman (Hold)
    • 2nd Mode Next - Double Swipe Right
    • 2nd Mode Previous - Double Swipe Left
  • Continuous Menu
    • Volume Level - Change Pitch then Swipe
    • Seek Position - Roll then Swipe "up" Sun Ray
    • Mute - Double down
    • Return Volume / Max Volume - Double Up
    • Fast Forward - Double Roll Right (Hold)
    • Rewind - Double Roll Left (Hold)
    • Swap Mode - Duffman
    • Main Menu Link - Cast
  • Mode Selection Link
    • Add Layer to Front - Reel
    • Add Layer to Back - Cast
    • Reset Modes - Duffman
    • Next Mode - Roll / Swipe Right
    • Previous Mode - Roll / Swipe Left
  • Choice Prompt Menu
    • Play / Pause - Swipe Right
    • Stop - Swipe Left

Thursday, July 12, 2007

Menu Design




This is a picture of the progression through the gestures based on the state of the Wiimote.
Initial is holding it like a normal remote; High is oriented towards positive Y; Rolling is turning around z with forward orientation; Pitch is around x with forward orientation; and Swiping is rotation around z with high orientation.


The gestures are reduced to their basic form, i.e. Duffman is the progression from Initial to High, then High following Reel, then Pump. It also represents the feasible menu hierarchy choices that can be made.

Having people around tonight and am going to try and decide which tasks are best represented by each state, and the grouping of the tasks to fit into each of the states.

Wednesday, July 11, 2007

Application Wrapper

Thoughts on how to implement the final program
1. Write the whole thing. I.E. Create a media player
2. Put a wrapper over the desktop that controls the other applications installed on the computer

Link to how to put a glut window as the active background.
http://groups.google.com.au/group/comp.graphics.api.opengl/browse_thread/thread/681fd01ff6a70dcf/100d4918574ae32c?lnk=st&q=opengl+show+desktop&amp;amp;rnum=1&hl=en#100d4918574ae32c

Link to how to change YDock
http://www.dockex.com/yzdock

Post script:
Just found out that Y'z Dock got a cease and desist from Apple for distribution of the program; the program apparently is an exact copy of Apple's dock. Ha, lucky I got a version. Mind you, I really wish it had a panel showing the programs open so I could feasibly get rid of the task bar.

Ramblings on menu Design

Standard menus use a hierarchical structure. After watching an MS Surface demo on YouTube, I had the idea of a menu that follows the user. In the demo, as pictures are looked at, the user can swipe the picture to the side to keep it in focus of the current Surface mode. Applying this to tasks, the user would effectively be selecting their own menu.

This leads the menu design towards 2 separate controls: one mode to select the task being used, and a second to control the tasks which have been selected. This appears to be an innovative way of controlling the application and a fun way to explore a different menu design.

Easier Than I Thought

5 min later...

Got Hello World running from inside the program. Now to get to Winamp.

EDIT

5 more minutes later ...

I just got Winamp to skip tracks using the Wiimote.
This is so much fun.


EDIT

Just for Reference

Previous track button 40044
Next track button 40048
Play button 40045
Pause/Unpause button 40046
Stop button 40047
Fadeout and stop 40147
Stop after current track 40157
Fast-forward 5 seconds 40148
Fast-rewind 5 seconds 40144
Start of playlist 40154
Go to end of playlist 40158
Open file dialog 40029
Open URL dialog 40155
Open file info box 40188
Set time display mode to elapsed 40037
Set time display mode to remaining 40038
Toggle preferences screen 40012
Open visualization options 40190
Open visualization plug-in options 40191
Execute current visualization plug-in 40192
Toggle about box 40041
Toggle title Autoscrolling 40189
Toggle always on top 40019
Toggle Windowshade 40064
Toggle Playlist Windowshade 40266
Toggle doublesize mode 40165
Toggle EQ 40036
Toggle playlist editor 40040
Toggle main window visible 40258
Toggle minibrowser 40298
Toggle easymove 40186
Raise volume by 1% 40058
Lower volume by 1% 40059
Toggle repeat 40022
Toggle shuffle 40023
Open jump to time dialog 40193
Open jump to file dialog 40194
Open skin selector 40219
Configure current visualization plug-in 40221
Reload the current skin 40291
Close Winamp 40001
Moves back 10 tracks in playlist 40197
Show the edit bookmarks 40320
Adds current track as a bookmark 40321
Play audio CD 40323
Load a preset from EQ 40253
Save a preset to EQF 40254
Opens load presets dialog 40172
Opens auto-load presets dialog 40173
Load default preset 40174
Opens save preset dialog 40175
Opens auto-load save preset 40176
Opens delete preset dialog 40178
Opens delete an auto load preset dialog 40180



WM_USER messages are sent using SendMessage(). In C/C++, you can send these messages by calling:

code:

int ret=SendMessage(hwndWinamp,WM_USER, data, id);


data is used by many of the messages, but not all. For messages where the meaning of data is not defined, simply use 0.

Here is a list of the currently supported ids that you can use from within Winamp plug-ins or from other applications (see plug-in only WM_USER messages, below, for more):


0 Retrieves the version of Winamp running. Version will be 0x20yx for 2.yx. This is a good way to determine if you did in fact find the right window, etc.
100 Starts playback. A lot like hitting 'play' in Winamp, but not exactly the same
101 Clears Winamp's internal playlist.
102 Begins play of selected track.
103 Makes Winamp change to the directory C:\\download
104 Returns the status of playback. If 'ret' is 1, Winamp is playing. If 'ret' is 3, Winamp is paused. Otherwise, playback is stopped.
105 If data is 0, returns the position in milliseconds of playback. If data is 1, returns current track length in seconds. Returns -1 if not playing or if an error occurs.
106 Seeks within the current track. The offset is specified in 'data', in milliseconds.
120 Writes out the current playlist to Winampdir\winamp.m3u, and returns the current position in the playlist.
121 Sets the playlist position to the position specified in tracks in 'data'.
122 Sets the volume to 'data', which can be between 0 (silent) and 255 (maximum).
123 Sets the panning to 'data', which can be between 0 (all left) and 255 (all right).
124 Returns length of the current playlist, in tracks.
125 Returns the position in the current playlist, in tracks (requires Winamp 2.05+).
126 Retrieves info about the current playing track. Returns samplerate (i.e. 44100) if 'data' is set to 0, bitrate if 'data' is set to 1, and number of channels if 'data' is set to 2. (requires Winamp 2.05+)
127 Retrieves one element of equalizer data, based on what 'data' is set to.
0-9 The 10 bands of EQ data. Will return 0-63 (+20db - -20db)
10 The preamp value. Will return 0-63 (+20db - -20db)
11 Enabled. Will return zero if disabled, nonzero if enabled.
128 Autoload. Will return zero if disabled, nonzero if enabled. To set an element of equalizer data, simply query which item you wish to set using the message above (127), then call this message with data
129 Adds the specified file to the Winamp bookmark list
135 Restarts Winamp

Gesture Pattern Recogniser Part 2

Got the program to recognise the set of simple gestures. It doesn't include the functions that rely on rotation before action, for example double roll right and left. The other gestures are recognised, with some difficulty. The recogniser has a lot of room for improvement, such as putting in a real pattern recogniser, but that looks a bit too mathematical for the holidays. It can also be the base for generating the training set, if that is the way the program heads in the future.

The decision made during the creation was to split the data for each movement 4 ways; therefore each motion has 12 values. The data was codified into -2, -1, 0, 1, 2 based on high and low break points. Then I watched which values appeared when each motion was executed, and personal choice was used to decide on each action's qualifiers.

Now to see if I can't get it to open some external program..

Monday, July 9, 2007

Gesture Pattern Recogniser


After the last post I've been working on the pattern recognizer. The initial one shown in the previous post was a simple slapped-together job.

Troubles I had with the new one included customisable breakpoints.
Deciding when a gesture is running, and sensitivity, were the main concerns.

For reference, the range is about -150 to 150.

I looked at 2 different ways of deciding when the gesture started. The first version used a level of acceleration (usually 20, 30, or 40). This caused problems because the motion had to be jerky to get the higher values, and the lower values were too easy to hit. Hence crap sensitivity.

The second version uses the rate of change in acceleration (a jerk; I think that's its real name). Its threshold value is actually only 1. It has to be low, or else the speed increase needed is insane. I had trouble keeping the motion going on only 1 axis, but as soon as 3 were included it's not hard to create a fluid motion that keeps picking up values, since a change is only needed on 1 axis to record all values, and the change can be positive or negative.
I think the wrist itself actually helps this by giving minor rotations around alternate axes.

Another problem was reducing the numbers. Previous literature talks about normalising the values: they used all the data and normalised properly. Both versions just took the average of set portions of the data (i.e. quartiles if using 4 as the split_size). The second version can change the split value in the code easily with a "#define".

The new version doesn't actually look for patterns; instead it stores each motion so that each of the split values X1, X2, X3, Y1, Y2... can be averaged. This works as a simple normalisation.

Since it's simple int division, the remaining data values are ignored. This I think is acceptable because the simple gestures, such as swipe, only need to split the data into 2 to see the pattern. And for more complex actions the data size increases, so the remainder becomes less important.

The new version can reset the trainer and read the averages so far. It doesn't save values, so it must be used in combination with writing the recogniser. Also it can't remove bad motions, i.e. mistaken actions.

Might work on adding these things and then testing it with a few people to get the actual values to be used. Or, aiming higher, including a training program in the final product. Sounds good, but the nuances of the motions are easily confused if you don't understand how the values are being read.

Found it particularly difficult to explain a jerking motion (for the initial program) to a friend from work. On the upside, once they had it, they understood the entire idea behind the gesture part of the thesis. This finding from a hands-on approach might be useful later on.

Friday, July 6, 2007

Gesture Recognition



This is serious fun.


Wednesday, July 4, 2007

Gesture Recognition Ideas and Exploration

2-Stage Recognition of Raw Acceleration Signals
Relies on on/off states
Recognition Framework:
1. Calibration
2. Preprocessing
1. Motion area detection: interval of a gesture motion in time
2. Norm normalization: allowing for gravity
3. Gaussian smoothing: remove hand trembling
4. Resampling: normalise with respect to motion speed
3. Feature point extraction: determine the distance between the sampling points
4. Recognition:
1. Bayesian Network modeling of relationships
2. Find the model with the highest model likelihood
5. Confusing pair discrimination
1. Discriminate between confusing models, i.e. 0 and 6
2. Double integration doesn't work because of double integration error accumulation

Chapter 10. Gesture Recognition, Matthew Turk
Hummels and Stappers: 4 aspects of a gesture which may be important to its meaning
1. Spatial: where it occurs, the location a gesture refers to
2. Pathic: the path which a gesture takes
3. Symbolic: the sign a gesture makes
4. Affective: the emotional quality of a gesture
Sturman suggested a taxonomy of whole-hand input that categorizes input techniques along 2 dimensions
1. Classes of hand actions: continuous or discrete
2. Interpretation of hand actions: direct, mapped, or symbolic.
System Design Suggestions
1. Do inform the user
2. Do give the user feedback
3. Do take advantage of the uniqueness of gesture
4. Do understand the benefits and limitations of the particular technology
5. Do usability testing of the system
6. Do avoid temporal segmentation if feasible
7. Don't tire the user
8. Don't use gesture as a gimmick
9. Don't increase the user's cognitive load
10. Don't require precise motion
11. Don't create new, unnatural gestural languages

Hidden Markov Models for Spatio-temporal Pattern Recognition
Conclusions
1. The Baum-Welch algorithm is a hill-climbing technique that is generally unable to find the global maximum
2. Baum-Welch is often very good
3. HMMs perform better than Baum-Welch on simulated HMM data, but these results do not necessarily translate into improved performance in real-world applications.
Other Points
1. Fully connected topology: there is not necessarily a defined starting state and all state transitions are possible, such that Aij != 0 for all i,j in [1,N]
2. Left-right topology: popular in speech recognition applications. There is a defined starting state and only state transitions to higher-index states are allowed
3. Left-right-banded topology: the transition structure contains only self-transitions and next-state transitions, i.e. states cannot be skipped
4. Gestures have a natural start state and finish state, and thus it is reasonable to adopt the LR model


When to recognise
Many applications require an on/off state to be selected before gesture recognition becomes available. Stroke-It relies on the right mouse button being pressed down. Hollar et al.'s wireless static hand gesture recognition suggests movement of the hand from a stable state, and the iPhone activates when the screen is touched.


From the AiLive movie
Problems come from speed, orientation, direction, and left/right handedness. Usually uses 3 motions.

Monday, June 25, 2007

Belated Hello

As a side note, Hello to Jared.

I was meaning to say hello as soon as I found out you were watching, but I managed to forget about that completely during exams.

I just found your paper on Meaning in Movement and agree with the importance of the researcher in gesture design. I like the idea of the role of the researcher being to assist in drawing out personal experiences to help people describe movements. I easily liken the idea to getting customers to describe flavors in wine (I work as a waiter). Often they don't know how to describe something, but with assistance won't stop talking about it, and often come up with relationships between the flavors and their food that I previously never thought of.

I'm going to try and do the Interaction Design subject next semester and am reviewing the notes off the web. It was good to see a picture or two from your papers in there.

Okay, that's enough, time to get back to work.

-SRE

Applicability of Actions 2

Using Sal's advice I now have a survey to test the applicability of the actions. The survey basically categorizes each action into one of the 3 groups previously described.

To help the marriage of the tasks (i.e. play) and the actions, I added to the survey a classification of all the tasks into what are the same groups.

Stephen, I am about to email you the survey. Can you have a look at it to review for usefulness? All the stuff below is in the email in case of email errors.

Some of my concerns relating to the survey include;
  • I've used "different" categories for the actions and the tasks, but the categories themselves are actually the same groups. I think having the categories different makes the distinctions between the main stages easier, but creates more difficulty for the participant in performing the survey.
  • For formatting, on each page I've repeated the definitions of the categories being used. Remembering definitions has been a problem for me when filling out other people's surveys.
  • I'm not sure about the introduction. It seems brief and I feel it's missing something important.
  • I haven't written an administrator version (which would describe the actions and tasks) because I know I will be performing the survey. But if it is submitted within the thesis for reference, will this administrator material be needed?
  • Lastly, what is an effective method of judging the sample size for a project like this? I know that it depends on the sample selection method, so for a convenience sample I assume "until a pattern emerges" will be good enough.

-----------------------------------------------------------------------------------------

PAGE 1

Task and Action Classification Survey

The survey occurs in two stages. The first stage involves selecting categories for commonly used tasks which are related to media player usage. The second stage involves selecting categories for set actions that will be explained to you by your survey administrator.

The first, or Media Player, stage of this survey involves choosing categories for common media player tasks. There are three categories to select from: Easy Fine Task, Complex Fine Task and Error Prone Task.

The Media Player Task categories are defined thus:

Easy Fine Tasks

An easy fine task is characterized by frequent usage and desirability for simple actions.

Complex Fine Tasks

A complex fine task is characterized by the task’s ability to control or select a level or option of a variable.

Error Prone Tasks

An error prone task is characterized by the desirability for a user not to accidentally perform the action.

After selection of the category of the task, each of the tasks within the first (Media Player) stage is to be rated on a scale of disappointment if the task is performed but is not the task desired.

The second, or Action, stage of this survey involves choosing categories for a set of actions that use a Wii Remote control. There are three categories to select from: Basic Actions, Fine-Tuning Actions and Difficult Actions.

The Action Categories are defined thus:

Basic Actions

This category is represented by actions that are simple to perform and do not require accuracy and precision.

Fine-tuning Actions

This category is represented by tasks that are not simple to perform and require accuracy and precision.

Difficult Actions

This category is represented by tasks that are not easy to perform accidentally.

After selecting the category of the action, each of the actions within the Action Stage is to be rated on a scale of difficulty to perform.
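Since the task and action categories are intended to be the same three groups under different names, the eventual pairing of tasks to gestures could be driven by a simple category match once the survey results are in. A sketch of that idea; the one-to-one mapping below is my assumption about the intended analysis, not part of the survey itself:

```python
# Assumed correspondence between the two category schemes.
TASK_TO_ACTION_CATEGORY = {
    "Easy Fine Task": "Basic Action",
    "Complex Fine Task": "Fine-Tuning Action",
    "Error Prone Task": "Difficult Action",
}

def compatible(task_category, action_category):
    """A task and an action pair up if their categories correspond."""
    return TASK_TO_ACTION_CATEGORY.get(task_category) == action_category

print(compatible("Easy Fine Task", "Basic Action"))    # True
print(compatible("Error Prone Task", "Basic Action"))  # False
```

Under this scheme an error prone task such as Record TV would only be assigned a gesture classified as a Difficult Action, which matches the motivation for the categories.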

PAGE 2

The Media Player Stage

For each of the following tasks please:

  1. Select a category, and
  2. Select a level of disappointment (0 being no disappointment to 10 being completely disappointed) if the task were to be performed and was not the desired task.

If you have any questions please ask your administrator. The definitions of the categories are provided for your reference.

No   Task            Category (select one)                    Disappointment Level (circle one)
                                                              0 – No Disappointment, 10 – Completely Disappointed

1    Play            Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
2    Pause           Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
3    Stop            Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
4    Channel Up      Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
5    Volume Up       Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
6    Next            Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
7    Previous        Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
8    Zoom In         Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
9    Default Zoom    Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
10   Zoom Best Fit   Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
11   Repeat          Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
12   Random          Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10

Easy Fine Tasks

An easy fine task is characterized by frequent usage and desirability for simple actions.

Complex Fine Tasks

A complex fine task is characterized by the task’s ability to control or select a level or option of a variable.

Error Prone Tasks

An error prone task is characterized by the desirability for a user not to accidentally perform the action.


PAGE 3

The Media Player Stage


For each of the following tasks please:

  1. Select a category, and
  2. Select a level of disappointment (0 being no disappointment to 10 being completely disappointed) if the task were to be performed and was not the desired task.

If you have any questions please ask your administrator. The definitions of the categories are provided for your reference.

No   Task              Category (select one)                    Disappointment Level (circle one)
                                                                0 – No Disappointment, 10 – Completely Disappointed

13   Channel Jump      Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
14   Volume Select     Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
15   Set Speed         Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
16   Fast Forward      Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
17   Rewind            Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
18   Mute              Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
19   Record TV         Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
20   Pause Recording   Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
21   Stop Recording    Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
22   Select Source     Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
23   Add Source        Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10
24   Stop Source       Easy Fine / Complex Fine / Error Prone   0 1 2 3 4 5 6 7 8 9 10

Easy Fine Tasks

An easy fine task is characterized by frequent usage and desirability for simple actions.

Complex Fine Tasks

A complex fine task is characterized by the task’s ability to control or select a level or option of a variable.

Error Prone Tasks

An error prone task is characterized by the desirability for a user not to accidentally perform the action.


PAGE 4

The Action Stage

For each of the following actions, your administrator will describe and perform the action. Please copy the action for the administrator to review. If the action is not performed correctly, your administrator will direct you in performing it correctly. Once the administrator agrees that you are performing the correct action:

  1. Select a category, and
  2. Select a level of difficulty (0 being no difficulty to 10 being very difficult) for performing each action.

If you have any questions, please ask your administrator. The definitions of the categories are provided for your reference.

No   Action          Category (select one)             Difficulty Level (circle one)
                                                       0 – No Difficulty, 10 – Very High Difficulty

1    Swipe           Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
2    Reel it in      Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
3    Pitch           Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
4    Roll            Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
5    Make it so      Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
6    Duffman         Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
7    Wipe Eye        Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
8    Reel Pitch      Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
9    Sun Rays        Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
10   Roll Select     Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
11   Pitch Select    Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
12   Double Swipe    Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
13   Double Pitch    Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10
14   Swipe Reel      Basic / Fine Tuning / Difficult   0 1 2 3 4 5 6 7 8 9 10

Basic Actions

This category is represented by actions that are simple to perform and do not require accuracy and precision.

Fine-tuning Actions

This category is represented by tasks that are not simple to perform and require accuracy and precision.

Difficult Actions

This category is represented by tasks that are not easy to perform accidentally.