The code as it stands is sloppy, so here's a UML diagram to clean it up. It's just been so long since I've done any UML. I'm having a bit of trouble separating the input/output devices from the gestures and tasks, respectively.
The major differences are the abstraction of the screen state and the modularization of the input/output devices. Any hints are welcome. I'm going to make the hackt version represent the gestures from the user studies (where the gestures are already resolved, i.e. no shuffle/repeat). Then I'll convert the whole thing to this map if no comments appear.
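To make the separation concrete, here's a minimal sketch of one way the diagram could map to code: input devices emit raw events, a recognizer turns them into resolved gestures, and output devices render an abstracted screen state. All class and method names here are hypothetical illustrations, not the actual design.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Gesture:
    """An already-resolved gesture (e.g. no shuffle/repeat ambiguity)."""
    name: str


class InputDevice(ABC):
    """Raw input source; knows nothing about gestures or tasks."""
    @abstractmethod
    def poll(self) -> str: ...


class KeyboardInput(InputDevice):
    """Hypothetical concrete device fed by a list of raw events."""
    def __init__(self, events):
        self._events = iter(events)

    def poll(self) -> str:
        return next(self._events)


class GestureRecognizer:
    """Decouples devices from tasks by mapping raw events to Gestures."""
    def recognize(self, raw: str) -> Gesture:
        return Gesture(name=raw.strip().lower())


class ScreenState:
    """Abstract screen state, independent of any output device."""
    def __init__(self):
        self.lines: list[str] = []

    def apply(self, gesture: Gesture) -> None:
        self.lines.append(gesture.name)


class OutputDevice(ABC):
    """Renders a ScreenState; knows nothing about input or gestures."""
    @abstractmethod
    def render(self, state: ScreenState) -> None: ...


class ConsoleOutput(OutputDevice):
    def render(self, state: ScreenState) -> None:
        for line in state.lines:
            print(line)
```

With this split, swapping a device only touches the `InputDevice`/`OutputDevice` layer, and the gesture/task logic only ever sees `Gesture` and `ScreenState`.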


