Wow Hi

Posted: June 30, 2014 in Uncategorized

Been a long time. I don’t generally have much to say publicly, but I figure I’ll see if I can’t reinvent this blog thing into a publicly-visible note-taking space. I usually write random crap in notepad, so hopefully by putting it here something more valuable might come out of it. :)

Stupid Emulator Tricks

Posted: July 26, 2011 in Uncategorized

Or… trick. Just one.

I’ve been poking around with Windows Phone more often lately. The system makes heavy use of multitouch gestures, so I thought it was a shame that the emulator doesn’t come with any kind of multitouch emulation. In the emulator all you get is the mouse, so you can’t pinch-zoom or rotate or do any of the other fun things multitouch can enable. I don’t actually have a physical Windows Phone of my own, so this made me sad.

That was until I downloaded the Surface 2.0 SDK. It comes with a utility called ‘Input Simulator’ that lets you emulate Surface-style multitouch interaction with an ordinary PC mouse. It leverages the system-level multitouch support Microsoft shipped with Windows 7.

The coolest side effect of this that I’ve seen is that it works perfectly with the Windows Phone Emulator. My friend Jobi put together this video to demonstrate.

Ta daaa

Double?!?

Posted: June 23, 2011 in Graphics, Rant

I feel like a quick rant…

Why on earth is the standard numeric data type used throughout WPF and Silverlight a double?

97% of the time, when you’re working with numbers in any app, a float will do you just fine. Particularly when you’re merely dealing with layout and transforms in WPF/SL, I can scarcely imagine ever needing anything more than a float.

“Why NOT doubles?” you may ask. After all, if you just use them everywhere then you never have to worry about switching between double and float.

If you do a lot of work with special animations, as I do, you wind up using calls like TransformToDescendant/TransformToVisual a lot. And you may notice, as I have, that you are really limited on the number of these calls you can make per frame without degrading your application’s performance; they are not cheap in the slightest. These calls involve a series of matrix multiplications that can really add up quickly if you’re not careful.

It might surprise you to learn that, if you switch to floats and strip away the inherent overhead of C#, matrix multiplication is almost trivial. It’s one of the more common operations a CPU performs, and CPU designers have optimized for it in hardware umpteen different ways. With SIMD extensions like SSE, a single instruction multiplies four packed floats at once, so a whole row of a 4×4 float matrix multiply takes only a handful of machine instructions; the same registers hold just two doubles, so double math gets at best half the throughput. And even ignoring SIMD, floats take half the memory and cache space of doubles.

So by using doubles everywhere, Microsoft has effectively thrown away much of the benefit of those optimized paths for applications that need to do this sort of extremely common math. I’m too lazy at the moment to put together a proper benchmark comparison, but if I get around to it I’ll post it here. The bottom line is we can’t have as much fun with animation as we otherwise could, and I, for one, have no idea why.
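In the meantime, here’s a minimal sketch of the kind of comparison I mean: the same naive 4×4 matrix multiply run over float and over double. The layout, iteration count, and timing approach are just illustrative choices on my part, nothing out of WPF itself, and the gap you see will depend heavily on your compiler and whether it vectorizes the loop, so build with optimizations on if you try it.

#include <chrono>
#include <cstdio>

// Naive 4x4 matrix multiply, templated so the exact same code runs for
// float and for double. Nothing WPF-specific here; it's only meant to show
// how much the element type alone can matter.
template <typename T>
void Multiply(const T a[16], const T b[16], T out[16]) {
    for (int row = 0; row < 4; ++row) {
        for (int col = 0; col < 4; ++col) {
            T sum = 0;
            for (int k = 0; k < 4; ++k)
                sum += a[row * 4 + k] * b[k * 4 + col];
            out[row * 4 + col] = sum;
        }
    }
}

template <typename T>
double TimeIt(int iterations) {
    T a[16], b[16], c[16];
    for (int i = 0; i < 16; ++i) {
        a[i] = static_cast<T>(i) * static_cast<T>(0.25);
        b[i] = static_cast<T>(16 - i) * static_cast<T>(0.5);
    }
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        Multiply(a, b, c);
        a[0] = c[0];  // feed a result back in so the optimizer can't delete the loop
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main() {
    const int iterations = 10000000;
    std::printf("float : %.1f ms\n", TimeIt<float>(iterations));
    std::printf("double: %.1f ms\n", TimeIt<double>(iterations));
    return 0;
}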

It’s here, in beta form!

http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/download.aspx

As part of Microsoft’s SDK launch event, they held a 24-hour ‘code camp’ in which small teams had 24 hours to produce something cool with the Kinect. I was invited to participate, and made a simple app that turns the user into a Cylon! It turned out quite well.

You can see the video currently up at http://channel9.msdn.com/live. Scroll to roughly 2:20:00 to see me bumble around on camera.

I also got a chance to wedge in a really basic Kinect-powered PowerPoint presentation viewer. It existed in OpenNI first… so not really in the spirit of the code camp, but what I thought was remarkable was that the Microsoft SDK is easy enough to get into that I was able to port it while sitting on the couch waiting to go on the air at the Channel 9 building. Well done, Microsoft!

Well, that took entirely too long. Anyway, I tried out the idea I mentioned in my last post and it didn’t go very well. Oh well. Basically, the idea was to provide an analog of mouse down/up events for a Kinect hand cursor by having the user touch their hands together: when the hands come together, treat that as the equivalent of a ‘down’ event; when they come apart, treat it as an ‘up’ event.
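The detection side of it is dead simple, for what it’s worth; something along these lines (the joint type, thresholds, and helper class here are my own illustrative stand-ins, not the actual Kinect SDK API):

#include <cmath>

// Illustrative stand-in for a tracked joint position in meters. The real
// Kinect SDK exposes its own skeleton/joint types; this is just a sketch.
struct Point3 {
    float x, y, z;
};

enum class HandClickState { Up, Down };

// Treat "hands touching" as the two hand joints being within a small
// distance of each other. Two thresholds (engage/release) add a bit of
// hysteresis so the state doesn't flicker right at the boundary.
class HandsTogetherClick {
public:
    // Feed this once per skeleton frame; returns the current state.
    HandClickState Update(const Point3& leftHand, const Point3& rightHand) {
        const float dx = leftHand.x - rightHand.x;
        const float dy = leftHand.y - rightHand.y;
        const float dz = leftHand.z - rightHand.z;
        const float distance = std::sqrt(dx * dx + dy * dy + dz * dz);

        if (state == HandClickState::Up && distance < kEngageDistance)
            state = HandClickState::Down;    // hands met: the 'down' event
        else if (state == HandClickState::Down && distance > kReleaseDistance)
            state = HandClickState::Up;      // hands apart: the 'up' event

        return state;
    }

private:
    static constexpr float kEngageDistance = 0.10f;   // ~10 cm, a guess
    static constexpr float kReleaseDistance = 0.15f;  // wider, to avoid flicker
    HandClickState state = HandClickState::Up;
};

The code was never the hard part, though.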

It doesn’t work that well because your non-cursor arm gets really tired moving up to reach the other hand, and then back down when you’re done. Particularly if you’re already reaching out pretty far with your cursor hand to reach a far corner of the screen, touching it with your other hand can be damn near impossible.

So, this idea fizzled. The search for something better than hover continues!

I’m excited! I don’t want to divulge details yet but I’ve thought of a way to interact with an application using Kinect that doesn’t involve hover-over, and if it works well it ought to have all the power and flexibility of a standard Apple mouse! (pretty low standard, I know… /rib /rib)

I’ll post a demo of it when I get the chance. The idea is just to easily gesture something that’s the equivalent of a mouse click. Pull it off and I’ll have saved people countless hours of hovering their hand in one place for ~1.5 seconds at a time. :)
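For anyone wondering what that hovering actually is under the hood, it’s basically just a dwell timer: keep the cursor over a button long enough and it fires. Here’s a minimal sketch of the pattern; the timings and structure are purely my own illustrative choices, not any particular toolkit’s API.

#include <chrono>

// A hover-to-activate ("dwell") button. Call Update() every frame with
// whether the hand cursor is currently over the button; it returns true
// on the frame the button fires.
class DwellButton {
public:
    explicit DwellButton(std::chrono::milliseconds dwellTime)
        : dwellTime(dwellTime) {}

    bool Update(bool cursorIsOver) {
        const auto now = std::chrono::steady_clock::now();
        if (!cursorIsOver) {
            hovering = false;           // leaving the button resets the timer
            return false;
        }
        if (!hovering) {
            hovering = true;            // just entered: start the dwell timer
            hoverStart = now;
            return false;
        }
        if (now - hoverStart >= dwellTime) {
            hovering = false;           // reset so it doesn't fire again every frame
            return true;
        }
        return false;
    }

private:
    std::chrono::milliseconds dwellTime;
    std::chrono::steady_clock::time_point hoverStart{};
    bool hovering = false;
};

// Usage, with the ~1.5 second dwell mentioned above:
//   DwellButton playButton{std::chrono::milliseconds(1500)};
//   if (playButton.Update(cursorOverPlayButton)) { /* activate */ }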

In case you haven’t seen, I have a series of blog posts published on IdentityMine’s website regarding Kinect-based interaction and some of the thoughts and experiments I’ve done with it. You’ll find them here:

Part 1: Intro
Part 2: Gestures
Part 3: Cursor
Part 4: Buttons
Part 5: Multiple Users

My thinking surrounding a lot of these issues has already evolved quite a bit, so some of it is outdated at this point, but I’ll stand by it as an accurate reflection of my thinking at the time, and update here as things progress.

For example, although I feel it is a crutch, I do find myself using hover-activated buttons quite a bit. Finding alternative interactions that actually work well is difficult. It’s rather easy to create an interaction that’s tuned to my particular way of gesturing, but other people do things differently, and properly identifying them all is a real challenge.

It’s not just getting the machine to recognize gestures right, however. Getting the user to understand how to do a particular gesture is the other half of this. It’s the easier half, to be sure. The solution is to play an animation displaying exactly how to perform the gesture. I’ve learned that plain English descriptions of a gesture are never enough to properly convey the right information. But I’m no artist so it’ll be a bit before I get around to those. :)