The Augmented Reality segment focused on getting the user's attention when they view the world through the lens of a device, and then providing them with relevant information – for instance, seeing informational labels pop up about an unfamiliar place. A problem with labels, however, is linking them contextually to the objects they describe, and keeping them large enough relative to the screen to be helpful without clustering so densely that they cause clutter. Solving this problem would help users navigate real-world scenes by placing these aids where they are most useful.
Eye tracking was a highlighted topic for Augmented Reality, and when discussing label placement this is understandable. Knowing where a user is about to look means only the labels contextual to that area need appear – which decreases the number of labels populated at any one time and makes the clutter problem all but disappear. Eye tracking methods include infrared light that detects the pupil, and heat maps of where the gaze falls. The latter is useful for studying eye movements after the fact, but the former is a technology that could be integrated into devices and used by real software in real time.
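A minimal sketch of how gaze data might drive that kind of label culling, assuming a hypothetical scene of labeled points of interest and a gaze position reported in screen pixels (the labels, radius, and cap below are my own illustrative choices, not from the presentation):

```python
import math

def visible_labels(labels, gaze_xy, radius_px=250, max_labels=3):
    """Return only the labels closest to the user's gaze point.

    labels     -- list of (name, (x, y)) screen positions for points of interest
    gaze_xy    -- (x, y) gaze position from the eye tracker, in pixels
    radius_px  -- ignore anything farther than this from the gaze point
    max_labels -- hard cap to keep the view uncluttered
    """
    gx, gy = gaze_xy
    nearby = []
    for name, (x, y) in labels:
        dist = math.hypot(x - gx, y - gy)
        if dist <= radius_px:
            nearby.append((dist, name))
    # Show only the closest few labels; everything else stays hidden
    nearby.sort()
    return [name for _, name in nearby[:max_labels]]

# Example: three points of interest, gaze resting near the museum
labels = [("Museum", (400, 300)), ("Cafe", (900, 620)), ("Station", (410, 350))]
print(visible_labels(labels, gaze_xy=(420, 310)))  # ['Museum', 'Station']
```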
A follow-up to the idea of populating labels contextually based on eye tracking does, however, raise a few issues of its own. For instance, how can one ensure that label transitions after the eye moves are not too distracting?
Sudden or jerky movements would pull the user's gaze back to the label, defeating the purpose and potentially throwing off the eye tracking itself. "Subtle Gaze Modulation" is the concept of using just the right movement to draw the eye, but terminating the stimulus before the gaze reaches its destination. Think of a blinking or glowing-then-dimming light that draws you toward it but disappears before your eye lands on the spot that was radiating. Photography "tricks" like dodge and burn, or blur, can heighten contrast and create the same sort of gaze-catching effect. And for anyone interested, the mathematical formula used in the presentation for gaze modulation is:
θ = arccos((v · w) / (|v| · |w|)), where v is the line of vision from the current focus and w is the desired line of focus; θ is the angle between the two.
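As a rough illustration of how that angle could gate the stimulus, here is a small sketch of my own (not from the presentation) that computes θ from the dot-product formula above and switches the modulation off once the gaze is nearly on target; the 5-degree cutoff is an assumed value:

```python
import math

def angle_between(v, w):
    """Angle (radians) between the current gaze direction v and the
    direction w toward the desired focus: theta = arccos(v.w / (|v||w|))."""
    dot = sum(a * b for a, b in zip(v, w))
    mag_v = math.sqrt(sum(a * a for a in v))
    mag_w = math.sqrt(sum(b * b for b in w))
    # Clamp to [-1, 1] to avoid domain errors from floating-point rounding
    cos_theta = max(-1.0, min(1.0, dot / (mag_v * mag_w)))
    return math.acos(cos_theta)

def modulate(v, w, cutoff_deg=5.0):
    """Keep the subtle stimulus on only while the gaze is still far from
    the target; terminate it before the eye actually lands there."""
    return angle_between(v, w) > math.radians(cutoff_deg)

# Example: gaze pointing mostly along x, target exactly along x
print(modulate((1.0, 0.0, 0.2), (1.0, 0.0, 0.0)))   # True: still ~11 degrees away, keep modulating
print(modulate((1.0, 0.0, 0.05), (1.0, 0.0, 0.0)))  # False: nearly on target, stimulus off
```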
The Streaming Media presentation was fairly standard, covering connection quality versus video quality. Adaptive Streaming – the "Auto" default on YouTube, for example – is the concept of the requested stream quality changing relative to signal strength. The goal of Adaptive Streaming is to ensure the user's stream is not interrupted: quality may fluctuate, but there should be no buffering waits, and the video should always remain visible.
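A toy sketch of the idea, assuming a player that re-measures throughput between segments and requests the highest rendition it can sustain (the rendition ladder, bitrates, and safety margin below are invented for illustration):

```python
# Hypothetical rendition ladder: (label, required bitrate in kbit/s)
RENDITIONS = [("1080p", 5000), ("720p", 2800), ("480p", 1400), ("360p", 700), ("240p", 300)]

def pick_rendition(measured_kbps, safety=0.8):
    """Choose the best quality whose bitrate fits within a safety margin
    of the measured throughput, so playback never stalls to buffer."""
    budget = measured_kbps * safety
    for label, bitrate in RENDITIONS:   # ordered best to worst
        if bitrate <= budget:
            return label
    return RENDITIONS[-1][0]            # worst case: lowest quality

# Signal drops mid-stream: quality steps down instead of the video pausing
for throughput in (6500, 3800, 900):
    print(throughput, "kbit/s ->", pick_rendition(throughput))
# 6500 kbit/s -> 1080p, 3800 kbit/s -> 720p, 900 kbit/s -> 360p
```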
Encoding can also play a huge factor in video: compression reduces file size, but at the obvious cost of quality. The quality levels available to an adaptive stream depend on file size, which is driven by factors such as resolution (HD/SD), bitrate, and frames per second (fps). Reducing frames per second can shrink a file with potentially minimal consequences: video contains a lot of redundancy (think of all the frames – many are nearly identical), and the human eye cannot perceive every one of them anyway. Codecs are compression and decompression algorithms that minimize the perceptible impact of file reduction by taking advantage of exactly those redundancies humans cannot notice.
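To make the redundancy point concrete, here is a toy sketch of my own – not how any real codec works internally – that stores a frame only when it differs enough from the last stored one, which is roughly the temporal-redundancy intuition codecs exploit:

```python
def keep_frames(frames, threshold=0.25):
    """Toy temporal-redundancy filter: keep a frame only if more than
    `threshold` (fraction of pixels) changed since the last kept frame.
    Real codecs are far more sophisticated, but the intuition is the same:
    repeated content does not need to be stored again in full."""
    kept = [frames[0]]                  # always keep the first (key) frame
    for frame in frames[1:]:
        changed = sum(1 for a, b in zip(kept[-1], frame) if a != b)
        if changed / len(frame) > threshold:
            kept.append(frame)
    return kept

# Example: four "frames" of 8 pixels each; the middle two are near-duplicates
frames = [
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],   # identical -> dropped
    [0, 0, 0, 0, 1, 1, 1, 2],   # only 1 of 8 pixels changed -> dropped
    [9, 9, 9, 9, 1, 1, 1, 1],   # half the pixels changed -> kept
]
print(len(keep_frames(frames)))  # 2 frames kept out of 4
```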
As a budding UX professional, I found the eye tracking points particularly intriguing. I would love to experiment with techniques like these in digital designs, in an attempt to help my users follow a path without over-correcting or pushing them as they adapt and explore on their own. It would be interesting to see how this could be refined to stay subtle while remaining assistive when needed.
“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke
All research copyright their respective owners