GHC Reflections: Front End Optimization

One of the single-session workshops I attended was a discussion and exploration of front-end optimization. As someone who works mostly in front-end design, it was an intriguing talk for me. It was rather technically oriented, so the notes are a bit dry, but if you are stepping into this field at all, there are a few pearls you might find useful.

The first and most important note the presenter made was to optimize your digital projects for the front end – contrary to popular practice. It is of course important to build your systems on a strong framework, keep clear channels to resources, and reduce unnecessary clutter in back-end code, but people often forget the impact front-end code has on the end user. The front end is the layer that directly hits the user: if it is sloppily thrown together, performance can easily degrade even when the back-end code is flawlessly executed.

The next point the speaker hit on was minifying HTML, CSS, and JavaScript files. Every extra character in a file adds to the kilobytes needed to load the site and can slow it down. The speaker pointed out that users are unlikely to care about “pretty code”, especially if it’s causing slower performance.

Minifying is a practice I’ve had trouble stepping into myself, if only because I like to “grab and go” with my code. I often hear the advice to keep two copies: the readable file you edit, and the minified version you upload to the web. I’ve just had little reason to lately, as my own website’s pages are not incredibly line-heavy. As I work on larger projects, minifying will likely become more and more my practice – this speaker’s stressing of it was part of the motivation I needed to look into it further.
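To make the idea concrete, here’s a deliberately naive sketch of what minification does (my own toy code, not from the talk – real projects should lean on a dedicated build-step minifier, since regexes like these can mangle strings and edge cases):

```javascript
// A naive CSS/JS whitespace minifier -- a sketch only; real projects
// should use a dedicated minification tool in their build step.
function naiveMinify(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, '')   // strip block comments
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // drop spaces around punctuation
    .trim();
}

const css = `
/* main heading */
h1 {
  color : #333 ;
  margin : 0 ;
}
`;
console.log(naiveMinify(css)); // the same rules, in far fewer bytes
```

Even on a tiny rule like this the savings are visible; on a stylesheet of thousands of lines, the kilobytes add up quickly.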

Next were a few basic points, like avoiding redirects and bad URLs. Not only can they be confusing and frustrating to the user, but redirects increase page load time (the request has to jump around more than usual), and bad URLs will likely destroy the flow of users actually using the application. Redirects like m.mysite.com for a separate mobile version can also cause issues down the road: content missing from one version of the website, and two sets of code to maintain with a large portion of duplicate content (which can hurt search engine optimization). Responsive design helps fix this by letting one set of code, with varied breakpoints, work on all devices. If you must re-route, try to keep it on the server side rather than the client side to reduce the redirect’s latency and overhead. One last tip: if your redirects try to make a user download your app (such as a mobile site redirecting to, or throwing a modal for, the app store), stop what you’re doing right now. Not only is this annoying and likely to drive traffic away from your site, it’s a poor attempt at hooking a user who isn’t even sure they enjoy your content yet, and it leaves a bad first impression that may keep them from coming back. Furthermore, redirecting users to an app because developing your mobile site more robustly wasn’t in your plan shows a laziness to build your site with their needs in mind.
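As a sketch of the server-side approach: redirect resolution can be a single table lookup that answers with one 301, rather than a chain of client-side hops. The paths and table below are hypothetical, my own illustration:

```javascript
// Server-side redirect resolution as one table lookup -> one 301.
// The legacy paths here are made up for illustration.
const redirects = {
  '/old-blog': '/blog',
  '/m/home': '/', // fold old mobile URLs into the single responsive site
};

function resolveRedirect(path) {
  const target = redirects[path];
  return target ? { status: 301, location: target } : null;
}
```

In a Node request handler you would check `resolveRedirect(req.url)` once and set the `Location` header, so the browser makes exactly one extra round trip instead of bouncing through intermediate pages.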

Allowing GZip compression was another point made, and one that required a little more research on my part, as I hadn’t heard of it before. GZip is a compression algorithm for websites that finds similar strings within a file and temporarily replaces them, which can make file sizes a lot smaller – especially in documents made for the web, where phrases, tags, and whitespace are often repeated. If you (like me) had never heard of GZip and would like more details, find out more here: https://developers.google.com/speed/articles/gzip

Page load times are obviously critical to the success of an application, and can often be an indicator of how optimized performance is (after external factors such as internet speed are evened out, of course). Typical metrics suggest users lose interest in a web page if it hasn’t loaded (or at least, loaded something) within half a second. Mobile users tend to have more patience, but after about ten seconds their patience is gone – two seconds or less makes them quite happy, though. These numbers are ones I utilize quite often now when asked “how long is too long” or when doing quick load tests. They’re simple figures to remember, but ones that can really help in a pinch when you’re deciding whether existing code needs more optimization, or whether it “loads reasonably” and you can move on to the next task or project.

Applying web best practices is a key component of ensuring optimization. Not only will following best practices likely result in more efficient and optimized code, it will also typically result in cleaner code for developers to understand, and greater optimization for search engines, thus resulting in more end users.

Another practice for optimizing your users’ front-end experience is to cache and consolidate your resources. Consolidation can include compression (such as GZip) for files as well as image compression. With image resources there is always the fear of a quality trade-off, but when done correctly, most images still have room for at least a bit of optimization with little to no loss in quality. If your site is image-heavy, I recommend looking into image compression and load optimization – it can seem scary, especially on a portfolio site where quality is key – but the results can pay off in happier users. This is definitely something I need to get more comfortable with myself, especially as I build out my own portfolio projects – and so I’ll challenge you to it also.

If you’re still unsure about compressing your images, you can at least dip your toe in the water by ensuring you’re using the correct file types. PNG (Portable Network Graphics) is almost always the most optimized file type for web and mobile use. GIF (Graphics Interchange Format) is typically best for very small images (think a Wingdings-style icon, at about the size of ten- to twelve-point font) or images containing very little color (typically three or fewer colors). GIF and PNG both support transparency in modern browsers (transparency degradation can get spotty, especially for PNGs in older versions of Internet Explorer; if you’re having issues in IE 7 or 8, the fix can be as simple as saving your PNGs in “Indexed” rather than “RGB” mode). GIF also supports animation frames – meaning if you require animation in your image and cannot, or do not wish to, achieve the effect with several images and CSS (which can definitely be cumbersome), GIF is the ideal format. JPG (Joint Photographic Experts Group) is ideal for photographic-quality images. BMP (Bitmap Image File) and TIFF (Tagged Image File Format) are no longer well suited for use in web applications.

Another key facet of front-end optimization is doing everything in your power as a developer to combat device limitations for your users. This includes creating adaptively: loading resources on user demand and customizing images by screen size to ensure the fastest load time, to name a few approaches. Practicing progressive rendering – loading an image entirely at lower quality and progressively enhancing it as more resources become available – helps ensure users on slow hardware still get the full experience, even if it starts off a bit fuzzy. JavaScript slowness can be a debilitating issue on slower CPUs; keeping this in mind and limiting your JavaScript to what’s necessary (of course, don’t betray your functionality needs!) can help every user enjoy your website easily and speedily.

The presenters finished out with a few tools that can be used to measure the performance of front-end and mobile development. Webpagetest.org can be used on internal sites – which is great for entities with a large intranet presence. PageSpeed can test your page and gather data on load times. Mobitest is optimized for mobile speed testing, and the Chrome Remote Debugger and Safari Web Inspector let you plug in an Android or iOS device, respectively, and test for performance.

Overall, a lot of great information here – some of which I was a bit leery of, given my own ways and my justifications for them, but I could see the merit in what the speaker was suggesting: at the very least, it’s worth considering, and potentially implementing aspects of, for each project as the struggle between optimizing and “getting it done” rages on. Regardless, there was plenty I learned or at least gained a stronger awareness of, and I’m very glad I attended the workshop to have my eyes opened a little wider.

“There are two ways of constructing software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” – C.A.R. Hoare

GHC Reflections: Mobile Design & Security

This lightning panel was rather interesting, as the topics were fairly varied but all great to consider for mobile design and the future of data and security.

The first talk discussed a user’s “social fingerprint” – a mathematically unique signature of how a user interacts with their mobile device across social networks, texting, calling, etc. Essentially, every user uses their device in a slightly different way – when these patterns are calculated, no two are exactly alike. This is an interesting concept: we often assume everyone talks, texts, or checks Facebook identically – but apparently this could not be farther from the truth. A social fingerprint is more than just -how-; it is who and when: time zones, contacts frequented, and more all make up the social fingerprint. The term is often used to describe our social usage in general, but it can be investigated deeper to create this truly unique representation of our habits.
The speaker pointed out that if our social fingerprints are indeed unique, they could be used in some capacity for security measures, such as fraud detection. Exploring security measures beyond the password is definitely exciting territory. I worry, though, that a social fingerprint is “too” unique – in the sense that it could consistently change. If you cut ties with someone you used to call every day, would that not raise an alarm in social fingerprint detection? Obviously social media has ways to trend anticipated life events and interactions between people based on the sheer amount of data – but can everything truly be boiled down to a mathematical signature? I’m excited by the prospect of using social fingerprints, but concerned about their actual application – especially if the math and inputs are as complex as they seem they may be.

Another take on security was utilizing GPS to ensure secure interactions. Specifically, the speaker discussed GPS as a means to identify “zones” in the real world where one anticipates accessing devices, with a level of confidence that, at each location, the person accessing the device is indeed the owner. For instance, home and work may be level 1, where we are confident that if we are there, our device is being accessed by us. Level 2 may be the cafe or laundromat, which we frequent but where we may accidentally leave the device unattended. Level 3 could be our hometown, neighborhood, or even state: where we can be expected to be in general, but could easily lose a device. And level 4 might be anywhere else globally: access from these places would be irregular or unanticipated. The presenter discussed using these levels to give varying degrees of password/access assistance. If I’m at home and forget my password, I expect to receive all my hints and assistance channels for logging in. On the town, I may want fewer options to appear, just in case someone else is on my device. And I would most definitely want heightened security for anyone attempting access when I’m out of state or country (or trying to access -from- those places), so their hints should be extremely restricted, if there at all. The idea was to provide “secure spaces” that heighten security beyond just the password, deterring attempts to breach it or to obtain information pertaining to it.
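A tiny sketch of how those zones might map to recovery assistance – the levels follow the talk, but the specific options are my own illustrative choices:

```javascript
// Map trust-zone levels to the password-recovery help the UI surfaces.
// Option names are illustrative, not from the presentation.
const recoveryOptions = {
  1: ['show-hint', 'email-link', 'sms-code'], // home/work: full assistance
  2: ['email-link', 'sms-code'],              // cafe: device may be unattended
  3: ['email-link'],                          // hometown: plausible, but riskier
  4: [],                                      // anywhere else: no hints at all
};

function allowedRecovery(zoneLevel) {
  return recoveryOptions[zoneLevel] || []; // unknown zones get nothing
}
```

Defaulting unknown zones to the empty list keeps the scheme fail-closed: an unanticipated location behaves like level 4 rather than accidentally exposing hints.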

This topic is intriguing looking back, because Microsoft has been implementing a similar feature in Outlook. While I appreciate the security, at times it can be a bit overbearing – my work’s servers ping off a cluster not near us geographically, and this triggers the “suspicious activity” login flow any time I try to get to my email at work. The security concept is great – but something like the presenter discussed, where I have more of a choice in defining my regions, would definitely save headaches (like when I try to log in at work for one small thing, only to face a chain of security measures whose details may be at home). It’s definitely interesting to see this idea being implemented, and I’m curious where the next steps will take it.

Another speaker in this panel discussed A/B testing – something, among many other types of testing, I’m hoping to become more familiar with in my job. They stated that a strong A/B test can be made even more helpful by integrating code to retrieve data on user input or mouse movements – so patterns between sets A and B can be recognized and the user process more readily understood. Sessions and their data can be stored in buckets relative to their version, and even the time/cycle or type of user, for quicker retrieval and review.
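One common way to assign sessions to buckets deterministically (my own sketch, not necessarily what the speaker used) is to hash a stable session id, so the same user always lands in the same variant:

```javascript
// Deterministic A/B bucketing: hash the session id so a returning
// user always sees the same variant. The hash is a simple toy one.
function bucketFor(sessionId) {
  let hash = 0;
  for (const ch of sessionId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep it unsigned 32-bit
  }
  return hash % 2 === 0 ? 'A' : 'B';
}
```

Because the bucket is a pure function of the id, session data can be stored and retrieved by variant without ever recording the assignment separately.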

The next topic was accessibility in mobile. It was fairly straightforward, but always refreshing to keep in mind. The presenter highly recommended considering the accelerometer – think of technologies like Fitbit, and how relevantly accessible its use is beyond just software and screens. Other considerations for accessibility: touch and sound. Consider your feedback to users – a soft pulse or vibration when they press a button, a light ding when an alert appears. Remember to consider how these affordances affect the experience for users who are color-blind, deaf, etc. Are your notification color choices still helpful, or even viewable, to someone who is color-blind? Does your application give another form of feedback if a user is deaf and anticipating a ding (a glowing icon, a tactile response, etc.)?

The final presenter discussed flexible privacy controls. With the advancement of digital healthcare records and increasingly sensitive information going digital, companies at times forget the affordances that physical/paper copies allowed, which now need digital counterparts. The presenter used healthcare as an example: certain health records you would like visible to your spouse, certain to your family, and certain to only yourself, your doctor (or only certain doctors), and so on. These preferences may also change over time: think of a bank account a parent can access while a child is in school, where the child may need or wish to remove the parent’s access once they are grown. While these issues were fixed in the past with phone calls or paperwork, digital counterparts need flexible privacy controls so users can handle these privacy needs with the same ease (or at least, the same or less headache) as they did in analog. Flexible privacy controls can even extend to securing the applications themselves: if my healthcare app is on my phone, I may want additional security measures before the app even starts, to ensure no one can tamper with my settings but me (and here we can even correlate to the earlier talks for more ways to secure our privacy!).
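A minimal sketch of what per-record flexible privacy could look like in code – the record shape and roles are purely illustrative:

```javascript
// Each record carries its own audience list, which the owner can edit
// over time -- the digital counterpart to a phone call or paperwork.
const record = {
  title: 'Allergy panel results',
  visibleTo: new Set(['self', 'doctor']),
};

function canView(rec, role) {
  return rec.visibleTo.has(role);
}

record.visibleTo.add('spouse');    // the owner broadens access later...
record.visibleTo.delete('spouse'); // ...and can revoke it just as easily
```

The key property is that access lives with the record and is mutable by its owner, so changing who sees what is a one-line update rather than a support call.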

I loved the focus in so many of these talks on users, their experiences interacting with their phones, and how that relates to the real world. They pointed out design imperatives and areas for continued development that make phones – and in turn technology overall – an extension of and addition to the “real world”, rather than purely a distraction or a separate plane entirely.

“The mobile phone acts as a cursor to connect the digital and the physical” – Marissa Mayer

GHC Reflections: Video Games

The video games topic was definitely helpful not just from a video gaming perspective, but from future technologies and augmenting reality points of view as well. Even if you’re not an intrepid game developer, some of the points were definitely worth noting for any developers, and even interactive media/story planners. Intrigued? I was. Read on for more.

ReconstructMe (http://reconstructme.net), paired with the right camera technology (they suggested an Asus camera), was an interesting project, showcased specifically between Maya and Unity. The basic premise of ReconstructMe is using a camera rotating around an object to render a life-perfect 3D model of that object: a backpack, a laptop, a tree. You could use the technology even on animals and humans – though of course you would only capture them in a single pose unless you were able to edit the model’s joints from the mesh (which I am uncertain is possible). You can then retrieve the model in a 3D tool (such as Maya) and paint mesh skins to use it in any 3D application (such as Unity), or even configure the models for 3D-printed replications (like making statues of yourself to put on trophies – for being awesome, of course). When a real-to-life object model is wanted, or when the model is needed quickly, ReconstructMe definitely looks like a viable option.

The next presenter focused on developing a hierarchy for critically evaluating learning games, so that they can be more widely accepted and used in STEM classrooms, with their merit understood on a broad metric scale. She based her evaluation on Bloom’s Taxonomy, with criteria for Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. She would then correlate the objectives of the game and player actions to these categories – if task one in a game was to design your own character, she might check that creativity is present in task one. Her examples of quality STEM teaching games were CodeHero (for Unity and variables) and Spore (for predator-prey interaction). It was intriguing to see someone attempt to quantify a metric for gaming and entertainment based on valuable content rather than personal preference. Something like this, done with care and properly implemented, could easily make its way into school systems to evaluate games for use in the core curriculum – an exciting prospect for getting children excited about learning in a fun and different way!

Next, we focused on developing true stories in games – striving for non-linearity. One of the largest downfalls of gaming as a story medium is that our stories often end up linear: this interaction must occur before this event, leading up to the final boss and the ending. While this linearity seems near unavoidable from a coding perspective, this talk focused on ways to branch our stories so that linearity does not become a limitation. A key takeaway: our stories may be linear, but our gameplay should strive to be non-linear. One suggestion was “satellite scenes,” which key off a player action and dynamically modify a tiny bit of the story, until the fragments become the linear whole. Scenes that are the quintessential backbone of the story – that must exist, or must occur in a certain order – are known as “kernel scenes.” More open-world, progressive, non-linear gameplay therefore lies in tying satellite scenes to shaping the world, and not overpowering the game with a progression of consecutive kernel scenes. Some terminology to remember as a takeaway: actors perform actions, the world changes, and these events should relate to each other – and always remember that actors should be people, not things: two or more agents who understand each other and respond properly. Put effort and focus into depth in satellite scenes, letting the player see the little changes their choices make to the world at large (a strong core story with flexible, relevant nodes that add to gameplay), and your game will provide depth beyond the standard linear story.
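The kernel/satellite split can be sketched as a data structure: kernels stay an ordered backbone, while satellites hang off player actions and nudge the world. All names here are my own illustrations, not from the talk:

```javascript
// Kernel scenes: the ordered story backbone. Satellite scenes: small
// world changes triggered by player actions, independent of that order.
const story = {
  kernels: ['meet-mentor', 'lose-mentor', 'final-boss'], // must occur in order
  satellites: {
    'help-villager': (world) => ({ ...world, reputation: world.reputation + 1 }),
    'steal-bread':   (world) => ({ ...world, reputation: world.reputation - 1 }),
  },
};

function applyAction(world, action) {
  const satellite = story.satellites[action];
  return satellite ? satellite(world) : world; // unknown actions change nothing
}
```

The player can fire satellites in any order and any quantity between kernels, which is exactly where the feeling of non-linearity comes from even though the backbone stays fixed.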

This intrigued me from the standpoint of experiences (as a user experience lover!) – be the experience a story, a video game, or an alternate reality/marketing plan, considering the ripple effect on individual users, rather than just the funnel to the end goal, can add finesse and excitement to any endeavor that hopes for participation and excitement from its audience!

The final presenter discussed the Xbox SmartGlass, which is relevant to contextual use of augmented reality and future media consumption beyond video games. SmartGlass is designed to turn any smart device into a controller. It accounts for the device and the media it is being used with through simplified text entry and a contextual user interface – with the hope of keeping users engaged even when away from the primary screen, or keeping them interacting with their secondary device while at it. Examples included Forza, where the second device provides an aerial GPS view, or a game like God of War, where SmartGlass might provide hints, maps, weaknesses, or additional scenes and content contextually as you progress, so there is never a need to look up a game guide. Again, as a UX person, I loved the idea of contextual content, assistance, or added depth for users without additional work on their part – and without distracting them if they do not wish to use that aspect of the experience. I would love to see more contextual work like SmartGlass appearing in other media, and hopefully, as AR continues to develop, on more devices as well.

As a lover of video games, I went into this talk expecting to be happy I went even if the content was lacking (because video games!). Instead I found quite a bit of content that inspired me beyond what I anticipated, and points for innovation beyond the gaming sphere. It’s amazing how gaming has become so strongly linked to experiences and technology development in our culture, and it’s exciting to see the possible applications across other modes and mediums as we continue to develop these immersive entertainment worlds.

“Video games foster the mindset that allows creativity to grow.” – Nolan Bushnell

ReconstructMe copyright ReconstructMe Team, Spore copyright Spore, Xbox copyright Microsoft

GHC Reflections: Web and Mobile Dev

The web and mobile dev lightning talk featured tons of technologies and trends for the next generation of development.

“World of Workout” was a concept discussed for a contextual mobile RPG based in real-world fitness. It would use pattern recognition to recognize users’ workouts – sparing them the complexity of inputting the info themselves (i.e., with the phone in an arm holster during a workout, it can recognize the motion of squats). The workout info would then affect the progress of the game avatar, with stats awarded for the workouts done by the user, such as speed boosts for sprinting, strength for weights, and stamina for distance running. Another interesting feature they proposed was accelerated improvement at the start of the game, so users are encouraged to get into a daily routine, plus a fatigue factor so rewards are reduced when workouts become excessive. There would also be random rewards, with associated notifications, for doing “challenge” workouts with extra benefits attached.

This idea really resonated with me as part of the “future of user experience”: what better immersion is there than in a good game? And as we have learned, users appreciate apps responding to them and to receive gratification: which pattern recognition and rewards both do. After seeing this idea, I sketched out the concept for a similar game-incentive idea during a hackathon: TaskWarriors, an RPG based on checking things off your task list and gaining skill and gold based on the priority of the task and type of task (helping you balance your days -and- ensure you complete high priority tasks before their deadlines). I’d really like to re-explore TaskWarriors, since if done right, I think it could work very well like World of Workout seems (hopefully) fated to. It has also gotten me considering other avenues where gamification/customization and rewards could help with immersion and user experience – hopefully I can learn more and get more chances to potentially implement this in the future!

Parallax scrolling was another feature discussed during this talk: specifically, technologies that can aid or enhance parallax development. JavaScript and CSS3 were discussed for transitions, transforms, and opacity, while HTML5’s Canvas, WebGL, and SVG were also noted. Flash, VML, YUI scroll animation, jQuery plugins such as Jarallax, and JavaScript libraries such as pixi.js or tween.js (for easing effects) were also featured as possible parallax technologies.
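At its core, parallax is just layers translating at different fractions of the scroll distance, so slower layers read as farther away. A tiny sketch of the math (my own, not from the talk):

```javascript
// Core of a parallax effect: a background layer moves at a fraction of
// the scroll distance, so it appears farther away than the foreground.
function parallaxOffset(scrollY, speed) {
  return scrollY * speed; // speed < 1 => the layer lags behind the page
}

// In a page you would apply this on scroll, e.g.:
// window.addEventListener('scroll', () => {
//   layer.style.transform = `translateY(${parallaxOffset(window.scrollY, 0.4)}px)`;
// });
```

Everything the libraries above add – easing, tweening, canvas rendering – is polish on top of this one offset calculation.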

Parallax is definitely an intriguing artistic feature for making a website seem more interactive. Obviously, like any interactive feature, there’s definitely a point where it could be much too much. But there are some beautiful parallax scrolling websites that show what an awesome addition it can be to your content, especially on websites telling a story with a long scrolling page, like this one: http://jessandruss.us/

3D graphics for web programmers was actually highly interesting to me. I’m terrible at making models (at least at present) but have had a bit of experience with Unity, and have always found 3D development interesting, even though I’m not the best at it right now. Though I would need to learn modeling to actually implement it, the presentation focused on three.js, a library that makes it remarkably easy to program 3D elements directly into web pages – rather than building them in Flash, Unity, or another engine. At its most basic core, three.js uses a what (a mesh for the item and a point light for the light source), a where (a scene and a PerspectiveCamera), and a how (translate, rotate, scale; requestAnimationFrame) to render and move 3D objects. Source code is available at http://github.com/shegeek/teapots_can_fly, in which the presenter used only three.js, teapot.js (the item), and an HTML5 page to create the example.

CourseSketch was the final web and mobile technology shown, which was also really exciting from a college student’s perspective. It is a sketch-based learning platform being developed for MOOCs, which would allow recognition of sketches to enhance the automated grading capabilities of online problems. Examples in development included truss diagrams for engineering, compound diagrams for chemistry, and kanji for Japanese. With many more courses moving to online submission and grading, one can see applications for this technology well beyond the MOOC platform and into more education avenues – provided, of course, the technology were robustly developed, taking into account various drawing styles and other hiccups that may occur.

Overall there were a lot of intriguing development tools and concepts discussed. Obviously this talk hit home with me as World of Workout inspired the beginning conceptualization and planning for the Task Warriors app, even if it hasn’t seen fruition (yet! I hope I can continue it!). I love talks like these that bring to light new ideas and useful technologies – they have so much inspiration and energy within them that drives tech forward.

“One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man.” – Elbert Hubbard

Task Warriors copyright Bri, Jess and Russ copyright JessandRuss.us

GHC Reflections: Augmented Reality and Streaming Media

The augmented reality segment focused on getting users’ attention when they view the world through the lens of a device, and then providing them with relevant information – for instance, seeing info labels pop up about an unfamiliar place. One problem with labels, however, is contextually linking them to the described object – and ensuring the label is still large enough relative to the screen to be helpful, without clustering too greatly and causing clutter. Solving this problem would go a long way toward helping users navigate real-world scenes, by placing aid where it is most useful.

Eye tracking was a highlighted topic for augmented reality – and when discussing label placement, this is definitely understandable. Knowing where a user is going to look ensures that labels contextual to that spot appear – and decreases the number of labels that need to populate at a time, making the clutter problem all but disappear. Eye-tracking methods include infrared light detecting the pupil, and heat maps of vision. The latter is good for studying eye movements, but the former could be integrated into devices and actually utilized in real software for users.

A follow-up to the idea of contextually populating labels based on eye tracking does, however, raise a few issues of its own. For instance, how can one ensure that label transitions after the eye moves are not too distracting? Sudden or jerky movements would bring the user’s gaze back to the label, which could definitely throw off eye-tracking software. “Subtle gaze modulation” is the concept of using just the right movement to draw the eye, but terminating the stimulus before the gaze reaches its destination. Think of a blinking or glowing-then-dimming light, drawing you toward it but disappearing before your eye lands on the spot that was radiating. Photography “tricks” like dodge and burn or blur can heighten contrast and create the same sort of gaze-catching effect. And for anyone interested, the formula used in the presentation for gaze modulation finds the angle between the current line of vision v and the desired line of focus w:

θ = arccos((v · w) / (|v| |w|))
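That formula translates directly to code – a small sketch of my own, computing the angle between the current line of vision v and the desired line of focus w (2D vectors for simplicity):

```javascript
// Angle between the current gaze direction v and the desired focus
// direction w: theta = arccos((v . w) / (|v| |w|)), in radians.
function gazeAngle(v, w) {
  const dot = v[0] * w[0] + v[1] * w[1];
  const magV = Math.hypot(v[0], v[1]);
  const magW = Math.hypot(w[0], w[1]);
  return Math.acos(dot / (magV * magW));
}
```

A modulation system could use this angle to decide when the gaze is close enough to the stimulus that it should be terminated.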

The streaming media presentation was fairly standard, covering connection quality versus video quality. Adaptive streaming – the “Auto” default on YouTube, for example – is the concept of the requested stream quality changing relative to signal strength. The ideal of adaptive streaming is that the user’s stream is never interrupted: quality may flux, but there should be no buffer waits, and video/media should always be visible. Encoding also plays a huge factor: compression reduces file size, but at the obvious cost of quality. The quality levels available for adaptive streaming to choose from depend on file size, driven by factors such as resolution (HD/SD), bitrate, and frames per second (fps). Reducing frames per second can shrink a file with potentially minimal consequences: video contains a lot of redundancy (think of all the frames – many are nearly identical), and the human eye cannot perceive them all anyway. Codecs are the compression and decompression algorithms that minimize the human-visible impact of file reduction by exploiting these redundancies.

As a budding UX professional, the eye-tracking points were of particular intrigue to me. I would love to play with similar techniques in digital designs, helping users follow the path without over-correcting or pushing them as they adapt and explore. It would be interesting to see how this could be refined to be subtle, yet assistive as needed.

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke

All research copyright their respective owners

GHC Reflections: “Why Are We Still Geeks?” Panel – Part 3

(Trying to work toward graduation AND remember to blog is hard – sorry for the delay!!)
My favorite segment of the “Why Are We Still Geeks?” panel at Grace Hopper was part three, which featured professor Kim Surkan of Humanities & Gender Studies. Her discussion revolved around the issues currently surrounding women in technology, and the steps being made toward remedying them.

Very early in the speech she made a memorable statement: “You have to remember, I am humanities, trying to step into your world – and let me tell you, your world is troubled”. With that statement alone, despite her not being a woman working in the CS field herself, you could tell how deeply she understood the problems plaguing women in technology today and how much she hoped to remedy them.

She went on to discuss several distinct points hindering the cause of women in CS. For instance, both genders have a habit of correlating gender with ability in STEM fields, which – regardless of actual skill – decreases interest and hinders ability, widening the gender gap further. In simpler terms: both women and men perceive men as better at programming, so women lose interest, stifle or hinder their own skills, and create an even wider divide that perpetuates the stigma.

She then cycled this concept into another called “symbolic annihilation”. She argues that the scarcity of visible women in computer science makes it difficult for young women to protest the fact that they are underrepresented. It’s a difficult concept to wrap one’s head around at first; the best phrase for it is “it’s hard to protest an image that does not exist”. If we’ve never seen it, we have trouble conceptualizing it as a real problem. How can we address the problem of women entering STEM fields if we have barely any women in the field to turn to for a frame of reference? Out of sight, out of mind, as they say.

One fact Dr. Surkan shared that I found startling was that Computer Science is the only STEM field that has seen a decrease in women joining in recent years. As a woman in the Computer Science field, I know we are few and far between – but to hear that the trend is only worsening makes me very sad. Any woman can be good at whatever she chooses, and there is nothing about Computer Science as a field that makes it strictly male. I can think of plenty of areas that women can actually have an easier time conceptualizing than men due to how we process information. For instance, concurrency and object-oriented relations/definitions are things that I’ve seen women grasp more quickly. And for those who enjoy a human element – data analysis, human/computer interaction, usability, and user experience are all realms where a craving for “social” work can manifest in Computer Science – areas that sorely need workers, yet ones few Computer Science majors are as interested in. Not to put the genders in stereotypical boxes, of course – I mean, Grace Hopper developed the compiler – that says just how much women can contribute to the field in any area they choose!

Surkan continued by discussing subtle differences in the language of Computer Science that one could argue contribute in part to the lack of women. The term “hardware” did not originate until 1958 – prior to that, computers were operated almost entirely by women! The hardware-versus-software distinction brings about a play on masculine versus feminine roles (men are “hard” and women are “softer”), and it redefined women solely as switchboard operators rather than people “able to build computers”. This language change may have helped solidify the gender divide within Computer Science – where women are thought able to use computers, but not to build them or program at any level of real depth.

She followed this up with a case study of several events in the Computer Science world that have alienated women – and for me, these case studies were a turning point in how I viewed the CS world for women. I had known things could be bad, that we were few and far between – but some of these stories were beyond me. There was the 9-year-old girl at the TechCrunch Disrupt hackathon who – when other apps demonstrated there were inappropriate for someone her age to see (and at least a touch objectifying to women) – was blamed simply for being present, despite having built her app herself at the event. There was Anita Sarkeesian, receiving death threats for an attempt to kickstart a YouTube channel about female representation in video games. Adria Richards, who was harassed and then fired from her job for tweeting about some men making sexual jokes at a Python conference. And the more I Googled after the panel, the longer the list of stories became.

The above tied in with what she calls “brogrammer” culture – more and more startups and popular tech companies are modeling themselves to attract young, thrill-seeking twenty-something males, to the point where the office culture resembles a fraternity house party more than a corporation.
Now, there’s nothing wrong with a fraternity house party, and there’s nothing wrong with a woman wanting to be a ninja or a wizard or a Jedi (as advertisements for these jobs may ask if you are when it comes to coding), but there are elements of those environments and words that can cause women to automatically feel excluded. Perhaps when promoting jobs, companies should use a second, more general, or even women-targeted ad set in addition (calling for code queens, Python princesses, and scripting sirens) if they wish to correct these images – and let their house-party style be more like game & study night on the co-ed dorm floor.

When I discuss the issues of women in Computer Science today, I am constantly brought back to referencing something I learned or heard in Surkan’s panel segment and branching my discussion out from there. No one has all the answers to these issues – but she definitely helped raise some of the problems and questions, which is always a necessary first step. More than likely I will revisit some of these issues and pose thoughts for the future in later posts – but this one has become long enough just revisiting Surkan’s panel points.

Is there currently a problem with getting women into Computer Science, and with the environment women face in the field at certain locations? Certainly.
Can we fix it for the future? Definitely.
Will it happen overnight? Probably not – but if we persevere, we will overcome.

“Computer science is no more about computers than astronomy is about telescopes” – Edsger Dijkstra

GHC Reflections: “Why Are We Still Geeks?” Panel – Part 2

In the second portion of the “Why Are We Still Geeks?” panel at GHC, Brenda Laurel took center stage, speaking out passionately about our portrayal of women as “professionals” – and what this decided imagery can do to our perceptions. She used the Grace Hopper poster as her example, commenting on details such as polished nails and suits – this stylized ideal of the business professional.

While at times her discussion seemed a bit out of left field or reaching, I felt she had a very valid point: we should be allowed to look like “ourselves” and still be perceived as competent and professional. Of course we should dress workplace-appropriate – but why must workplace-appropriate for women include makeup? Why do so many images portraying business professionals show women in the three-piece suit while men are able to wear khakis and polos in an ever-increasing amount of media? Why is pulled-back hair considered professional – but pigtails (still pulled back) are not? When did these rather archaic lines get drawn between “appropriate” and “not”, when some are rather silly on second glance?

Women should be allowed to dress in a way that is workplace appropriate but still expresses their sense of style and self – as for women, we often garner quite a bit of confidence from our dress. She stressed the importance of doing great work, being yourself, and allowing that to be noticed by those who will appreciate it – a valid point, even in a world with a need for a level of base professionalism. Who would want to work somewhere where their sense of individuality isn’t – on some level – appreciated at all?

Of course, Laurel’s discussion pushed a bit further than that for the panel audience – that we should be able to dress “any which way” that suits us; to quote, we should “deny power to the spectacle ‘status quo’ image of success” and “put our own self representations out”.

This is where I feel she loses me a bit. I agree, as I’ve stated above, that we should be allowed some semblance of freedom within the boundary of professionalism to express ourselves – and that often the media portrayal of that image is far too streamlined to a specific cut-and-dry image (the “power suit” woman – when so often women wear a nice dress or blouse to their jobs and are equally successful). However, I do believe a company has a right to a dress code – again, some may be considered outdated or even bordering on archaic, but a business has a right to the image it wishes to convey. I don’t believe, however, that image should be allowed to fully mask the individual inside (whom they hired!). Even students in schools with dress codes often have some way to express themselves – be it buttons on their backpack, funny socks, hairstyle/color, or fun jewelry. If the overall “look” is being adhered to, why can’t someone be trusted with some freedom to express themselves?

While I wasn’t entirely sure how this discussion circled back to being geeks, I can see some correlation – perhaps in how our dress constructs an archetype.
Regardless, and even accounting for my disagreeing on certain levels with Laurel’s message, this section of the panel provided a good platform to consider the woman in the workforce dress code, and to hope we can continue to find and gain new ways of expressing ourselves in our dress while adhering to levels of professionalism.

“Don’t be into trends. Don’t make fashion own you, but you decide what you are, what you want to express by the way you dress and the way you live” – Gianni Versace

GHC Reflections: “Why Are We Still Geeks?” Panel – Part 1

One exciting Friday panel was on the topic of “Why Are We Still Geeks?” (we being Computer Scientists – especially women), and more specifically, how we can remedy this perception. The three speakers present were Maria Klawe, Brenda Laurel, and Kim Surkan. Their diversity of backgrounds in computer science, gender studies, and media really brought a depth of discussion to the table.

First to speak was Maria Klawe, and she discussed her personal attempts to remedy this stigma. Much of the discussion came back to media – we don’t see many computer scientists (let alone female ones) in media so we subconsciously disregard that they exist as more than the stereotypes that surround them (geeks, antisocial, etc). Her idea was that a television show following the life of a computer programmer (albeit dramatized in some fashion – not unlike criminal justice in NCIS, anthropology in Bones, behavioral psychology in Criminal Minds, or academia in Big Bang Theory, to list a few) would give public media a more ‘stylish’ concept of the computer programmer. This would allow for a broadening of the stereotype, and through character development in the show, do away with notions of what a computer scientist’s character “must be”. She has poured resources into a nice script, but has so far gotten nowhere – it seems even media is wary to take a chance on something “too fringe” or “too geeky”.

Of course, by this token, there are shows that do showcase computer scientists. Though the lack of media attention may be due to the fact that they either fall into stereotype (Big Bang Theory, the IT Crowd) or the computer scientist is not the main focus. For example, Chuck Bartowski of the NBC series Chuck is the main protagonist and is actually a Computer/Software Engineer – however the caveat is that the show explores his “spy life”, not his life as a computer programmer. Granted, episodes showcase his “hacking” talents, but his real world job beyond the spy life consists of IT service desk help at a Best Buy-like chain store – he is shown as nerdy and over-qualified but stuck until spy work finds him. The depth of character Chuck explores could certainly give a fresh media model for computer scientists – if only they had explored his programming talents more than a backstory and “feature” of his personality.

One of the few “saving graces” for women at least being represented as computer scientists comes from a surprisingly mainstream source: CBS hit series Criminal Minds. For nine seasons, Penelope Garcia has been the “tech goddess” of her BAU unit. Of course, she plays up to the stereotype of being eccentric and ‘nerdy’ – but she’s loveable and human. She’s incredibly social, she cares deeply for her teammates, and in every regard except her dress and collection of brightly colored toys she turns the stereotypical image of a computer scientist on its head. One could even argue her dress, while “different”, is still professional – not the ‘typical’ stereotype of hoodie/t-shirt and jeans. And she’s a woman.
Garcia exemplifies why broadcast stations should NOT be afraid to air computer scientists and crack open those stereotypes with a heart-melting character. She makes an amazing role model – but of course the caveat is that on a show with material as sensitive as Criminal Minds discusses, children can’t be exposed to her, and thus their notions remain unchanged. Also, she is part of a show with many characters from many backgrounds, and sometimes her story can be a bit “lost in the shuffle”. However, in my eyes she gives hope that the computer science stereotype – even from the perspective of women in the field – can be overcome gracefully.

Klawe made a strong point that was echoed by the speakers after her: paying attention to media representations is critical to changing interest and stereotypes in our field. We may find these representations poor or at times superfluous – but they are what is in the public eye, and their portrayal alters societal perception as a whole. Hopefully change in TV media will come, with efforts like Klawe’s and the strong character models already working in hit network shows giving the initiative an extra push.

In the meantime I’ll cheer on Garcia every Wednesday night and relive my DVD series of Chuck – hoping for some new computer programmers to show their faces, and a nuanced character archetype that will make me fall in love with not only their profession – but their personality.

“Well, I figured since I’m gonna have to interact with the mass populace, I should dress in the traditional costume of a mere mortal.” – Penelope Garcia, Criminal Minds

GHC Reflections: Brenda Chapman Session

This reflection is one of the harder ones for me to write.

One of the highlights of the Grace Hopper Celebration was Friday’s panel with Brenda Chapman. For those who are not familiar, Chapman has worked at Disney, Pixar, and Dreamworks. She is most famous for conceiving the idea for and directing Brave (it should be noted she was Pixar’s first female director). She was also Head of Story on The Lion King and Beauty and the Beast, and designed the iconic Little Mermaid scene of Ariel arching over the rocks with the waves crashing as she sings “Part of Your World”. Of course, Chapman has a slew of other credits to her name on projects both big and small – but I thought I might share the ones most iconic to me for a general overview.

Chapman’s session was an overview of the projects she’s worked on in her career, and the skills that helped her get there. Passion and tenacity were chief among them. Interspersed clips from her works kept the audience engaged with nostalgia and wonder while highlighting some of her proudest moments.

And yet. And here is where the blogging becomes hard to do.

If I may be so bold, Chapman seemed so sad through the entire session. She spoke of doing what she loves, of passion and dedication, and yet there was a weariness in her that, despite her attempts to show a smile, she could not seem to shake.

And honestly, in hearing her panel I would not blame her in the slightest if my perceptions were accurate. She was initially denied entry to CalArts, only to finally get in the next year. When she was hired at Disney, it was only to meet a necessary female quota, not because it was believed she had talent. She was fired from directing the dream movie of her own design.

She embodies so much of the undertone that became apparent at Grace Hopper.

Chapman is a successful woman and her talent is not to be trifled with. She had many great roles, great mentors, and great opportunities because of her skills and her hard work. Yet something always seemed to go awry for her. Despite her obvious qualification she continues to, like so many professional women, not be believed in and to have her amazing accomplishments all too often dismissed. At least, this is how it felt in hearing her.

I hope that in saying these things I do not dismiss the validity of her accolades, or seem to claim that no one else has ups and downs in their career – far from it. But it seemed as though Chapman was sober to the sad truth that settled over the conference: women still aren’t trusted to do the job well, even when they have proven their worth. For a panel entitled “I Do What I Love to Do…and I’m a Girl”, it seemed as though the title was trying to affirm this as truth in a world that still sees otherwise. And I do hope these reflections are taken as what they are – reflections. Chapman’s panel was still enjoyable, and she is a lovely woman worth looking up to – I guess I could not help but feel there was something more she wanted to say, but could not.

I think the hardest pain – and again, hitting the point home, probably lay in Brave. Merida is modeled after Chapman’s own daughter. This was, as she put it, her passion project. The story was her own, and it must have been amazing to have Pixar take interest. But then to be removed from the project and watch another finish your passion work – that must have been awful. I am surprised in all honesty that she has the strength and resolve she does to continue working, continue inspiring, continue telling stories and speaking up. If for some reason you find nothing else worthy of Chapman to look up to, look up to that. I know I do. She stared into the face of something that for a creator must have been heart wrenching and came out on the other side still in love with the creative process and her work. That is magnificent.

Along the same point of Brave, I had the opportunity to pose a question to Chapman regarding Merida’s redesign by Disney. For those unaware, Merida was stylized in marketing to look “more princess-like” – taming her hair, giving her curves, putting her in the dress she so hated in the movie, and removing her signature bow. We could speak again to the assumptions of women in our culture – but that topic seems tangential and bludgeoned to death.

Chapman publicly spoke out against the redesign, as Merida was, in essence, an embodiment of being truly yourself – which the redesign stripped her of. For a while it seemed as though the public outcry had been heard and the issue waned. But then Halloween costumes rolled out – and when Disney princesses hit the shelves, the Merida redraw was back, after the issue seemed to have blown over. I posed the question to Chapman: did she think all the efforts to get Merida’s original design back had made a difference?

However, as I had already seen the new product releases, I knew the answer was no before she replied. I was more interested in how she planned to move forward – would she continue to fight? Disney is, of course, a force to be reckoned with – and at the end of the day, they are a company and need to be concerned with their marketing and advertisement. The fight may be a losing battle – but it is one Chapman intended to keep fighting. She stated she was planning to continue speaking with MightyGirl and other awareness campaigns about next steps. I applauded that despite being knocked down yet again, she still hoped to carry on and keep trying.

One of the final somber points Chapman made was that she is going to move forward keeping her passion projects a little closer to herself. I think this point is the saddest part – the world needs people coming alive in their passions. Of course, Chapman can work on them herself, but sometimes the best way to bring passions to life is being able to work with others on them. Someone with such wonderful and inspiring ideas as her should be able to share them and help them grow through the assistance of others into whatever beautiful thing they can become. There is no limit to our capabilities – but sometimes we expand them through work with others. And it is sad that past offenses have caused her to lose some trust in that – like so many other genius, creative minds.

Despite all of the sobering undertones felt in this presentation, Chapman truly inspired me to continue to trudge forward despite falling. To continue to prove to the world what I am worth even when the world doesn’t want to listen. To ignite my passions even if I must do so alone.

Not to be corny, but Chapman truly taught me how to be brave.

For more information on Brenda Chapman, visit http://brenda-chapman.com/
For more information on the original controversy over Merida’s redesign, among other news websites you can visit http://www.theguardian.com/film/2013/may/13/brave-director-criticises-sexualised-merida-redesign

“If you had the chance to change your fate, would you?” – Merida, Brave

GHC Reflections: Grace Hopper (Looking Back)

For this reflection I wanted to take a step back and look at the namesake of the Grace Hopper Celebration – Grace Hopper herself. Time and time again she was remembered and commemorated at the event, and rightly so.

Grace Hopper was a United States Navy Rear Admiral, and of course a computer scientist. In addition to her plethora of distinctions from the United States for her service, she is best known for creating the first compiler. In this regard, Grace Hopper is one of the “mothers” of computing. Modern computing simply would not be possible without compilers. In addition to this, she advocated the idea of machine-independent programming, which led to the development of COBOL. She is also known for popularizing the term “bug” in computing.

It is easy to see why Grace Hopper would be a strong representation for a conference celebrating women in computing. A woman with such great success, who helped found modern computing as we know it, surely deserves such recognition. However, in my eyes she represents more than just success in the computing field. She was also a strong woman who refused to be shied away from her aspirations in computing.

Thinking back to Sheryl Sandberg’s keynote, and the strong undertones throughout the entire conference, one key message rang clear: women are underrepresented, undernoticed, and undertrusted in the computing field. Grace Hopper is a symbol of both this perpetuation and rising above it. No one believed that she had created a running compiler. Her passion for compilers and machine-independent programming led some to believe she was crazy. Those in her field (mostly men) told her the computer was only good for arithmetic, nothing more – that she was wasting her time on silly pipe dreams.

Yet look where we are because of her.

And still, for the amazing amount she has contributed to our technology today, how much is she recognized as an important figure? Not to fall into a tangential “her-story” monologue, but truly, how much do we learn of Grace Hopper in a technology classroom? Men like Hoare and Dijkstra are remembered fondly for their algorithms – none of which would even apply to computing had Hopper not developed the compiler. Even in a computer languages and compilers classroom, her name is scarce. The shame of prominent and competent women remaining unseen in the public eye when it comes to technology seems to apply even to someone as strong and amazing as Hopper.

Regardless of this dysfunction, nothing can take from Hopper her colorful career and amazing achievements. And for the fortunate who recognize her achievements and, if I may be bold, general awesomeness, a world of inspiration and stories of potential, as well as a network of committed, diverse technologists, await. Though Hopper may not be recognized as strongly as she always should be, she is still remembered, still recognized, and still carries a strong legacy that we can learn from and grow in.

One thing that impresses me about Grace Hopper, beyond her accolades, is some of the quotes attributed to her. She is known for such oft-quoted phrases as “A ship in port is safe, but that is not what ships are for. Sail out to sea and do new things” and “It’s much easier to apologize than it is to get permission”. Again, when these quotes are said they are not often attributed back to Hopper, but a quick search will confirm that she is indeed the one who said them. As a quotes and poetry lover, while looking into Grace Hopper’s life I was amazed that such an accomplished computer scientist had such a way with words. Perhaps an inherent love of languages helped her develop the compiler in the first place? Either way, I was impressed and excited.

Hopper’s fierce, tell-it-like-it-is attitude, eccentric and quirky manner, and overall – for lack of a more efficient word – epicness add up to one wondrous firecracker of a woman warranting all the praise and celebration she has received over the years. Hopefully in time she and more women like her will be recognized more highly for their amazing achievements and inspiring success stories – but for now I’ll hold her close to my heart as someone I feel I can relate to, look up to, and allow to inspire me.

For more information about Grace Hopper, and about the Grace Hopper Celebration of Women in Computing, please visit: http://gracehopper.org/2013/

“The most dangerous phrase in the language is, “We’ve always done it this way.”” – Grace Hopper