GHC Reflections: Front End Optimization

One of the single-part workshops I attended was a discussion and exploration of front-end optimization. As someone who works mostly in front-end design, I found it an intriguing talk. It was rather technically oriented, so the notes are a bit dry, but if you are stepping into this field at all, there are a few pearls you might find useful.

The first and most important note the presenter made was to optimize your digital projects for the front end – contrary to popular practice. It is of course important to build your systems on a strong framework, keep clear channels to resources, and reduce unnecessary clutter in back-end code, but people often forget the impact front-end code has on the end user. The front end is the layer that directly hits the user: if it is sloppily thrown together, performance can easily degrade even when the back-end code is flawlessly executed.

The next point the speaker hit on was minifying HTML, CSS, and JavaScript files. Every character in a file counts toward the kilobytes needed to load the site and can slow it down. The speaker pointed out that users are unlikely to care about “pretty code,” especially if it’s causing slower performance.
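
To make the point concrete, here is a small, hypothetical example of what a minifier does – the function below is invented, but the transformation is representative: behavior is identical, yet the minified form ships far fewer bytes.

```javascript
// Readable development version (hypothetical example):
function calculateTotalPrice(itemPrices, taxRate) {
  // Sum every item, then apply tax to the subtotal.
  const subtotal = itemPrices.reduce((sum, price) => sum + price, 0);
  return subtotal * (1 + taxRate);
}

// The same logic after minification – whitespace, comments,
// and descriptive names are stripped away:
function c(t,a){return t.reduce((r,e)=>r+e,0)*(1+a)}
```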

Minifying is a practice I’ve had trouble stepping into myself, if only because I like to “grab and go” with my code. The common advice is to keep two copies: your readable editing file, and the minified version you upload to the web. I’ve just had little reason to lately, as my own website’s pages are not terribly line-heavy. As I work on larger projects, minifying will likely become more and more my practice – the speaker’s stressing of it was part of the motivation I needed to look into it further.

Next were a few basic points, like avoiding redirects and bad URLs. Not only can they be confusing and frustrating to the user, but redirects increase page load time (the request has to jump around more than usual), and bad URLs will likely destroy the flow of users actually using the application. Redirects like m.mysite.com for a separate mobile site can also cause issues down the road: content missing from one version of the site, and two largely duplicated codebases to maintain (which may also hurt search engine optimization). Responsive design can fix this by letting one codebase with varied breakpoints serve all devices. If you must re-route, try to do it on the server side rather than the client side to reduce the redirect’s latency and overhead.

One last tip: if your redirects try to push a user into downloading your app (such as a mobile site redirecting to, or throwing a modal for, the app store), stop what you’re doing right now. Not only is this annoying and likely to drive traffic away from your site, it’s a poor attempt at hooking a user who isn’t even sure they enjoy your content yet, and the bad first impression may keep them from coming back. Furthermore, redirecting users to an app because developing your mobile site more robustly wasn’t in your plan shows a laziness to build your site with their needs in mind.
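
As a sketch of the server-side approach, here is how a permanent redirect might look in a hypothetical Node.js/Express app – the routes are invented, but a 301 issued by the server resolves in a single response rather than an extra client-side hop:

```javascript
// Hypothetical Express app: redirect a retired URL on the server side.
const express = require('express');
const app = express();

app.get('/old-page', (req, res) => {
  // 301 = permanent redirect; browsers and search engines cache it.
  res.redirect(301, '/new-page');
});

app.listen(3000);
```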

Enabling GZip compression was another point made, which required a little more research on my part as I hadn’t heard of it before. GZip is a compression algorithm that finds repeated strings within a file and temporarily replaces them with shorter references, which can make file sizes a lot smaller – especially in documents made for the web, where phrases, tags, and whitespace repeat constantly. If you (like me) had never heard of GZip and would like more details, find out more here: https://developers.google.com/speed/articles/gzip
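
As a minimal sketch (assuming a Node.js/Express server and the widely used compression middleware), enabling gzip can be a one-liner – the server then negotiates compression with each browser automatically:

```javascript
const express = require('express');
const compression = require('compression');

const app = express();
// Compresses responses for clients that send Accept-Encoding: gzip.
app.use(compression());

app.get('/', (req, res) => res.send('<html>...lots of repetitive markup...</html>'));
app.listen(3000);
```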

Page load times are obviously critical to the success of an application, and can often indicate how well performance has been optimized (after external factors such as internet speed are evened out, of course). Typical metrics suggest users start losing interest in a web page if it hasn’t loaded (or at least, loaded something) within half a second. Mobile users tend to have more patience, but after about ten seconds their patience is gone – two seconds or less makes them quite happy, though. These numbers are ones I use quite often now when asked “how long is too long” or doing quick load tests. They are simple to remember, but can really help in a pinch when you’re deciding whether existing code needs more optimization or whether it “loads reasonably” and you can move on to the next task or project.
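
For quick load tests like these, the browser’s standard Navigation Timing API is enough – this console sketch (with thresholds borrowed from the talk’s numbers) is one way to check a page against them:

```javascript
// Run in the browser console after the page finishes loading.
const [nav] = performance.getEntriesByType('navigation');
const loadMs = nav.loadEventEnd - nav.startTime;

if (loadMs <= 2000) {
  console.log(`Loaded in ${loadMs.toFixed(0)} ms – users should be quite happy.`);
} else if (loadMs <= 10000) {
  console.log(`Loaded in ${loadMs.toFixed(0)} ms – worth another optimization pass.`);
} else {
  console.log(`Loaded in ${loadMs.toFixed(0)} ms – patience is likely gone.`);
}
```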

Applying web best practices is another key component of optimization. Not only will following best practices likely result in more efficient and optimized code, it typically also yields cleaner code for developers to understand and better search engine optimization – and thus more end users.

Another practice for optimizing your users’ front-end experience is to cache and consolidate your resources. Consolidation can include file compression (such as GZip) as well as image compression. With image resources there is always the fear of a quality trade-off, but when done correctly images typically still have at least a bit of room for optimization with little to no loss in quality. If your site is image-heavy, I recommend looking into image compression and load optimization – it can seem scary, especially on a portfolio site where quality is key, but the results can pay off in happier users. This is definitely something I need to get more comfortable with myself, especially as I build out my own portfolio projects – and so I’ll challenge you to it also.
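
On the caching side, one common pattern is giving static assets long-lived cache headers – a hedged sketch for a hypothetical Express app (the folder name and durations are invented):

```javascript
const express = require('express');
const app = express();

// Fingerprinted assets (e.g. app.3f2a1c.js) can safely be cached for a year,
// because any content change produces a new file name.
app.use('/static', express.static('public', {
  maxAge: '365d',   // sets Cache-Control: max-age=31536000
  immutable: true,  // tells browsers the file will never change in place
}));

app.listen(3000);
```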

If you’re still unsure about compressing your images, you can at least dip your toe in the water by ensuring you’re using the correct file types. PNG (Portable Network Graphics) is almost always the most optimized file type for web and mobile use. GIF (Graphics Interchange Format) is typically best for very small images (think a Wingdings-style icon, at about the size of ten- to twelve-point font) or images containing very little color (typically three or fewer colors). Both GIF and PNG support transparency in modern browsers (transparency handling can get spotty, especially for PNGs, in older versions of Internet Explorer; if you’re having issues in IE 7 or 8, the fix can be as simple as saving your PNGs in “Indexed” rather than “RGB” mode). GIF also supports animation frames – so if you need animation in your image and cannot or do not wish to achieve the effect with several images and CSS (which can definitely be cumbersome), GIF is the ideal format. JPEG (Joint Photographic Experts Group) is ideal for photographic-quality images. BMP (Bitmap Image File) and TIFF (Tagged Image File Format) are no longer well suited for web applications.

Another key facet of front-end optimization is doing everything in your power as a developer to combat device limitations for your users. This includes designing adaptively: loading resources on user demand and sizing images to the screen to ensure the fastest load time, to name a few approaches. Practicing progressive rendering – loading an entire image at lower quality and progressively enhancing it as more capacity becomes available – helps ensure users on slower hardware still get the full experience, even if it starts off a bit fuzzy. JavaScript slowness can be a debilitating issue on slower CPUs; keeping this in mind and limiting your JavaScript to what is necessary (of course, don’t betray your functionality needs!) can help every user enjoy your website easily and speedily.
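
One way to load images on user demand, sketched below with the standard IntersectionObserver API (the data-src markup convention is an assumption for the example, not anything from the talk):

```javascript
// Assumed markup: <img class="lazy" data-src="real-image.jpg">
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // trigger the real download only now
      obs.unobserve(img);
    }
  }
}, { rootMargin: '200px' }); // begin loading slightly before the image is visible

document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
```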

The presenters finished out with a few tools that can be used to measure the performance of front-end and mobile development. Webpagetest.org can be used on internal sites – great for organizations with a large intranet presence. PageSpeed is a browser plugin that tests your page and gathers data on load times. Mobitest is optimized for mobile speed testing, and the Chrome Remote Debugger and Safari Web Inspector let you plug in an Android or iOS device, respectively, and test for performance.

Overall, a lot of great information here – some of it I was a bit leery of, given my own habits and my justifications for them, but I could see the merit in what the speaker was suggesting: at the very least, it is worth considering, and potentially implementing aspects of, on each project as the struggle between optimizing and “getting it done” rages on. Regardless, there was plenty I learned or at least gained a stronger awareness of, and I’m very glad I attended the workshop to have my eyes opened a little wider.

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” – C.A.R. Hoare

GHC Reflections: Mobile Design & Security

This lightning panel was rather interesting: the topics were fairly varied, but all were great to consider for mobile design and the future of data and security.

The first talk discussed a user’s “social fingerprint” – a mathematically unique signature of how a user interacts with their mobile device through social networks, texting, calling, and so on. Essentially, every user uses their device in a slightly different way – when these patterns are calculated, no two are exactly alike. This is an interesting concept: we often assume everyone talks, texts, or checks Facebook identically, but apparently this could not be farther from the truth. A social fingerprint is more than just how; it is also who and when – time zones, frequently contacted people, and more all make up the fingerprint. The term is often used loosely to describe our social usage in general, but it can be investigated more deeply to create this truly unique representation of our habits.
The speaker pointed out that if our social fingerprints are indeed unique, they could be used in some capacity for security measures, such as fraud detection. Exploring security measures beyond the password is definitely exciting territory. I worry, though, that the social fingerprint is “too” unique – in the sense that it could constantly change. If you cut ties with someone you used to call every day, would that not raise an alarm in social fingerprint detection? Obviously social media has ways to trend anticipated life events and interactions between people based on the sheer amount of data – but can everything truly be boiled down to a mathematical signature? I’m excited by the prospect of using social fingerprints, but concerned about their actual application – especially if the math and inputs are as complex as they seem they may be.

Another take on security was utilizing GPS to ensure secure interactions. Specifically, the speaker discussed GPS as a means to identify “zones” in the real world where one anticipates accessing devices, along with one’s level of confidence that, at those locations, the person on the device is indeed oneself. For instance: home and work may be level 1, where we are confident that if we are here, our device is being accessed by us. Level 2 may be the cafe or laundromat – places we frequent, but where we may accidentally leave the device unattended. Level 3 could be our hometown, neighborhood, or even state: areas where we can generally be expected to be, but could easily lose a device within. And level 4 might be anywhere else globally: access from these places would be irregular or unanticipated. The presenter discussed using these levels to give varying degrees of password and access assistance. If I’m at home and forget my password, I expect to receive all my hints and assistance channels for logging in. Out on the town, I may want fewer options to appear, just in case someone else is on my device. And I would most definitely want heightened security against anyone attempting access when I’m out of state or country (or trying to access from such places), so their hints should be extremely restricted, if present at all. The idea was to provide “secure spaces” that heighten security beyond just the password, guarding against attempts to breach it or obtain information pertaining to it.
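
To make the zone idea concrete, here is a purely illustrative sketch – the coordinates, radii, and level assignments are all invented, and a real system would need far more care – that maps a device’s current position to one of the four trust levels:

```javascript
// Invented zone data for illustration only.
const ZONES = [
  { name: 'home',     lat: 44.97, lon: -93.26, radiusKm: 0.1, level: 1 },
  { name: 'cafe',     lat: 44.98, lon: -93.27, radiusKm: 0.1, level: 2 },
  { name: 'hometown', lat: 44.97, lon: -93.26, radiusKm: 25,  level: 3 },
];

// Haversine great-circle distance in kilometers.
function distanceKm(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a));
}

// Level 4 ("anywhere else") is the default when no zone matches.
function trustLevel(lat, lon) {
  const match = ZONES
    .filter((z) => distanceKm(lat, lon, z.lat, z.lon) <= z.radiusKm)
    .sort((a, b) => a.level - b.level)[0];
  return match ? match.level : 4;
}

navigator.geolocation.getCurrentPosition(({ coords }) => {
  console.log('Trust level:', trustLevel(coords.latitude, coords.longitude));
});
```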

This topic is intriguing looking back, because Microsoft has been implementing a similar feature in Outlook. While I appreciate the security, at times it can be a bit overbearing – my work’s servers ping off a cluster that is not near us geographically, and this triggers the “suspicious activity” login check any time I try to get to my email at work. The security concept is great, but something like the presenter described, where I have more of a choice in defining my regions, would definitely save headaches (like when I try to log in at work for one small thing, only to face a chain of security measures whose details may be at home). It is definitely interesting to see this idea being implemented, and I am curious where the next steps will take it.

Another speaker in this panel discussed A/B testing – one of many kinds of testing I’m hoping to become more familiar with in my job. They stated that a strong A/B test can be made even more helpful by integrating code to capture user input or mouse movements, so patterns between sets A and B can be recognized and the user’s process more readily understood. Sessions and their data can be stored in buckets keyed by version – and even by time, cycle, or type of user – for quicker retrieval and review.
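
A hedged sketch of what that bucketing might look like – the hash, variant names, and event shape are all invented for illustration. Hashing a stable user ID keeps each user in the same variant across sessions:

```javascript
// Simple 32-bit rolling hash over a user ID string (illustrative only).
function hashCode(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) | 0;
  }
  return Math.abs(h);
}

// Deterministic assignment: the same user always lands in the same variant.
function assignVariant(userId) {
  return hashCode(userId) % 2 === 0 ? 'A' : 'B';
}

// Store each event in a bucket keyed by variant and timestamp,
// as the speaker suggested, for quicker retrieval and review later.
function recordEvent(userId, event) {
  const record = { variant: assignVariant(userId), time: Date.now(), event };
  console.log(JSON.stringify(record)); // stand-in for sending to analytics storage
}

recordEvent('user-123', { type: 'mousemove', x: 14, y: 280 });
```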

The next topic was accessibility in mobile. It was fairly straightforward, but always refreshing to keep in mind. The presenter highly recommended considering the accelerometer – think of technologies like FitBit, and how accessible its use is beyond just software and screens. Other considerations for accessibility: touch and sound. Consider your feedback to users: a soft pulse or vibration when they press a button, a light ding when an alert appears. Remember to consider how these affordances affect the experience for users who are color-blind, deaf, and so on – are your notification color choices still helpful, or even visible, to someone who is color-blind? Does your application give another form of feedback, such as a glowing icon or a tactile response, if a user is deaf and anticipating a ding?

The final presenter discussed flexible privacy controls. With the advancement of digital healthcare records and increasingly sensitive information going digital, companies at times forget the affordances that existed with physical/paper copies and that need digital counterparts. The presenter used healthcare as an example: certain health records you would like visible to your spouse, certain ones to your family, and certain ones only to yourself and your doctor (or only certain doctors), and so on. These preferences may also change over time: think of a bank account a parent can access while the child is in school, where the child may need or wish to remove the parent’s access once they are grown. While such issues were handled in the past with phone calls or paperwork, digital counterparts need flexible privacy controls so users can take care of these privacy needs with the same ease (or at least the same to a lesser amount of headache) as they did in analog. Flexible privacy controls can even extend to securing the applications themselves: if my healthcare app is linked to my phone, I may want additional security measures before the app starts, to ensure no one can tamper with my settings but me (and here we can even tie back to the earlier talks for more ways to secure our privacy!).
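
A toy sketch of such per-record controls – every name and the data shape here are invented – where each record carries its own viewer list and the owner can revoke access over time, mirroring the bank-account example:

```javascript
// Hypothetical health records, each with an owner and a set of allowed viewers.
const records = new Map([
  ['immunizations', { owner: 'alex', viewers: new Set(['alex', 'dr-lee', 'spouse']) }],
  ['therapy-notes', { owner: 'alex', viewers: new Set(['alex', 'dr-lee']) }],
]);

function canView(recordId, userId) {
  const record = records.get(recordId);
  return Boolean(record && record.viewers.has(userId));
}

// Only the record's owner may revoke someone's access.
function revokeAccess(recordId, ownerId, userId) {
  const record = records.get(recordId);
  if (record && record.owner === ownerId) record.viewers.delete(userId);
}

console.log(canView('therapy-notes', 'spouse'));  // false: never shared
revokeAccess('immunizations', 'alex', 'spouse');
console.log(canView('immunizations', 'spouse'));  // false after revocation
```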

I loved the focus in so many of these talks on users, their experiences interacting with their phones, and how that relates to the real world. They pointed out design imperatives and areas for continued development that can make phones – and, in turn, technology overall – an extension of and addition to the “real world,” rather than purely a distraction or a separate plane entirely.

“The mobile phone acts as a cursor to connect the digital and the physical” – Marissa Mayer

GHC Reflections: Web and Mobile Dev

The web and mobile dev lightning talk featured tons of technologies and trends for the next generation of development.

“World of Workout” was a concept for a contextual mobile RPG grounded in real-world fitness. It would use pattern recognition to identify users’ workouts, sparing them the complexity of inputting the information themselves (e.g., with the phone in an arm holster, it could recognize the motion of squats). The workout data would then drive the progress of the game avatar, with stats granted for the workouts done by the user: speed boosts for sprinting, strength for weights, and stamina for distance running. Another interesting proposed feature was accelerated improvement at the start of the game, so users are encouraged to settle into a daily routine, plus a fatigue factor so that rewards shrink when workouts become excessive. There would also be random rewards, with associated notifications, for doing “challenge” workouts with extra benefits attached.
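
As a toy illustration of that reward curve – every constant here is invented, since the talk described the idea only conceptually – an early-game boost plus a fatigue taper might look like this:

```javascript
// Invented reward model: boost new players, taper rewards for excessive sessions.
function workoutReward(basePoints, daysPlayed, minutesToday) {
  const newbieBoost = daysPlayed < 14 ? 1.5 : 1.0;  // accelerated early progress
  const fatigue = minutesToday > 90                  // diminishing returns past 90 min
    ? Math.max(0.2, 1 - (minutesToday - 90) / 120)
    : 1.0;
  return Math.round(basePoints * newbieBoost * fatigue);
}

console.log(workoutReward(100, 3, 30));   // 150: new player, normal session
console.log(workoutReward(100, 30, 180)); // 25: veteran, excessive session is tapered
```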

This idea really resonated with me as part of the “future of user experience”: what better immersion is there than a good game? And as we have learned, users appreciate apps that respond to them and provide gratification – which pattern recognition and rewards both do. After seeing this idea, I sketched out the concept for a similar game-incentive idea during a hackathon: TaskWarriors, an RPG based on checking things off your task list and gaining skill and gold based on the priority and type of each task (helping you balance your days and ensure you complete high-priority tasks before their deadlines). I’d really like to re-explore TaskWarriors; if done right, I think it could work as well as World of Workout seems (hopefully) fated to. It has also gotten me considering other avenues where gamification, customization, and rewards could help with immersion and user experience – hopefully I can learn more and get more chances to implement this in the future!

Parallax scrolling was another feature discussed during this talk – specifically, technologies that can aid or enhance parallax development. JavaScript and CSS3 were discussed for transitions, transforms, and opacity, while HTML5’s Canvas, WebGL, and SVG were also noted. Flash, VML, YUI scroll animation, jQuery plugins such as Jarallax, and JavaScript libraries such as pixi.js and tween.js (for easing effects) were also featured as possible parallax technologies.
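
For a flavor of the plain JavaScript-and-CSS-transform approach, here is a minimal parallax sketch – the layer markup and speed attribute are assumptions for the example:

```javascript
// Assumed markup: <div class="layer" data-speed="0.3">...</div>
const layers = document.querySelectorAll('.layer');

window.addEventListener('scroll', () => {
  const y = window.scrollY;
  // requestAnimationFrame keeps the transform updates in sync with repaints.
  requestAnimationFrame(() => {
    layers.forEach((layer) => {
      const speed = parseFloat(layer.dataset.speed) || 0.5;
      layer.style.transform = `translateY(${y * speed}px)`;
    });
  });
});
```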

Parallax is definitely an intriguing artistic feature for making a website seem more interactive. Obviously, like any interactive feature, there’s definitely a point where it could be much too much. But there are some beautiful parallax scrolling websites that show what an awesome addition it can be to your content, especially on websites telling a story with a long scrolling page, like this one: http://jessandruss.us/

3D graphics for web programmers was actually highly interesting to me. I’m terrible at making models (at least at present), but I have a bit of experience with Unity and have always found 3D development interesting. Though I would need to learn modeling to actually implement it, the presentation focused on three.js, a JavaScript library that seems to make it remarkably easy to program 3D elements directly into web pages – rather than building them in Flash, Unity, or another engine. At its most basic core, three.js uses a what (a mesh for the item and a point light for the light source), a where (a PerspectiveCamera in the scene), and a how (translate, rotate, scale; requestAnimationFrame) to render and move 3D objects. Source code is available at http://github.com/shegeek/teapots_can_fly, in which the presenter used only three.js, teapot.js (the item), and an HTML5 page to create the example.
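
Based on that what/where/how description, a minimal three.js scene might be sketched like this (the sphere geometry is a stand-in, since teapot.js belongs to the presenter’s example):

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// The "what": a mesh plus a point light.
const teapot = new THREE.Mesh(
  new THREE.SphereGeometry(1, 32, 16),             // stand-in geometry for teapot.js
  new THREE.MeshPhongMaterial({ color: 0x6699ff })
);
scene.add(teapot);
scene.add(new THREE.PointLight(0xffffff, 1).translateZ(5));

// The "where": a perspective camera looking at the scene.
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The "how": rotate and re-render every animation frame.
function animate() {
  requestAnimationFrame(animate);
  teapot.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```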

CourseSketch was the final web and mobile technology shown, which was also really exciting from a college student’s perspective. It is a sketch-based learning platform being developed for MOOCs, which would use sketch recognition to enhance the automated grading of online problems. Examples in development included truss diagrams for engineering, compound diagrams for chemistry, and kanji for Japanese. With many more courses moving to online submission and grading, one can see applications for this technology well beyond the MOOC platform and into other educational avenues – provided, of course, the technology were robustly developed, taking into account varied drawing styles and other hiccups that may occur.

Overall there were a lot of intriguing development tools and concepts discussed. Obviously this talk hit home with me as World of Workout inspired the beginning conceptualization and planning for the Task Warriors app, even if it hasn’t seen fruition (yet! I hope I can continue it!). I love talks like these that bring to light new ideas and useful technologies – they have so much inspiration and energy within them that drives tech forward.

“One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man.” – Elbert Hubbard

Task Warriors (c) Bri; Jess and Russ (c) JessandRuss.us

GHC Reflections: Augmented Reality and Streaming Media

The augmented reality segment focused on getting users’ attention when they view the world through the lens of a device, and then providing them with relevant information – for instance, seeing info labels pop up about locations in an unfamiliar place. A problem with labels, however, is linking them contextually to the objects they describe – and ensuring they stay large enough relative to the screen to be helpful, without clustering too densely and causing clutter. Solving this placement problem would go a long way toward helping users navigate these real-world scenes.

Eye tracking was a highlighted topic for augmented reality – understandable, when discussing label placement. Knowing where a user is about to look lets labels appear in context, and decreases the number of labels that must be populated at a time, making the clutter problem all but disappear. Eye tracking methods include infrared light detecting the pupil, and heat maps of vision. The latter is good for studying eye movements, but the former could be integrated into devices and actually utilized in real software for users.

A follow-up to the idea of contextually populating labels based on eye tracking does, however, raise a few issues of its own. For instance, how can one ensure that label transitions after the eye moves are not too distracting? Sudden or jerky movements would pull the user’s gaze back to the label, which could definitely throw off eye tracking software. “Subtle Gaze Modulation” is the concept of using just the right movement to draw the eye, but terminating the stimulus before the gaze reaches its destination. Think of a blinking or glowing-then-dimming light, drawing you toward it but disappearing before your eye lands on the spot that was radiating. Photography “tricks” like dodge and burn or blur can heighten contrast and create the same sort of gaze-catching effect. And for anyone interested, the mathematical formula used in the presentation for gaze modulation:

theta = arccos((v · w) / (|v| |w|))

where v is the line of vision from the focus, w is the desired line of focus, and theta is the angle between the two.
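
For the curious, the same computation in a few lines of JavaScript (treating the gaze and target directions as simple 2D vectors, which is an assumption made for illustration):

```javascript
// Angle between current gaze vector v and desired gaze vector w,
// via the dot-product formula above.
function gazeAngle(v, w) {
  const dot = v.x * w.x + v.y * w.y;
  const magV = Math.hypot(v.x, v.y);
  const magW = Math.hypot(w.x, w.y);
  return Math.acos(dot / (magV * magW)); // theta, in radians
}

// e.g. gaze straight ahead vs. a target up and to the right:
console.log(gazeAngle({ x: 1, y: 0 }, { x: 1, y: 1 })); // ~0.785 rad (45 degrees)
```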

The streaming media presentation was fairly standard, covering connection quality versus video quality. Adaptive streaming – the “Auto” default on YouTube, for example – is the concept of the requested stream quality changing relative to signal strength. The ideal of adaptive streaming is to ensure the user’s stream is never interrupted: quality may fluctuate, but there should be no buffering waits, and the video should always be visible.

Encoding also plays a huge factor in video: compression reduces file size, at the obvious cost of quality. The quality levels available to a video during adaptive streaming depend on file size – factors such as resolution (HD/SD), bitrate, and frames per second (fps). Reducing frames per second can shrink a file with potentially minimal consequences: video files contain a lot of redundancy (think of all the frames – many are near-repeats), and the human eye cannot perceive them all anyway. Codecs are compression and decompression algorithms that minimize the human-visible impact of video file reduction by exploiting exactly these redundancies.
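
A toy sketch of the adaptive-streaming decision itself – the rendition ladder and headroom factor are invented: pick the best quality that fits the measured bandwidth, and step down rather than stall:

```javascript
// Invented rendition ladder for illustration.
const RENDITIONS = [
  { label: '1080p', kbps: 5000 },
  { label: '720p',  kbps: 2500 },
  { label: '480p',  kbps: 1000 },
  { label: '240p',  kbps: 400 },
];

function pickRendition(measuredKbps) {
  // Leave ~30% headroom so momentary dips don't interrupt playback.
  const budget = measuredKbps * 0.7;
  return RENDITIONS.find((r) => r.kbps <= budget) || RENDITIONS[RENDITIONS.length - 1];
}

console.log(pickRendition(4000).label); // "720p": 2500 kbps fits the 2800 kbps budget
```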

As a budding UX professional, the eye tracking points intrigued me most. I would love to play with similar techniques in digital designs, attempting to help my users follow the path without over-correcting or pushing them as they adapt and explore on their own. It would be interesting to see how this could be refined to be subtle, yet assistive as needed.

“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke

All research copyright their respective owners

Google Pokemon MAPster: On Nintendo and Mobile

If you have any aspiring Pokemon masters as friends, or happened to open Google Maps up today, chances are you found out about Google’s April Fools prank this year.

Granted, there actually ARE Pokemon in Google Maps today: just in sprite form, and no traveling required. (Unless you count hopping from Harajuku to Old Faithful via the Maps app.)

While the collaboration between Google, the Pokemon Company, and Nintendo was a rather ingenious prank – tugging on any kid-at-heart’s nostalgia and gaining excellent publicity for all parties – what might not have been expected were the conversations it sparked about the future of Pokemon and, well, Nintendo games in general.

Nintendo franchises are some of the most beloved and memorable in games: Mario, Donkey Kong, Pikachu, and Link (Legend of Zelda) easily spring to mind when one is asked to think of a video game. One of Nintendo’s best selling points is the exclusivity of its characters: they are typically confined to Nintendo-only titles, with rare cameos elsewhere, and are playable exclusively on Nintendo consoles.

Does that exclusivity exclude Nintendo from some successful business ventures? Any console junkie will tell you that when it comes to hardware, Nintendo may have innovative ideas (a controller with a screen? some of the first motion-detection titles?), but its processing power can lag years behind Sony’s PlayStation or Microsoft’s Xbox. Some mobile devices may even have better processing capabilities and features than current-generation Nintendo devices.

Would it be better business for Nintendo to farm out their franchise characters? Or start developing and selling for mobile? Maybe opening up a retro games section of the Play store filled with mobile-formatted nostalgia-inducers?

Think of the possibilities mobile could offer: the augmented-reality-style game described in the Google Maps trailer isn’t so far off – granted, it might have to be scaled down a bit, since it’s unlikely one will hop a plane to Egypt to finish a game.

Mobile could also reach a base of users Nintendo is missing: users who love Mario and Pikachu but can’t bring themselves to shell out the money for a console just to play one or two titles, yet would gladly pay for those titles on their mobile device. Or users who play more classic mobile games a la CandySwipe or Cut the Rope, and would buy extra levels or make a micro-purchase in a small game starring their favorite characters. There’s a potential market left untapped.

Yet for all the possibilities, and all the frustrated Nintendo lovers and non-console-buyers who would clamor for mobile Nintendo love, there is some sound strategy in what Nintendo has done so far. As stated at the beginning, Nintendo built its characters partially on their exclusivity. Only seeing Mario in his Nintendo environment gives an expectation and a context, and it sets a quality expectation for the product. Letting Mario run around anywhere willing to shell out the cash for him could dampen the iconic status of him and other Nintendo franchises.

Plus, just like Sony and Microsoft, part of Nintendo’s profits come from console sales. While PlayStation and Xbox have plenty of great third-party developers to contract games to, and are known for the vast array of titles available to them, Nintendo breeds its consoles in part for the exclusiveness of its franchise titles – third-party developers are almost just gravy. Take the franchise titles and put them anywhere, and when stacked against competitors with better horsepower, who is going to buy the Nintendo console anymore? Nintendo may have novel hardware innovations, but given that its console sales already trail its competitors’, who can say how much further the scales would tip?

None of these conversations are to say Nintendo needs any advice. Their brands speak for themselves: the company has amassed quite spectacular revenue, and while their current consoles may seem in trouble, the company itself is far from being in the same waters. These are what-ifs, and an exploration of the whys.

The bottom line seems to be that for all the excitement and potential new markets Nintendo could open up by expanding its horizons, expansion could also be a fatal blow to the company. Console sales dwindling to a halt could rip open any gaming company, and beyond that, the iconic nature of Nintendo’s franchise characters could get lost in the mix as they jump from game to game and console to console. While it might seem backwards to those eyeing the potential innovations ahead, Nintendo sticking to what it knows may be exactly what it needs to continue on its path as a household gaming entity.

Plus, if the technology already exists, that means it can always become a part of the next big Nintendo thing. The 3DS already HAS augmented reality features, for example: they’ve just never been that strongly used in a franchise game to my knowledge. Maybe this Google Maps trailer is opening doors to something right in their backyard?

Regardless of what they choose to do in the future, Nintendo is a savvy company who chose to opt out of the console horsepower war and opt into developing further what was already working for them: their characters. I’m interested to see how their business plan continues to unfold, and I’m actually doing a marketing course research survey project on Nintendo and mobile devices, so you may see more blog posts about this from me.

But until then, I’m going to go back to searching for all these Pokemon in….where am I now, Kyoto? And hoping against hope if I find them all Google sends me a lovely little Pokemon master card to hang on my wall, right next to my pile of Pokemon plushies.

“Video games are bad for you? That’s what they said about rock-n-roll” – Shigeru Miyamoto

Pokemon and respective characters (c) Nintendo, Game Freak, and the Pokemon Company International; Mario, Luigi, and other characters (c) Nintendo

GHC Reflections: “Why Are We Still Geeks?” Panel – Part 3

(Trying to work toward graduation AND remember to blog is hard – sorry for the delay!!)

My favorite segment of the “Why Are We Still Geeks?” panel at Grace Hopper was part three, featuring Professor Kim Surkan of Humanities & Gender Studies. Her discussion revolved around the issues currently facing women in technology, and the steps being made toward remedying them.

Very early in the speech she made a memorable statement: “You have to remember, I am humanities, trying to step into your world – and let me tell you, your world is troubled.” With that statement alone, despite her not being a woman in the CS field herself, you could tell how deeply she understood the problems plaguing women in technology and how much she hoped to remedy them.

She went on to discuss several distinct points hindering the cause of women in CS. For instance, both genders have a habit of correlating gender with ability in STEM fields, which, regardless of actual skill, decreases interest and hinders ability, further widening the gender gap. In simpler terms: both women and men perceive men as better at programming, so women lose interest, stifle or hinder their own skills, and create an even wider divide that perpetuates the stigma.

She then tied this concept into another called “symbolic annihilation.” She argues that because young women struggle to see other women in computer science at all, it becomes difficult to protest the fact that they are underrepresented. It’s a difficult concept to wrap one’s head around at first; the best phrase for it is “it’s hard to protest an image that does not exist.” If we’ve never seen it, we have trouble conceptualizing it as a real problem. How can we address the problem of women entering STEM fields if we have barely any women in the field to turn to for a frame of reference? Out of sight, out of mind, as they say.

One fact Dr. Surkan shared that I found startling was that computer science is the only STEM field that has seen a decrease in women joining in recent years. As a woman in the field, I know we are few and far between – but to hear that the trend is only getting worse makes me very sad. Any woman can be good at whatever she chooses, and there is nothing about computer science as a field that makes it strictly male. I can think of plenty of areas that women can actually have an easier time conceptualizing than men due to how we process information – for instance, concurrency and object-oriented relations and definitions are things I’ve seen women grasp more quickly. And for those who enjoy a human element: data analysis, human–computer interaction, usability, and user experience are all realms where a craving for “social” work can manifest in computer science – areas that sorely need workers, yet few computer science majors are as interested in. Not to put the genders in stereotypical boxes, of course – I mean, Grace Hopper developed the compiler – that says just how much women can contribute to the field in any area they choose!

Surkan continued by discussing subtle differences in the language of computer science that one could argue contribute to the lack of women. The term “software” did not originate until 1958 – prior to that, computers were operated almost entirely by women! The opposition of hardware versus software plays on masculine versus feminine roles (men are “hard” and women are “softer”), and defined women solely as switchboard operators rather than people “able to build computers.” This language change may have helped solidify the gender divide within computer science – where women are thought able to use computers, but not to build them or program at any real depth.

She followed this with a case study of several events in the computer science world that have alienated women – and for me, these case studies were a turning point in how I viewed the CS world for women. I had known things could be bad, that we were few and far between – but some of these stories were beyond me. There was the 9-year-old girl at the TechCrunch Disrupt hackathon who, when other apps demoed were inappropriate for someone her age to have to see (and at least a touch objectifying to women), was blamed for being there – despite having built her app herself at the event. There was Anita Sarkeesian, receiving death threats for an attempt to Kickstart a YouTube series about the representation of women in video games. And there was Adria Richards, harassed and then fired from her job after tweeting about some men making sexual jokes at a Python conference. The more I Googled after the panel, the longer the list of stories became.

The above ties in with what she calls “brogrammer” culture – more and more startups and popular tech companies model themselves to attract young, thrill-seeking, twenty-something males, to the point where the office culture resembles a fraternity house party more than a corporation.

Now, there’s nothing wrong with a fraternity house party, and there’s nothing wrong with a woman wanting to be a ninja or a wizard or a Jedi (as advertisements for these jobs may ask whether you are, when it comes to coding), but there are elements of those environments and words that can make women automatically feel excluded. Perhaps when promoting jobs, companies should run a second, more general, or even women-targeted set of ads in addition (calling for code queens, Python princesses, and scripting sirens) if they wish to correct these images – and let their house-party style feel more like game & study night on the co-ed dorm floor.

When I discuss the issues facing women in computer science today, I constantly find myself referencing something I learned or heard in Surkan’s panel segment and branching my discussion out from there. No one has all the answers to these issues – but she definitely helped raise some of the problems and questions, which is always a necessary first step. More than likely I will revisit some of these issues and pose some thoughts for the future – but this post has become long enough just revisiting Surkan’s panel points.

Is there a current problem with getting women into Computer Science, and the environment for women in the field at certain locations? Certainly.
Can we fix it for the future? Definitely.
Will it happen overnight? Probably not – but if we persevere, we will overcome.

“Computer science is no more about computers than astronomy is about telescopes” – Edsger Dijkstra

GHC Reflections: Megan Smith Keynote

The second day of the Grace Hopper Celebration was kicked off by Megan Smith, vice-president of Google[x] at Google. For those unaware, Google[x] is a branch of Google devoted to more physical applications – Google Maps, Google Earth, and engineering for space innovations and methods of providing internet worldwide.

I was fascinated to find out about Google[x] – as searches for information on it yield rather sparse results. Granted, Google[x] is not in my specific field of interest – but hearing about seemingly “left-field” initiatives a company like Google is taking to expand themselves and make a difference was intriguing.

What stuck with me most from Megan Smith’s keynote was her discussion of moonshots – which is how they see the Google[x] initiative. “Moonshots” are thinking beyond the purported limits of what can be done and aiming higher. One such statement was along the lines of: “let’s throw away the thinking of how this product can change a million people’s lives – if we can make it change a billion people’s lives, well, then we’re talking.”

In this vein, “moonshot” seems to be a nod to the old inspirational saying (commonly plastered on grade-school walls): “shoot for the moon – even if you miss, you’ll land among the stars.” The idea is that if you raise your bar higher, you will likely exceed your original expectation, even if you miss the new mark. I, for one, welcome someone taking that phrase and coining it into a relevant term for innovative fields.

I think the concept behind moonshots bears repeating, and, while simple, is often forgotten. If you’re going to create a system or technology that works at such massive scale, you’re going to have to start from the bottom up. Fixing a car so that it gets not 60 mpg but 600, or even 6,000 – that line of thinking requires we reconsider how the car itself works and recreate it. To some this seems like reinventing the wheel – why not just optimize what exists and save time? But “reinventing” the wheel in this complete strip-down style can yield a non-wheel – something that takes the place of the wheel but isn’t one, and sheds many of the wheel’s prior issues. We like to think that by marginally raising the bar we save time and money – but why not set entirely new bars that, while intensive, could put us far and away from the competition?

Overall, Megan’s keynote reminded me to dig a little deeper, and not to settle for making something “better” but to shoot beyond it for perfect – and enjoy my landing (albeit a bit short) among the stars of outstanding when I succeed. I look forward to finding more opportunities for moonshots in my life – and hope she inspired others to do the same.

For information on the Google[x] open forum initiative, Solve for [x] which encourages moonshot thinking and collaboration, please visit https://www.solveforx.com

“Solving any problem is more important than being right” — Milton Glaser

GHC Reflections: Sheryl Sandberg Keynote

Last week I had the joy of attending the Grace Hopper Celebration in Minneapolis, MN as a scholarship recipient. For those unaware, the Grace Hopper Celebration (GHC) is a conference celebrating women in computing through speakers, panels, research presentations, learning sessions, and of course – dancing. The conference was awe-inspiring: seeing so many women in a field where we are extremely underrepresented coming together with a common interest and drive. I left with new knowledge and new vigor, and would like to share some of my experiences.

The kickoff keynote Wednesday was by none other than Sheryl Sandberg, COO of Facebook. Anyone unfamiliar with Sheryl Sandberg can do a quick search to find how successful a businesswoman she is. Bringing in a powerhouse like Sandberg to speak to thousands of young, aspiring women in technology definitely kicked the conference off with a bang.

Sandberg discussed women’s underrepresentation in business in general – one of the topics she is most noted for speaking on – but tied it to technology rather elegantly, as the two topics often intertwine. For me, hearing Sandberg speak was a whole new level of amazing; I have been quoting her point about women’s popularity falling as they rise in power for quite some time now.

What was saddening was how what Sandberg said to all of us on Wednesday morning rang true throughout the entire conference: women in computing truly aren’t recognized as being as capable as their male counterparts. Almost every keynote and session thereafter surfaced some story reflecting the truth she expressed that morning.

It was also rather unsettling to see how even a woman as powerful as Sandberg, who advocates so strongly that any woman can be successful, still deals with misperceptions of women in business and technology. She spoke of a panel she had been on where one man stated “not all women are like Sheryl – she’s competent,” and another said that having women in the workplace might tempt him. It’s a pity that we still deal with these notions in business and in technology – and yet they ring too true. Stories of female developers who weren’t allowed by their bosses to touch any code lest they “break it,” and who upon finally writing the code and completing it well were told they “must be one of the good ones” – these experiences hold true across the board for women in technology. That even someone as successful as Sandberg – who should be the case in point for the absurdity of such statements – still has to deal with them proves how misconstrued our views of women in technology and business truly are.

There are many reasons for the gender gap and gender conceptions in technology fields (some of which I will discuss in reflections from other GHC panels and keynotes), but one thing is clear: it must be eliminated. Every developer approaches their project from a different mindset – why would we ever want to suggest that just because that mindset is female it is not valid?

Female developers have done amazing things – just look at the GHC namesake. Without a marvelously smart and driven woman like Grace Hopper, modern computing would not have been possible. Why is her achievement of the compiler swept under the rug, much like Sandberg’s success?

Being a woman is considered this fault that must be overcome for success – but it is not a fault at all. It means our success may come in different forms, and with a different background than many of our current counterparts (read: male). Variety is the spice of life, and allowing women’s successes to be celebrated and revered could breed wondrous possibilities and diversity.

And maybe, just maybe – if young girls see women who succeeded being regarded highly for their skill and achievements rather than called “lucky” for overcoming their gender barrier…well, maybe those young girls will know just how possible it is for them to be successful in the future as well.

See Sheryl Sandberg’s Keynote Here:
http://mashable.com/2013/10/02/sheryl-sandberg-grace-hopper

Sheryl Sandberg’s acclaimed book, Lean In, has inspired a foundation to support women and drive their ambition.
Learn more about Lean In circles, the Lean In foundation, or order the book here: http://leanin.org/

“I want to tell any young girl out there who’s a geek, I was a really serious geek in high school. It works out. Study harder.” -Sheryl Sandberg