Talking Personas and Empathy Mapping with UX Akron

I had the great opportunity last week to give a short presentation at UX Akron and to lead a small workshop on personas and empathy mapping.

A persona is simply a fictional character (or set of them) your team creates to serve as a snapshot of your audience. The idea is to take things like demographics, goals, wants and needs and give them a relatable human presence. It’s much easier for us to be empathetic when we’re discussing ‘someone’ rather than just ‘our users’ or some other more abstract reference.

Empathy mapping is an exercise used to get in the mindset of your users, helping you to think and act like they would. You take one of your target users, and divide up a sheet into quadrants: thinking, feeling, seeing, and doing. You brainstorm different ideas with sticky notes for each section to help build a snapshot of this user at this point in time.

Our goal was to learn a bit about personas, and then look at the personas that had been created for the UX Akron group. We used empathy mapping to understand the thoughts and motivations each of those personas would experience while thinking “I’m considering going to a UX meetup.” Reviewing the results led to some quick ideas about marketing opportunities the group is missing, and about how it might structure its messaging to better appeal to certain groups.

Overall it was a great night with a diverse group. I look forward to attending as a regular member, and hopefully speaking again in the future.

If you’re looking to learn more about a current design topic, let us know at info@coffeeandcode.com. We’re booking speaking and workshop engagements now for Fall and Winter 2016.

Photo credits: UX Akron & WOCinTech

Returning Simple Data with Tastypie

It’s not every day that I find myself in the land of Python, but I recently started working on a project with a friend’s company to help them reach an upcoming deadline.

The project uses Tastypie to generate an API based on Django models, which works great! However, when I wanted to return generic data I ran into a bit more resistance than expected.

The documentation pointed me to Using Tastypie With Non-ORM Data Sources, though I was hoping for something less heavy-handed. I didn’t want to build resourceful routes around a custom data source; I just wanted to return a simple JSON object.

I ended up with the following solution:

# Imports needed for the resource below.
from tastypie.fields import CharField
from tastypie.resources import Resource


# Custom object that we'll use to build our response.
class CustomResourceObject(object):
    def __init__(self, name=None, label=None):
        self.label = label
        self.name = name


# The Tastypie resource that will return our data.
# Make sure to inherit from Resource instead of ModelResource.
class CustomResource(Resource):
    # You will need to add fields for each property
    # that will be returned in the response.
    label = CharField(attribute='label', readonly=True)
    name = CharField(attribute='name', readonly=True)

    class Meta:
        # Start by disabling all routes for this resource
        allowed_methods = None
        # Allow the `get` index call where we will return data
        list_allowed_methods = ['get']
        # Use the custom object we created above
        object_class = CustomResourceObject
        # API endpoint for this resource
        resource_name = 'custom_endpoint'

    # Create our list of custom data.
    # `DjangoModel.CHOICES` is the choices tuple on the client's model:
    # an iterable of two-item tuples.
    def get_object_list(self, request):
        return [CustomResourceObject(label=val[0], name=val[1])
                for val in DjangoModel.CHOICES]

    # Return our custom data for the API call
    def obj_get_list(self, bundle, **kwargs):
        return self.get_object_list(bundle.request)
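For completeness, here’s a minimal sketch of how a resource like this gets wired into Django’s URLconf using Tastypie’s standard Api object. The v1 API name, URL prefix, and module path are illustrative assumptions rather than details from the actual project:

# urls.py sketch: register the resource with a Tastypie Api.
from django.conf.urls import include, url
from tastypie.api import Api

from .resources import CustomResource

v1_api = Api(api_name='v1')
v1_api.register(CustomResource())

urlpatterns = [
    url(r'^api/', include(v1_api.urls)),
]

With that in place, a GET to /api/v1/custom_endpoint/ returns the list built by get_object_list.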

Photo via Visual hunt

You Promised Me!

I absolutely love Promises in JavaScript code. As someone who started their programming career in the DHTML days, I’ve seen a ton of new features added over time, but few have seemed as powerful as being able to control the flow of my software.

I want to focus on one aspect of Promises for this post, though: exceptions. If an exception is thrown inside a Promise’s function, the promise will be rejected with the exception as its value.

Here’s an example:

new Promise(function(resolve, reject) {
  throw new Error('ARGHHH!');
}).catch(function(error) {
  console.log('The error is:', error);
});

What I don’t have to do is wrap anything in try / catch to make sure the exception does not halt my program. Great!

One thing that sometimes slips my memory, though, is that this implicit catching of errors only applies to the function being executed inside the Promise (the executor); it does not extend to other callbacks called from that function.

const fs = require('fs');

new Promise(function(resolve, reject) {
  // Error thrown if the file "post.md" does not exist
  fs.readFile('post.md', function(err, data) {
    if (err) throw err;
  });
}).catch(function(error) {
  console.log('We will never get here.');
});

If the file post.md does not exist, Node will throw an error along the lines of: Error: ENOENT: no such file or directory, open 'post.md'. That error will not be caught, and your app will have a bad day.

The reason is that the callback passed to the readFile method executes later, on its own call stack, so you have to rely on normal try / catch logic if your intent is for the Promise’s final catch statement to receive your error.

const fs = require('fs');

new Promise(function(resolve, reject) {
  fs.readFile('post.md', function(err, data) {
    try {
      // It's now ok to throw an error here.
      // You can also just reject it.
      if (err) throw err;
    } catch (error) {
      // Reject any caught errors.
      reject(error);
    }
  });
}).catch(function(error) {
  console.log('The error is:', error);
});
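As the comment in that example hints, the try / catch isn’t strictly necessary; since we’re only catching in order to reject, we can hand the error straight to reject. Here’s the same sketch simplified:

const fs = require('fs');

new Promise(function(resolve, reject) {
  fs.readFile('post.md', function(err, data) {
    // Pass any error straight to reject; no try / catch needed.
    if (err) return reject(err);
    resolve(data);
  });
}).catch(function(error) {
  console.log('The error is:', error);
});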

Hopefully this helps you make sure your code’s flow control is exactly as you intended.


Sign up for our newsletter to learn some more tips and tricks, or just keep up to date on what we’re doing.

If you’d like to teach those tips and tricks to your team, we offer coaching and training opportunities for existing team members at your company.

We’re going to South by Southwest!

Later today I get to board a plane to one of my favorite American cities, Austin, Texas. It’s that time of year for the ridiculously large South by Southwest conference that celebrates all things tech, film, and music.

This trip is a little different though, as I’m going to celebrate one of our clients, iDisclose, being one of the selected startup companies at the SXSW Startup Showcase.

iDisclose robot avatar

I’m extremely happy with the work that we have done to get iDisclose to market over the past year. It is very satisfying to help a client go from idea to execution and then watch them gather the media and industry attention they deserve.

None of this would have been possible without CEO Georgia Quinn, though. She’s extremely good at what she does and has made the difficult world of crowdfunding much easier to navigate. This will even be the first time we get to meet in real life. Google Hangouts have gotten us a long way, but nothing beats being able to high five an amazing client in person.

If you’d like to learn more about iDisclose or how we tackled the project, feel free to say hello at SXSW or online.

Choosing a Prototype and Wireframe Tool in 2016

The past few years have seen an explosion of interest in using interactive prototypes to build, evaluate, and iterate on design work. Coupled with the rise of style guides and atomic design, the prototype is now often the primary deliverable many designers create.

Thankfully, the tools to create prototypes have kept pace; there are myriad choices available, with something new posted with shocking frequency on design blogs. How do you select a tool for your workflow? What are the pros and cons of the different types of software? I’ve prepared some notes to help guide and focus your search, inspired by a great session at UX Akron.

Before getting much further, I’d like to address wireframes. Though the term has fallen out of favor lately, I think wireframes are still incredibly relevant and useful. The difference between a rough prototype and a clickable wireframe is just terminology. I don’t believe wireframes need to be static, nor do I think prototypes need a certain level of fidelity. The differences seem largely semantic and based on industry zeitgeist. The level of fidelity should be based on the project and your needs, not a predetermined bias.

Types of Prototype Tools

There are three main categories that modern tools fall into:

  • Screen or page focused
  • State or layer focused
  • Code focused

Some tools are a hybrid of these, but dividing our choices into these categories makes them simpler to evaluate.

InVision screenshot

Screen or page focused

In a nutshell, these let you create pages and/or import static images, and then easily create hotspots that link to other pages. This can sometimes be automated, so that whenever your design files are updated, so are your prototype screens.

How much interaction you can add and the level of animation are somewhat limited, but the upside is that these tools are extremely easy to get up and running with. They’re an ideal choice when you need higher fidelity than just boxes and text, such as when adding features to an existing app. Conversely, tools like Balsamiq are page based with the goal of keeping things lo-fi, so there are options on both ends of the spectrum. I also include Keynote and PowerPoint, which weren’t designed as prototyping tools but have plenty of functionality to work, and work quickly.

Popular page/screen focused tools are Balsamiq, InVision, Flinto, Marvel, Fluid UI, Keynote, and PowerPoint.

OmniGraffle screenshot

State or layer focused

In contrast are tools focused on changing states, or layers. You can have multiple layers on a single page, dynamic states within a single layer, variables/conditionals for elements, and fairly detailed control for time-based animations and interactions.

With all of this, you’re afforded more control than page-based tools offer, including the ability to link between pages/views in more complex ways. But these tools tend to be more expensive and complex due to their extensive feature lists, which brings a larger learning curve.

Some tools I lump into this category are Axure, OmniGraffle, Proto.io, Pixate, and Indigo Studio.

Code focused

The last group consists of libraries that help you create prototypes programmatically. This gives you essentially complete control over your prototypes.

Creating prototypes this way can help communicate your ideas to developers, because they can look into the code and possibly translate it into production-quality code, or at least understand the intention behind it.

The challenge with this approach is that it depends heavily on your programming skills. It’s also often more difficult to share and present with other members of your team, since there typically aren’t built-in web viewers or presentation wrappers.

Quartz Composer with Origami and Framer JS are two examples of tools that fall into this category.

So what’s the best tool?

The short answer is obviously that it depends… Each project will have different needs, and you or your team (or clients) will have different requirements too.

With that being said, I turn to screen based tools first, specifically InVision. My experience is as a designer so having synced files from Sketch or Photoshop is incredibly handy and lets me explore ideas quickly. The prototyping and animation features are limited, but it fits the use cases I’ve had, and having such a low learning curve is wonderful for a working professional.

When there’s a need for a larger prototype, I’ve used Axure as well. Visual fidelity wasn’t the goal, but for pure prototyping, its organization of complex projects and its support for team collaboration were invaluable.

Don’t worry (too much) about the software

Ultimately, any of the tools here (and all of those that I missed) will allow you to build prototypes. The most important things are your process and your communication; no tool can make up for weaknesses there. Showing your team a rough prototype early on is much more valuable than spending days getting something perfect, only to realize you’re too late and the project has already moved forward. Explore the options, find something you like, and do great work.

Cleanup Docker Images and Exited Containers

It’s pretty easy to accumulate Docker containers and images on a development machine. Normally, every container that is run is preserved on your machine. Try running docker ps -a and see how many you’ve accumulated over time.

Concerned about the loss of disk space over time, I found a blog post that talked about cleaning up after Docker. Part of the article covered removing dangling images: intermediate images that are created while building other containers.

docker rmi $(docker images -f "dangling=true" -q)

However, I also wanted to clean up the exited containers that accumulate from docker-compose run commands, while leaving the containers that are automatically started and stopped by docker-compose up. Conveniently, docker-compose names its one-off containers with a _run_ segment (for example, project_service_run_1), which gives us something to filter on.

I ended up with the following bash alias to help me reclaim disk space:

alias docker-cleanup='docker rm $(docker ps -a -f "name=_run_" -q) && docker rmi $(docker images -f "dangling=true" -q)'

As a bonus, here’s the bash alias I use to quickly connect a terminal session to the default docker-machine.

alias docker-setup='docker-machine start default; eval "$(/usr/local/bin/docker-machine env default)"'

Prioritizing Performance: Completing a performance audit using Web Page Test

Optimizing a website for performance is hard, but that’s what makes it fun! When you plan to accomplish a goal, it’s never a good idea to go from 0 to 100 immediately. Whether it’s running a marathon, getting out of debt, or learning to wake up early in the morning, you have to take things one step at a time. This applies to improving performance as much as it does to those other examples.

The first step to improving your site’s performance is to establish a baseline of where it is today. To do that, we’ll complete a performance audit: an in-depth, point-in-time look at the performance of a website.

Enter Web Page Test

Web Page Test provides you with a very detailed view of what’s happening on your website. Here are the key metrics that you need to pull out and begin to document:

  1. Load Time (Document Complete Time)
  2. Time to First Byte *
  3. Start Render
  4. Speed Index **
  5. Fully Loaded Time

These are the high-level items that you’ll want to be able to reference, but for this audit we’ll go a bit more in-depth to see what’s really going on behind the scenes. To do so, we’ll need to track data from each request the site makes. For each request, we’ll want to know the following (a short script for pulling these numbers automatically appears after the list):

  1. MIME Type
  2. URL
  3. Size (KB)
  4. Request Start Time
  5. DNS Lookup Time
  6. Initial Connection Time
  7. Time to First Byte
  8. Content Download Time
  9. Total Time Taken
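Web Page Test can hand you all of this as JSON, so you don’t have to copy numbers out by hand. Here’s a minimal Python sketch, assuming the requests library and a finished test run; the test ID is a placeholder, and the field names follow Web Page Test’s JSON result format:

import requests

# Placeholder ID for a test you've already run at webpagetest.org.
TEST_ID = '160101_XX_ABC'

result = requests.get(
    'https://www.webpagetest.org/jsonResult.php',
    params={'test': TEST_ID},
).json()

# High-level metrics for the first view of run 1.
first_view = result['data']['runs']['1']['firstView']
for metric in ('loadTime', 'TTFB', 'render', 'SpeedIndex', 'fullyLoaded'):
    print(metric, first_view.get(metric))

The per-request details (MIME type, sizes, and connection timings) live in the firstView requests array of the same payload.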

Once you’ve arranged all of this data in a spreadsheet, I’m sure your first question is something like this:

What does all of this data mean!?

You, probably

With this information we’ll be able to pinpoint exact situations where we can improve; we just need to know what to look for. Let’s look at a quick example to outline one scenario we should be looking for:

SVG Chart Example

Here we can see that we’re loading five SVG images. These images take a total of 597ms to perform DNS lookups, establish connections, and wait for a first byte. That means half a second went by before we even started to download these images! Granted, some of these steps can happen in parallel, but we can all agree it’s a bit of a waste for five images that take 0ms to actually download because of their small size. An easy improvement is to create a sprite sheet, collapsing the network latency of five separate server calls into a single request.

For those unfamiliar:

[Sprite Sheets] are important for website optimization because they combine several images into one image file to reduce HTTP requests.

From Guil Hernandez at the Team Treehouse Blog

Please keep in mind that the sprite sheet optimization technique (hack?) is for a website served over HTTP/1.x. It is considered an anti-pattern under the newer HTTP/2 protocol. You can read about the HTTP/2 protocol here. For a more concise take on the switch from HTTP/1.x to HTTP/2, check out this post by Matt Wilcox.
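For anyone who hasn’t built one, here’s a minimal sketch of the CSS side of a sprite sheet; the file name, dimensions, and class names are all hypothetical:

/* icons.png is a single (hypothetical) image containing every icon. */
.icon {
  background-image: url('icons.png');
  width: 16px;
  height: 16px;
}

/* Each class shifts the shared background to that icon's slot. */
.icon-search { background-position: 0 0; }
.icon-menu   { background-position: -16px 0; }

One request now serves every icon, trading a little CSS bookkeeping for the elimination of four extra round trips.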

This scenario should hopefully give you an insight into what we’re looking for as we sift through this data. Every site and scenario will be unique, so I’ll let you dissect away!

When all is said and done, you should be able to get some valuable data by aggregating the request times by section. Ours came out looking like this (Y-axis is time in milliseconds):
Performance Graph

Are you noticing a lot of initial connection and time to first byte time, but not a lot of content download time? You might want to try combining some assets to make fewer HTTP requests! Are you seeing high content download times? You might want to try compressing your assets to decrease their size to a more reasonable level!

Hopefully with this information you’re ready to audit your own website and begin prioritizing performance!


* If you’re using a CDN like Cloudflare and gzipping your files, you shouldn’t worry about the Time to First Byte metric, as stated in this post by Cloudflare themselves.

** The math behind a Speed Index Score is very interesting, and best described in this page in the Web Page Test documentation.

Codemash 2016 As a Designer – Pushing Yourself to Learn and Grow

I will admit that I had some trepidation about registering for this year’s Codemash conference. I wasn’t worried about the conference overall, which is always wonderfully organized, presented, and planned, but rather about the content of the presentations themselves.

I’m a designer; user-focused design is what I do daily (and occasionally nightly) for clients at Coffee and Code. Codemash, as you can guess from the name, is a developer-focused conference. Last year there were a good number of talks geared toward designers and front-end developers; I learned about animation, illustration, and style guides. I was even fortunate enough to give one myself on a modern design process. This year, though, the talk abstracts seemed to fall more squarely in the coding realm, and anything related to design and UX was more of an introduction for developers.

My teammates convinced me to sign up; they made the important point that any concerns over the content wouldn’t matter when you consider the value of such a large conference that’s so close to us and so accessible. We want to support Codemash and the community, and in the worst-case scenario… I could hang out by the waterslides drinking Kalahari Sunrises!

As I looked over the schedule and planned out my days, I took the opportunity to see some truly wonderful talks; most of which had nothing to do directly with design.

Some highlights were:

How Do We Solve for XX?

There was an excitement in the room before this talk like none of the others I went to. The (wonderfully diverse) crowd seemed very eager to get started: the lack of women in the web/technology field, along with the alienation and attrition of those who are our coworkers, is a huge problem for our industry.

Kate Catlin (@Kate_catlin) was a wonderful speaker, really more of a leader for this interactive session. Her energy and boldness helped take this beyond a typical “we need more women” discussion and toward “what can all of us actually do?”

She began by discussing her background, and some shocking numbers regarding female tech workers.

The amazing moments came as we split into groups, each trying to tackle one step of the pipeline problem, from young girls being discouraged from STEM interests to women already in the field feeling isolated and excluded in their careers and workplaces. We brainstormed ideas and everyone shared their experiences: women, men, native English speakers and not, those from traditional tech backgrounds, and those who switched careers later in life.

It was eye-opening to listen to all the experiences and be exposed to some of the privileges that are so easy to be unaware of. The great thing, though, was that it wasn’t a complaint-fest; there was a hopeful tone throughout, and some great ideas to actually implement in our own communities.

This session was hugely beneficial to me in building empathy. As a designer, there are few things more important than that.

A Web for Everyone

I’ve often approached accessibility in web projects the way I think many others have: follow basic best practices… and hope for the best beyond that.

Dylan Barrell (@dylanbarrell) led this discussion, sharing what he had learned leading accessibility studies and audits over the past few years.

What was brilliant about this presentation was a blindfolded walkthrough of some sites using screen readers. Far more powerful than reading a spec and following guidelines, it was eye-opening to experience how visually impaired users consume the things we build.

The other “wow” moment in this talk came from a first-hand demonstration of the way a visually impaired (but not blind) user browses the web. She doesn’t use a screen reader, but instead relies heavily on magnification tools and context hints like color and contrast. It’s incredibly easy for us to assume accessibility == screen readers, ignoring users who have different levels of vision and possibly additional challenges: auditory, cognitive, or physical dexterity. Building in an accessible way is a complex challenge, but it’s also incredibly important, and one I hope we can make more of a priority as an industry.

Finally, a Voice for the Enterprise!

“Hey Alexa, tell me how our servers are doing.”

Voice command interfaces like Siri, Google Now, Cortana, and Amazon’s Echo are growing every day. It’s entirely conceivable that soon “interfaces” will be far less visual than what I’m used to building, and voice and tactile inputs will be how we work with them.

William Klos (@williamklos) led this session, which was as much real demo as it was discussion. He contrasted the different systems available today, and talked about the roadmap for new developments.

This presentation showed what we can do right now: customizing the Amazon Echo to run custom code, complete tasks, and interact with users. There are a number of pitfalls and shortcomings, but it’s clear that technology like this is only going to become more widespread. Seeing how you work with the APIs and how you structure your voice interactions was really illuminating; it’s interface design, but much different from what I’m used to doing.

Conclusion

The overarching theme in all of this was that I learned the benefit of thinking in a different way and becoming a more well-rounded learner. I could have easily gone to a different conference and learned about Atomic Design, or new prototyping tools, or new type systems, but instead I pushed myself in interesting ways and came away a better designer and human. Couple that with good times with friends, board games, and an indoor waterpark, and I couldn’t have asked for a better Codemash.

Running Tests Automatically With Watchman

I’m currently working on a very small PHP library for a client and was looking for a way to automatically re-run the test suite anytime a PHP file is updated, added, or deleted.

A similar library on the Ruby side of the fence is called guard, which listens for file events and runs commands in response. I’ve used guard in previous projects, but I try not to pull dependencies from other programming languages into a project when I can avoid it. Also, guard represents something I’m trying to minimize in my development process, which I’ll call a “wrapper application”.

“Wrapper Application”

I’m not sure if there’s a better word for it, but I’m going to talk about “wrapper applications”: libraries that wrap the core functionality I’m actually trying to work with. In this case, that functionality is calling a command in response to a file system change. To get to the library that actually does the watching, guard pulls in listen, which pulls in rb-fsevent, which does the actual monitoring.

If rb-fsevent makes a breaking change (or fixes a bug, or adds new functionality), I have to wait for the listen library and then the guard library to pick up the change. I’ve been burnt many times before (I’m looking at you, Grunt and Gulp plugins), so I try to minimize wrappers whenever possible. The fewer moving parts, the better.

All of that said, composition of libraries is not a bad thing, just something I like to look out for. Now, back to the story.

Enter the Watchman

When trying to find applications that could fit my need without being a “wrapper”, I ran into two other brew-installable apps called fswatch and fsevent_watch, but their cryptic usage instructions and command line arguments led me to Facebook’s Watchman. It’s open source, has been out for a few years, and seemed able to meet my needs without requiring a “wrapper” library. In addition, it seemed pretty robust.

Installation is pretty simple thanks to brew install watchman, and the introduction on the project’s website looked straightforward, but then things went downhill quickly.

The documentation doesn’t give many example setups, so it took a few read-throughs before I found out how to watch files and trigger commands in response. Unfortunately, running the test task isn’t very helpful if you never see its output.

It turns out that normally triggered tasks send their output to the watchman log file, which is conveniently buried deep in your system’s bowels. Since I didn’t want to tail a file, I kept poking around in the documentation until I found the watchman-make command. It’s a convenience command that invokes a build tool in response to file changes while sending command output to your terminal. It met my needs and didn’t require any complicated setup of triggers or watchers.

While digging through the documentation, I also found that watch-project is preferred over the deprecated watch command, so that overlapping watches can share a common process and be easier on the operating system.

That brings the command needed for my project to:

# Run this command to monitor PHP files and run phpunit in response.
# -p lists the file patterns to watch, --make names the command to
# invoke, and -t is the target passed to that command.
watchman-make -p 'src/**/*.php' 'tests/**/*.php' --make=vendor/bin/phpunit -t tests

It was a bit of a journey to get everything working properly the first time, so hopefully this information will be helpful to others looking for a similar setup.

How to be a Good Developer

I was fortunate enough to be asked to talk to Wadsworth High School students again at today’s High School Career Day. It was a very well organized event and I was proud to be a part of it. I have had quite a few role models that helped shape my personal tech journey and I like to give back when possible.

While last year’s talk focused on a mixture of entrepreneurship and tech, this year I wanted to talk more about differentiating yourself in an industry that’s attracting more and more people every day. The following are a few of the topics we covered, in case they’re helpful to someone else.

Do What You Love

I’ve found the easiest way for me to learn new things is to do the things that interest me, and constantly learning new things is vital to your growth as a developer.

Thankfully my desire to program on the web turned into a pretty good career as well.

An important thing to remember is that at some point in your personal development you will become stuck. The amount of knowledge you have yet to acquire will leave you confused about where to go next. I’ve found that focusing on the things that interest me, and not on the new and shiny, works quite well. One thing will lead to another in tech, but don’t discourage yourself with the expectation that you should learn it all.

Speak Out

I owe a lot of my company’s growth to the local special interest groups that I found through sites like meetup.com. I ended up with an entire network of amazingly smart and helpful people who have helped my business grow through word of mouth.

We grew together, learned together, and helped each other meet our career goals.

For me, giving talks at meetups led to speaking at conferences, which is an excellent way to be viewed as a subject matter expert. You’ll need to back up your words, but if you’re constantly learning new things, that’s not a problem.

On the Job Experience

Nothing beats actual on-the-job experience. I encouraged all attendees to take advantage of co-op / internship programs to find out more about how their desired industries operate day to day. Find out whether you’ll enjoy programming for the rest of your life as quickly as you can, to avoid a costly change down the road.

Outside of working for other companies, you can do things to showcase your aspiration and talent by tinkering on side projects or even contributing to open source development. As you learn more about programming languages, libraries, and frameworks you’ll be introduced to an entire world of opportunities to show off. Take advantage of it.

Finally, feel free to reach out to the companies you admire and ask if you can learn more about how they work. You may even be able to hang out and shadow them for a day to learn what skills they’re looking for, so you can direct your own personal education.