
28 Things Everybody Should Know, Part XXII

Dangerous products should be harder to engage and easier to stop.

Earlier this month, a four-year-old girl died after being trapped inside a front-loading washing machine which was turned on by her 15-month-old brother. The event stirred up a good deal of discussion involving the design and usability of certain washing machines in households with children.

Childproofing a home is never easy, and often quite expensive. Retail stores devote entire aisles to safety mechanisms meant to guard children against numerous potential dangers: electrical outlets, drawers containing unsafe products, closet doors, sharp edges, hard surfaces and choking hazards, to name a few. As soon as a family expects its first child, it quickly becomes apparent just what a death trap some homes can be.

It’s impossible to remove every hazardous element from a child’s life, and attempting to do so only postpones the encounter. When dealing with products and environments that can pose a threat to a child’s safety, it’s good to take advantage of the one safety mechanism built into all children: their size. Children too young to figure out dangerous equipment start out with a very limited reach, and this should be utilized when designing products that can’t simply be kept away from children, such as washing machines.

According to news reports, the controls of the washing machine in question (a Kenmore 417 front-load washer) are a mere twenty inches off the floor, well within the reach of a small child, and can be engaged easily. In top-loading washers, the controls are usually set behind the door, and require a taller operator with an extended arm to start. With the advent of front-loading machines, the top surface became a place to fold or pile clothes, so keeping the controls up there would have seemed like a bad idea: a stack of laundry might block the buttons. Moving the controls to the front solved that problem, but put them down at a child’s level.

Years ago, a few medicine companies began advertising bottles that were easier to open, responding to elderly users having difficulties opening their medicine containers. Most childproof bottles feature caps which must be squeezed and forced open, or arrows which have to line up with one another before the cap will pop off. Both take considerable strength, and the arrows are small and hard to notice, making them harder for children to figure out. Obviously, these safety features cause problems for older users, who often lack both the strength and the eyesight needed to open the bottles. To solve this problem, the new bottles have a long tab sticking up from their caps, making them easier to grasp, though they still take a bit of strength to twist off. On these bottles, instead of the standard “Keep out of reach of children” warning, the label clearly states not to allow the bottle in any household with children, which is wonderful for older users, who are typically beyond the stage of having to worry about kids running around their homes.

The problem with this new style of washing machine isn’t only where the controls are placed, but the type of controls they use. As a user, I never really liked the push-twist-pull dial used to select the type of fabric and duration of the wash. Because the dial can only spin clockwise (a limitation I’ve never understood but have found on every dial I’ve ever tried), passing the desired setting means having to turn the thing around another rotation, and it isn’t always easy to know if the arrow is right on the correct setting or one click behind it. I’m always a bit uneasy about advancing an extra click when trying to select my setting, and because I’ve always used the exact same setting with all of my clothes, the fact that I have to turn the dial with every load does seem a bit pointless.

So the dial isn’t necessary, but eliminating it also gets rid of a helpful safety feature. How can a button-driven menu incorporate an equally effective one? One idea would be to require two buttons, placed far enough apart to demand two hands, to be pressed simultaneously. This would make it almost impossible to activate the machine accidentally, while still offering a simple way to get it started. Because the contents of the machine move around during the cycle, a release lever or button inside the machine isn’t possible, but in the interest of preventing another accident, unlikely as this sort of accident may be, it would be possible to install a small microphone that halts the cycle if a loud noise, such as a scream, is detected when the tub begins filling with water.
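The two-button idea can be sketched in code. This is purely a hypothetical illustration, not any manufacturer’s actual interlock; the class, the button names and the timing window are all invented for the example:

```python
import time

# Hypothetical two-button start interlock: both buttons must be held
# down at nearly the same moment before the cycle can begin. Spacing
# the buttons wider than a toddler's arm span forces a two-handed press.

PRESS_WINDOW = 0.5  # seconds within which both presses must occur

class StartInterlock:
    def __init__(self, window=PRESS_WINDOW):
        self.window = window
        self.pressed = {}  # button name -> time it was pressed

    def press(self, button, now=None):
        self.pressed[button] = time.monotonic() if now is None else now

    def release(self, button):
        self.pressed.pop(button, None)

    def can_start(self, now=None):
        """True only while two buttons are held, pressed within the window."""
        now = time.monotonic() if now is None else now
        recent = [t for t in self.pressed.values() if now - t <= self.window]
        return len(recent) >= 2

# A child mashing one button never satisfies the interlock; an adult
# pressing both widely spaced buttons together does.
lock = StartInterlock()
lock.press("left", now=0.0)
print(lock.can_start(now=0.1))   # False: only one button held
lock.press("right", now=0.2)
print(lock.can_start(now=0.3))   # True: both buttons held together
```

The timing window matters: without it, a button pressed and forgotten an hour ago could still count toward the pair.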

Cases like these make us realize how important it is to analyze every possibility regarding household objects, products and situations, and at least try to prevent accidents before they occur. I wouldn’t say a recall is necessary on washing machines like this, but users need to understand the ease with which they can be engaged, and make the controls harder to reach by keeping the machines elevated or their rooms locked if there are children about. As with the medicine bottles, manufacturers of these machines should make sure customers are warned of their inherent shortcomings as equipment easily accessible to children, by including printed warnings on boxes and in the manuals that come with the products. When accidents like this happen, there is often no one branch of the user experience process on which to place the blame, as all parties (design, development, sales and even the user) may have contributed to the unsafe conditions that led to the accident. That’s why it’s important to consider every step of the process when working to prevent future incidents.

28 Things Everybody Should Know, Part XXI

Try to break your system before someone else does.

Product testing is often overlooked by developers whose products aren’t a threat to anyone’s safety, or for which no laws mandate testing. But most products and services are designed for a market that isn’t composed of like-minded developers, and users will inevitably end up making mistakes not accounted for during the development process.

Another problem with experience design is that developers often test their own products in the way they’re meant to be used, without exploring different approaches that might inadvertently–or even purposefully–cause the system to fail.

Corner cases are situations beyond those normally anticipated by developers, where a user might push a product further than it was built to support. In certain scenarios, such as with load-bearing pulleys and cables, corner cases must account for wide margins (a pulley I own states its limit at 500 lbs, but it can probably sustain twice that without breaking; the company likely understated its abilities to prevent accidents and ensuing lawsuits), whereas electronics like computers don’t need such a large safety net (many people safely overclock their systems, threatening little more than the longevity of the computer itself).

Borrowing from Murphy’s Law: if a way to break a system exists, someone will find it sooner or later, and it’s best to catch it and fix it (or create an acceptable workaround) before the product hits the shelves and starts causing problems.

By way of example, most keyboards today only recognize four keys pressed at one time. The keyboards themselves can probably detect more than that, but refuse to relay the extra signals to the computer. I don’t know exactly why, but seldom are more than three keys ever used simultaneously, and it’s possible that too many signals at once could cause some applications to go a little crazy. (In fact, it may be a Windows problem; I don’t recall experimenting on a Mac.) But with all the various programs out there, most of which only deal with one or two keystrokes at a time, limiting the number of simultaneous keys the operating system recognizes has undoubtedly prevented some problems. And it’s still more than you could ever press at the same time on a typewriter.

This is a screenshot of LEOGEO, a website I discussed earlier. Under normal circumstances, the gray letters expand to display a link when the user rolls over each one, and revert to their single-letter state when the cursor rolls away. Essentially, only one link is in its full state at any given time.

In Flash, the commands used to trigger events when the cursor rolls on and off buttons are on(rollOver) and on(rollOut). However, there are a few more states designers often fail to account for, and one in particular can result in multiple rollover states the designer hadn’t planned for: on(releaseOutside). This tells the computer how to act if a user clicks the mouse button down, drags the cursor away from the button on the screen, and then releases the mouse button. Without declaring a releaseOutside event, the button stays in its rollOver position until the cursor rolls back on and off the button a second time.
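The stuck-button behavior can be modeled as a toy state machine. This is a Python sketch of the logic described above, not ActionScript; the event names mirror Flash’s handlers, and the rule about when rollOut is skipped is a simplification:

```python
class FlashButton:
    """Toy model of a button that expands on rollOver, as on LEOGEO."""

    def __init__(self, handles_release_outside):
        # Whether the designer declared an on(releaseOutside) handler.
        self.handles_release_outside = handles_release_outside
        self.expanded = False
        self.pressed = False

    def roll_over(self):
        self.expanded = True

    def roll_out(self):
        # Simplification: while the mouse button is held down, dragging
        # off a button doesn't run the rollOut handler.
        if not self.pressed:
            self.expanded = False

    def press(self):
        self.pressed = True

    def release_outside(self):
        self.pressed = False
        if self.handles_release_outside:
            self.expanded = False
        # Otherwise the button is stranded in its expanded state until
        # the cursor rolls over and off it a second time.


# Click, drag off the button, then release, without the extra handler:
stuck = FlashButton(handles_release_outside=False)
stuck.roll_over(); stuck.press(); stuck.roll_out(); stuck.release_outside()
print(stuck.expanded)  # True: the button stays expanded

fixed = FlashButton(handles_release_outside=True)
fixed.roll_over(); fixed.press(); fixed.roll_out(); fixed.release_outside()
print(fixed.expanded)  # False: the button collapses properly
```

Running the same event sequence through both buttons shows why a single missing handler is enough to leave several links expanded at once.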

LEOGEO’s buttons weren’t scripted to handle this unexpected behavior, which can occur when a user is moving the mouse and clicking multiple buttons rapidly–or whenever I decide to test buttons to see what will happen. Once a website goes live, there’s no telling who will use it, and if every unlikely problem isn’t anticipated, it will very likely turn up at the most inopportune time.

The best way to make sure a system won’t break is by doing everything possible to break it. Automotive companies crash test their own cars extensively, using their findings to improve on future models and features. Unfortunately, many developers don’t have the mindset of a product tester, and certainly don’t think the way typical users do, so without knowing what it takes to break a system, they can’t possibly know how to prevent such a breakdown.

28 Things Everybody Should Know, Part XX

Interaction should enhance the user experience, not hinder it.

Technically, all websites can be categorized as interactive, no matter how basic or seemingly unresponsive to a user’s behavior. With the exception of parked domains and single-page sites with no buttons or links to any other page, there is always some degree of interactivity between the human and the machine.

When the elements of a site are developed to react in a new, unexpected, experimental or engaging fashion, it becomes a subject of interactive design, with all the connotations and philosophies that go along with the practice. There are many reasons to make the switch from static HTML to a more dynamic presentation such as Flash: added functionality, a more human look and feel, or just a desire to stand out from the expanse of drab, unmoving sites on the web.

When planning the style and degree of a site’s interaction, it’s important to consider the reasoning behind it and whether it will enhance the overall user experience. Quite often, websites feature full Flash menus that, on closer inspection, are little more than simple menu lists with moving elements, slowing user navigation and causing unnecessary disorientation. In fact, a large number of artistic portfolio sites made in Flash are simple menus made more frustrating than helpful by making users chase moving buttons, explore confusing landscapes with no visible hints as to what leads where, and perform feats well beyond simply clicking on a concise list of available options, which would have worked just as well.

The Amsterdam Film Experience website starts off with a number of thumbnails randomly tossed about the screen (some overlapping others at times) which lead to a featured film or information about the event. The menu is more engaging than a simple list of pages and videos, but it makes it difficult to find what the user is looking for, especially since buttons don’t reveal where they’ll lead until the cursor rolls over them. This phenomenon is known as Mystery Meat Navigation, which, outside of exploration-centered experiences, is a very bad idea: it makes users do more work than should be necessary to discover where clicking will take them. After all, moving the eye is far simpler and takes less effort than moving the mouse and accurately stopping over a button’s hit area.

When the user chooses a thumbnail (by either double-clicking or dragging the image into the box in the lower right corner, again muddling the experience), the remaining thumbnails fall to the floor, where they remain for the rest of the visit, unless the user drags them around to see what’s hiding behind them. Having dropped to the same vertical position, the thumbnails are even more likely to overlap, leaving at least a couple completely hidden, along with some important text and the email input field. The sudden exposure to gravity gives these thumbnails a tangible quality, which might make the user feel more connected to the site, but with all the overlapping and trouble caused by vague button descriptions, it’s a shame to give the appearance of a row of physical objects and yet not provide something to hit when things get too confusing.

Of course, while the interactive element of this site isn’t necessary, the experience can still be quite enjoyable. But forcing users to play along with less than conventional site navigation, when many of them might want to quickly find what they’re looking for and move on, isn’t a good way to reach the broadest audience. A successful interactive site will be designed with the understanding that some users aren’t looking for an immersive experience, and supply a secondary, static navigation style to allow those users a less complicated experience.

Interactivity can greatly enhance the user experience, but there is a time and a place for it, and it’s impossible to tell whether a user will be receptive to interactive immersion at any given time. Instead of expecting users to fully appreciate the artistic vision of a website, designers should try to make sure the experience will benefit from the addition of interactive elements, and even then, try to give an alternative for what might end up frustrating a percentage of their visitors.

28 Things Everybody Should Know, Part XIX

Users expect navigation either above or to the left of the content.

More often than not, when a user visits a website, the purpose for visiting–a certain bit of information, for example–isn’t on the first page of the site. Users generally have to click around before reaching the functional, meaty part of the experience, and the faster users can find the desired links and get started, the less chance a site has of chasing them away prematurely.

Because the English language reads from left to right and top to bottom, users are naturally inclined to scan for useful navigation starting in the upper left hand corner and moving either right along the top, or down along the left. (That’s once the user’s decided to move on to another page, of course. Large splash images and other content usually grab the user’s initial attention, but when it’s time to move on, our instincts tell us to head for that upper left corner.) The layout of the page, in much the way a painting directs the viewer’s eye around its canvas, has a large impact on where the eye moves from that starting point in the corner: a prominent horizontal row of buttons along the top will imply that the most utilized navigation will be included in that row, while a column of buttons down the left side will tell users to scan downward first.

LiveJournal uses a horizontal navigation along the top of the page, where rolling over a menu item will bring up a submenu underneath. Placing the site’s logo in the upper left corner assures users that this corner is a good starting point in searching for common navigation and functionality.

YouTube’s navigation is spread around the site a bit more, with video-specific functions to the right of a video’s playback area. This helps keep videos within the browser window, for users like me whose windows aren’t big enough to include the video and the options and links all at the same time. But still, the most commonly used buttons–or at least the most helpful buttons for novice users who don’t know their way around yet–are in the upper left, with user account options in the upper right.

A good example of a site with navigation on the left of the page is Hoogerbrugge, a site full of experimental presentations and animations. Anticipating most users’ ability to scroll or, at the very least, hit the Page Down button if necessary, Hoogerbrugge has large menu buttons with accompanying illustrations, clearly stating that the most important part of the site is waiting just on the other side of these buttons.

There are many reasons to break this pattern of navigation, especially when the architecture of a site’s content interacts with the menu–good examples are LEOGEO and Semillero, both sites that feature the navigation as an experience in itself. Other sites, especially those that rely on advertising revenue, need users to stick around a while before heading for the menu, and have a reason to be sneaky with their button placement (but not too sneaky, or users might give up and never return). But aside from this and artistic experimentation–which isn’t necessary as often as many designers want to believe–users want their browsing experiences to be as fast and painless as possible, and managing the navigation of a site with the understanding of where the human eye is conditioned to look will make everything run a little smoother.

28 Things Everybody Should Know, Part XVIII

Drunk people are users too.

Products deemed potentially dangerous to the user or the surrounding environment, such as vehicles, weapons and chemicals, are tested under more strenuous conditions and held to higher engineering standards to ensure a level of personal and public safety. Cars are built with a large number of features meant only as a last resort to save lives during an accident, while household products which can’t have safety mechanisms built in (bleach, for example) can only be fitted with safety caps and warning messages on their labels; of course, once the bleach has left the bottle, the label can’t follow it to warn of the dangers of its use.

Some cars are equipped with breathalysers, usually issued after a driver has already been caught inebriated behind the wheel, that won’t allow ignition unless the driver’s blood alcohol content is below the legal limit. Unlike seat belts, airbags and engine mounts that release the engine rather than crush passengers under its weight, the breathalyser is a precaution meant to prevent a tragedy from happening in the first place, much like the safety switch on a pistol. These all seem like common sense today, but not so long ago they were mere suggestions to manufacturers.

Architecture is another field of design where safety is a primary concern–emergency elevators, backup stairways and fire escapes are all mandatory additions to large buildings and public spaces. But one place where safety is overlooked, sometimes to an obvious degree, is in the interior design that comes after the architects have finished their job.

Interaction design plays a major role in interiors, and in many cases, it seems, safety concerns are overlooked in the interest of artistic value. In this example, I have to again draw from my experiences at The Triple Door in Seattle. It’s not because I didn’t like it there, but because it seems the designers felt like product testing just doesn’t apply to interiors or architecture, which is unfortunate.

The upper level of the establishment is an upscale bar, complete with a giant fish tank, floor-mounted lighting and, as I mentioned in an earlier post, unmarked restrooms. There is a row of booths for private dining along one side of the bar, and surrounding these booths is a wall about chest high and perhaps five inches thick. The wall is topped with a smooth black finish, and happens to be the proper height on which to rest one’s drink while mingling, dancing, or searching for the restrooms.

In fact, the wall seems like it was meant to hold drinks. And why wouldn’t it? No sense letting that space go to waste. The only problem is that the smooth, slick finish is set at an angle (maybe 10 degrees) and does a really good job of holding a glass full of liquid just long enough to give the illusion that everything’s under control. After picking up the shattered remnants of too many pint glasses for it to qualify as random user error, I realized the top of the wall wasn’t flat, and tested my own glass on its surface. The less liquid in the glass, the longer it would stay: an empty pint glass generally stayed indefinitely, but a full pint fell off within a couple of seconds. A half-full glass was too erratic to draw any conclusions from, but more often than not, it would eventually fall in the time it takes most people to remove their coat.

And that’s assuming the people involved weren’t already hindered by the effects of alcohol. Of course, I was sober when I ran these tests (the glass I used was filled with root beer), but this being a bar, the designers should have taken into consideration the altered state of a drinker: not just your average tipsy patron, but the Friday night college student with no kids and no responsibilities. If there is a law prohibiting a bartender from serving outwardly drunk customers, establishments like this should put forth the effort to lessen the possibility of accidents and injuries, which are amplified when alcohol is introduced. Drunkenness may be considered a corner case from an engineering perspective, but that doesn’t mean it’s less common, just less anticipated in most situations.

Interior interaction seems to fall through the cracks between the architectural and decorative stages, almost as if all safety concerns are expected to have been solved by the architects, who are long gone before the next wave of designers steps in. But to dismiss the safety aspects of any facet of design is to invite more hazardous situations–especially when a user’s behaviors might be altered by a factor such as alcohol. I’d go so far as to say it would be more responsible for a team of designers to hire drunk product testers to examine new interiors and user experiences at various degrees of inebriation. I’m sure there are people who would volunteer for just such a position.

28 Things Everybody Should Know, Part XVII

Don’t fight the operating system.

While they continue to offer more than just a starting point for our applications, such as customizable applets and desktop widgets, operating systems like Windows and Mac OS have developed fairly steady, systematic guidelines by which most programs happily abide. These systems include color-coded, iconic navigation tools and affordance-specific hints that, when used appropriately, allow for easier usability and less confusion.

For example, programs in the Windows environment generally follow a consistent color scheme. In Windows XP, title bars are by default given a blue gradient (which I’ve replaced with solid blue), and inactive title bars are grayed out to show the user that the focus is on another application. As only one program may be in focus at any given time, this is the most obvious hint as to which application will respond to a user’s input.

Adobe Photoshop used to adhere to these standards. Here we see blue title bars showing that Photoshop is the current active application, and which of the three open documents is active within Photoshop. Also, the toolbar to the left shows where a user can click to move the toolbar, or double-click to hide it.

Here is Photoshop’s newest incarnation. Notice there are no blue bars to be found, and the difference between the active and inactive documents is much harder to make out at first glance. And switching to another application changes nothing in Photoshop’s title bar, which can lead to confusion for the user.

This new Adobe color scheme, found in most CS4 applications, seems to echo Windows Vista’s default settings, rounding the corners of document windows and trading the blue headers for a less saturated palette. It could be argued that more neutral surroundings allow images to be viewed with less distraction, but going so far as to eliminate even the option of restoring the familiar, ever-helpful blue bars that discern active from inactive elements only takes control away from the user.

Overriding the established scheme also takes an unnecessary toll on the processor. Moving documents around in Photoshop 7 is much smoother and faster than in CS4, and the new layout scheme–really just a Vista/Mac-inspired skin–doesn’t always do its job:

The top part of this image shows Windows XP’s default scheme, and in the middle is Photoshop CS4’s own layout. Quite often, especially after minimizing and restoring the application, Photoshop will forget to refresh its skin properly, allowing a bit of the original format to show through and resulting in a choppy, overlapping mess, as shown at the bottom of the image. Because the two don’t have identical button sizes or placement, the user might not know exactly where to click. Fixing this would likely take a couple minutes of coding, and will probably be addressed in an update soon, but if Adobe had stuck to the rules, they wouldn’t need workarounds for problems like this.

Another example of ignoring common operating system guidelines is when a program doesn’t place a corresponding button in the Windows taskbar–that horizontal strip along the bottom of the screen. I understand the desire to free up space on the taskbar, but some applications–such as Trillian, my chat program–will often get buried underneath others, or minimized when I want to see my desktop. Without a button on the taskbar, such an application is a lot harder to locate than if it had just stuck to the rules.

Operating systems don’t always make things easy, but one thing that human-computer interaction benefits greatly from is conformity. With certain exceptions–full-screen games, for one–developers and designers should work together to create experiences that work within these limitations, or at least give users the choice to set their own. For the most part, computer applications like Photoshop are tools we use to achieve a specific end; they aren’t expected to be an experience in themselves.

28 Things Everybody Should Know, Part XVI

Screen edges and corners can drastically improve functionality.

Whenever I seem to lose track of my cursor–something that happens fairly often when using Photoshop, despite how much work Adobe has put into making the cursor stand out from the image behind it–I know I can swipe the mouse into a corner of the screen, where it will stay (unless I’ve got a dual-screen setup), and I’ll have my bearings once again. The corners of the screen give a little solace to those who lose sight of their cursors now and then, and provide a welcome alternative to shaking the mouse back and forth. If a parked cursor is hard to locate, a cursor wildly dashing left and right isn’t much more helpful.

Many elements of human-computer interaction also involve the edges and corners of a display. OSX’s application dock, the Windows Start button, and program-specific toolbars are often located along the edges of the display and nestled in the corners, making them easier to locate, and supposedly, easier to use.

The great thing about these locations is they demand very little attention from a user’s eyes, minimizing the delay in workflow and giving the user less to think about. In a typical setup, moving a mouse more than a couple inches in any direction will bring the cursor to the limit of the screen, no matter where it starts from.

There are cases, however, when a button is placed near an edge or corner but doesn’t recognize a click unless it occurs a few pixels in from the edge of the screen. This still makes the button easy enough to find, but it misses out on a critical opportunity to truly speed up the user’s actions.

This is the lower left corner of my screen. The Start button won’t activate unless my cursor is at least four pixels up from the bottom of the screen, or two pixels right from the left. Because of this, I can’t simply sweep my mouse down and left, and expect the Start menu to open when I click. I have to move away from the corner, but not so much as to pass the entire button. This takes a lot more of my attention than placing the hit area in the very corner.
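The cost of that dead zone can be illustrated with a little geometry. This is a hedged sketch, not Windows code; the screen size and button bounds are made up, with the inset button mimicking the four-pixel and two-pixel offsets measured above:

```python
# Screen geometry for a hypothetical 1920x1080 display.
WIDTH, HEIGHT = 1920, 1080

def clamp_cursor(x, y):
    """The OS pins the cursor inside the screen rectangle, so throwing
    the mouse toward a corner always lands it exactly in that corner."""
    return min(max(x, 0), WIDTH - 1), min(max(y, 0), HEIGHT - 1)

def hits(button, x, y):
    """button = (left, top, right, bottom), inclusive pixel bounds."""
    left, top, right, bottom = button
    return left <= x <= right and top <= y <= bottom

# A Start button whose hit area reaches the very corner (made-up bounds):
flush_button = (0, HEIGHT - 30, 120, HEIGHT - 1)
# The same button with the dead zone described above: clicks only count
# four pixels up from the bottom and two pixels in from the left.
inset_button = (2, HEIGHT - 30, 120, HEIGHT - 5)

# Flick the mouse hard toward the lower left; clamping catches it.
x, y = clamp_cursor(-500, 5000)
print((x, y))                      # lands at (0, 1079)
print(hits(flush_button, x, y))    # True: a blind flick hits the button
print(hits(inset_button, x, y))    # False: the user must aim precisely
```

A corner target is effectively infinite in size, because the screen edge does the aiming for you; an inset target turns that free throw back into a precision task.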

Along the edge of the screen are my quick launch icons and buttons to recall all of my opened applications. As with the Start button, none of them are actually along the lower edge, but four pixels above it, taking considerably more effort to click on them.

Thankfully, Windows XP fixed this oversight, but being a fan of the original Start menu and organization, I always use Classic View, which doesn’t include that extra functionality. I have used systems on which Classic View does a better job of recognizing edge and corner clicks, but with all the different versions of Windows out there, and accounting for upgrades and service pack installations, I can’t recollect which versions behave in which way.

Many applications, such as Photoshop, override Windows’ blue title bar feature (something I’m not too happy about, but I’ll discuss that next) and place their toolbars and other interface components along the top edge of the screen. Again, these items aren’t actually placed against the very edge, but rather seven pixels lower.

This image shows all four corners of the screen using Adobe Lightroom with Windows XP’s standard Start menu. As with Photoshop’s toolbars, none of these are accessible from the very edge, including the program menu in the upper left and the resize buttons in the upper right. It should be noted that Windows XP’s standard resize buttons, generally applied to all programs, do react to the very edge and corners.

Here, the Start menu and program bar buttons all accept edge and corner clicks, but in the lower right corner, the icons in the system tray and the clock all require the mouse to move away from the edges to work.

Mac OSX takes the idea of screen corners to a fuller extent, launching applications, organizational tools and screen savers when the user stows the cursor in a corner for a second. Many laptop touchpads and PDA screens utilize corners for user-defined applications and options. These make launching common programs much faster and require less interruption to a user’s thought process, adding to the experience, while those just a few pixels off slowly chip away at it.

Until everyone has a touch-enabled screen on their desk, the edges and corners of the screen are the closest thing to tactile response a monitor can provide, as users can safely assume the limitations of the screen will catch the cursor and hold it there for them. Placing buttons along the edges and in corners, rather than just short of each, will make this understanding work in the users’ favor.