Design in the Age of Divided Attention

Topic: Methodology

Interaction Design (IxD) has quietly reached a point of cultural saturation: technology today is practically all-encompassing. When we’re not surrounded by it, we tend to bring some with us, clinging to our devices even in the heart of the wilderness. For some designers, this equates to job security, but it’s increasingly clear that ubiquity has come at a high cost. Technology is supposed to extend our reach and enable us to do the impossible, but there seems to be more disruption than augmentation happening lately.

“If I’m not watching TV, I’m on my phone. If I’m not on my phone, I’m on my computer. If I’m not doing any of those things, what am I supposed to do?”

These are the words of a 14-year-old American girl, interviewed recently by The Huffington Post. It feels like she’s speaking directly to me, as an interaction designer and a parent. As a discipline, we need to examine how impartial we can – or should – remain in the creation of new products and services. That means taking some responsibility for the cultural and behavioral by-products of our craft.

Those by-products are affecting human behavior, whether we like it or not. Look at people waiting at crosswalks, sitting at dinner, or idling in the lane next to you at a stoplight: head down, device in hand, attention focused on the things we’ve designed. The temptation to check in constantly has grown to the point where some people are taking evasive action. Social games like phone stack promote better behavior when out with friends. Filmmakers are exploring the same anxiety in movies like the 2012 drama Disconnect. Among highly-connected professionals, there’s a lot of talk lately about the benefits of unplugging. Maybe – just maybe – these signs point to us as designers having done our jobs too well.

“Never allow technical thinking to cloud judgement about what makes for a good life.”
-Aristotle, circa 350 BC, as paraphrased by David Tabachnick in The Great Reversal

Techne, in Aristotle’s Greek, meant craft: making with intention, or the practical application of an art. The word we’ve derived from that root, technology, was uncommon prior to the 20th century, because there wasn’t much in the world for it to describe. Every technological advance, from the printing press to the television, through and including computers, has been decried in its time. What’s new today is the ubiquity of the devices we spend our days with; half of all Americans check their phones 150 times a day or more. As interaction designers continue to create products and services that are used by millions of people all the time, we need to acknowledge the implications and take a different approach. It’s time to take some responsibility, and to return focus to what makes for a good life. The work we’re doing at Ziba, with clients like Intel and HP, points to a future where technology returns to a supporting role.

Designing for the Periphery
Today’s marketplace is filled with products that demand undivided attention; designs that start out delightful can lead to compulsion or worse. As we move ahead and develop more digital ‘mouths to feed,’ it’s time to shift away from front and center, and start thinking about design that doesn’t need so much attention. We can choose to design for the periphery, and invent digital solutions that don’t demand 100% engagement. Attention is a precious resource. Let’s acknowledge that if we’re creating things that people interact with hundreds of times a day, we can’t afford to keep racing blindly toward the next release.

Smarter devices will help. Devices that not only know us, but that can communicate effectively among themselves, enabled by active listening and sensing, are the future. Ever-cheaper, smaller sensors and controllers embedded in wearable devices will bring technology even closer to our bodies, potentially putting our entire lives on record. New inputs – speech recognition and voice commands, gestural controls – will require new design approaches. Applications are already sophisticated enough to take into account who you are, where you are, and what time it is. Soon, devices will listen to what’s going on around you, and know what’s just happened… maybe even what’s about to happen. Are you in the middle of a presentation? The technology to recognize your voice and hold an incoming text message exists today; it could spare you the distraction of losing your train of thought mid-sentence in front of several hundred people. But your phone wasn’t designed with peripheral context in mind.
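To make that concrete, here is a minimal sketch of what peripheral-context-aware notification handling might look like. Everything in it – the Context fields, the “presentation” signal, the deliver-by-default rule – is hypothetical, invented for illustration rather than drawn from any shipping platform:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    """A snapshot of what a device can sense about the moment."""
    user_is_speaking: bool         # e.g. inferred from on-device voice recognition
    calendar_event: Optional[str]  # e.g. "presentation", "dinner", or None
    location: str                  # e.g. "conference_room", "home"

@dataclass
class Notification:
    sender: str
    body: str
    urgent: bool

def should_deliver_now(ctx: Context, note: Notification) -> bool:
    """Decide whether to interrupt now or quietly hold the message for later.

    The default is to deliver; we defer only when context strongly suggests
    the user's attention is committed elsewhere. Nothing is ever dropped.
    """
    if note.urgent:
        return True  # genuine emergencies still get through
    mid_presentation = ctx.user_is_speaking and ctx.calendar_event == "presentation"
    return not mid_presentation

# The mid-presentation text from the example above would be held, not pushed:
ctx = Context(user_is_speaking=True, calendar_event="presentation",
              location="conference_room")
note = Notification(sender="Sam", body="Lunch later?", urgent=False)
assert should_deliver_now(ctx, note) is False
```

The design choice worth noticing is the direction of the default: deliver unless context argues otherwise, and defer rather than discard, so the device stays useful without demanding attention.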

Learning to Nudge
Fully acknowledging context would take software with the kind of sensitivity we expect of mature people. There’s still a long way to go in terms of the capacities of artificial intelligence, but some highly empathic interaction designers are already creating veneers of personality to make up for devices’ lack of nuanced awareness. Nintendo’s Wii, for example, prompts users after three games: why not take a break? This points to the importance of understanding user bias, which boils down to the fact that people are not very good at knowing what’s good for them. Interaction design needs to get away from warnings, alarms and trying to push changes in behavior. We need to learn to nudge: provide well-timed encouragement for people to make good choices in real time. Making headway like this will require a deep understanding of people’s motivations, as well as lessons from the behavioral and data sciences.
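The Wii’s prompt is a useful pattern to sketch. The threshold and wording below are illustrative guesses, not Nintendo’s actual logic; the point is that a nudge arrives at a natural boundary, suggests rather than blocks, and costs nothing to ignore:

```python
from typing import Optional

BREAK_PROMPT_AFTER = 3  # consecutive games before we nudge (illustrative threshold)

class SessionTracker:
    """Counts consecutive play sessions and offers, but never forces, a break."""

    def __init__(self) -> None:
        self.consecutive_games = 0

    def game_finished(self) -> Optional[str]:
        """Called at a natural boundary – the end of a game – never mid-play."""
        self.consecutive_games += 1
        if self.consecutive_games >= BREAK_PROMPT_AFTER:
            self.consecutive_games = 0
            # A nudge: well-timed, gently worded, trivially dismissable.
            return "Why not take a break?"
        return None  # no interruption; the player stays in flow
```

Contrast this with an alarm, which fires on its own schedule and demands acknowledgment; the nudge waits for the moment when acting on it is easiest.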

Leveraging that deep understanding to get contextually aware interactions right is not easy. Even the most experienced designers, with time to work through a tremendous range of variables, struggle to keep devices working for us rather than against us. Design this subtle and sophisticated requires commitment from our business partners, too. We need clients who see the value in investing in these small but crucial moments. With expectations on the rise, companies have to invest in making products and services that allow consumers to stay in the moment, focused on what really matters. It’s worth it: this kind of investment ultimately builds loyalty and drives long-term revenue.

A Call to Action
Interaction design has the potential to improve with use: our things should get better the more we use them. Rather than taking over our time, lives, and consciousness, as seems to have happened to the teenager quoted earlier, we can design experiences to keep people’s heads up. If we accept interaction as our medium and behavior as our outcome, we’ve got to create technology that’s adaptive and long-lasting. Soon, people won’t remember life without the internet. They’ll bring new expectations about how technology can actively support their evolving wants, needs, and aspirations. For now, though, we have people’s attention. We have the means. It’s time to bring the intent, take ownership of what we do, and embrace the level of impact we can have. To paraphrase Apple’s 2013 manifesto, “We simplify, we perfect, we start over, until everything we touch enhances each life it touches.” These words imply a dedication to craft worth spending time and resources to achieve. Let’s get started designing the world we want to live in, one interaction at a time.

