In the integration field, our technologies often require us to work within a defined set of parameters. There’s a maximum capacity for hardware-based storage solutions, for example, and a maximum bandwidth for data processing. I’ve often wondered if there’s a similar bandwidth cap and storage limit in the human brain. What is the maximum amount of information we are capable of carrying and recalling on a moment’s notice? And how quickly can we hunt for and recall that information?
We build, design, and deploy devices and systems that have RAM, or “random access memory,” that serve as the machine’s “working memory” and allow it to recall information for the tasks that are currently being worked on. That information is then put away when those tasks are complete so that RAM can be freed up for something else. But our brains don’t quite work that way. Take a ride on my train of thought for a moment.
If you’ve spoken with me, whether on a call, in person, or on social media, you’ve no doubt heard me discuss how our work with technology influences and shapes the experiences we have with our jobs. As the last 14 pandemic months have unfolded, I’ve paid especially close attention to people’s ability to adapt and learn the skills necessary to run good videoconferences and virtual presentations.
While many people have certainly upped their game in the virtual medium, it seems that plenty still struggle with even the most basic things—and I can’t help but wonder why. I don’t have all the answers, but I hope this article will provoke some conversations.
As I see it, it’s quite possible that people can’t get a handle on these rather simple things because they’ve reached their storage and/or bandwidth capacities. Or, maybe more likely, they are simply “too full” to care.
As I’m writing, I have received several notifications: text messages, calendar alerts, email notifications, phone calls, voicemails, and to-do reminders. (This list doesn’t even take into account additional popups one might receive on a very regular basis from social media, news sites, retail sales companies, weather alerts, and maybe even a Tinder match.) In an effort to keep things streamlined, I go out of my way to unsubscribe from push notifications for most apps and services I use, enabling alerts only for those that need my immediate attention.
What do my app notification preferences have to do with integration? Well, continuing with the example of bad videoconferencing habits, I’m theorizing that people’s seeming inability to grasp even basic concepts that would result in a more pleasant virtual experience (something as simple as using the mute function, let’s say) is a two-layered issue: we can’t be bothered to remember little things like that because we are inundated with many other more important things to remember or pay attention to, and those other things are constantly screaming for our attention and time. (Like my favorite cash settlement collection service, JG Wentworth, “They want it now!”)
Unlike a computer, our brain’s “RAM” is tuned to support tasks that are completed more frequently and ones that have been completed more recently, which seems reasonable. I see these tasks as akin to computer applications that are always running in the background and would likely have an icon in the brain’s system tray. Things like sending a meeting invite are second nature and don’t require conscious thought because we do them so frequently. I don’t need to remember to put a subject in or to include a location, date, time, etc.
Prior to the videoconferencing onslaught brought on by the COVID-19 pandemic, I fully accepted rocky video chats because many people lacked the experience to pull off a seamless call. After 14 months, though, we’ve had ample opportunities to learn how to do remote meetings well. Now, one of the biggest excuses seems to be, “Who cares? There is too much happening to be monkeying around with my mute button every time I want to be heard, or to remember to adjust my lighting, raise my laptop to eye level, etc.”
When we design solutions, we need to consider this. Yes, it is very easy and reasonable (to us) to ask the user to remember to [insert new behavior or action here] in order for things to work properly, but is that enough to compel them to actually do it? And if they don’t do it and the experience is hindered, is that their fault for not executing properly, or our fault for not anticipating the entirely predictable tendencies of human nature?
What is reasonable to request of people who don’t have infinite amounts of energy or brain power to dedicate to the seemingly mundane functions of their job?
I’ve had the opportunity to work all over the AV industry. In the manufacturing roles I’ve had, I identified one consistent truth: The less you rely on the user to push the experience forward, the better the experience will be. That is to say, the more you can remove the user from the technology equation, or the more you can make the technology adapt to how people are inclined to do things, the smoother things will go.
Generally speaking, the less user intervention is required, the less likely it is that problems will occur, which greatly improves the odds of a good user experience.
Now that I’ve had a little time to think and reflect on this, it makes perfect sense, but we can’t anticipate every single step, nor can we account for every possible scenario. Therefore, as we design technology solutions to improve the experience for end users, we must ask ourselves: In our world of required multitasking, with its never-ending stream of notifications, calls for action, and all-around distractions, what is the human capacity for storage and bandwidth? And how can we design solutions that rely less on that capacity?