Copyright © 2020 Laura Major and Julie Shah. All rights reserved. The following excerpt is from What to Expect When You're Expecting Robots: The Future of Human-Robot Collaboration by Laura Major and Julie Shah, reprinted with permission from Basic Books.
We tend to think of robots in terms of the joy and entertainment they bring. We buy an Alexa not only to play our favorite music but also to add character to our homes. We delight in its preprogrammed jokes, quips, and animal sounds. People personify their Roombas and choose smart home devices that blend in with their decor. We give our devices names as if they were pets and personalize their voices. The overwhelming sense is that what we want from robots is for them to be relatable. We want them to be flexible pseudo-people.
Accordingly, today’s robots are usually designed with special attention to aesthetics and character. When robot news goes viral, it’s because the robots were made to look and act more like people. They mimic our facial expressions and act friendly. We want them to have personalities. Indeed, much attention has been devoted to developing robots that can engage their users and connect with them on an emotional level. The companies that develop these products likely believe that anthropomorphizing them will help build attachment to their brand. There is a whole field of technology design devoted to optimizing a user’s emotions, attitudes, and responses before, during, and after using a system, product, or service. It’s called user experience (UX), and for most businesses, the goal of UX design is to attract and retain customers.
But as robots enter our daily lives, we need more from them than entertainment. Increasingly, we don’t just want them to delight us; we want them to help us, and we need to be able to understand them. As robots weave in and out of traffic, handle our medications, and slip past our toes to deliver pizza, how much fun we have with them is beside the point. Developers of new technologies will have to confront the complexity of our everyday world and devise ways for their products to manage it. We will all inevitably make mistakes in these interactions, even when lives are on the line, and only by designing a true human-robot partnership can we catch these mistakes and compensate for them.
The stakes in the design of most consumer electronics are quite low. If your smartphone breaks, no one is likely to be hurt. Designers therefore strive to provide the best experience for the most common situations. Problems that occur only in rare circumstances are tolerated, and it is assumed that most of them can be resolved by restarting the device. If that fails, you just have to figure it out, perhaps with the help of a tech-savvy friend. It is simply not the goal of most consumer technologies to be resilient against every possible failure, and it is not worth the effort for businesses to try to prevent them all. A user, after all, is usually willing to overlook an occasional software glitch, as long as the overall experience is good and the device seems more useful than what the competition offers. It’s not the same with safety-critical systems: a blue screen of death on the freeway in a self-driving car could mean a catastrophic accident.
The goal of UX, then, is to elicit a positive emotional response from the user, and the best way to do that is to focus on the artistic aspects of the system. Give it a “personality”; make it elegant and playful. Emphasize the product’s branding. Consider locking users into the system by accumulating their data or making it difficult to switch to a competing product. And then, at some point, stop issuing software and security updates. Planned obsolescence forces the user back into the sales cycle. The ultimate goal in the design of most consumer electronics is to get people to buy more, which leads to short timescales between product generations. And every time you buy the latest version of a product, you have to start the learning process over again.
These design goals will not be enough for the new class of working robots that we will increasingly encounter in our daily lives. Take, for example, the first BMW iDrive. BMW was at the forefront of the movement to introduce high-tech infotainment systems into cars, and in 2002 the company launched the iDrive. Engineers tried to make it fun and stylish, but that wasn’t enough. Much like the introduction of new generations of aircraft automation, this first interactive infotainment system created unexpected safety concerns – so many of them, in fact, that early versions of the system were dubbed “iCrash.”
The first iDrive gave users the flexibility to customize the display to suit their preferences. There were about seven hundred variables that the user could adjust and reconfigure. Imagine how distracting it was to change the placement of features or the color of buttons on the screen while stopped at a red light. This created unnecessary complexity for users: there was too much to learn. The many features of the infotainment system and the many ways to customize it were overwhelming. As drivers became absorbed in manipulating the interface, their attention narrowed and things got dangerous. Drivers started to miss important cues from the road or from other cars. This is why user personalization is a bad idea for safety-critical systems. Instead, designers need to determine an optimal configuration for the controls up front, with safety in mind. In this case, the commonly used functions needed to be more easily accessible to the driver. A simple button for turning the air conditioning up or down or changing the radio station should not be buried under a complex tree of menu options.
The physical layout of the first iDrive system was also problematic. The design introduced a central control architecture with a digital display and a single controller, a trackball. But the screen and the controller were physically separated, with the screen facing forward in the center panel and the controller on the center console between the two front seats. Most other infotainment systems required the driver to press buttons next to the screen or on the screen itself. The physical separation between the screen and the input device created a mental barrier, as drivers had to operate the trackball in one location while watching the screen in another. Moreover, removing physical buttons eliminated the muscle memory most of us have developed in our own cars. We reach out and find the button to turn down the air conditioning fan without even taking our eyes off the road. This is not possible with a digital screen: the driver must look away from the road to adjust the air conditioning or the radio.
Finally, the first iDrive used a deep menu structure, which required the user to click through many menu options to access specific functions. The functions users wanted were buried deep within a series of options. A broad menu, separating functions into individual controls that could be accessed directly, such as buttons or dials, would have been preferable. The broad menu design is the choice for most aircraft cockpits, as it allows pilots to activate specific functions with the push of a single button. The pilot is physically surrounded by a full set of menu options and can quickly activate any of them at any time. Broad menus require more physical space for buttons and dials, and they may require the user to learn more about the system, depending on the number of options available. They may seem more complicated, but in fact they make it easier to select options quickly. The right solution for working robots, as we will see, often combines both approaches.