One of the most compelling arguments for introducing autonomous vehicles is the potential reduction in injuries and fatalities caused primarily by human error. According to the National Highway Traffic Safety Administration, human error is a factor in 94% of fatal crashes, so advanced safety technologies can be expected to reduce these numbers substantially.
However, at the end of April this year, a Tesla Model S, starting from a parked position, collided with a trailer parked in front of it after the "Summon" autopark feature was activated. With Summon, the driver can exit the car and make it roll slowly forward or backward into a tight parking space using the key or an app. The collision caused only about $700 in damage to the luxury car's windshield, but it might be a symptom of a greater problem.
Gartner, the IT research company, published its 2016 list of predictions for marketing technology a few weeks ago. As I was reading it, I was struck by how easily the list could be aligned with the user experience of the typical purchase process. Not the complete buyer's journey, but at least the transaction steps in the middle.
In less than three years, advances in marketing technology will move beyond human intervention, streamlining and scaling activities that currently require manual interaction with audiences. Intelligent technologies will do more than automate repetitive operations: they will investigate, evaluate, and make decisions on behalf of both marketers and customers. In short, marketing technology will soon perform tasks that have always required direct human involvement.
Everyone's excited and/or scared about artificial intelligence, but should we be excited and/or scared about Intelligence Amplification instead?
I have been interested in this dichotomy for a long time, especially in health care. Socio-culturally, there are many reasons why we don't accept fully autonomous systems, even when they are safer, faster, and more effective. The DTNS hosts use the example of elevators, where human operators were retained for years before we were willing to accept automation, even though they weren't really doing anything except pushing buttons. We see the same pattern now with cars and drones.
I can’t decide if this is a triumph for analytics and algorithms or if it is one of those gaps that is ripe for human attention.
Arjun Chandrasekaran from Virginia Tech and pals say they’ve trained a machine-learning algorithm to recognize humorous scenes and even to create them. They say their machine can accurately predict when a scene is funny and when it is not, even though it knows nothing of the social context of what it is seeing.
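This is not the Virginia Tech team's actual system, but the core idea of predicting "funny" vs. "not funny" from scene features can be sketched as a simple binary classifier. Everything here is a hypothetical illustration: the feature names, the toy training data, and the plain logistic-regression learner stand in for whatever representation and model the researchers actually used.

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=200, lr=0.5):
    """Fit logistic regression by stochastic gradient descent."""
    n = len(samples[0])
    w = [0.0] * n   # one weight per scene feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def is_funny(w, b, x):
    """Classify a feature vector as funny (True) or not (False)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

# Hypothetical binary features per scene:
# [object_out_of_place, animal_doing_human_task, ordinary_scene]
scenes = [
    [1, 0, 0], [0, 1, 0], [1, 1, 0],  # incongruous scenes
    [0, 0, 1], [0, 0, 1], [0, 0, 0],  # ordinary scenes
]
funny = [1, 1, 1, 0, 0, 0]

w, b = train(scenes, funny)
print(is_funny(w, b, [1, 0, 0]))  # incongruous scene
print(is_funny(w, b, [0, 0, 1]))  # ordinary scene
```

The interesting part of the research is precisely what this toy leaves out: the model learns the features itself from images, without being told what incongruity or social context is.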
I recently had to retire my tried and trusty alarm clock radio. I never thought it was anything special. I didn't keep it around because it had superior functionality, a great experience, or great audio quality... it was just the clock I had.
But in making a change to a new clock, I realized how good I had it before. It seems that attention to simple human factors principles was not a priority.
Mindfulness. It seems to be the holy grail for everything these days, from productivity improvement to psychotherapy. It also seems to be something we are not particularly good at, as this episode of South Park hilariously illustrates (skip forward to minute 10). A 2010 study by Matthew Killingsworth and Daniel Gilbert found that we are mentally absent for half of our waking hours.
In Howard Rheingold's 1993 book The Virtual Community, one of the earliest works to chronicle the reality of life online, he laid out two rules for the coming age: "Rule Number One is to pay attention. Rule Number Two might be: attention is a limited resource, so pay attention to where you pay attention."
We see so many cases of customers tripped up by befuddling, legalistic user agreements. Perhaps one customer reveals more personal information than she realized to an advertising aggregator. Perhaps another cedes the intellectual property rights for something he created while using a development environment. Perhaps a third agrees to transaction fees and automatic services she never intended to authorize.
In a later call from emergency services made to Bernstein directly, the driver denied all knowledge of any accident. The driver told the dispatcher that “everything was fine,” before the dispatcher said, “Ok but your car called in saying you’d been involved in an accident. It doesn’t do that for no reason. Did you leave the scene of an accident?”
If you haven’t seen it, Don Norman co-wrote an article in Fast Company decrying the collapse of Apple’s commitment to usable design. Then Anthony Franco (from UX Magazine) pilloried him in a Pulse piece on LinkedIn.
No more. Now, although the products are indeed even more beautiful than before, that beauty has come at a great price. Gone are the fundamental principles of good design: discoverability, feedback, recovery, and so on. Instead, Apple has, in striving for beauty, created fonts that are so small or thin, coupled with low contrast, that they are difficult or impossible for many people with normal vision to read. We have obscure gestures that are beyond even the developer’s ability to remember. We have great features that most people don’t realize exist.
Flow is one of those concepts that we all like to talk about but that is much harder to achieve in practice. At its essence, flow is that feeling you get when you are "in the zone": you are fully immersed in an activity and totally focused, and high levels of challenge are balanced with high levels of skill, so you get a great feeling of accomplishment. It increases your ability to persist through difficulty.
Sanjay Batra brought up a fantastic example of inclusive design during the Accessibility Panel at the HFES Annual Meeting this year. For someone who is visually impaired, it is very hard to judge whether meat is sufficiently cooked. Because of the health risks of undercooked meat, this is both a perception challenge and a high-anxiety context.