Emerging Technologies and the Precautionary Principle

Scientific American has a great series in which they invite thought leaders from a variety of science-related fields to expound on an important topic. At the recent World Economic Forum, they asked philosopher and neuroscientist Nayef Al-Rodhan to talk about the ethical implications of emerging technologies. It is an incredible read, and it is not gated, so I recommend that every one of you read his thoughts on the subject.

Immediate ethical red flags emerge, however. Building neuromorphic chips would create machines as smart as humans, currently the most intelligent species on the planet. These technologies are demonstrations of human excellence, yet computers that think could be devastating for our species; as Marvin Minsky put it, they could even keep humanity as pets.

He makes a couple of important points. First, there is often intentional demagogic discourse that confuses the issues involved. It creates ambiguity and opens up the notoriously susceptible human brain to bias and polarization. But he doesn't dwell on this the way many of the talking heads on TV do.

My Take

More importantly, he notes that different stakeholders have different ways of thinking and different time frames that constrain their thinking. So it is not necessarily obfuscation. He didn’t go into a lot of detail, so let me add some here.

  • Politicians have a time frame aligned with their term of office. They generally are not technical experts; they communicate the most affectively and demagogically, and they focus largely on voters and donors, both of whom have political agendas of their own.
  • Companies have time frames measured in fiscal quarters and tend to use utilitarian thinking. They are also less sensitive to externalities. They communicate with customers, shareholders, regulators, and employees, and they need some buy-in from society at large.
  • Individuals have time frames measured in decades. They think largely based on emotion (as much as we like to think of ourselves as rational beings). They care mostly about their families and communities, and somewhat about larger groups such as their religions, cultures, and nations.
  • Religious leaders have time frames measured in millennia. We can see this with Pope Francis trying to drag the Vatican into the 21st century while asking for only the smallest of changes at a time. They speak in terms of faith and emotion.

Then we have to think about the time frames inherent in the risks of different emerging technologies. Driverless cars have some unknown risks, but they are short term: it would be tragic if hundreds of cars crashed due to some flaw in the design, but future generations would not be affected. On the other hand, climate change has some potential to destroy the species or even the world's ecology. And then there are risks we simply don't know. Could Stephen Hawking be right that artificial intelligence could destroy the world? Or could we just unplug it if we get that far? What about genomic tinkering, whether with our flora (GMO crops) or with our children (done with the best of intentions to avoid disease but potentially leading to eugenics)?

You might think that these technologies don't involve much human factors work, so why bring them up here on EID? But we have to deal with these issues much more than you might think, and we will even more in the future. Are you working on automation? 3D printers? Drones? Smart homes? IoT manufacturing or logistics? GPS? Augcog? All of these have implications and potential consequences that could spiral out of control if we don't think through how they might evolve. And anticipating the way that emerging technologies might meld with their environment is exactly the kind of use case evaluation that we are trained for.

Your Turn

Think about whatever domain you are working in. Can you imagine any way that it could get out of control? Perhaps not today, but ten years from now? Perhaps not in the way you expect it to evolve, but in another, unpredicted way? Perhaps not with the good intentions that you and your team apply, but with more malicious intent?

Could your designs lead to an increase in the digital divide? Could they reduce our already minimal levels of privacy? Do they support sustainability? Do they make us dependent on technology?

How careful should we be? Should we allow the free market to identify what works and what doesn’t? Should we use the extreme precautionary principle and test the heck out of a new design or technology before releasing it on the world?

Image Credit: Marc Seil

2 thoughts on “Emerging Technologies and the Precautionary Principle”

  1. To a certain degree, the Precautionary Principle is inapplicable to the advance of technology, as automation and the replacement of human activity by machines is clearly a matter of destiny for Western civilization. Had the Western fathers sat around and applied the Precautionary Principle to their world at the time, things might be different now, but who knows? Every individual ought to be thinking about what it means for humanity. However, if HFES specialists do not own the question and push others to think, then who do we think is going to do it, and what do we expect to come of it? I would not assume that people reading the blog have a functional understanding of the Precautionary Principle. It may be applied to any decision-making process, but it has famously been served up as an invitation to think more deliberately about climate policies. It is more than estimating risk. Something unknown and unknowable could “save” the world; that something might include Human Factors, or not.
