Friday, May 27, 2022

Hitting the Books: What do we want our AI-powered future to look like?


Simon and Schuster

Excerpted from THE POWER OF ETHICS by Susan Liautaud. Copyright © 2021 by Susan Liautaud. Reprinted with permission from Simon & Schuster, Inc., NY.


Blurred boundaries – the increasingly fuzzy junction where machines cross into purely human realms – stretch the very definition of the edge. They decrease the visibility of the ethical issues at stake while multiplying the power of the other driving forces of ethics today. Two fundamental questions demonstrate why we must constantly recheck that our framing prioritizes humans and humanity in artificial intelligence.

First, as robots become more realistic, humans (and perhaps machines) will need to update regulations, societal norms, and norms of organizational and individual behavior. How do we avoid leaving the control of ethical risks solely in the hands of those who control the innovations, or letting machines decide for themselves? A nuanced, non-binary assessment of robots and AI, with attention to who programs them, does not mean tolerating a distortion of how we define what is human. Instead, we need to ensure that our ethical decision-making incorporates the nuances of this blurriness and that the decisions that follow put humanity first. And that means proactively representing the great diversity of humanity – ethnicity, gender, sexual orientation, geography and culture, socioeconomic status, and beyond.

Second, a recurring critical question in an algorithmic society is: who decides? For example, if we use AI to plan traffic routes for driverless cars, assuming we care about efficiency and safety as principles, then who decides when one principle takes priority over another, and how? Does the developer of the algorithm decide? The management of the company making the car? Regulators? The passengers? The algorithm that makes the decisions for the car? We’re not close to determining the extent of the decision-making power and responsibility we will or should give to robots and other types of AI – or the power and responsibility they might one day assume with or without our consent.

Human engagement is one of the main principles guiding the development of AI among many government, business, and nonprofit organizations. For example, the Organization for Economic Co-operation and Development’s artificial intelligence principles emphasize the human ability to challenge AI-based outcomes. The principles state that AI systems should “include appropriate safeguards – for example, allowing human intervention when necessary – to ensure a just and equitable society”. Likewise, Microsoft, Google, the OpenAI research lab, and many other organizations include human intervention capability in their sets of principles. However, it is still not clear when and how this works in practice. In particular, how do these innovation controllers prevent harm – whether it’s car crashes or gender and racial discrimination resulting from AI algorithms trained on unrepresentative data? In addition, some consumer technologies are being developed that eliminate human intervention entirely. For example, Eugenia Kuyda, the founder of the company behind a bot companion and confidant called Replika, believes consumers will trust the app with private matters precisely because there is no human intervention.

In my opinion, we desperately need an “off” switch for all AI and robotics. In some cases, we need to plant a stake in the ground when it comes to aberrant and clearly unacceptable robot and AI powers. For example, giving robots the ability to indiscriminately kill innocent civilians without human supervision, or deploying facial recognition to target minorities, is unacceptable. What we must not do is nullify the opportunities offered by artificial intelligence, such as locating a lost child or a terrorist, or dramatically increasing the accuracy of medical diagnoses. We can equip ourselves to enter the arena. We can influence the choices of others (including businesses and regulators, but also friends and fellow citizens) and make more (not just better) choices for ourselves, with greater awareness of when a choice is being taken from us. Businesses and regulators have a responsibility to help make our choices clearer, easier, and better informed: first think about who can (and should) decide, and how you can help others be able to decide.

Now let’s turn to the aspects of the framework that specifically target blurred boundaries:

Blurred boundaries fundamentally force us to step back and reconsider whether our principles define the identity we want in this fuzzy world. Do the most basic tenets – the classics of treating one another with respect or being accountable – hold up in a world where what we mean by “one another” is unclear? Do our principles focus enough on how innovation affects human life and the protection of humanity as a whole? And do we need a separate set of principles for robots? My answer to the last question is no. But we need to make sure that our principles prioritize humans over machines.

Then, application: do we apply our principles the same way in a world of blurred boundaries? Thinking about the consequences for humans will help. What happens when our human principles are applied to robots? If our principle is honesty, is it okay to lie to a bot receptionist? And do we distinguish between different types of robots and lies? If you lie about your medical history to a diagnostic algorithm, it seems clear that you are unlikely to receive an accurate diagnosis. Do we care whether the robots trust us? If the algorithm needs some form of codable trust in order to ensure that the off switch works, then yes. And while it can be easy to dismiss the emotional side of trust, given that robots don’t yet feel emotion, here again we should ask what the impact on us could be. Would untrustworthy behavior toward machines negatively affect our emotional state or spread mistrust among humans?

Blurred boundaries increase the challenge of obtaining and understanding information. It’s hard to imagine what we need to know – and that’s before we even know if we can know it. Artificial intelligence is often invisible to us; companies do not disclose how their algorithms work; and we don’t have the technological expertise to assess the information.

But some key points are clear. To speak of robots as if they were human is incorrect. For example, many functions of Sophia – a realistic humanoid robot – are invisible to the average person. But thanks to the transparency team at Hanson Robotics, I learned that Sophia tweets as @RealSophiaRobot with the help of the company’s marketing department, whose character writers compose some of the language and pull the rest directly from Sophia’s machine-learning content. And yet, the invisibility of many of Sophia’s functions is essential to the illusion that she seems “alive” to us.

In addition, we can demand transparency from companies on the points that really matter to us. Maybe we don’t need to know how the fast-food bot employee is coded, but we do need to know that it will accurately process our food-allergy information and confirm that the burger meets health and safety requirements.

Finally, when we look more closely, some blur is not as blurry as it first seems. Lilly, creator of a male romantic robotic companion called inMoovator, doesn’t think of her robot as human. The concept of a romance between a human and a machine is hazy, but she openly acknowledges that her fiancé is a machine.

Right now, the responsibility lies with the humans who create, program, sell, and deploy robots and other types of AI – whether that’s David Hanson, a doctor who uses AI to diagnose cancer, or a programmer who develops the AI that helps make immigration decisions. Responsibility also falls on all of us when we make the choices we can about how we engage with machines, and when we express our views to try to shape both regulation and society’s tolerance for the blur. (And it’s worth pointing out that stakeholder accountability doesn’t make robots more human, nor does it give them the same priority as a human when principles conflict.)

We also need to be careful to consider how robots might matter more to those who are vulnerable. So many people are in difficult situations where human assistance is not safe or available – whether because of cost, being in an isolated or conflict area, insufficient human resources, or other reasons. We can be more proactive in considering stakeholders. Support tech leaders who highlight the importance of diverse data and perspectives in building and regulating technology – not just in sorting through the damage afterward. Ensure that non-experts from a wide variety of backgrounds, political perspectives, and ages lend their perspective, thereby reducing the risk that blurring technologies contribute to inequality.

Blurred boundaries also compromise our ability to see potential consequences over time, leading to blurred visibility. We do not yet have enough research or information on potential mutations. For example, we don’t know the long-term psychological or economic impact of helper robots, or the impact on children growing up with AI in social media and digital devices. And just as we’ve seen social media platforms improve connections and give people a voice, we’ve also seen that they can be addictive, harm mental health, and be weaponized to spread compromised truth and even violence.

I urge the companies and innovators who create seemingly user-friendly AI to go a step further: incorporate tech brakes – off switches – more often. Consider where the benefits of their products and services might not be valuable enough to society to justify the additional risks they create. And we all have to push ourselves harder to use the control we have. We can insist on genuinely informed consent. If our doctor uses AI to diagnose, we need to be told, including the risks and benefits. (Easier said than done, because doctors can’t be expected to be AI experts.) We can limit what we say to robots and AI devices like Alexa, or even whether we use them at all. We can redouble our efforts to model good behavior for children around these technologies, humanoid or not. And we can urgently support political efforts to prioritize and improve regulation, education, and research.
