Worried about your company’s AI ethics? These startups are here to help.

Parity is among a growing number of startups promising organizations ways to develop, monitor, and fix their AI models. They offer a range of products and services, from bias-mitigation tools to explainability platforms. Initially, most of their clients came from heavily regulated industries like finance and health care. But growing research and media attention on issues of bias, privacy, and transparency have shifted the focus of the conversation. New clients are often simply worried about being responsible, while others want to get ahead of anticipated regulation.

“A lot of companies are really facing this for the first time,” says Chowdhury. “Almost everyone is asking for help.”

From risk to impact

When working with new clients, Chowdhury avoids the term “responsibility.” The word is too squishy and ill-defined; it leaves too much room for miscommunication. Instead, she starts with more familiar corporate jargon: the idea of risk. Many companies have risk and compliance arms and established processes for risk mitigation.

AI risk mitigation is no different. A company should start by considering the different things it is concerned about. These can include legal risk, the possibility of breaking the law; organizational risk, the possibility of losing employees; or reputational risk, the possibility of suffering a public-relations disaster. From there, it can work backwards to decide how to audit its AI systems. A finance company, operating under fair-lending laws in the United States, would want to check its lending models for bias to mitigate legal risk. A telehealth company, whose systems are trained on sensitive medical data, might perform privacy audits to mitigate reputational risk.

Parity includes a library of suggested questions to help companies assess the risk of their AI models.

PARITY

Parity helps organize this process. The platform first asks a company to draw up an internal impact assessment – in essence, a set of open-ended survey questions about how its business and its AI systems work. The company can choose to write custom questions or select them from Parity’s library, which contains more than 1,000 prompts adapted from AI ethics guidelines and relevant legislation from around the world. Once the assessment is in place, employees across the company are encouraged to fill it out according to their job function and knowledge. The platform then runs their free-text responses through a natural-language-processing model and translates them with an eye toward the company’s key areas of risk. Parity, in other words, serves as an intermediary, getting data scientists and lawyers on the same page.
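
Parity has not published the internals of that language model, so the following is purely an illustrative sketch: one plausible way to map a free-text assessment answer onto a fixed set of risk areas is off-the-shelf zero-shot classification with the open-source Hugging Face transformers library. The risk labels and the sample answer below are invented for the example.

# Illustrative sketch only – not Parity’s actual implementation.
# Scores a free-text assessment answer against a set of
# hypothetical business-risk areas using a zero-shot classifier.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

answer = ("Our lending model is retrained monthly on applicant data, "
          "but we have never compared approval rates across demographic groups.")

risk_areas = ["legal risk", "reputational risk", "organizational risk", "privacy risk"]

result = classifier(answer, candidate_labels=risk_areas, multi_label=True)

# Print each risk area with the model's estimated relevance to the answer.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")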

The platform then recommends a corresponding set of risk-mitigation actions. These could include creating a dashboard to continuously monitor a model’s accuracy, or implementing new documentation procedures to track how a model was trained and refined at each stage of its development. It also offers a collection of open-source frameworks and tools that could help, like IBM’s AI Fairness 360 for bias monitoring or Google’s Model Cards for documentation.
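
AI Fairness 360 is open source, so a basic bias check of the kind described above can be scripted directly. Below is a minimal sketch, assuming a toy table of past lending decisions with a binary “gender” column as the protected attribute; the column names and data are invented for illustration.

# Minimal sketch of a bias check with IBM's AI Fairness 360.
# The toy data and column names are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "gender":        [1, 1, 0, 0, 1, 0, 1, 0],        # 1 = privileged group
    "income":        [60, 85, 40, 52, 70, 45, 90, 38],
    "loan_approved": [1, 1, 0, 1, 1, 0, 1, 0],         # 1 = favorable outcome
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["loan_approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (ideally close to 1.0).
# Statistical parity difference: gap in favorable-outcome rates (ideally ~0.0).
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

In practice, a check like this would run against a company’s real decision logs rather than a toy table, feeding the kind of continuously updated monitoring dashboard described above.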

Chowdhury hopes that if companies can reduce the time it takes to audit their models, they will become more disciplined about doing it regularly and often. Over time, she hopes, it might also open them up to thinking beyond risk mitigation. “My sneaky goal is actually to get more businesses thinking about impact and not just risk,” she says. “Risk is the language people understand today, and it’s a very valuable language, but risk is often reactive and responsive. Impact is more proactive, and that’s actually the better way to frame what we should be doing.”

An ecosystem of responsibility

While Parity focuses on risk management, another startup, Fiddler, focuses on explainability. CEO Krishna Gade began thinking about the need for more transparency in how AI models make decisions while serving as the engineering lead of Facebook’s News Feed team. After the 2016 presidential election, the company made a big internal push to better understand how its algorithms ranked content. Gade’s team developed an internal tool that later became the basis of the “Why am I seeing this?” feature.
