
New York City Proposes Regulating Algorithms Used in Hiring



The Civil Rights Act of 1964 prohibited the humans who made hiring decisions from discriminating on the basis of sex or race. Today, software often contributes to those decisions, helping managers screen resumes or interpret video interviews.

This worries some tech experts and civil rights groups, who cite evidence that algorithms can reproduce or amplify the biases people show. In 2018, Reuters reported that Amazon had scrapped a tool that filtered resumes based on past hiring patterns because it discriminated against women.

Legislation proposed in the New York City Council aims to update those decades-old discrimination rules for the age of algorithms. The bill would require companies to disclose to applicants when they have been assessed using software. Companies that sell such tools would have to perform annual audits to verify that their people-sorting technology does not discriminate.

The proposal is part of a recent movement at all levels of government to impose legal constraints on algorithms and software that shape life-changing decisions – one that could gain new momentum as Democrats take control of the White House and both houses of Congress.

More than a dozen U.S. cities have banned government use of face recognition, and New York State recently adopted a two-year moratorium on the technology's use in schools. Some federal lawmakers have proposed legislation to regulate face recognition algorithms and automated decision tools used by businesses, including for hiring. In December, 10 senators asked the Equal Employment Opportunity Commission to police bias in AI hiring tools, saying they feared the technology would worsen racial disparities in employment and harm the economic recovery from Covid-19 in marginalized communities. Also last year, a new law took effect in Illinois requiring consent before using video analysis on job applicants; a similar Maryland law restricts the use of facial analysis technology in hiring.

Lawmakers are more accustomed to talking about regulating new algorithms and AI tools than to implementing such rules. Months after San Francisco banned facial recognition in 2019, it had to amend the ordinance because it inadvertently made city-owned iPhones illegal.

The New York City proposal, introduced by Democratic Council member Laurie Cumbo, would require companies that use so-called automated employment decision tools to help screen candidates or set terms such as compensation to disclose their use of the technology. Vendors of such software would be required to perform an annual “bias audit” of their products and make the results available to customers.

The proposal faces resistance from some unusual allies, as well as unresolved questions about how it would work. Eric Ellman, senior vice president of public policy at the Consumer Data Industry Association, which represents credit and background check companies, says the bill could make hiring less fair by imposing new burdens on companies that perform background checks on behalf of employers. He argues that such checks can help managers overcome a reluctance to hire people from certain demographic groups.

Some civil rights groups and AI experts also oppose the bill – for different reasons. Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, organized a letter from 12 groups, including the NAACP and the AI Now Institute at New York University, opposing the proposed law. Cahn wants hiring technology to be regulated, but he says New York’s proposal could allow software that perpetuates discrimination to be certified as having passed a fairness audit.

Cahn wants any law to define the covered technology more broadly, not let vendors decide how to audit their own technology, and allow individuals to sue to enforce the law. “We haven’t seen any meaningful form of enforcement against the discrimination that concerns us,” he said.

Others have concerns but still support New York’s proposal. “I hope the bill goes ahead,” says Julia Stoyanovich, director of the Center for Responsible AI at New York University. “I also hope it will be revised.”



