This year has seen a lot, including some bold claims of AI breakthroughs. Industry commentators have speculated that the GPT-3 language generation model may have achieved "artificial general intelligence," while others have praised AlphaFold, the protein folding algorithm from Alphabet subsidiary DeepMind, and its ability to "transform biology." While the basis for these claims is thinner than the headlines suggest, that has done little to dampen enthusiasm across the industry, whose profits and prestige depend on the proliferation of AI.
It is in this context that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and a strong advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the work Google hired her to do, including demonstrating racial and gender disparities in facial analysis technologies and developing reporting guidelines for AI datasets and models. Ironically, that work and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of large-scale (and profitable) AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she had not resigned. (Google declined to comment for this story.)
Google's appalling treatment of Gebru exposes a double crisis in AI research. The field is dominated by an elite, mostly white male workforce, and it is controlled and funded mostly by big industry players – Microsoft, Facebook, Amazon, IBM, and, yes, Google. With Gebru's dismissal, the veneer of civility that cushioned the young effort to build necessary guardrails around AI was torn away, bringing questions about the racial homogeneity of the AI workforce and the ineffectiveness of corporate diversity programs to the center of the discourse. But it has also shown that – however sincere the promises of a company like Google may sound – corporate-funded research can never be separated from the realities of power and the flow of revenue and capital.
This should concern us all. With the proliferation of AI in areas such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, even as they are integrated into organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and use them, while obscuring responsibility (and accountability) behind the veneer of complex computation. The risks are profound and the incentives decidedly perverse.
The current crisis exposes the structural barriers that limit our ability to build effective protections around AI systems. This is all the more important because the populations exposed to the biases and harms of AI's predictions and determinations are mainly BIPOC people, women, religious and gender minorities, and the poor – those who have already borne the brunt of structural discrimination. Here we have a clear racialized divide between those who benefit – corporations and predominantly white researchers and developers – and those most at risk of harm.
Take, for example, facial recognition technologies that have been shown to "recognize" people with darker skin less accurately than those with lighter skin. This alone is alarming. But these racialized "mistakes" are not the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while the cities that have successfully banned or pushed back against the use of facial recognition are predominantly white.
Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and challenge the industry's overhyped claims is significantly hampered. Google's treatment of Gebru makes increasingly clear where the company's priorities seem to lie when critical work pushes back against its business incentives. This makes it nearly impossible to ensure that AI systems are accountable to the people most vulnerable to their harms.
Checks on the industry are further compromised by the close ties between tech companies and ostensibly independent academic institutions. Researchers from companies and universities publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at technology companies and universities. This blurs the line between academic and corporate research and obscures the incentives underlying such work. It also means the two groups look terribly alike – AI research in academia suffers from the same pernicious problems of racial and gender homogeneity as its corporate counterparts. Moreover, the top computer science departments accept large amounts of Big Tech research funding. We need only look to Big Tobacco and Big Oil for disturbing models of the influence over public understanding of complex scientific problems that large corporations can exert when knowledge creation is left in their hands.