It’s not that large-scale models could never achieve commonsense understanding. That remains an open question. But other avenues of research deserve greater investment. Some experts have bet on neurosymbolic AI, which combines deep learning with symbolic knowledge systems. Others are experimenting with more probabilistic techniques that use far less data, inspired by a human child’s ability to learn from very few examples.
In 2021, I hope the field realigns its incentives to prioritize understanding over prediction. Not only could this lead to more technically robust systems, but the improvements would also have major social implications. The ease with which current deep learning systems can be duped, for example, compromises the safety of self-driving cars and opens up dangerous possibilities for autonomous weapons. The inability of these systems to distinguish correlation from causation is also at the root of algorithmic discrimination.
Empower marginalized researchers
If algorithms codify the values and perspectives of their creators, a broad cross section of humanity should be at the table as they are developed. I have seen no better proof of this than in December 2019, when I attended NeurIPS. That year brought a record number of women and minority speakers and attendees, and I could feel it tangibly change the substance of the proceedings. There was more discussion than ever about AI’s influence on society.
At the time, I praised the community for its progress. But Google’s treatment of Gebru, one of the few prominent Black women in the industry, has shown how far there is still to go. Diversity in numbers is meaningless if those people are not empowered to bring their lived experience to their work. I am nonetheless optimistic that the tide is turning. The flash point triggered by Gebru’s firing became a moment of critical reflection for the industry. I hope this momentum continues and hardens into lasting systemic change.
Center the perspectives of affected communities
There is another group to bring to the table as well. One of the most exciting trends of the past year has been the emergence of participatory machine learning: a provocation to reinvent the process of AI development so that it includes the people who ultimately become subject to the algorithms.
In July, the first conference workshop dedicated to this approach gathered a wide range of ideas about what it might look like. These included new governance procedures for soliciting community feedback; new model-auditing methods for informing and engaging the public; and proposed redesigns of AI systems to give users more control over their settings.
My hope for 2021 is to see more of these ideas tested and seriously adopted. Facebook is already trying a version of this with its external oversight board. If the company follows through by allowing the board to make binding changes to the platform’s content moderation policies, the governance structure could become a feedback mechanism worth emulating.
Codify guardrails in regulations
So far, grassroots efforts have driven the movement to mitigate algorithmic harms and hold tech giants accountable. But it will be up to national and international regulators to put more permanent guardrails in place. The good news is that lawmakers around the world have been watching and are drafting legislation. In the United States, members of Congress have already introduced bills to address facial recognition, AI bias, and deepfakes. Several of them also sent a letter to Google in December, expressing their intent to keep pursuing such regulation.
So my final hope for 2021 is that some of these bills pass. It is time to codify what we have learned over the past few years and move away from the fiction of self-regulation.