This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
The US Congress is heading back into session, and it's hitting the ground running on AI. We're going to be hearing a lot about various plans and positions on AI regulation in the coming weeks, kicking off with Senate Majority Leader Chuck Schumer's first AI Insight Forum on Wednesday. This forum, and planned future ones, will bring together some of the top people in AI to discuss the risks and opportunities posed by advances in this technology and how Congress might write legislation to address them.
This newsletter will break down what exactly these forums are and aren't, and what might come out of them. The forums will be closed to the public and press, so I chatted with people at one company—Hugging Face—that did get an invite about what they're expecting and what their priorities are heading into the discussions.
What are the forums?
Schumer first announced the forums at the end of June as part of his AI legislation initiative, called SAFE Innovation. In floor remarks on Tuesday, Schumer said he's planning for "an open discussion about how Congress can act on AI: where to start, what questions to ask, and how to build a foundation for SAFE AI innovation."
The SAFE framework, as a reminder, isn't a legislative proposal but rather a set of priorities that Schumer laid out for AI regulation. Those priorities include promoting innovation, supporting the American tech industry, understanding the labor ramifications of AI, and mitigating security risks. Wednesday's meeting is the first of nine planned sessions. Subsequent meetings will cover topics such as "IP issues, workforce issues, privacy, security, alignment, and many more," Schumer said in his remarks.
Who is, and who isn't, invited?
The invite list for the first forum made a splash when it was made public two weeks ago. The list, first reported by Axios, numbers 22 people and includes numerous tech company executives who do plan on attending, such as OpenAI CEO Sam Altman, former Microsoft CEO Bill Gates, Alphabet CEO Sundar Pichai, Nvidia CEO Jensen Huang, Palantir CEO Alex Karp, X CEO Elon Musk, and Meta CEO Mark Zuckerberg.
While a few civil society and AI ethics researchers were included—notably, AFL-CIO president Liz Shuler and AI accountability researcher Deb Raji—observers and prominent tech policy voices were quick to criticize the list, in part for its tilt toward executives poised to profit from AI.
The inclusion of so many tech leaders could be read as a political signal to reassure the industry. Tech companies, for the moment, are positioned to have a great deal of power and influence over AI policy.
What can we expect out of them?
We don't really know what the outcomes of these forums will be, and considering that they're closed door, we may never really have full insight into the specifics of the conversations or their implications for Congress. They are expected to be listening sessions, where AI leaders will help educate legislators on AI and on questions about its regulation. In his remarks on Tuesday, Schumer said that "of course, the real legislative work will come in committees, but the AI forums will give us the nutrient agar, the facts and the challenges that we need to understand in order to reach this goal."
The forums are considered classified, but if we do get some information about what was discussed, I'll be listening for some potential themes for US AI regulation that I highlighted back in July: fostering the American tech industry, aligning AI with "democratic values," and dealing with (or ignoring) existing questions about Section 230 and online speech.
How are invitees preparing?
I exchanged some emails with Irene Solaiman, the policy director of Hugging Face, a company that builds AI development tools based on an open-source framework. Hugging Face's CEO, Clém Delangue, is one of the 22 people heading to the forum on Wednesday. Solaiman said the company is preparing as best it can given what she called "a firehose" of changing circumstances.
"We're reviewing recent regulatory proposals to get a sense of Hill priorities," said Solaiman, adding that they're working with folks from their machine-learning and R&D teams to prepare.
As for Hugging Face's political priorities, the company wants to encourage "more research infrastructure such as the great work being done at NIST [the National Institute of Standards and Technology] and funding the NAIRR [the National AI Research Resource]" and "to ensure the open-source community work is protected and recognized for its contribution to safer AI systems."
Of course, other companies will also have their own strategies and agendas to push to Congress, and we will have to wait and see how it all shakes out. My colleague Melissa Heikkilä will also be covering this next week, so sign up for her newsletter, The Algorithm, to follow along.
What else I'm reading
- Here is an excellent story from Rest of World about women falling in love with an AI-voiced chatbot and the grief they felt when he "died." It reminds me of this podcast episode I worked on about friendships between humans and chatbots, and I promise, it isn't as weird as you think.
- CNN published new information from a forthcoming blockbuster biography of Elon Musk by Walter Isaacson, alleging that the tech celebrity restricted satellite internet connectivity in Ukraine during an attack on Russian ships. The incident is an illustration of the unprecedented role Musk's Starlink plays in the war. It's also an extremely controversial allegation and will be dissected in the coming days, weeks, and beyond.
- For a bit of humor, read this New Yorker satire piece about Worldcoin. We've also written a lot about the biometric crypto company here at TR.
What I learned this week
Google is in hot water for its ad policies again. A report published by the Global Project Against Hate and Extremism (GPAHE) found that Google was profiting from ads purchased by extremist groups based around the world, including far-right, racist, and anti-immigrant organizations from Germany, Bulgaria, Italy, France, and the Netherlands. (I recently wrote about how Google Ads have promoted and profited from AI-generated content farms.)
According to the report, "Google platformed 177 ads by far-right groups from 2019 to 2023 that were seen between a collective 55 and 63 million times before the company identified them as violative and took them down." GPAHE reported that Google earned €62,000 to €85,000 for the ads, a sum that may be insignificant for the company but still indicates a harmful incentive model. GPAHE also notes that its findings are not comprehensive.