
Amazon, Meta, ACLU gather in Boston to discuss future of AI


Rebecca Finlay, CEO of the Partnership on AI. Courtesy of Partnership on AI

Representatives from tech giants like Amazon, Meta and Apple will gather in Boston next week for a meeting to discuss the future of artificial intelligence.

These big industry names will be joined by academic, civil society and media organizations at Partnership on AI’s annual meeting. The nonprofit aims to stay ahead of developments in AI to ensure the technology creates positive outcomes for people and society. The American Civil Liberties Union of Massachusetts said it will host the Partnership on AI on Oct. 25-27.

The Partnership on AI was created in 2016 by Microsoft, Apple, Amazon, Meta, Google and IBM alongside organizations like the ACLU and MacArthur Foundation, said the nonprofit’s CEO Rebecca Finlay. 

“At that time, this new technology that was emerging, which we now call artificial intelligence, had this profound capacity for innovation and impact on different sectors of the economy,” Finlay said. “But in order to be deployed responsibly … a nonprofit organization like the Partnership on AI needed to be stood up so that this work could be done collectively and in the public interest.”

AI's impact on society

As AI develops new capabilities and becomes more prevalent, there are growing concerns about its impact on society. As with most technologies, there are questions about data privacy. The use of AI combined with surveillance technology was a hot topic a few years ago when Boston became the second-largest city to ban facial recognition technology.

And experts continue to find cause for concern regarding biased or incomplete data that leads to inequitable outcomes when AI gets involved. Apple was under fire a few years ago when users noticed its credit card — which relied on AI — was offering smaller lines of credit to women than to men.

These are some of the reasons that Northeastern University launched an AI research center this spring to teach a more human-centered approach to its students. The Mass. ACLU also recently created Freedom Unfinished, a new podcast series to discuss the impact of emerging technologies, such as AI and biometric surveillance, on civil liberties.

Finlay said the Partnership on AI’s founding members realized that no single organization could solve all of these challenges. They also knew that they needed to bring civil rights leaders and technology experts to the same table. 

By bringing together different stakeholders, Finlay said the nonprofit looks to identify issues, create new resources and turn these ideas into real-world applications through things like pilot projects. 

The ethical use of AI

Finlay said all corporate partners need to agree to the nonprofit’s core tenets regarding the ethical use of AI. All partners also must participate in working groups on different topics to learn, share their work and find areas for collaboration.

“Our partners don’t just fund us. They have to be actively engaged,” Finlay said. 

This year, Finlay said the nonprofit will spend one day of its annual meeting focused on bringing underrepresented groups into conversations about AI.

“How do we engage communities respectfully and responsibly in the development of R&D, therefore, to ensure that it is more just and fair and equitable?” Finlay said.

The gathering will also include a board of directors meeting to discuss priorities for 2023. Finlay said one of the issues they are looking at is the deployment of AI in the workplace to support employees.

Identifying deepfakes

The group is also focusing on deepfakes: images or videos of fake events that are created using AI. Some of the nonprofit's partner media organizations, like the BBC and The New York Times, are working alongside Adobe and Microsoft to create standards for verifying photos and videos.

"On top of that technical structure, is there some sort of governance or code of conduct structure we can put into place with some of our partners to really create a set of shared norms internationally around the use of some of this media," Finlay said.

Another area of focus is the “safety of some of these large-scale models,” Finlay said. The Partnership on AI has a new initiative called safety critical artificial intelligence that is in the process of developing resources and best practices to avoid accidents, misuses and other negative outcomes from AI algorithms.

“The field is moving so quickly and the issues are so complicated that having a community that’s coming together that in the first instance is really learning together and really trying to understand better what is happening … and then also out of that sharing to say here’s something that we could do together, it’s a really powerful message,” Finlay said.

