
Takeaways from "AI and Ethics" at Affectiva's Emotion AI Summit


Getty Images

Almost every time the words “artificial intelligence” pop up online or in the news, they feed people’s fear of these technologies. There’s simply no shortage of coverage on the invasive and sometimes dangerous aspects of AI.

The techies, entrepreneurs and healthcare professionals at Affectiva’s Emotion AI Summit on Tuesday knew this wariness of AI exists. But they also understood that AI opens up a treasure trove of innovation opportunities, from self-driving vehicles to voice recognition and robots of any and all kinds (my personal favorite). And these AI-driven developments show no sign of slowing down.

So, a hefty portion of the day-long conference was dedicated to discussing the ethical issues surrounding artificial intelligence.

“AI and Ethics” was the second session I attended after arriving at The State Room, a sweeping venue on the 33rd floor of a tower nestled near Faneuil Hall. Sessions earlier in the day covered the connection between AI and safety, productivity and healthcare.  

The ethics-focused portion of the afternoon was, in my opinion, the most interesting part of the summit—chock full of insights on how to make this evolving side of tech more accountable. This year marked the second time emotion measurement tech company Affectiva included an ethics-centric session in the event’s agenda.  

Affectiva tapped two presenters: Graham Page, its global managing director of media analytics, and Peter Robinson, a computer technology professor at the University of Cambridge. After the speakers’ remarks, a three-person panel explored AI’s ethical development.

Here are the biggest takeaways from the “AI and Ethics” session:  

Unethical AI doesn’t look like we expect.

“The ‘robots are becoming sentient’ narrative is a bit crazy,” said Robinson. “I’m not worried about humanoid robots. But I am worried about embedded systems—things like automated trading systems.” 

These programs hurt society far more than the robots people typically fear—Frankenstein-like metal monsters that resemble and replace humans.   

Innovators should focus on mending problematic AI-backed computer systems that have become normalized today. Think of the algorithms behind social media sites like Facebook and the automated trading systems that played a role in sparking the 2008 financial crisis. Many of these programs contribute to privacy invasion, targeted marketing and economic downfall more directly than human-like robots ever could. 

Ethical issues in AI often come from lack of cultural understanding. 

Harvard Kennedy School professor Kathy Pham said so much AI is coated in misunderstanding and bias because companies fail to consult social scientists about the communities they’re trying to reach.

“Go out into communities and honor the expertise of people who are not engineers,” Pham advised in the panel. “Actually talk to people who study humanity. Have these conversations.”  

Always closely consider the communities AI technology is intended for when creating it. Huge chunks of consumers—often women and people of color—are left out of businesses’ testing trials. These gaps in cultural understanding are what make many AI systems unfair and “unethical,” leaving a large swath of customers unhappy.


It’s nearly impossible to build ethical AI with subpar computer programmers.

Computer scientists and programmers aren’t always held to the same standards as other professionals, like doctors and lawyers. In fact, Robinson said there is a clear lack of professional regulation over computer tech workers, both from the government and from the individual companies employing them. As a result, the AI industry is flooded with sloppy and unfair products.

The tech industry’s rapid growth in the past three decades is partially responsible for this discrepancy. Proficient programmers who are up-to-date on quickly developing technology rise to the top of their field. But their success has come at the expense of a consistently fair and untarnished AI experience for users. 

Programmers’ competency should be paramount, but it often isn’t, explained Robinson. He believes it’s time to change that.


Brands with ethical intentions fare better.

Turns out, being honorable is actually good business sense.

“Consumers and employees are attracted to brands that have a strong ethical stance,” said Page. Brands with purpose, as Page calls them, advertise their own forward-thinking claims and back them up with action.

These companies end up being more marketable, digestible and morally justifiable in the AI sector. In the end, neatly-branded businesses distinguish themselves from their counterparts.  


