Experts weigh in on ethical AI development at MassChallenge panel


[Image: Experts addressed the issue at a Thursday MassChallenge panel event. Photo: Jirsak]

In the race to develop the newest, flashiest and most futuristic AI technologies, what ethical responsibilities do companies have to the public?

That was the topic of discussion Thursday at MassChallenge's "AI Ethics Showdown" panel.

It's one thing to say you're committed to ethical technology development for the greater good, said Dan Doggendorf, CEO and principal advisor at Pro4: Six Consulting, a cybersecurity firm. But without agreement on what ethical development means, that promise means little.

Limiting bias

For companies and startups working with AI for the first time, the major considerations should be limiting bias and maintaining transparency between the humans overseeing the systems and the machines doing the processing, said Bishanka Peskin, head of data science at MassMutual Life.

“There's a great risk of inadvertent bias introduction,” she said. That risk can be somewhat mitigated by a deep understanding of how different algorithms and models work, but that's often easier said than done.

Organizations need to be as attentive to the individual steps by which AI models process data as they are to the end result, Peskin said.

Companies should have teams specifically dedicated to understanding that, she said. Individual industries are susceptible to different risks, so one-size-fits-all regulation won't cut it. The onus is on the companies handling data and developing the technologies to keep bias out.

Bias can also make its way in through acquired data, and companies have a responsibility to ensure the data being acquired is accurate, Doggendorf said.

Startups, and even more established firms that are dabbling in AI without fully committing to it, may be most susceptible to risk, Doggendorf said. It's at that stage that cybersecurity holes open up, and companies don't yet have a mechanism in place to address attacks.

“That can be dangerous, very, very dangerous,” he said.

What founders need to know

“If you're a founder out there, if you want to be able to sell your product and close deals, you need to have all the security aspects locked in,” said Steven Dorval, founder of Dorval Advisory Group, a firm that helps fintech startups orchestrate deals.

Startups might not face the same initial scrutiny as larger firms in the development stage, but if they’re looking to be acquired, they’ll have to pass muster. From the outset, data security, data transparency, and overall ethical applications of AI should run the show, he said. 

Security measures are also essential to protect intellectual property and reputation, Doggendorf said. 

The public, too, has a role in driving the ethical development of AI technologies. 

What the public is comfortable with as customers will dictate the direction good-faith companies take their technology, Dorval said. The public communicates those preferences through the applications it uses and the technologies it interacts with, pays for and consents to share data with.

At the macro level, those individual choices are the biggest influence on where AI goes, and the public's best way to shape it, he said.

