Sponsored content

How Richmond innovators are thinking about the promise and challenges of AI


AI Programming
Image Credit: AmericanInno
Cassidy Beegle

It's clear to anyone who works in tech that artificial intelligence is a powerful force that allows businesses to leverage data to improve outcomes. As it becomes more advanced and prevalent, even those in nontechnical roles ought to understand how AI can be used and the challenges it inevitably brings.

On Wednesday, Richmond Inno gathered regional AI leaders for a virtual discussion on the scope of AI's impact across industries. The State of Innovation in AI event highlighted how these leaders are thinking about the potential of AI, where it's best put to use, how to use it responsibly and how to cultivate the human talent that goes hand-in-hand with it.

“The fundamental promise, being on the consulting side of the world, is enabling business transformation and the business value.”

The event got underway with a showcase featuring Richmond startups GOGO Band, APEX and EnrichHER, each of which highlighted how it's using AI in innovative ways to enhance its offerings. Following the showcase, keynote speaker Paul Hurlocker illustrated how AI and machine learning are being leveraged at Capital One. Hurlocker, vice president of the bank's Center for Machine Learning, covered how the bank has used these technologies to innovate in areas like risk governance, talent acquisition, fraud detection, customer service chatbots and more.

“From Capital One’s beginning, their value proposition was that we could leverage data in a smarter way to extend credit to people. That is in the DNA of Capital One,” Hurlocker said.

The promise of AI

The panel kicked off with the fundamentals. Moderator Dan Myers, partner at 42Phi Ventures, asked the panel about the promise that AI holds for their organizations and industries.

For Atish Ray, managing director at Accenture with a focus on data, machine learning and artificial intelligence, the promise is in how it will make life easier or better for people. The applications will differ for an insurance company trying to automate claims processing versus a manufacturer looking to reorder parts, Ray said, but the focus is ultimately on people.

“The fundamental promise, being on the consulting side of the world, is enabling business transformation and the business value,” Ray said. “Make sure you're applying it in the context of how it can be used for the betterment of humans in general.”

Representing the insurance industry, Clark Farrey, senior director of innovation at Markel, said the potential for AI lies in how it can help manage information from the broad array of industries that a company like Markel underwrites.

"The promise to me I think is largely around dealing with the complexities of an incredibly diverse set of businesses. Insuring an oil tanker is very different than insuring your dog,” Farrey said. “How do you bring all of that together when most of it really isn't structured?”

How and where to deploy

Appreciating the potential of AI is one thing. Knowing when to use it is another. Ray explained that Accenture chooses to deploy AI to operations where it knows it will add value. The industry was once at a stage where applying AI was primarily experimental – seeing what it could do. Accenture is past that point, he said.

“But I think now we are at a very mature stage where we see most of our organizations are trying to associate a value that it will provide,” Ray said. “We're at a point where we're looking at what is it exactly that we're getting out of it? And at what scale?”

How do you know if you’ve succeeded? For many clients, that is measured in a dollar amount, Ray said. But he explained that success in AI can also extend to its legacy.

“Success is really what I see as transforming your business,” Ray said. “Have you been able to really bring innovation into the game which has made your business leapfrog into being seen as a differentiator and a leader?”

Choosing where to deploy can also depend simply on where it’s possible. Farrey of Markel pointed out that the insurance industry is entirely based on information, rather than physical products. That often means looking for where you have a decent source of data.

“There's a lot of places where the data is well-hidden or not well-exposed, and we have to be able to pull that out,” Farrey said. “We also look for things that are scalable. Where can we do this and then can apply it in some more spaces?”

Ethical AI and bias

Any conversation about AI is incomplete without talking about its pitfalls. That often means asking how it can be implemented safely, in ways that aren't limited by the narrow perceptions of the humans who program it.

For Ray, the key is explainability and responsibility in how AI is built and implemented. Being mindful of those things can help an organization manage reputational risk and comply with increasingly prevalent regulations in the tech space.

“We all know about how a lot of companies got on the wrong side, whether it was in hiring or a criminal investigation … Those will be the downside,” Ray said. “We always have to think about what impacts we are making when we run a business.”

Farrey, who came to the insurance industry from a financial background, said he was no stranger to algorithms making accurate choices that, from a human perspective, were ethically questionable. For example, he said, it's relatively easy for a financial institution to predict which customers aren't going to pay their bills. But the reason behind that is also important to take into account.

“A lot of the time, people with escalating medical expenses on their credit cards weren’t going to pay them back. Is it ethical to cut your losses?” Farrey said. “The prediction was right. It nailed it, you know. But is it the right answer?”

Critical mistakes that an AI tool makes ultimately stem from the humans who programmed it. Dr. Milos Manic, professor of computer science at VCU and director of the school's Cybersecurity Center, explained that mitigating those mistakes comes down to developers being aware of the data they are feeding into their tools.

“If you feed it data that has implicit bias in it, the algorithm is actually successful in catching it, because it's learning from data. The problem is, were you aware of what was in the data?” Manic said.

Cultivating talent

“In areas like generative adversarial networks, literally if you haven't read much about it in the last three months, you are so behind.”

The panelists seemed to agree that AI will only become more ubiquitous and that companies will continue to need those with the skills to, if not work on it, at least work with it. “The need is greater than ever,” according to Farrey, who said that every level of technical skill is important. Even those in nontechnical analyst jobs ought to be able to “write some SQL” to pull their own insights, he said. Key to filling the skills gap is internal training and university partnerships, Farrey said.

The skills demanded have evolved over time, Accenture’s Ray pointed out. Early on, putting AI to use was purely about data science, he said. But the need for more specialized AI skills specific to individual industries has grown. He compared “a pure data scientist versus a data scientist who understands, say, insurance data.”

"Having ML engineering skills, software engineering skills or even full stack AI architecture skills, to be able to build and deploy and operate these AI-enabled applications is becoming more and more important now,” Ray said.

Universities obviously play a key role in developing those critical skills. VCU's Manic talked about how the school's degree specializations and graduate certificates in computer science, data science, cybersecurity, software engineering and more are helping to fill the gap.

“This is fast-paced work,” Manic said. He described how some aspects of the field progress so fast that parts of his curriculum become outdated within six months. “We are constantly trying to pick up or lead where the world is going.” Keeping up with that rapid rate of innovation means schools need to focus on critical thinking skills, he said.

“In some areas like generative adversarial networks, literally if you haven't read much about it in the last three months, you are so behind,” Manic said.

Hurlocker, who founded and led ML consulting firm Notch in Richmond before selling it to Capital One, is optimistic about the talent being cultivated in the city across the academic and professional landscapes.

“I'm very excited about the startup ecosystem and some of the stories that are coming out of that,” he said. “We've got a lot of anchor companies that are attracting talent and great universities. So I'm very bullish on what's happening in the AI space.”


Learn more about Accenture’s AI: Built to Scale research and view its User First: Flexible data for better decisions report.
