AI experts: Why building a business use case is key to integration


Women in Tech 2024 Future of AI
Krishna Gnanasekaran, chief of data and analytics and senior director at GE Appliances, a Haier company, speaks during a panel about the future of AI. To her left is Jennifer Leigh Kling, global partnership director with Microsoft for Teksystems. To her right is Fatemeh Khatibloo, director of responsible AI and technology at Salesforce.
Stephen P. Schmidt

The never-ending sandbox that is AI is great, but have a purpose for what you're doing and a business reason for doing it. More importantly, have a business use case for everything before you put a toe in that proverbial sand.

That was essentially the theme from a panel at the RockIT Women’s 2024 Women in Tech Conference on Wednesday afternoon.

On stage at the Mellwood Art Center in Louisville, three women technology leaders who have been using generative AI for varying lengths of time came together for a panel called "The Future of AI," though the conversation steered mainly toward present-day applications.

“You need to have a strategy that extends way beyond your IT department into your lines of business,” Jennifer Leigh Kling, global partnership director with Microsoft for Teksystems, told me after the panel talk. “What are the most important things that they’re looking at doing to increase revenue, reduce costs, mitigate risk and improve experience for whoever their end customers [are] … What are the use cases that can help them achieve those four things? ... You don’t want to tackle all of the use cases. You want to get wins, learn and then begin to wash, rinse, repeat.”

The panel was composed of Kling; Fatemeh Khatibloo, director of responsible AI and technology at Salesforce; and Krishna Gnanasekaran, chief of data and analytics and senior director at GE Appliances, a Haier company.


Note: It is important to make the distinction, as Gnanasekaran told the audience, that AI has been around in various forms for decades, dating back to the 1950s. An early ancestor of today's technology was a program developed in 1952 to play checkers against a human.

What is "new" is everything that has unfolded since OpenAI, which was founded in 2015, launched ChatGPT in November 2022 and brought generative AI, or GenAI for short, into the mainstream.

The biggest difference from earlier AI is that today's machines are not tethered to the far more limited memory of their predecessors.


Kling, who is based out of Canton, Ohio, said that she has been working with generative AI in her everyday life for about six months.

She told me after the panel that using the Microsoft Copilot chatbot and Microsoft Viva, as part of an early adopter program, has given her back about 20% of her average workday, mostly by auto-replying to basic emails.

“Whatever it is that your organization is building [it helps] be more efficient. It doesn’t mean doing less work,” Kling told the crowd. “What it means is you can actually make the customers happier with your efficiency.”

'A hammer looking for a nail'

Khatibloo, who is based out of the San Francisco area, has been working with generative AI technology for close to 10 years. She told the audience it is alarming how businesses have been throwing things at generative AI tools in the hope of reaping some of the benefits of its "magic."

During that process, they might be unknowingly exposing proprietary or sensitive data and information (which she said is a big no-no, as is putting in any type of customer data or code).

“It’s a hammer looking for a nail. … We really want to think about how we’re going to build … our generative solutions with trust, with ethics,” said Khatibloo, who later said that Salesforce’s AI practices were always intended to be built on the “fundamental value of trust” between the company and its six-figure number of users.

Khatibloo added that there are three major ethical challenges businesses risk falling prey to when deciding to use generative AI and "making technology decisions": reinforcement of toxicity and biases inherent in historical information, the overall threat to security and privacy, and other societal issues that might not initially be taken into consideration.

When asked, Khatibloo told the audience about the two AI tools she has been fiddling with the most. The first is Pi, a conversational chatbot that can mold its responses to fit people of different backgrounds, creating a "focus group of one." The other is Poe, a chatbot aggregator that lets users access several chatbot platforms, including its own.

Gnanasekaran's team has been working with generative AI for years to, among other things, generate personalized recipes, as evidenced by the 2023 launch of Flavorly AI. She said it was important to realize that the technology is still in its infancy, no matter how high its potential.

“The expectation is that AI is going to be perfect,” she said. “Please understand that AI has its own limitations.”

