Local tech leaders weigh in on artificial intelligence as White House launches new responsible use initiatives


From left to right: dbt Labs CEO Tristan Handy, Boomi CFO and President Arlen Shenkman, Next Wave Partners CEO John Cowan and Google's Director of AI Practice Caroline Yap, who moderated the conversation. (PACT Phorum)

Ryan Mulligan

The White House on Thursday announced new initiatives aimed at combating the still-untamed capabilities of artificial intelligence, and local tech executives recently offered differing stances on how best to do so.

AI's ability to create lifelike pictures, conversational chatbots, new prose and other original content has begun to blur the line between reality and computer-generated content. How AI is and will be used in business — and how it should be tamed — was the topic of discussion at the Philadelphia Alliance for Capital and Technologies' 11th annual Phorum on Wednesday.

The event came just a day ahead of the Biden administration's announcement of its new responsible-use-of-AI initiatives geared toward protecting rights and safety. The initiative includes $140 million for the National Science Foundation to fund research on the responsible use of the technology. Administration officials, including Vice President Kamala Harris, also met on Thursday with chief executives from top tech companies working with AI to discuss the risks the technology poses as it becomes part of everyday life.

At Phorum, tech executives discussed the inevitability of AI gaining prevalence and the guardrails that will need to come with its growing use. The panel consisted of Tristan Handy, the founder and CEO of dbt Labs; Arlen Shenkman, the newly appointed CFO and president of Boomi; and John Cowan, CEO of Next Wave Partners. In the conversation, moderated by Google's Director of AI Practice, Caroline Yap, they offered differing views on how to bring AI's vast capabilities into business, and how to enforce limits on their use.

How AI is used changes constantly. Broadly, generative AI, such as the technology behind ChatGPT, is the ability of machines to process immense datasets and create something new, such as text, an image or data. Handling datasets is part of the fabric of companies like dbt Labs and Boomi, which are looking to leverage the technology. That could mean anything from producing more advanced code to transcribing and analyzing the content of sales calls.

Handy, who has turned his Philadelphia-based data analytics platform into a startup unicorn, said such AI could mean "not just an efficiency gain" but something "more transformational."


"For my entire career, for 20-plus years, the Holy Grail has been, how do you ask data questions in natural language and get a trusted response back? I think that actually we may not be that far away from from doing that," Handy said.

The use cases for AI are endless, Handy, Shenkman and Cowan agreed, but Shenkman said he worries "about the durability of the technology if no one can trust it at all." He said some sort of public-private partnership to oversee the uses of AI could be one way to put up guardrails.

"Without responsibility there's no accuracy, and I think without accuracy we'll quickly find ourselves in a position where we won't know what's correct and what's not," Shenkman said. "And I think that's the scariest thing about this."

Responsibility would have to come in the form of some regulating body, Shenkman and Cowan said. That could be on the federal level, a public-private partnership, or even on the level of the United Nations, the panelists suggested.

Handy was skeptical about the ability of government to keep up with the evolution of AI, noting he hasn't heard a large-scale idea for AI regulation that is "workable" or "substantive."

Something like a new copyright regulation could be needed for general public use, Handy said. Congress has begun to explore questions like who, if anyone, owns the copyright to generative AI works, and if and how those works can be legally reproduced under fair use.

For businesses, however, Handy said he's found AI's ability to create new works can foster creativity in the workplace by generating new ideas among workers. At dbt Labs, Handy said he's "investing actively in the future of autonomous everything."

"There are use cases for AI that responsibility matters tremendously," Handy said. "But I think that actually there are so, so many use cases where it's a driver of efficiency. It's a driver of creativity."

The panelists see AI as a tool to enhance the workplace and speed up operations. It's not necessarily a substitute for manpower, but a way to increase the capabilities of a workforce.

"The world that I'm operating in is that there's infinity work to do and I am not having the thought, how can I save money and fire people? I'm thinking, how can I accelerate my roadmap," Handy said.

The tech world is embracing AI and can act as a model for other businesses integrating it into daily operations, whatever regulations may be implemented.

