
Generative AI 101 with Galois researcher David Burke


David Burke is principal scientist at Galois Inc.

As a researcher at Portland-based Galois, David Burke has worked for the past 18 years at the intersection of computer science, cognitive science and social science.

“In other words, machines, humans and then embed them in larger society,” he explains.

Galois works with government agencies and entities like DARPA and the Air Force Research Lab. With the rise of generative artificial intelligence through tools like ChatGPT for text or Dall-E for images, Burke is fielding more questions from his friends and family about AI.

The bottom line: He doesn’t see the technology replacing humans but does see it as a tool.


Our full ChatGPT coverage:

ChatGPT: Opportunities and threats for Oregon businesses

ChatGPT, for better or worse: How AI will disrupt the health care sector

ChatGPT, for better or worse: How AI will disrupt the food, beverage and hospitality sector

ChatGPT, for better or worse: How AI will disrupt Oregon's law firms

ChatGPT, for better or worse: How AI will disrupt Oregon's commercial real estate firms

ChatGPT on the opportunities and threats of ChatGPT (Viewpoint)


AI has limitations, primarily in the things that humans still do really well, like reasoning, critical thinking and having a basic understanding of the world and one's context within it.

That doesn’t stop the current hype cycle, as investors, entrepreneurs and technology incumbents race to win the market.

Perhaps part of the challenge in containing the hype is the vocabulary used to discuss this tech. Deep learning, which is used in generative AI, might make people think of something deep and profound, when in reality the term is really describing computing power, Burke notes.

“It's just deep as in the number of internal layers (between the input and output). You're basically building neural nets with so many more parameters that, back in 1990, as an example, nobody could have conceived of building a model that has literally billions of parameters,” he said. (More on how neural nets work below.)

Because computing chips are so much faster now, the algorithms can have more layers. That means the systems can be much more sophisticated and nuanced in the patterns they're able to detect. We chatted with Burke about how this technology works, as well as its opportunities and limitations. This interview has been edited for length and clarity.
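Burke's point about parameter counts can be made concrete with a little arithmetic. The sketch below uses a hypothetical fully connected net (not any real model's architecture) and simply counts weights and biases per layer, showing how depth and width multiply the total:

```python
# Count the parameters of a fully connected neural net: a layer with
# n_in inputs and n_out outputs holds an n_in x n_out weight matrix
# plus n_out biases. Adding layers (depth) multiplies the total quickly.
def param_count(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases
    return total

# A small early-1990s-scale net vs. a deeper, wider modern-style net
# (sizes invented for illustration).
small = param_count([100, 50, 10])
deep = param_count([4096] * 12 + [50000])

print(small)  # 5560 parameters
print(deep)   # hundreds of millions
```

The same counting logic, scaled to the layer widths of today's large language models, is where the "literally billions of parameters" comes from.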

What is generative AI? Is it things like ChatGPT or Dall-E?

Those pieces of software are generative in the sense that the software, the AI, if you want to call it that, has been trained on a whole lot of information. You make a query and, in a sense, it reaches into its corpus (of knowledge) and it generates something from your input. Your input is triggering something.

I’ve heard this technology described as autocorrect on steroids.

We've seen autocorrect, which says we're going to look at the next letter and say what it should be based on the statistical history. If people type the word T-E-H, it says, well, no, it really should be T-H-E. So it's using statistics. Now ChatGPT is working on a word-by-word basis. However, it does something that is very clever. ChatGPT is not just going to look at the very next word to make its decision. Instead of looking at the next word or next couple of words, these programs are able, in an incredibly powerful and flexible way, to look at more future words, and are therefore able to generate fluent text that makes sense over longer strings of words.
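The statistical next-word idea Burke describes can be sketched with a toy bigram model: count which word follows which in a training text, then predict the most frequent successor. This illustrates only the simple "look one word back" approach; ChatGPT's advance, as Burke notes, is conditioning on a much longer window of context.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # For each word, count which words have followed it in the text.
    following = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word):
    # Predict the most frequent successor, or None if the word is unseen.
    counts = following.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the couch"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Swap in a larger corpus and the predictions get less silly, but the model still only ever sees one word of context, which is exactly the limitation the newer systems overcome.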

How does generative AI work?

It’s a very, very large and complex neural net, which is a way of writing software. Think of it almost like you are trying to set up a gigantic warehouse. Inputs come in on one side, outputs go out the other side of the warehouse, and in between is basically broken up into a whole bunch of stages. At each stage the input is examined and things move through, almost like an assembly line.

At each station it looks for patterns. If I ask, “Is there a cat in this picture?” the neural net is doing the analysis at many levels of granularity, with each warehouse station getting the results of the station before it and looking at it through a different lens. So at the end it can say something salient. But these neural nets are learning associations, and you can’t control the associations.

So generative AI is ... I'll give you an input and let's see what you generate, what associations you find. But the associations are the sum total of what it's been trained on. If what it's been trained on is somehow biased or skewed or incomplete, it's the ultimate garbage in, garbage out.
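Burke's warehouse analogy can be sketched as a chain of stages, each receiving only the previous stage's output. The numbers and transformations below are invented for illustration; real neural-net layers apply learned weight matrices over many values at once, but the assembly-line structure is the same.

```python
# Each "station" in the warehouse applies a simple transformation to
# whatever it receives and hands the result to the next station.
def make_station(weight, bias):
    def station(x):
        # a weighted transformation with a ReLU-style cutoff,
        # loosely mimicking a neural-net layer
        return max(0.0, weight * x + bias)
    return station

# Three stations between the warehouse's input and its output.
stations = [make_station(0.5, 1.0), make_station(2.0, -1.0), make_station(1.0, 0.0)]

def run_warehouse(x, stations):
    for station in stations:
        x = station(x)  # each station sees only the previous station's result
    return x

print(run_warehouse(4.0, stations))  # 5.0
```

In a real net the weights are not hand-written; they are learned from the training data, which is why the associations the system picks up are, as Burke says, the sum total of what it has been trained on.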

What is generative AI good at?

These tools can generate a lot of very useful raw material. I absolutely believe that. Certainly people who actually write articles for a living could say, wait, I might get a decent start at writing or at crafting things. Or I can ask about a subject, if I'm willing to accept the risk that it could tell me nonsense. It’s the trust angle that I worry about: people naively assuming that they can trust these systems.

What do you mean by that?

ChatGPT has been trained on basically everything written that (developers) could get their hands on. If you think about human interactions, humans care about truth to some degree, but most of our conversations are actually about persuasion and social coordination. If you were to look at the sum total of all that humanity has written, a lot of it is trying to make arguments, make cases for things. It's not trying to report truth.

So it doesn't surprise me at all that if you look at what ChatGPT has to work with, there's going to be a lot of stuff in there that is just not factually correct.

What is generative AI not good at?

This is software; in other words, it doesn't care. It doesn't have emotions. It doesn't have a stake in anything. I don't have a hard-and-fast set of exact boundaries around what generative programs are good for.

My positive way of thinking about them is that they can augment people by suggesting things that you wouldn't have thought about that are useful and helpful, as long as you realize the software doesn't actually care whether you get a deal or not. It's not like your best friend offering advice; it's a tool to augment our creativity. I like to home in on the idea of a creativity or imagination catalyst, because imagination and truth are two different things.

How should we as consumers think about this technology as it is baked into more software products?

That's a huge concern. So I know there are other people sounding the alarm on that because that is something that's happening right now.

People are trying to figure out ways to monetize these capabilities. Nobody's figured out how to do this yet, but ideally, if you were to ask a question and get a response, you'd be able to say, oh, I know where this came from. I know this came from one of these pieces of software, and I know that I'm supposed to treat it with care. Do not treat it as gospel.

So I believe there are probably going to be moves to say how do we in a sense make it visible when you're interacting (with one of these).

You said you are optimistic about the potential for creative catalysts with this tech, but what about this tech keeps you up at night?

More than anything else it's the combination of the fact that people are going to find ways to monetize this and we're going to be dealing with the world of unintended consequences. Look at the writings from the early 1990s, when people were utopian about the potential benefits of the Internet. They had no idea that social media was going to arise. The term didn't even exist.

So I'm thinking, what's the analog to social media that is going to come as a consequence of this, that we're going to say, wow, if we had only known better, this is not the future trajectory we would have wanted. I worry about the unintended consequences of building an infrastructure that is going to nudge us socially, politically, economically and culturally in directions that we're not prepared to handle.


