7-minute chapter

The Importance of Diverse Perspectives in Shaping AI Policies with Ari Kaplan

Regulating AI: Innovate Responsibly

CHAPTER

Navigating the Risks of Generative AI and Intellectual Property

Exploring the distinctions between traditional AI and generative AI, focusing on intellectual property risks and the challenge of discerning truth from fiction, and emphasizing the importance of truth and intellectual property protection in the evolving landscape of generative AI.

00:00
Speaker 2
10 years, all bets are off. So that leads me to the question: what do you see as the biggest risk from AI that needs to be regulated?
Speaker 1
Yeah, AI is a broad spectrum. I break it into two parts: one is traditional AI, and one is the newer generative AI. Traditional AI is where you have existing data. You mentioned multimodal; what that means is the data could be structured data, video, images, Word docs, PDFs. That's how you make predictions, something in the future based on the past, or classifications: this customer is likely to churn or not. And that involves all the regulations we talked about on this episode. Then there's gen AI, which could be content creation. The big risk there, and the role government can play, is intellectual property. We're already seeing it: you write a paper, and humans plagiarize, intentionally or unintentionally. But this could be sources where the AI doesn't know if something is based in reality or not, if it's fantasy or not. You could have bad actors acting intentionally, and normal people acting unintentionally. So you just don't know what is truth. And these LLMs just create information based on either what they're fed or what they somehow web-crawled. So I think the most important thing is to figure out what is truth and what is not: whether something is fully generated, like art, or has a basis in some intellectual property. I don't know if it would be a watermark or some digital signature to give credit to the original authors, but generated content should have equal, if not better, IP protection than just raw data. I think even more so in generative AI, since these models are either open or closed, but you're going to have millions, and soon hundreds of millions, of people making decisions with them, and you need credit and IP protection just to keep society together.
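The "digital signature to give credit to the original authors" idea can be sketched in a few lines. This is purely an illustrative toy, not any real provenance standard (real efforts such as C2PA are far more involved); the key name and functions here are invented for the example:

```python
import hashlib
import hmac

# Hypothetical illustration: an author (or their tool) signs content with a
# secret key so that provenance can later be verified. A third party holding
# the signature can detect any modification of the signed bytes.

AUTHOR_KEY = b"original-author-secret"  # placeholder key for the sketch

def sign_content(content: bytes, key: bytes = AUTHOR_KEY) -> str:
    """Return a hex HMAC-SHA256 signature binding the content to the key holder."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str, key: bytes = AUTHOR_KEY) -> bool:
    """Check whether the signature matches the content, in constant time."""
    return hmac.compare_digest(sign_content(content, key), signature)

artwork = b"generated image bytes ..."
sig = sign_content(artwork)
print(verify_content(artwork, sig))            # unmodified content verifies
print(verify_content(b"modified bytes", sig))  # any modification fails
```

The point of the sketch is only that attribution can be made checkable: whatever form a mandated watermark takes, verification has to be something a downstream reader can run.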
Speaker 2
Yeah. So you said whether it's truth or not truth. So let's say an AI system is not telling the truth, or causes some harm: who should be liable, the developer, the deployer, the user, or all of them?
Speaker 1
Great question. And yeah, I'm not a lawyer, but I would say it depends. There's a workflow, a process it takes to get a model deployed, and the blame, or the responsibility, probably sits at every step of the way: all of the above. You go from raw data, to ingesting data, transforming data, building the actual model, and having a human evaluate whether that model is good enough or needs tweaking. And then once the model is deployed, the world changes and new data comes in; it's called data drift or model drift. Who's responsible for changing the model? Or say the model is based on somebody, and then that person passes away or changes jobs, and it has outdated information. So there's that responsibility. Then there's the operationalizing, what's called MLOps or LLMOps: who's responsible for swapping new models in and out? So at each step along the way, who's legally responsible goes up or down. The actual workers should bear some responsibility; it's your job, if you're an air traffic controller, to get the planes in safely. But at another point you need the FAA, in this case, or if it's a private company, they're responsible, and they should be responsible for putting measures in place, for putting training in place, to make sure that the people touching any step of the way understand the ramifications and get training on detection. It's also the company's responsibility, in my opinion, to have observability and a dashboard, the air traffic control again: where are all the moving pieces? What's being adhered to and what's not? Where is the data biased, and where is it not? Where is there an outlier or a malfunction? Somebody left the company; are they prohibited from taking model intelligence with them? So ultimately, companies should be legally responsible for ensuring a correct level of governance and auditing: can they trace who did what, when?
But then there's a certain degree of responsibility on the individual as well. Yeah,
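The "data drift" step described above can be sketched as a simple check that a live feature's distribution still matches the training baseline. The threshold and the data are invented for illustration; production systems use richer tests (e.g. population stability index, KS tests), but the shape of the check is the same:

```python
import statistics

# Minimal drift check: flag the model for review when the live mean of a
# feature sits far outside the spread seen at training time.

def drifted(train_values, live_values, z_threshold=3.0):
    """Return True when the live mean is more than z_threshold training
    standard deviations away from the training mean."""
    mean = statistics.mean(train_values)
    stdev = statistics.stdev(train_values)
    z = abs(statistics.mean(live_values) - mean) / stdev
    return z > z_threshold

baseline = [100, 102, 98, 101, 99, 100, 103, 97]  # feature values at training time
stable   = [99, 101, 100, 102, 98]                # live data, same world
shifted  = [150, 155, 149, 152, 151]              # live data after the world changed

print(drifted(baseline, stable))   # False: no action needed
print(drifted(baseline, shifted))  # True: time to retrain or swap models
```

Who is paged when this check fires, and who is accountable for acting on it, is exactly the MLOps responsibility question the speaker raises.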
Speaker 2
so obviously, as you said, it will depend on the circumstances at that point. Should companies have a legal duty to disclose when AI is being used by them?
Speaker 1
Great question. I was asked that once before. The challenge is that AI is such a broad thing that everything involving data has some form of AI involved. So that would basically be the challenge: everywhere you looked would say we're using AI, for transcribing and similar things. But there should be disclosure where plagiarism is a concern: whenever you're sourcing a quote from somebody, or sourcing an original image or photograph, something with intellectual property, yes, you should have to disclose that. Then the question for me, which I'm curious how governments around the world will solve, is that there's a spectrum once you start generating content. Like Photoshop: you can start from a real photo. Sports Illustrated, thank God, didn't get in trouble, but drew some opinions when they would augment some of their photos, or Cosmopolitan. So it's based on a real photo, but it could be 10% modified by Photoshop or 10% modified by AI. What that threshold should be, I don't know; maybe it's zero. Maybe you just say this image was modified by AI. So maybe it's zero, but at some point, if it's a fully generated image, then yeah, absolutely, I want to know it's an AI-generated photograph.
