I wouldn't have predicted that we would be here today. It took me by surprise, the ability for it to at least seemingly behave as if it understands the conversation and produce very coherent text with, it seems, a lot of contextual memory. But sometimes I feel like companies just run a little bit too quickly with deploying these models, and I think it's a bit irresponsible, because, first, we don't know. When things like Bing Chat come out and there are mistakes (inevitably there will be mistakes), then what happens is the public's trust dips.
Ken Wenger is the author of the forthcoming book Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI. I’ve been reading it and it is excellent. Ken is a deep thinker and a great writer. He’s also the senior director of research and innovation at CoreAVI and chief technology officer at Squint AI.
His work focuses on the intersection of artificial intelligence and determinism, enabling neural networks to execute in safety-critical systems. Ken has co-authored two articles in the scholarly journal Machine Learning with Applications and several white papers for different publications, including Embedded Computing Design. He also holds several patents under CoreAVI’s auspices.
Listen and learn...
- How neural nets emulate the brain to make decisions
- Why we have to be careful when using the term "intelligence" to describe "AI" systems
- When Ken trusts machines to make decisions... and when he doesn't
- Why LLMs like ChatGPT "hallucinate"
- How generative AI replicates human bias
- Why Ken feels "if we haven't addressed ethical issues we're not ready to deploy AI solutions"
- What AI explainability is and why it's important
References in this episode...