Developing AI models responsibly means addressing ethics and bias: implementing guardrails to mitigate the human biases present in training data so the technology reflects a better version of humanity rather than amplifying offensive errors. This work is critical for building foundation models that can be trusted in mission-critical applications, despite the challenge that these models are trained on vast amounts of internet data containing inherent biases.
ChatGPT has been out for more than a year and has become the centerpiece of intense discussion and debate about AI.
Christian Hubicki is a robotics researcher and Assistant Professor of Mechanical Engineering at Florida State University. In 2023, he was a guest on Software Engineering Daily, where he discussed ChatGPT and its implications with Sean Falconer. Christian now joins Sean again to check in on the state of AI and where it's headed.
Sean has been an academic, a startup founder, and a Googler. He has published work on a wide range of topics, from information visualization to quantum computing. Currently, Sean is Head of Marketing and Developer Relations at Skyflow and host of Partially Redacted, a podcast about privacy and security engineering. You can connect with Sean on Twitter @seanfalconer.
The post One Year of ChatGPT with Christian Hubicki appeared first on Software Engineering Daily.