GenAI hot takes and bad use cases (Practical AI #304)
Feb 24, 2025
The hosts dive into the surprising pitfalls of generative AI, discussing where not to apply the technology. They address the importance of human oversight in high-stakes industries like defense and medicine. Delving into the challenges of linguistic diversity, they stress the need for inclusive AI. Key limitations are highlighted, particularly in software development, where AI is a tool, not a replacement for skilled programmers. The episode underscores the critical importance of understanding AI's boundaries and the risks of over-reliance.
Completely autonomous agents are unsuitable for critical tasks like sales processes, which require nuanced human decision-making and oversight.
Generative AI struggles with time series forecasting and cannot independently produce reliable software applications without skilled human intervention.
Deep dives
The Limitations of Autonomous Agents
Completely autonomous agents are currently ill-suited for critical tasks that require human oversight. In high-stakes scenarios like sales processes, they tend to compound errors and produce undesirable results. Handing an AI agent an entire sales pipeline, while tempting, often creates more inefficiency than it removes, because the work depends on nuanced interactions and judgment calls. It is more effective to use AI to assist human professionals with specific tasks, as in the sketch below, than to automate whole processes without human involvement.
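To make the "assist, don't replace" point concrete, here is a minimal human-in-the-loop sketch in Python. It is illustrative only: draft_followup_email is a hypothetical stand-in for whatever generative model call you might use, and the CRM notes are invented.

# Illustrative human-in-the-loop pattern: the model drafts, a person decides.
# draft_followup_email() is a hypothetical placeholder for any LLM call.

def draft_followup_email(crm_notes: str) -> str:
    # Placeholder for a generative AI call that turns CRM notes into a draft.
    return f"Hi, just following up on our last conversation about: {crm_notes}"

def send_email(text: str) -> None:
    print("Sent:", text)

def assisted_followup(crm_notes: str) -> None:
    draft = draft_followup_email(crm_notes)
    print("--- Draft for review ---")
    print(draft)
    decision = input("Send this draft? [y/N] ").strip().lower()
    if decision == "y":
        send_email(draft)  # human approved; the agent never acts on its own
    else:
        print("Draft discarded; the salesperson writes their own message.")

The point of the pattern is that the generative step only proposes; a person stays in the loop for every action that actually reaches a customer.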
Challenges with Time Series Forecasting
Generative AI models struggle with time series forecasting because they are not built to model numerical trends. Lacking grounding in the underlying process that produced the data, they often return erroneous outputs when asked to extend a sequence of values into the future. Expecting an LLM to predict stock prices from a text prompt, for example, is a recipe for disappointment. Using generative AI to write code that calls established statistical methods can help, but even then it is not a standalone solution; something like the sketch below still needs a human to choose and validate the method.
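As a minimal sketch of what "established statistical methods" can mean here, this Python snippet does simple exponential smoothing, the kind of classical technique generated code might reach for instead of asking an LLM to emit the next numbers. The series and smoothing factor are made-up illustration values.

# Minimal sketch: one-step-ahead forecast via simple exponential smoothing.
# The data and alpha are invented for illustration, not from the episode.

def exponential_smoothing_forecast(series, alpha=0.5):
    """Return a one-step-ahead forecast from a numeric series."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

monthly_sales = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print(exponential_smoothing_forecast(monthly_sales))  # forecast for the next period

Even with code like this, someone still has to decide whether the model fits the data, which is exactly the human judgment the hosts argue you cannot automate away.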
Ineffectiveness in Full Application Development
Using generative AI for complete software application development is currently impractical; these systems cannot reliably produce robust, functional applications on their own. AI coding tools can help in the way a junior developer might, but they are not suited to building full-scale programs unassisted. Attempts to generate complex applications without skilled human oversight usually yield unsatisfactory results, which underscores the need for developer expertise in reviewing and managing AI contributions. Future advances may improve AI's coding capabilities, but for now, relying solely on generative AI for substantial coding projects is ill-advised.
It seems like all we hear about are the great use cases for GenAI, but where should you NOT be using the technology? On this episode Chris and Daniel share their hot takes and bad use cases. Some may surprise you!
Changelog++ members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
Domo – The AI and data products platform. Strengthen your entire data journey with Domo’s AI and data products.
Fly.io – The home of Changelog.com — Deploy your apps close to your users — global Anycast load-balancing, zero-configuration private networking, hardware isolation, and instant WireGuard VPN connections. Push-button deployments that scale to thousands of instances. Check out the speedrun to get started in minutes.