What to expect from gen AI in enterprise this year, with Philipp Heltewig, CEO, Cognigy
Feb 7, 2024
Philipp Heltewig, CEO of Cognigy, discusses the rise of generative AI and its impact on the workplace. He and the host also cover the use of large language models for classification in chatbots, the challenges and solutions for enterprises adopting generative AI, advice for businesses hesitant about automating customer service, the frailty of AI in customer-facing roles, and the continuing role of traditional technologies.
Generative AI emerged as a breakthrough technology in 2023, leading to transformative conversational experiences.
Companies using large language models (LLMs) face challenges in scalability, reliability, and privacy, but can mitigate risks by implementing fallback cascades and partnering with trusted LLM providers.
Deep dives
The rise of generative AI in 2023
In 2023, generative AI emerged as a breakthrough technology that gained widespread attention. While the underlying models had existed before, the release of ChatGPT in November 2022 brought generative AI to the forefront of the industry. The technology received significant hype and sparked both excitement and concern. Companies rapidly modified their roadmaps to incorporate generative AI, exploring techniques like prompt engineering, retrieval-augmented generation, and multi-modal generative AI. The year 2023 marked a transformative phase for the industry, with the emergence of conversational experiences that surpassed what was previously considered possible.
Challenges in deploying large language models
As companies started using large language models (LLMs), they faced challenges in scalability, reliability, and resilience, particularly during peak demand periods. Ensuring the availability of services became a crucial concern. While LLMs have improved at reducing hallucinations compared to previous years, complete elimination remains a work in progress. Companies are exploring fallback cascades and other mechanisms to mitigate the risk of incorrect answers. The scalability and reliability aspects are being addressed by major players like Microsoft, Google, and Amazon, who are investing in infrastructure to support LLM deployments.
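The fallback-cascade idea mentioned above can be sketched in a few lines: try a primary model, and when it is unavailable or refuses to answer, fall back to cheaper tiers and ultimately to a human handoff. The handler names below (`primary_llm`, `backup_llm`, `faq_lookup`) are hypothetical stand-ins, not part of any real product API:

```python
def fallback_cascade(handlers, query):
    """Try each handler in order; return the first usable answer."""
    for handler in handlers:
        try:
            answer = handler(query)
            if answer is not None:
                return answer
        except Exception:
            continue  # service unavailable or timed out; try the next tier
    return "Let me connect you with a human agent."  # final safety net

# Hypothetical tiers: a primary LLM, a cheaper backup model, and a
# static FAQ lookup. A real deployment would call actual model APIs.
def primary_llm(query):
    raise TimeoutError("primary model overloaded")  # simulate an outage

def backup_llm(query):
    return None  # simulate a low-confidence refusal

def faq_lookup(query):
    faqs = {"opening hours": "We are open 9am-5pm, Monday to Friday."}
    return faqs.get(query.lower())

print(fallback_cascade([primary_llm, backup_llm, faq_lookup], "Opening hours"))
```

The design choice here is that each tier either raises, returns `None` (no usable answer), or returns a string, so the cascade never surfaces an error to the end user.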
Mitigating privacy and security concerns
Privacy and security are critical considerations in deploying LLMs. Companies partnering with LLM providers need to assess data sovereignty issues and evaluate guarantees regarding data storage, both for training and logging purposes. Ensuring compliance with data privacy regulations, such as HIPAA, is particularly important in industries like healthcare. Trusted LLM providers, like Microsoft, can alleviate data privacy concerns by offering stateless systems that minimize data storage. Companies can also implement measures like filtering out personally identifiable information before processing data with LLMs.
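The PII-filtering measure mentioned above can be illustrated with a minimal sketch, assuming a simple regex-based scrubber that masks email addresses and phone-number-like digit runs before text reaches an LLM. Production systems would typically use a proper NER-based redaction service rather than hand-rolled patterns:

```python
import re

# Hypothetical minimal PII scrubber; patterns are illustrative, not exhaustive.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone-like digit runs
]

def scrub_pii(text: str) -> str:
    """Replace matched PII spans with placeholders before the LLM call."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

msg = "Contact me at jane.doe@example.com or +1 555-123-4567."
print(scrub_pii(msg))  # Contact me at [EMAIL] or [PHONE].
```

Scrubbing happens client-side, so the raw identifiers never leave the company's own systems, which complements provider-side guarantees like statelessness.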
Progressing from safe use cases to full automation
Enterprises are advised to start with low-risk use cases to gain experience and build confidence with LLMs. Classification tasks can be handled effectively by LLMs, driving up the containment rate and delivering value. Knowledge base retrieval and agent assist tools are also suitable options to enhance customer interactions. It is crucial to strike a balance between leveraging LLM capabilities and addressing privacy and compliance requirements. Enterprises that delay embracing LLMs may find themselves scrambling to catch up with competitors in the future.
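As a concrete illustration of the low-risk classification use case above, the sketch below shows one way an LLM could be prompted to pick a single intent label from a fixed list, with a guard that routes anything unexpected to a safe default. The intent list, prompt wording, and `fake_llm` stub are all assumptions for illustration; a real bot would call an actual model endpoint:

```python
INTENTS = ["billing", "cancellation", "technical_support", "other"]

def build_prompt(utterance: str) -> str:
    # Constrained-output prompt: asking for exactly one label keeps the
    # reply machine-readable for the bot's downstream routing logic.
    return (
        "Classify the customer message into exactly one of these intents: "
        + ", ".join(INTENTS)
        + ".\nMessage: " + utterance
        + "\nIntent:"
    )

def classify(utterance: str, llm) -> str:
    label = llm(build_prompt(utterance)).strip().lower()
    return label if label in INTENTS else "other"  # guard against free-form replies

# Stand-in for a real model call (e.g. a hosted LLM endpoint):
def fake_llm(prompt: str) -> str:
    return "billing" if "invoice" in prompt.lower() else "Sure, happy to help!"

print(classify("I have a question about my invoice", fake_llm))  # billing
print(classify("My router keeps rebooting", fake_llm))           # other
```

Because misclassification at worst routes a customer to the wrong queue, this kind of task carries far less risk than letting the model generate free-form answers, which is why it suits an early deployment.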
What can we expect from AI over the next 12 months? What kind of use cases will make it to production and add value? What will the impact be for businesses that leverage it? What will be the challenges and hurdles we’ll need to overcome to make it work? And what does one of the market leading AI automation platforms plan to do about it?
Join me as I speak with the CEO of Cognigy, Philipp Heltewig, and dive into the year ahead in AI.