This chapter examines the reliability and limitations of Large Language Models (LLMs), discussing issues such as tracing outputs back to their sources, hallucinations in generated content, and the risks of granting LLMs excessive agency. It explores the challenges and potential pitfalls of using LLMs for tasks such as writing code, including concerns about data loss and malware installation. The discussion also touches on regulation, the homogeneity that can arise from overlapping training data sets, and the legal questions surrounding LLMs, highlighting reactions from platforms such as Stack Overflow.