Deploying AI systems raises concerns about hidden capabilities that may emerge over time, creating challenges for trust and control. The possibility that an AI could conduct social engineering by manipulating software on social media platforms underscores the need for thorough testing and oversight. Undiscovered capabilities and bugs pose a significant risk, particularly if hidden features turn out to have far greater impact than known functionality. Finally, the shift in AI development from tools to agents complicates the assessment of risks and benefits, raising open questions about how controllable and trustworthy such systems can be.