# What is the goal of ChatProtect?
ChatProtect detects and removes hallucinated content from the output of large language models (LLMs).
While LLMs are increasingly integrated into daily life, they are prone to producing hallucinated information, ranging from factual inaccuracies to completely made-up content. This limitation severely threatens LLMs' trustworthiness and practical usability. We propose ChatProtect, a simple yet effective approach to detecting and removing hallucinated information from LLM-generated text.