As Large Language Models (LLMs) become increasingly integrated into work and education, a phenomenon of ‘secret use’ has emerged in which individuals use these tools without disclosing that assistance. This paper investigates the motivations, practices, and ethical implications of secret LLM use across different contexts. Through interviews and surveys with users who have engaged in undisclosed LLM use, we identify patterns of behavior, justifications, and perceived risks. Our findings reveal complex social dynamics around the disclosure of AI assistance, including concerns about judgment, fairness, and authenticity. We discuss the implications for policy development, educational practices, and workplace norms as LLMs become more prevalent, and propose design considerations for LLM interfaces that could address some of the underlying issues driving secret use.