This research examines whether LLMs demonstrate environmental awareness, with particular focus on how, as a result of their training data and processes, they may reflect and even bias human attitudes toward environmental issues.
With the widespread adoption of AI technologies, particularly large language models (LLMs) such as ChatGPT, questions arise about their training processes and inherent biases. While these models have become integral to daily life, the opacity of their training data and processes, which are protected as corporate trade secrets, makes it difficult to assess their pre-training biases and potential toxicity. Even after developers apply fine-tuning measures aimed at neutrality, initial biases may persist in these systems.
We developed a comprehensive evaluation framework to assess major LLMs from leading AI developers worldwide. The framework examined environmental behavior, knowledge, attitudes, and social norms regarding environmental issues, as well as the ability to predict human shifts toward pro-environmental behavior. To establish a comparative baseline, we administered the same framework in a cross-sectional survey of a stratified random sample of 385 participants representative of the U.S. population.
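A note on the sample size: 385 is consistent with Cochran's standard formula for estimating a proportion at 95% confidence with a 5% margin of error, though the abstract does not state how the figure was derived. A minimal sketch of that conventional calculation, under the assumption it applies here:

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's sample-size formula for estimating a population proportion.

    z: z-score for the desired confidence level (1.96 for 95%)
    p: assumed proportion (0.5 is the conservative, variance-maximizing default)
    e: desired margin of error
    """
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n)  # round up to the next whole participant

print(cochran_sample_size())  # 385 (1.96^2 * 0.25 / 0.05^2 = 384.16, rounded up)
```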
The results show that while LLMs possess extensive environmental knowledge, this knowledge does not always translate into positive predictions about human environmental behavior. Significant gaps in environmental awareness exist between human participants and LLMs, and these gaps vary with the AI developer, its national origin, and the training language. Our study highlights the ongoing challenges in developing AI systems that can accurately interpret and represent complex human and societal issues. These findings have important implications for building more culturally and environmentally aware AI systems, and they suggest the need for more sophisticated approaches to AI training and development with respect to environmental understanding and human behavior.