ChatGPT in Academia: The New Focus of Ethical Discussions

Publication Date | 21 May 2024, Tuesday

Launched by OpenAI in November 2022, the Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence chatbot known for its human-like communication abilities. ChatGPT has attracted great interest in the academic world, but this interest has also sparked ethical debates. The possibility that researchers may delegate writing tasks to ChatGPT raises questions about authorship and ethical use. As the technology advances, addressing these ethical issues in academic settings becomes unavoidable.


The Danger of Producing Fake Data with ChatGPT

A two-stage case study examining ChatGPT's ability to generate quantitative data from an ethical perspective produced striking results. For the first time, it questioned whether researchers inclined to fabricate data unethically could obtain it from artificial intelligence. Although ChatGPT was able to respond to different surveys at certain sample sizes, the generated data sets showed validity problems. Analyses of the artificial data revealed low data quality and systematic response patterns, showing that the generated data were unusable at the hypothesis-testing stage. The study argues that data simulated by ChatGPT do not meet the criteria for hypothesis testing, offering an important perspective, particularly for scientific journal editors, on article evaluation processes.
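To give a concrete sense of what "systematic responses" can look like in practice, here is a minimal sketch of one common validity screen: flagging "straight-liners," respondents whose answers barely vary across survey items. This is an illustrative assumption, not the study's own procedure; the function name and threshold below are hypothetical.

```python
import statistics

def is_straight_liner(responses, sd_threshold=0.5):
    """Flag a respondent whose answers barely vary across survey items.

    The 0.5 threshold is an illustrative choice, not an established cutoff.
    """
    return statistics.pstdev(responses) < sd_threshold

# A suspiciously systematic respondent answering almost every
# 5-point Likert item with 4:
fake = [4, 4, 4, 4, 4, 4, 4, 3]
# A respondent with more natural variation across items:
human = [2, 5, 3, 4, 1, 4, 2, 5]

print(is_straight_liner(fake))   # -> True
print(is_straight_liner(human))  # -> False
```

Low within-respondent variance is only one symptom; a real screening would combine several such checks before drawing any conclusion.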


Preventing Fake Data: New Perspectives and Suggestions

This study, which tests ChatGPT's ability to produce fake quantitative data sets, offers important recommendations. While it shows that artificial intelligence can simulate responses within ideal statistical parameters, it highlights validity problems in the generated data sets. The findings demonstrate, with practical examples, that data produced by ChatGPT are unsuitable for hypothesis tests; at the same time, ChatGPT can answer short or long surveys on behalf of hundreds of fake participants. The authors suggest that artificial intelligence developers refine their algorithms to prevent the creation of fake data sets, and they highlight what journal editors should watch for in order to detect articles prepared with AI-generated data. They also recommend that similar research be conducted in other disciplines using updated versions of ChatGPT, to enrich current discussions on the unethical use of artificial intelligence in academia and scientific research.
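As one example of the kind of simple screen an editor might run, the sketch below flags a data set in which many respondents share an exactly duplicated answer vector, a plausible red flag for mass-generated responses. This is a hypothetical illustration, not a method proposed in the study; the function name and sample data are assumptions.

```python
from collections import Counter

def duplicate_rate(dataset):
    """Fraction of rows whose full response vector appears more than once."""
    counts = Counter(tuple(row) for row in dataset)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(dataset)

# Five hypothetical respondents: three gave identical answers on all items.
responses = [
    [1, 2, 3],
    [1, 2, 3],
    [4, 5, 1],
    [1, 2, 3],
    [2, 2, 2],
]
print(duplicate_rate(responses))  # -> 0.6
```

A high duplicate rate alone proves nothing, but combined with other anomalies it can justify asking authors for their raw data.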


Study Link: https://www.emerald.com/insight/content/doi/10.1108/JHTT-08-2023-0237/full/html