The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT

Bibliographic Details
Published in: Entrepreneurial Business and Economics Review, Vol. 11, No. 2, pp. 7-30
Main Authors: Wach, Krzysztof; Dương, Công Doanh; Ejdys, Joanna; Kazlauskaitė, Rūta; Korzyński, Paweł; Mazurek, Grzegorz; Paliszkiewicz, Joanna; Ziemba, Ewa
Format: Journal Article
Language: English
Published: Kraków: Uniwersytet Ekonomiczny w Krakowie (Cracow University of Economics), 01.01.2023

Summary:
Objective: The objective of the article is to provide a comprehensive identification and understanding of the challenges and opportunities associated with the use of generative artificial intelligence (GAI) in business. This study sought to develop a conceptual framework that gathers the negative aspects of GAI development in management and economics, with a focus on ChatGPT.
Research Design & Methods: The study employed a narrative and critical literature review and developed a conceptual framework based on prior literature. We used a line of deductive reasoning in formulating our theoretical framework to make the study's overall structure rational and productive. Therefore, this article should be viewed as a conceptual article that highlights the controversies and threats of GAI in management and economics, with ChatGPT as a case study.
Findings: Based on a deep and extensive query of the academic literature on the subject, as well as the professional press and Internet portals, we identified various controversies, threats, defects, and disadvantages of GAI, in particular ChatGPT. Next, we grouped the identified threats into clusters and summarized them as seven main threats. In our opinion, they are as follows: (i) no regulation of the AI market and an urgent need for regulation; (ii) poor quality, lack of quality control, disinformation, deepfake content, and algorithmic bias; (iii) automation-spurred job losses; (iv) personal data violation, social surveillance, and privacy violation; (v) social manipulation and the weakening of ethics and goodwill; (vi) widening socio-economic inequalities; and (vii) AI technostress.
Implications & Recommendations: It is important to regulate the AI/GAI market. Advocating for the regulation of the AI market is crucial to ensure a level playing field, promote fair competition, protect intellectual property rights and privacy, and prevent potential geopolitical risks. The changing job market requires workers to continuously acquire new (digital) skills through education and retraining. As the training of AI systems becomes a prominent job category, it is important to adapt and take advantage of new opportunities. To mitigate the risks of personal data violation, social surveillance, and privacy violation, GAI developers must prioritize ethical considerations and work to develop systems that protect user privacy and security. To avoid social manipulation and the weakening of ethics and goodwill, it is important to implement responsible AI practices and ethical guidelines: transparency in data usage, bias-mitigation techniques, and monitoring of generated content for harmful or misleading information.
Contribution & Value Added: By drawing attention to the controversies and hazards associated with GAI and ChatGPT, this article may help underscore the significance of resolving the ethical and legal considerations that arise from the use of these technologies.
ISSN: 2353-883X
2353-8821
DOI: 10.15678/EBER.2023.110201