Developing an Artificial Intelligence Damage Management Model in the Process of Implementing Sports Research, and Providing Relevant Solutions

Document Type: Research Paper

Authors

1 Associate Professor, Department of Sport Management, Faculty of Sports Sciences, University of Isfahan, Isfahan, Iran.

2 Ph.D. Candidate, Department of Sport Management, Faculty of Sports Sciences, University of Isfahan, Isfahan, Iran.

3 Associate Professor, Sport Management Research Center, Sport Sciences Research Institute, Tehran, Iran.

10.30473/arsm.2026.75142.3976

Abstract

Introduction
Artificial Intelligence (AI) is rapidly transforming nearly every aspect of human activity, including industry, education, medicine, and, increasingly, academic research. In the realm of sports science, AI applications are expanding swiftly, ranging from athlete performance tracking, injury prediction, and tactical simulations to the automation of training regimes and strategic decision-making. These developments have revolutionized the way sports are analyzed, taught, and even experienced. One of the most groundbreaking and controversial implementations of AI is its integration into the academic research process. This includes tasks such as literature reviews, data analysis, and scientific writing, all of which were traditionally the domain of human researchers.
AI-powered tools like ChatGPT, DeepSeek, Sider, and Gemini now offer unprecedented capabilities to accelerate the research workflow. They can summarize vast amounts of data, draft coherent texts, detect patterns in data, and even propose research questions or hypotheses. Their presence is reshaping the academic landscape by offering both support and potential disruption. However, despite their practical benefits, the adoption of AI in research, particularly within sports science, raises a wide array of concerns. These concerns range from the generation of technically incorrect or contextually shallow content to deeper ethical issues such as the erosion of human agency, data privacy violations, and algorithmic biases.
AI systems, while impressive in computational ability, are still limited in their understanding of domain-specific contexts and nuanced interpretations. In sports research, which often involves the integration of physiological, psychological, tactical, and sociocultural dimensions, these limitations become even more pronounced. The challenge is compounded when AI is used for content generation in academic publishing, where the accuracy and integrity of scientific communication are paramount. There are growing fears that over-reliance on AI tools might weaken essential research skills, reduce critical thinking, and promote a culture of superficial engagement with scholarly material.
Furthermore, the academic community faces the threat of standardization and homogenization in thought, where AI-generated content follows repetitive and formulaic patterns. The subtlety and originality that define rigorous academic work are at risk. There is also concern about the dilution of scholarly identity, as the line between human-authored and machine-generated content becomes increasingly blurred. Such developments can have far-reaching consequences, especially in fields like sports science that rely heavily on interdisciplinary insights and deep contextual understanding.
The purpose of this study is not to advocate for the abandonment of AI tools but rather to provide a structured framework for their responsible and intelligent use. This research aims to design a comprehensive model that identifies and categorizes the risks associated with AI in sports research and proposes targeted strategies to mitigate these risks. By doing so, it contributes to the development of policies and practices that preserve the integrity of academic output while embracing the productive potential of AI.
Given the accelerated integration of AI in academia, the sports research community must proactively establish standards that ensure scientific quality, ethical rigor, and methodological transparency. This involves not only the technological fine-tuning of AI systems but also the education and empowerment of researchers. It is essential to cultivate a culture where AI is viewed as a complementary tool, not a substitute for human cognition. Such a paradigm shift requires comprehensive stakeholder engagement, including policymakers, educational institutions, journal editors, and technology developers.
The current study addresses this critical need by drawing insights from both human experts and advanced AI systems. Through a methodologically rigorous approach involving qualitative interviews and thematic analysis, it formulates a robust model of AI risk management in sports research. This model seeks to bridge the gap between AI innovation and academic responsibility, thereby ensuring that the future of sports science remains both dynamic and credible.
 
Methodology
This qualitative study employed thematic analysis to identify key themes and patterns in the risks, and the management strategies, associated with AI in sports science research. In-depth interviews were conducted with two groups of participants: 12 human experts in AI and sports research, and four major AI chatbots (ChatGPT, DeepSeek, Sider, and Gemini). Human participants were selected for their expertise in fields such as AI ethics, machine learning, sports informatics, and academic writing. Interviews continued until theoretical saturation was reached. The collected data were coded and analyzed in MAXQDA, and reliability was ensured through double-coding, expert validation, and structured feedback loops. Two research questions guided the interviews: first, what specific risks does the use of AI pose to the academic research process in sports science; and second, which strategies are most effective for managing those risks? Responses were organized into thematic codes, from which a structured conceptual model connecting the identified risks to targeted mitigation strategies was formulated.
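The double-coding step mentioned above is commonly checked with an inter-coder agreement statistic such as Cohen's kappa. The sketch below is illustrative only: the coder labels and segment codes are hypothetical, not the study's actual MAXQDA code system, and the study does not report which agreement statistic, if any, was computed.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders assigning one code per text segment."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: proportion of segments coded identically
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement by chance, from each coder's marginal code frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten interview segments
coder_1 = ["quality", "ethics", "skills", "quality", "tech",
           "social", "ethics", "quality", "skills", "credibility"]
coder_2 = ["quality", "ethics", "skills", "ethics", "tech",
           "social", "ethics", "quality", "skills", "credibility"]

kappa = cohens_kappa(coder_1, coder_2)  # high agreement, one disagreement
```

Values above roughly 0.8 are conventionally read as strong agreement; in practice a tool such as MAXQDA's intercoder-agreement feature would be used rather than hand-rolled code.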
 
Findings
The findings revealed a set of critical issues and corresponding management strategies associated with AI use in sports research. The risks were categorized into six major sub-themes encompassing 46 unique codes. First, in terms of content quality and accuracy, AI-generated texts often contained analytical errors, misleading information, repetitive content, and insufficient contextual understanding. Second, dependency on AI was shown to erode human research skills such as critical thinking, creativity, and academic writing. Third, ethical and legal concerns arose from AI's potential to plagiarize, misuse data, breach privacy, and produce biased or untraceable content. The fourth risk area involved technical and infrastructural challenges, including software bugs, processing limitations, high costs, and cybersecurity vulnerabilities. Fifth, socio-cultural effects included reduced human interaction in research, the loss of traditional academic values, unequal access to technology, and declining trust in AI-generated outcomes. Lastly, a major concern was the threat AI poses to the credibility of scientific research through the proliferation of low-quality publications, reputational damage to authors, and declines in institutional rankings.
In terms of management strategies, six sub-themes were identified with a total of 43 codes. These included improving AI algorithms to enhance transparency and reliability, implementing robust policy and legal frameworks to regulate ethical use, and investing in education to raise awareness and train researchers in responsible AI usage. Additionally, human oversight remained critical in validating AI-generated content and ensuring quality control. Infrastructure development was another pillar, emphasizing equitable access to technology and international collaboration. Finally, transparency and accountability were necessary to ensure responsible use, including open algorithmic disclosures and clearly defined roles for content responsibility.
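The conceptual model pairs risk sub-themes with the strategies that address them; the minimal sketch below shows one way such a mapping could be encoded for downstream use (e.g. a checklist tool). The theme names are paraphrases of the findings above, and the specific risk-strategy pairings are illustrative assumptions, not the study's published model.

```python
# Hypothetical encoding of the risk -> strategy mapping; pairings are
# illustrative, paraphrased from the six risk and six strategy sub-themes.
RISK_STRATEGY_MODEL = {
    "content quality and accuracy":
        ["improved AI algorithms", "human oversight"],
    "dependency and skill erosion":
        ["education and training", "human oversight"],
    "ethical and legal concerns":
        ["policy and legal frameworks", "transparency and accountability"],
    "technical and infrastructural challenges":
        ["infrastructure development", "improved AI algorithms"],
    "socio-cultural effects":
        ["education and training", "infrastructure development"],
    "credibility of scientific research":
        ["transparency and accountability", "policy and legal frameworks"],
}

def strategies_for(risk: str) -> list[str]:
    """Return the mitigation strategies linked to a risk sub-theme."""
    return RISK_STRATEGY_MODEL.get(risk, [])
```

A real implementation would attach the 46 risk codes and 43 strategy codes beneath these sub-themes, but the two-level structure is the essential shape of the model.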
 
Discussion and Conclusion
The rapid integration of AI into the sports research ecosystem presents both remarkable opportunities and significant vulnerabilities. On the positive side, AI enables efficiency, innovation, and access to vast data-driven insights. It allows researchers to accelerate data processing, identify patterns that might otherwise remain undetected, and simulate complex systems. This leads to increased productivity and, potentially, more impactful outcomes.
However, these benefits come with considerable risks. Without adequate human oversight and ethical safeguards, AI can compromise research quality, dilute scholarly rigor, and introduce systemic biases. The findings emphasize that AI systems frequently struggle with contextual nuances specific to sports science, leading to overly generic or misleading outputs. Such limitations can result in superficial analysis, erroneous interpretations, and ultimately, flawed conclusions.
The risk of skill degradation is particularly alarming. As researchers grow dependent on AI-generated outputs, their capacity for independent analysis, critical evaluation, and creative thinking may erode. This shift represents not only a methodological concern but also a cultural one: the academic tradition is built on inquiry, skepticism, and intellectual independence, all of which are threatened when machines dominate cognitive tasks. The study's insights suggest a clear need to recalibrate the human-AI relationship in academia so that researchers do not become passive consumers of algorithmic content.
Ethical and legal challenges are also pressing. Issues such as data misuse, unintentional plagiarism, algorithmic opacity, and unregulated data scraping are becoming increasingly prevalent. These challenges underscore the urgency of developing institutional and national governance frameworks that can both empower and restrain AI usage. Legal structures must evolve to address the new realities introduced by intelligent systems, particularly in how intellectual property, data rights, and accountability are defined and enforced.
Technical and infrastructural obstacles further complicate the picture. Many institutions, especially in developing regions, lack the computational resources, cybersecurity infrastructure, and technical expertise to effectively implement and monitor AI tools. This creates disparities in research capacity and contributes to a growing digital divide. Such inequalities can marginalize institutions and researchers, limiting the global inclusiveness of sports science and hindering international collaboration.
Social implications are equally significant. The shift in the researcher's role, from knowledge creator to system operator, can affect professional identity, academic motivation, and the broader culture of inquiry. Trust in research findings may decline if stakeholders suspect that outputs are generated by unvetted or untraceable algorithms. This erosion of trust not only affects individual researchers but also undermines the credibility of institutions, journals, and the broader scientific community.
In response to these multifaceted challenges, the study proposes a conceptual model that incorporates both preventive and corrective measures. These include legal and ethical frameworks, technological enhancements, capacity-building initiatives, and infrastructural support. A key recommendation is to embed transparency and accountability into the development and deployment of AI tools. Researchers must have access to clear documentation about how AI systems operate, what data they use, and how they make decisions. Furthermore, responsibility for AI-generated content must be explicitly defined to avoid ambiguity in authorship and liability.
The model also highlights the importance of educational reform. AI literacy should become a core component of academic training in sports science. Researchers need to understand not only how to use AI tools but also when not to use them. This involves critical thinking, ethical reasoning, and domain-specific expertise. Training programs should emphasize the limitations of AI, the value of human insight, and the necessity of methodological rigor.
Institutions are encouraged to adopt proactive policies that guide AI usage in research settings. These policies should align with global best practices but also reflect the unique ethical and cultural values of each region. Cross-border collaborations are especially important for sharing resources, harmonizing standards, and fostering innovation in a responsible manner.
Journals and peer-review systems must also adapt. Reviewers need tools and training to identify AI-generated content and assess its quality. Editorial guidelines should be updated to reflect the realities of AI-enhanced writing and establish expectations regarding transparency, authorship, and originality.
In summary, this study offers a forward-looking framework for integrating AI into sports science research in a manner that preserves its integrity and enhances its value. The proposed model serves as a roadmap for researchers, institutions, and policymakers seeking to harness the power of AI while safeguarding the foundational principles of academic inquiry. Responsible innovation, guided by ethical foresight and collective commitment, is essential to ensure that the adoption of AI leads to progress, not peril, in the world of sports research.
