Three scenarios for AI in education: from responsible support to co-creation
Abstract
This article proposes a pragmatic, proportionate path for integrating generative artificial intelligence into higher education through three scenarios graded by autonomy, agency, and risk (responsible support, guided collaboration, and co-creation with strengthened declaration), which turn broad principles into verifiable, traceable teaching decisions across the teaching cycle (planning, creation of materials, support, and assessment). The guiding thread is artificial intelligence as a complement under academic judgment, never a substitute, with transparency (declaration of use and marking of synthetic content), external verification of facts and citations, and equity and inclusion by design, in line with UNESCO guidance (a human-centered vision, immediate actions, and capacity building), the AI Act (Article 50 on transparency and marking obligations), the Safe AI in Education Manifesto (human oversight, privacy, accuracy, explainability, transparency), and the SAFE framework (Safety, Accountability, Fairness, and Efficacy) as an operational bridge between policy and the classroom. Scenario 1 prioritizes low risk and high transparency; Scenario 2, traceable iteration with meaningful human post-editing; Scenario 3, robust evidence and auditing (prompts, versions, verification, bias and language checks, human and peer review), with controls reinforced in proportion to its greater impact. This gradient aligns with sector guidance, which promotes authenticity, agency, and ownership of the process and advises against relying on detectors, favoring designs that test for agency and traceability instead. Two instruments facilitate adoption and consistent assessment: a cross-cutting rubric (truthfulness and currency, traceability, correction of hallucinations, equity and language, quality of interaction) on the one hand, and task-type checklists on the other.
The result is an operational map for marking, verifying, and documenting in proportion to risk, one that turns artificial intelligence into a pedagogical opportunity without conceding rigor, fairness, or accountability.
References
Afreen, J., Mohaghegh, M., & Doborjeh, M. (2025). Systematic literature review on bias mitigation in generative AI. AI and Ethics, 5(5), 4789–4841. https://doi.org/10.1007/s43681-025-00721-9
Alier, M., García-Peñalvo, F. J., Casañ, M. J., Pereira, J. A., & Llorens-Largo, F. (2024). Safe AI in Education Manifesto. Version 0.4.0. https://manifesto.safeaieducation.org.
Alier-Forment, M., Casañ-Guerrero, M. J., Pereira, J., García-Peñalvo, F. J., & Llorens-Largo, F. (2026). Inteligencia artificial generativa y autonomía educativa: metáforas históricas y principios éticos para la transformación pedagógica. RIED: revista iberoamericana de educación a distancia, 29(1). https://doi.org/10.5944/ried.29.1.45536
An, J., Huang, D., Lin, C., & Tai, M. (2025). Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation. PNAS Nexus, 4(3), Article pgaf089. https://doi.org/10.1093/pnasnexus/pgaf089
Anthropic. (2025, September 29). Introducing Claude Sonnet 4.5. Anthropic. https://d66z.short.gy/Gk55eS
Artopoulos, A., & Lliteras, A. (2024). Alfabetización crítica en IA: Recursos educativos para una pedagogía de la descajanegrización. Trayectorias Universitarias, 10, Article e168. https://doi.org/10.24215/24690090e168
Bai, L., Liu, X., & Su, J. (2023). ChatGPT: The cognitive effects on learning and memory. Brain‐X, 1(3), Article e30. https://doi.org/10.1002/brx2.30
Bedington, A., Halcomb, E. F., McKee, H. A., Sargent, T., & Smith, A. (2024). Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students. Computers and Composition, 71, Article 102833. https://doi.org/10.1016/j.compcom.2024.102833
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada, March 3 - 10, 2021) (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
Bittle, K., & El-Gayar, O. (2025). Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda. Information, 16(4), Article 296. https://doi.org/10.3390/info16040296
Boonstra, L. (2025). Prompt Engineering. Google. https://d66z.short.gy/3ok7tY
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., & Amodei, D. (2020). Language Models are Few-Shot Learners. arXiv, Article arXiv:2005.14165v4. https://doi.org/10.48550/arXiv.2005.14165
Burneo-Arteaga, P., Lira, Y., Murzi, H., Balula, A., & Costa, A. P. (2025). Capability-based training framework for generative AI in higher education. Frontiers in Education, 10, Article 1594199. https://doi.org/10.3389/feduc.2025.1594199
Castañeda, L., & Selwyn, N. (2018). More than tools? Making sense of the ongoing digitizations of higher education. International Journal of Educational Technology in Higher Education, 15(1), 22. https://doi.org/10.1186/s41239-018-0109-y
Chatterji, A., Cunningham, T., Deming, D. J., Hitzig, Z., Ong, C., Shan, C. Y., & Wadman, K. (2025). How people use ChatGPT (NBER Working Paper No. 34255). National Bureau of Economic Research. https://doi.org/10.3386/w34255
Chelli, M., Descamps, J., Lavoué, V., Trojani, C., Azar, M., Deckert, M., Raynier, J.-L., Clowez, G., Boileau, P., & Ruetsch-Chelli, C. (2024). Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis. Journal of Medical Internet Research, 26, Article e53164. https://doi.org/10.2196/53164
Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., & Amodei, D. (2017). Deep reinforcement learning from human preferences. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems. Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA (Vol. 30). Curran Associates, Inc.
Clarke, A. C. (1973). Profiles of the Future: An Inquiry into the Limits of the Possible (2nd ed.). Harper & Row.
DeepSeek. (2025, September 29). Introducing DeepSeek-V3.2-Exp. DeepSeek API Docs. https://d66z.short.gy/eXidah
Dhar, P. (2020). The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8), 423–425. https://doi.org/10.1038/s42256-020-0219-9
Dúo-Terrón, P. (2024). Generative artificial intelligence: Educational reflections from an analysis of scientific production. Journal of Technology and Science Education, 14(3), 756–769. https://doi.org/10.3926/jotse.2680
EDSAFE AI. (2021). What is the EDSAFE AI SAFE Framework? EDSAFE AI. https://d66z.short.gy/RNVmzh.
European Parliament, & Council of the European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). Brussels, Belgium: European Commission. Retrieved from https://bit.ly/2O2juE9
European Parliament, & The Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). (Official Journal of the European Union). European Union. Retrieved from https://eur-lex.europa.eu/eli/reg/2024/1689/oj
Frau-Meigs, D. (2024). User empowerment through media and information literacy responses to the evolution of generative artificial intelligence (GAI) (CI/FMD/MIL/2024/3). UNESCO. https://d66z.short.gy/Wg2YCU
Fulsher, A., Pagkratidou, M., & Kendeou, P. (2025). GenAI and misinformation in education: a systematic scoping review of opportunities and challenges. AI & SOCIETY. https://doi.org/10.1007/s00146-025-02536-y
García-Peñalvo, F. J. (2023). The perception of Artificial Intelligence in educational contexts after the launch of ChatGPT: Disruption or Panic? Education in the Knowledge Society, 24, Article e31279. https://doi.org/10.14201/eks.31279
García-Peñalvo, F. J. (2024a). Generative Artificial Intelligence and Education: An Analysis from Multiple Perspectives. Education in the Knowledge Society, 25, Article e31942. https://doi.org/10.14201/eks.31942
García-Peñalvo, F. J. (2024b). Mito de la inteligencia. Más allá de una educación de silicio. In C. Suárez-Guerrero, J. E. Raffaghelli, & P. Rivera-Vargas (Eds.), Mitos EdTech. Desmontando el solucionismo tecnológico en educación (pp. 79–87). Editorial UOC.
García-Peñalvo, F. J., Alier, M., Pereira, J. A., & Casañ, M. J. (2024). Safe, Transparent, and Ethical Artificial Intelligence: Keys to Quality Sustainable Education (SDG4). IJERI – International Journal of Educational Research and Innovation, (22), 1–21. https://doi.org/10.46661/ijeri.11036
García-Peñalvo, F. J., Casañ-Guerrero, M. J., Alier-Forment, M., & Pereira-Valera, J. A. (2025). The ethics of generative artificial intelligence in education under debate. A perspective from the development of a theoretical-practical case study. Revista Española de Pedagogía, 83(291), 281–293. https://doi.org/10.22550/2174-0909.4577
García-Peñalvo, F. J., Llorens-Largo, F., & Vidal, J. (2024). The new reality of education in the face of advances in generative artificial intelligence. RIED: revista iberoamericana de educación a distancia, 27(1), 9–39. https://doi.org/10.5944/ried.27.1.37716
García-Peñalvo, F. J., & Vázquez-Ingelmo, A. (2023). What do we mean by GenAI? A systematic mapping of the evolution, trends, and techniques involved in Generative AI. International Journal of Interactive Multimedia and Artificial Intelligence, 8(4), 7–16. https://doi.org/10.9781/ijimai.2023.07.006
Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), Article 6. https://doi.org/10.3390/soc15010006
Gibney, E. (2025). Can researchers stop AI making up citations? Nature, 645, 569–570. https://doi.org/10.1038/d41586-025-02853-8
Glynn, A. (2025). Guarding against artificial intelligence-hallucinated citations: the case for full-text reference deposit. European Science Editing, 51, Article e153973. https://doi.org/10.3897/ese.2025.e153973
Google. (2025). Google Environmental Report 2025. Google. https://d66z.short.gy/uxN9Eu.
Hayes, J., Swanberg, M., Chaudhari, H., Yona, I., Shumailov, I., Nasr, M., Choquette-Choo, C. A., Lee, K., & Cooper, A. F. (2025). Measuring memorization in language models via probabilistic extraction. In L. Chiruzzo, A. Ritter, & L. Wang (Eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (Albuquerque, New Mexico, April 29 - May 4, 2025) (pp. 9266–9291). Association for Computational Linguistics. https://doi.org/10.18653/v1/2025.naacl-long.469
Huang, J., & Chang, K. (2024). Citation: A Key to Building Responsible and Accountable Large Language Models. In K. Duh, H. Gomez, & S. Bethard (Eds.), Findings of the Association for Computational Linguistics: NAACL 2024 (Mexico City, Mexico, June 16–21, 2024) (pp. 464–473). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-naacl.31
Jegham, N., Abdelatti, M., Elmoubarki, L., & Hendawi, A. (2025). How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference. arXiv, Article arXiv:2505.09598v4. https://doi.org/10.48550/arXiv.2505.09598
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, 55(12), Article 248. https://doi.org/10.1145/3571730
Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8. https://doi.org/10.1016/j.caeai.2024.100348
Joint Council for Qualifications. (2025). AI use in assessments: Your role in protecting the integrity of qualifications (Revision two). Joint Council for Qualifications. https://d66z.short.gy/G2eDjK.
Jovanović, M., & Campbell, M. (2022). Generative Artificial Intelligence: Trends and Prospects. Computer, 55(10), 107–112. https://doi.org/10.1109/MC.2022.3192720
Kassorla, M., Georgieva, M., & Papini, A. (2024). AI Literacy in Teaching and Learning: A Durable Framework for Higher Education. Educause. https://d66z.short.gy/bPhL3A.
Kenthapadi, K., Sameki, M., & Taly, A. (2024). Grounding and Evaluation for Large Language Models: Practical Challenges and Lessons Learned (Survey). In KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (Barcelona, Spain, August 25 - 29, 2024) (pp. 6523–6533). Association for Computing Machinery. https://doi.org/10.1145/3637528.3671467
Knoth, N., Tolzin, A., Janson, A., & Leimeister, J. M. (2024). AI literacy and its implications for prompt engineering strategies. Computers and Education: Artificial Intelligence, 6, Article 100225. https://doi.org/10.1016/j.caeai.2024.100225
Kotha, A., Lee, J., & Zakariasson, E. (2025, August 7). GPT-5 prompting guide. OpenAI Cookbook. https://d66z.short.gy/CaAOnG
Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., Lekkas, D., & Palmer, E. (2024). The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Computers and Education: Artificial Intelligence, 6, Article 100221. https://doi.org/10.1016/j.caeai.2024.100221
Lee, D., & Palmer, E. (2025). Prompt engineering in higher education: a systematic review to help inform curricula. International Journal of Educational Technology in Higher Education, 22(1), Article 7. https://doi.org/10.1186/s41239-025-00503-7
Li, P., Yang, J., Islam, M. A., & Ren, S. (2025). Making AI Less 'Thirsty'. Communications of the ACM, 68(7), 54–61. https://doi.org/10.1145/3724499
Liu, X., Sun, T., Xu, T., Wu, F., Wang, C., Wang, X., & Gao, J. (2024). SHIELD: Evaluation and Defense Strategies for Copyright Compliance in LLM Text Generation. In Y. Al-Onaizan, M. Bansal, & Y.-N. Chen (Eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (Miami, Florida, USA, November 12-16, 2024) (pp. 1640–1670). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.emnlp-main.98
Molina-Carmona, R., & García-Peñalvo, F. J. (2025). Safeguarding Knowledge: Ethical Artificial Intelligence Governance in the University Digital Transformation. In E. Vendrell Vidal, U. R. Cukierman, & M. E. Auer (Eds.), Advanced Technologies and the University of the Future (pp. 201–220). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-71530-3_14
Mueller, F. B., Görge, R., Bernzen, A. K., Pirk, J. C., & Poretschkin, M. (2024). LLMs and Memorization: On Quality and Specificity of Copyright Compliance. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 984–996. https://doi.org/10.1609/aies.v7i1.31697
Nam, B. H., & Bai, Q. (2023). ChatGPT and its ethical implications for STEM research and higher education: a media discourse analysis. International Journal of STEM Education, 10(1), Article 66. https://doi.org/10.1186/s40594-023-00452-5
Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., & Mian, A. (2025). A Comprehensive Overview of Large Language Models. ACM Transactions on Intelligent Systems and Technology, 16(5), Article 106. https://doi.org/10.1145/3744746
Nerantzi, C., Abegglen, S., Karatsiori, M., & Martínez-Arboleda, A. (Eds.). (2023). 101 creative ideas to use AI in education, A crowdsourced collection. Zenodo. https://doi.org/10.5281/zenodo.8355454.
Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education, 49(5), 847–864. https://doi.org/10.1080/03075079.2024.2323593
Office of Qualifications and Examinations Regulation. (2024, April 24). Ofqual’s approach to regulating the use of artificial intelligence in the qualifications sector. Office of Qualifications and Examinations Regulation. https://d66z.short.gy/WLoJbW
Office of Qualifications and Examinations Regulation. (2025, May 1). Ofqual strategy 2025 to 2028. Office of Qualifications and Examinations Regulation. https://d66z.short.gy/8T7iPk
OpenAI. (2025, August 7). Introducing GPT-5. OpenAI. https://d66z.short.gy/hJeA79
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, & A. Oh (Eds.), NIPS'22: Proceedings of the 36th International Conference on Neural Information Processing Systems (New Orleans, LA, USA, 28 November - 9 December 2022) (pp. 27730–27744). Curran Associates Inc.
Perković, G., Drobnjak, A., & Botički, I. (2024). Hallucinations in LLMs: Understanding and Addressing Challenges. In 2024 47th MIPRO ICT and Electronics Convention (MIPRO) (Opatija, Croatia, 20-24 May 2024) (pp. 2084–2088). IEEE. https://doi.org/10.1109/MIPRO60963.2024.10569238
Peters, U., & Chin-Yee, B. (2025). Generalization bias in large language model summarization of scientific research. Royal Society Open Science, 12, Article 241776. https://doi.org/10.1098/rsos.241776
Qiao, H., Bhardwaj, E., Landau, V. G. D., Bonfils, N., Iqbal, M., Jaworsky, O., Munson, R. O. A., Rubisova, L., Smith, N. M., Thapa, A., & Becker, C. (2025). Are You Thirsty? So is Your AI. In COMPASS '25: Proceedings of the ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies (Toronto, Canada, July 22 - 25, 2025) (pp. 811–816). Association for Computing Machinery. https://doi.org/10.1145/3715335.3736308
Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002
Romeo, G., & Conti, D. (2025). Exploring automation bias in human–AI collaboration: a review and implications for explainable AI. AI & SOCIETY. https://doi.org/10.1007/s00146-025-02422-7
Roxas, R. E. (2024). Large Language Models and Natural Language Processing On Minority Languages: A Systematic Review. In N. Oco, S. N. Dita, A. M. Borlongan, & J.-B. Kim (Eds.), Proceedings of the 38th Pacific Asia Conference on Language, Information and Computation (Tokyo, Japan, 7-9 December 2024) (pp. 1–8). Institute for the Study of Language and Information (ISLI).
Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2025). Can AI-Generated Text be Reliably Detected? arXiv, Article arXiv:2303.11156v4. https://doi.org/10.48550/arXiv.2303.11156
Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2020). Green AI. Communications of the ACM, 63(12), 54–63. https://doi.org/10.1145/3381831
Shao, A. (2025). New sources of inaccuracy? A conceptual framework for studying AI hallucinations. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-182
Sozon, M., Parnther, C., Wei Lun, W., & Chowdhury, M. A. (2025). Generative AI in higher education: navigating benefits and challenges in the technological era. Journal of Applied Research in Higher Education. https://doi.org/10.1108/JARHE-02-2025-0103
Torres, N., Ulloa, C., Araya, I., Ayala, M., & Jara, S. (2025). A comprehensive analysis of gender, racial, and prompt-induced biases in large language models. International Journal of Data Science and Analytics, 20(4), 3797–3834. https://doi.org/10.1007/s41060-024-00696-6
Towhidul Islam Tonmoy, S. M., Mehedi Zaman, S. M., Jain, V., Rani, A., Rawte, V., Chadha, A., & Das, A. (2024). A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models. arXiv, Article arXiv:2401.01313v3. https://doi.org/10.48550/arXiv.2401.01313
UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. https://d66z.short.gy/SBxqSb
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA (pp. 5998–6008).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2023). Attention is all you need. arXiv, Article arXiv:1706.03762v7. https://doi.org/10.48550/arXiv.1706.03762
Veldhuis, A., Lo, P. Y., Kenny, S., & Antle, A. N. (2025). Critical Artificial Intelligence literacy: A scoping review and framework synthesis. International Journal of Child-Computer Interaction, 43, Article 100708. https://doi.org/10.1016/j.ijcci.2024.100708
Vivas Urias, M. D., & Ruiz Rosillo, M. A. (Eds.). (2025). Inteligencia artificial generativa. Buenas prácticas docentes en educación superior. Octaedro.
Walker, S. (2025). Trends in assessment in higher education: considerations for policy and practice. Jisc. https://d66z.short.gy/ZMRzML.
Wang, C., Fogle, E., & Urban, A. (2024). AI-powered viva exams: advancing academic integrity in online education. In Proceedings of the 17th annual International Conference of Education, Research and Innovation - ICERI 2024 (Seville, Spain, 11-13 November 2024) (pp. 5673–5678). IATED. https://doi.org/10.21125/iceri.2024.1379
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), Article 26. https://doi.org/10.1007/s40979-023-00146-z
Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., & Le, Q. V. (2022). Finetuned Language Models Are Zero-Shot Learners. arXiv, Article arXiv:2109.01652v5. https://doi.org/10.48550/arXiv.2109.01652
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., Isaac, W., Legassick, S., Irving, G., & Gabriel, I. (2021). Ethical and social risks of harm from Language Models. arXiv, Article arXiv:2112.04359v1. https://doi.org/10.48550/arXiv.2112.04359
Xu, Y., Hu, L., Zhao, J., Qiu, Z., Xu, K., Ye, Y., & Gu, H. (2025). A survey on multilingual large language models: corpora, alignment, and bias. Frontiers of Computer Science, 19(11), Article 1911362. https://doi.org/10.1007/s11704-024-40579-4
Yang, Y., Zhang, Y., Sun, D., He, W., & Wei, Y. (2025). Navigating the landscape of AI literacy education: insights from a decade of research (2014–2024). Humanities and Social Sciences Communications, 12(1), Article 374. https://doi.org/10.1057/s41599-025-04583-8
Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learning Environments, 11(1), Article 28. https://doi.org/10.1186/s40561-024-00316-7
Zhao, P., Zhang, H., Yu, Q., Wang, Z., Geng, Y., Fu, F., Yang, L., Zhang, W., Jiang, J., & Cui, B. (2024). Retrieval-Augmented Generation for AI-Generated Content: A Survey. arXiv, Article arXiv:2402.19473v6. https://doi.org/10.48550/arXiv.2402.19473
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., Liu, P., Nie, J.-Y., & Wen, J.-R. (2025). A Survey of Large Language Models. arXiv, Article arXiv:2303.18223v16. https://doi.org/10.48550/arXiv.2303.18223