MENTOR

Revista de Investigación Educativa y Deportiva

 

 

 

 

Volumen 5, Número 13, 2026

 


 

 

 

Director: Ph.D. Richar Posso Pacheco

Email: rjposso@revistamentor.ec

Web: https://revistamentor.ec/

 

 

Editora en Jefe: Ph.D. Susana Paz Viteri

Coordinador Editorial: Ph.D. (c) Josue Marcillo Ñacato

Coordinadora Comité Científico: Ph.D. Laura Barba Miranda

Coordinadora Comité de Editores: MSc. María Gladys Cóndor Chicaiza

Coordinador del Consejo de Revisores: Ph.D. Javier Fernández-Rio


Editorial

Prompt-augmented inquiry cycle: A methodological proposal for the integration of artificial intelligence

 

Ciclo de indagación aumentado por prompts: Propuesta metodológica para la integración de la inteligencia artificial

 

 

 

Richar Jacobo Posso-Pacheco1

ORCID: https://orcid.org/0000-0003-1279-9852

 

 

 

1 Director of MENTOR: Journal of Educational and Sports Research. Quito, Ecuador

 

 

 

 

 

 

Corresponding author

rjposso@revistamentor.ec

 

 

 

 

 

 

 


 

Abstract

The integration of generative artificial intelligence in education has reshaped teaching, assessment, and learning practices across educational levels, prompting growing interest in understanding its role as a pedagogical mediator. The objective of this study was to propose a human-centred teaching methodology that articulates technological innovation with established learning theories and ethical principles. The study employed a qualitative documentary review of academic literature indexed in Scopus and ERIC, which was analytically examined to identify theoretical contributions related to artificial intelligence, prompt use, formative feedback, and self-regulated learning in education. The results indicate that artificial intelligence in education is commonly applied in a fragmented manner, focusing on personalization, feedback, or prompt use without methodological integration. In response to this gap, the Prompt-Augmented Inquiry Cycle was developed as a pedagogical framework that organizes inquiry, guided prompt use, formative feedback, and reflection into a coherent learning sequence. This study contributes to the field of educational technology by highlighting the need for integrated pedagogical frameworks for the use of artificial intelligence.

Keywords: Artificial intelligence, pedagogical methodology, inquiry, self-regulated learning.

 

Resumen

La integración de la inteligencia artificial generativa en la educación ha modificado prácticas de enseñanza, evaluación y aprendizaje en los distintos niveles educativos, lo que ha impulsado el interés por comprender su papel como mediador pedagógico. El objetivo de esta investigación fue proponer una metodología de enseñanza centrada en el ser humano que articule la innovación tecnológica con teorías del aprendizaje consolidadas y principios éticos. El estudio utilizó una revisión documental cualitativa de literatura académica indexada en Scopus y ERIC, la cual fue examinada analíticamente para identificar aportes teóricos sobre inteligencia artificial, uso de prompts, retroalimentación formativa y aprendizaje autorregulado en educación. Los resultados muestran que la inteligencia artificial en educación se aplica de forma fragmentada, centrada en personalización, retroalimentación o uso de prompts sin integración metodológica. A partir de este vacío, se desarrolló el Prompt-Augmented Inquiry Cycle como marco pedagógico que organiza la indagación, el uso guiado de prompts, la retroalimentación formativa y la reflexión en una secuencia coherente de aprendizaje. El estudio aporta al campo de la tecnología educativa al evidenciar la necesidad de marcos pedagógicos integrados para el uso de inteligencia artificial.

Palabras clave: Inteligencia artificial, metodología pedagógica, indagación, aprendizaje autorregulado.

 

 

Introduction

The accelerated integration of artificial intelligence (AI), particularly generative AI and large language models (LLMs), has triggered one of the most profound transformations in contemporary education. Since 2023, educational research has shifted from predominantly speculative discussions toward empirical examinations of how generative AI is reshaping teaching practices, assessment models, student agency, and institutional policies. By 2025, this body of research has expanded substantially, positioning AI not only as a technological tool but as a pedagogical mediator capable of influencing learning processes, feedback quality, and self-regulated learning outcomes across educational levels (Wang et al., 2024; Xiaoyu et al., 2025). This rapid expansion has generated both opportunities and challenges, thus demanding theoretically grounded methodologies that integrate AI responsibly into instructional design.

One of the most prominent research trends identified in recent systematic reviews is the growing use of generative AI to support learning personalization, formative feedback, and adaptive instruction (Du Plooy et al., 2024; Hariyanto et al., 2025). Studies consistently report that AI-assisted systems can enhance student engagement, provide timely feedback, and support differentiated learning pathways. Nevertheless, the literature also reveals marked variability in learning outcomes, frequently attributable to unstructured implementation, superficial adoption, or the absence of pedagogical frameworks guiding AI use (Tlili et al., 2023). These findings underscore that technological availability alone does not guarantee educational improvement; rather, pedagogical intentionality remains the determining factor.

A critical concern emerging from the literature is the tendency to conceptualize generative AI primarily as a content-generation tool rather than as a structured pedagogical assistant. Several studies warn that unguided AI use may weaken higher-order thinking, reduce epistemic effort, and exacerbate academic integrity challenges if not embedded within carefully designed instructional processes (Yusuf et al., 2024). Consequently, recent research increasingly emphasizes the need to redesign teaching methodologies, assessment practices, and learning activities to ensure that AI use promotes cognitive engagement, reflection, and learner autonomy rather than passive information consumption.

Within this context, prompt engineering has emerged as a key educational construct. Initially associated with the technical optimization of AI outputs, prompt engineering is now recognized as a transferable cognitive skill that directly influences the quality of reasoning, feedback, and knowledge construction mediated by generative AI (Federiakin et al., 2024). Systematic reviews in higher education show that students who receive explicit instruction in prompt design demonstrate improved analytical reasoning, greater argumentative clarity, and more meaningful interactions with AI systems (Lee & Palmer, 2025). These findings suggest that prompt engineering can function as a form of metacognitive scaffolding, guiding learners to articulate goals, constraints, and evaluation criteria with greater precision.

Another converging line of research focuses on AI-supported formative feedback. Evidence indicates that generative AI can deliver immediate, scalable, and criteria-aligned feedback, provided that prompts are carefully structured and contextualized (Jacobsen & Weber, 2025; Posso Pacheco, Arévalo Espinoza, et al., 2025). However, studies also caution that AI-generated feedback varies considerably in terms of accuracy and pedagogical usefulness, reinforcing the need for teacher supervision and human-in-the-loop designs. Systematic analyses of learning analytics further demonstrate that feedback effectiveness increases when it is embedded within reflective cycles that encourage students to revise, justify, and self-evaluate their work (Banihashem et al., 2022).

In parallel, the literature on human-centred learning analytics has gained increasing relevance. Researchers argue that data-driven educational systems must prioritize learner agency, transparency, and ethical responsibility, particularly when AI systems influence instructional decisions (Alfredo et al., 2024). This perspective challenges technocentric models by advocating for analytics that support dialogue, reflection, and shared decision-making between teachers and students rather than automated judgment or surveillance. Ethical considerations related to privacy, consent, and data minimization further reinforce the need for pedagogical models that integrate analytics as supportive, rather than deterministic, elements of the learning process (Posso Ayala et al., 2024; Slade & Prinsloo, 2013).

Despite these advances, the literature reveals a critical gap: existing studies tend to examine AI-supported personalization, prompt engineering, feedback, learning analytics, and self-regulated learning as isolated components rather than as interconnected elements of a coherent teaching methodology. While systematic reviews provide valuable insights into each dimension independently (Hariyanto et al., 2025), conceptual integration explaining how these components can operate synergistically within educational practice remains limited. This fragmentation constrains the development of transferable pedagogical models that educators can adopt across disciplines and educational levels.

Self-regulated learning (SRL) offers an integrative theoretical framework to address this gap. Decades of research demonstrate that effective learning depends on learners’ ability to plan, monitor, and evaluate their cognitive and motivational processes (Zimmerman, 2002). Recent studies suggest that generative AI, when embedded within structured instructional cycles, can support SRL by externalizing planning processes, promoting reflection, and facilitating iterative improvement (Posso Pacheco, Caicedo-Quiroz, et al., 2025). However, these benefits are contingent upon pedagogical designs that position AI as a facilitator of metacognition rather than a substitute for learner effort.

In response to these converging findings, the present study proposes a novel teaching methodology derived from an integrative documentary review: the Human-centred Prompt-Augmented Inquiry Cycle (H-PAI Cycle). This methodology synthesizes evidence from research on generative AI in education, prompt engineering, formative feedback, learning analytics, and self-regulated learning to construct a coherent instructional framework. Rather than introducing new technologies, the H-PAI Cycle reorganizes existing practices into a structured pedagogical process that emphasizes inquiry-based learning, explicit prompt design, feedback-driven revision, ethical use of analytics, and reflective regulation of learning.

Therefore, the objective of this research was to propose a comprehensive, human-centred teaching methodology that articulates technological innovation with established learning theories and ethical principles.

 

Methodology

This study was conducted through a qualitative documentary review aimed at analyzing academic literature related to the pedagogical integration of artificial intelligence in education. The purpose of the review was to identify theoretical approaches, recurring concepts, and relevant contributions that could support the design of a methodological proposal. The review sought neither exhaustiveness nor an evaluation of empirical effects.

The search and selection of documents were carried out using the Scopus and ERIC databases, due to their relevance in the field of educational research. Peer-reviewed journal articles, theoretical studies, and narrative reviews published between 2022 and 2025 were considered, provided they addressed educational uses of artificial intelligence, prompt engineering, formative feedback, or self-regulated learning. Document selection was based on thematic relevance and direct alignment with the object of study.

Data analysis was conducted through an analytical reading of the selected documents. In an initial stage, titles and abstracts were reviewed to identify works related to the topic. This was followed by a full-text reading, considering the pedagogical focus, conceptual clarity, and theoretical elements contributing to the understanding of artificial intelligence in teaching and learning processes. Relevant information was organized using a documentary analysis matrix, which facilitated the grouping of contributions into general categories related to the role of artificial intelligence in education, pedagogical use of prompts, feedback strategies, connections with self-regulated learning, and ethical considerations addressed in the studies.

The analysis indicated that the literature frequently addresses learning personalization, formative feedback, and self-regulated learning; however, these elements are commonly presented separately, without explicit methodological integration. Based on this observation, the identified theoretical contributions were organized into an integrated methodological proposal that connects these components within a single instructional framework.

As a result of the documentary review and the synthesis of the analyzed information, the proposal named Prompt-Augmented Inquiry Cycle was developed. This proposal is conceived as a pedagogical cycle that articulates inquiry-based learning, the pedagogical use of prompts, formative feedback, and reflection on learning. Table 1 summarizes the review process and its relationship to the construction of the methodological proposal.

Table 1

Documentary review process and its relationship to the proposal

Stage | Description | Relationship to the proposal
Thematic delimitation | Identification of concepts related to AI and education | Definition of proposal axes
Documentary review | Analysis of relevant literature in Scopus and ERIC | Theoretical grounding of the model
Conceptual organization | Grouping of common theoretical contributions | Identification of methodological components
Synthesis | Integration of analyzed elements | Design of the Prompt-Augmented Inquiry Cycle

As this study is based on a documentary review, no human participants were involved and no primary data were collected. The research followed academic rigor criteria, appropriate use and citation of sources, and a critical and responsible approach to the analysis of literature on artificial intelligence in education.

 

Results

The analysis of the reviewed literature made it possible to identify how artificial intelligence is being incorporated into educational contexts and which pedagogical approaches accompany its integration. Overall, the studies agree that the presence of artificial intelligence in education has increased in recent years, particularly through tools based on language models. However, its use is mainly described as a support resource for specific activities rather than as part of a structured teaching methodology.

A relevant finding is that learning personalization appears frequently in the literature as one of the main potentials associated with the educational use of artificial intelligence. Several studies indicate that these technologies can adapt to different learning paces, needs, and profiles. Nevertheless, such personalization is usually addressed from a functional, tool-centered perspective, rather than as an element integrated into a clearly defined pedagogical design.

Similarly, AI-assisted formative feedback is frequently mentioned as a relevant application in teaching and learning processes. The reviewed studies highlight its usefulness for providing responses during academic activities. At the same time, the literature indicates that the quality and relevance of this feedback depend on pedagogical criteria and teacher mediation, which limits its autonomous use as an educational strategy.

Regarding prompt engineering, the reviewed studies acknowledge its influence on interactions between students and artificial intelligence systems. Despite this recognition, prompt formulation is rarely presented as an explicit instructional strategy. Instead, it is commonly treated as an implicit technical skill associated with tool use, without a direct connection to learning objectives, assessment processes, or defined teaching strategies.

Another identified finding concerns self-regulated learning. Although several studies suggest that the use of artificial intelligence can support planning, monitoring, and reflection processes, these processes are not usually integrated intentionally within a defined instructional sequence. In most cases, self-regulated learning is described as a potential outcome of technology use rather than as a deliberately designed component of teaching methodology.

From an ethical perspective, the reviewed literature reflects concerns related to transparency, responsibility in the use of artificial intelligence, and the role of teachers in educational decision-making. However, these considerations are generally presented as broad principles or recommendations, without being systematically translated into concrete pedagogical guidelines.

Taken together, the results indicate that the literature addresses key elements such as learning personalization, formative feedback, prompt use, and self-regulated learning in a fragmented manner. The lack of explicit methodological articulation among these elements emerges as a recurring gap in the reviewed studies.

Methodological Proposal

The methodological proposal known as the Prompt-Augmented Inquiry Cycle is conceived as a didactic structure designed to organize the pedagogical use of artificial intelligence within regular teaching and learning processes. Its purpose is to provide teachers with an operational framework that allows artificial intelligence to be integrated in a deliberate and controlled manner into existing curricular activities, without introducing parallel methodologies or altering the fundamental roles of the classroom.

The proposal is based on the assumption that artificial intelligence does not constitute a teaching methodology in itself, but rather a support resource that can intervene in specific phases of learning when its use is clearly defined. From this perspective, the Prompt-Augmented Inquiry Cycle organizes teacher and student activity into a sequence that makes it possible to determine when artificial intelligence is used, for what purpose, and under which pedagogical criteria, thus avoiding indiscriminate or decontextualized use.

From a practical standpoint, the cycle is structured around four interrelated moments that can be implemented within a single lesson, a short instructional sequence, or a learning unit, depending on the educational level and the nature of the activity. Each moment fulfills a specific function within the learning process and clearly defines the role of the teacher, the student, and artificial intelligence.

The first moment corresponds to inquiry, which serves as the starting point of the cycle. In this phase, the teacher presents a problem situation, a guiding question, or an open-ended task related to the curricular content. The purpose of this moment is to activate prior knowledge, delimit the learning problem, and clarify the objectives of the activity. Artificial intelligence is not used at this stage, as the emphasis is placed on initial exploration, question formulation, and the generation of preliminary ideas by students. The teacher guides the process by establishing clear criteria regarding expected outcomes and evidence of learning.

The second moment focuses on the pedagogical use of prompts, understood as an intentional mediation between the student and artificial intelligence. In this phase, the teacher introduces AI as a support tool for exploring information, organizing ideas, contrasting perspectives, or drafting initial versions of a product. Prompts are not used freely or automatically; instead, they are explicitly linked to the defined learning objectives. The teacher guides prompt formulation, provides examples, requests revisions when necessary, and monitors interactions to ensure that the outputs remain aligned with the task. In this way, prompt use becomes part of the formative process rather than a purely technical action.

The third moment corresponds to formative feedback, during which artificial intelligence may support the generation of preliminary comments on student work. This feedback is considered orientative and partial and does not replace teacher intervention. The teacher reviews the generated comments, selects those that are pedagogically relevant, contextualizes them, and complements them with their own observations, ensuring alignment with previously established assessment criteria. At this stage, the focus is on revising and improving student work rather than assigning grades.

The fourth moment is oriented toward reflection on learning, in which students analyze the process they followed, the decisions they made, and the adjustments carried out based on the feedback received. This reflection may take the form of short written reflections, guided questions, or self-assessment activities. The aim is to promote awareness of one’s own learning process and to strengthen self-regulated learning processes such as planning future actions and identifying effective strategies.

To support classroom implementation, Table 2 presents the moments of the cycle along with concrete actions carried out by teachers and students, as well as the intended use of artificial intelligence.

Table 2

Moments of the Prompt-Augmented Inquiry Cycle and classroom actions

Moment | Teacher action | Student action | Use of AI
Inquiry | Presents the task and defines criteria | Explores the problem and formulates initial ideas | Not used
Prompts | Guides and reviews prompt formulation | Interacts with AI in a guided manner | Support for exploration and organization
Feedback | Selects and contextualizes comments | Revises and adjusts work | Support for preliminary feedback
Reflection | Guides reflective questions | Analyzes the learning process | Not used
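For readers who wish to operationalize the sequence, the four moments of Table 2 can be expressed as an ordered data structure that records each role and whether AI use is permitted. The sketch below is purely illustrative; all identifiers (`Moment`, `HPAI_CYCLE`, `ai_allowed`) are hypothetical names introduced here, not part of the model itself.

```python
# Illustrative sketch only: the cycle's four moments encoded as an ordered
# sequence, so a tool could enforce when AI use is pedagogically permitted.
from dataclasses import dataclass


@dataclass(frozen=True)
class Moment:
    name: str
    teacher_action: str
    student_action: str
    ai_permitted: bool


# Order matters: the cycle is a sequence, not a menu of options.
HPAI_CYCLE = [
    Moment("Inquiry", "Presents the task and defines criteria",
           "Explores the problem and formulates initial ideas", ai_permitted=False),
    Moment("Prompts", "Guides and reviews prompt formulation",
           "Interacts with AI in a guided manner", ai_permitted=True),
    Moment("Feedback", "Selects and contextualizes comments",
           "Revises and adjusts work", ai_permitted=True),
    Moment("Reflection", "Guides reflective questions",
           "Analyzes the learning process", ai_permitted=False),
]


def ai_allowed(moment_name: str) -> bool:
    """Return True only if the cycle permits AI use at this moment."""
    for m in HPAI_CYCLE:
        if m.name == moment_name:
            return m.ai_permitted
    raise ValueError(f"Unknown moment: {moment_name}")
```

Encoding the cycle this way makes the model's central constraint explicit: AI intervenes only in the Prompts and Feedback moments, never during initial inquiry or final reflection.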

 

Effective application of the cycle requires that teachers define three basic conditions in advance: the learning objective, the expected evidence, and the specific moment at which artificial intelligence will be used. These conditions help maintain pedagogical control of the process and prevent AI from becoming an end in itself.

Table 3

Minimum conditions for model implementation

Element | Operational definition
Learning objective | What the student is expected to understand or produce
Learning evidence | Observable product of the learning process
Use of AI | Specific moment and purpose within the cycle
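The three minimum conditions of Table 3 can likewise be checked before any AI-supported activity begins. The following sketch is a hypothetical illustration (the function name, dictionary keys, and sample plan are assumptions introduced here): it simply refuses to treat a lesson plan as ready when any condition is undefined.

```python
# Illustrative sketch only: a lesson plan counts as ready for AI-supported
# work when the model's three minimum conditions are all defined.
def conditions_met(plan: dict) -> bool:
    """True only if objective, evidence, and the AI moment are all specified."""
    required = ("learning_objective", "learning_evidence", "ai_moment")
    return all(plan.get(key) for key in required)


# Hypothetical example of a minimally specified plan.
plan = {
    "learning_objective": "Explain the water cycle",
    "learning_evidence": "Annotated diagram of the process",
    "ai_moment": "Prompts",  # the only cycle moment where AI will be used
}
```

A check of this kind operationalizes the claim that these conditions "help maintain pedagogical control of the process": an incomplete plan yields `False`, signalling that AI use would be premature.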

 

The model also incorporates examples of prompts with pedagogical purposes, intended to support learning without replacing student reasoning.

Table 4

Examples of prompts with pedagogical purposes

Purpose | Example prompt
Exploration | “Explain this concept considering the specified educational level”
Organization | “Organize this information into main and secondary ideas”
Review | “Identify conceptual gaps or inconsistencies in this text”
Improvement | “Suggest improvements while preserving the original ideas”
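As a purely illustrative sketch, the pedagogical prompts of Table 4 can be held as templates keyed by purpose, so that prompt formulation stays tied to the task rather than improvised. The template wording extends the table's examples with placeholders, and all names (`TEMPLATES`, `build_prompt`) are hypothetical assumptions, not prescriptions of the model.

```python
# Illustrative sketch only: pedagogical prompt templates keyed by purpose,
# following Table 4. Placeholders keep prompts tied to the current task.
TEMPLATES = {
    "exploration": "Explain {concept} considering the specified educational level: {level}.",
    "organization": "Organize this information into main and secondary ideas:\n{text}",
    "review": "Identify conceptual gaps or inconsistencies in this text:\n{text}",
    "improvement": "Suggest improvements while preserving the original ideas:\n{text}",
}


def build_prompt(purpose: str, **fields: str) -> str:
    """Fill the template for a pedagogical purpose; unknown purposes raise KeyError."""
    return TEMPLATES[purpose].format(**fields)
```

For instance, `build_prompt("exploration", concept="photosynthesis", level="primary education")` yields a task-specific prompt, keeping the teacher's choice of purpose explicit rather than leaving prompt design implicit in tool use.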

Each moment of the cycle generates learning evidence that allows teachers to monitor the process without modifying existing institutional assessment systems.

 

Table 5

Learning evidence by cycle moment

Moment | Evidence
Inquiry | Initial questions, preliminary outlines
Prompts | Drafts, conceptual reorganizations
Feedback | Revised version of the product
Reflection | Self-assessment or reflective writing

 

From an assessment perspective, the Prompt-Augmented Inquiry Cycle focuses on student learning rather than on the use of artificial intelligence itself. Assessment considers the final product, the improvement process, and the reflective component, while artificial intelligence remains a support resource rather than an object of evaluation.

Table 6

Assessment focus of the model

Aspect | Criterion
Product | Coherence, content, and achievement of objectives
Process | Revisions made based on feedback
Reflection | Ability to analyze one’s own learning

 

The proposal is flexible and can be adapted to different educational levels and subject areas, provided that the sequence of the cycle and the active role of the teacher as learning mediator are maintained. It does not require specific technological tools or structural changes to curricular planning, as it can be integrated into existing classroom practices.

Overall, the Prompt-Augmented Inquiry Cycle constitutes an applicable methodology that makes it possible to clearly visualize how artificial intelligence can be pedagogically incorporated into the classroom without displacing the teacher’s role or reducing learning to automatic content generation. Its contribution lies in organizing practices already present in the literature and in teaching experience into a clear, operational, and transferable instructional structure.

 

Discussion

The results of this study reveal a consistent pattern in recent educational research on artificial intelligence: while multiple pedagogical dimensions are widely addressed, their treatment remains largely fragmented. Learning personalization, formative feedback, prompt use, and self-regulated learning are frequently discussed as independent constructs rather than as interconnected components of instructional practice. This fragmentation suggests that current research has prioritized the identification of AI-related affordances over the articulation of coherent pedagogical structures capable of integrating them meaningfully.

One of the most notable implications of these findings concerns the pedagogical positioning of artificial intelligence in educational contexts. The reviewed literature tends to frame AI primarily as a functional support tool, applied to specific instructional tasks such as content generation or feedback provision. Although these applications are valuable, their isolated implementation limits their pedagogical potential and contributes to variability in educational outcomes reported across studies. This reinforces the idea that the effectiveness of AI in education depends less on the technology itself and more on how it is embedded within instructional design.

The findings related to prompt engineering further highlight this issue. While prompt formulation is increasingly recognized as influential in shaping AI-mediated interactions, it is rarely conceptualized as an instructional practice. Instead, it is often treated as an implicit technical skill, disconnected from learning objectives and assessment processes. This disconnect limits the pedagogical value of prompt use and underscores the need to reposition it within teaching and learning frameworks rather than viewing it solely as a tool-specific competence.

Similarly, formative feedback supported by artificial intelligence is widely acknowledged as a promising application, yet the literature emphasizes its dependence on contextualization and teacher mediation. AI-generated feedback varies in relevance and accuracy, which constrains its autonomous pedagogical use. The results suggest that feedback becomes educationally meaningful when it is integrated into cycles of revision and reflection, rather than delivered as a standalone response. This finding aligns with broader research on formative assessment, which emphasizes feedback as a process rather than an isolated event.

The analysis of self-regulated learning reveals a comparable pattern. Although AI tools are often associated with supporting planning, monitoring, and reflection, these processes are seldom embedded intentionally within instructional sequences. Self-regulated learning is more frequently presented as a potential outcome of technology use than as a pedagogical dimension deliberately designed into teaching practice. This observation highlights a gap between theoretical recognition and practical implementation in the literature.

Ethical considerations emerge as a cross-cutting concern throughout the reviewed studies. Issues related to transparency, responsibility, and teacher agency are consistently acknowledged; however, they are commonly framed as general principles rather than operational elements within pedagogical models. This indicates that ethical discourse around AI in education has advanced conceptually, but its translation into instructional design remains limited.

Taken together, these findings suggest that the central challenge in current AI-in-education research is not the lack of technological capability, but the absence of pedagogically integrated frameworks that align AI use with established learning processes. The literature demonstrates a clear need for approaches that move beyond tool-centered adoption and instead focus on structuring instructional practices in ways that preserve pedagogical coherence, teacher agency, and learner engagement.

From this perspective, the contribution of this study lies in addressing an identified gap rather than introducing a technological innovation. By responding to the fragmentation observed in the literature, the proposed methodological approach represents an effort to organize existing pedagogical elements into a coherent instructional logic. This positioning aligns with calls in recent research for human-centred, theory-informed approaches to artificial intelligence in education that prioritize instructional design over technological novelty.

It is important to recognize the scope and limitations of the present discussion. As a study based on documentary analysis, the interpretations offered here are grounded in existing literature rather than empirical classroom data. Consequently, the implications discussed should be understood as conceptual and pedagogical rather than evidential claims of effectiveness. Further empirical research will be necessary to examine how integrated methodological approaches are enacted in practice and how they influence teaching and learning processes across diverse educational contexts.

 

Conclusion

This study contributes to the field of education and educational technology by demonstrating that the primary challenge in integrating artificial intelligence into educational contexts lies not in the availability of tools, but in the lack of pedagogical frameworks that coherently align AI use with teaching and learning processes. This finding supports the need to shift the focus from technocentric approaches toward methodological proposals that prioritize instructional design and pedagogical intentionality.

From a methodological perspective, this work offers a conceptual articulation that integrates dimensions widely discussed in the literature, such as inquiry, prompt engineering, formative feedback, and self-regulated learning, within a unified pedagogical logic. This contribution is situated within the domain of instructional design and research on artificial intelligence in education, as it provides a methodological organization that responds to the fragmentation identified in prior studies.

This study responsibly delineates the scope of its contribution by proposing a theoretically grounded methodology that requires empirical validation across diverse educational contexts. In this regard, it opens avenues for future research focused on examining classroom implementation, contextual adaptation across educational levels, and the potential of such approaches to support formative learning processes without displacing the pedagogical role of the teacher or reducing learning to task automation.

 

References

Alfredo, R., Echeverria, V., Jin, Y., Yan, L., Swiecki, Z., Gašević, D., & Martinez-Maldonado, R. (2024). Human-centred learning analytics and AI in education: A systematic literature review. Computers and Education: Artificial Intelligence, 6, 100215. https://doi.org/10.1016/j.caeai.2024.100215

Banihashem, S. K., Noroozi, O., Van Ginkel, S., Macfadyen, L. P., & Biemans, H. J. A. (2022). A systematic review of the role of learning analytics in enhancing feedback practices in higher education. Educational Research Review, 37, 100489. https://doi.org/10.1016/j.edurev.2022.100489

Du Plooy, E., Casteleijn, D., & Franzsen, D. (2024). Personalized adaptive learning in higher education: A scoping review of key characteristics and impact on academic performance and engagement. Heliyon, 10(21), e39630. https://doi.org/10.1016/j.heliyon.2024.e39630

Federiakin, D., Molerov, D., Zlatkin-Troitschanskaia, O., & Maur, A. (2024). Prompt engineering as a new 21st century skill. Frontiers in Education, 9, 1366434. https://doi.org/10.3389/feduc.2024.1366434

Hariyanto, Kristianingsih, F. X. D., & Maharani, R. (2025). Artificial intelligence in adaptive education: A systematic review of techniques for personalized learning. Discover Education, 4(1), 458. https://doi.org/10.1007/s44217-025-00908-6

Jacobsen, L. J., & Weber, K. E. (2025). The promises and pitfalls of large language models as feedback providers: A study of prompt engineering and the quality of AI-driven feedback. AI, 6(2), 35. https://doi.org/10.3390/ai6020035

Lee, D., & Palmer, E. (2025). Prompt engineering in higher education: A systematic review to help inform curricula. International Journal of Educational Technology in Higher Education, 22(1), 7. https://doi.org/10.1186/s41239-025-00503-7

Posso Ayala, D., Posso Pacheco, R., Simba Pozo, A., & Simba Pozo, S. (2024). Perspectivas de la bioética en la práctica deportiva: Un análisis integral. Podium. Revista de Ciencia y Tecnología en la Cultura Física, 19(1), 1-15. http://scielo.sld.cu/scielo.php?script=sci_arttext&pid=S1996-24522024000100026&lng=pt&nrm=iso

Posso Pacheco, R. J., Arévalo Espinoza, O. M., & Chicaiza Rengel, V. M. (2025). Impacto del ChatGPT en la planificación microcurricular para docentes de Educación General Básica. MENTOR Revista de Investigación Educativa y Deportiva, 4(12), 1-16. https://doi.org/10.56200/mried.v4i12.10942

Posso Pacheco, R. J., Caicedo-Quiroz, R., Maqueira-Caraballo, G., Barzola-Monteses, J., Barba Miranda, L. C., & Amancha Gabela, J. R. (2025). Methodological Proposal for the Design and Validation of Research Instruments Supported by Artificial Intelligence. Data and Metadata, 4, 1103. https://doi.org/10.56294/dm20251103

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510-1529. https://doi.org/10.1177/0002764213479366

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15. https://doi.org/10.1186/s40561-023-00237-x

Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., & Du, Z. (2024). Artificial intelligence in education: A systematic literature review. Expert Systems with Applications, 252, 124167. https://doi.org/10.1016/j.eswa.2024.124167

Xiaoyu, W., Zainuddin, Z., & Hai Leng, C. (2025). Generative artificial intelligence in pedagogical practices: A systematic review of empirical studies (2022–2024). Cogent Education, 12(1), 2485499. https://doi.org/10.1080/2331186X.2025.2485499

Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21. https://doi.org/10.1186/s41239-024-00453-6

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64-70. https://doi.org/10.1207/s15430421tip4102_2