This meta-review examines the growing field of Artificial Intelligence in Education (AIEd), specifically in higher education (AIHEd). Think of it as taking a bird's-eye view of all the existing summaries of AIEd research to understand the big picture. The review analyzes 66 reviews of AIEd research published between 2018 and 2023 to identify key trends, research gaps, and areas for improvement. It's like creating a map of the AIEd landscape to guide future exploration.
Description: This table (Table 8 in the paper) lists the top research gaps identified in AIEd, such as ethical implications, methodological limitations, and the need for more diverse research contexts.
Relevance: It provides a roadmap for future research, highlighting the most pressing issues that need to be addressed.
Description: This figure (Fig. 5 in the paper) shows the trend of AIEd publications over time, indicating growing interest in the field.
Relevance: It provides context for the meta-review, showing how AIEd research has evolved over the past few years.
This meta-review reveals a rapidly growing yet unevenly developed field of AIEd in higher education. AI offers great potential for personalized learning and improved educational outcomes, but think of it as a powerful new tool that can be used well or badly. Addressing the identified challenges, particularly ethical concerns and methodological limitations, is crucial for realizing AI's full potential and ensuring its responsible use in higher education. It's like building a bridge to the future of education: we need strong foundations and careful planning to make sure it's safe and effective.
This abstract summarizes a meta-review of research on Artificial Intelligence in Education (AIEd), specifically in higher education (AIHEd). It highlights the rapid growth of AIEd and the importance of a strong research base. The review synthesized secondary research, primarily systematic reviews, to explore the scope and nature of AIEd research, identifying key themes, research gaps, and suggestions for future research.
The abstract clearly defines the scope of the review, focusing on AIEd in higher education, and its purpose, which is to synthesize existing research and identify gaps.
The abstract provides a concise summary of the review methodology, including the types of research synthesized and the databases used.
The abstract effectively highlights the key findings of the review, such as the focus on Adaptive Systems and Personalisation, and identifies important research gaps, like the need for greater ethical considerations.
While the abstract highlights Adaptive Systems and Personalisation, briefly noting other key AI applications would give a more complete picture of the review's scope.
Rationale: This would give readers a better understanding of the specific AI technologies being investigated in higher education.
Implementation: Include a brief mention of other prominent AI applications, such as Intelligent Tutoring Systems or Assessment and Evaluation, if they are also addressed in the review.
The abstract identifies research gaps but could briefly explain why addressing these gaps is important for the future of AIEd.
Rationale: This would highlight the significance of the review's findings and motivate further research in the identified areas.
Implementation: Add a sentence briefly explaining the potential consequences of not addressing these gaps, such as the risk of biased or ineffective AI applications in education.
The abstract mentions synthesizing secondary research but could strengthen its impact by quantifying the number of reviews included.
Rationale: This would provide a clearer indication of the comprehensiveness of the review and the breadth of the literature synthesized.
Implementation: Include the number of secondary research articles included in the review, e.g., "This review synthesized findings from X secondary research articles..."
This introduction sets the stage for a meta-review of research on Artificial Intelligence in Education (AIEd), specifically in higher education. It emphasizes the growing importance of AIEd, the need for a solid research foundation, and the timeliness of this review due to the rapid evolution of AI and increased public discourse.
The introduction effectively justifies the need for this review by highlighting the rapid growth of AIEd literature and the lack of a comprehensive overview.
The introduction clearly establishes the significance of the review by emphasizing its comprehensive nature and its role in providing a foundation for future research.
The introduction effectively contextualizes AIEd within the broader trends of AI evolution and public discourse, highlighting the relevance and timeliness of the review.
While the introduction mentions AI applications generally, providing specific examples of AI tools used in higher education would make the context more concrete.
Rationale: This would help readers unfamiliar with AIEd grasp the practical implications of the review.
Implementation: Include examples like personalized learning platforms, automated grading systems, or AI-powered chatbots for student support.
While the abstract covers the methodology, briefly mentioning the type of review (meta-review) and the data sources in the introduction would enhance clarity.
Rationale: This would reinforce the review's approach and provide context for the findings.
Implementation: Add a sentence like, "This meta-review synthesizes findings from existing reviews indexed in various databases..."
While the introduction mentions challenges generally, briefly previewing specific challenges and opportunities in AIHEd would create more interest.
Rationale: This would give readers a glimpse into the complexities of AIEd and the potential impact of the review.
Implementation: Include a sentence like, "This review explores key challenges such as ethical considerations and bias in AIEd, as well as opportunities for personalized learning and improved student support."
This section details the methods used to conduct a tertiary review (a review of reviews) of AI in higher education. It describes the search strategy, study selection process, data extraction methods, quality assessment criteria, and data synthesis approach. The goal is to provide a transparent and replicable methodology for mapping the AIEd field.
The search strategy is comprehensive, covering multiple relevant databases and platforms, which increases the likelihood of capturing a wide range of studies.
The inclusion and exclusion criteria are well-defined, ensuring that the review focuses specifically on relevant secondary research on AI in higher education.
The method section is transparent, providing details about the search strategy, study selection, data extraction, and quality assessment, allowing for replication and scrutiny.
While the database search is extensive, manually searching key journals specific to AI in education could surface relevant studies not indexed in the searched databases.
Rationale: This would ensure that important studies published in niche journals are not missed.
Implementation: Manually search the tables of contents of key journals like "Computers & Education: Artificial Intelligence" or other relevant publications.
While the exclusion of generative AI reviews is mentioned, providing a clearer rationale for this decision would strengthen the methodology.
Rationale: This would address potential concerns about the scope of the review and justify the focus on pre-generative AI research.
Implementation: Explain why generative AI represents a distinct phase of AIEd, requiring a separate review, and how this exclusion contributes to the current review's focus.
The method section mentions inductive coding for key findings and research gaps but could provide more detail on the process.
Rationale: This would enhance transparency and allow readers to understand how these themes were derived from the data.
Implementation: Describe the steps involved in the inductive coding process, such as initial open coding, axial coding to identify relationships between codes, and selective coding to develop overarching themes.
Figure 2 presents the search string used for the tertiary review. It's organized into three main search components combined with "AND": AI, Education Sector, and Evidence Synthesis. Each component lists specific keywords or phrases used in the search, separated by "OR". For example, the AI component includes terms like "artificial intelligence," "machine learning," "chat bot*", and various other related terms. The Education Sector component specifies educational levels and settings like "higher education," "college*", "K-12", and other related terms. The Evidence Synthesis component lists different types of review methodologies, such as "systematic review," "scoping review," "meta-analysis," and many others.
Text: "A search string was developed (see Fig. 2) based on the search strings from the two previous reviews"
Context: The authors explain how they developed their search string for the review, mentioning that it was based on previous reviews and focuses on AI, education settings, and evidence synthesis methods.
Relevance: This figure is crucial as it provides transparency and replicability for the review process. It shows exactly how the authors searched for relevant literature, allowing others to understand and potentially reproduce the search.
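To make the structure of Figure 2 concrete, here is a minimal Python sketch of how such a boolean search string is typically assembled: terms within each component are joined with "OR", and the three components are joined with "AND". The term lists below are illustrative excerpts taken from the figure description above, not the authors' complete keyword sets.

```python
# Sketch: assembling a three-component boolean search string.
# Term lists are illustrative excerpts, not the full sets from Fig. 2.

ai_terms = ['"artificial intelligence"', '"machine learning"', '"chat bot*"']
education_terms = ['"higher education"', 'college*', '"K-12"']
synthesis_terms = ['"systematic review"', '"scoping review"', '"meta-analysis"']

def or_group(terms):
    """Join a component's terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# The three components are combined with AND, as in Fig. 2.
search_string = " AND ".join(
    or_group(terms) for terms in (ai_terms, education_terms, synthesis_terms)
)
print(search_string)
```

Running this prints a single query line of the form ("artificial intelligence" OR "machine learning" OR "chat bot*") AND ("higher education" OR ...) AND (...), which can then be adapted to each database's particular syntax.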
Figure 3, a PRISMA flow diagram, visually represents the process of selecting studies for inclusion in the meta-review. It starts with the initial number of records identified through database searching and other sources. Then, it shows the number of records after duplicates were removed. The diagram then details the screening process, showing how many records were screened based on title and abstract, and how many were excluded at this stage with reasons for exclusion. It proceeds to full-text screening, again showing exclusions and reasons. Finally, it shows the number of studies included in the review.
Text: "The search strategy yielded 5609 items (see Fig. 3), which were exported as .ris or .txt files and imported into the evidence synthesis software EPPI Reviewer"
Context: This quote describes the initial stage of the study selection process, where the search results are imported into EPPI Reviewer software for further processing.
Relevance: This figure is essential for understanding the scope and rigor of the review. It clearly shows how many studies were considered and why some were excluded, ensuring transparency and allowing readers to assess the review's comprehensiveness.
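As a concrete illustration of the record handling this stage involves, the following is a minimal Python sketch, not the authors' actual workflow (they used EPPI Reviewer), of loading exported .ris files and removing duplicates by DOI or title before screening. It assumes the third-party rispy package, and the file names are hypothetical.

```python
# Sketch: deduplicating exported .ris records before screening.
# Not the authors' workflow (they used EPPI Reviewer); rispy is assumed.
import rispy

def load_records(paths):
    """Read one or more .ris exports into a single list of record dicts."""
    records = []
    for path in paths:
        with open(path, "r", encoding="utf-8") as f:
            records.extend(rispy.load(f))
    return records

def deduplicate(records):
    """Drop records whose DOI (or, failing that, title) was already seen."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec.get("title", "").strip().lower()
        if key and key in seen:
            continue  # duplicate record, skip it
        if key:
            seen.add(key)
        unique.append(rec)
    return unique

records = load_records(["scopus.ris", "eric.ris"])  # hypothetical file names
print(len(records), "records ->", len(deduplicate(records)), "after dedup")
```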
Table 2 outlines the criteria used to include or exclude studies from the meta-review. It's divided into two columns: 'Inclusion criteria' and 'Exclusion criteria'. The 'Inclusion criteria' column lists factors like the publication date range (January 2018 to July 18, 2023), the focus on AI applications in formal education settings, the type of publication (journal articles or conference papers), the use of secondary research with a method section, and the language (English). The 'Exclusion criteria' column lists factors like publications before January 2018, studies not about AI or not in formal education settings, specific publication types (editorials, book chapters, etc.), primary research or literature reviews without a method section, and non-English publications.
Text: "following lengthy discussion and agreement on the inclusion and exclusion criteria by all authors, two members of the team (MB and PP) double screened the first 100 items"
Context: This section describes the process of ensuring inter-rater reliability during the screening process, emphasizing the importance of agreed-upon inclusion and exclusion criteria.
Relevance: This table is crucial for understanding the scope and focus of the review. It clearly defines which studies were eligible for inclusion and why, ensuring transparency and allowing readers to assess the review's relevance to their own interests.
Figure 4 provides a table outlining the criteria used for assessing the quality of the included reviews. Each criterion is listed along with a scoring system (Yes = 1, Partly = 0.5, No = 0) and an interpretation of what each score represents. The criteria include aspects like the presence of research questions, the clarity of inclusion/exclusion criteria, the definition of publication years, the adequacy of the search strategy, the reporting of inter-rater reliability, and the provision of a data extraction coding scheme. It also assesses whether a quality assessment was conducted, if sufficient details about the included studies were provided, and if the review reflects on its limitations.
Text: "To answer sub-question 1f about the quality of AIHEd secondary research, the decision was made to use the DARE tool (Centre for Reviews and Dissemination, 1995), which has been used in previous tertiary reviews (e.g., Kitchenham et al., 2009; Tran et al., 2021)."
Context: This section of the paper discusses the quality assessment methods employed in the meta-review. It explains the rationale for choosing the DARE tool and lists the criteria used for evaluating the quality of the included reviews. The criteria are presented in a table format in Figure 4.
Relevance: This figure is crucial as it makes the review process transparent and allows readers to understand how the quality of the included studies was judged. By outlining the specific criteria and their scoring, it provides a clear framework for evaluating the rigor and reliability of the synthesized evidence. This helps establish the trustworthiness of the meta-review's findings.
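To illustrate the scoring scheme just described, here is a simplified Python sketch, assuming the Yes = 1, Partly = 0.5, No = 0 weights from Figure 4, of how a review's overall quality score out of 10 could be computed from its ten criterion ratings. The example ratings are hypothetical, and the sketch omits the N/A option that appears in the paper's quality table.

```python
# Sketch: computing a DARE-style quality score from 10 criterion ratings.
# Weights follow Fig. 4: Yes = 1, Partly = 0.5, No = 0 (N/A omitted here).
RATING_POINTS = {"yes": 1.0, "partly": 0.5, "no": 0.0}

def quality_score(ratings):
    """Sum the points across all ten criteria (score out of 10)."""
    if len(ratings) != 10:
        raise ValueError("expected ratings for all 10 criteria")
    return sum(RATING_POINTS[r.lower()] for r in ratings)

# Hypothetical review: 6 criteria fully met, 2 partly met, 2 not met.
example = ["yes"] * 6 + ["partly"] * 2 + ["no"] * 2
print(quality_score(example))  # 7.0 out of 10
```

A score computed this way for each of the 66 reviews is what yields the corpus average of 6.57 reported with Figure 6 below.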
Figure 6 is a bar chart showing the overall quality assessment of the 66 AIHEd reviews included in the meta-review. The chart categorizes the reviews into five quality levels: Critically Low, Low, Medium, High, and Excellent. The height of each bar represents the number of reviews that fall into each quality category.
Text: "The reviews were given an overall quality assessment score out of 10 (see Fig. 6), averaging 6.57 across the corpus."
Context: This part of the paper discusses the overall quality of the AIHEd reviews included in the meta-review. It mentions that each review received a score out of 10 and that the average score was 6.57. Figure 6 visually represents the distribution of these quality scores.
Relevance: This figure is important because it provides a visual summary of the overall quality of the reviews included in the meta-review. It helps readers quickly grasp the distribution of quality levels and understand the general rigor of the synthesized evidence. This is essential for interpreting the findings and conclusions of the meta-review.
This section presents the findings of the meta-review on AI in higher education. It covers the publication trends, types of reviews conducted, author demographics, quality assessment of the reviews, common AI applications, benefits and challenges, and identified research gaps.
The section provides a comprehensive overview of the findings, covering various aspects of AIEd research, from publication trends to research gaps.
The findings are presented with supporting data and statistics, which strengthens the analysis and provides a clear picture of the AIEd research landscape.
The section clearly presents the key themes emerging from the meta-review, such as the prevalence of adaptive systems and personalization, making the findings accessible and easy to understand.
While the section mentions quality concerns, a deeper analysis of the specific methodological weaknesses and their potential impact on the findings would be beneficial.
Rationale: This would provide a more critical perspective on the state of AIEd research and highlight areas for improvement.
Implementation: Discuss the potential biases or limitations introduced by these methodological weaknesses and how they might affect the generalizability or reliability of the findings.
The section notes regional variations in research but could further explore the reasons behind these differences and their implications.
Rationale: This would provide a more nuanced understanding of the global AIEd landscape and inform strategies for promoting international collaboration.
Implementation: Investigate factors like funding priorities, research infrastructure, or cultural contexts that might contribute to regional variations in AIEd research.
While the section presents the findings, connecting them more explicitly to their implications for educators and policymakers would enhance the practical relevance of the review.
Rationale: This would bridge the gap between research and practice and provide actionable insights for stakeholders.
Implementation: Discuss how the findings can inform the design, implementation, and evaluation of AIEd initiatives in higher education institutions. Provide specific examples of how the findings can be translated into practical strategies or recommendations.
Figure 5 is a bar chart illustrating the number of AIEd evidence syntheses focused on higher education published each year from 2018 to 2023. Each bar represents a year, and its height corresponds to the number of publications. It shows low counts in the initial years (2 in 2018, 10 in 2019), a dip in 2020 (6), a marked rise in 2021 and 2022 (16 and 20 respectively), and a lower count in 2023 (12), though the search covered only part of that year (through July 18, 2023).
Text: "there was a slight reduction in the number published in 2020 before rising again (see Fig. 5)."
Context: The authors are discussing the general publication characteristics of the AIEd evidence syntheses included in their review. They note a decrease in publications in 2020, likely due to the COVID-19 pandemic, before the numbers rise again. Figure 5 visually represents this trend.
Relevance: This figure helps visualize the growth and trends in AIEd research publications specifically focused on higher education. It provides context for the review by showing the increasing interest in this area over recent years, while also acknowledging the impact of external factors like the pandemic.
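As a quick sanity check on the counts read off Figure 5, the yearly figures sum to the 66 reviews in the corpus (2 + 10 + 6 + 16 + 20 + 12 = 66); a few lines of Python make the total and the trend explicit. The 2023 bar reflects only part of the year, given the July 18, 2023 search cutoff.

```python
# Sketch: yearly publication counts as read off Fig. 5.
counts = {2018: 2, 2019: 10, 2020: 6, 2021: 16, 2022: 20, 2023: 12}

# The yearly counts sum to the 66 reviews in the corpus.
assert sum(counts.values()) == 66

# Simple text bar chart of the trend (2023 covers only part of the year).
for year, n in sorted(counts.items()):
    print(f"{year}: {'#' * n} ({n})")
```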
Table 3 shows the top nine most productive countries in terms of authorship in AIEd evidence syntheses focused on higher education. It lists the countries, their rank, the number of publications from each country, and the percentage of the total publications each country represents. The United States is the most productive, followed by Canada and Australia.
Text: "Whilst it was the most productive country (see Table 3), the United States was closely followed by Canada and Australia."
Context: The authors are discussing the geographical distribution of AIEd evidence synthesis authorship. They mention that the US is the most productive country, but Canada and Australia are close behind. Table 3 provides the data supporting this statement.
Relevance: This table provides insights into the global distribution of research on AI in higher education. It shows which countries are leading in this area and can be used to identify potential collaborations or areas for future research development.
Table 4 presents a quality assessment of the 66 AIEd evidence syntheses included in the review. It lists ten criteria used to evaluate the quality of each review, along with the percentage of reviews that fully met (Yes), partially met (Partly), did not meet (No), or for which the criteria were not applicable (N/A). The criteria include having research questions, inclusion/exclusion criteria, defined publication years, an adequate search, a provided search string, reported inter-rater reliability, a data extraction coding scheme, a quality assessment, sufficient details about included studies, and a reflection on limitations.
Text: "The AIHEd reviews in the corpus were assessed against 10 quality assessment criteria (see Table 4), based on the DARE (Centre for Reviews and Dissemination, 1995; Kitchenham et al., 2009) and AMSTAR 2 (Shea et al., 2017) tools, as well as the method by Buntins et al. (2023)."
Context: The authors are explaining how they assessed the quality of the AIHEd reviews included in their meta-review. They mention using a combination of criteria from the DARE and AMSTAR 2 tools, as well as a method by Buntins et al. (2023). Table 4 details these criteria and the results of the quality assessment.
Relevance: This table is crucial for understanding the rigor and reliability of the included reviews. It provides a transparent overview of the quality assessment process and allows readers to assess the trustworthiness of the meta-review's findings.
Table 5 shows the distribution of AI applications that were the primary focus of the 66 reviews analyzed. It categorizes the reviews based on their main AI focus: General AIEd (covering various AI applications), Profiling and Prediction, Adaptive Systems and Personalisation, Assessment and Evaluation, and Intelligent Tutoring Systems. For each category, it provides the number (n) and percentage of reviews that fell under that focus.
Text: "The reviews were categorised using Zawacki-Richter et al.’s (2019) classification (profiling and prediction; intelligent tutoring systems; adaptive systems and personalisation; assessment and evaluation; see Fig. 1), depending upon their purported focus within the title, abstract, keywords or search terms, with any reviews not specifying a particular focus categorised as ‘General AIEd’ (see Table 5)."
Context: This introduces Table 5 and explains how the reviews were categorized based on their focus, using the classification by Zawacki-Richter et al. (2019).
Relevance: This table is important because it shows the main areas of focus within AIEd research in higher education. It helps to understand which AI applications are receiving the most attention in research and which areas might be under-researched.
Table 6 presents the top six reported benefits of using AI in higher education, based on the analysis of 31 reviews. It lists benefits like personalized learning, greater insight into student understanding, positive influence on learning outcomes, reduced planning and administration time for teachers, greater equity in education, and precise assessment & feedback. For each benefit, the table shows the number of reviews that mentioned it and the corresponding percentage.
Text: "Twelve benefits were identified across the 31 reviews (see Additional file 12: Appendix L), with personalised learning the most prominent (see Table 6)."
Context: This introduces Table 6, highlighting that it shows the top benefits of AI in higher education identified across the 31 general AIEd reviews.
Relevance: This table is important because it summarizes the perceived advantages of using AI in higher education. It highlights the potential positive impacts of AI on various aspects of teaching, learning, and administration, which can inform decisions about AI adoption and implementation.
Table 7 lists the top five challenges of implementing AI in higher education as identified across 31 reviews. These challenges include lack of ethical consideration, curriculum development needs, infrastructure limitations, lack of teacher technical knowledge, and shifting authority. The table provides the number and percentage of reviews that mentioned each challenge.
Text: "The 31 reviews found 17 challenges, but these were mentioned in fewer studies than the benefits (see Additional file 12: Appendix L). Nine studies (see Table 7) reported a lack of ethical consideration, followed by curriculum development, infrastructure, lack of teacher technical knowledge, and shifting authority"
Context: This introduces Table 7 and explains that it presents the top five challenges identified in the 31 general AIEd reviews.
Relevance: This table is important because it highlights the key obstacles to successful AI implementation in higher education. Understanding these challenges is crucial for developing strategies to overcome them and effectively integrate AI into educational settings.
Table 8 shows the top ten research gaps identified across the 66 studies included in the review. It lists each gap, the number of studies (n) that mentioned it, and the percentage (%) of the total studies that mentioned it. The gaps include ethical implications, the need for more diverse methodological approaches, more research within the field of Education, research with a wider range of stakeholders, interdisciplinary approaches, research beyond specific disciplines, research in a wider range of countries (especially developing countries), stronger theoretical foundations, longitudinal studies, and research beyond a few limited topics.
Text: "Each review in this corpus (n = 66) was searched for any research gaps that had been identified within the primary studies, which were then coded inductively (see Additional file 1: Appendix A)."
Context: This explains that the research gaps were identified from the included studies and coded inductively. Appendix A is referenced for a full list.
Relevance: This table is highly relevant as it summarizes the main areas where future research is needed in AIHEd, according to the synthesized reviews. It provides a clear direction for future research efforts and highlights the current limitations of the field.
This discussion section summarizes the key findings of the meta-review on AI in higher education, highlighting the prevalence of adaptive systems and personalization, along with profiling and prediction. It emphasizes the need for increased ethics, collaboration, and rigor in future AIHEd research and practice. The discussion also addresses the global distribution of AIHEd research and the importance of open access publishing.
The discussion effectively summarizes the main findings of the meta-review, providing a concise overview of the current state of AIHEd research.
The discussion provides a balanced perspective by addressing both the benefits and challenges of AI in higher education, acknowledging the complexities of AI adoption.
The discussion emphasizes the importance of open access publishing for disseminating research findings and reducing research waste, which is crucial for advancing the field.
While the discussion mentions the maturity of AI applications in STEM and Health & Welfare, further exploring the implications for other disciplines would be beneficial.
Rationale: This would provide more tailored insights for educators and researchers in various fields.
Implementation: Discuss the specific opportunities and challenges of AI adoption in disciplines like humanities, social sciences, and arts, considering their unique pedagogical approaches and research practices.
While the discussion identifies research gaps, providing more concrete recommendations for future research directions would be more actionable.
Rationale: This would guide researchers in designing and conducting studies that address the identified gaps and advance the field.
Implementation: Formulate specific research questions or suggest research designs that could be used to investigate the ethical implications, collaborative approaches, and methodological rigor in AIHEd.
The discussion focuses on research but could be strengthened by addressing the role of policy and institutional support in promoting ethical and effective AI adoption.
Rationale: This would broaden the discussion beyond research and acknowledge the importance of institutional factors in shaping AIEd practices.
Implementation: Discuss the need for policies and guidelines that address ethical considerations, data privacy, and responsible AI use in higher education. Suggest strategies for institutions to support faculty development in AI literacy and provide resources for implementing AIEd initiatives.
This conclusion summarizes the meta-review's findings, emphasizing the dominance of adaptive systems and personalization in AIHEd research. It reiterates the need for increased ethics, collaboration, and rigor in the field, while also highlighting the global distribution of research and advocating for open access publishing.
The conclusion effectively summarizes the main findings of the meta-review, providing a clear overview of the key themes and trends in AIHEd research.
The conclusion presents a balanced perspective by acknowledging both the promises and challenges of AI in higher education, avoiding overly optimistic or pessimistic views.
The conclusion provides a clear call to action, urging researchers and practitioners to address the identified challenges and prioritize ethics, collaboration, and rigor in future AIHEd work.
While the conclusion mentions the need for ethics, collaboration, and rigor, elaborating on the practical implications of these recommendations would be beneficial.
Rationale: This would provide more concrete guidance for researchers and practitioners on how to translate these principles into action.
Implementation: Provide specific examples of how to incorporate ethical considerations in AIEd research, foster collaboration among stakeholders, and improve methodological rigor in study design and reporting.
The conclusion focuses on researchers but could be strengthened by discussing the role of educational institutions in fostering responsible AI adoption.
Rationale: This would acknowledge the importance of institutional support in promoting ethical AI practices and creating a conducive environment for AIEd innovation.
Implementation: Discuss how institutions can develop policies and guidelines for AI use, provide training and resources for faculty and students, and establish ethical review boards for AIEd projects.
The conclusion could be broadened by connecting the findings and recommendations to the broader societal implications of AI in education.
Rationale: This would situate the discussion within a larger context and highlight the importance of responsible AI development and adoption for the future of education and society.
Implementation: Discuss the potential impact of AI on access to education, equity, and the changing nature of work and learning in the age of AI.