Introduction
Artificial Intelligence (AI), a term coined by emeritus Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines”. This definition focuses both on the science, that is the theoretical aspects, and the engineering or the practical aspects of building machines that can mimic human intelligence.
AI is not just a buzzword but an influential and promising technology that is here to stay. AI is being integrated into multiple domains, such as agriculture, business, automation, medicine, aerospace, chemistry, and the military. Education is one of the domains that has seen a rapid integration of AI.
In this guidebook, we refer to all types of AI, covering the full breadth of the field without excluding any specific technique. The AI techniques covered include recent developments, such as chatbots like ChatGPT, as well as tools that have been used in education for longer, such as early warning systems and intelligent tutoring systems.
AI brings promises and opportunities to improve education, for example: automation of administrative processes and tasks, curriculum and content development, providing instruction, and understanding and improving students’ learning processes through analysis of student data. This is not an exhaustive list and AI is being integrated into education in many other ways.
However, the rapid integration of AI in education (AIED) has stirred a lot of conversation about its application in the learning process and related ethics. The following subsection provides an example of a potential problem with AIED.
In this guidebook, we provide methods for evaluating AIED tools. In the rest of this section, we give examples of the ethical issues that may arise in AIED and introduce a framework for evaluating AIED tools based on ethical values and guidelines. In Section 2, we dive deeper into the theoretical background of this ethical framework. Finally, in Section 3, we give clear guidelines and instructions on how to use this framework to evaluate the ethical implementation of an AIED tool.
The problem with AIED
Let's use the following example to illustrate the problem with AIED: an educational institution deploys an intelligent tutoring system - an AI tool that enables the personalization of learning. The tool detects the knowledge (or knowledge gaps) of students, diagnoses the next appropriate steps for students' learning in the form of new exercises or new learning units, and then notifies the teacher.
Now, imagine you are a student following a course in this educational institution. How can you, as a student, verify that this tool understands your knowledge levels accurately and suggests the right next steps for your learning? Can you ensure that the inherent bias in the data model does not affect the suggested next steps for you? How can you check that the suggested steps align with your knowledge and skills gaps to ensure that you reach the learning goals?
Next, imagine you're an educator in the same educational institution using the intelligent tutoring system. How can you ensure that it does not only help the students who are performing well or are performing poorly? How do you understand the suggested next steps from this system and check them? How do you make sure that the students are actually learning based on these suggested next steps? Can you change the incorrect suggestions from this tool and still guide student learning?
Lastly, imagine you are the head of the same educational institution that has deployed this intelligent tutoring system. How can you ensure that this tool is working fairly for all students across different backgrounds and abilities? How do you verify that educators maintain their pedagogical expertise rather than becoming dependent on the system's suggestions? Can you determine whether the system is actually improving learning outcomes, or is it creating additional work without meaningful benefit? Can you understand how the AI tool makes decisions and how can you intervene in this process?
Above, we presented this scenario from the perspectives of different educational stakeholders. These perspectives show that the accelerating use of AIED comes with the promise to support and enhance learning, but also raises various concerns closely tied to ethical issues that could arise or ethical problems that should be avoided. These concerns highlight the need for ethical guidance in the usage of AIED to mitigate its potential negative impacts, and for this reason it is important to develop an ethical framework to guide AIED usage. In the next subsection, we present our solution to these issues.
An ethical framework for AIED
Our solution to the ethical issues of AIED is AI GANESH. AI GANESH is an ethical framework that can be used by the educational stakeholders of AIED to make decisions about which AI tools should be used in education. AI GANESH has been developed based on scientific research - available here and here.
The overarching goal of AI GANESH is to maximize the learning benefits of AIED usage while also protecting all educational stakeholders from the potential risks of this technology. It establishes ethical guidelines for AIED to help you understand your own responsibilities as well as the impact of the responsibilities of other stakeholders.
AI GANESH is aimed at the educational stakeholders of AIED - students, educators and educational institutions in the context of higher education. The roles of these three educational stakeholders, along with their corresponding responsibilities within the framework, are:
- Students: As a student, you are an end user of AIED tools and you should be well-informed about AIED tools that are designed to maximize the learning benefits for every student. Additionally, you should be able to request accountability for the implementation of these tools.
- Educators: As an educator, you are an end user of AIED tools and you should be trained to use and implement AIED tools such that they align with the learning goals and educational values.
- Educational institutions: This role refers to the administration of educational institutions - those responsible for making decisions about which AIED tools to deploy and how this deployment should be carried out. As an educational institution representative, you are a deployer of AIED tools and you should ensure that the tools you use, and the products and services you offer, meet the requirements for education.
An additional group within this framework is the 'Authorities', which encompasses regulatory bodies operating at multiple hierarchical levels - from institutional and state authorities to national and supranational governing bodies, as well as intermediate regulatory entities. These authorities are responsible for implementing governance structures, establishing regulations at their respective jurisdictional levels, and implementing the specific guidelines outlined in this framework. While they function as the architects and enforcers of regulations, they are not themselves directly impacted by them; their primary role is to ensure that regulations and guidelines are properly implemented and enforced across the system, while remaining outside the scope of these regulations so that they can maintain their oversight and governance functions.
How to use this guidebook
This guidebook presents the AI GANESH ethical framework and walks you through all the components of the framework. You can use this guidebook to understand AI GANESH and apply it to different use-cases of AIED.
The rest of this guidebook consists of two parts. Part two - 'Theoretical background' - explains everything required to understand ethical frameworks and sets the theoretical basis for the framework presented in this guidebook. If you already have knowledge of ethical frameworks, feel free to skip sections as needed.
Part three - 'Applying AI GANESH' - presents the steps to use it in practice. This section also looks at roles that are specific to each stakeholder group and presents the rubric that forms the heart of this framework. Lastly, an example of the usage is provided.
Theoretical background
In this section, we first give a brief explanation of some general ethical concepts - ethical frameworks, values and guidelines. Then, we introduce the background of the AI GANESH framework and look at its values and guidelines. Following this, we explain a specific feature of AI GANESH - a rubric to evaluate AIED systems. Finally, we explain how to give ethical evaluations and how to interpret results.
Ethical concepts
Ethics is the systematic reflection on morality - examining the standards of right and wrong by which human actions, decisions, and opinions about what is good and bad can be judged. However, ethics is not a manual with answers but a process of constantly examining standards of right and wrong to ensure that they are reasonable and well-founded.
An ethical framework is a system of ethical principles or values that provides a structure for guiding individuals or organizations in making decisions about what is right and wrong. Thus, it serves as the foundation of any ethical decision-making process by providing a shared set of standards by which to evaluate potential choices.
Not all ethical frameworks have ethical values and norms. However, values and norms are core concepts of AI GANESH, and therefore we briefly explain these concepts.
Values are principles that describe a certain type of behavior while norms are guidelines that provide the means to realize these values. The values act as a skeleton of ethical principles while the guidelines add the supporting tissues and muscles. For example, the ethical value of privacy describes the need for users to have control over their personal data.
Norms are rules that prescribe what actions are required, permitted, or forbidden. A norm, henceforth referred to as a guideline, provides a way to realize a value, such as 'students should provide informed consent before taking part in a study'. Our systematic literature review (SLR) elaborates on the ethical concepts underlying our framework.
AI GANESH
The AI GANESH ethical framework provides a structure to the stakeholders of AIED to evaluate the underlying values and guidelines of an AIED tool. This evaluation may help stakeholders make decisions about which AI tool should be used in education. This ethical framework helps raise critical questions about AIED, on topics such as explainability of outputs, equity, human agency, and alignment with learning goals.
Development process
The development of AI GANESH consisted of two phases, in which the framework was developed through scientific research. The first phase of the project involved a systematic literature review (SLR) conducted in November 2022 to identify the ethical values and guidelines from the scientific literature in the field of AIED. In this review, six ethical values and 36 guidelines were identified. The SLR consolidated the existing literature, providing a theoretical grounding for the later stages of the research.
The second phase engaged three educational stakeholder groups, namely students, educators and educational institutions, through stakeholder consultation in the form of focus group discussions. This approach allowed for an in-depth exploration of the perspectives, concerns, and expectations of the stakeholders about the ethical implications of AIED. The insights gathered from the stakeholder consultation enriched the understanding of the values involved in the ethics of AIED. We developed a list of definitions of the ethical values and their grouping through the analysis of these discussions. Additionally, the stakeholder consultation identified 30 guidelines that were used to supplement the list of guidelines that was developed during the review. The findings from the systematic literature review were then combined with the stakeholder consultation to form AI GANESH.
Values
AI GANESH contains two main components - values and guidelines. The framework is founded on six ethical values:
- Goodwill: This value represents the intention to promote well-being and beneficence through AIED, while minimizing harms and negative consequences.
- Aptness for Education: This value represents the alignment of AIED tools with educational values, learning goals and learner competences.
- Non-discrimination: This value represents fairness, equity and equality in AIED applications.
- Explicability: This value represents the ability to explain an algorithm's workings, outputs and decisions in human terms, and to justify these when needed.
- Stewardship of Data: This value represents various concepts related to data - ranging from accuracy to privacy to security to data transparency.
- Human Oversight: This value represents the necessity of human agency, responsibility and human intervention in AIED tools.
Guidelines
Each value of this framework has related guidelines. You can apply these guidelines to specific use-cases of AIED tools. The guidelines are available in the form of a rubric, which is explained in the next subsection and presented for AI GANESH in the next section.
Rubric
In this subsection, the theoretical concepts underlying the AI GANESH rubric are explained. A rubric is an explicit set of criteria used for assessing a particular type of work, performance or tool. The AI GANESH rubric provides such a set of criteria to assess whether a given AIED tool fits the ethical requirements of the context in which it is used. The AI GANESH rubric contents themselves can be found here. Each guideline is represented by a category and a code. Additionally, the rubric contains two levels of implementation for each guideline - Level 1 describes an implementation that fully conforms to the guideline, while Level 2 describes an insufficient implementation. The Level 1 descriptions are drawn from the actual guidelines in the literature and the stakeholder consultation, and the description of each guideline in the rubric elaborates on what the corresponding level means. The rubric for AI GANESH can be found in the next section.
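The rubric structure described above - each guideline carrying a category, a code, and two level descriptions - can be sketched as a small data structure. A minimal illustration in Python (the class and field names are our own invention; the example entry is taken from the Human Oversight table presented later in this guidebook):

```python
from dataclasses import dataclass

@dataclass
class Guideline:
    """One rubric entry: a category, a code, and two level descriptions."""
    category: str
    code: str
    level_1: str  # implementation that fully conforms to the guideline
    level_2: str  # insufficient implementation

# Example entry, mirroring the EN-H9 row of the Human Oversight table
informed_consent = Guideline(
    category="Informed Consent",
    code="EN-H9",
    level_1="Students give informed, explicit consent before engaging "
            "with AIED systems.",
    level_2="No clear consent process; students are unaware of how their "
            "data or learning is handled by AIED.",
)

print(informed_consent.code, "-", informed_consent.category)
```

A full rubric would then simply be a list of such entries, one per guideline.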
Conclusion
The above section laid the theoretical foundation of AI GANESH. In the next section, we focus on how this framework can be applied in practice to evaluate the implementation of a given AIED tool.
Applying AI GANESH
This section describes how AI GANESH can be applied. The framework can be applied to information about an AIED tool or an AIED tool itself by following the usage instructions in the form of steps.
Educational stakeholders can use AI GANESH to assess whether they would approve of a given description or implementation of an AIED tool. Applying AI GANESH gives a nuanced perspective on the complex matter of ethics for your use case. Complex matters such as ethics cannot be quantified or scored directly; 'right or wrong' can depend on the specific context, AIED tool and questions at hand. Therefore, AI GANESH offers the means to identify points of friction in the application of an AIED tool, support informed decisions about adopting well-designed and ethically implemented tools, and provide argumentation for dismissing tools that lack ethical implementation.
Ensuring the ethical implementation of an AIED tool is ideally not a one-time process, but an ongoing process involving multiple rounds of assessment and improvement. Furthermore, this process should not be a scoring system that provides a target metric value, as that could lead to optimization for a target score (following Goodhart's law [2]). The ethical evaluation process involves assessing a tool, possibly identifying ethical issues and suggesting improvements based on the outcome. Therefore, this framework provides recommended follow-up actions at the end of each round of assessment.
Using AI GANESH takes place in four steps. Step 1: Forming a discussion group of educational stakeholders. Step 2: Understanding the use case and the context of the AIED tool. Step 3: Assessing the AIED tool based on the guidelines and providing a critical reflection on insufficient scores. Step 4: Examining the ethical evaluation and receiving advice on follow-up actions based on the outcome of the previous steps. The steps are presented below. You can also find an example for each of the stakeholder groups in this section.
Steps
AI GANESH is to be used following these steps:
Step 1: Form a discussion group
The evaluation of an AIED tool should be initiated by the educational institution and should include all the educational stakeholders - students, educators and educational institution representatives. To begin the process, the educational institution should bring together at least one person from each of these stakeholder groups and form a discussion group to find common ground on the ethical perspectives of the given AIED tool. Then, each individual stakeholder should follow Steps 2 and 3 independently.
Step 2: Understand the AIED use case
Read the use case description thoroughly and understand how the AIED tool is integrated in the specific educational context. In instances where insufficient information is available, independent research and analysis of the AIED tool and its implementation in the educational context may have to be performed. This can be done, for example, by looking up additional information about the tool on the internet, requesting information from the educational institution on correct usage and implementation, and so on.
Step 3: Assessment
Assess the implementation of the AIED tool. Refer to the information available about the AIED tool (Step 2) and the Rubric for this. For each guideline of the rubric, read the level descriptions and select the level of implementation that applies to the specific AIED tool. There can be three possible outcomes for the assessment of each guideline:
- 'Level 1: Sufficient' - This refers to the implementation of the guideline that fully conforms to the ethical requirements.
- 'Level 2: Insufficient' - This is an insufficient implementation of the guideline that signifies some ethical concerns in the implementation. For example, an AIED tool could have concerns about not being inclusive. A guideline can alternatively be marked as Level 2 if insufficient information is available to assess it. For example, information about the privacy guidelines could be missing.
- 'Not applicable' (NA) - In case a guideline is not applicable to the given situation, you can mark it as 'Not applicable' (NA). For example, a guideline about learning goals would not be applicable in the case of an AI assessment tool because the tool does not facilitate any learning.
For the guidelines that score 'Level 2: Insufficient' on the rubric, you are required to provide an explanation or a critical reflection on this. This is specifically required for Level 2 as it is not a desired level of implementation for a guideline and signals the need for some improvement.
Complete this assessment for all the guidelines in the rubric.
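The three possible outcomes above, together with the rule that every 'Level 2' score requires a critical reflection, can be sketched as a small check. A minimal illustration in Python (the guideline codes come from the rubric; the sample outcomes and reflection text are invented for illustration):

```python
# The three possible outcomes for each guideline
LEVEL_1 = "Level 1: Sufficient"
LEVEL_2 = "Level 2: Insufficient"
NA = "Not applicable"

# Hypothetical assessment: guideline code -> (outcome, optional reflection)
assessment = {
    "EN-G1": (LEVEL_1, None),
    "EN-A6": (LEVEL_2, "Learning goals are vague; students may use "
                       "the tool as a shortcut."),
    "EN-H8": (NA, None),
}

# Every Level 2 score must come with an explanation or critical reflection
missing = [code for code, (outcome, reflection) in assessment.items()
           if outcome == LEVEL_2 and not reflection]
assert not missing, f"Reflection required for: {missing}"

# Tally the outcomes across all assessed guidelines
counts = {}
for outcome, _ in assessment.values():
    counts[outcome] = counts.get(outcome, 0) + 1
print(counts)
```

In practice the rubric is filled in on paper or in a document; this sketch only shows how the reflection requirement could be enforced mechanically.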
Step 4: Examine the ethical evaluation and further steps
Using the rubric will provide a clear overview of the ethics of the implementation of an AIED tool. Additionally, it allows for a more in-depth view of the ethical considerations involved in using the given AIED tool. Specifically, it allows for:
- The identification of strong suits of the AIED tool in terms of ethical implementation
- The identification of possible problem areas of the AIED tool in terms of ethical implementation
- The identification of missing information regarding ethical implementation
- Insight into strong and weak points on the level of individual ethical guidelines as well as overarching ethical values
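Because each guideline code shares a prefix with its value (EN-G for Goodwill, EN-A for Aptness for Education, and so on), the per-value insight mentioned above can be derived mechanically from the per-guideline scores. A minimal sketch in Python (the sample scores are invented for illustration):

```python
# Map guideline code prefixes to the six overarching values
VALUE_OF_PREFIX = {
    "EN-G": "Goodwill",
    "EN-A": "Aptness for Education",
    "EN-N": "Non-discrimination",
    "EN-E": "Explicability",
    "EN-S": "Stewardship of Data",
    "EN-H": "Human Oversight",
}

# Hypothetical scores: guideline code -> level (1 sufficient, 2 insufficient)
scores = {"EN-G1": 1, "EN-G4": 2, "EN-A6": 2, "EN-H9": 1}

# Count insufficient (Level 2) guidelines per overarching value
weak_points = {}
for code, level in scores.items():
    if level == 2:
        value = VALUE_OF_PREFIX[code[:4]]
        weak_points[value] = weak_points.get(value, 0) + 1

print(weak_points)  # {'Goodwill': 1, 'Aptness for Education': 1}
```

Such a per-value summary can serve as a starting point for the group discussion described below, but it does not replace the required qualitative reflection.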
Depending on the educational context, the specific AIED tool and your stakeholder group, the interpretation of the outcome can differ. Any breach of a guideline deserves a discussion and investigation of its meaning for your specific AIED implementation.
After each individual stakeholder has filled in the rubric, we advise forming a focus group with the other educational stakeholders (Step 1) and reviewing the results of the evaluation together. Any points of disagreement should be discussed from the viewpoints of all the stakeholders, with an attempt to resolve them. We recommend discussing any ethical issues that have been highlighted by any specific stakeholder. A pros and cons analysis can be used to resolve points of difference between the stakeholders.
At the end of this discussion, the group decides together whether the AIED tool is to be used in the given educational context or whether additional steps are required. If additional information is needed, comprehensive documentation can be requested from the developers. The result of the session can include the implementation of future improvements to the AIED tool by the educational institution, or a request to the AIED tool developers to execute the necessary modifications as per the Level 1 description of the guideline. In the rare case where all the stakeholders agree that the system is clearly ethical, no discussion is required.
Rubric
The AI GANESH rubric is available in an easy-to-fill-in printable format that can be downloaded in the following formats by clicking on the corresponding links: Microsoft Word, Open Document Text, and PDF.
Goodwill
Goodwill represents the intention to promote well-being and beneficence through AIED, while minimizing harms and negative consequences.

| Category | Level 1: Sufficient | Level 2: Insufficient |
| --- | --- | --- |
| Student-Centered Decision-Making (EN-G1) | Educators consistently guide AIED decisions around student needs, values, and priorities. | AIED decisions are driven by efficiency or administration, with minimal educator or student input. |
| AIED Adaptation & Educator Support (EN-G2) | Authorities and institutions allow education to adapt meaningfully to AIED while ensuring it supports educators' roles. | Adaptation is slow or resisted; educators may feel undermined or unsupported by AIED systems. |
| Strategic AI Promotion (EN-G3) | Institutions actively promote AIED and develop practical, research-based strategies for its integration in learning environments. | AI use is sporadic and uncoordinated; no guiding strategies or research-based frameworks exist. |
| Ethical System Design & Use (EN-G4) | Ethical considerations are deeply embedded in AIED tool design, procurement, and use. | Ethical concerns are not systematically addressed; AIED tools may be used without ethical review. |
Aptness for Education
Aptness for Education represents the alignment of AIED tools with educational values, learning goals and learner competences.
| Category | Level 1: Sufficient | Level 2: Insufficient |
| --- | --- | --- |
| Goal Alignment & Evidence (EN-A1) | AIED tools are closely aligned with learning goals, with strong evidence of enhancing learning outcomes. | AIED is used without alignment to clear goals or lacking supporting evidence. |
| Support for Diverse Skills (EN-A2) | AI tools are used to identify and support a wide range of student skills, including non-academic skills. | AI tools are narrow in focus, often supporting only limited, traditional metrics. |
| Reflective Use by Teachers (EN-A3) | Teachers review and refine their own support methods before introducing AIED into instruction. | AIED is introduced without prior reflection on teaching practices. |
| Instructional Alignment (EN-A4) | AIED tools are explicitly matched to curricular learning goals and come with measurable enhancements in learning. | Alignment is absent or unclear; tools may distract from intended learning outcomes. |
| Skill Scaffolding (EN-A5) | Teachers first teach foundational skills before allowing students to delegate or enhance them with AIED tools. | Students use AIED without learning core skills first. |
| Transparent Learning Goals (EN-A6) | Teachers clearly communicate learning goals to prevent students from using AIED to take learning shortcuts. | Learning goals are vague or unstated; students often use AIED as a shortcut. |
| Tool Evaluation (EN-A7) | Institutions and teachers critically assess the purpose, effectiveness, and necessity of each AIED tool before adoption. | Tools are adopted without questioning their purpose, relevance, or necessity. |
| Student AI Literacy (EN-A8) | Lessons actively incorporate AI concepts, with examples to help students build basic AI literacy. | AI concepts are rarely addressed in lessons; students are passive users of technology. |
Non-discrimination
Non-discrimination represents fairness, equity and equality in AIED applications.
| Category | Level 1: Sufficient | Level 2: Insufficient |
| --- | --- | --- |
| Diversity & Interdisciplinary Teams (EN-N1) | Institutions proactively build diverse and interdisciplinary AIED implementation teams. | Teams are homogeneous; diversity and interdisciplinary input are not prioritized. |
| Education-to-Industry Support (EN-N2) | AIED tools are designed to support career readiness, providing clear pathways from education to industry. | Career development and industry alignment are not considered in AIED design. |
| Bias Monitoring & Ethics (EN-N3) | Institutions routinely monitor AIED tools for bias and ensure transparent, ethical decision-making. | No clear monitoring of bias or ethical review process; decisions are opaque. |
| Equity & Inclusion (EN-N4) | AIED tools are inclusive by design, ensuring equitable outcomes across all learner demographics and proactively avoiding discrimination. | AIED tools fail to account for diverse learners, often reinforcing existing inequities. |
| Fairness & Access (EN-N5) | Institutions promote fairness and guarantee equal access to AIED tools. | Access is unequal or restricted; fairness is not actively promoted. |
| Stakeholder Involvement (EN-N6) | Authorities and educational institutions consistently involve diverse stakeholders, incorporate feedback, and facilitate open dialogue around AIED decisions. | Stakeholder input is rare; decisions are made without open consultation. |
Explicability
Explicability represents the ability to explain an algorithm's workings, outputs and decisions in human terms, and to justify these when needed.

| Category | Level 1: Sufficient | Level 2: Insufficient |
| --- | --- | --- |
| Stakeholder Responsibilities & Governance (EN-E1) | Authorities have clearly defined and enforced stakeholder responsibilities, ensuring AI system stability, transparency, and user understandability. | Stakeholder roles are unclear or unassigned; transparency and system behavior are opaque or inconsistent. |
| AI in Educational Practice (EN-E2) | Educational institutions actively research, adopt, and evaluate AI tools across learning, administration, and teaching. | AI tools are used without research backing; adoption is limited and lacks strategy. |
| Developer Awareness of Impact (EN-E3) | AI designers have a deep understanding of how tools affect student learning. | Developers build tools with no awareness of educational impact. |
Stewardship of Data
Stewardship of Data represents various concepts related to data - ranging from accuracy to privacy to security to data transparency.

| Category | Level 1: Sufficient | Level 2: Insufficient |
| --- | --- | --- |
| Data Integrity & Context (EN-S1) | Institutions embed technical safeguards and promote contextual understanding of data to prevent misuse, tampering, or degradation. | Institutions rely on basic IT safeguards and lack context-based data handling. |
| Policy Influence (EN-S2) | Institutions collaborate with authorities to improve laws and safeguards around data, privacy, and IP. | Institutions reactively follow existing policies and are not engaged in legislative or advocacy efforts. |
| Education on AI Risks & Bias (EN-S3) | Institutions provide education on bias and prediction risks, and promote transparency. | Limited or no education on AI risks; transparency is not emphasized. |
| Ethical Data Use (EN-S4) | Institutions actively end contracts that misuse student data and enforce robust ethical AI policies. | Ethical concerns are not regularly evaluated; contracts are rarely reviewed. |
| Privacy & Data Usage (EN-S5) | Institutions find a balance between privacy and justifiable educational data use. | Educational data is used without privacy protections. |
| Privacy-by-Design Integration (EN-S6) | Institutions fully integrate AIED into their own systems to ensure privacy by design. | AIED is not integrated into institutional systems with privacy considerations in mind. |
| Innovation & Testing Environment (EN-S7) | Institutions operate dedicated labs for structured AIED testing and deployment. | AIED tools are deployed ad hoc, without structured environments. |
| Correctness of Results (EN-S8) | Institutions ensure the correctness of AIED tool results, while reducing incorrect grouping of results. | AIED tool results are often incorrectly grouped or simply incorrect. |
| Legal Compliance & Oversight (EN-S9) | Institutions enforce GDPR compliance and developer accountability through active monitoring. | Compliance is assumed; formal oversight mechanisms are lacking. |
Human Oversight
Human oversight represents the necessity of human agency, responsibility and human intervention in AIED tools.
| Category | Level 1: Sufficient | Level 2: Insufficient |
| --- | --- | --- |
| Decision Verification (EN-H1) | All key AIED decisions are reviewed by teachers or qualified, fair individuals. | AIED decisions are rarely or never reviewed by humans. |
| Human Connection (EN-H2) | AI is used to enhance scalability without losing meaningful human interaction in education. | Scalability is prioritized; human connection is diminished or neglected. |
| Clear Role Allocation (EN-H3) | Responsibilities of AI and humans are clearly defined; human oversight is formally built into AIED operations. | AI and human roles are unclear or blurred; human oversight is lacking. |
| Professional Development (EN-H4) | Institutions actively support professionals in developing skills to understand AI and educational management. | No structured training; staff are unprepared to work effectively with AIED. |
| Learner Control (EN-H5) | AIED tools are designed to enhance student agency over their education. | Learners use AIED passively; no meaningful control over tools or learning pathways. |
| AI Awareness & Impact Knowledge (EN-H6) | Both teachers and students have a basic understanding of AI and its impacts on education. | Little to no understanding; AI is treated as a black box by users. |
| Student Accountability (EN-H7) | Students are encouraged to take responsibility for their own work even when supported by AIED. | Student responsibility is undervalued; AIED use may lead to overreliance. |
| Non-AI Alternatives (EN-H8) | Students are offered a non-AI option and are clearly informed of the pros and cons of AIED tools. | No alternative to AIED is offered; students are compelled to use AI tools without full understanding. |
| Informed Consent (EN-H9) | Students give informed, explicit consent before engaging with AIED systems. | No clear consent process; students are unaware of how their data or learning is handled by AIED. |
Usage examples
A few usage examples are listed on the page here.
Conclusion
After reading this guidebook, you will have gained an understanding of AI GANESH - the context, the underlying theoretical concepts and the steps to apply it in practice. You are now ready to apply this framework to any use-case of AIED, evaluate its ethical requirements and improve on them iteratively. We hope that this framework can help you think critically about the ethical concerns and issues that arise from AIED integration and make education safer and sounder.