Introduction
Artificial Intelligence (AI), a term coined by emeritus Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines”. This definition covers both the science (the theoretical aspects) and the engineering (the practical aspects) of building machines that can mimic human intelligence.
AI is not just a buzzword but an influential and promising technology that is here to stay. AI is being integrated into multiple domains, such as agriculture, business, automation, medicine, aerospace, chemistry, and the military. Education is one of the domains that has seen a rapid integration of AI.
In this guidebook, we refer to all types of AI, without excluding any specific technique; we discuss the full breadth of the AI field. The AI techniques covered include recent developments, such as chatbots like ChatGPT, as well as AI tools that have been used in education for longer, such as early warning systems and intelligent tutoring systems.
AI brings promises and opportunities to improve education, for example: automation of administrative processes and tasks, curriculum and content development, providing instruction, and understanding and improving students’ learning processes through analysis of student data. This is not an exhaustive list and AI is being integrated into education in many other ways.
However, the rapid integration of AI in education (AIED) has stirred a lot of conversation about its application in the learning process and related ethics. The following subsection provides an example of a potential problem with AIED.
The problem with AIED
Let's use the following example to illustrate the problem with AIED: an educational institution deploys an intelligent tutoring system - an AI tool that enables the personalization of learning. The tool detects the knowledge (or knowledge gaps) of students, diagnoses the next appropriate steps for students' learning in the form of new exercises or new learning units, and then notifies the teacher.
Now, imagine you are a student following a course in this educational institution. How can you, as a student, verify that this tool understands your knowledge levels accurately and suggests the right next steps for your learning? Can you ensure that the inherent bias in the data model does not affect the suggested next steps for you? How can you check that the suggested steps align with your knowledge and skills gaps to ensure that you reach the learning goals?
Next, imagine you're a teacher in the same educational institution using the intelligent tutoring system. How can you ensure that it does not only help the students who are performing well? How do you understand the suggested next steps from this system and check them? How do you make sure that the students are actually learning based on these suggested next steps? Can you change the incorrect suggestions from this tool and still guide student learning?
Lastly, imagine you are the head of an educational institution. The institution deploys an AIED tool that uses your organization's administrative data to identify students at risk of dropping out and provides an early warning. Can you ensure that this tool is not inherently biased against certain groups of students? Can you check the source data and how the tool generates an early warning? How can you keep these data secure and ensure its accuracy? Can these data be used for other malicious purposes?
The above scenarios show that while the accelerating adoption of AIED comes with the promise to support and enhance learning, it also raises concerns closely tied to ethical issues that could arise, or ethical problems that should be avoided. Think of the issue of data safety and privacy, or of a system giving wrong instructions to students. These concerns highlight the need for ethical guidance in the usage of AIED to mitigate its potential negative impacts. For this reason, it is important to develop an ethical framework to regulate AIED usage. In the next subsection, we present our solution to these issues.
An ethical framework for AIED
Our solution to the ethical issues of AIED is AI GANESH. AI GANESH is an ethical framework that can be used by the educational stakeholders of AIED to make decisions about which AI tools should be used in education.
The overarching goal of AI GANESH is to maximize learning benefits from AI in education (AIED) usage while also protecting all educational stakeholders from potential risks presented by this technology. It does so by helping you understand your own responsibilities as well as the impact of the responsibilities of other stakeholders by establishing ethical guidelines for AIED.
The development of AI GANESH consisted of two phases of scientific research. The first phase involved a systematic literature review (SLR), conducted in November 2022, to identify ethical values and ethical norms from the scientific literature in the field of AIED. This review identified six ethical values and 36 ethical norms. The SLR consolidated the existing literature, providing a theoretical grounding for the later stages of the research.
The second phase engaged three educational stakeholder groups, namely students, educators and educational institutions, through stakeholder consultation in the form of focus group discussions. This approach allowed for an in-depth exploration of the perspectives, concerns, and expectations of the stakeholders about the ethical implications of AIED. The insights gathered from the stakeholder consultation enriched the understanding of the values involved in the ethics of AIED. We developed a list of definitions of the ethical values and their grouping through the analysis of these discussions. Additionally, the stakeholder consultation identified 30 ethical norms that were used to supplement the list of norms that was developed during the review. The findings from the systematic literature review were then combined with the stakeholder consultation to form AI GANESH.
AI GANESH is aimed at the educational stakeholders of AIED - students, educators and educational institutions in the context of higher education. The roles and responsibilities of these three educational stakeholders are:
- Students: As a student, you are an end user of AIED tools and you should be well-informed about AIED tools that are designed to maximize the learning benefits for every student. Additionally, you should be able to request accountability for the implementation of these tools.
- Educators: As an educator, you are an end user of AIED tools and you should be trained to use and implement AIED tools such that they align with the learning goals and educational values.
- Educational institutions: This role refers to the administration of educational institutions – those responsible for making decisions about which AIED tools to deploy and how this deployment should be carried out. As an educational institution representative, you are a deployer of AIED tools and you should ensure that the tools you use, and the products and services you offer, meet the requirements for education.
An additional group involved with this framework are the 'Authorities'. This group refers to authorities at various levels ranging from institutional to state to national to supranational and everything in between. These authorities are responsible for implementing governance and regulations at various levels. While the authorities are responsible for establishing regulations, they are not themselves affected by these regulations.
As described above, each of the stakeholders has their own roles and responsibilities towards AIED. Alongside this, it is important to recognize that there are dependencies between stakeholders. For example, teachers and students depend upon the institution's investment in necessary infrastructure, and the institution in turn depends upon the teacher's expertise and willingness to introduce AIED into their teaching. While this framework cannot offer comprehensive insight into this complex interplay (a mission near impossible), it does account for these dependencies with respect to the ethics of AIED. It does so by linking the responsibility for implementing individual ethical guidelines to specific stakeholder groups. This allows each stakeholder group to evaluate an AIED tool through the lens of their own role, while also understanding the responsibilities of other stakeholders.
How to use this guidebook
This guidebook presents the AI GANESH ethical framework and walks you through all the components of the framework. You can use this guidebook to understand AI GANESH and apply it to different use-cases of AIED.
The rest of this guidebook consists of two parts. Part two - ‘Theoretical background’ - explains everything required to understand ethical frameworks and sets the theoretical basis for the framework presented in this guidebook. If you already have knowledge of ethical frameworks, feel free to skip sections as needed.
Part three - ‘Applying AI GANESH’ - presents the steps to use it in practice. This section also looks at roles that are specific to each stakeholder group and presents the rubric that forms the heart of this framework. Lastly, an example of the usage is provided.
Please note that this guidebook is based on scientific research - available here and here. This guidebook targets three main stakeholder groups - students, educators and educational institution representatives. To ensure that all these stakeholders can easily follow this guidebook, it has been written in an informal and conversational style.
Theoretical background
In this section, we first give a brief explanation of some general ethical concepts - ethical frameworks, values and guidelines. Then, we introduce the background of the AI GANESH framework and look at its values and guidelines. Following this, we explain a specific feature of AI GANESH - a rubric to evaluate AIED systems. Finally, we explain how to give ethical evaluations and how to interpret results.
Ethical concepts
An ethical framework is a system of ethical principles or values that provides a structure for guiding an individual or organization in making decisions about what is right and wrong. Thus, it serves as the foundation of any ethical decision-making process by providing a shared set of standards by which to evaluate potential choices.
Not all ethical frameworks are built on ethical values and norms, but as these are core concepts of AI GANESH, we briefly explain them here. Values are principles that describe a certain type of behavior, while norms are guidelines that provide the means to realize these values. The values act as a skeleton of ethical principles, while the guidelines add the supporting tissue and muscle. For example, the ethical value of privacy describes the need for users to have control over their personal data. A norm, henceforth referred to as a guideline, provides a way to realize this value, such as ‘students should provide informed consent before taking part in a study’. Our SLR elaborates on the ethical concepts underlying our framework.
AI GANESH
The AI GANESH ethical framework provides a structure to the stakeholders of AIED to help them make decisions about which AI tool should be used in education. This ethical framework helps in decision-making by raising some critical questions about AIED, such as issues of explainability of outputs, equity, human agency and alignment with learning goals.
Values
AI GANESH contains two main components – values and guidelines. The framework is founded on six ethical values:
- Goodwill: This value represents the intention to promote well-being and beneficence through AIED, while minimizing harms and negative consequences.
- Aptness for education: This value represents the alignment of AIED tools with educational values, learning goals and learner competences.
- Non-discrimination: This value represents fairness, equity and equality in AIED applications.
- Explicability: This value represents the ability to explain an algorithm's workings, outputs and decisions in human terms, and to justify these when needed.
- Stewardship of Data: This value represents various concepts related to data – ranging from accuracy to privacy to security to data transparency.
- Human Oversight: This value represents the necessity of human agency, responsibility and human intervention in AIED tools.
Guidelines
Each value of this framework has related guidelines. You can apply these guidelines (also called ethical norms or norms) to specific use-cases of AIED tools. The guidelines are available in the form of a rubric, which is explained in the next subsection.
Rubric
In this subsection, we explain the theoretical concepts underlying the AI GANESH rubric. The rubric contents themselves can be found here. The rubric can be used to assess whether a given AIED tool fits the ethical requirements of the context in which it is used. Each guideline is represented by a category and a code. Additionally, the rubric contains three levels of implementation for each guideline: Level 1 describes an implementation that fully conforms to the guideline, Level 3 an insufficient implementation, and Level 2 an intermediate level covering a partial implementation. The description of each level in the rubric elaborates on what that level means for the corresponding guideline.
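As an illustration only, the rubric's structure described above could be represented in code as follows. This sketch is our own invention, not part of the official rubric; the class name and field names are hypothetical, and the example entry paraphrases the Goodwill section later in this guidebook:

```python
from dataclasses import dataclass

# Hypothetical representation of a rubric guideline: each guideline has a
# short code, a category name, and a description for each of the three
# implementation levels (1 = fully conforms, 3 = insufficient).
@dataclass
class Guideline:
    code: str               # e.g. "EN-G1"
    category: str           # e.g. "Student-Centered Decision-Making"
    levels: dict            # level number (1, 2, 3) -> description

# Example entry, paraphrased from the Goodwill section of the rubric.
en_g1 = Guideline(
    code="EN-G1",
    category="Student-Centered Decision-Making",
    levels={
        1: "Educators consistently guide AIED decisions around student needs.",
        2: "Educators sometimes influence AIED decisions with student needs in mind.",
        3: "AIED decisions are driven by efficiency or administration.",
    },
)

# An assessment then assigns each guideline a level, or None for
# 'Not applicable' (NA), as described in Step 3 of the usage instructions.
assessment = {en_g1.code: 2}
```

Representing the rubric this way makes it straightforward to record an assessment per guideline and to check it against an end-point condition later.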
Conclusion
The above section laid the theoretical foundation of AI GANESH. In the next section, we focus on how this framework can be applied in practice to evaluate the implementation of a given AIED tool.
Applying AI GANESH
This section walks you through how you can apply AI GANESH. As a user, you can use the description of an AIED tool and apply the framework to it independently by following the usage instructions in the form of steps. Based on the stakeholder group you belong to, your influence to make improvements to the implementation of the AIED tool can differ. This is further explained in the next subsection.
As an educational stakeholder, you can use AI GANESH to assess whether you would use a given implementation of an AIED tool. Ethics is not black and white and there is no right or wrong in taking ethical decisions, but the aim of applying AI GANESH is to guide you towards the best possible decision for your context.
Using AI GANESH takes place in five steps, some of which are performed iteratively. Step 1 involves understanding the use case and the context of the AIED tool. In Step 2, you set an end-point for the iterations. Step 3 comprises assessing the AIED tool based on the guidelines, and Step 4 comprises a critical reflection on insufficient scores. The last step involves iterating over Steps 3 and 4 until the end-point set in Step 2 is reached. Completing all rounds of assessment yields the final ethical evaluation.
Stakeholder group differences
Due to the dependencies between the roles of the different educational stakeholders, the level of responsibility and influence to implement different guidelines varies between the stakeholder roles. This is represented in AI GANESH by assigning a stakeholder group that is responsible for ensuring the implementation of each guideline.
As a student, you have limited influence over how the AIED tool is implemented by the other stakeholders. You are only responsible for implementing two guidelines for Human oversight: EN-H7 and EN-H9. However, you are directly impacted by the guidelines implemented by educators and educational institutions. Hence, it is good to have information about all the guidelines and the stakeholders responsible for implementing each one. This enables you to request accountability for the implementation of AIED tools. Additionally, this allows you to make an ethically informed choice before using an AIED tool.
As an educator, you are responsible for ensuring that implementation of an AIED tool aligns with the learning goals. You are primarily responsible for the following guidelines: EN-G1, EN-A3, EN-A4, EN-A5, EN-A6, EN-A7, EN-A8, EN-H1. Your responsibility has an impact on students and is impacted by the implementation of guidelines by educational institutions. Hence, it is important to be aware of all the guidelines so that you can request accountability from the educational institutions and ensure that the pedagogical values are upheld.
As an educational institution representative, you have the most responsibility: you are responsible for implementing the remaining guidelines and for ensuring institution-wide compliance with AIED ethics. Institution-wide compliance also includes the roles of other stakeholders, implying that educational institutions need to ensure that educators and students comply with the framework.
Irrespective of the stakeholder group that you belong to, it is important to know your own responsibilities and the responsibilities of other stakeholder groups. This enables you to ensure that you fulfil your own responsibilities ethically, while also requesting accountability from other stakeholder groups. Together, this can ensure an ethically informed choice for the implementation of AIED. In the next subsection, we explain how AI GANESH can be used in practice.
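The responsibility assignments described above can be pictured as a simple lookup. The sketch below is purely illustrative (the function and variable names are our own, not part of AI GANESH); it uses the guideline codes assigned to students and educators in this subsection, with all remaining guidelines falling to the educational institution:

```python
# Illustrative mapping (our own representation, not an official artifact of
# the framework): guideline codes assigned to students and educators, as
# listed in the subsection above.
responsible_for = {
    "student": ["EN-H7", "EN-H9"],
    "educator": ["EN-G1", "EN-A3", "EN-A4", "EN-A5",
                 "EN-A6", "EN-A7", "EN-A8", "EN-H1"],
}

def responsible_group(code):
    """Return the stakeholder group primarily responsible for a guideline.

    Guidelines not explicitly assigned to students or educators are the
    responsibility of the educational institution."""
    for group, codes in responsible_for.items():
        if code in codes:
            return group
    return "educational institution"

print(responsible_group("EN-H9"))  # student
print(responsible_group("EN-N3"))  # educational institution
```

A lookup like this makes it easy, for any guideline in the rubric, to see which group to hold accountable for its implementation.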
Usage instructions
Ensuring ethical implementation of an AIED tool is ideally not a one-time process, but an iterative process involving multiple rounds of assessment and improvement. Furthermore, this process should not be a scoring system that provides a target metric value as it could lead to optimization for a target score (following Goodhart's law [2]). The ethical evaluation process involves assessing a tool, possibly identifying and making improvements, and then repeating this process until a desired outcome is achieved. So, this framework is designed to be used in an iterative process that gives recommendations at the end of each round of assessment.
The use of AI GANESH follows a series of steps, some of which are iterative. We list the steps below; examples for each of the stakeholder groups are given in the section ‘Usage examples’.
AI GANESH is to be used following these steps:
Step 1: Understand the AIED use case
Read the use case description thoroughly and understand how the AI tool is integrated in the specific educational context.
Step 2: Set an end-point for iteration
As the ethical evaluation process using AI GANESH is iterative, you should set an end-point for this iteration beforehand. This can be set based on the context and the ethical principles of the educational institution. You can also use our recommended end-point: a maximum of one guideline per value is marked with 'Level 3: Insufficient'. We recommend this end-point because we believe that all the values should be given equal importance, with no hierarchy between them.
Step 3: Assessment
Assess the implementation of the AIED tool using the rubric provided in the section 'Rubric'. To assess the ethical requirements of an AIED tool, you should select the level of implementation for each guideline that applies to the specific AIED tool. In case a guideline is not applicable to the given situation or insufficient information is available, you can mark it as 'Not applicable' (NA).
Step 4: Critical reflection on insufficient scores
For the guidelines that score 'Level 3: Insufficient' on the rubric, you are required to provide an explanation or a critical reflection on this. This is specifically required for Level 3 as it is not a desired level of implementation for a guideline and signals the need for some improvement. As the ethical compliance of an AIED tool is a continuous and cyclical process, you can make future improvements by implementing the guideline as stated in Level 1 of the corresponding guideline.
Step 5: Iteration
Follow Steps 3 and 4 iteratively until the end-point set in Step 2 is reached.
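The end-point check from Step 2 can be sketched in code. This is only an illustration under our own assumptions (the function name and data shapes are invented, not part of AI GANESH); it implements the recommended end-point of at most one 'Level 3: Insufficient' score per value:

```python
# Illustrative sketch of the recommended end-point check (Steps 2 and 5).
# 'scores' maps each guideline code to its assessed level (1, 2, 3) or
# None for 'Not applicable'; 'value_of' maps a guideline code to the
# ethical value it belongs to (e.g. "EN-G1" -> "Goodwill").

def end_point_reached(scores, value_of):
    """Return True if at most one guideline per value is scored Level 3."""
    level3_per_value = {}
    for code, level in scores.items():
        if level == 3:
            value = value_of[code]
            level3_per_value[value] = level3_per_value.get(value, 0) + 1
    return all(count <= 1 for count in level3_per_value.values())

# Example: two Goodwill guidelines are marked insufficient, so the
# end-point is not yet reached and another round of reflection and
# improvement (Steps 3 and 4) follows.
value_of = {"EN-G1": "Goodwill", "EN-G2": "Goodwill", "EN-A1": "Aptness"}
scores = {"EN-G1": 3, "EN-G2": 3, "EN-A1": 1}
print(end_point_reached(scores, value_of))  # False: two Level-3 scores in Goodwill
```

Note that this check yields only a stop condition for the iteration, not a target score, in line with the caution about Goodhart's law above.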
Rubric
Goodwill
Goodwill represents the intention to promote well-being and beneficence through AIED, while minimizing harms and negative consequences.
| Category | Level 1: Exemplary | Level 2: Fair | Level 3: Insufficient |
| --- | --- | --- | --- |
| Student-Centered Decision-Making (EN-G1) | Educators consistently guide AIED decisions around student needs, values, and priorities. | Educators sometimes influence AIED decisions with student needs in mind, but guidance is not consistently student-centered. | AIED decisions are driven by efficiency or administration, with minimal educator or student input. |
| AIED Adaptation & Educator Support (EN-G2) | Authorities and institutions allow education to adapt meaningfully to AIED while ensuring it supports educators' roles. | Some adaptation to AIED occurs; educator support is present but fragmented. | Adaptation is slow or resisted; educators may feel undermined or unsupported by AIED systems. |
| Strategic AI Promotion (EN-G3) | Institutions actively promote AIED and develop practical, research-based strategies for its integration in learning environments. | AI use is encouraged selectively with basic or informal strategies for application. | AI use is sporadic and uncoordinated; no guiding strategies or research-based frameworks exist. |
| Ethical System Design & Use (EN-G4) | Ethical considerations are deeply embedded in AIED tool design, procurement, and use. | Ethics are considered in principle, but application during AIED design/use is inconsistent or superficial. | Ethical concerns are not systematically addressed; AIED tools may be used without ethical review. |
Aptness for education
Aptness for education represents the alignment of AIED tools with educational values, learning goals and learner competences.
| Category | Level 1: Exemplary | Level 2: Fair | Level 3: Insufficient |
| --- | --- | --- | --- |
| Goal Alignment & Evidence (EN-A1) | AIED tools are closely aligned with learning goals, with strong evidence of enhancing learning outcomes. | AIED is mostly aligned with learning goals; some evidence exists but may not be rigorous. | AIED is used without alignment to clear goals or lacking supporting evidence. |
| Support for Diverse Skills (EN-A2) | AI tools are used to identify and support a wide range of student skills, including non-academic skills. | Some AI tools support varied skills, but focus is primarily academic. | AI tools are narrow in focus, often supporting only limited, traditional metrics. |
| Reflective Use by Teachers (EN-A3) | Teachers review and refine their own support methods before introducing AIED into instruction. | Teachers are aware of their role, but may not fully assess their current methods before AIED use. | AIED is introduced without prior reflection on teaching practices. |
| Instructional Alignment (EN-A4) | AIED tools are explicitly matched to curricular learning goals and come with measurable enhancements in learning. | Tools are somewhat aligned; goal matching is partial or implicit. | Alignment is absent or unclear; tools may distract from intended learning outcomes. |
| Skill Scaffolding (EN-A5) | Teachers first teach foundational skills before allowing students to delegate or enhance them with AIED tools. | Foundational instruction occurs but students may access AIED too early or inconsistently. | Students use AIED without learning core skills first. |
| Transparent Learning Goals (EN-A6) | Teachers clearly communicate learning goals to prevent students from using AIED to take learning shortcuts. | Learning goals are communicated, but student use of AIED to take shortcuts is not always addressed. | Learning goals are vague or unstated; students often use AIED as a shortcut. |
| Tool Evaluation (EN-A7) | Institutions and teachers critically assess the purpose, effectiveness, and necessity of each AIED tool before adoption. | Some tools are evaluated critically, but adoption may also be influenced by other factors. | Tools are adopted without questioning their purpose, relevance, or necessity. |
| Student AI Literacy (EN-A8) | Lessons actively incorporate AI concepts, with examples to help students build basic AI literacy. | Some instruction includes examples or discussion of AI, but not embedded across lessons. | AI concepts are rarely addressed in lessons; students are passive users of technology. |
Non-discrimination
Non-discrimination represents fairness, equity and equality in AIED applications.
| Category | Level 1: Exemplary | Level 2: Fair | Level 3: Insufficient |
| --- | --- | --- | --- |
| Diversity & Interdisciplinary Teams (EN-N1) | Institutions proactively build diverse and interdisciplinary AIED implementation teams. | Some efforts toward team diversity; disciplinary representation is unbalanced or incidental. | Teams are homogeneous; diversity and interdisciplinary input are not prioritized. |
| Education-to-Industry Support (EN-N2) | AIED tools are designed to support career readiness, providing clear pathways from education to industry. | Tools may support some career skills and partial industry alignment. | Career development and industry alignment are not considered in AIED design. |
| Bias Monitoring & Ethics (EN-N3) | Institutions routinely monitor AIED tools for bias and ensure transparent, ethical decision-making. | Limited bias checks occur; ethics is acknowledged but not always enforced. | No clear monitoring of bias or ethical review process; decisions are opaque. |
| Equity & Inclusion (EN-N4) | AIED tools are inclusive by design, ensuring equitable outcomes across all learner demographics and proactively avoiding discrimination. | Some groups are considered in design; inclusion is reactive or partial. | AIED tools fail to account for diverse learners, often reinforcing existing inequities. |
| Fairness & Access (EN-N5) | Institutions promote fairness and guarantee equal access to AIED tools. | Access is generally available, but inequalities exist and are not fully addressed. | Access is unequal or restricted; fairness is not actively promoted. |
| Stakeholder Involvement (EN-N6) | Authorities and educational institutions consistently involve diverse stakeholders, incorporate feedback, and facilitate open dialogue around AIED decisions. | Stakeholders are occasionally consulted, but participation is not structured or ongoing. | Stakeholder input is rare; decisions are made without open consultation. |
Explicability
Explicability represents the ability to explain an algorithm's workings, outputs and decisions in human terms, and to justify these when needed.
| Category | Level 1: Exemplary | Level 2: Fair | Level 3: Insufficient |
| --- | --- | --- | --- |
| Stakeholder Responsibilities & Governance (EN-E1) | Authorities have clearly defined and enforced stakeholder responsibilities, ensuring AI system stability, transparency, and user understandability. | Responsibilities are outlined but partially enforced; transparency and system clarity vary by context. | Stakeholder roles are unclear or unassigned; transparency and system behavior are opaque or inconsistent. |
| AI in Educational Practice (EN-E2) | Educational institutions actively research, adopt, and evaluate AI tools across learning, administration, and teaching. | Institutions use AI tools in some areas, though research or evaluation is inconsistent. | AI tools are used without research backing; adoption is limited and lacks strategy. |
| Developer Awareness of Impact (EN-E3) | AI designers have a deep understanding of how tools affect student learning. | Developers have some awareness of impacts. | Developers build tools with no awareness of educational impact. |
Stewardship of Data
Stewardship of Data represents various concepts related to data – ranging from accuracy to privacy to security to data transparency.
| Category | Level 1: Exemplary | Level 2: Fair | Level 3: Insufficient |
| --- | --- | --- | --- |
| Data Integrity & Context (EN-S1) | Institutions embed technical safeguards and promote contextual understanding of data to prevent misuse, tampering, or degradation. | Provides basic protection; encourages contextual understanding without systematic support. | Relies on basic IT safeguards; lacks context-based data handling. |
| Policy Influence (EN-S2) | Institutions collaborate with authorities to improve laws and safeguards around data, privacy, and IP. | Complies with current laws; limited engagement in broader policy discussions. | Reactively follows existing policies; not engaged in legislative or advocacy efforts. |
| Education on AI Risks & Bias (EN-S3) | Institutions provide education on bias and prediction risks, and promote transparency. | Occasional training or awareness programs; transparency is aspired to. | Limited or no education on AI risks; transparency not emphasized. |
| Ethical Data Use (EN-S4) | Institutions actively end contracts that misuse student data; enforce robust ethical AI policies. | Strengthens protections; contract review for misuse is occasional. | Ethical concerns are not regularly evaluated; contracts rarely reviewed. |
| Privacy & Data Usage (EN-S5) | Institutions find a balance between privacy and justifiable educational data use. | Educational data used with partial privacy considerations. | Educational data used without privacy protections. |
| Privacy-by-Design Integration (EN-S6) | Institutions fully integrate AIED into their own systems to ensure privacy-by-design. | Selective integration of AIED into institutional systems with partial privacy considerations. | AIED is integrated without privacy considerations. |
| Innovation & Testing Environment (EN-S7) | Institutions operate dedicated labs for structured AIED testing and deployment. | Pilots AIED in limited settings without a formal testing lab. | AIED tools deployed ad hoc without structured environments. |
| Correctness of Results (EN-S8) | Institutions ensure correctness of results of AIED tools, while reducing incorrect grouping of results. | Institutions partially check for correctness of results, with a few results being wrongly grouped. | AIED tool results are often incorrect or wrongly grouped. |
| Legal Compliance & Oversight (EN-S9) | Institutions enforce GDPR compliance and developer accountability through active monitoring. | Requires compliance but relies on external audits or occasional reviews. | Assumes compliance; lacks formal oversight mechanisms. |
Human oversight
Human oversight represents the necessity of human agency, responsibility and human intervention in AIED tools.
| Category | Level 1: Exemplary | Level 2: Fair | Level 3: Insufficient |
| --- | --- | --- | --- |
| Decision Verification (EN-H1) | All key AIED decisions are reviewed by teachers or qualified, fair individuals. | Verification occurs for most major decisions, but may lack consistency or full transparency. | AIED decisions are rarely or never reviewed by humans. |
| Human Connection (EN-H2) | AI is used to enhance scalability without losing meaningful human interaction in education. | AI helps with scaling, but some loss of personal connection is evident. | Scalability is prioritized; human connection is diminished or neglected. |
| Clear Role Allocation (EN-H3) | Responsibilities of AI and humans are clearly defined; human oversight is formally built into AIED operations. | Roles are mostly clear, but oversight may be informal or inconsistently applied. | AI and human roles are unclear or blurred; human oversight is lacking. |
| Professional Development (EN-H4) | Institutions actively support professionals in developing skills to understand AI and educational management. | Some training opportunities exist, but coverage is partial or voluntary. | No structured training; staff are unprepared to work effectively with AIED. |
| Learner Control (EN-H5) | AIED tools are designed to enhance student agency over their education. | Students can adjust some features, but choice and control are limited. | Learners use AIED passively; no meaningful control over tools or learning pathways. |
| AI Awareness & Impact Knowledge (EN-H6) | Both teachers and students have a basic understanding of AI and its impacts on education. | Some awareness among end users, but not widespread or deep. | Little to no understanding; AI is treated as a black box by users. |
| Student Accountability (EN-H7) | Students are encouraged to take responsibility for their own work even when supported by AIED. | Students are reminded occasionally of their role and of expectations around AI support. | Student responsibility is undervalued; AIED use may lead to overreliance. |
| Non-AI Alternatives (EN-H8) | Students are offered a non-AI option, and are clearly informed of the pros and cons of AIED tools. | Alternatives are sometimes offered, but not well explained or inconsistently available. | No alternative to AIED is offered; students are compelled to use AI tools without full understanding. |
| Informed Consent (EN-H9) | Students give informed, explicit consent before engaging with AIED systems. | Consent is implied or given without sufficient information. | No clear consent process; students are unaware of how their data or learning is handled by AIED. |
Usage examples
A few usage examples are listed on the page here.
Conclusion
After reading this guidebook, you will have gained an understanding of AI GANESH – the context, the underlying theoretical concepts and the steps to apply it in practice. You are now ready to apply this framework to any use-case of AIED, evaluate its ethical requirements and improve on them iteratively. We hope that this framework can help you think critically about the ethical concerns and issues that arise from AIED integration and make education safer and sounder.