
Fairness and Bias of Artificial Intelligence (AI) Technologies in Education: Challenges and Future Directions

February 2024 – January 2026

Objective
Artificial Intelligence (AI)-based applications in higher education, particularly in Science, Technology, Engineering, and Mathematics (STEM), have grown rapidly in recent years. Educators are on the front lines of this process: they are tasked with acquiring a sufficient understanding of AI to become proficient users and teachers of it. It is therefore crucial to ensure they can use AI tools responsibly. This project addresses this challenge by focusing on fairness in STEM education. We ask: how do STEM lecturers interpret algorithm-generated recommendations, and how can they ensure those recommendations are trustworthy? More specifically, we examine under what conditions STEM lecturers are willing to use AI tools and whether cultural norms regarding algorithmic fairness shape their decisions.

Background
The rapid evolution of AI tools, especially with the emergence of generative AI, is revolutionizing education technology. While this transformation has the potential to enhance educational practices significantly, it also raises concerns about AI fairness and the risk of algorithmic bias that could harm disadvantaged students. Fairness in AI contexts means algorithmic decisions should not create discriminatory or unjust consequences. What is considered fair might also differ between contexts. For example, what is deemed fair in one culture may not be considered fair in another.

Within a shared culture, focal points emerge around values such as individualism, equality, and uncertainty avoidance, so perceptions of fairness can vary depending on cultural norms. It is therefore crucial to consider these cultural nuances when developing AI systems, particularly those involved in decision-making processes. There is a need to create AI tools that are not only innovative but also equitable, bridging rather than widening educational gaps.

About the Digital Futures Postdoc Fellow
Before joining KTH and Digital Futures, Yael was a PhD student and postdoctoral fellow in the Department of Science Teaching at the Weizmann Institute of Science. During that time, she also taught at the Open University of Israel. In her dissertation, conducted in the Chemistry Group, she examined the learning behaviours of students and teacher-learners in online, information-rich environments. In her postdoctoral research, as part of the “Computational Approaches in Science Education (CASEd)” group, she studied the integration of AI technologies in science education, focusing mainly on trust and explainability.

Yael Feldman-Maggor has expertise in advanced quantitative and qualitative methods. Her main research interests are (1) education technology, (2) self-regulated learning, (3) integrating artificial intelligence in science education, and (4) learning analytics and their application to chemistry education. Before starting her academic career, Yael worked in the health sector, developing blended learning strategies for medical professionals. Yael is an editorial board member of the International Journal of Science Education.

Main supervisor
Olga Viberg, Associate Professor, KTH Royal Institute of Technology, EECS – School of Electrical Engineering and Computer Science, Media Technology & Interaction Design

Co-supervisor
Teresa Cerratto Pargman, Professor in HCI, Department of Computer and Systems Sciences (DSV), Stockholm University, Associate Director Societal Outreach Digital Futures

Contacts

Yael Feldman-Maggor

Digital Futures Postdoctoral Fellow: Fairness and Bias of Artificial Intelligence (AI) Technologies in Education: Challenges and Future Directions

yaelfm@kth.se

Olga Viberg

Associate Professor, Division of Media Technology and Interaction Design at KTH, Working group Educational Transformation, Main supervisor: Fairness and Bias of Artificial Intelligence (AI) Technologies in Education: Challenges and Future Directions, Former PI: Responsible Digital Assessment Futures in Higher Education (REFINE), Former Co-supervisor: Privacy of online proctoring systems in higher education settings, Digital Futures Faculty

+46 8 790 68 04
oviberg@kth.se

Teresa Cerratto Pargman

Professor, Human-Computer Interaction (HCI), Dept. of Computer and Systems Sciences at Stockholm University (SU), Faculty of Social Sciences at Stockholm University, Member of the Executive Committee, Associate Director Societal Outreach, Working group Educational Transformation, Co-supervisor: Fairness and Bias of Artificial Intelligence (AI) Technologies in Education: Challenges and Future Directions, Co-PI: Transforming Higher Education Practice Through Generative-AI, Former Co-PI: Responsible Digital Assessment Futures in Higher Education (REFINE), Former Main supervisor: Privacy of online proctoring systems in higher education settings, Digital Futures Faculty

+46 (0)73 460 57 47
tessy@dsv.su.se