About the project
Objective
Online proctoring systems (OPS) in higher education settings are evolving fast. Their use is driven by the need to preserve the academic integrity of online assessment, particularly during the COVID-19 pandemic and the post-pandemic period. Yet the acceptance of and trust in these tools are hindered by several ethical challenges, with students’ privacy foremost among them. This postdoc research project aims to identify the main privacy issues around the use of OPS in higher education and how they can be addressed. The project will offer higher education institutions a privacy protection framework to be considered in educational and design practices to address the identified challenges.
About the Digital Futures Postdoc Fellow
Chantal Mutimukwe is a postdoc researcher at the Department of Computer and Systems Sciences (DSV), Stockholm University. Before joining Stockholm University, she worked at KTH Royal Institute of Technology. Chantal received her PhD in informatics from Örebro University in 2019.
Her PhD research concerned the protection of information privacy in an e-government context. The main research goal was to provide an understanding of how the information collection and dissemination practices of government organizations can be reconciled with the protection of citizens’ privacy. Her primary research interest is data privacy and security protection in an online service context.
Main supervisor
Teresa Cerratto-Pargman, Professor, Human-Computer Interaction (HCI), Stockholm University.
Co-supervisor
Olga Viberg, Associate Professor, Division of Media Technology and Interaction Design, KTH.
Watch the recorded presentation at the Digitalize in Stockholm 2023 event.
About the project
Objective
Artificial Intelligence (AI)-based applications in higher education, particularly in Science, Technology, Engineering, and Mathematics (STEM), have grown rapidly in recent years. Educators are on the front lines of this process: they are tasked with acquiring a sufficient understanding of AI to become proficient users and educators. It is therefore crucial to ensure they can use AI tools responsibly. This project addresses this topic by focusing on fairness in STEM education. We ask: how do STEM lecturers interpret algorithm-generated recommendations, and how can they ensure these recommendations are trustworthy? More specifically, we examine under what conditions STEM lecturers are willing to use AI tools and whether cultural norms regarding algorithmic fairness shape their decisions.
Background
The rapid evolution of AI tools, especially with the emergence of generative AI, is revolutionizing education technology. While this transformation has the potential to enhance educational practices significantly, it also raises concerns about AI fairness and the risk of algorithmic bias that could harm disadvantaged students. Fairness in AI contexts means algorithmic decisions should not create discriminatory or unjust consequences. What is considered fair might also differ between contexts. For example, what is deemed fair in one culture may not be considered fair in another.
Within a shared culture, focal points emerge around values such as individualism, equality, and uncertainty avoidance, so perceptions of fairness can vary depending on cultural norms. It is therefore crucial to consider these cultural nuances when developing AI systems, particularly those involving decision-making processes. There is a need to create AI tools that are not only innovative but also equitable, bridging rather than widening educational gaps.
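To make the notion of algorithmic fairness above more concrete, here is a minimal sketch of one common group-fairness metric, the demographic parity difference, applied to a hypothetical recommender that decides which students are shown an AI-generated recommendation. The metric choice, variable names, and data are illustrative assumptions, not part of the project, which stresses that what counts as fair is itself context- and culture-dependent.

```python
# Illustrative sketch only: demographic parity difference for a hypothetical
# recommender. A value of 0 means both groups receive positive decisions at
# the same rate; what threshold counts as "fair" is a contextual, normative choice.
import numpy as np

def demographic_parity_difference(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between group 0 and group 1."""
    rate_0 = decisions[groups == 0].mean()
    rate_1 = decisions[groups == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical data: 1 = recommendation shown to the student, 0 = not shown.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(decisions, groups))  # 0.5 in this toy example
```

Even such a simple measure embeds normative choices (which groups to compare, which outcome counts as positive), which is exactly the kind of culturally contingent judgment the project examines.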
About the Digital Futures Postdoc Fellow
Before joining KTH and Digital Futures, Yael Feldman-Maggor was a PhD student and postdoctoral fellow in the Department of Science Teaching at the Weizmann Institute of Science. During that time, she also taught at the Open University of Israel. In her dissertation, conducted in the Chemistry Group, she examined the learning behaviours of students and teacher-learners in online, information-rich environments. In her postdoctoral research, as part of the “Computational Approaches in Science Education (CASEd)” group, she studied the integration of AI technologies in science education, focusing mainly on trust and explainability.
Yael has expertise in advanced quantitative and qualitative methods. Her main research interests are education technology, self-regulated learning, the integration of artificial intelligence in science education, and learning analytics and their application to chemistry education. Before starting her academic career, Yael worked in the health sector, developing blended learning strategies for medical professionals. She is an editorial board member of the International Journal of Science Education.
Main supervisor
Olga Viberg, Associate Professor, School of Electrical Engineering and Computer Science (EECS), Division of Media Technology and Interaction Design, KTH.
Co-supervisor
Teresa Cerratto Pargman, Professor in HCI, Department of Computer and Systems Sciences (DSV), Stockholm University.
About the project
Objective
My research investigates children and digitalization for more sustainable futures. It draws upon feminist ethics of care and more-than-human theories of collaborative survival to examine the roles of new technology in and for multi-species flourishing. This will be done through design-based activities (i.e., research-through-design) situated around topics such as human-waste relations, local ecosystems, and nature appreciation.
Background
This research is motivated by a concern for a damaged environment and is oriented towards children as inhabitants and caretakers of its future. It is significant for the following reasons. Firstly, its focus on children matters when considering new paradigms of digital tools and the long-term role of digitalization in everyday life. Secondly, its relational grounding within theories of care provides a lens for considering humans as interconnected with non-humans, which is important for developing an understanding of designing with distributed and networked digital materials.
Thirdly, its emphasis on nature as critical to the health and well-being of all species highlights an important and often overlooked context for digitalization, which is significant for responsibly expanding digital interactions into outdoor and non-urban environments.
About the Digital Futures Postdoc Fellow
Karey Helms is an interaction designer and design researcher at KTH. Her PhD research draws upon care ethics and posthuman feminism to investigate how interaction design might be otherwise amid a world in crisis. This includes ongoing interests in living materials, human bodily fluids, and ontological design. Through autoethnographic and speculative design methods, she implicates herself and unsettles bodily boundaries in pursuit of more careful technology design. Link to the website of Karey Helms.
Main supervisor
Airi Lampinen, Stockholm University
Co-supervisor
Meike Schalk, KTH
Watch the recorded presentation at the Digitalize in Stockholm 2023 event.
About the project
Objective
Large Multimodal Models (LμMs) have the potential to transform engineering education by supporting hands-on, experiential learning. LμMs can process images, audio, video, and other data types, making them ideal for supporting physical engineering design tasks. However, these tools must be carefully designed to align with educational theories and to support, rather than hinder, student learning. This project aims to develop and evaluate a pedagogically aligned virtual teaching assistant (μTA) powered by LμMs to support problem-solving with physical systems in real-world settings for engineering education. The project addresses the challenges students face when dealing with complex, ill-defined problems in engineering design courses and other experiential learning contexts, as well as the limitations of current AI tools in these settings.
Background
Generative AI tools, such as large language models (LLMs), have revolutionized education but remain largely confined to screen-based, text-centric tasks such as programming and writing. Recent advances in Large Multimodal Models (LµMs) enable the processing of diverse inputs, such as text, images, and videos, offering opportunities to extend AI’s benefits to experiential learning environments like workshops and labs. While current research focuses on screen-based applications, little is known about how LµMs can support the hands-on, ill-defined problem-solving tasks that are central to engineering education. This project pioneers the integration of LµMs into these settings, co-designing tools with students and educators to foster skills critical for engineering innovation and for students’ success in their studies and future work.
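As a purely illustrative sketch of the kind of multimodal interaction such a teaching assistant could build on, the snippet below sends a photo of a student’s physical prototype together with a question to a vision-capable chat model via the OpenAI Python SDK. The provider, model name, file name, and prompts are assumptions made for illustration; the project does not prescribe a particular model, API, or pedagogical strategy.

```python
# Illustrative sketch only: querying a multimodal model with an image of a
# student's physical prototype plus a question. Provider, model, file name,
# and prompts are assumptions; the project does not specify any of these.
import base64
from openai import OpenAI  # assumes the `openai` package and an API key are available

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("prototype_photo.jpg", "rb") as f:  # hypothetical photo of a student build
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model would do
    messages=[
        {
            "role": "system",
            "content": "You are a teaching assistant in an engineering design course. "
                       "Ask guiding questions rather than giving away solutions.",
        },
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "My linkage binds when the arm is fully extended. What should I check?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        },
    ],
)
print(response.choices[0].message.content)
```

The system prompt hints at the pedagogical-alignment question at the heart of the project: how such an assistant is instructed and evaluated determines whether it supports or short-circuits student learning.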
Crossdisciplinary collaboration
The project is led by two principal investigators from the KTH Royal Institute of Technology: Associate Professor Olga Viberg (Human Centered Technology/EECS) and Assistant Professor Richard Lee Davis (Learning in Engineering Sciences/ITM). This cross-disciplinary collaboration integrates Viberg’s expertise in the design and evaluation of educational technologies—with a strong focus on AI adoption in STEM education and participatory design methods—with Davis’s experience in designing AI-driven tools for experiential learning, integrating multimodal systems, and advancing pedagogical alignment for generative AI technologies.
About the project
Objective
The LATEL project aims to harness the potential of data generated by educational technologies to enhance the quality of education. The primary objectives are to identify and retain students at risk of dropping out, motivate learners to achieve their educational goals, and support teachers in refining learning designs. By addressing the practical challenges of implementing learning analytics (LA) in educational institutions, the project seeks to develop a systemic, use-case-based approach that demonstrates how data and evidence can be used for informed decision-making. This involves showcasing the application of LA in a real KTH course, exploring its potential in a new KTH program, and examining the legal and ethical frameworks governing data use in learning analytics. Ultimately, the project aims to clarify the legal landscape and promote the value of LA in shaping the future of engineering education, providing a roadmap for data-driven insights and solutions to the policy-related obstacles that impede the implementation of LA at universities.
Background
Learning Analytics (LA) is an interdisciplinary field that combines data science, psychology, education, and computer science to optimize learning experiences. By analyzing data from online learning platforms, student information systems, and other sources, LA provides insights into student behavior, learning processes, and institutional performance. Despite its potential to personalize learning and identify at-risk students, the practical application of LA faces significant challenges, particularly related to data identification, curation, and legal and ethical compliance. Many educational institutions struggle with poor student throughput and funding issues, highlighting the need for effective LA solutions.
Current research often focuses on empirical studies and offers little practical guidance for implementing LA in academic settings. The LATEL project addresses these gaps by proposing a design-based, iterative approach that demonstrates how data can be used to enhance the quality of teaching and learning. By exploring legal, ethical, and practical issues, the project aims to provide actionable insights for educators and policymakers, ultimately transforming education through data-driven decision-making.
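To make concrete what using data to identify at-risk students can look like in practice, here is a minimal, hypothetical sketch: a logistic regression over a few engagement features exported from a learning platform. The feature names, data, and risk scores are illustrative assumptions and do not represent the project’s actual method, course, or data.

```python
# Minimal illustrative sketch (not the project's method or data): flagging
# potentially at-risk students from a few engagement features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical export from a learning platform: one row per past student.
history = pd.DataFrame({
    "logins_per_week":       [5, 1, 4, 0, 6, 2, 3, 0],
    "assignments_submitted": [8, 2, 7, 1, 9, 3, 6, 0],
    "forum_posts":           [4, 0, 3, 0, 5, 1, 2, 0],
    "dropped_out":           [0, 1, 0, 1, 0, 1, 0, 1],  # label from past cohorts
})

features = ["logins_per_week", "assignments_submitted", "forum_posts"]
model = LogisticRegression().fit(history[features], history["dropped_out"])

# Score the current cohort (also hypothetical) and report a dropout-risk estimate.
current = pd.DataFrame({
    "logins_per_week":       [1, 5],
    "assignments_submitted": [2, 8],
    "forum_posts":           [0, 4],
})
current["dropout_risk"] = model.predict_proba(current[features])[:, 1].round(2)
print(current)
```

In practice, such scores would serve only as one input to human decision-making and would be bounded by the legal and ethical frameworks the project examines.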
Crossdisciplinary collaboration
The LATEL project brings together a diverse team of experts from various fields to tackle the complexities of implementing learning analytics in educational settings.
- Dr. Mattias Wiggberg, the principal investigator, holds a PhD in Computer Science Didactics and has extensive experience in digital transformation and the role of AI in society. He also contributes expertise in skills development and education policy work.
- Dr. Joakim Lilliesköld, an Associate Professor in Systems Engineering Management, contributes his knowledge of engineering education development and digitalization, focusing on legal and system challenges.
- Dr. Olga Viberg, an Associate Professor in Media Technology, specializes in Technology Enhanced Learning and will guide the empirical case study on learning analytics at KTH.
- Dr. Thashmee Karunaratne, an Associate Professor in Digital Learning, brings her background in machine learning and computer science to explore digital transformation and data analytics.
- Dr. Stefan Hrastinski, a Professor with a focus on Digital Learning, offers his extensive research experience in digital learning and learning analytics.
This cross-disciplinary collaboration, supported by the Digital Futures Education Transformation Working Group, ensures a comprehensive approach to addressing the project’s objectives and achieving meaningful educational transformation.
About the project
Objective
To understand how generative AI tools can be used by staff and students in the context of higher education. The project addresses three areas:
- Evaluate how such tools can be used by students to improve their productivity and learning outcomes
- Characterise how the technologies can be used by academic staff to transform education and assessment practices
- Provide guidance to university leadership regarding the regulation of the use of such tools, as well as capacity-building initiatives that should be undertaken
Background
The sophistication of the latest generation of AI tools far exceeds that of previous generations, and from an educational assessment perspective their output is both sophisticated and hard to detect. The realistic nature of the output is a product of the complexity of the systems and the scope of the data on which they have been trained. Like many tools before it, generative AI will transform our approach to education.
Crossdisciplinary collaboration
The project combines Human-Computer Interaction and Education research competence from KTH and SU to address the societal and technological aspects of integrating generative AI into educational practices. By taking a multi-disciplinary approach, the team is able to explore in depth both the technological and the educational dimensions of the use of AI, helping to craft the educational experience of the future.
About the project
Objective
The Responsible Digital Assessment Futures in Higher Education (REFINE) project aims to envision digital assessment futures by unpacking the opportunities and challenges that digital assessment brings to higher education for students’ improved learning. It specifically focuses on stakeholders directly and indirectly involved in the design and implementation of effective and responsible digital assessment practices. The project addresses the following research questions:
- What are teachers’ experiences of digital assessment in higher education?
- What are the key opportunities and challenges (e.g., legal, ethical, organizational, technical, and pedagogical) of trusted digital assessment?
In this project, a mixed-method, human-centred approach will be undertaken. An online national survey will be distributed to educators across several higher education institutions in Sweden. This will be followed by two workshops with stakeholders – educators, lawyers, managers, and system developers – involved in designing and implementing digital assessment in higher education. By adopting a multi-stakeholder perspective, we aim to address several key challenges of digital assessment and suggest ways forward for improved student learning, learner support, and teaching in the post-pandemic higher education setting.
The project will be a collaborative effort by a group of researchers from KTH and Stockholm University, representing the departments of Digital Learning, Media Technology and Interaction Design, and Network and Systems Engineering at KTH and the Department of Computer and Systems Sciences at Stockholm University. All project members have complementary expertise and knowledge in technology-enhanced learning, organizational aspects of higher education, assessment, learning analytics, and responsible approaches to the use of digital tools and student data in higher education.
The outcomes of this seed project will contribute to digital assessment research and practice.
- First, it will enhance our understanding of current digital assessment practices and their associated challenges and opportunities in the setting of Swedish higher education.
- Second, based on this understanding, a set of recommendations for sustainable future digital assessment practices will be offered.
- Third, it will provide a foundation for creating and advancing competitive research proposals on digital assessment futures.
Background
Effective assessment practices are central to student learning and academic achievement. While online assessment forms had, to some degree, been present prior to the COVID-19 pandemic, traditional examination forms held a strong position within higher education and were often the preferred way of assessing students. Restrictions during the COVID-19 pandemic led to a dramatic increase in the use of digital assessment, including formative and summative digital assessment practices in higher education.
In this regard, scholars and practitioners have demonstrated some of the advantages of adopting digital assessment tools, such as personalized and accurate assessment, but have also raised concerns, including issues of ethics, privacy, fairness, trust, and security associated with the use of digital assessment tools (e.g., proctoring software).
Crossdisciplinary collaboration
The researchers in the team represent KTH Royal Institute of Technology and Stockholm University.