Photo: Asreen Rostami

Championing Human-Centred Cybersecurity: Asreen Rostami’s Journey from Digital Futures to RISE

Cybersecurity is often viewed as a purely technical field, but Asreen Rostami’s work challenges this perception. During her postdoctoral fellowship at Digital Futures, she focused on making cybersecurity more inclusive, integrating feminist principles such as diversity, autonomy, respect, and consent into digital security frameworks. Her research explored the human aspects of Internet of Things (IoT) security, particularly its impact on marginalized groups.

Now a Senior Researcher at RISE, Asreen leads work in Human-Centred Cybersecurity, where she integrates social, design, and technical perspectives. Alongside this, she also conducts research on extended reality (XR), exploring its design and the diverse ways of interacting within these immersive environments.

In this interview, Asreen shares insights into her research journey from postdoctoral fellow at Digital Futures to Senior Researcher at RISE, the challenges of building gender-inclusive cybersecurity, and how her time at Digital Futures shaped her current role.

Your research at Digital Futures focused on gender-inclusive cybersecurity. What were the key findings and challenges you encountered?

My research revealed that cybersecurity strategies and systems are not always designed with sensitivity to gender differences. As an HCI researcher, I have focused on cybersecurity as it relates to everyday users rather than corporate contexts. For instance, the kinds of cybersecurity required in domestic settings – such as securing smart home devices – present very different challenges and implications.

One key finding was that these systems often reflect a broader trend in technology design: they are developed through a lens that assumes a default user, typically male and often Western. While cybersecurity is marketed and engineered to protect “everyone,” that generalisation tends to overlook the diverse lived experiences and vulnerabilities of different users, particularly women and marginalised groups.

This issue becomes starkly visible in scenarios such as the use of indoor security cameras. Although these systems may be technically secure and functionally effective, if compromised, they can pose different levels and types of risk depending on who is present in the home. For example, women and children may experience heightened vulnerability compared to men, due to differences in social context, threat perception, and power dynamics.

One of the challenges in addressing this is that cybersecurity continues to be framed as a purely technical problem rather than a socio-technical one. A gender-inclusive approach requires a shift in mindset – one that recognises how social identity and everyday contexts shape the meaning and effectiveness of security. This is not just about designing better tools, but about rethinking who we imagine as the user, and whose safety we prioritise.

How did your work on humanising IoT security contribute to a broader understanding of cybersecurity from a human-centred perspective?

Much of the conversation around cybersecurity frames humans as the weakest link – liable to make errors that compromise otherwise secure systems. While this may be partially true, relying solely on that narrative is both reductive and unhelpful. In my work, I have challenged this and advocated for a shift in mindset: rather than designing systems that constrain or correct human behaviour, we should design systems that support people in shaping security together, recognising their diverse needs, motivations, and levels of understanding.

A human-centred approach calls for systems that accept and adapt to the unpredictability of human interaction. For example, in one of our studies, we found that users frequently attempted to hack or modify their own IoT devices. Their reasons ranged from curiosity and the desire to get better performance from the products they had invested in, to improving their quality of life or health – particularly in the case of medical technologies. Despite strong security features, people still found ways to reconfigure or override these devices to make them work for their real-life needs. Rather than viewing these users as threats, we should question why the system failed to support them in the first place.

Labelling such behaviour as security violations overlooks the unmet needs and agency of users, forcing them into DIY workarounds that may unintentionally compromise security. This reveals a gap in design thinking: manufacturers must recognise that their products will not always be used as intended, and security strategies must be robust enough to accommodate such creative uses.

It is also important to scrutinise the intent behind the design of security mechanisms. These are often presented as being in the user’s best interest, but this perspective warrants deeper examination. In some cases, such mechanisms appear to serve the financial goals of companies more than the safety of users – by restricting autonomy, enforcing closed ecosystems, or discouraging the use of third-party solutions. Here, security becomes not just a technical feature, but a tool of control, shaped as much by economic and political considerations as by concern for human well-being.

This points to a broader need to question who benefits from current security practices. Is the design truly guided by the intention to protect people, or is it primarily driven by economic or political strategies? By addressing these questions, we can move towards a cybersecurity paradigm that recognises users not as liabilities, but as active participants in shaping secure and empowering digital technologies.

What skills, collaborations, or insights gained during your postdoc fellowship at Digital Futures have been most valuable in your transition to leading Human-Centred Cybersecurity research at RISE?

Although I already had established research connections with several units at RISE, including the cybersecurity team, the postdoc fellowship at Digital Futures significantly expanded both the depth and breadth of my engagement. It gave me the opportunity to participate in a variety of interdisciplinary projects and to interact with a broader range of stakeholders interested in human-centred approaches to cybersecurity. This access to a larger pool of case studies and a wider network of experts proved essential in shaping a more comprehensive and practice-oriented research agenda.

The fellowship also provided the momentum needed to elevate human-centred cybersecurity from a peripheral interest to a more recognised research focus within RISE. While related topics had been explored by colleagues in various projects, they were often treated as secondary to technical concerns. The postdoc created a platform to foreground this area and demonstrate its value alongside traditional cybersecurity efforts.

Through this role, I also gained experience in managing projects, leading research initiatives, and organising events that brought together academics, practitioners, and policy stakeholders. These skills have been crucial in my transition to a more senior role and as an active and visible researcher in this field, where coordination across disciplines and sectors is essential. The fellowship allowed me to help position human-centred cybersecurity as an integrated part of RISE’s research strategy – building on existing expertise while introducing new methods, questions, and collaborative models.

How do you see the field of cybersecurity evolving in terms of inclusivity and human-centred design in the coming years?

The field is gradually improving in terms of inclusivity and attention to human-centred design, but progress is uneven and often shaped by broader trends – economic, geopolitical, and institutional. Unfortunately, these forces do not always favour inclusive or reflective approaches.

That said, there is growing recognition of the need for inclusivity within cybersecurity research. An increasing number of researchers and practitioners are calling attention to these issues, raising the alarm about the limitations of current models and advocating for systems that are more equitable and responsive. It is encouraging to see these perspectives gaining ground.

There is also rising interest – both nationally and internationally – in integrating the human dimension into cybersecurity. We can see this in various policy shifts and research agendas, including within the European Union, which is making visible efforts to embed human-centred considerations into its digital and security strategies. These developments suggest that change is not only possible but also increasingly seen as necessary.

However, the attention given to cybersecurity tends to intensify in response to crises or heightened threats. This presents a bittersweet situation for those of us working in the field. On one hand, greater visibility and urgency can lead to more progressive research across diverse areas of cybersecurity. On the other hand, this surge in interest is often a sign that something has already gone wrong – that security has been breached, trust has been damaged, or societal harms have occurred.

Ultimately, while we are moving forward, sustained progress in inclusivity and human-centred design will depend on long-term commitment rather than reactive shifts. We need frameworks that anticipate people’s real-world needs and vulnerabilities, rather than addressing them only after harm has emerged. This means embedding human values into the foundations of cybersecurity – not just its aftermath.

What are your current research priorities at RISE, and how do they build on your work from Digital Futures?

My current research at RISE centres on two main topics within the HCI domain, both of which are closely connected to my work at Digital Futures. The first is human-centred cybersecurity, which grew directly out of my postdoctoral research at Digital Futures and my ERCIM fellowship. The second focuses on the design and user experience of extended reality (XR), a line of work that began during my PhD and continued through subsequent projects at RISE.

More recently, this work has also begun to explore the use and implications of AI within XR environments, including the design of, and interaction with, AI agents. Human-centred cybersecurity remains a relatively new research focus both within RISE and more broadly in Sweden. Together with academic colleagues from Stockholm University, I am working to raise the visibility of this field within both the research and higher education communities.

At the same time, with support from the RISE Cybersecurity Center, I am working to communicate the value of this field to the broader cybersecurity ecosystem in Sweden and across the Nordic region. The goal is to support both companies and society in improving security practices and in developing more resilient systems and strategies that take human experience seriously.

In the area of XR, I am currently involved in two ongoing projects funded by Digital Futures, in collaboration with a colleague at KTH, as well as several other projects supported by external funding bodies. In parallel, I am contributing to the establishment of XR Sweden, which is intended to be a national research and innovation platform funded by Vinnova. I am also leading the development of a new research collaboration between Sweden and Brazil on XR and AI, funded by Riksbankens Jubileumsfond.

These projects and initiatives involve collaboration across academia, industry, and public sector stakeholders, including several municipalities in Sweden.

While these two areas of research (XR and human-centred cybersecurity) remain distinct in many respects, I anticipate that parts of each will begin to converge over time. For example, there is increasing interest in the cybersecurity of XR systems and environments, and human-centred cybersecurity becomes especially relevant as we design and interact within these emerging realities, which involve multiple users, sensory data collected from people, and novel applications and interaction modalities.

Embedding inclusive and feminist perspectives in both the design of immersive systems and their security and privacy mechanisms is a direction I am actively pursuing. Although not all aspects of my research will overlap, the potential for cross-pollination between these fields is growing – and I see that as an opportunity to shape a more ethical and inclusive technological future.

Text: Johanna Gavefalk
