Utilising AI to Enhance Student Survey Analysis and Maximise Your Insight

Effectively analysing open-ended feedback from students is a significant challenge facing universities today. The emergence of AI, however, presents a promising opportunity in feedback analytics, particularly in higher education.

At the University of Westminster, we have been thematically analysing open comments from student surveys for a number of years. A major issue we faced, however, was our limited in-house capacity to make full use of the wealth of student feedback we receive from both external and internal surveys: sources from which we can draw insight and cascade recommendations at all levels of influence.

For a long time, the University’s Institutional Research team only had scope to focus on the National Student Survey (NSS) free-text comments. Working alone, our Senior Institutional Researcher, Kirsty Bryant, recruited student researchers to aid this process. Training, upskilling and deploying the students took in the region of six months. This was a long and laborious task – especially compared with quantitative insight, which often reaches decision-makers very rapidly – and to maximise our capacity to understand and act on student feedback, we knew we needed a new approach.

 

Deploying AI to process NSS open-comment responses

Westminster has become one of the first UK universities to use AI software (Explorance MLY) to analyse the results of the NSS, one of the sector’s pre-eminent student surveys, supporting our need for enhanced qualitative analysis of the student experience.

Having recognised the need to change our practice through a more innovative approach, we are already seeing the rewards. As a headline impact: prior to the 2023-24 academic year, we manually conducted a thematic analysis of NSS responses from about 3,000 students per year, developing themes and uncovering patterns in the open-comment responses.

Now, using AI, we have gone from analysing around 6-7k NSS comments per year to analysing in the region of 32-35k comments across multiple internal and external datasets. This rapidity, efficiency and real-time impact will only continue to scale with MLY.

To refine MLY and test its reliability against our institutional knowledge, we have also re-uploaded all our qualitative data from previous years’ NSS results. We now have those retrospective analyses and can identify longer-term trends to assess whether what we are doing is moving the student experience in the right direction.
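MLY’s internal models are proprietary, but as a rough illustration of the kind of back-testing described above, the Python sketch below compares AI-assigned themes with historical manual coding. The file names and column names are hypothetical assumptions, not MLY’s actual export format.

```python
# Minimal sketch: back-testing AI-assigned themes against manual coding.
# File names and column names are hypothetical; real exports will differ.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Historical NSS comments coded manually by the research team
manual = pd.read_csv("nss_2022_manual_themes.csv")  # columns: comment_id, theme
# The same comments re-uploaded and themed by the AI tool
ai = pd.read_csv("nss_2022_ai_themes.csv")          # columns: comment_id, theme

# Align the two codings on comment_id before comparing
merged = manual.merge(ai, on="comment_id", suffixes=("_manual", "_ai"))

# Raw agreement, plus chance-corrected agreement (Cohen's kappa)
agreement = (merged["theme_manual"] == merged["theme_ai"]).mean()
kappa = cohen_kappa_score(merged["theme_manual"], merged["theme_ai"])

print(f"Raw agreement: {agreement:.1%}")
print(f"Cohen's kappa: {kappa:.2f}")  # above ~0.6 is often read as substantial
```

Chance-corrected agreement matters here because raw agreement can look flattering when a handful of themes dominate the comments.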

This approach has provided a platform to triangulate Westminster’s emergent quantitative and qualitative data more capably with the national picture and to match the rapidity of both sets of outputs. The ability to combine the various data types has allowed for a far more streamlined reporting process that empowers us to make sound recommendations in line with the University strategy, transforming insight into action through College Executive Groups, the University Planning Committee and the University Executive Board, amongst others. At pace, we can couple the “what” of the data with the “why”.

 

Extending AI capabilities to all major surveys

The AI gives us greater capacity. We can now provide insight for a number of our surveys, using MLY’s Student Experience Insights and Employee Experience Insights coding frameworks. This enables us to be more responsive: we can identify in-year issues and intervene quickly.

Bringing in the qualitative data from our major surveys gives us the capacity to map feedback across different levels of study and different time periods, allowing us to identify patterns and trends. We can segment the data to better understand different student journeys and develop measures to create a successful experience for all our students.
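As a hedged illustration of what such segmentation can look like in practice (not MLY’s own interface, and with invented file and column names), a short pandas sketch might group coded comments by level of study to surface where negative sentiment clusters:

```python
# Sketch: segmenting coded survey comments by level of study.
# All file and column names are hypothetical assumptions.
import pandas as pd

comments = pd.read_csv("coded_survey_comments.csv")
# expected columns: survey, wave, level_of_study, theme, sentiment
# where sentiment is "positive", "negative" or "neutral"

# Share of negative comments per theme, split by level of study
negative_share = (
    comments
    .assign(is_negative=comments["sentiment"].eq("negative"))
    .groupby(["level_of_study", "theme"])["is_negative"]
    .mean()
    .rename("negative_share")
    .reset_index()
)

# Surface the three most negative themes for each level of study
top_issues = (
    negative_share
    .sort_values("negative_share", ascending=False)
    .groupby("level_of_study")
    .head(3)
)
print(top_issues)
```

The same grouping could be run over survey waves rather than levels of study to track whether an in-year intervention shifts the pattern.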

We now share high-level analysis of the open comments almost as quickly as we deliver the quantitative aspects of our surveys. This year, we were able to provide top-level qualitative insights in the same week as quantitative insights for the NSS, allowing for better planning for an improved student experience in the upcoming academic year.

However, without action and accountability, this analysis will not realise its potential. A culture shift in listening to and acting on qualitative insight is required, and stakeholders must buy in if we are to make the most of AI and its outcomes. As this is achieved, we expect to see a change in future survey results as the impact manifests in tangible improvements to the student experience.

 

Capacity building and institutional culture change

Since implementing the machine learning software, we have added to and diversified our portfolio of qualitative analyses to include subject module evaluations, colleague well-being surveys, and secondary datasets.

Numerically, this is a huge difference in the volume of comments analysed, and a major difference in insight, bringing together various aspects of the student experience. We have created environments where students can feel genuinely heard and valued. This has meant fostering open communication channels and proactively demonstrating a commitment to acting upon the feedback we receive rather than simply collecting it.

Students rightly expect their feedback to lead to tangible improvements in teaching, resources and support services. Our commitment to responsiveness fosters a culture of trust and collaboration between students and the institution. It is one thing to open a feedback loop; it is a greater challenge to close it, and to close it with evidenced impact.

Using AI and machine learning to aid our analysis of open-ended comments has created space for a culture shift towards good data and robust evaluative practices across the institution.

 

Institutional and student-level impact

With the ability to rapidly analyse open-ended feedback, we can promote the use of both quantitative and qualitative data in the evaluation of student-facing interventions. In this way, AI has furthered our work towards generating a solid base of Type 2 (empirical) and Type 3 (causal) evidence across the board. This is important, of course, given the regulatory requirements of our Access and Participation Plan and the Teaching Excellence Framework.

As a centralised department working towards our strategic aims and institutional priorities, the Institutional Research team has also used the time freed up to deliver upskilling sessions to colleagues across the University, improving the general understanding and use of available qualitative and quantitative data to inform micro-, meso- and macro-level decision-making.

Moreover, the analysis is now far more visible, reinforcing the mutual expectation that students give feedback and know what is being done with it. I would say that is one of the aspects of the student experience our use of AI has most enhanced.

Matthew Abley is an Institutional Research Analyst at the University of Westminster.
