The FFUR Evaluation Principles provide approaches for assessing and improving FFUR offerings.
Evaluation supports continuous improvement through monitoring, benchmarking, and feedback, ensuring that FFUR courses remain effective and responsive to student needs. Evaluation should capture a range of data, including student outcomes and experience, and be designed to be comparable across the sector.
As highlighted by Biesta (2010) and other theorists (see e.g., Burke & Lumb, 2018) who convey the importance of values-based approaches in education, evaluation methodologies must enable identification of the multiple complex variables involved in generating outcomes that matter to the groups they are intended for. They must also respond to changing contexts and needs over time in ways that are valuable to all participants.
A dynamic and iterative process, which integrates delivery with evaluation, is important for ensuring responsiveness to changing contexts and participant needs (Bennett, 2018). An evaluation approach that considers what matters to the people the courses are intended to serve, and that prioritises understanding the diverse experiences, goals, and outcomes of participants, supports the reach, engagement, and success of students in the varied ways that are valuable to them. Evaluations must focus on the diverse experiences and outcomes of student cohorts, e.g. Aboriginal and Torres Strait Islander students and those from disadvantaged backgrounds. This enables understanding of the continuously changing external pressures, inequalities, and prejudices, including racism, that people face in accessing, continuing, and completing all forms of education.
Re-imagining Evaluation, produced through a collaboration between the NSW Department of Education, the University of Newcastle, the NSW Aboriginal Education Consultative Group, and principals from Connected Communities schools (NSW Department of Education, 2023), provides an important, culturally responsive evaluation framework that embeds Aboriginal perspectives and values. This work shows how conventional evaluation methods often fail to capture the lived experiences and aspirations of Aboriginal students, families, and communities, and can serve to reproduce cultural biases in education systems. Implementing the following approaches ensures that both educational practice, and the evaluation of it, are:
- Culturally responsive and educative for all
- Community-driven
- Co-designed with those impacted
- Able to recognise success through student and community-defined outcomes
- Narrative and student experience-based, rather than purely data-driven
This ensures that inequalities, and other important knowledges, are neither erased from consideration nor excluded from education.
The steps that FFUR evaluators should engage in are:
- Engagement and Trust Building: Establish and maintain relationships with school communities and stakeholders.
- Fieldwork and Data Collection: Conduct interviews, discussion groups, and yarning circles.
- Validation and Co-development: Share findings with communities and collaboratively develop recommendations.
It is essential to treat units and courses as works in progress, continuously mapping aims and activities to outcomes and reconsidering how outcomes reflect back on their design (Bennett, 2018). Examining differential experiences across demographic subgroups, modes of study, and learning environments, and feeding evaluation findings into curriculum planning, reporting, and staff development, fosters innovation and systemic improvement.
Evaluation frameworks such as the Student Equity in Higher Education Evaluation Framework (SEHEEF) (Robinson et al., 2021) provide structured, context-sensitive approaches to assessing equity initiatives. These frameworks emphasise embedded design and student-centred metrics that reflect what matters to participants. The SEHEEF promotes mixed-methods evaluation, combining quantitative impact evaluation (QIE) and theory-based impact evaluation (TBIE). This supports the iterative and context-sensitive evaluation processes described above, allowing for both statistical analysis and deeper qualitative insights. The following diagram captures these key points from this literature:
Figure: Student Equity in Higher Education Evaluation Framework (SEHEEF)
The Equity Initiatives Framework 2.0 from the Critical Interventions Framework Part 3 (Bennett et al., 2024) offers a mapping of impactful initiative types and tools for evaluating impact in ways that are inclusive and responsive to diverse student needs.
Evaluating FFUR offerings using these approaches ensures that offerings are not only accessible but also meaningful and effective for those they aim to support.
In terms of approaches to evaluation, the Critical Interventions Framework project (Bennett et al., 2015 and 2024) studied access and the impact of equity programs and found that:
- Evaluation is undertaken most frequently through mixed-methods approaches that utilise quantitative and qualitative data;
- Programs generate multiple effects and outcomes, including increased access, retention and performance; improved student experiences, connectedness and engagement; and informed aspirations for higher education;
- Collaborations that join program providers’ specialist knowledge with evaluation and research expertise promote rigorous forms of evaluation and high-quality provision;
- Programs that demonstrate impact use evaluation that is stakeholder-centred, context-specific and iterative.
Rich information may be gained from a mixed-methods approach, usually combining qualitative and quantitative methods, to understanding the impact of an initiative or suite of interventions.
The following are examples of evaluation methods and data sources relevant to equity interventions:
- Course logic analysis (including plausibility analysis, needs analysis and input/output requirements)
- Surveys of student and other stakeholder characteristics and experiences (using qualitative and/or quantitative designs)
- Focus groups with students and other stakeholders (for eliciting targeted feedback and information)
- One-to-one interviews with students and other stakeholders (for exploring more detailed or complex issues). N.B. Focus groups and interviews may be conducted online or by telephone to overcome challenges of distance and cost
- Documentary/narrative/discourse analysis of program information and resources
- Documented reflective activities, which may be conducted before and after an initiative to explore its impact
- Creative forms of feedback from participants (via journal entries, illustrations, responses to narratives, mentors and other stimuli)
- Participant observation of courses in action (e.g. in learning contexts)
- Benchmarking (through external program review or comparisons with other interventions or sectoral and/or institutional norms)
- Case studies of specific interventions (which may involve comparisons between different interventions)
- Analysis of input/output measures (e.g. numbers of participants, qualifications, numbers of scholarships awarded, etc.)
- Longitudinal tracking of individual student experience and outcomes
- Cohort analysis (comparing program offers, admissions, enrolments, attrition, retention, success and completion rates)
- Service process tracking (e.g. changes in contact waiting times)
- Web analytics (using the increasing amount of online data to track and analyse student and/or course performance)
- Randomised control trials (initially designed for testing new drugs but now being used for educational interventions)
- Economic modelling (to estimate economic and community-wide or individual benefit from participating in a course)
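Several of the quantitative methods above, such as cohort analysis and analysis of input/output measures, can be sketched in code. The following is a minimal, illustrative example only: the enrolment records, field names (`cohort`, `equity_group`, `retained`, `completed`), and rates are all hypothetical and do not come from this framework. It shows one simple way to compare retention and completion rates across cohorts and equity groups:

```python
from collections import defaultdict

# Hypothetical enrolment records; all field names and values are illustrative.
records = [
    {"cohort": "2021", "equity_group": "low_SES", "retained": True,  "completed": True},
    {"cohort": "2021", "equity_group": "low_SES", "retained": True,  "completed": False},
    {"cohort": "2021", "equity_group": "other",   "retained": True,  "completed": True},
    {"cohort": "2022", "equity_group": "low_SES", "retained": False, "completed": False},
    {"cohort": "2022", "equity_group": "other",   "retained": True,  "completed": True},
    {"cohort": "2022", "equity_group": "other",   "retained": True,  "completed": False},
]

def cohort_rates(records):
    """Aggregate retention and completion rates by (cohort, equity_group)."""
    groups = defaultdict(lambda: {"n": 0, "retained": 0, "completed": 0})
    for r in records:
        key = (r["cohort"], r["equity_group"])
        groups[key]["n"] += 1
        groups[key]["retained"] += r["retained"]   # True counts as 1
        groups[key]["completed"] += r["completed"]
    return {
        key: {
            "n": g["n"],
            "retention_rate": g["retained"] / g["n"],
            "completion_rate": g["completed"] / g["n"],
        }
        for key, g in groups.items()
    }

rates = cohort_rates(records)
for (cohort, group), stats in sorted(rates.items()):
    print(cohort, group, stats)
```

In practice, such subgroup comparisons would draw on institutional enrolment data and would need to be read alongside the qualitative methods listed above, since rates alone cannot explain why differences between cohorts arise.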
The Australian Centre for Student Equity and Success (ACSES) provides support and development for evaluations through various schemes, and particularly for developing large, quantitative trials.