November 27, 2011
A closer look at student feedback
By Chi Nguyen
This paper was extracted from a course leader report covering 3 computer networks engineering courses, which required evaluation of the National Student Survey (NSS), the course feedback survey and the unit feedback surveys (Nguyen, 2011). All surveys were voluntary. The course and unit surveys often took place during timetabled events, which made them susceptible to student absences.
1. Context for evaluating feedback surveys
In the book Practical Statistics for Educators, Ruth David reminds us that experimental and descriptive research require different methods of analysis. In experimental research projects, "researchers plan an intervention and study its effect on groups or individuals". By comparison, nonexperimental "descriptive research is aimed at studying a phenomenon as it is occurring naturally, without any manipulation or intervention. Researchers are attempting to describe and study phenomena and are not investigating cause-and-effect relationships" (David, 2011).
Dianne Hinds reminds us that good research depends on data "that are both reliable and valid. Reliability refers to matters such as the consistency of a measure – for example, the likelihood of the same results being obtained if the procedures were repeated. Validity relates broadly to the extent to which the measure achieves its aim, i.e. the extent to which an instrument measures what it claims to measure, or tests what it is intended to test" (2000). Gray, Williamson, Karp and Dalphin emphasize that "if we do our sampling carefully and in accordance with one of the standard sampling plans, it should be possible for another researcher to replicate our findings; this is an important aspect of reliability. Careful sampling ensures we have drawn our cases so that our sample accurately reflects the composition of the population of cases about which we wish to generalize; this contributes to the validity of the generalizations we make on the basis of our sample" (2007).
The difficulty and importance of translating survey objectives into survey questions is emphasized by many researchers (Gray, Williamson, Karp, Dalphin, 2007; Saris, Gallhofer, 2007; Hinds, 2000; Wilkinson, 2000). Survey design is particularly affected by content validity (do the questions cover the entire range of meanings associated with an objective?), internal validity (does the data accurately reflect the people who participated?) and external validity (the extent to which the data reflects people similar to the participants). For example, there are a number of opinions (often conflicting) about which questions might be the most reliable indicators of teaching effectiveness, student feedback, opportunities for student learning, amount of student effort, academic progress or mastery of a specific skill.
2. National Student Survey: Core questions
The Higher Education Funding Council for England (HEFCE) commissioned research in 2010 to investigate possible improvements to the NSS. Prof Paul Ramsden, Chief Executive of the Higher Education Academy until 2009, led the project. The research confirmed that the NSS was "originally conceived primarily as a way of helping potential students make informed choices", which indicates that the NSS should be considered a descriptive survey. The report contained 18 recommendations, and the risk of using NSS data incorrectly was sufficiently high to warrant its own recommendation. Recommendation 5 from the report is displayed below in list format for clarity (Centre for Higher Education Studies at the Institute of Education, 2010).
- It is desirable to make available clear guidance about the risks and issues associated with using NSS results for purposes of comparison. We confirm that the NSS results can be used responsibly in the following ways, with proper caution:
  - To track the development of responses over time
  - To report absolute scores at local and national levels
  - To compare results with agreed internal benchmarks
  - To compare the responses of different student groups, including equity target groups
  - To make comparisons, with appropriate vigilance and knowledge of statistical variance, between programmes in the same subject area at different institutions
  - To help stimulate change and enhance dialogue about teaching and learning
- However, they cannot be used responsibly in these ways:
  - To compare subject areas, e.g. Art & Design vs. Engineering, within an institution – unless adjustments are made for typical subject area differences nationally
  - To compare scores on different aspects of the student experience (between different scales, e.g. assessment vs. teaching) in an unsophisticated way
  - To compare whole institutions without taking account of sources of variation such as subject mix and student characteristics
  - To construct league tables of programmes or institutions that do not allow for the fact that the majority of results are not materially different
Recommendation B.2 is particularly relevant when considering the NSS reporting method. According to a HEFCE paper in 2010, the NSS reports "on the percentage of respondents that are satisfied; in other words the sum of Definitely agree and Mostly agree respondents, divided by the total number of respondents (defined as the sum of Definitely agree to Definitely disagree respondents) for that question or category of question" (HEFCE, 2010). The implication is that the number of students answering each question may vary, because students who choose option f, Not applicable, have selectively opted out of answering that individual question. Additionally, the NSS data published to the public includes a reminder that "comparisons between years should be made with caution because the profile of the respondents will differ and this has not been adjusted for" (HEFCE, 2011a).
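The percentage calculation described above can be sketched in a few lines of Python; the response labels follow the NSS scale quoted above, but the tallies are invented purely for illustration:

```python
# NSS reporting: percentage satisfied = (Definitely agree + Mostly agree)
# divided by all respondents from Definitely agree to Definitely disagree.
# "Not applicable" answers are excluded from the denominator entirely.

SCALE = ["Definitely agree", "Mostly agree", "Neither agree nor disagree",
         "Mostly disagree", "Definitely disagree"]

def nss_percentage(counts):
    """counts: dict mapping response label -> number of respondents."""
    satisfied = counts.get("Definitely agree", 0) + counts.get("Mostly agree", 0)
    total = sum(counts.get(label, 0) for label in SCALE)  # N/A ignored
    if total == 0:
        return None
    return round(100 * satisfied / total)

# Invented tallies for one question (17 answers plus 2 Not applicable):
example = {"Definitely agree": 5, "Mostly agree": 7,
           "Neither agree nor disagree": 2, "Mostly disagree": 1,
           "Definitely disagree": 0, "Not applicable": 2}
```

Because Not applicable answers are dropped from the denominator, two questions on the same form can have different effective sample sizes.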
NSS results are not reported to the public if there are fewer than 23 responses or less than a 50 percent response rate in a subject group (HEFCE, 2010). Some researchers are even more cautious about data quality. For example, the author of Practical Statistics for Educators wrote, "a sample size of at least thirty cases or subjects is recommended in most studies in education" (David, 2011). The threshold requirement of 23 responses is the NSS compromise to protect the quality of the NSS data while minimising unintentional bias against smaller educational institutions.
With considerations for HEFCE recommendations A.5, B.3 and B.4., no comparisons have been made to national results.
With considerations for HEFCE recommendations A.1, A.6 and B.2., the table below shows the University and the Course results. This report focused on the number of participants and 8 questions which have the highest potential for evaluation and change by the Course team (indicated by inverse background formatting).
| | University 2010 | Course 2010 | University 2011 | Course 2011 |
| --- | --- | --- | --- | --- |
| Number of participants | 2878 | 21 | 2850 | 17 |
| Participation percentage | | 46% | | 38% |
| The teaching on my course | 85 | 82 | 85 | 75 |
| 1. Staff are good at explaining things. | 89 | 81 | 90 | 94 |
| 2. Staff have made the subject interesting. | 82 | 81 | 81 | 76 |
| 3. Staff are enthusiastic about what they are teaching. | 85 | 86 | 85 | 53 |
| 4. The course is intellectually stimulating. | 85 | 81 | 83 | 76 |
| Assessment and feedback | 66 | 60 | 67 | 62 |
| 5. The criteria used in marking have been clear in advance. | 77 | 71 | 77 | 82 |
| 6. Assessment arrangements and marking have been fair. | 74 | 76 | 73 | 82 |
| 7. Feedback on my work has been prompt. | 58 | 48 | 59 | 53 |
| 8. I have received detailed comments on my work. | 64 | 52 | 67 | 41 |
| 9. Feedback on my work has helped me clarify things I did not understand. | 58 | 52 | 60 | 53 |
| 10. I have received sufficient advice and support with my studies. | 77 | 85 | 80 | 75 |
| 11. I have been able to contact staff when I needed to. | 85 | 86 | 85 | 88 |
| 12. Good advice was available when I needed to make study choices. | 73 | 81 | 76 | 59 |
| Organisation and management | 75 | 79 | 76 | 71 |
| 13. The timetable works efficiently as far as my activities are concerned. | 76 | 81 | 75 | 71 |
| 14. Any changes in the course or teaching have been communicated effectively. | 72 | 70 | 76 | 71 |
| 15. The course is well organised and is running smoothly. | 76 | 86 | 77 | 71 |
| 16. The library resources and services are good enough for my needs. | 85 | 88 | 83 | 88 |
| 17. I have been able to access general IT resources when I needed to. | 78 | 90 | 74 | 100 |
| 18. I have been able to access specialised equipment, facilities or rooms when I needed to. | 76 | 85 | 76 | 81 |
| 19. The course has helped me to present myself with confidence. | 80 | 76 | 80 | 65 |
| 20. My communication skills have improved. | 83 | 67 | 83 | 65 |
| 21. As a result of the course, I feel confident in tackling unfamiliar problems. | 80 | 75 | 81 | 65 |
Observations about the NSS core questions data:
- In 2010 and 2011, a low number of students on the Course participated in the NSS.
- The University has requirements that determine whether a student is eligible to participate in the NSS, which prevented an accurate calculation of the participation percentage at the University level. Additionally, these requirements reduced the number of students on the Course who were permitted to participate in the NSS.
- In 2010 and 2011, the number of responses was below the NSS threshold for public reporting. This raised doubts about the reliability and validity of the NSS data for the Course.
- Students on the Course in 2010 and 2011 had a similar satisfaction level about assessment and feedback.
  - This observation was weakened by the low number of responses.
- Students on the Course in 2011 were least satisfied with the feedback received on their work.
  - This observation was weakened by the low number of responses.
  - Questions 7-9 do not indicate whether the low satisfaction was in relation to formative or summative feedback. If the questions were about formative feedback, then the low satisfaction was inconsistent with the higher satisfaction for questions 10 and 11.
  - The low satisfaction with feedback was inconsistent with the open response comments. There was only 1 comment about assessment and feedback among the 22 open response comments. By comparison, there were 15 comments about the teaching on the course and 4 comments about academic support.
The following inferences were made with extra caution due to the low number of responses:
- One sample t-tests suggested that the satisfaction profiles of students on the Course in 2010 and 2011 were similar to students throughout the University.
  - 2010: t(21) = -0.3366, p = .740
  - 2011: t(21) = -1.7675, p = .092
- An independent samples t-test suggested that students on the Course in 2010 had a similar satisfaction profile to students in 2011.
  - t(42) = 1.0598, p = .295
- Non-parametric tests were also performed and are available in Appendix B.
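For readers who wish to reproduce the style of test reported above, a one-sample t statistic can be computed directly from its definition; the scores below are invented placeholders rather than the actual Course data:

```python
import math
import statistics

def one_sample_t(sample, population_mean):
    """Return (t, degrees of freedom) for a one-sample t-test:
    t = (sample mean - population mean) / (s / sqrt(n))."""
    n = len(sample)
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    t = (mean - population_mean) / (s / math.sqrt(n))
    return t, n - 1

# Invented per-question satisfaction percentages for a course,
# compared against a hypothetical University-wide mean of 75.
course_scores = [82, 81, 81, 86, 81, 60, 71, 76, 48]
t, df = one_sample_t(course_scores, 75)
```

The p-value would then be looked up from the t distribution with the returned degrees of freedom (for example with scipy.stats, if available).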
In the future, it would be desirable to have sufficient statistical data for comparison with agreed internal benchmarks (see HEFCE recommendation A.3). Course level data are not published to the public, which prevents comparison with similar programmes at other universities (see HEFCE recommendation A.5). Creating benchmarks is a difficult task. This year is the first time since the NSS started in 2005 that HEFCE has published benchmarks for each university, specifically for Question 22, "Overall, I am satisfied with the quality of the course" (2011a). This benchmark covers a single question. According to Sue Littlemore at The Guardian, HEFCE published the new benchmark this year based on research that found "women tended to be more positive than men about their courses as were students in their 30s or 40s, but Asian, mixed-race students and people with a disability were generally less satisfied along with students following creative arts courses, although students of historical and philosophical studies tended to be more positive" (2011). We should approach the development of internal benchmarks with similar caution and respect for the difficulty of the task.
3. National Student Survey: Open response comments
HEFCE research has reported that universities "said that the open responses were not easy to analyse and that the analysis was time-consuming." The research recommended "a study to explore the feasibility of developing an analytical tool to enable institutions to analyse comments in the free text area of the NSS in a consistent manner" (Centre for Higher Education Studies at the Institute of Education, 2010).
The table below encodes the open response comments from the 2011 NSS into positive and negative categories.
| NSS core questions category | Course 2011 NSS rating | Count of positive comments | Count of negative comments |
| --- | --- | --- | --- |
| The teaching on my course | 75 | 6 | 9 |
| Assessment and feedback | 62 | 0 | 1 |
| Organisation and management | 71 | 1 | 0 |
Observations about the NSS open response comments:
- There was a high interest in providing comments.
  - There were 22 open response comments from 17 students.
- Teaching on the course and academic support attracted the majority of comments.
- There was a balance between positive and negative comments.
- The lowest-rated NSS category, assessment and feedback, attracted only 1 comment.
The number of negative comments about teaching (category 1) may indicate that the rating of 75% is below the normal range for that question category. By contrast, the lack of comments about assessment and feedback (category 2) may indicate that the rating of 62% is within the normal range for that question category.
4. Course feedback survey
The course feedback survey is a descriptive survey that is subject to the same HEFCE recommendations as previously described for the NSS data.
The survey form has 37 questions. The response choices for each question are the same as for the NSS. However, the course feedback survey uses a different reporting method from the NSS. The responses are assigned values from 1 (Strongly disagree) to 5 (Strongly agree), and the average value for each question is reported, rather than the percentage value reported by the NSS. Not applicable responses are excluded from the average calculation, which is similar to the NSS method.
The HEFCE research report contained two recommendations relating to the number of questions on the survey form (Centre for Higher Education Studies at the Institute of Education, 2010):
- We have noted proposals for additions and changes to the core NSS, including the addition of items related to employability, disability and contact hours. As discussed in chapter 3, there is a risk to response rates if the instrument is lengthened and there are questions about whether the NSS is the appropriate vehicle for obtaining valid responses about issues such as employability. We conclude that the core NSS should remain as it is for the present.
- If there is a need for additional information about students' experiences that is not supplied by the NSS, this should be satisfied through ad hoc surveys rather than by adding questions to the NSS.
The HEFCE recommendations above suggest that it is a disadvantage to have 37 questions on the course feedback form (compared to 22 questions on the NSS form). For example, questions about the library (questions 26-28), IS services (questions 29, 30) and disability (questions 12, 13) might be more effective when asked directly at the point at which students use those services, rather than spread across all students on the course, which may include students who did not use a service or used it to a much lesser extent. Those questions are difficult to evaluate because they are reported in the same manner as all other questions, yet the extent to which students use those services may vary greatly.

Questions about the timetable (questions 9, 10) and teaching spaces (questions 24, 25) might be more effective when collected ad hoc in response to specific issues, at the unit level in relation to student numbers or teaching methods, or at longer intervals across the whole School. There are many constraints on timetabling and teaching spaces, so these are not likely to be Course specific issues, nor is there much capacity available for changes. We should not give students an illusion of feedback when the possibility for change is severely limited.

Question 8, "The pastoral support offered by student services met my needs", is confusing because we advise students that an important role of personal tutors (based in the School) is to provide pastoral care. This question is intended to enquire broadly about the student services provided by the University. Due to the wide range of services provided centrally by the University, it would be very difficult to evaluate question 8 in order to identify which services students had in mind when completing the survey.
Furthermore, question 8 immediately follows a direct question about the personal tutor, which is likely to confuse students and reduce the effectiveness of question 7, "The support offered by my personal tutor was good." There are also questions which do not lend themselves to evaluation or action, e.g. question 1, "The course has met my expectations." It would be as difficult to draw out best practices from a very positive response as to find a suitable action to address a very negative response.
Question 15, "The course was intellectually challenging", does not fit with the satisfaction response choices. In the current reporting method, a Strongly agree response is the highest score of 5. But, that is not equivalent to the highest satisfaction for this question. In fact, the question may be interpreted in 2 different ways, "I am satisfied with the intellectual level of the course" (without any indication whether the student finds it difficult or not), or "I am satisfied that the intellectual level of the course is about right for me" (which is a better indication of how difficult the student perceives the course to be).
With considerations for HEFCE recommendations A.1, A.3, A.6, B.2 and recommendations relating to the number of questions on the survey form, the table below shows the course feedback survey results for years 1 and 2. This report focused on the number of participants and 7 questions which have the highest potential for evaluation and change by the Course team (indicated by inverse background formatting).
| | Year 1 | Year 2 |
| --- | --- | --- |
| Number of participants | 20 | 34 |
| Academic and tutorial guidance, support and supervision | | |
| 1. The course has met my expectations | 3.3 | 3.9 |
| 2. The induction process was helpful | 3.8 | 3.9 |
| 3. Accurate information about the course was available | 3.6 | 3.8 |
| 4. I was offered enough choice in study | 2.7 | 3.4 |
| 5. Effective guidance was provided in selection of choices | 3.2 | 3.3 |
| 6. Information on my academic progress was helpful | 3.6 | 3.6 |
| 7. The support offered by my personal tutor was good | 3.7 | 4.1 |
| 8. The pastoral support offered by student services met my needs | 3.7 | 3.7 |
| 9. Timetables were provided in good time | 4.0 | 3.7 |
| 10. The timetable met my needs | 3.5 | 3.8 |
| 11. My course was well managed | 3.7 | 3.8 |
| 12. Effective support was provided | 3.6 | 3.5 |
| 13. Reasonable adjustments have been made to enable my learning | 2.7 | 3.5 |
| Learning and teaching | | |
| 14. The general quality of teaching on units was good | 3.8 | 3.5 |
| 15. The course was intellectually challenging | 4.2 | 4.3 |
| 16. The range of teaching methods used to support my learning was good | 3.7 | 3.5 |
| 17. The range of assessment methods used to support my learning was good | 3.9 | 3.5 |
| 18. Adequate guidance was provided before assessments | 3.7 | 3.7 |
| 19. Feedback on assessments was normally provided according to published timescales | 3.0 | 3.6 |
| 20. Feedback on assessments was clear constructive and helpful | 3.6 | 3.5 |
| 21. The scheduling of assessments was appropriate | 3.5 | 3.6 |
| 22. The overall assessment load was manageable | 3.5 | 3.5 |
| 23. The overall workload for my course was about right | 3.4 | 3.7 |
| 24. Teaching accommodation was good | 4.1 | 3.6 |
| 25. The equipment available in rooms was suitable | 4.1 | 3.8 |
| 26. Library stock was good | 4.0 | 3.8 |
| 27. There was a good study environment in the library | 3.5 | 3.8 |
| 28. Generally the services of the library were good | 4.1 | 3.9 |
| 29. The quality of IT / computing facilities was good | 3.9 | 3.7 |
| 30. The availability of computing / IT facilities was good | 3.7 | 3.6 |
| 31. Overall my experience of the course this year was good | 3.7 | 3.8 |
| 32. Arrangements for considering the student view were appropriate | 3.4 | 3.5 |
| 33. Student views about the course are influential | 3.6 | 3.5 |
| 34. The development of my subject knowledge and skills this year was good | 3.7 | 3.8 |
| 35. I have contributed well to my own learning this year | 3.8 | 4.0 |
| 36. The course has prepared me well for future employment and /or further study | 3.9 | 3.7 |
| 37. Overall I was satisfied with the quality of the course | 3.7 | 3.7 |
Observations about the course feedback survey data:
- A low number of year 1 students participated in the course feedback survey. This number was slightly below the NSS threshold for public reporting.
- A moderate number of year 2 students participated in the course feedback survey.
- Year 1 students were least satisfied that feedback on assessments was normally provided according to published timescales.
  - This observation was weakened by the low number of responses.
The following inference was made with caution due to the low number of responses from year 1 students:
- An independent samples t-test suggested that year 1 students had a similar satisfaction profile to year 2 students.
  - t(72) = -0.8282, p = .4103
- Non-parametric tests were also performed and are available in Appendix B.
5. Unit feedback surveys
The unit feedback surveys are descriptive surveys using the same response format and average value reporting method as the course feedback survey. They are subject to the same HEFCE recommendations as previously described for the NSS data and the course feedback survey.
The survey form has 12 questions, which is shorter than both the NSS and course feedback forms. All questions are reported in the same manner, with slight emphasis on the last question, "Overall I was satisfied with the quality of the unit." In practice, each question has a different priority in terms of impact on academic quality. Low satisfaction with question 2, "The assessment for this unit was appropriate", is more urgent than low satisfaction with question 5, "I enjoyed the unit."
With considerations for HEFCE recommendations A.1, A.3, A.6, B.2 and recommendations relating to the number of questions on the survey form, the graphs below summarise the unit feedback survey results for 24 units. One unit did not conduct the unit feedback survey. This report focused on the number of participants and 7 questions which have the highest potential for evaluation and change by the unit coordinators and the Course team (indicated in bold format). The complete unit feedback survey data are available in Appendix A.
1. The aims of the unit were clear
2. The assessment for this unit was appropriate
3. The unit content was appropriate to its aims
4. The delivery of the unit was satisfactory
5. I enjoyed the unit
6. The information that I received about the assessment requirements for this unit was helpful
7. I found the unit interesting
8. I learnt what I had hoped to from this unit
9. It was taught at an appropriate level
10. The workload for this unit was manageable
11. It was taught at an appropriate pace
12. Overall I was satisfied with the quality of the unit
Charts 2-4 are box plots of all unit feedback survey data grouped by year. Each box represents the middle half of the observed values for a question. The bold black line inside each box is the median value. The whisker at the left represents the lowest observed value that is greater than or equal to the 25th percentile minus 1.5 times the width of the box. The whisker at the right represents the highest observed value that is less than or equal to the 75th percentile plus 1.5 times the width of the box. The triangle symbols represent observed values found outside the whiskers (called outliers). Box plots are considered robust because outliers have minimal impact on the shape of the chart. David Harrison provides a longer description of box plots with additional examples (1998).
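The whisker and outlier rules described above (Tukey's 1.5 × IQR convention) can be sketched as follows; the quartile method chosen here is one common interpolation scheme, and the ratings are invented for illustration:

```python
import statistics

def box_plot_summary(values):
    """Tukey box plot summary: the box spans Q1 to Q3, whiskers reach the
    most extreme observations within 1.5 * IQR of the box, and anything
    beyond the whiskers is flagged as an outlier."""
    q1, median, q3 = statistics.quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inside = [v for v in values if low_fence <= v <= high_fence]
    outliers = [v for v in values if v < low_fence or v > high_fence]
    return {"q1": q1, "median": median, "q3": q3,
            "whiskers": (min(inside), max(inside)), "outliers": outliers}

# Invented average ratings for one question across several units:
ratings = [3.4, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 2.0]
summary = box_plot_summary(ratings)
```

Here the value 2.0 falls below the lower fence, so it would be drawn as a triangle symbol rather than extending the whisker.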
Observations about the unit feedback survey data:
- The number of participating students was below the NSS threshold for public reporting in 6 units (5 surveyed units plus the 1 unit that did not conduct the survey).
- Year 1
  - The median values were clustered at response value 4 (Agree).
  - Responses had the largest spread of values when compared to years 2 and 3.
  - Question 6, "information about assessment requirements was helpful", had a very small spread of values.
- Year 2
  - The median values were clustered slightly below response value 4.
  - There were 9 outliers above response value 4.
  - There were 4 outliers below response value 4.
  - Question 4, "delivery of the unit was satisfactory", had a large spread of values.
  - Question 12, "overall I was satisfied", had a large spread of values.
- Year 3
  - The median values were clustered slightly above response value 4.
  - Responses had the smallest spread of values when compared to years 1 and 2.
  - Question 10, "workload was manageable", had a large spread of values.
6. Evaluation and next steps
The evidence above confirmed that a great deal of effort was invested in student feedback activities. Yet, much of the hard work from both students and staff was lost due to two problems: the low number of participating students and the lack of a sampling procedure severely decreased the potential to use the feedback data for curricula enhancement. These problems reach far beyond today. Since we cannot use the feedback data from academic years 2009 and 2010 with confidence today, we will not be able to use them in the future either. This prevents any attempt to analyse trends in feedback from our students because we do not have reliable historical data. THIS MUST CHANGE. It is an urgent responsibility that we start collecting reliable student feedback data so that students and colleagues in the future will have a better chance of learning from our experiences today. Below is a proposed list of next steps, starting with the highest priority:
- Expand activities aimed at increasing student participation in NSS. We are already working to encourage student participation. Within NSS guidelines, we might explore possible incentives to increase student participation. The University has determined that 25 students on the Computer Network Management and Design course are eligible to participate in the 2012 NSS. It is critically important that all the eligible students participate in order to meet the NSS reporting threshold requirement of 23 participants.
- Course feedback forms should be changed to align with the NSS form. This has been reported by other universities to have positive effects on their NSS participation (HEA, 2007). Furthermore, it creates the potential to conduct longitudinal studies of curricula design across all years of study. It also creates the potential to conduct correlation studies about the extent to which satisfaction in years 1 and 2 is a predictor of NSS satisfaction.
- Unit feedback forms should be reduced to 7 questions. This is aimed at reducing feedback fatigue and increasing focus on issues in which the Course team has the highest potential to make curricula changes. It is easy enough to add more questions in the future. It is worse to collect data which cannot be used.
- Unit feedback forms should only be used with units that have 30 or more registered students. This is aimed at protecting the reliability and validity of the whole data set and reducing feedback fatigue. We do not need to incur the efforts of students and staff when there is a high risk of not using the feedback data. Units with 30 or more registered students provide a good chance of obtaining a sufficient sample size.
- Systematic sampling should be applied to the course and unit feedback surveys. Currently, participation is based on students who attend a particular (random) timetabled event selected to distribute the forms. This creates a self-selecting sample, which reduces the validity and reliability of the feedback data. Furthermore, it is not possible to follow up students who missed the event or decided not to participate. A systematic sampling method would set a target number for each unit, use a rule to order the list of students in the unit (e.g. by ID numbers), then select every Nth student from the beginning to the end of the list such that the target number is met. Feedback forms are given specifically to those students. We track whether the feedback has been completed, but the feedback becomes anonymous once received.
- Cease collection of qualitative feedback in the unit feedback survey. This is aimed at reducing feedback fatigue and acknowledging the difficulty of analysing open response comments and other forms of qualitative feedback (Centre for Higher Education Studies at the Institute of Education, 2010). It is easy enough to resume these activities once we have a working and reliable method to analyse qualitative feedback. Students will continue to have opportunities to provide qualitative feedback via the staff student consultative committee (SSCC) and informal feedback with their unit lecturers, personal tutors, course leaders and Head of School.
- Start the work to create a reliable method to analyse qualitative feedback from NSS and course feedback surveys. As reported by HEFCE and in this report, NSS and course feedback results do not change much between years. Thus, by focusing our efforts to systematically analyse qualitative feedback at the course level, we can also use it to triangulate and enhance our potential to use the quantitative data from NSS (which is now firmly entrenched in the University and in the sector at large) and the course feedback (which may have a predictive relationship to the NSS).
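The systematic sampling rule proposed above (order the unit list by a fixed rule, then select every Nth student until the target is met) can be sketched as follows; the student IDs and target are invented for illustration:

```python
def systematic_sample(student_ids, target):
    """Order the list of students (here by ID number), then select every
    Nth student so that approximately `target` students are chosen."""
    ordered = sorted(student_ids)
    if target >= len(ordered):
        return ordered  # small unit: everyone is sampled
    step = len(ordered) // target  # the N in "every Nth student"
    return ordered[::step][:target]

# Invented unit of 40 registered students with a sampling target of 10:
ids = list(range(1001, 1041))
sample = systematic_sample(ids, 10)
```

Because the selected students are known in advance, completion can be tracked and absentees followed up, while the responses themselves remain anonymous once received.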
Observations and statistical tests of the NSS and course feedback data indicate that students on the Course have a similar satisfaction profile to the University at large. One question on the NSS form and one question on the course feedback form scored low, but there was no further corroborating information to suggest that they were long term concerns or how they could be addressed. In fact, this is inconsistent with the generally positive year 3 unit feedback data (although unit feedback included students from other courses). Additionally, the low number of students who participated in the NSS and course feedback surveys acted as a discounting factor.
Visual observations of the unit feedback survey data indicate broadly that students are most satisfied with year 3 and have a wide spread of opinions about years 1 and 2. Caution is required when evaluating the unit feedback survey data because many units have students from multiple courses of study. The wide spread of values and the higher frequency of outliers in the year 2 units may reflect low internal validity caused by the lack of a sampling method. Across all 3 years, the spread of values frequently extended towards response value 5 (Strongly agree), which is a positive indication of existing good practices.
Giving and receiving feedback is a human activity. We must encourage colleagues and students to think about feedback as a dialogue. Good feedback requires time, effort and responsibility. If feedback is only used to identify problems, then it leads to two disturbing possibilities: either there are no good practices, or we do not know how to identify good practices. But we have all observed good practices. The problem must be in the way we conduct feedback activities. Feedback as a dialogue helps us to untangle the different (and often difficult) factors that increase long term student success from the factors that only increase short term student satisfaction. We will know that changes have been successful when we can point to the spread and adoption of specific best practices arising from the student feedback activities.
- Alan Brickwood & Associates. (2008). Review of the 2008 national student survey (nss) process. Retrieved from http://www.hefce.ac.uk/pubs/rdreports/2008/rd26_08
- Centre for Higher Education Studies at the Institute of Education. (2010). Enhancing and developing the national student survey. Retrieved from http://www.hefce.ac.uk/pubs/rdreports/2010/rd12_10
- David, R. (2011). Practical statistics for educators. Lanham, MD, USA: Rowman & Littlefield Publishers, Inc.
- Gray, P., Williamson, J., Karp, D., Dalphin, J. (2007). The research imagination: an introduction to qualitative and quantitative methods. Cambridge, UK: Cambridge University Press.
- Harrison, D. (1998). Visualisation and transformation of data. Retrieved from http://www.upscale.toronto.edu/GeneralInterest/Harrison/Visualisation/Visualisation.html
- Higher Education Academy. (2007). National student survey institutional case studies. Retrieved from http://www.heacademy.ac.uk/assets/documents/nss/nss_case_studies_nov07_v5.doc
- Higher Education Funding Council for England. (2010). National student survey: findings and trends 2006 to 2009. Retrieved from http://www.hefce.ac.uk/pubs/hefce/2010/10_18
- Higher Education Funding Council for England. (2011a). 2011 teaching quality information data. Retrieved from http://www.hefce.ac.uk/learning/nss/data/2011
- Higher Education Funding Council for England. (2011b). National student survey. Retrieved from http://www.hefce.ac.uk/learning/nss
- Hinds, D. (2000). In D. Wilkinson (Ed.), The researcher's toolkit. London, UK: Routledge Falmer.
- Littlemore, S. (2011, August 17). National student survey: what's new this year? The Guardian. Retrieved from http://www.guardian.co.uk/higher-education-network/2011/aug/17/national-student-survey
- Hiely-Rayner, M. (2011, May 17). Guardian university guide 2012: methodology. The Guardian. Retrieved from http://www.guardian.co.uk/education/2011/may/17/guardian-university-league-table-methodology
- Nguyen, C. (2011). Course leaders report – computer networks engineering. Retrieved from http://cnfolio.com/NetworkEngineeringCourseLeaderReport2011
- Saris, W., Gallhofer, I. (2007). Design, evaluation and analysis of questionnaires for survey research. Hoboken, NJ, USA: John Wiley & Sons, Inc.
- Surridge, P. (2009). The national student survey three years on: what have we learned? Retrieved from http://www.heacademy.ac.uk/assets/documents/nss/NSS_three_years_on_surridge_02.06.09.pdf
- Wilkinson, D. (Ed.). (2000). The researcher's toolkit. London, UK: Routledge Falmer.
With consideration of HEFCE recommendations A.1, A.3, A.6 and B.2, and of recommendations relating to the number of questions on the survey form, the tables below show the unit feedback survey results for 24 units. One unit did not conduct the unit feedback survey. This report focused on the number of participants and the 7 questions with the highest potential for evaluation and change by the unit coordinators and the Course team (indicated by inverse background formatting in the original report).
| Year 1 unit codes | B101 | B105 | B142L | B144L | B146 | B148 | B163 | B164L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Number of participants | 40 | 12 | 57 | 18 | 30 | 36 | 49 | 27 |
| 1. The aims of the unit were clear | 3.7 | 4.5 | 4.1 | 4.8 | 4.1 | 4.4 | 4.3 | 3.6 |
| 2. The assessment for this unit was appropriate | 3.9 | 4.3 | 3.8 | 4.7 | 4.0 | 4.4 | 3.7 | 3.9 |
| 3. The unit content was appropriate to its aims | 3.8 | 4.5 | 4.1 | 4.7 | 4.1 | 4.4 | 4.2 | 3.5 |
| 4. The delivery of the unit was satisfactory | 4.1 | 4.2 | 4.0 | 4.7 | 3.4 | 4.4 | 4.5 | 3.3 |
| 5. I enjoyed the unit | 3.0 | 4.1 | 3.9 | 4.7 | 3.5 | 4.3 | 4.3 | 2.6 |
| 6. The information that I received about the assessment requirements for this unit was helpful | 3.9 | 4.3 | 4.1 | 4.7 | 3.9 | 4.1 | 3.8 | 3.9 |
| 7. I found the unit interesting | 3.3 | 4.1 | 3.9 | 4.6 | 3.7 | 4.3 | 4.5 | 2.9 |
| 8. I learnt what I had hoped to from this unit | 3.4 | 4.1 | 3.9 | 4.7 | 3.5 | 4.2 | 4.1 | 2.9 |
| 9. It was taught at an appropriate level | 4.0 | 4.3 | 3.9 | 4.8 | 3.5 | 4.3 | 4.2 | 3.1 |
| 10. The workload for this unit was manageable | 3.9 | 4.3 | 3.9 | 4.6 | 3.5 | 4.2 | 3.8 | 3.4 |
| 11. It was taught at an appropriate pace | 4.0 | 4.3 | 3.7 | 4.7 | 3.3 | 4.1 | 3.8 | 3.1 |
| 12. Overall I was satisfied with the quality of the unit | 3.6 | 4.2 | 3.8 | 4.7 | 3.6 | 4.4 | 4.3 | 3.3 |
| Year 2 unit codes | B201 | B202 | B242L | B244L | B247 | B248 | B253 | B254 | B265 | B266 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Number of participants | 35 | 69 | 32 | 4 | 25 | 21 | 51 | 0 | 26 | 32 |
| 1. The aims of the unit were clear | 3.7 | 4.2 | 4.0 | 4.8 | 4.1 | 4.1 | 4.1 | - | 4.2 | 3.8 |
| 2. The assessment for this unit was appropriate | 3.8 | 3.8 | 3.4 | 4.4 | 3.8 | 4.1 | 3.9 | - | 3.8 | 3.7 |
| 3. The unit content was appropriate to its aims | 3.8 | 4.0 | 3.7 | 5.0 | 4.0 | 4.1 | 4.0 | - | 4.2 | 4.0 |
| 4. The delivery of the unit was satisfactory | 3.6 | 4.0 | 3.3 | 4.8 | 3.8 | 4.4 | 3.8 | - | 4.3 | 3.4 |
| 5. I enjoyed the unit | 3.3 | 3.9 | 3.5 | 4.8 | 3.5 | 3.7 | 3.9 | - | 4.1 | 3.2 |
| 6. The information that I received about the assessment requirements for this unit was helpful | 3.6 | 4.0 | 3.7 | 5.0 | 3.8 | 4.1 | 3.6 | - | 4.1 | 3.4 |
| 7. I found the unit interesting | 3.5 | 4.0 | 3.8 | 5.0 | 3.6 | 3.5 | 4.2 | - | 4.2 | 3.4 |
| 8. I learnt what I had hoped to from this unit | 3.8 | 3.9 | 3.5 | 4.4 | 3.7 | 3.9 | 3.9 | - | 4.0 | 3.3 |
| 9. It was taught at an appropriate level | 3.7 | 4.0 | 3.5 | 4.6 | 4.0 | 4.2 | 4.0 | - | 4.0 | 3.7 |
| 10. The workload for this unit was manageable | 3.6 | 3.9 | 3.3 | 4.2 | 3.8 | 4.1 | 3.8 | - | 4.1 | 3.7 |
| 11. It was taught at an appropriate pace | 3.5 | 3.9 | 3.7 | 4.8 | 4.0 | 4.1 | 3.8 | - | 4.1 | 3.8 |
| 12. Overall I was satisfied with the quality of the unit | 3.5 | 4.1 | 3.3 | 4.8 | 3.7 | 4.1 | 3.8 | - | 4.1 | 3.5 |
| Year 3 unit codes | B302 | B351 | B352 | B353 | B355 | B357 | B359 |
| --- | --- | --- | --- | --- | --- | --- |
| Number of participants | 84 | 29 | 47 | 25 | 35 | 51 | 17 |
| 1. The aims of the unit were clear | 3.9 | 4.5 | 4.5 | 4.4 | 4.1 | 4.2 | 4.4 |
| 2. The assessment for this unit was appropriate | 4.0 | 4.3 | 4.2 | 4.3 | 3.6 | 4.2 | 3.9 |
| 3. The unit content was appropriate to its aims | 4.0 | 4.6 | 4.5 | 4.3 | 4.0 | 4.3 | 4.2 |
| 4. The delivery of the unit was satisfactory | 4.1 | 4.6 | 4.5 | 4.2 | 3.9 | 3.9 | 4.3 |
| 5. I enjoyed the unit | 3.9 | 4.3 | 4.3 | 4.1 | 3.9 | 4.1 | 3.9 |
| 6. The information that I received about the assessment requirements for this unit was helpful | 3.9 | 4.4 | 4.4 | 4.2 | 3.8 | 4.1 | 3.9 |
| 7. I found the unit interesting | 3.9 | 4.5 | 4.6 | 4.2 | 4.3 | 4.2 | 4.2 |
| 8. I learnt what I had hoped to from this unit | 3.7 | 4.1 | 4.2 | 4.2 | 3.8 | 4.2 | 3.9 |
| 9. It was taught at an appropriate level | 4.0 | 4.3 | 4.2 | 4.2 | 3.8 | 4.0 | 4.3 |
| 10. The workload for this unit was manageable | 3.8 | 4.4 | 4.2 | 4.2 | 3.8 | 3.9 | 3.4 |
| 11. It was taught at an appropriate pace | 3.8 | 4.0 | 3.8 | 4.2 | 3.8 | 3.8 | 4.3 |
| 12. Overall I was satisfied with the quality of the unit | 3.9 | 4.5 | 4.5 | 4.3 | 3.9 | 4.2 | 4.1 |
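The year-on-year pattern (highest satisfaction in year 3, wider spread in years 1 and 2) can be checked with a short sketch over the question 12 ("Overall I was satisfied with the quality of the unit") means from the tables above; B254 is excluded because it did not conduct the survey:

```python
from statistics import mean, stdev

# Question 12 means per unit, copied from the tables above (B254 excluded).
q12 = {
    "year 1": [3.6, 4.2, 3.8, 4.7, 3.6, 4.4, 4.3, 3.3],
    "year 2": [3.5, 4.1, 3.3, 4.8, 3.7, 4.1, 3.8, 4.1, 3.5],
    "year 3": [3.9, 4.5, 4.5, 4.3, 3.9, 4.2, 4.1],
}

for year, scores in q12.items():
    print(f"{year}: mean={mean(scores):.2f}, "
          f"sd={stdev(scores):.2f}, range={max(scores) - min(scores):.1f}")
```

Year 3 shows the highest mean and the narrowest range of unit scores, which is consistent with the visual observations reported earlier; note these are unweighted summaries of per-unit means, not of individual responses.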
Non-parametric tests were also performed to guard against the possibility that the student feedback data were not normally distributed; their results were consistent with the t-test findings.
- Course NSS data in 2010 compared to 2011 using Levene's test for homogeneity of variance (center = mean)
- Year 1 course survey data compared to year 2 using Levene's test for homogeneity of variance (center = mean)