November 27, 2011

A closer look at student feedback

By Chi Nguyen


This paper was extracted from a course leader report covering 3 computer networks engineering courses, which required evaluation of the National Student Survey (NSS), the course feedback survey and the unit feedback surveys (Nguyen, 2011). All surveys were voluntary. The course and unit surveys often took place during timetabled events, which made the results susceptible to student absences.


1.   Context for evaluating feedback surveys


In the book Practical Statistics for Educators, Ruth David reminds us that experimental and descriptive research require different methods of analysis. In experimental research projects, "researchers plan an intervention and study its effect on groups or individuals". By comparison, nonexperimental "descriptive research is aimed at studying a phenomenon as it is occurring naturally, without any manipulation or intervention. Researchers are attempting to describe and study phenomena and are not investigating cause-and-effect relationships" (2011).

Dianne Hinds reminds us that good research depends on data "that are both reliable and valid. Reliability refers to matters such as the consistency of a measure – for example, the likelihood of the same results being obtained if the procedures were repeated. Validity relates broadly to the extent to which the measure achieves its aim, i.e. the extent to which an instrument measures what it claims to measure, or tests what it is intended to test" (2000). Gray, Williamson, Karp and Dalphin emphasize that "if we do our sampling carefully and in accordance with one of the standard sampling plans, it should be possible for another researcher to replicate our findings; this is an important aspect of reliability. Careful sampling ensures we have drawn our cases so that our sample accurately reflects the composition of the population of cases about which we wish to generalize; this contributes to the validity of the generalizations we make on the basis of our sample" (2007).

The difficulty and importance of translating survey objectives to survey questions is emphasized by many researchers (Gray, Williamson, Karp, Dalphin, 2007; Saris, Gallhofer, 2007; Hinds, 2000; Wilkinson, 2000). Survey design is particularly affected by content validity (whether the questions cover the entire range of meanings associated with an objective), internal validity (whether the data accurately reflect the people who participated) and external validity (the extent to which the data reflect people similar to the participants). For example, there are a number of opinions (often conflicting) about which questions might be the most reliable indicators of teaching effectiveness, student feedback, opportunities for student learning, amount of student effort, academic progress or mastery of a specific skill.


2.   National Student Survey: Core questions


The Higher Education Funding Council for England (HEFCE) commissioned research in 2010 to investigate possible improvements to the NSS. Prof Paul Ramsden, Chief Executive of the Higher Education Academy until 2009, led the project. The research confirmed that the NSS was "originally conceived primarily as a way of helping potential students make informed choices", which indicates that the NSS should be considered a descriptive survey. The report contained 18 recommendations, and the risk of using NSS data incorrectly was sufficiently high to warrant its own recommendation. Recommendation 5 from the report is displayed below as an ordered list for clarity, with its two parts labelled A and B to match the cross-references used later in this report (Centre for Higher Education Studies at the Institute of Education, 2010).
  A. It is desirable to make available clear guidance about the risks and issues associated with using NSS results for purposes of comparison. We confirm that the NSS results can be used responsibly in the following ways, with proper caution:
    A.1. To track the development of responses over time
    A.2. To report absolute scores at local and national levels
    A.3. To compare results with agreed internal benchmarks
    A.4. To compare the responses of different student groups, including equity target groups
    A.5. To make comparisons, with appropriate vigilance and knowledge of statistical variance, between programmes in the same subject area at different institutions
    A.6. To help stimulate change and enhance dialogue about teaching and learning
  B. However, they cannot be used responsibly in these ways:
    B.1. To compare subject areas, e.g. Art & Design vs. Engineering, within an institution – unless adjustments are made for typical subject area differences nationally
    B.2. To compare scores on different aspects of the student experience (between different scales, e.g. assessment vs. teaching) in an unsophisticated way
    B.3. To compare whole institutions without taking account of sources of variation such as subject mix and student characteristics
    B.4. To construct league tables of programmes or institutions that do not allow for the fact that the majority of results are not materially different

Recommendation B.2 is particularly relevant when considering the NSS reporting method. According to a HEFCE paper in 2010, the NSS reports "on the percentage of respondents that are satisfied; in other words the sum of Definitely agree and Mostly agree respondents, divided by the total number of respondents (defined as the sum of Definitely agree to Definitely disagree respondents) for that question or category of question." (HEFCE, 2010) The implication is that the number of students answering each question may vary because students who choose option f, Not applicable, have selectively opted out of answering that individual question. Additionally, the NSS data published to the public include a reminder that "comparisons between years should be made with caution because the profile of the respondents will differ and this has not been adjusted for" (HEFCE, 2011a).
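For clarity, the reporting rule described above can be expressed as a short calculation. The sketch below is illustrative only; the function name and the response counts are hypothetical, and it assumes the response options quoted above (Definitely agree through Definitely disagree, plus Not applicable).

```python
def nss_percentage_satisfied(counts):
    """Percentage satisfied for one NSS question.

    counts maps each response option to its number of respondents.
    Not applicable responses are excluded from the denominator,
    following the HEFCE reporting method quoted above.
    """
    answered = (counts['Definitely agree'] + counts['Mostly agree']
                + counts['Neither agree nor disagree']
                + counts['Mostly disagree'] + counts['Definitely disagree'])
    satisfied = counts['Definitely agree'] + counts['Mostly agree']
    return 100.0 * satisfied / answered if answered else None


# Hypothetical example: 17 respondents, one of whom chose Not applicable.
example = {'Definitely agree': 6, 'Mostly agree': 7,
           'Neither agree nor disagree': 2, 'Mostly disagree': 1,
           'Definitely disagree': 0, 'Not applicable': 1}
print(round(nss_percentage_satisfied(example)))  # 81
```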

NSS results are not reported to the public if there are fewer than 23 responses or the response rate in a subject group is below 50 percent (HEFCE, 2010). Some researchers are even more cautious about data quality. For example, the author of Practical Statistics for Educators wrote, "a sample size of at least thirty cases or subjects is recommended in most studies in education" (David, 2011). The threshold requirement of 23 responses is the NSS compromise to protect the quality of the NSS data while minimising the unintentional bias against smaller educational institutions.

With considerations for HEFCE recommendations A.5, B.3 and B.4, no comparisons have been made to national results.

With considerations for HEFCE recommendations A.1, A.6 and B.2, the table below shows the University and the Course results. This report focused on the number of participants and 8 questions which have the highest potential for evaluation and change by the Course team (indicated by inverse background formatting).
NSS question or category (% satisfied) | 2010 University | 2010 Course | 2011 University | 2011 Course
Number of participants | 2878 | 21 | 2850 | 17
Participation percentage | - | 46% | - | 38%
The teaching on my course | 85 | 82 | 85 | 75
    1. Staff are good at explaining things | 89 | 81 | 90 | 94
    2. Staff have made the subject interesting. | 82 | 81 | 81 | 76
    3. Staff are enthusiastic about what they are teaching. | 85 | 86 | 85 | 53
    4. The course is intellectually stimulating. | 85 | 81 | 83 | 76
Assessment and feedback | 66 | 60 | 67 | 62
    5. The criteria used in marking have been clear in advance. | 77 | 71 | 77 | 82
    6. Assessment arrangements and marking have been fair. | 74 | 76 | 73 | 82
    7. Feedback on my work has been prompt. | 58 | 48 | 59 | 53
    8. I have received detailed comments on my work. | 64 | 52 | 67 | 41
    9. Feedback on my work has helped me clarify things I did not understand. | 58 | 52 | 60 | 53
Academic support | 78 | 84 | 80 | 74
    10. I have received sufficient advice and support with my studies. | 77 | 85 | 80 | 75
    11. I have been able to contact staff when I needed to. | 85 | 86 | 85 | 88
    12. Good advice was available when I needed to make study choices. | 73 | 81 | 76 | 59
Organisation and management | 75 | 79 | 76 | 71
    13. The timetable works efficiently as far as my activities are concerned. | 76 | 81 | 75 | 71
    14. Any changes in the course or teaching have been communicated effectively. | 72 | 70 | 76 | 71
    15. The course is well organised and is running smoothly. | 76 | 86 | 77 | 71
Learning resources | 80 | 88 | 78 | 90
    16. The library resources and services are good enough for my needs. | 85 | 88 | 83 | 88
    17. I have been able to access general IT resources when I needed to. | 78 | 90 | 74 | 100
    18. I have been able to access specialised equipment, facilities or rooms when I needed to. | 76 | 85 | 76 | 81
Personal development | 81 | 73 | 82 | 65
    19. The course has helped me to present myself with confidence. | 80 | 76 | 80 | 65
    20. My communication skills have improved. | 83 | 67 | 83 | 65
    21. As a result of the course, I feel confident in tackling unfamiliar problems. | 80 | 75 | 81 | 65
Overall satisfaction | 85 | 81 | 85 | 76

Observations about the NSS core questions data:
  1. In 2010 and 2011, a low number of students on the Course participated in the NSS.
    • The University has requirements that determine whether a student is eligible to participate in the NSS, which prevented an accurate calculation of participation percentage at the University level. Additionally, these requirements reduced the number of students on the Course who were permitted to participate in the NSS.
  2. In 2010 and 2011, the number of responses was below the NSS threshold for public reporting. This raised doubts about the reliability and validity of the NSS data for the Course.
  3. Students on the Course in 2010 and 2011 had a similar satisfaction level about assessment and feedback.
    • This observation was weakened by the low number of responses.
  4. Students on the Course in 2011 were least satisfied with the feedback received on their work.
    • This observation was weakened by the low number of responses.
    • Questions 7-9 do not indicate whether the low satisfaction was in relation to formative or summative feedback. If the questions were about formative feedback, then the low satisfaction was inconsistent with the higher satisfaction for questions 10 and 11.
    • The low satisfaction with feedback was inconsistent with the open response comments. There was only 1 comment about assessments and feedback out of 22 open response comments. By comparison, there were 15 comments about the teaching on the course and 4 comments about academic support.

The following inferences were made with extra caution due to the low number of responses:
  1. One sample t-tests suggested that the satisfaction profile of students on the Course in 2010 and 2011 was similar to that of students throughout the University.
    • 2010: t(21) = -0.3366, p = .740
    • 2011: t(21) = -1.7675, p = .092
  2. An independent samples t-test suggested that students on the Course in 2010 had a similar satisfaction profile to students in 2011.
    • t(42) = 1.0598, p = .295
    • Non-parametric tests were also performed and are available in Appendix B.
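These comparisons can be reproduced with standard statistical tools. The sketch below is a minimal illustration using SciPy; the lists are placeholder values rather than the survey data, and it assumes the tests compare question-level percentage scores (22 values per group, giving the degrees of freedom reported above).

```python
from scipy import stats

# Placeholder question-level percentage-satisfied scores (the real analysis
# used 22 values per group).
course_2010 = [82, 81, 86, 81, 71, 76, 48]
course_2011 = [75, 94, 76, 53, 82, 82, 53]
university_mean_2010 = 85.0   # assumed University-wide figure for illustration

# One-sample t-test: does the Course profile differ from the University figure?
t1, p1 = stats.ttest_1samp(course_2010, popmean=university_mean_2010)

# Independent-samples t-test: do the 2010 and 2011 Course profiles differ?
t2, p2 = stats.ttest_ind(course_2010, course_2011)

print(f"one-sample:          t = {t1:.4f}, p = {p1:.3f}")
print(f"independent samples: t = {t2:.4f}, p = {p2:.3f}")
```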

In the future, it would be desirable to have sufficient statistical data for comparison with agreed internal benchmarks (see HEFCE recommendation A.3). Course level data are not published to the public, which prevents comparison with similar programmes at other universities (see HEFCE recommendation A.5). Creating benchmarks is a difficult task. This year is the first time since the NSS started in 2005 that HEFCE has published benchmarks for each university, specifically for Question 22, "Overall, I am satisfied with the quality of the course" (2011a). This benchmark covers only one question. According to Sue Littlemore at The Guardian, HEFCE published the new benchmark this year based on research that found "women tended to be more positive than men about their courses as were students in their 30s or 40s, but Asian, mixed-race students and people with a disability were generally less satisfied along with students following creative arts courses, although students of historical and philosophical studies tended to be more positive" (2011). We should approach the development of internal benchmarks with similar caution and respect for the difficulty of the task.


3.   National Student Survey: Open response comments


HEFCE research has reported that universities "said that the open responses were not easy to analyse and that the analysis was time-consuming." The research recommended "a study to explore the feasibility of developing an analytical tool to enable institutions to analyse comments in the free text area of the NSS in a consistent manner" (Centre for Higher Education Studies at the Institute of Education, 2010).

The table below encoded the open response comments from NSS 2011 into positive and negative categories.
NSS core questions category | Course 2011 NSS rating | Count of positive comments | Count of negative comments
The teaching on my course | 75 | 6 | 9
Assessment and feedback | 62 | 0 | 1
Academic support | 74 | 2 | 2
Organisation and management | 71 | 1 | 0
Learning resources | 90 | 1 | 0
Personal development | 65 | 0 | 0

Observations about the NSS open response comments:

The number of negative comments about teaching (category 1) may indicate that the rating of 75% is below the normal and average range in that question category. By contrast, the lack of comments about assessment and feedback (category 2) may indicate that the rating of 62% is within the normal and average range in that question category.


4.   Course feedback survey


The course feedback survey is a descriptive survey that is subject to the same HEFCE recommendations as previously described for the NSS data.

The survey form has 37 questions. The response options for each question are the same as on the NSS. However, the course feedback survey uses a different reporting method from the NSS. The responses are assigned values from 1 (Strongly disagree) to 5 (Strongly agree), and the average value for each question is reported, rather than the percentage value reported by the NSS. Not applicable responses are excluded from the average calculation, which is similar to the NSS method.
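The reporting rule can be illustrated with a short calculation. The sketch below is a minimal example; the response labels and helper name are assumptions for illustration, not the survey software itself.

```python
# Responses are mapped to values from 1 to 5 and averaged, with
# Not applicable answers excluded, as described above.
SCALE = {
    "Definitely agree": 5,
    "Mostly agree": 4,
    "Neither agree nor disagree": 3,
    "Mostly disagree": 2,
    "Definitely disagree": 1,
}

def mean_rating(responses):
    """Average value for one question; 'Not applicable' answers are ignored."""
    values = [SCALE[r] for r in responses if r in SCALE]
    return round(sum(values) / len(values), 1) if values else None

answers = ["Mostly agree", "Definitely agree", "Not applicable",
           "Neither agree nor disagree", "Mostly agree"]
print(mean_rating(answers))   # (4 + 5 + 3 + 4) / 4 = 4.0
```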

The HEFCE research report contained two recommendations relating to the number of questions on the survey form (Centre for Higher Education Studies at the Institute of Education, 2010):

The HEFCE recommendations above suggest that it is a disadvantage to have 37 questions on the course feedback form (compared with 22 questions on the NSS form). For example, questions about the library (questions 26-28), IS services (questions 29, 30) and disability (questions 12, 13) might be more effective when asked directly at the point at which students use those services, rather than being put to all students on the course, some of whom may not have used the service or may have used it to a much lesser extent. Those questions are difficult to evaluate because they are reported in the same manner as all other questions, yet the extent to which students use those services may vary greatly.

Questions about the timetable (questions 9, 10) and teaching spaces (questions 24, 25) might be more effective when collected ad hoc in response to specific issues, at the unit level in relation to student numbers or teaching methods, or at longer intervals across the whole School. Timetabling and teaching spaces are subject to many constraints, so problems are unlikely to be specific to the Course, and there is little capacity for change. We should not give students an illusion of feedback when the possibility for change is severely limited.

Question 8, "The pastoral support offered by student services met my needs", is confusing because we advise students that an important role of personal tutors (based in the School) is to provide pastoral care. This question is intended to enquire broadly about the student services provided by the University. Given the wide range of services provided centrally by the University, it would be very difficult to evaluate question 8 in order to identify which services the students had in mind when completing the survey. Furthermore, question 8 immediately follows a direct question about the personal tutor, which is likely to confuse students and reduce the effectiveness of question 7, "The support offered by my personal tutor was good."

There are also questions which do not lend themselves to evaluation or action, e.g. question 1, "The course has met my expectations." It would be as difficult to draw out best practices from a very positive response as it would be to find a suitable action to address a very negative one.

Question 15, "The course was intellectually challenging", does not fit with the satisfaction response choices. In the current reporting method, a Strongly agree response is the highest score of 5. But, that is not equivalent to the highest satisfaction for this question. In fact, the question may be interpreted in 2 different ways, "I am satisfied with the intellectual level of the course" (without any indication whether the student finds it difficult or not), or "I am satisfied that the intellectual level of the course is about right for me" (which is a better indication of how difficult the student perceives the course to be).

With considerations for HEFCE recommendations A.1, A.3, A.6, B.2 and recommendations relating to the number of questions on the survey form, the table below shows the course feedback survey results for years 1 and 2. This report focused on the number of participants and 7 questions which have the highest potential for evaluation and change by the Course team (indicated by inverse background formatting).
Question or category (average value, 1-5) | Year 1 | Year 2
Number of participants | 20 | 34
Participation percentage | 42% | 63%
Academic and tutorial guidance, support and supervision
    1. The course has met my expectations | 3.3 | 3.9
    2. The induction process was helpful | 3.8 | 3.9
    3. Accurate information about the course was available | 3.6 | 3.8
    4. I was offered enough choice in study | 2.7 | 3.4
    5. Effective guidance was provided in selection of choices | 3.2 | 3.3
    6. Information on my academic progress was helpful | 3.6 | 3.6
    7. The support offered by my personal tutor was good | 3.7 | 4.1
    8. The pastoral support offered by student services met my needs | 3.7 | 3.7
    9. Timetables were provided in good time | 4.0 | 3.7
    10. The timetable met my needs | 3.5 | 3.8
    11. My course was well managed | 3.7 | 3.8
Disability support
    12. Effective support was provided | 3.6 | 3.5
    13. Reasonable adjustments have been made to enable my learning | 2.7 | 3.5
Learning and teaching
    14. The general quality of teaching on units was good | 3.8 | 3.5
    15. The course was intellectually challenging | 4.2 | 4.3
    16. The range of teaching methods used to support my learning was good | 3.7 | 3.5
    17. The range of assessment methods used to support my learning was good | 3.9 | 3.5
    18. Adequate guidance was provided before assessments | 3.7 | 3.7
    19. Feedback on assessments was normally provided according to published timescales | 3.0 | 3.6
    20. Feedback on assessments was clear constructive and helpful | 3.6 | 3.5
    21. The scheduling of assessments was appropriate | 3.5 | 3.6
    22. The overall assessment load was manageable | 3.5 | 3.5
    23. The overall workload for my course was about right | 3.4 | 3.7
Learning resources
    24. Teaching accommodation was good | 4.1 | 3.6
    25. The equipment available in rooms was suitable | 4.1 | 3.8
    26. Library stock was good | 4.0 | 3.8
    27. There was a good study environment in the library | 3.5 | 3.8
    28. Generally the services of the library were good | 4.1 | 3.9
    29. The quality of IT / computing facilities was good | 3.9 | 3.7
    30. The availability of computing / IT facilities was good | 3.7 | 3.6
General
    31. Overall my experience of the course this year was good | 3.7 | 3.8
    32. Arrangements for considering the student view were appropriate | 3.4 | 3.5
    33. Student views about the course are influential | 3.6 | 3.5
    34. The development of my subject knowledge and skills this year was good | 3.7 | 3.8
    35. I have contributed well to my own learning this year | 3.8 | 4.0
    36. The course has prepared me well for future employment and/or further study | 3.9 | 3.7
    37. Overall I was satisfied with the quality of the course | 3.7 | 3.7

Observations about the course feedback survey data:
  1. A low number of year 1 students participated in the course feedback survey. This number was slightly below the NSS threshold for public reporting.
  2. A moderate number of year 2 students participated in the course feedback survey.
  3. Year 1 students were least satisfied that feedback on assessments was normally provided according to published timescales.
    • This observation was weakened by the low number of responses.

The following inference was made with caution due to the low number of responses from year 1 students:
  1. An independent samples t-test suggested that year 1 students had a similar satisfaction profile to year 2 students.
    • t(72) = -0.8282, p = .4103
    • Non-parametric tests were also performed and available in Appendix B.


5.   Unit feedback surveys


The unit feedback surveys are descriptive surveys using the same response format and average value reporting method as the course feedback survey. They are subject to the same HEFCE recommendations as previously described for the NSS data and the course feedback survey.

The survey form has 12 questions, which is shorter than both the NSS and course feedback forms. All questions are reported in the same manner, with slight emphasis on the last question, "Overall I was satisfied with the quality of the unit." In practice, each question has a different priority in terms of impact on academic quality. Low satisfaction with question 2, "The assessment for this unit was appropriate", is more urgent than low satisfaction with question 5, "I enjoyed the unit."

With considerations for HEFCE recommendations A.1, A.3, A.6, B.2 and recommendations relating to the number of questions on the survey form, the graphs below summarise the unit feedback survey results for 24 units. One unit did not conduct the unit feedback survey. This report focused on the number of participants and 7 questions which have the highest potential for evaluation and change by the unit coordinators and the Course team (indicated in bold format). The complete unit feedback survey data are available in Appendix A.
  1. The aims of the unit were clear
  2. The assessment for this unit was appropriate
  3. The unit content was appropriate to its aims
  4. The delivery of the unit was satisfactory
  5. I enjoyed the unit
  6. The information that I received about the assessment requirements for this unit was helpful
  7. I found the unit interesting
  8. I learnt what I had hoped to from this unit
  9. It was taught at an appropriate level
  10. The workload for this unit was manageable
  11. It was taught at an appropriate pace
  12. Overall I was satisfied with the quality of the unit

Charts 2-4 are box plots of all unit feedback survey data grouped by year. Each box represents the middle half of the observed values for a question (the interquartile range). The bold black line inside each box is the median value. The vertical bar (whisker) at the left represents the lowest observed value that is equal to or higher than the 25th percentile minus 1.5 times the width of the box. The whisker at the right represents the highest observed value that is equal to or lower than the 75th percentile plus 1.5 times the width of the box. The triangle symbol represents observed values found outside of the whiskers (called outliers). Box plots are considered to be robust because outliers have minimal impact on the shape of the chart. David Harrison provides a longer description of box plots with additional examples (1998).
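A chart of this kind can be reproduced with standard plotting tools. The sketch below is a minimal illustration using matplotlib; the values are hypothetical placeholders rather than the survey data, and it applies the whisker rule (1.5 times the box width) and triangle outlier marker described above.

```python
import matplotlib.pyplot as plt

# Hypothetical question-level averages for three year groups (placeholders).
year_1 = [3.6, 4.2, 3.8, 4.7, 3.6, 4.4, 4.3, 3.3, 2.6, 4.8]
year_2 = [3.5, 4.1, 3.3, 4.8, 3.7, 4.1, 3.8, 4.1, 3.5, 3.2]
year_3 = [3.9, 4.5, 4.5, 4.3, 3.9, 4.2, 4.1, 3.4]

fig, ax = plt.subplots()
ax.boxplot([year_1, year_2, year_3],
           vert=False,   # horizontal boxes, one row per year group
           whis=1.5,     # whiskers extend to 1.5 times the box width (IQR)
           sym='^')      # observations beyond the whiskers drawn as triangles
ax.set_yticks([1, 2, 3], ["Year 1", "Year 2", "Year 3"])
ax.set_xlabel("Response value (1 = Strongly disagree, 5 = Strongly agree)")
plt.show()
```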

Observations about the unit feedback survey data:


6.   Evaluation and next steps


The evidence above confirmed that a great deal of effort was invested in student feedback activities. Yet much of the hard work from both students and staff was lost due to two problems: the low number of participating students and the lack of a sampling procedure severely decreased the potential to use the feedback data for curricula enhancement. These problems reach much further than today. Since we cannot use the feedback data from academic years 2009 and 2010 with confidence today, we will not be able to use them in the future either, and we will have no reliable historical data against which to analyse future feedback from our students. THIS MUST CHANGE. It is an urgent responsibility that we start collecting reliable student feedback data so that students and colleagues in the future will have a better chance of learning from our experiences today. Below is a proposed list of next steps, starting with the highest priority:
  1. Expand activities aimed at increasing student participation in NSS. We are already working to encourage student participation. Within NSS guidelines, we might explore possible incentives to increase student participation. The University has determined that 25 students on the Computer Network Management and Design course are eligible to participate in the 2012 NSS. It is critically important that all the eligible students participate in order to meet the NSS reporting threshold requirement of 23 participants.
  2. Course feedback forms should change to align with the NSS form. This has been reported by other universities to have positive effects on their NSS participation (HEA, 2007). Furthermore, it creates the potential to conduct longitudinal studies of curricula design across all years of study. It also creates the potential to conduct correlation studies about the extent to which satisfaction in years 1 and 2 is a predictor of NSS satisfaction.
  3. Unit feedback forms should be reduced to 7 questions. This is aimed at reducing feedback fatigue and increasing focus on issues where the Course team has the highest potential to make curricula changes. It is easy enough to add more questions in the future. It is worse to collect data which cannot be used.
  4. Unit feedback forms should only be used with units that have 30 or more registered students. This is aimed at protecting the reliability and validity of the whole data set and reducing feedback fatigue. We do not need to incur the efforts of students and staff when there is a high risk of not using the feedback data. Units with 30 or more registered students provide a good chance of obtaining a sufficient sample size.
  5. Systematic sampling should be applied to course and unit feedback surveys. Currently, participation is based on students who attend a particular (random) timetabled event selected to distribute the forms. This creates a self-selecting sample which reduces the validity and reliability of the feedback data. Furthermore, it is not possible to follow up students who missed the event or decided not to participate. A systematic sampling method would set a target number for each unit, use a rule to order the list of students in the unit (e.g. ID numbers), and then select every Nth student from beginning to end of the list such that the target number is met (see the sketch after this list). Feedback forms are given specifically to those students. We would track whether the feedback has been completed, but the feedback becomes anonymous once received.
  6. Cease collection of qualitative feedback in the unit feedback survey. This is aimed at reducing feedback fatigue and acknowledging the difficulty of analysing open response comments and other forms of qualitative feedback (Centre for Higher Education Studies at the Institute of Education, 2010). It is easy enough to resume these activities once we have a working and reliable method to analyse qualitative feedback. Students will continue to have opportunities to provide qualitative feedback via the staff student consultative committee (SSCC) and informal feedback with their unit lecturers, personal tutors, course leaders and Head of School.
  7. Start the work to create a reliable method to analyse qualitative feedback from NSS and course feedback surveys. As reported by HEFCE and in this report, NSS and course feedback results do not change much between years. Thus, by focusing our efforts to systematically analyse qualitative feedback at the course level, we can also use it to triangulate and enhance our potential to use the quantitative data from NSS (which is now firmly entrenched in the University and in the sector at large) and the course feedback (which may have a predictive relationship to the NSS).
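The systematic selection rule proposed in step 5 can be sketched briefly. The example below is a minimal illustration under assumed record formats; the helper name, ID values and target number are hypothetical.

```python
def systematic_sample(students, target):
    """Order students by ID and select every Nth one until the target is met."""
    ordered = sorted(students, key=lambda s: s["id"])
    if target >= len(ordered):
        return ordered
    step = len(ordered) / target              # fractional step spans the whole list
    return [ordered[int(i * step)] for i in range(target)]

# Hypothetical unit register of 70 students with sequential ID numbers.
unit_register = [{"id": 1500 + i} for i in range(70)]
sample = systematic_sample(unit_register, target=30)
print(len(sample), [s["id"] for s in sample[:5]])   # 30 [1500, 1502, 1504, 1507, 1509]
```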

Observations and statistical tests of the NSS and course feedback data indicate that students on the Course have a similar satisfaction profile to the University at large. One question on the NSS form and one question on the course feedback form were rated low, but there was no further corroborating information to show that they were long term concerns or how they could be addressed. In fact, this is inconsistent with the generally positive year 3 unit feedback data (although unit feedback included students from other courses). Additionally, the low number of students who participated in the NSS and course feedback acted as a discounting factor.

Visual observations of the unit feedback survey data indicate broadly that students are most satisfied with year 3 and have a wide spread of opinions about years 1 and 2. Caution is required when evaluating the unit feedback survey data because many units have students from multiple courses of study. The wide spread of values and the higher frequency of outliers collected with the year 2 units may reflect low internal validity caused by the lack of a sampling method. Across all 3 years, the spread of values frequently extended towards response value 5 (Strongly agree), which is a positive indication of existing good practices.

Giving and receiving feedback is a human activity. We must encourage colleagues and students to think about feedback as a dialogue. Good feedback requires time, effort and responsibility. If feedback is only used to identify problems, then it leads to two disturbing possibilities: either there are no good practices, or we do not know how to identify good practices. But we have all observed good practices. The problem must lie in the way we conduct feedback activities. Feedback as a dialogue helps us to untangle the different (and often difficult) factors that increase long term student success from the factors that only increase short term student satisfaction. We will know that changes have been successful when we can point to the spread and adoption of specific best practices arising from the student feedback activities.


References




Appendix A


With considerations for HEFCE recommendations A.1, A.3, A.6, B.2 and recommendations relating to the number of questions on the survey form, the tables below show the unit feedback survey results for 24 units. One unit did not conduct the unit feedback survey. This report focused on the number of participants and 7 questions which have the highest potential for evaluation and change by the unit coordinators and the Course team (indicated by inverse background formatting).

Year 1 unit codes | B101 | B105 | B142L | B144L | B146 | B148 | B163 | B164L
Credits | 10 | 10 | 20 | 20 | 10 | 20 | 10 | 20
Number of participants | 40 | 12 | 57 | 18 | 30 | 36 | 49 | 27
Participation percentage | 30% | 26% | 44% | 40% | 65% | 78% | 37% | 60%
    1. The aims of the unit were clear | 3.7 | 4.5 | 4.1 | 4.8 | 4.1 | 4.4 | 4.3 | 3.6
    2. The assessment for this unit was appropriate | 3.9 | 4.3 | 3.8 | 4.7 | 4.0 | 4.4 | 3.7 | 3.9
    3. The unit content was appropriate to its aims | 3.8 | 4.5 | 4.1 | 4.7 | 4.1 | 4.4 | 4.2 | 3.5
    4. The delivery of the unit was satisfactory | 4.1 | 4.2 | 4.0 | 4.7 | 3.4 | 4.4 | 4.5 | 3.3
    5. I enjoyed the unit | 3.0 | 4.1 | 3.9 | 4.7 | 3.5 | 4.3 | 4.3 | 2.6
    6. The information that I received about the assessment requirements for this unit was helpful | 3.9 | 4.3 | 4.1 | 4.7 | 3.9 | 4.1 | 3.8 | 3.9
    7. I found the unit interesting | 3.3 | 4.1 | 3.9 | 4.6 | 3.7 | 4.3 | 4.5 | 2.9
    8. I learnt what I had hoped to from this unit | 3.4 | 4.1 | 3.9 | 4.7 | 3.5 | 4.2 | 4.1 | 2.9
    9. It was taught at an appropriate level | 4.0 | 4.3 | 3.9 | 4.8 | 3.5 | 4.3 | 4.2 | 3.1
    10. The workload for this unit was manageable | 3.9 | 4.3 | 3.9 | 4.6 | 3.5 | 4.2 | 3.8 | 3.4
    11. It was taught at an appropriate pace | 4.0 | 4.3 | 3.7 | 4.7 | 3.3 | 4.1 | 3.8 | 3.1
    12. Overall I was satisfied with the quality of the unit | 3.6 | 4.2 | 3.8 | 4.7 | 3.6 | 4.4 | 4.3 | 3.3


Year 2 unit codes | B201 | B202 | B242L | B244L | B247 | B248 | B253 | B254 | B265 | B266
Credits | 10 | 10 | 20 | 20 | 10 | 10 | 20 | 20 | 10 | 10
Number of participants | 35 | 69 | 32 | 42 | 52 | 15 | 10 | - | 26 | 32
Participation percentage | 32% | 65% | 46% | 57% | 39% | 29% | 71% | 0% | 38% | 46%
    1. The aims of the unit were clear | 3.7 | 4.2 | 4.0 | 4.8 | 4.1 | 4.1 | 4.1 | - | 4.2 | 3.8
    2. The assessment for this unit was appropriate | 3.8 | 3.8 | 3.4 | 4.4 | 3.8 | 4.1 | 3.9 | - | 3.8 | 3.7
    3. The unit content was appropriate to its aims | 3.8 | 4.0 | 3.7 | 5.0 | 4.0 | 4.1 | 4.0 | - | 4.2 | 4.0
    4. The delivery of the unit was satisfactory | 3.6 | 4.0 | 3.3 | 4.8 | 3.8 | 4.4 | 3.8 | - | 4.3 | 3.4
    5. I enjoyed the unit | 3.3 | 3.9 | 3.5 | 4.8 | 3.5 | 3.7 | 3.9 | - | 4.1 | 3.2
    6. The information that I received about the assessment requirements for this unit was helpful | 3.6 | 4.0 | 3.7 | 5.0 | 3.8 | 4.1 | 3.6 | - | 4.1 | 3.4
    7. I found the unit interesting | 3.5 | 4.0 | 3.8 | 5.0 | 3.6 | 3.5 | 4.2 | - | 4.2 | 3.4
    8. I learnt what I had hoped to from this unit | 3.8 | 3.9 | 3.5 | 4.4 | 3.7 | 3.9 | 3.9 | - | 4.0 | 3.3
    9. It was taught at an appropriate level | 3.7 | 4.0 | 3.5 | 4.6 | 4.0 | 4.2 | 4.0 | - | 4.0 | 3.7
    10. The workload for this unit was manageable | 3.6 | 3.9 | 3.3 | 4.2 | 3.8 | 4.1 | 3.8 | - | 4.1 | 3.7
    11. It was taught at an appropriate pace | 3.5 | 3.9 | 3.7 | 4.8 | 4.0 | 4.1 | 3.8 | - | 4.1 | 3.8
    12. Overall I was satisfied with the quality of the unit | 3.5 | 4.1 | 3.3 | 4.8 | 3.7 | 4.1 | 3.8 | - | 4.1 | 3.5


Year 3 unit codes | B302 | B351 | B352 | B353 | B355 | B357 | B359
Credits | 10 | 10 | 10 | 10 | 10 | 20 | 20
Number of participants | 84 | 29 | 47 | 25 | 35 | 51 | 17
Participation percentage | 62% | 78% | 71% | 40% | 58% | 98% | 50%
    1. The aims of the unit were clear | 3.9 | 4.5 | 4.5 | 4.4 | 4.1 | 4.2 | 4.4
    2. The assessment for this unit was appropriate | 4.0 | 4.3 | 4.2 | 4.3 | 3.6 | 4.2 | 3.9
    3. The unit content was appropriate to its aims | 4.0 | 4.6 | 4.5 | 4.3 | 4.0 | 4.3 | 4.2
    4. The delivery of the unit was satisfactory | 4.1 | 4.6 | 4.5 | 4.2 | 3.9 | 3.9 | 4.3
    5. I enjoyed the unit | 3.9 | 4.3 | 4.3 | 4.1 | 3.9 | 4.1 | 3.9
    6. The information that I received about the assessment requirements for this unit was helpful | 3.9 | 4.4 | 4.4 | 4.2 | 3.8 | 4.1 | 3.9
    7. I found the unit interesting | 3.9 | 4.5 | 4.6 | 4.2 | 4.3 | 4.2 | 4.2
    8. I learnt what I had hoped to from this unit | 3.7 | 4.1 | 4.2 | 4.2 | 3.8 | 4.2 | 3.9
    9. It was taught at an appropriate level | 4.0 | 4.3 | 4.2 | 4.2 | 3.8 | 4.0 | 4.3
    10. The workload for this unit was manageable | 3.8 | 4.4 | 4.2 | 4.2 | 3.8 | 3.9 | 3.4
    11. It was taught at an appropriate pace | 3.8 | 4.0 | 3.8 | 4.2 | 3.8 | 3.8 | 4.3
    12. Overall I was satisfied with the quality of the unit | 3.9 | 4.5 | 4.5 | 4.3 | 3.9 | 4.2 | 4.1



Appendix B


Non-parametric tests were also performed to check the possibility that the student feedback data were not normally distributed. These tests were consistent with the t-test findings.
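For reference, such a check can be carried out with standard tools. The sketch below assumes a Mann-Whitney U test as a rank-based counterpart to the independent-samples t-test; the exact non-parametric tests used in the original analysis are not specified here, and the values shown are placeholders rather than the survey data.

```python
from scipy import stats

# Placeholder question-level scores for two groups (not the survey data).
group_a = [82, 81, 86, 81, 71, 76, 48]
group_b = [75, 94, 76, 53, 82, 82, 53]

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")
```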