
Statistical Thinking as a Volunteer Teacher

After I gave my first test to my JSS 1 (n=23) and JSS 2 (n=22) classes, I tabulated each student's score into a dataset organized by class, then used Excel to calculate descriptive statistics for each class so I could compare performance across the classes.
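For anyone who prefers a script to a spreadsheet, here is a minimal Python sketch of the same per-class descriptive statistics; the scores below are hypothetical placeholders, not my dataset.

```python
import statistics

def describe(scores):
    """Return the class average and standard deviation of a list of scores."""
    return statistics.mean(scores), statistics.stdev(scores)

# Hypothetical scores for illustration; substitute each class's actual marks.
classes = {
    "JSS1": [62, 45, 88, 30, 71, 95, 54],
    "JSS2": [58, 66, 49, 70, 61, 55, 64],
}
for name, scores in classes.items():
    avg, sd = describe(scores)
    print(f"{name}: average = {avg:.1f}%, standard deviation = {sd:.1f}")
```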


My first observation was that the class average for JSS3 (73.5%), whose test was set and marked by the retiring teacher, was substantially higher than the averages for JSS1 (57.8%) and JSS2 (57.5%), the two classes I taught. At face value, my teaching method seemed to be the culprit: the class taught by another teacher scored markedly higher than both of mine, and the near-identical averages of my two classes suggested a common factor behind their performance. However, another pattern was apparent: the standard deviation of class scores fell sharply with class level. JSS1's was 25.1, JSS2's 17.5, and JSS3's only 8.5.


I was therefore left with a question: was an inadequacy in my teaching method to blame for these performance patterns?


One approach was to collect past 1st CA social studies scores and check whether their averages were consistent with the present pattern. I found that JSS1's average 1st CA scores (for which I obtained data for three terms) ranged from 55.6% to 61.7%, with an overall average of 58.1%, while JSS2's (for which I could obtain data for only one term) was 60.8%. These historical averages are consistent with the present ones, deviating little from what they would lead one to expect. The standard deviations were inconclusive, as the JSS2 data were not extensive enough.


Regarding JSS3's relatively high performance, an opportunity was at hand to settle the question with a natural experiment: I would teach JSS3 after their teacher had left, then set, invigilate, and mark their next test myself. If my teaching method truly was the culprit, I would expect the JSS3 average to drop sharply in the 2nd CA test.


When the results came in, the JSS3 average was 71.4%, barely down from 73.5% on the 1st test. But the other numbers revealed more than met the eye: the JSS3 standard deviation rose sharply to 21.4, from 8.5 on the 1st test. It was then that I realized, and confirmed with the students, that my invigilation had been stricter and more vigilant than during their 1st test under the previous teacher. This is probably the primary explanation for the unusual homogeneity of JSS3's 1st-test scores. To corroborate the theory empirically, I reasoned that if it were true, the correlation between student effort and 1st CA score should be lowest for JSS3 compared with JSS1 and JSS2, while the same correlation should be more uniform across classes for the 2nd CA scores. Indeed, using degree of note completion as a proxy for effort, I found 1st-test correlation coefficients of 0.345 for JSS1 and 0.362 for JSS2, against a strikingly low 0.012 for JSS3. By contrast, the 2nd-test correlations were 0.168 for JSS2 and a jump to 0.278 for JSS3, indicating that students' own effort explained more of their 2nd-test performance than of their 1st.
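For readers who want to reproduce this kind of check, here is a minimal sketch of the correlation computation; the note-completion and score lists are hypothetical stand-ins for the real class records.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: note completion (%) as the effort proxy, and test scores.
note_completion = [100, 80, 60, 95, 40, 75]
test_scores     = [78, 65, 50, 80, 45, 60]
print(f"r = {pearson_r(note_completion, test_scores):.3f}")
```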


Yet I was still concerned about JSS3's slightly lower 2nd-test average. However, the data show that 57.7% of JSS3 students either improved or held steady on the 2nd test relative to the 1st. In other words, for the majority of students (who were exposed to my teaching just as the other 42.3% were), my method seemed adequate. Still, I cannot absolve myself of all error: I ran a quick survey in all three classes, asking students to rate my teaching vis-à-vis how well they learned from me. The average class ratings show that while JSS1 and JSS2 rated me above 98%, JSS3 rated me at 88.6%, indicating that I need to put more effort into how I teach JSS3 as a senior class. Moreover, the variance of the ratings was substantially higher for JSS3. This, combined with the fact that 42.3% of JSS3 students did worse on their 2nd test under me, suggests that while some students (probably those with greater ability and seriousness) find it easy to learn from me, many do not. The implication is that I must make my teaching style inclusive of a wider range of learning types.
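The 57.7% figure comes from a simple paired comparison of each student's two scores; a sketch of that calculation (again with hypothetical paired scores) is below.

```python
def share_improved_or_held(first, second):
    """Fraction of students whose 2nd test score matched or beat their 1st."""
    count = sum(1 for a, b in zip(first, second) if b >= a)
    return count / len(first)

# Hypothetical paired scores, same student order in both lists.
first_test  = [74, 80, 65, 70, 90]
second_test = [76, 75, 65, 82, 85]
print(f"{share_improved_or_held(first_test, second_test):.1%} improved or held steady")
```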


Presently, I am working on identifying the determinants of the variation in students' test performance, using a multiple regression model.
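As a sketch of what such a model might look like, the snippet below fits an ordinary least squares regression with NumPy. The predictors (note completion, attendance rate, a senior-class dummy) and all the values are assumptions for illustration, not my final specification or data.

```python
import numpy as np

# Hypothetical per-student predictors: note completion (%), attendance rate,
# and a senior-class dummy; y is the 2nd CA test score. Illustrative values only.
X = np.array([
    [100, 0.95, 1],
    [ 60, 0.80, 0],
    [ 85, 0.90, 1],
    [ 40, 0.70, 0],
    [ 90, 1.00, 1],
    [ 70, 0.85, 0],
])
y = np.array([78, 55, 70, 48, 82, 61])

# Prepend an intercept column and fit ordinary least squares.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("intercept and coefficients:", coef)
```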

