Blogging from The Tenth Southern Hemisphere Conference on the Teaching and Learning of Undergraduate Mathematics and Statistics
Dr Justin Munyakazi – UWC (photo taken from https://www.uwc.ac.za/Biography/Pages/Dr.-Justin-Munyakazi.aspx)
Live blogging: Note that these are notes I’ve taken live; I will edit them into a more readable format today. I want to put this up straight away, though, to catch any obvious misunderstandings. Equations will also be put into a more readable format ASAP.
Are students actually learning what we intend to teach them? An analysis of student responses to first-year calculus examination questions (Munyakazi, Kizito, Elengemoke).
Background and Context
First-year university maths for BSc students in South Africa: around 240 students. This particular course (MAT105) has a very low pass rate, around 20–30%, despite interventions.
Uses Stewart Calculus as a text.
Mode of delivery: lectures, tutorials, one-to-one consultations.
Using tablets and posts lectures on the learning management system.
Failure seems to come mainly on the calculus component of the course.
Problem statement: there seems to be a disjuncture between what is taught and what is learned.
Study aim: Reflectively analyse student responses to selected calculus examination questions to establish the size of the gap between the planned learning outcomes and the resultant learning outcomes, in order to find ways of reducing this gap.
Theoretical frameworks: the Mathematical Assessment Task Hierarchy (MATH) taxonomy, with categories A1–C3 (for classifying the difficulty of each question).
Lithner’s framework for creative and imitative reasoning (for categorising the reasoning students used in their responses): students tend not to use the creative side but simply the memorised, imitative side of Lithner’s framework.
For each task, the level of difficulty of the question is compared on one side with the student reasoning on the other side.
Limitations:
- It is difficult to put a question into one category
- It is difficult to exactly pinpoint what reasoning skills students are employing when responding to the tasks.
- Little validation of the taxonomies themselves.
Selected a student sample: 36 of the 138 students (out of 240) who scored over 40%.
Selected questions for analysis:
- Question related to the gradient and turning points of a graph
- Question on finding derivatives of a function
and 10 more questions.
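To give a sense of what a gradient/turning-points task at this level looks like, here is a hypothetical example of my own (not one of the actual exam questions), worked through in the way a fully creative, verified response might go:

```latex
Let $f(x) = x^3 - 3x^2 + 2$. Then
\[
  f'(x) = 3x^2 - 6x = 3x(x - 2),
\]
so the gradient is zero at $x = 0$ and $x = 2$. Checking with the second derivative,
\[
  f''(x) = 6x - 6, \qquad f''(0) = -6 < 0, \qquad f''(2) = 6 > 0,
\]
so $(0, 2)$ is a local maximum and $(2, -2)$ is a local minimum.
```

The verification step at the end (using $f''$ to classify the stationary points) is exactly the kind of "verification argumentation" the study found missing from most student responses.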
e.g. on one question: 9 students didn’t respond, 17 got zero, and no student got 5/5.
e.g. a student could recall what they must do but couldn’t apply it properly. The strategy is not supported by any observed verification argumentation. The response is partially correct.
In summary: performed a qualitative assessment of each student’s responses in order to put into Lithner’s framework.
In most instances the student strategies are not supported by any observed verification argumentation.
Conclusions:
Are we examining what we set out to teach?
To some extent yes, because the questions are designed with an understanding of the levels of mathematical competencies required.
The disjuncture between the lecturer expectations and actual student performance is caused by a number of reasons.
One of the reasons could be students’ tendency to use sometimes inefficient and mathematically superficial imitative strategies rather than creating their own solutions through reasoning (Sidenvall, Lithner and Jäder 2015, p. 533).
Lecturers are often required to cover content in very short periods leaving very little time for students to gain the required mathematical reasoning skills.
Perhaps the focus should be on designing activities that encourage students to provide their own explanations as to why solutions and answers are correct.