Blogging from The Tenth Southern Hemisphere Conference on the Teaching and Learning of Undergraduate Mathematics and Statistics

R Nazim Khan – UWA (photo taken from http://www.uwa.edu.au/people/nazim.khan)

See paper: Assessments: an open and closed case.

Looking at the difference between open book exams and closed book exams.

Closed book exams: How much can a student hold in their head? They only demonstrate what students can do with what they’ve memorised.

Open book exams are more like real life.

These ideas aren’t backed by data, but there is research on which format is ‘better’. Bailie and Toohey: it’s not clear that open book is really any better.

First year stats level: Can we investigate whether open or closed book exams lead to different understanding? Observational study:

Univariate statistical methods and some probability with an open book assessment.

Not possible to compare the open book assessment with previous years’ results due to other variations between cohorts.

Because it was open book:

Students had copied down solutions from old tests: not a good sign.

Students used model answers.

Many blank scripts: more than ever seen for closed book exams.

Top students were missing.

Then tried a closed book exam with similar questions.

Comparisons:

Students did better in both the second open book exam (they had some experience with the format by then) and the following closed book exam than in the first open book exam.

Students who did worse preferred open book.

Open book exams didn’t inflate marks.

Ioannidou (1997) also found that students did better in closed book exams.

Students are less prepared in an open book exam.

Even the top students didn’t prepare themselves as well.

The literature is unclear on whether open book exams (OBEs) lead to better understanding.

Closed book exams don’t rely purely on memory.

Based on this research: recommend closed book exams for low-level mathematics and statistics courses. You can still include questions that are more demanding.
