From the Self-Paced Logic Project
Professor Austen Clark
Department of Philosophy
University of Connecticut
A pure "Keller plan" sets out a long sequence of units, and the tests on each unit are graded pass-fail, with the "pass" criterion set relatively high. The final grade a student earns is determined simply by the number of units completed by the end of the semester. In a small class this is a fantastic way to motivate the best students to learn as much as they can. But such a scheme needs ten or more units, and in a class with eight sections we thought the number of tests would be prohibitive. So we set up six units, with the final grade determined by the average of the best scores earned over the six units.
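The grading rule is simple enough to sketch in code. The fragment below is illustrative only, not anything the course itself ran; it assumes scores on a 0-100 scale and assumes (my assumption, not stated above) that a unit with no test attempts counts as zero.

```python
# Illustrative sketch of the grading rule described above: the best
# score on each of the six units is averaged. (Assumption: a unit
# with no test attempts counts as zero.)

def final_average(scores_by_unit):
    """scores_by_unit maps each unit name to a list of test scores."""
    best = {unit: max(scores) if scores else 0
            for unit, scores in scores_by_unit.items()}
    return sum(best.values()) / len(best)

student = {
    "Unit 1": [72, 85, 91],   # three attempts; only the 91 counts
    "Unit 2": [88],
    "Unit 3": [60, 79],
    "Unit 4": [95],
    "Unit 5": [],             # not yet attempted
    "Unit 6": [83, 90],
}
print(final_average(student))  # average of 91, 88, 79, 95, 0, and 90
```

Note how retaking a test can only help: a lower score on a later attempt never displaces an earlier best.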
You must give students the wherewithal to learn the material on their own if they so choose. Study guides describe the objectives of each unit, the text to be studied, and the sample exercises to practice on. Our study guides have evolved to be rather explicit about the kinds of items to be found on each test, with pointers to sample exercises for each such kind. It seems fair to be explicit about such details; if you are not, the students will come take tests before studying, just to "see what the test is like".
It seems impossible to have too many test items. As items are retired from tests we plan to move them into the textbook as exercises. Because students need answers, the database includes answers. (Each type of problem corresponds to a section of a test, which corresponds in turn to a "group" of items in the database: the entire set from which that test section is drawn.)
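The organization just described can be sketched as a small data model. This is a hypothetical sketch, not the actual selfpace.mdb schema; all names here are invented for illustration.

```python
# Hypothetical sketch of the test-bank organization described above:
# each test section draws on one "group" of interchangeable items,
# and every item carries its answer. (Not the actual selfpace.mdb
# schema; names are invented.)

from dataclasses import dataclass, field
import random

@dataclass
class Item:
    text: str
    answer: str
    retired: bool = False      # retired items move to the textbook

@dataclass
class Group:
    name: str                  # e.g. "symbolize the sentence"
    items: list = field(default_factory=list)

    def active_items(self):
        """Items still eligible to appear on a test."""
        return [item for item in self.items if not item.retired]

def draw_section(group, rng=random):
    """Pick one active item from a group for the matching test section."""
    return rng.choice(group.active_items())
```

The point of the group is that any of its active items can fill the corresponding test section interchangeably.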
The textbook I prepared for the course started as nothing but sets of exercises with answers. With the notable exception of The Logic Book, by Merrie Bergmann, Jim Moor, and Jack Nelson, most logic books do not provide anywhere near enough answers. One semester I realized that if I simply added revised lecture notes to the exercise sets, that might be enough for students to learn the material without buying an additional book, as they had been doing up to that point. It worked fine, and the students liked it. I recommend an additional book for those students who want to do more reading, but it is no longer required.
Very few students take more than three or four tests on a given unit, but given the size of the class we wanted a bigger selection, to make it unpredictable which test will be at the top of the pile in the next testing session.
The initial threshold for us to teach the course self-paced to eight sections was 36 tests, which turned out to be roughly 1,000 test questions. If you have that many test questions spread across your word-processing documents, you could certainly convert to a self-paced format; fewer will do if the class is smaller.
Grading instructions spell out full-credit and partial-credit answers. We wanted them anyway in case we could later figure out how to do web-based testing in an acceptable way. Asking oneself how one would grade potential answers to a potential test item is a very good discipline for writing good items.
It is one thing to teach a self-paced course once. The hard part is to do it all again and again, semester after semester. The purpose of the software is to make the latter feasible. In particular, it will help you to:
This turns out to be a central (and unanticipated) advantage of keeping all your test items in a database. An item is stored in just one place, and if you fix it there, it stays fixed in every test that later draws on it.
Teaching assistants write sample tests for each unit, and we meet once per unit to talk about those tests. Good items are used to create the tests for the "final exam" testing session, and absorbed into the database. When the course was taught self-paced for the first time (in spring 2000) we had roughly 1,000 items in the database. By the end of spring 2002 it had grown to over 3,000.
Initially teaching assistants entered items directly into the database, but we found it far easier to distribute dummy scripts for the data loader for each unit. Now they can write a test in a word processor, and the resulting file is run through the data loader, which chops up the items and enters them appropriately into the test bank. (See the data loader script "freeitem.txt" for how the items distributed in selfpace.mdb were loaded. It and a dummy script for one of the units are included in the documentation package.)
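The chopping step can be sketched in a few lines. The real loader's script format ("freeitem.txt") is not reproduced here, so this hypothetical version simply assumes one item per block, blocks separated by a line of three hyphens, and the answer introduced by a line reading "ANSWER:".

```python
# Hypothetical sketch of the "chop a test file into items" step the
# data loader performs. The real script format is not shown here; we
# assume blocks separated by a line "---" and answers flagged by a
# line "ANSWER:".

def load_items(text):
    items = []
    for block in text.split("\n---\n"):
        question, _, answer = block.partition("\nANSWER:\n")
        items.append({"question": question.strip(),
                      "answer": answer.strip()})
    return items

sample = ("Symbolize: If it rains, the match is off.\n"
          "ANSWER:\n"
          "R -> ~M\n"
          "---\n"
          "Is 'P & Q' a conjunction?\n"
          "ANSWER:\n"
          "Yes.")
```

Whatever the delimiter conventions, the payoff is the same: assistants write ordinary documents, and the loader turns them into database rows.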
I cannot imagine trying to do this with cut-and-paste in a word processor. If you want to create multiple tests built on the same plan, then for each section of the test you will want to see the entire group of potential items available for that section. This is the core function of selfpace.mdb: the "create test" screen lets you do exactly this. You simply click on an item to select it for the test in question. A number of subsidiary functions are needed to make this possible:
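In outline, assembling a test from a plan looks like the sketch below. In selfpace.mdb the choice for each section is made by clicking on an item; here a random choice stands in for that click, and all names are invented for illustration.

```python
# Sketch of building a test from a plan: one item per section, each
# drawn from that section's group. In selfpace.mdb the choice is made
# by hand on the "create test" screen; random.choice stands in for
# the click here. (Names are invented for illustration.)

import random

def create_test(plan, bank, rng=random):
    """plan: ordered group names, one per test section.
    bank: dict mapping group name -> list of candidate items."""
    return [rng.choice(bank[group]) for group in plan]

plan = ["truth tables", "translations", "derivations"]
bank = {
    "truth tables": ["Is '(P v ~P)' a tautology?"],
    "translations": ["Symbolize: Not both P and Q."],
    "derivations": ["Derive Q from P and P -> Q."],
}
test = create_test(plan, bank)
```

Because every test built on the same plan draws section by section from the same groups, the many versions of a unit's test stay parallel in difficulty and coverage.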
Posted June 2002