The Self Paced Logic Project:
A Significant Impact Grant

Institute for Teaching & Learning
University of Connecticut

Submitted by Austen Clark
Department of Philosophy
November 1999;
Funded January 2000

 


Abstract

Philosophy 102 ("Philosophy and Logic") is a course on logic and critical thinking, taught by several members of the Philosophy faculty, and satisfying a portion of the general education requirements for all baccalaureate degrees awarded by this University. Most students who take it are in their first year at University of Connecticut. I propose to convert the large lecture version of Phil 102 (the one I teach) to a "self-paced" course.

Objectives of the Proposal.

In a "self-paced" logic course, the work for the semester is divided into relatively small units, and four to eight tests are written for each unit. Students can take multiple tests on a given unit, until they achieve the score they want on that unit. They then proceed to the next unit, and this process continues until they either get the final grade they want or run out of time. A final grade is simply the average of best scores for each unit. Students can proceed as quickly as they like through this progression. When they get an average with which they are satisfied, they can stop.

Philosophy 102 as it is currently taught relies on lectures, one-hour tests, and exams. Tests in logic (and in math generally) arouse considerable anxiety in some of our students. The date is set by someone else, the content is rather unpredictable, and a bad performance during one hour of the semester can affect one's final grade for the entire semester. All of these bad features go away in a self-paced course. Students choose when to take tests. They know that they can take another test on the same unit if they bomb on the first one. They know that their final grade is more or less entirely under their control. The responsibility for learning is placed squarely on their backs, which is where it is located anyway. The self-paced format simply makes that placement overt and controllable.

Most students respond magnificently. I have several years of experience teaching both symbolic logic and critical thinking in self-paced courses, gained at the University of Tulsa before I came to the University of Connecticut. As one might guess, students love the format; but more to the point, they also learn more critical thinking. Even a student who is in the course solely to satisfy a curricular requirement, and hoping simply for a "C", can be tempted into taking more tests and trying to do better. Contrariwise, students who are capable of zooming through the general education requirement are here allowed to do so. Everyone is happier.

Everyone except, perhaps, the instructors. It takes a lot of work to set up the units, write the tests, and grade them all. Lectures proceed at a pace sufficient to finish all the units by the end of the semester, but even more important for the success of the course is a textbook that details the skills to be learned in each unit, presents the material, and then provides lots of exercises with answers, so that students can practice before taking a test. One must also set up independent testing sessions and write coding frames and grading guidelines for each item of each test. Results of tests and running averages must be communicated to students in an expeditious fashion.

Proposed Methodology.

Currently I teach Philosophy 102 in a large lecture to some 240 students, who split once a week into eight sections led by graduate teaching assistants. It occurred to me a number of years ago that those TA sections would make ideal testing sessions if the course could ever be taught self-paced. I have been working to create the resources necessary to do this, but there is a big hump between having the resources needed to teach a self-paced course to 35 students and having enough to teach 240--much less to do the latter year after year. The purpose of the grant is to get us over that hump.

I have already developed a text for the course, and it has a fairly large collection of exercises with answers. That text needs to be reorganized into six units, and some material needs to be added. But the bigger job is to put in the foundations and set up the infrastructure for creating many, many tests. That infrastructure needs to be extensible, so that it can continue to grow over the years, and sufficiently flexible, so that different instructors at the University of Connecticut could use it. To eliminate potential problems with cheating, there must be many tests on a given unit, and which particular one a student gets at a particular session must be random. (Thus even if a student has seen a copy of test 2.2, there would be no way to predict that the next test they take on unit 2 will be that one.) We need at least 36 different tests (six per unit) the first time the course is taught. A given set of tests should be used just once; a new batch should be produced each time the course is taught. The eventual goal is to create a universe of items so large that it could be printed and placed in the library without giving anyone the means to cheat. Anyone who got an "A" because they had memorized that whole fat telephone book of items and answers would deserve it.
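
Purely as illustration, the random draw described above might look like the sketch below. The function name and the six-tests-per-unit constant come from the figures in this proposal; the idea of preferring versions a student has not yet taken is an added assumption, not something the proposal commits to.

    import random

    TESTS_PER_UNIT = 6   # "at least 36 different tests (six per unit)"

    def draw_test(unit: int, already_taken: set) -> str:
        """Return a randomly chosen test version for the unit, preferring
        versions this student has not yet taken at an earlier session."""
        pool = [f"{unit}.{n}" for n in range(1, TESTS_PER_UNIT + 1)]
        fresh = [t for t in pool if t not in already_taken]
        return random.choice(fresh or pool)

    # A student who has already seen test "2.2" cannot predict the next draw:
    print(draw_test(2, {"2.2"}))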

The smart way to do this would be to create a relational database of test items (in Microsoft Access), including details on test specifications, types of item, headers for common questions, types of question, grading guidelines, full-credit answers, partial-credit answers, and whatever other data one needs to create, print, and grade tests on the subject. I have a fairly large collection of items from past versions of Phil 102 and from earlier logic courses, but they are scattered about in word processing files. Once collated and converted they would provide the nucleus for the database, but even the first pass of the course would still require many new items, as well as mechanisms for generating the tests, generating the answer frame/codebooks for the graduate teaching assistants, and getting grades back to students.
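
To make the shape of such an item bank concrete, here is a rough relational sketch. It uses SQLite only to keep the example self-contained and runnable; the proposal names Microsoft Access, and every table and column name below is hypothetical.

    import sqlite3

    conn = sqlite3.connect("item_bank.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS item (
        item_id        INTEGER PRIMARY KEY,
        unit           INTEGER NOT NULL,     -- 1..6
        item_type      TEXT,                 -- e.g. 'definition', 'diagnose fallacy'
        header         TEXT,                 -- shared stem for common questions
        question       TEXT NOT NULL,
        grading_notes  TEXT,                 -- coding frame / grading guidelines
        full_credit    TEXT,                 -- model full-credit answer
        partial_credit TEXT                  -- acceptable partial-credit answers
    );
    CREATE TABLE IF NOT EXISTS test (
        test_id   INTEGER PRIMARY KEY,
        unit      INTEGER NOT NULL,
        semester  TEXT                       -- e.g. 'Spring 2000'
    );
    CREATE TABLE IF NOT EXISTS test_item (   -- which items appear on which test
        test_id  INTEGER REFERENCES test(test_id),
        item_id  INTEGER REFERENCES item(item_id),
        position INTEGER
    );
    """)
    conn.commit()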

I believe it could be done in one semester of rather intense work. Target date: spring 2000. I would like to teach Philosophy 102 in a self-paced format the next time I teach it: viz., next semester!

Resources Needed.

To get over the hump, the main costs are graduate research assistant time (before and during the semester) and some software. I estimate the work requires the equivalent of two graduate research assistants, split between one who knows database programming and one who knows logic. The database programming must start before the semester does, so I include the equivalent of three additional weeks of graduate assistant summer time, to be used before the spring semester begins:

Departmental Contributions.

During the grant itself very little is needed from the Philosophy Department beyond the resources already listed: a philosophy research assistant and teaching relief for Clark. By far the largest contribution the Department will make to this project is in long-term maintenance costs. After the grant is over the Department will provide all the resources needed to allow continued development of the course for an indefinite period thereafter. Those resources are basically people and time. Every year the Department provides five to eight highly motivated graduate students in philosophy to serve as Teaching Assistants in Philosophy 102. Not only will they do all the test administration and grading for the course, but more importantly I plan to use the course as a means of teaching novice teachers how to write good test items and grade them fairly. The graduate TAs will write sample tests for each unit in the data bank, which we will critique and edit. Good items will be absorbed into the data bank for the following year, and in that way it can continue to evolve indefinitely. See "Maintenance Model" for more details.

The Department will also continue to cover the incidental costs of providing a self-paced course for roughly 240 students in a semester. Given the large number of tests required, those costs are not insignificant: each student may fill up 40-50 pages worth of tests in the course of the semester, which comes to roughly 10,000-12,000 pages per offering, and the chore of printing, collating, and distributing them all is a big one. TAs will use the Departmental computers to record grades, make up new tests, and post materials on the internet.

Assessment and Evaluation Procedures.

In some ways the best but crudest assessment of the success of the project would be the sheer number of sections of Philosophy 102 that we could fill with happy logic students every semester. If my experience in Tulsa is any guide, student evaluations of the course and word of mouth will both be highly positive. I wager that Philosophy could shortly offer more sections of a self-paced Phil 102 than the eight I would continue doing per year. I would certainly invite others in the Department to participate in the project, contribute items, and have use of the test bank in return.

I should point out here that this project, once completed, would have a continuing impact on a fresh batch of at least 240 students per year, for year after year, stretching into the indefinite future. If we add more sections, as I expect we will, it could easily benefit twice that number per year. If we consider the fact that most of those students will be first-year students, the benefit of converting this general education course to a student-centered format becomes even larger.

The standard university "student evaluation of teaching" may be of limited usefulness in assessing this change in instructional format. One problem is that in the two most recent evaluations of Philosophy 102, my "mean for the first eleven items" was 8.0; but more importantly the standard deviations were quite large--from 1.5 to 2.5--with medians typically above the means. Given the large dispersion in these "pre-test" distributions, with responses clumped near the top, it is hard to imagine that a change to the self-paced format would yield a statistically significant change in the distribution as a whole; though of course I will check this hypothesis, and would be delighted to be proven wrong.
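
If I do check the hypothesis, one reasonable approach would be a nonparametric comparison of the two evaluation distributions, since responses clump near the top of the scale. The sketch below uses placeholder numbers, not actual evaluation data, and the choice of test is mine rather than anything specified in this proposal.

    from scipy.stats import mannwhitneyu

    before = [9, 10, 8, 7, 10, 9, 6, 8, 10, 9]   # hypothetical pre-change ratings
    after  = [10, 9, 9, 8, 10, 10, 7, 9, 10, 9]  # hypothetical post-change ratings

    stat, p = mannwhitneyu(before, after, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}")   # a large p would mean no detectable shift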

It is likewise difficult to construct an unobtrusive pre-test/post-test measurement of learning within the course. The various tests for a given unit are meant to be parallel in form, and it would be easy to assess whether means and medians in a given unit track upwards over time. In fact this simple comparison of test averages over time would continue to be the simplest and most intuitive way of assessing the success of the course: how many students do well on the harder units, such as analyzing inferences, critiquing statistical reasoning, or detecting fallacies? But interpretation of such data is problematic, since the tests and the students themselves are changing along with the change in instructional format. The item composition of tests within the course must change over time.

The most sophisticated and least obtrusive way of measuring changes in learning over time would be to go all the way down to the item-analysis level, and perhaps even look at a time series of performance on a given item over a number of years. A given test item will eventually show up in multiple tests over multiple years. While test composition changes, a given item itself is relatively stable, and with the database it could be tracked in this way over time. But currently I have no means of recording scores at the item level. Making all the tests machine-readable still imposes too much of a constraint on test design--it changes the tests too much. The typical logic test requires students to formulate definitions, state suppressed premises, diagnose fallacies, and critique arguments; and I would resist any move to funnel all this complicated linguistic behavior into a multiple-choice format. The alternative--requiring graduate teaching assistants to record student responses on each item--would put an enormous burden on them. If the Institute for Teaching & Learning is interested in this sort of information, I would certainly consider writing another grant down the road to record and analyze item data within the test bank.
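
Should item-level recording ever be added, the time series mentioned above would reduce to a simple aggregation query against the item bank. The response table and sample rows below are hypothetical; nothing in the current design records scores per item.

    import sqlite3

    # A self-contained toy: a hypothetical per-item score table plus two semesters of data.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE test (test_id INTEGER PRIMARY KEY, semester TEXT);
        CREATE TABLE response (test_id INTEGER, item_id INTEGER, score REAL);
        INSERT INTO test VALUES (1, 'Fall 2000'), (2, 'Fall 2001');
        INSERT INTO response VALUES (1, 42, 6.0), (1, 42, 8.0), (2, 42, 9.0);
    """)
    # Follow one item (here item_id 42) across semesters:
    for semester, mean_score, n in conn.execute("""
        SELECT t.semester, AVG(r.score), COUNT(*)
          FROM response r
          JOIN test t ON t.test_id = r.test_id
         WHERE r.item_id = ?
         GROUP BY t.semester
         ORDER BY t.semester""", (42,)):
        print(semester, round(mean_score, 1), n)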

Maintenance Model. Applicability to Other Program Areas.

As mentioned under "Departmental Contributions", once the development work is finished, the Philosophy Department will provide all the resources necessary for continued development of the course. No increase in faculty teaching load is required. The most significant long-term requirement is that we continue to add new items to the database of potential test items, so that every semester the test content remains unpredictable, fresh, and stimulating.

But this is precisely where the project could benefit another aspect of the Philosophy program, and indeed another aspect of teaching at the University of Connecticut. Not only do we teach undergraduates: we also teach graduate students how to teach. A central skill to acquire when learning how to teach is how to write good test items, and how to grade them fairly. Graduate student teaching assistantships in this course would provide a group tutorial on how to write and grade good test items. Each graduate teaching assistant would be required to write a sample test (and a coding frame for it) for each of the units over the course of the semester. Editing and critiquing these efforts would be an invaluable exercise in our discussions of how to grade tests on a given unit. It would also give novice teachers concrete guidance on how to write and grade a test, before they have to do it for real. Such a course would hence provide a low-stress but highly informative introduction to teaching for graduate students in their first or second year in the program. (It would also be highly motivating: the self-paced format brings out the best in undergraduates, and as an instructor it is quite fun to interact with self-paced students during office hours. Those office hours yield a positive and invigorating sample of undergraduate academic behavior.)

Items written by the graduate teaching assistants will be edited, and good ones will be absorbed into the database for the following year. Besides the need for new items, the other main challenge in continuing to teach a course self-paced year after year is the chore of generating a new batch of tests each time the course is taught. But that's where putting all the items in a database would be smart: the test simply becomes a database "form". In the long haul it really would be rather interesting to have data at the item-analysis level, so that test construction could become even more sophisticated.
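
As a sketch of what "the test simply becomes a database form" could amount to in practice, the function below draws one item of each required type for a unit from a table like the hypothetical one sketched earlier and lays them out in order. The specification format, function name, and item types are all assumptions for illustration.

    import sqlite3

    def generate_test(conn: sqlite3.Connection, unit: int, spec: list) -> str:
        """spec is an ordered list of item types for the unit, e.g.
        ['definition', 'suppressed premise', 'diagnose fallacy', 'critique']."""
        lines = [f"Philosophy 102 -- Unit {unit} test"]
        for position, item_type in enumerate(spec, start=1):
            row = conn.execute(
                """SELECT question FROM item
                    WHERE unit = ? AND item_type = ?
                    ORDER BY RANDOM() LIMIT 1""",
                (unit, item_type)).fetchone()
            if row is not None:
                lines.append(f"{position}. {row[0]}")
        return "\n".join(lines)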

Past Experience with Self-Paced Logic.

As mentioned above, I taught courses in both symbolic logic and critical thinking ("Reasoning") in a self-paced format at the University of Tulsa before coming to the University of Connecticut. The symbolic logic courses came first: I was introduced to the "self-paced" idea in 1985 by James Moor of Dartmouth College, whose textbook on symbolic logic (The Logic Book, by Merrie Bergmann, James Moor, & Jack Nelson) I admire greatly. That book is designed for a self-paced course (which they call the "Keller plan"). I was so taken with the idea that by 1992 I had written two pieces of instructional software to accompany that book; the two programs were adopted and published by McGraw-Hill in conjunction with the second edition of the text. (See the Bertie/Twootie home page below; both programs are now distributed from my website.) I subsequently converted the Tulsa course on "Reasoning" to a self-paced format, and this course, which is close to the University of Connecticut's "Philosophy and Logic", worked well too.

From this earlier experience I discovered the two main challenges in continuing to teach a course self-paced: the first is that one must continually add new items to the bank of test items, and the second is that without a carefully designed infrastructure in place to simplify the process, the task of generating a new batch of tests each semester quickly becomes onerous. This proposal solves both of these problems. It would be a delight to set it up and do it right, the second time around!


The Self-Paced Logic Project homepage.

The Bertie/Twootie homepage.

Austen Clark's homepage.

The Philosophy Department homepage.