The attitude I present in the classroom is crucial to having a good first few weeks. In particular, this means projecting an overwhelmingly patient, cheery, positive, and helpful face, with as much energy as I can muster at any given moment.

This is especially important for my largish Math in Decision Making courses (55 students each). But it is also helping set a better tone for my linear algebra class.

Last term things were not so easy for me personally. It bled into my work. I did not consistently get across a positive attitude about students, their learning, and my involvement. This term, things are going better. I just have to keep it up for 13 more weeks.

Usually, I combat this by picking topics to study that I find interesting, so my enthusiasm can shine through. I also try to sprinkle in a few high-energy activity days. That energy comes from me, though, and it doesn't always infect the students. This term, I will try to get a better hook into the students by supplementing our intro days with a request that they generate their own questions about the subject each activity introduces. I don't have a better plan yet. It can be hard to get students to generate meaningful questions when confronted with new material, so I will spend some time this week thinking about how to support them through this activity.

Tomorrow, Spring 2016 begins in earnest. I have a dangerously large to-do list forming for the week, but I refuse to worry about it until 9am.

But the lack of posting here is a symptom of my distance from deep thinking about my work in the last year. I have plenty of excuses, even some good ones. But I did find it useful to write here at one point. I found it even more useful when people decided to read and comment. I am still amazed that some of you have done this. Thank you.

Anyway. I am going to try regular reflective writing again.

Here I go.

Today I wrote a letter of recommendation for a student. It was a simple thing, but it got me thinking about what I value in students and why. And maybe I am a bit worried about how I communicate those values to students, both through my daily interactions with them and through my assessment practices.

I always spend time on the plane to the Joint Meetings thinking *Deep Thoughts*, so this will probably occupy my mind on the way to or from Seattle next week.

Tomorrow, I plan to work on a writing project.

Today we have a guest post by John Ross of Southwestern University. I met John at the Legacy of R.L. Moore meeting this summer, so I already know he is interested in effective teaching methods. This past weekend he mentioned in passing on Twitter that he is using a new assessment setup. I wanted to hear the details, so I invited him to write about it. I am very pleased that he accepted my challenge.

This semester I am running my calculus class using a specifications-based grading system. I decided to try this after discovering Robert Talbert’s blog and reading the many informative things he had to say about specs grading. If you’re unfamiliar with this style of grading, I’d recommend starting there (http://rtalbert.org/blog/2015/Specs-grading-report-part-1/).

I developed my syllabus late last summer after talking with Robert at the IBL conference and with several people at MathFest (Spencer Bagley and Tom Clark, among others). My version is similar to many other versions that I’ve read about, but it does differ in a few ways. I use two major types of assessment that run parallel to each other: (weekly) quizzes and (approximately monthly) tests. Quizzes and tests cover most of the same material, but use different “grain sizes” and are assessed differently. I don’t think this quiz/test setup is particularly innovative, but it does allow me to take some liberties with my grading method for my quizzes. Quizzes are assessed using a mastery-based grading system with four possible grades. After a quiz is returned, students have the opportunity to revise their grade by doing additional work.

As mentioned above, my students take quizzes and tests with different “grain sizes” and different grading systems for each. Quizzes are used to assess discrete “skills.” There are 33 skills marked out for the whole semester, and each week between 1 and 3 skills appear on the quiz. Tests, on the other hand, are used to assess more broadly defined “subjects.” There are 13 subjects in total. At the end of the semester, a student’s grade will be determined by the number of skills and the number of subjects that student has been able to master (in addition to some other factors, such as homework).

If a student fails to master a skill on a quiz, or fails to master a subject on a test, there are other chances to show mastery. I’ll talk about the quizzes below, but mastery on the tests is done in a traditional “mastery” format (so students who fail to pass a subject on the first test will have a second chance on the next test and, if necessary, chances on future tests as well). The final exam serves as a final test to show mastery: if students have done well throughout the entire semester, they may not even need to take this exam.

By design, there is a large amount of overlap between skills and subjects. This allows me to assess students’ understanding of the material twice, and makes me feel more comfortable with my quiz grading system. In particular, the tests give me some comfort that students can’t succeed in this class by cramming for a skill quiz, passing it, and then forgetting the material.

The weekly quizzes, and the way I assess them, are the heart of my class. I grade each skill separately, give as much feedback as possible, and give students one of four possible grades: Mastered, Progressing: Email, Progressing: Discussion, or Insufficient. Grades of M (or I) are used when students do nearly perfectly (or very poorly). The “progressing” grades allow for some wiggle room in the middle, with minor mistakes getting a grade of “Progressing: Email” and more moderate mistakes getting a grade of “Progressing: Discussion”.

After getting their graded quizzes back, students have 2 weeks to raise their grade for each skill to “Mastered”. How they do this depends on what grade they received. A student who receives a P:E must send me an email (usually a paragraph or less) detailing what they got wrong, and what the correct answer is, to receive mastery. A grade of P:D means that the student must come and talk to me during office hours (or set up an appointment outside of office hours) for a short discussion about the skill. A grade of Insufficient means they must come to me for a minor discussion, and take an alternate quiz (which will be graded on the same scale of M—P:E—P:D—I).

What happens, then, is a student can have several chances (and go through several iterations) attempting to receive mastery for a skill. As an example, a student could do very poorly and receive an I on the first quiz; receive a P:E on the alternate quiz; send me an initial round of corrections via email, which I push back against because I disagree with their explanation; and, finally, submit good corrections and receive mastery.
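The revision workflow above behaves like a small state machine. Here is a minimal Python sketch of it; the grade names and transitions come from the post, but the function name, parameters, and the idea of encoding it this way are my own illustration, not part of John's actual setup.

```python
from enum import Enum

class Grade(Enum):
    MASTERED = "M"
    PROGRESSING_EMAIL = "P:E"
    PROGRESSING_DISCUSSION = "P:D"
    INSUFFICIENT = "I"

def next_grade(current, followup_ok, requiz_result=None):
    """Return the grade after one round of follow-up work.

    current       -- the student's present grade for the skill
    followup_ok   -- did the instructor accept the email/discussion?
    requiz_result -- for Insufficient, the grade earned on the
                     alternate quiz (itself on the full M/P:E/P:D/I scale)
    """
    if current is Grade.MASTERED:
        return Grade.MASTERED                 # nothing further required
    if current is Grade.INSUFFICIENT:
        return requiz_result                  # alternate quiz, regraded on the full scale
    if current is Grade.PROGRESSING_DISCUSSION:
        # an unprepared discussion gets bumped to the email track
        return Grade.MASTERED if followup_ok else Grade.PROGRESSING_EMAIL
    # Progressing: Email -- resubmit corrections until they are accepted
    return Grade.MASTERED if followup_ok else Grade.PROGRESSING_EMAIL

# The example iteration from the post: I -> P:E (alternate quiz)
# -> P:E (corrections rejected) -> M (good corrections accepted)
g = Grade.INSUFFICIENT
g = next_grade(g, followup_ok=False, requiz_result=Grade.PROGRESSING_EMAIL)
g = next_grade(g, followup_ok=False)
g = next_grade(g, followup_ok=True)
assert g is Grade.MASTERED
```

Writing it out this way makes the key design property visible: every path that does not end in Mastered loops back to another round of student work.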

I like this system for a number of reasons. First of all, it gives students multiple chances (and multiple ways) to display mastery of a skill. There are some students, for example, who have told me that they suffer from test anxiety and constantly underperform on the quizzes, yet can clearly communicate the material to me via email or by face-to-face discussion.

Second, it gives me a greater degree of control and flexibility than I would have in a simpler Mastered/Insufficient grading system. This means I don’t have to agonize over how to grade “mid-range” quizzes.

Third, it can let me “cheat” by pushing students to relearn various things, even if they are not directly related to the skill at hand. For a concrete example: if a student does well on a Chain Rule quiz but forgets how to compute an important derivative, I can reward them for their work on the chain rule while forcing them to revisit (and hopefully learn) the derivative in question by giving them a grade of P:E. If I were using a binary grading system, I would probably have to give them a grade of M (since they had mastered the chain rule), and there’s no guarantee that they would read my comments or learn that derivative.

Finally, I believe I spend less time grading under this system than if I were using a binary “Mastered or Insufficient” system. This is probably debatable, but I believe that I spend less time reading emails and having short discussions than I would by regrading and regrading multiple papers.

There are definitely some downsides to this system. The biggest downside is that I am concerned that some students are using the system as a crutch, and aren’t learning the skills as deeply or completely as they should. Some students seem to be banking on getting “progressing” grades, and then copying out of the book/notes to fix that grade to a “mastery” level. This is concerning, but the parallel test system helps me feel more confident that these students must (eventually) learn the material.

Other students tend to view the discussions as a time when I will explain to them what they did wrong, rather than the intended goal of having students lead the discussion and explain to me what they had done wrong. This is definitely my fault for failing to adequately communicate my intentions for these discussions. I have found the following work-around: if a student comes to a discussion woefully unprepared, I will discuss the problem with them, but won’t give them mastery for the skill. Instead, I’ll bump them up to P:E, and make them email me a summary of the discussion we’ve had, plus corrections to their quiz.

Because I have a large number of students (60 students in Calculus, plus an additional 30 in another class), I end up hosting a lot of office hours (5 a week, plus appointments). These office hours aren’t always full, but many students do show up. I maintain this isn’t a terrible thing, though, because a lot of that time spent meeting with students would be spent grading otherwise (and these discussions can offer a lot of chances for formative assessment and feedback for the student).

Finally: 60 calculus students being quizzed on 33 skills, and each skill being graded on a 4-point scale with probable revisions, can lead to a rather complicated looking grade book. I try to keep a record not only of each student’s current grade, but their whole grade history for each skill (so I can reference, for example, that a student has already taken an alternate quiz and gotten a grade of P:E on it). Halfway through the semester, I have had several students who were unclear on what their grade was in the class. It’s very nice to have this amount of information to refer back to, although the grade book was rather difficult to set up initially.
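A grade book like the one described, tracking the full grade history per student per skill, can be sketched as a small nested structure. This is my own hypothetical illustration (class and method names invented), not John's actual spreadsheet:

```python
from collections import defaultdict

class GradeBook:
    """Keeps every grade a student has earned for each skill, in order."""

    def __init__(self):
        # history[student][skill] is the ordered list of grades earned
        self.history = defaultdict(lambda: defaultdict(list))

    def record(self, student, skill, grade):
        self.history[student][skill].append(grade)

    def current(self, student, skill):
        """The most recent grade for a skill, or None if never quizzed."""
        grades = self.history[student][skill]
        return grades[-1] if grades else None

    def mastered_count(self, student):
        """How many skills the student has currently mastered."""
        return sum(1 for grades in self.history[student].values()
                   if grades and grades[-1] == "M")

book = GradeBook()
book.record("alice", "chain rule", "I")
book.record("alice", "chain rule", "P:E")   # alternate quiz
book.record("alice", "chain rule", "M")     # corrections accepted
book.record("alice", "product rule", "P:D")
assert book.current("alice", "chain rule") == "M"
assert book.mastered_count("alice") == 1
```

Keeping the whole history, rather than just the latest grade, is what lets the instructor check things like “this student has already taken the alternate quiz.”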

As the semester wears on, I’m sure that more positive and negative aspects of this system will come to light. For now, though, I am relatively happy with how this system is working out, and an informal anonymous survey of my students suggests that they like it too. I am happy to provide more information to anyone who has any questions.


But this term, for at least one course, I am abandoning both Standards-Based Grading and Specifications Grading. My best course is *Euclidean Geometry*. Standards-Based Grading didn’t work because the learning goals are big, process-oriented things. That made Specs Grading seem like a decent fit, especially because a typical assignment (“find a proof of this theorem, present it to the class, and write a paper about it”) is a complicated, professional activity. In the end, the quality might vary, but you either did it well enough, or you didn’t. But explicit statements like “do seven of these to earn an A” broke my spring class. Students rushed to their desired grade and then stopped. So I won’t do either of them in the canonical way for this course again.

I have learned a lot from trying SBG and Specs. I think that a knowledgeable person can still see evidence of each in my work. I now have a clear statement of what the (previously nebulous) learning goals are (SBG). And I have much better language to describe what acceptable work looks like (Specs).

But I won’t use explicit promises about grade conversions anymore. Instead, I have described what typical achievement looks like for each grade in looser terms.

If you want to peek, go here.

The book is enjoyable, and Cheng does an admirable job of trying to explain what mathematics is and how one does it. Of course, she then moves on to focus on what the essential nature of category theory is (her answer: “the mathematics of mathematics”). I found a few nuggets of new analogies and viewpoints to share with my students who are struggling to write their first proofs and find their first conjectures.

The part I most enjoyed was in the last few pages. Cheng describes three different viewpoints on truth:

- Knowledge
- Belief
- Understanding

I really like that these are clearly differentiated. Most importantly, she describes the process of mathematical communication really well, and with a useful diagram. (I suppose that a useful diagram is to be expected from a category theorist.) Here is the diagram.

I think what is truly great here is that the picture is a narrow ravine (she points out that it is difficult and dangerous to jump across). And her accompanying description of mathematical communication is as a process by which you (1) somehow pack your understanding into a proof, and then (2) share that rigorous proof. This plays up the role of an axiomatic proof as the kind of thing which can be done without (or maybe just with a lot less) ambiguity, while recognizing that it might obscure the understanding a little bit. But it also highlights that there is a job for the reader, which is really an inverse to the job of the writer. The reader must (2′) confirm receipt of and agreement with the logical argument, but then (1′) unpack it to construct their own understanding of the ideas.

This is a model I can share with my new proof-writers to help them do their jobs better.
