“Math class is tough.”

Barbie (1992)

## How can I use SBG to challenge the top learners?

So after weeping quietly into my cereal while looking at the Geometry Regents results at the end of this year (just kidding, I don’t eat cereal, it was my toast), I noticed that I didn’t have as many “mastery” scores as I had in previous years. For those of you blissfully unaware of the Regents tests, mastery is an 85 or higher on the year-end exam. One of our school board’s goals is 90% passing and 40% mastery on all Regents exams (those of you who know the Algebra II exam may resume reading when you manage to stop laughing). In reality, I want the students who struggle to pass or better, and I want the rest of the kids at mastery.

Before I continue, I want to list the faults with trying to answer this question:

- Sample SBG size of one year.
- Regents exam content varies each and every year, and the curve is set based on how “hard” they want the test to be.
- Sample size of only 40-something students in my classes taking the Geometry Regents. Maybe earlier years were “stacked” in comparison to this year, a feeling I’ve had since the start of this year.

## That Said.

Compared to the last couple of years, I had far fewer students reach mastery this year. Why?

How do I challenge the students who relatively breeze through the material?

Maybe 10% of my typically high-achieving students did OK on the first assessment and then knocked the second assessment out of the park. These students didn’t have to stay after school very often to reassess, and they probably spent less time working on math under SBG than they would have in a traditional classroom.

Should I make the quizzes more difficult? Should I cap grades at 85% and design a project they could complete to go more in depth on a topic and earn a higher grade? (But if it’s useful for them, isn’t it useful for everyone?)

To quote Leeloo, *Please Help*.

(the fifth element is … love, ha. haha. hahahahahaha.)

Hi Dan – I can think of a number of ways. First, to get an A in the course, a student would have to successfully solve a set of challenging problems. These can be sprinkled throughout each standard. Second, in Geometry, I often give problems without the diagram – reading the problem and figuring out the diagram is another way of challenging the students.

I have found two good sources of challenging problems: (1) the Exeter problems, freely available at http://www.exeter.edu/academics/72_6539.aspx, and (2) an old textbook, “A Course in Geometry” by Weeks and Adkins, which we use for our honors course. “Geometry” by Moise and Downs is also a pretty challenging text. Best of luck.

Dean Schonfeld

confidentlylimited.wordpress.com

Good call. We don’t have letter grades, but I bet I could figure something out. Those Exeter problems would be great for that use. Would they work on the problems in class, or would it be out of class? I’d imagine that to keep it SBG, you’d want to keep the problems individual. How could I make sure that they weren’t copying each other’s work?

Interestingly, the Exeter problems are organized in such a way that similar problems are repeated at later dates. I can see how the first would be used in class and a later one on a quiz or homework.

As far as individual work/copying goes, I plan to give the same problems to the whole class. Problems will be labeled by their Learning Objective number (my name for a standard). Unless problems have very simple output (e.g., multiple choice), copying has not been a problem. When I do give multiple choice, I create two versions with the same questions scrambled.

I present to you: A picture.

http://fnoschese.files.wordpress.com/2010/11/sbg-scale.png

I dunno, I think this has potential.

It’s from this: https://fnoschese.wordpress.com/2011/02/04/reassessment-experiment/

although he’s talking about other problems.

True. However, since it’s a course that essentially has a “list of standards” already set, aren’t they all core goals? Maybe the standards that show up more on the state tests could be the core goals?

In my system, demonstrating mastery on all skills = 90/100 for the semester. I base the final 10% (if applicable) on their work on my comprehensive summative exam, especially on what they do with the goal-less problems at the end. I want to see depth, sustained mastery, and using multiple ideas to tackle one problem. It seemed to work well for me, although I don’t have anything quantitative to show. I did have the most consistently good Force Concept Inventory scores I’ve ever had, including one section being my first class ever where every single person passed at the end (not just a passing class average).
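The scheme above boils down to simple arithmetic: mastery of every skill is worth 90 points, and the comprehensive summative exam fills in the last 10. Here is a minimal sketch of that rule; the function name, the linear scaling for partial mastery, and the example numbers are my assumptions for illustration, not the commenter’s actual rubric.

```python
def semester_grade(skills_mastered, total_skills, exam_fraction):
    """Sketch of the 90/10 split described above.

    skills_mastered / total_skills scales the 90-point mastery base
    (linear scaling for partial mastery is an assumption here);
    exam_fraction (0.0 to 1.0) scales the final 10 summative points.
    """
    mastery_points = 90 * (skills_mastered / total_skills)
    exam_points = 10 * exam_fraction
    return round(mastery_points + exam_points, 1)

# Hypothetical student: masters all 12 skills, earns 80% on the summative exam.
print(semester_grade(12, 12, 0.80))  # 98.0
```

The appeal of this split is that the bulk of the grade (90 points) is controlled entirely by demonstrated mastery of the standards, so the summative exam can reward depth without being able to sink a student who has already shown mastery.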

I think your idea of a project is good. I think to understand anything well, including math, you need to work with those concepts in a complex and authentic way. Imagine that as you start a given unit, a complex, authentic problem is given to all students. Then take time, once per week or every other week, to work on the problem. This gives all students access to the problem, which, like you said, is probably beneficial to all. Students who grasp concepts quicker, for whatever reason, may be allowed to work on the project more. They will likely be able to take it to more depth.

You’re the second person to bring this up in as many weeks. I think what we’re seeing, since we and the students are new to SBG, is that kids are just gaming the system. I think this was to be expected, so next year I plan to keep control of the assessments entirely. I don’t love the idea of making assessments more difficult; I think you need to nail the standard on the head, and perhaps in general students aren’t receiving intense enough assessments.

In my room, this often looks like a bunch of failing marks on the first assessment, but, because there are no numbers, it’s just a lot of feedback. A few weeks later a very similar and equally rigorous problem shows up, and more students get it; the ones who don’t get more feedback.

As a community, we are all seeing this problem: students earn “mastery” scores and then drop off the map, performing poorly on final summative exams. We need to rethink how we report mastery.