SY Assessment
Scooped by Kent Brewer!

Joystick Learning - Surviving Summatives: 3 Ways to Make It Unscathed

Three ways to ensure a successful summative process - for administrators and teachers alike. Maintain your sanity through your district's summative process!
Scooped by Kent Brewer!

Are Educators Honest With Parents?

Even though we know that the parent plays such an important role in the education of their children, it is very difficult to look a parent in the eye and say, "You are the reason your child is having such difficulty in school."...
Scooped by Kent Brewer!

Assessing Teacher Assessment - Chartered Institute of Educational Assessors

Dr Joanna Goodman, an education consultant and Fellow of the CIEA, considers the importance of professional development in assessment for all teachers, as schools enter a new dawn of developing their own processes.
Ivon Prefontaine's curator insight, April 30, 2014 2:34 PM

To be a good teacher a person has to be a good listener and observer. We have to be attentive and mindful to what is happening in our students' learning. Assessment has always been a key to teaching and learning in that way.

Scooped by Kent Brewer!

Building A Thinking Classroom Without Technology

Kent Brewer's insight:

Putting what we do as educators in perspective!

Rescooped by Kent Brewer from college and career ready!

It’s Time to Stop Averaging Grades


 Rick Wormeli


You’ve always averaged grades. Your teachers averaged grades when you were in school and it worked fine. It works fine for your students.

Does it? Just as we teach our students, we don’t want to fall for Argumentum ad populum: something is true or good just because a lot of people think it’s true or good. Let’s take a look at the case against averaging grades.


Hiding Behind the Math

Just because something is mathematically easy to calculate doesn’t mean it’s pedagogically sound. The 100-point scale makes averaging attractive to teachers, and averaging implies credible, mathematical objectivity. However, statistics can be manipulated and manipulative in a variety of ways.

One percentage point can be the arbitrary cut-off between admission to and rejection from graduate school. One student gets a 90% and another gets an 89%: the first is an A and the second a B, yet we can't discern mastery of content to this level of specificity. The two students are essentially even in their mastery of the content, but we declare a difference based only on the single percentage point. The student with the 90% gets scholarships and advanced class placements while the student with the 89% is left to a lesser path. Something's wrong with this picture.

Early in my career, one of my students had a 93.4% in my class. Ninety-four to 100 was the A range set for that school, so he was 0.6% from achieving an A. The student asked if I would be willing to round the score up to the 94% so he could have straight As in all his classes. I reminded him that it was 93.4, not 93.5, so if I rounded anything, I would round down, not up. I told him that if it was 93.5, I could justify rounding up, but not with a 93.4.

I was hiding behind one-tenth of a percentage point. I should have interviewed the student intensely about what he had learned that grading period and made an executive decision about his grade based on the evidence of learning he presented in that moment. The math felt so safe, however, and I was weak. It wasn’t one of my prouder moments.

We can’t resort to averaging just because it feels credible by virtue of its mathematics. There’s too much at stake.


Falsifying Grade Reports

Consider the teacher who gives Martin two chances to do well on the final exam, then averages the two grades. The first attempt results in an F grade, but after re-learning and a lot of hard work, the second attempt results in an A. We trust the exam to be a highly valid indicator of student proficiency in the subject, and Martin has clearly demonstrated excellent mastery in the subject. When the two grades are averaged, however, the teacher records a C in the grade book—falsely reporting his performance against the standards.
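The distortion in Martin's case is easy to verify with a few lines of Python; this is a minimal sketch, and the specific percentages (55 for the F, 95 for the A) and the 90/80/70/60 grading scale are assumptions for illustration, not figures from the article.

```python
# Sketch of the Martin example: two attempts at the same final exam,
# averaged instead of using the most recent evidence of learning.
# The 55/95 percentages and the 90/80/70/60 scale are assumed.

def letter(score):
    """Map a percentage to a letter grade on a common 90/80/70/60 scale."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

first_attempt = 55    # F on the first try
second_attempt = 95   # A after re-learning and hard work
averaged = (first_attempt + second_attempt) / 2  # 75.0

print(letter(first_attempt), letter(second_attempt), letter(averaged))
# F A C
```

The averaged C reports neither what Martin knew by the end (an A) nor any performance he actually demonstrated; any F/A pair on this scale behaves the same way.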

The distortion is most striking with grades at the endpoints of the scale, such as A and F, but averaging grades that sit closer together on the scale (B with D, B with F, A with C, and so on) produces reports that are just as inaccurate, and just as damaging to grade integrity.

Consider a sample with more data: Cheryl gets a 97, 94, 26, 35, and 83 on her tests, which correspond to an A, A, F, F, and a B on the school grading scale. When the numbers are averaged, however, everything is given equal weight, and the score is 67, which is a D. This is an incorrect report of her performance against individual standards.

Thankfully, many schools are moving toward disaggregation in which students receive separate grades for individual standards. This will cut down dramatically on the distortions caused by aggregate grades that combine everything into one small symbol and will help eliminate teacher concerns about students who “game” the system when their teachers re-declare zeroes as 50s on the 100-point scale. These students try to do just enough— skipping some assessments, scoring well on others—to pass mathematically. In classrooms where teachers do not average grades, students can’t do this.
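Cheryl's arithmetic, and the disaggregated alternative described above, can be sketched the same way; the 90/80/70/60 scale and the generic standard labels are assumptions for illustration.

```python
# Sketch of the Cheryl example: one averaged grade versus disaggregated,
# per-standard reporting. The 90/80/70/60 scale and the placeholder
# standard names are assumed.

def letter(score):
    """Map a percentage to a letter grade on a common 90/80/70/60 scale."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

scores = [97, 94, 26, 35, 83]        # A, A, F, F, B
average = sum(scores) / len(scores)  # 335 / 5 = 67.0, a D

# Aggregated: one symbol that matches none of the individual results.
print(f"averaged: {average} ({letter(average)})")

# Disaggregated: a separate grade per standard, so nothing hides the
# A-level and F-level work behind a D.
standards = ["standard 1", "standard 2", "standard 3", "standard 4", "standard 5"]
for name, score in zip(standards, scores):
    print(f"{name}: {score} ({letter(score)})")
```

Equal weighting is the whole problem here: the mean lands at 67 even though Cheryl never performed at a D level on any individual standard.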

No more mind games; students have to learn the material.


Countering the Charge

“Average,” “above average,” and “below average” are norm references, but in today’s successful classrooms, we claim to be standards- (outcomes-) based. This means that assessments and grading are evidentiary, criterion-referenced. A teacher declares Toby is above average, but we’re not interested in that because it provides testimony of Toby’s proficiencies only in relation to others’ performance, which may be high or low, depending on the group. Instead, we want to know if Toby can write an expository essay, stretch correctly before running a long distance, classify cephalopods, and interpret graphs accurately. We don’t need to know how well he’s doing in relation to classmates nearly so much as how he’s doing in relation to his own progress and to societal standards declared for this grade level and subject.

We can’t make specific instructional decisions, provide descriptive feedback, or document progress without being criterion-referenced. Declarations of average-ness muddle our thinking and create a false sense of reporting against standards. We need grade reports to be accurate.


Distorting Averaging’s Intention

One of the reasons we developed averaging in statistics was to limit the influence of any one sample error on experimental design. Let’s see how that works in the classroom.

Consider a student taking a test on a particular topic and in a particular format. The student ate breakfast, or he did not. He slept well, or he did not. His parents are divorcing, or they are not. He has a girlfriend, or he does not. He studied for this test, or he did not. He is competing in a high-stakes drama/music/sports competition later this afternoon, or he is not. Whatever the combination, all these factors conspire to create this student’s specific performance on this test on this day at this time of day.

Three weeks later, we give students another test on new material in our unit. Have students changed in three weeks? Yes, hormonally, if nothing else. Add that the second test is on a different topic and perhaps in a different format. On the first test date, the student ate well but didn’t study. He slept well, but his parents were arguing each night. The drama/music/sports performance came and went, and he did well in it. He didn’t have a girlfriend. For the second test, however, he has a girlfriend, and he studied. He didn’t sleep well, however, nor did he eat breakfast, and his parents have stopped arguing, which has calmed things down at home.

The second test situation is dramatically altered. The integrity of maintaining consistent experimental design is violated. We can no longer justify averaging the score of the first test with the score of the second test just to limit the influence of any one sample error.


The Electronic Gradebook

The only reason our electronic gradebooks average grades is that someone declared it a policy—not because it was the educationally wise thing to do—so the district uses the technology that supports that decision. Why don’t we choose our grading philosophy first, then find the technology to support it, rather than sacrificing good grading practices because we can’t figure out a way to make the technology work?

How do we do what’s right when we are asked by administrators or a school board to do something that we know is educationally wrong? This is a tough situation, but I suggest we do the ethical thing in the microcosm of our own classrooms, then translate that into the language of the school or district so we can keep our jobs.

We can experiment in our own classes by reporting a subset of students’ grades with and without averaging them just to see how they align with standardized testing. Sometimes running the numbers/grades ourselves helps us see with greater clarity than just hearing about ideas second-hand.

We can read articles on grading and averaging, participate in online conversations on the topic, and start conversations with faculty members. We can also volunteer to be on the committee to revise the gradebook format.

We’re working with real individuals, not statistics. Our students have deeply felt hopes and worries and wonderfully bright futures. They deserve thoughtful teachers who transcend conventional practices and recognize the ethical breach in knowingly falsifying grades. Let’s live up to that charge and liberate the next generation from the oppression of averaging.


Previously published in Middle Ground magazine, October 2012

Via Lynnette Van Dyke
Rescooped by Kent Brewer from ePortfolios-worldwide!

E-Portfolios Across the Educational Landscape: From K-12 to Doctoral Studies | The Sloan Consortium


Nicole Buzzetto-More writes:

"Electronic portfolios are a paradigm in constructivist e-learning. They are capable of involving students in deep learning while serving as a meaningful way for both students and faculty to engage in outcomes-based assessment. E-portfolios have been shown to be a valid way to document student progress, encourage greater student involvement in the learning process, showcase work samples, and provide learning outcomes assessment and curriculum evaluation."


Also see:

Jennifer Sparrow writes:

"This presentation addresses the challenges of ePortfolio implementation within a fully online environment, including: faculty development and buy-in from a largely adjunct faculty, fully online technical support for faculty and student users, and integration of ePortfolio assignments with an existing LMS platform....  In Fall 2010 a new phase of implementation began with a new SPS leadership team and affiliation with the Connect to Learning Project, a 22 campus program to explore and strengthen best practices in ePortfolio pedagogy, coordinated by LaGuardia Community College (CUNY), and the Association for Authentic, Experiential, and Evidence-based Learning (AAEEBL), a professional association focused on ePortfolio practice."

Via Ray Tolley
Scooped by Kent Brewer!

Know Your Terms: Holistic, Analytic, and Single-Point Rubrics


Whether you're new to rubrics, or you've used them for years without knowing their formal names, it may be time for a primer on rubric terminology. [...]

Scooped by Kent Brewer!

Assessing education: have we learnt our lesson?

David Archer: Unesco taskforce report claiming consensus on global education skills is misleading – we may have 250m children in schools, but not all of them are learning
Ivon Prefontaine's curator insight, May 1, 2014 6:14 PM

Perhaps, if we stopped focusing on preparing children for a distant workplace and focused on them enjoying their learning today, it would make a difference. All children are learning, but they are not necessarily learning what bureaucrats, technocrats, and politicians think they should learn and how they should learn.

Scooped by Kent Brewer!

Teaching Students to Embrace Mistakes

For the last ten years, we've worked one-on-one with students from elementary school through graduate school. No matter their age, no matter the material, when you ask what they're struggling with [...]
Kent Brewer's insight:

Mastery could be considered "Growth From Mistakes"!

Rescooped by Kent Brewer from Professional Growth and Innovation!

What’s Working? Lessons from pioneer 21st century school districts - The Partnership for 21st Century Skills


  Driving Question: What’s Working?...

"It is not easy to make a 21st Century school district – a district where all students achieve mastery of 21st Century Skills and know how they are doing on acquiring these skills. How might the kids know? Learning outcomes would include the 4cs (critical thinking, communication, collaboration, and creativity) and other 21st Century Skills, curriculum would embed the skills in all subjects (collective learning outcomes), classroom-based performance assessments would assess the skills, and students would receive just-in-time feedback from online “living” report cards, updated whenever there is any new information."

Rescooped by Kent Brewer from Personal [e-]Learning Environments!

How Can Teachers Assess Students’ Understanding Infographic | e-Learning Infographics

The How Can Teachers Assess Students' Understanding Infographic refers to Gagne’s 8th event of instruction and suggests 27 ways that will help teachers [...]

Via ThePinkSalmon
Ivon Prefontaine's curator insight, April 11, 2014 2:58 PM

It would be interesting to test these in online settings. How different would that be from traditional settings?

Aiko Maargret's curator insight, April 11, 2014 5:58 PM

Wow... is this how teachers are going to assess students' understanding?