Assessment and Literacy
Rescooped by Heather McKissick from Eclectic Technology

21st Century Critical Literacy: Is Traditional Reading & Writing Enough?

Is traditional reading and writing enough to be considered literate in the 21st century?

Check out this slideshow from Silvia Rosenthal Tolisano, which asks us to "rethink our notion of critical literacy." She also suggests that we "develop authentic learning and assessment opportunities [as we] upgrade and amplify our curriculum."


Via Beth Dichter
Rescooped by Heather McKissick from Information and digital literacy in education via the digital path

Diigo and Feedly and Flipboard- Oh My! Ideas for Teaching ... | Jennifer L. Scheffer

Ideas for Teaching Information Literacy. “Awesome”. That is what Evan, one of my Digital Literacy students, said when I showed him and his classmates Flipboard. Laying the Foundation for Digital Citizenship: For the past two ...

Via Elizabeth E Charles
Rescooped by Heather McKissick from college and career ready

Rejecting Instructional Level Theory

A third bit of evidence in the complex text issue has to do with the strength of evidence on the other side of the ledger. In my two previous posts, I have indicated why the Common Core is embracing the idea of teaching reading with much more complex texts. But what about the evidence that counters this approach?

Many years ago, when I was a primary grade teacher, I was struggling to teach reading. I knew I was supposed to have groups for different levels of kids, but in those days information about how to make those grouping decisions was not imparted to mere undergraduates. I knew I was supposed to figure out which books would provide the optimal learning experience, but I had no technology to do this.

So, I enrolled in a master’s degree program and started studying to be a reading specialist. During that training I learned how to administer informal reading inventories (IRIs) and cloze tests, and what the criteria were for independent, instructional, and frustration levels. Consequently, I tested all my students and matched books to their IRI levels using the publishers’ readability levels. I had no doubt that it improved my teaching and my students’ learning.

I maintained my interest in this issue when I went off for my doctorate. I worked with Jack Pikulski. Jack had written about informal reading inventories (he’d studied with Johnson and Kress), and as a clinical psychologist he was interested in the validity of these measures. He even sent a bunch of grad students to an elementary school to test a bunch of kids, but nothing ever came of that study. Nevertheless, I learned a lot from Jack about that issue.

He had (has) a great clinical sense, and he was skeptical of my faith in the value of those instructional level results. He recognized that informal reading inventories were far from perfect instruments and that at best they had general accuracy. They might be able to specify a wide range of materials for a student (say, from grade 2 to 4), but they couldn’t do better than that. (Further complicating things were the readability estimates, which had about the same level of accuracy.)

For Jack, the combination of two such rough guesstimates was very iffy stuff. I liked the certainty of it, though, and clung to it for a while (until my own clinical sense grew more sophisticated).

Early in my scholarly career, I tracked down the source of the idea of independent, instructional, and frustration levels. It came from Emmett Betts’ textbook. He attributed the scheme to a study conducted by one of his doctoral students. I tracked down that dissertation, and to my dismay it was evident that those designations had simply been made up without any empirical evidence, something I wrote about 30 years ago!

Since then, readability measures have improved quite a bit, but our technologies for setting reading levels have not. Studies by William Powell in the 1960s, 70s, and 80s showed that the data we were using did not identify the optimum levels for student learning. He suggested more liberal placement criteria, particularly for younger students: instead of accepting 95% word reading accuracy as Betts had suggested, Powell identified 85% as the better predictor of learning, which would mean putting kids in relatively more difficult books.
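
To make the contrast concrete, here is a minimal illustrative sketch (mine, not from Shanahan's post) of how the two word-accuracy cutoffs cited above classify the same oral reading sample; the function names, the sample numbers, and the simple pass/fail check are assumptions for illustration only.

```python
# Illustrative sketch only -- not from Shanahan's post.
# The 0.95 (Betts) and 0.85 (Powell) instructional cutoffs are the figures
# cited above; the sample numbers and function names are made up.

def word_accuracy(words_correct: int, words_total: int) -> float:
    """Proportion of words read correctly in an oral reading sample."""
    return words_correct / words_total

def meets_instructional_criterion(accuracy: float, cutoff: float) -> bool:
    """True if the text would be considered placeable under the given cutoff."""
    return accuracy >= cutoff

if __name__ == "__main__":
    acc = word_accuracy(words_correct=178, words_total=200)  # 89% accuracy
    for label, cutoff in [("Betts (95%)", 0.95), ("Powell (85%)", 0.85)]:
        verdict = "placeable" if meets_instructional_criterion(acc, cutoff) else "too hard"
        print(f"{label}: {verdict} at {acc:.0%}")
```

At a hypothetical 89% accuracy, the sample fails Betts’ 95% criterion but passes Powell’s 85% one, which is the practical sense in which the more liberal cutoff puts kids into relatively harder books.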

Consequently, I have sought studies that would support the original contention that we could facilitate student learning by placing kids in the right levels of text. Of course, guided reading and leveled books are so widely used that it would make sense for there to be lots of evidence of their efficacy.

Except that there is not. I keep looking and I keep finding studies that suggest that kids can learn from text written at very different levels (like the studies cited below by Morgan and O’Connor).

How can that be? Well, basically we have put way too much confidence in an unproven theory. The model of learning underlying that theory is too simplistic. Learning to read is an interaction between a learner, a text, and a teacher. Instructional level theory posits that the text difficulty level relative to the student reading level is the important factor in learning. But that ignores the guidance, support, and scaffolding provided by the teacher.

If the teacher is doing little to support the students’ transactions with text, then I suspect more learning will accrue with somewhat easier texts. However, if reasonable levels of instructional support are available, then students are likely to thrive when working with harder texts.

The problem with guided reading and similar schemes is that they are focused on helping kids learn with minimal amounts of teaching (something Pinnell and Fountas have stated explicitly in at least some editions of their textbooks). But that switches the criterion. Instead of trying to get kids to optimum levels (that is, the levels that would allow them to learn the most), they have striven to get kids to levels where they will likely learn best with minimal teacher support.

The common core standards push back against the notion that students learn best when they receive the least teaching. The standards people want to know what it takes for kids to learn most, even if the teacher has to be deeply involved. For them, challenging text is the right ground to maximize learning… but the only way that will work is if kids are getting substantial teaching support in the context of that hard text.

P.S. Lexiles have greatly improved readability assessment, shrinking standard errors of measurement and increasing the amount of comprehension variance that text difficulty can explain, and yet we are in no better shape than before, since there are no studies indicating that more learning will accrue if you teach students at particular Lexile levels. (I suspect that if future studies go down this road, they will still find that the answer is variable; it will depend on the amount and quality of instructional support.)

Betts, E. A. (1946). Foundations of reading instruction. New York: American Book Company.

Morgan, A., Wilcox, B. R., & Eldredge, J. L. (2000). Effect of difficulty levels on second-grade delayed readers using dyad reading. Journal of Educational Research, 94, 113–119.

O’Connor, R. E., Swanson, H. L., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102, 1–19.

Pinnell, G. S., & Fountas, I. C. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.

Powell, W. R. (1968). Reappraising the criteria for interpreting informal inventories. Washington, DC: ERIC 5194164.

Shanahan, T. (1983). The informal reading inventory and the instructional level: The study that never took place. In L. Gentile, M. L. Kamil, & J. Blanchard (Eds.), Reading research revisited (pp. 577–580). Columbus, OH: Merrill.

Via Lynnette Van Dyke
Heather McKissick's insight:

This makes me ponder: as literacy changes, what do we need to change as well?

Scooped by Heather McKissick

Assessment for learning

Five key strategies for effective formative assessment - Dylan Wiliam and Marnie Thompson. KvUtiS: http://webb2.svedala.se/utbildning Thanks for the feedback and h...
Heather McKissick's insight:

This is such a great way to explain how to use authentic assessments in a classroom!

Rescooped by Heather McKissick from Eclectic Technology

The Elements Of A Literacy-Rich Classroom Environment

"Literacy-rich environments, as endorsed by the International Reading Association, have a significant impact on what goes on in the classroom and set the stage for interactions with a wide variety of genres...

 Perhaps we should begin by focusing attention on the classroom environment and making certain that it is a place that supports and encourages literacy learning. A literacy-rich environment not only supports the standards set by the Common Core, but also provides a setting that encourages and supports speaking, listening, reading, and writing in a variety of authentic ways – through print & digital media."


Via Beth Dichter
PGI- VBCPS's curator insight, January 9, 2013 8:28 AM

This post includes an infographic that looks at the top ten characteristics of a literacy-rich classroom as well as a range of suggestions and links to additional resources. Many great ideas!

Frances's comment, March 4, 2013 12:07 PM
Resources for Literacy
Nicole Schutter's curator insight, March 27, 2014 11:25 PM

Great ideas!

Rescooped by Heather McKissick from Digital Delights for Learners

emaze - For Online Presentations

emaze is the new online presentation tool for people who want more than PowerPoint. AMAZING PRESENTATIONS IN MINUTES! No software required. Start now.

Via Ana Cristina Pratas
Glenda Morris's curator insight, April 16, 2014 5:24 PM

EMaze looks like a great web tool for students - needs further exploration, another alternative to PowerPoint

Rescooped by Heather McKissick from Language Development, Literacy, and the Young Child

A Critical Analysis of Eight Informal Reading Inventories | Reading Topics A-Z | Reading Rockets

Assessing Early Literacy


Via Matthew Mooney