Friday, December 11, 2009

Goal Update- Failure!

Well, there's no sense lying- I've already fallen short of my professional goal this year. (My goal is to develop a differentiated assessment for each unit in social studies.) We just finished Mesopotamia and I didn't create a differentiated assessment- I failed!

Now here's the part where I make up an excuse. Here goes: Part of it has to do with the fact that I've never taught ancient civilizations, so much of my time and preparation is in day-to-day lesson planning and background reading to increase my own knowledge of the subject. I know where I want the course to go conceptually speaking- the themes I want to focus on within each unit of study and the materials I want to use to get there. I had my differentiation book out, open to the chapter on differentiation by product, but there was a large obstruction blocking the creative juices in my brain. I couldn't think of a project for this unit! (What do you think of my excuse? Post comments!)

Despite the fact that I didn't meet my goal for the first unit, I still think it was pretty successful overall. We studied the geography, culture (tools/technology, writing, language, religion), economy, government, and social structure of Ancient Mesopotamia and we read Gilgamesh the Hero in Language Arts (which the kids loved). The librarian from the public library gathered about thirty books on Mesopotamia for me, and students used those throughout the unit in addition to the textbook. We watched a series of short Discovery Education videos which were very engaging and provided some excellent visuals for the kids. Students worked in cooperative groups; each group became expert on a specific technology and taught the class. I have a book with awesome simulations for each ancient civilization and the kids really got into that. They made their own clay tablets and wrote their names in cuneiform (messy, but fun- definitely got across how difficult being a scribe was). Hammurabi's Court was a fun class period, but it wasn't until I read their essays that I realized they thought people who broke the law really went to a place called "Hammurabi's Court." I didn't make that point clear enough:) A lot of reading and double-column notes, of course.

My assessment for this unit was a seven-paragraph essay: introduction, geography, religion, culture, economy, political/social systems, and conclusion. Students wrote this over the course of one week. Day One: reviewed some steps for expository writing; students used a graphic organizer I provided to organize information for their essays. Day Two: drafting. Day Three: typing. Day Four: peer revising and publishing. Of course the students wanted to know how much they had to write, so we worked off of a general framework: 3 details per paragraph- okay (C), 4 details- better (B), 5 details- best (A).

The best part of having students write is it really shows you what they know. They knew a lot about the geography, religion, economy, and culture, but much less about the political and social systems so I need to spend more time and rework that section of the unit. The other thing I noticed is that students recalled and organized information effectively, but I would like to see them move beyond restating- only a handful of students explained or drew conclusions. This is something we will address, practice, and reinforce in the remaining units. The assessment for each unit will always be an essay, because students need to write as much as possible, but I definitely want a differentiated piece so students have an opportunity to showcase their strengths.

I will not give up on my goal. My next post will be about my differentiated Egypt project!

Monday, December 7, 2009

Test

I just tried to set this up as Angie suggested so that everyone who has been contributing or reading the blog will get an email notice when someone posts something new. Did it work?
-Tamar

Using the students' questions

A few days before our unit test, I gave students the following directions:

Look back to everything you have learned starting with the microscope. Using your notes, handouts and your textbook, create a fair test that includes content about microscope and slide preparation skills, identifying cells and cellular structures under the microscope, lab safety rules, and an overview of the 6 Kingdoms of Life.

The students were very engaged in creating a practice test with an answer key, and their tests were actually really thoughtful.

On the following day, students exchanged tests with other pairs, completed them, and then graded them. For HW over the weekend, students were to review the tests they created as well as the tests that they took. On Monday, we had a quiz show, which the kids were just over-the-top excited about. They loved it so much, they wanted to have a similar quiz show for math. I asked students to write down questions that were new to them or that they got wrong during the game so they could add them to their study guide.

I used the students' questions to create questions for the actual test. Some were questions they had answered during the quiz show, and others came directly from their practice tests. Tuesday was the day of truth.

I looked back at our first disappointing test and the averages in the two classes were 64% in 7F and 65% in 7P. For our second test, there was a 10-point increase in both class averages: 74% in 7F and 75% in 7P. I wasn't sure what to expect, but I was hoping that most students would do well. As it turned out, a few did well and most passed, hence the still-modest averages. Still, I think the 10-point increase is significant and a small success. I believe the students who failed are students who relied solely on the time we spent reviewing in class and spent no time at home preparing.

I am using cooperative learning activities (e.g. jigsaws, 3-minute review, team-pair-solo) more often in science class now, and I am hoping that the class average on our next test can be comfortably in the 80-90 range. I plan to use this method again, though there are some drawbacks. It does involve several days of preparation prior to the test, and as I found, some students are content to rely only on the time we spend together in class as their review time. What do I do with these students? I am reluctant to give them more of my time if they do not take any time on their own to prepare. I don't assign students any science homework (except to finish work they've started in class), so I think it's a reasonable expectation to have them study for tests on their own at home, especially if we've already devoted so much class time to reviewing.

Friday, December 4, 2009

Assessments

My efforts to have students become thinkers and problem solvers...
Although this post is more about students thinking (or not thinking) about their evaluations!
Some thoughts on Angie's comment about study guides etc. On our first big food chemistry assessment the students reviewed their notes in small groups and wrote example questions using Bloom's taxonomy, and from those questions I typed up a study guide...rather simple, grouping similar questions. The test was OK for some, but I'm concerned about the students we are always concerned about...Do they make no effort to prepare other than sitting in class? Do they not process anything that goes on in class? This was also the time of kids being out for 3-4 days at a stretch. Do I need to teach the class to them after school? Maybe that's what I should have done. Classmates shared notes with them and helped them with the study guide, but I had two students tell me they just couldn't do the test; they had been out sick, but they also did not seriously use the extra time they received or fully complete the study guide. After the first round with the test, I redid the study guide giving more space, making more diagrams etc. and students who wanted to complete the new guide could retake the test. There was only one student who really made an effort to do it well and retake the test. He did much better. It seems like a lot of work on my part for not much return. I would be interested to hear more about the assessment group Angie works with to get some ideas. DC

Goal Blogging

How well is this goal blog working for you all in terms of giving and receiving feedback? Tamar, I wonder whether you could update the settings to email everyone participating in the blog when someone publishes a post so that we can see what everyone is doing/trying and give each other feedback in a timely manner. What do you all think?

Wednesday, November 4, 2009

Angie's Assessment Update

Progress is slow, but I am reconsidering what the assessment should look like after my first run.

I created a study guide for the first unit based on the notes and lessons we had. Then I created the test from the study guide. The test had to be shorter than the study guide, but I felt it was good for the students to review all the important terms, concepts and applications even if they would not be tested on them because they form the background students would need for future lessons.

The test, like the study guide, had two parts. Part A was a vocabulary and concept check, with some very basic application (e.g. do you know how to create and label a box-and-whisker plot given a data set). Part B was a mini-lab in which students would work with a lab partner to expose potatoes to three different environments, with one control, and explain the movement of molecules based on their observations and measurements. Both parts would be given over two days. So this is where I started.

On day one, I looked over the tests the students had completed and I knew we couldn't complete Part B. The vocabulary section was the worst. I thought students would ace this part since it was basic recall of vocabulary and concepts and also since we'd spent weeks defining, labeling, and identifying the vocabulary in various contexts, including the infamous Egg-speriment (which I am also reconsidering now after 4 years at it). Instead of giving the second part of the test the following day, we talked instead about what students did (or didn't do) to prepare for the test, whether and how the study guide was helpful, and lessons we can learn for future assessments. By the way, I did have three students get better than 100% on this first part of the test, but that is clearly not enough to make up for the utter failure of most of the class. The students admitted that they only relied on our class review to prepare for the test; many had incomplete study guides and most did not bother reviewing.

At this point, I know that the primary problem was not the assessment itself, since I have given a form of this test in previous years without a dismal passing rate. What I did learn for next time is that students strongly felt a review game would have been helpful. In the past, while I have found review games to be fun, I have not seen them result in significant gains. Inasmuch as they play a role in assessments, I would like to steer away from them and use review stations instead, which I feel are more focused and don't carry the distraction of earning points for one's team.

Moving forward, I would like to try creating tiered assessments. I meet regularly with a small Critical Friends Group of middle school math and science teachers and one of the things we discussed this weekend is using tiered assessments to determine how well students are meeting objectives. I think I already do this in some fashion, but the idea of explicitly designing a test with clearly labeled levels of questioning isn't something I've done and would be an excellent use of Bloom's taxonomy. I think this change will be better than the mammoth 2-day assessment I had in mind because it can incorporate application of concepts as a basic objective that all students have to meet. I have a couple of ideas on the actual design of the assessment so that it is a simple one-day assessment. Labs will still have to be separate assessments; there's no way to get around this. They become fewer and fewer as we move into more activity-based (rather than lab-based) lessons, so this will be something that applies only to the first unit.

Sunday, October 25, 2009

What's Working - Tamar's update

I don't know about other people, but I felt really good about the "critical friends" group that took place on Friday. For those of you who weren't there, it was my turn to share student work, and I brought in an email from one of my students to her "book buddy" in Kenya. The letter was two paragraphs long, and the first paragraph (social) included text messaging language, while the second (academic -- about the books she is reading) was written in standard written English. We used a protocol that I adapted from the National Writing Project's protocol for looking at digital composition, a protocol that I will be using in a presentation on this digital writing exchange at the MacArthur Foundation's "Digital Is..." conference in Philadelphia next month. The protocol asks participants to focus on what they notice, what they wonder, what questions the piece raises for them, and what's working. They are asked to withhold all judgment. This is an approach that was central to a graduate class I took at Bread Loaf a few years ago with an incredible professor named Michael Armstrong. His work is all about looking at what's working in student writing.

I left the "critical friends" group on Friday feeling energized and excited about the work our students are doing. So often I can get bogged down in what they're doing wrong in their writing, and I forget to look at what they're doing that's working. So often I can get judgmental -- not just of them, but of myself as a teacher, since I feel responsible for every mistake they make -- and I forget to appreciate and learn from what they write. The NWP/Michael Armstrong approach has been invaluable to me as a writing teacher, and I was very happy to share it with my colleagues.

This weekend I am reading and responding to my students' epic stories (based on structural elements we found while reading Beowulf). They are long. I have been dreading this weekend of grading... But as I read through them, I am reminding myself to notice, to wonder, to question, and to appreciate what is working.

As part of the writing process for this piece, students all commented digitally on their peers' epics, and they used these comments (theoretically!) to revise. I'm pleasantly surprised as I look at the comments they left for each other and the revisions they did to see that they really took their roles as commenter and reviser seriously. As I read, I am responding to their comments (things like, "I agree with what Jafah noticed here... I see how you changed this paragraph in response to his comment"). I am also writing a two-part comment at the end of each piece: What I noticed/what's working (this part is a narrative... 4-6 sentences), and a bullet-point list of "suggestions for next time," which tends to focus more on mechanical elements.

Then I'm slapping a grade at the end, which still feels a bit problematic... This process just isn't easily quantifiable!

Wednesday, October 7, 2009

New assessment (Tamar)

I'm grading the kids' creative endings to The Giver, and after much deliberation about how to balance the nitty-gritty (mechanics) with the touchy-feely (creativity, effectiveness, style, voice), I came up with a combo-rubric of sorts. I can't seem to paste the document here, so I'll just describe it. The top half is a chart with 4 columns labeled: "Based on this piece, the writer is..." "a novice (just learning)" "an apprentice (shows some skill, but needs work)" "a master! (very few mistakes)". There are 6 rows: "paragraphing," "capitalization," "spelling (including homonyms)," "consistent verb tense," "complete sentences (no run-ons or fragments)," and "correct formatting." The bottom half of the page is an empty box labeled: "The effect of this piece on the reader (Ms. Paull)." I'm pretty happy with this as far as teacher-friendliness. I'm not marking any mistakes on the writing itself, which saves time and also satisfies my goal of not wanting to bleed ink all over the kids' work. The chart allows me to give the kids specific feedback on their mechanics. I'm trying to reserve the bottom box for positive or judgment-free comments. If anyone would like to try (or adapt) the rubric, let me know.

Now I better get back to that grading...

Friday, October 2, 2009

Tamar's Update: A Positive Approach

I had a really eye-opening goal conference with a student in August. He told me that he loved reading, but was a bad writer. He wanted to learn to make fewer mistakes in writing, but really didn't like writing... because he made so many mistakes! Makes sense, I thought. So what would happen if I focused on getting him to love writing, and addressed his "mistakes" some other way?

I decided to try a new approach with the first set of writing pieces this year. No corrections. None. Only positive feedback. OK, maybe a few tiny little suggestions. But NO circles, cross-outs, "frags" or "awks." This was not as easy as it sounds...

I have never been so aware of my inclination to mark up student work as I was when I read and responded to these papers. I had to keep reminding myself of this experiment, and its purpose. I want my students to become better writers. In order to become better writers, they need to enjoy writing. As my student reminded me in his goal conference in August, nobody likes doing things they are "bad" at. So my goal in responding to this first set of papers was to make everyone feel like they had done a good job. And they had... when I looked at the papers through a purely positive lens.

I underlined my favorite lines in their pieces, looking primarily for sensory detail, which was the focus of this narrative assignment. I then wrote a few sentences at the end telling them what I liked the most about their pieces. I did make the suggestion in several cases that they read their work aloud to make sure it said what they meant to say, but that was the closest I came to mentioning mistakes.

When I handed back the papers, I told them what I had done and why. I said that I didn't circle a single mistake, not because there weren't any ("everybody makes mistakes," I reminded them), but because I wanted to focus on what was best about their writing, and I wanted them to do the same. Then I did a lesson on some of the most common errors that I noticed in their writing (capitalization and homonyms). I mentioned that they would probably notice some of these things in their writing, and that next time they could catch the errors before handing in their work.

Today they completed their second big writing assignment, a creative ending to The Giver. They have spent the week writing, peer conferencing (with a set of questions, all of which focus on revision, not editing), revising (which I am still struggling to get them to see is different from editing), and editing (using a checklist of the skills I have taught them thus far). Now I'm torn about how to respond to these pieces. I really want to stick with the positive approach that I used on their last pieces. I still strongly believe that correcting their work won't teach them much of anything... Yet I'm concerned about my own accountability. I'm also torn about the role of their writing assignments in their final grades. I'm thinking about not assigning a grade to their writing pieces at all, but instead giving them narrative feedback that I think will actually make them enjoy writing more (which, I believe, will lead to them becoming better writers). I'm toying with giving more skills-based assessments that I can grade objectively, and really separating that part of grading from the much more subjective process of responding to their writing pieces. Any thoughts? Thanks!

Tuesday, September 29, 2009

Angie's Goal

Science assessments - the old paper and pencil tests - have never seemed adequate as summative assessments for my students. So naturally, I've designed projects and labs that I think will better allow students to show me everything they know at the end of a unit of learning and therefore truly be summative. However, students don't always do well on these assessments and I often hate the grading process, which is usually arduous and messy. I blame this all on the poor design of the assessments. Some of them seem like good ideas but do not translate as well into practice.

What I believe has been one of the most successful summative assessments I have designed comprised a paper-and-pencil test that complements a lab test. I used this assessment at the end of the unit on cellular processes (diffusion/osmosis and cellular transport). What I liked about this assessment is that it gave me an opportunity to see that students understood several key concepts and skills (e.g. how to conduct & write up an investigation, how & why molecules move in/out of cells, the effect this has on the cells) and could also directly apply them in a timed lab activity. Unfortunately, not all units lend themselves to this type of test, and often, when I feel a traditional test isn't appropriate, I have created a project.

This year, I am planning to reassess all my unit assessments and implement backwards design in all my units, picking up from the first unit on cellular processes. This means that I'll start by creating the ideal summative assessment, then plan backwards to lay the path students will follow to realize this ideal. This ideal assessment will have an experiential (hands-on) component so that students have the opportunity not just to tell me what a scientific term means, but to apply it directly in a timed setting.

I am hoping that if I design these assessments more thoughtfully, they will better inform my teaching and help me prepare students for them. The added bonus too is that the correcting process should not be as laborious.

Wednesday, September 16, 2009

Anita's Goal

Last fall I took a course on differentiated instruction, and it is the inspiration for my professional goal this year. The class began with an overview of topics I was familiar with: brain-based learning, Bloom's Taxonomy, multiple intelligences, and learning profiles. Even though the content was not new, I found it beneficial because these concepts are central to planning and teaching (and a little reminder never hurt anyone). As the course progressed, I realized that differentiation based on readiness was already taking place in my classroom. As a language arts teacher, it was common practice for me to provide reading materials at a variety of levels so that all of my students could access content. I even differentiated based on product, allowing students to choose how they would demonstrate their learning. While I was patting myself on the back for using DI strategies, I started to notice that most of the strategies I used were low-prep. So my goal for this year is to utilize one high-prep strategy (tiered activity, centers, stations, etc.) in each ancient civilization unit I teach. I've seen how these activities can benefit students in the classroom, and that is my motivation. My ultimate goal is to have units of study that are entirely differentiated, but it will take several years to get to that level. Adding a few differentiated pieces each year is a manageable goal that I look forward to pursuing. First up- ancient Mesopotamia! I'll let you all know what my first DI lesson is and how it goes.

Thursday, September 3, 2009

Tamar's Goal

Hello! This is something new... blogging about my professional goal. But I opened my big mouth at our faculty meeting last week, and said I thought it might be a good idea, so now here goes!

Every trimester we ask our students to think about a goal that they would like to work on. The way I explain it to them, there are lots of things we are always working on, but by setting one specific goal, we commit to concentrating on it, getting feedback on it, and tracking our progress.

In the past, I have had goals to increase parent involvement and to incorporate technology in my classroom. (I must have had others, but I can't think of them right now! This is why I've decided to start a blog... documentation!)

My goal this year is to give students feedback on their writing that is effective (for them) and sustainable (for me). I'm so tired of spending hours and hours (weekend after weekend) scribbling on students' work, knowing that they will barely even give my comments a glance. Last year I shifted almost exclusively to rubrics, which certainly made my life easier, but I'm not sure how helpful it was for my students.

This year will be about experimenting with different methods and trying to figure out which are most effective and sustainable (in other words, which lead to better student writing and improved attitudes toward writing, while at the same time not eating up entire weekends).

Some things I know (or think I know) about myself as a feedback-giver:
- I do better when I type. My handwriting is terrible, it hurts my hand, and it's hard for kids to read.
- It's important to me to focus on what's working in a student's piece. There is always something positive I can say, and I believe that students (like all of us) respond much better to positive feedback. The last thing I want to do is scare them away from writing.
- I am tempted to circle mechanical errors, but I also don't think this is helpful. I want to explore other ways to address mechanics that do not turn kids off writing.
- I'm wordy! (see above...) I want to focus on the feedback that will be most useful, and not feel that I have to say everything that might be on my mind.
- I want to respond to kids' work as a reader first and foremost. I don't have all the answers.

Inspiration/ resources:
Michael Armstrong ("Children Writing Stories," and others)
Andrea Lunsford
Rhoda Flaxman
Dixie Goswami
My fellow teachers at CP and other schools
... more on these folks in later posts.

I'd appreciate any thoughts that you might have. Thank you!