This section describes the creation of the project materials, the ways in which traditional CPR was adapted, the five stages of the project, and the data collection methods employed.
The set of guiding questions was the first material created for the project. In CPR, the guiding questions are the prompts an instructor uses to cue students as to what is important to address in writing a paper. They are traditionally specific to a given paper's topic, but the guiding questions created for this project were developed to be useful for all scientific writing. They took the form of nine concise prompts, each accompanied by questions for students to consider when addressing it. The purpose of the guiding questions was to give students a broad understanding of what goes into any good scientific writing, so that they could apply them to any paper, not simply the papers due for this project.
The next material created for the project was the rubric. This was derived from, but did not directly copy, the guiding questions. Each rubric criterion addressed one of the guiding questions in such a way that it could be quantified in one of three categories: Poor, Fair, or Very Good. In general, a poor attempt reflected little or no effort on the part of the author in addressing that particular criterion. A fair attempt usually reflected an attempt that was flawed in an important way. A very good attempt conveyed a flawless or near-flawless implementation of a criterion. The rubric changed slightly throughout the process as further collaboration with physics faculty occurred; when a change was made to the rubric, the students were alerted. Most changes were minor; however, the final two criteria (the inclusion of a diagram and mathematical or logical veracity) were added to the rubric after stage 3 of the process.
Three written examples were produced, each covering a single in-class activity and each addressing the rubric criteria to varying degrees of quality. Each paper was written so that certain rubric criteria were addressed well, others fairly, and some not at all. No single paper met all criteria well. This was a deliberate choice so that students would not simply find the best paper and realize that it should receive all Very Good marks. As in CPR, students were given these examples and used the rubric to score each of them. The project members also scored each paper using the rubric, so that data could be obtained on how well students were scoring each of the rubric criteria.
The only other materials created for the project were the various writing prompts.
The process of our project was modified in several important ways from the published CPR procedure. The first important change was the generality of the guiding questions. The guiding questions of a traditional CPR activity are topic specific, guiding students to address very specific concerns in a given paper. The guiding questions developed for this project were created to be general, so that they could be useful to students in any scientific writing rather than tied to each specific activity.
Another important change came in structuring the stages of the writing process. In traditional CPR, every stage of the writing process concerns the same paper: students write a paper, read instructor examples on the same topic, peer review other students' papers, and finish by scoring their own paper. In our project, each of these stages occurred over a separate class activity write-up. This is an important difference because, although the in-class activities were related and shared similar geometry, they gradually built in conceptual difficulty.
Each of the three papers was based on small group activities that took place during class. Students worked in groups of three or four, using small whiteboards to sketch diagrams, ideas, and equations to solve various problems. Instructor guidance was provided only when needed. At the end of each activity, groups shared their findings with the entire class, and the instructor led a brief reflection session drawing attention to the important conclusions and aspects. Students thus had opportunities to fully explore the material they were to write about with other individuals in the class as well as with instructor guidance. In traditional CPR assignments, students are given source material to explore outside of class, and the topics may or may not be linked to those covered in lecture.
The project was carried out over five stages.
Students were given the guiding questions (Appendix A) and the following prompt:
Using the handout “Guiding Questions for Science Writing” to suggest topics that you should address, write up your “analysis” of the activity entitled Electrostatic Potential From Two Charges. You do not need to do the calculations from every case, but your analysis should include some comparison of different cases, as we discussed in class after the activity. To help us with the grading process, please turn in this writing assignment stapled separately from your other homework.
Each student then wrote and submitted their first paper, without having seen an instructor example or the rubric.
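For context, a sketch of the standard superposition result that this activity presumably builds on, with point charges q1 and q2 at positions r1 and r2 (the notation is my own, not the activity handout's):

\[
V(\vec{r}) = \frac{1}{4\pi\epsilon_0}\left(\frac{q_1}{|\vec{r}-\vec{r}_1|} + \frac{q_2}{|\vec{r}-\vec{r}_2|}\right).
\]

Comparing cases, as the prompt suggests, might then mean evaluating this expression at points on the axis between the charges, far from both charges, or for equal versus opposite charges.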
The project's scientific writing examples (Appendix B) were uploaded to the class website. Students were given the original version of the rubric (Appendix C) and used the Rubric Evaluation Worksheet to submit their scores for each of the three examples. Students followed this prompt on the website:
You will find three files attached below. These are sample write-ups for the ring activity that you have just completed. Your task is to read each of the sample write-ups and evaluate them on a Very Good-Fair-Poor scale, based on the criteria listed in the Rubric. Fill out the Evaluation Submission Page for each of the write-ups (in the end, you will have filled it out three times). Identify the sample you are evaluating by indicating the paper code (D1, D2, or D3) at the top of the Evaluation Submission Page. If there is an evaluation criterion you feel is inappropriate to include in the evaluations (i.e. items about handling experimental data), indicate this on the Evaluation Submission Page by selecting the “Not Applicable” radio button.
Data were sent to the course instructor and stored in Excel spreadsheet format. Each of the project collaborators scored the examples as well, allowing statistical calculations of the level of agreement between student and professional scoring. Ideally, students would have received feedback informing them of how well they agreed with instructor scoring; due to time constraints, this never occurred. Statistical analysis was carried out on the data received in this stage of the process.
Students wrote a second paper based on the following prompt:
Writing Assignment: Write a complete solution to the following problem. Keep the Guiding Questions and the Evaluation Rubric in mind when writing this assignment.
(a) Find an integral expression (that Maple could evaluate) for the electric field everywhere in space due to a ring of charge. Assume the ring has a uniform charge density, a radius R, and that the total charge on the ring is Q.
(b) Find the electric field and the electric potential due to this ring of charge for all locations on the z-axis. Comment on your answers.
(c) In your discussion section, explore the similarities and differences between your answer for the potential and your answer for the electric field. There are many things one might discuss such as the relationship of the physics to the mathematics and/or various limiting cases. Make your own professional choice about what to discuss. Since you don’t have the results of other students, it would not be relevant to comment about them.
Students submitted these papers in hard copy along with a standard homework assignment.
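For reference, parts (a) and (b) of the prompt target standard results; the following sketch assumes a ring of radius R and total charge Q centered at the origin in the xy-plane (notation my own). The integral expression of part (a) is

\[
\vec{E}(\vec{r}) = \frac{1}{4\pi\epsilon_0}\int_0^{2\pi} \frac{\lambda\,(\vec{r}-\vec{r}\,')}{|\vec{r}-\vec{r}\,'|^{3}}\,R\,d\phi',
\qquad \lambda = \frac{Q}{2\pi R},
\]

and on the z-axis part (b) reduces to

\[
V(z) = \frac{1}{4\pi\epsilon_0}\,\frac{Q}{\sqrt{z^{2}+R^{2}}},
\qquad
E_z(z) = \frac{1}{4\pi\epsilon_0}\,\frac{Q\,z}{\left(z^{2}+R^{2}\right)^{3/2}},
\]

where one can check that E_z = -dV/dz, exactly the kind of relationship between the potential and the field that part (c) invites students to discuss.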
In this stage, students anonymously scored three other students' papers and submitted their scores online. The following prompt was given:
You can find the files containing Writing Assignment 2, written by your fellow students on Blackboard. You are being asked to evaluate three of these files, following the writing rubric we have been using in class. The files are numbered. If your file is numbered N, please review the files numbered N+1, N+2, N+3. If your number is near the end of the list, please cycle back to the beginning in the obvious way. You can find your own file quickly, if you know your letter code, which is written on your final exam from last term. We have revised the Rubric, and the Evaluation page. Please use the updated versions below.
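The review assignment rule in this prompt is a simple cyclic scheme. A minimal sketch of it in Python follows (this is not the project's actual code; the function name and 1-based file numbering are my assumptions):

    def assigned_reviews(n, total, k=3):
        """Return the k file numbers the writer of file n should review,
        wrapping around to the start of the list when needed."""
        return [(n - 1 + i) % total + 1 for i in range(1, k + 1)]

    # Example: with 12 files, the writer of file 11 reviews files 12, 1, and 2.
    print(assigned_reviews(11, 12))  # [12, 1, 2]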
Students used an updated version of the Rubric Evaluation Worksheet from stage 2 of the project, as well as an updated rubric, to submit their peer reviews. After everyone had uploaded peer reviews, the data were made available to the students so that everyone in the class could receive feedback in the form of the peer reviews.
A problem occurred during this stage in the implementation of the updated Rubric Evaluation Worksheet, and a number of peer evaluation scores were never actually collected. Many students received peer feedback from only one other student, when the intent was for each student to receive three peer review scores. By the time the glitch in the web page was located and corrected, many students had already submitted their scores, and asking them to do so again would have been unfair.
Students wrote a third paper after having had further experience with the rubric in the peer review stage. They were given the following prompt:
Use the Guiding Questions and Rubric to write up the small group activity where you calculated the magnetic vector potential due to a spinning ring.
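For context, this activity presumably targeted the standard magnetostatic integral; as a sketch (symbols my own): a ring of radius R carrying total charge Q and spinning at angular frequency ω is equivalent to a current loop with I = Qω/2π, so the vector potential is

\[
\vec{A}(\vec{r}) = \frac{\mu_0 I}{4\pi}\oint \frac{d\vec{l}\,'}{|\vec{r}-\vec{r}\,'|},
\]

an integral directly analogous in structure to the electric potential integral from the earlier ring activity.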
Students were given the following prompt asking them to reflect on the writing process:
Write a page or so, answering the following questions: (a) Evaluate your writing. How did it change between the first writing assignment and the last one? (You might want to refer to some of the rubric evaluation criteria and/or you might want to discuss how your ability to express your ideas has changed.) (b) What events in the past six weeks have had the greatest impact on your writing?
This reflection piece of the project was important because it asked students to analyze their writing and identify the changes that had occurred. Not only was this important for students in solidifying any positive change that may have occurred, but it also allowed for triangulation of the quantitative data on how student writing had changed.
This section contains detailed information about the data collected, as well as an exploration of the validity of the data.
The data that were examined came from the following sources: student notebooks from 2006, student writing samples (stages one, three, and five), student evaluations of instructor examples (stage two), student evaluations of one another's writing (stage four), and student reflections on the process (stage five).
Data were obtained by scoring the scientific notebooks of the previous year's group of physics students. These notebooks were write-ups of the in-class activities, similar in content to the writing covered by students in this research project. The students who wrote the earlier notebooks, however, did not receive very explicit instruction or guidance: Professor Manogue assigned the notebooks with the general directions to write about the activity, how they solved it, and what had been learned. Ideally, student notebooks would look like well-written scientific papers, addressing each of the rubric criteria, even though the rubric had not yet been developed when the notebooks were assigned. These notebooks were scored to provide context on how the guiding questions themselves may have affected student writing (assuming students from the previous year had roughly the same writing capabilities as the students involved in this research project).
To collect data from the student writing samples, each paper was read twice and scored on each rubric criterion. The first reading of a student's sample was uninterrupted and provided information on the style, content, and flow of the paper. The second reading was briefly interrupted each time a score for a particular rubric criterion could be assigned. The use of the rubric allowed for quantitative data showing how each criterion differed across papers for a given student, as well as how overall criterion scores differed for all students across the three papers.
Students used an online submission form when evaluating the instructor examples in stage two of the process. After having been prompted to read through each sample, students were able to assign a letter grade to each criterion with a short justification of their score. These responses were stored in electronic spreadsheet format. The instructor examples were also scored using the rubric by the professor of the course, an undergraduate senior who had completed the paradigms courses in the previous year, a collaborating graduate student, and a collaborating post-doctoral associate. These instructor evaluations allowed for calibration of the rubric, indicating the degree of accuracy with which students were able to understand the rubric and apply consistent scores to the writing samples.
In stage four, each student paper was assigned a code and uploaded electronically to a secure database. Students were given the codes of three papers to score and, using the web-based scoring method from stage two, uploaded their scores to a spreadsheet for instructor evaluation purposes. The codes were assigned randomly so that the grading between students was anonymous.
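The anonymization in this stage can be illustrated with a short hypothetical sketch (the filenames, code format, and variable names are my own; the actual papers lived in a secure database):

    import random

    # Hypothetical paper files to be anonymized.
    papers = ["writeup_a.pdf", "writeup_b.pdf", "writeup_c.pdf"]

    # One code per paper, shuffled so the ordering of the codes carries
    # no information about whose paper is whose.
    codes = [f"W2-{i:02d}" for i in range(1, len(papers) + 1)]
    random.shuffle(codes)

    # Only this mapping, held by the instructors, links codes to papers;
    # reviewers see the codes alone.
    code_to_paper = dict(zip(codes, papers))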
After having produced their final writing sample, students were asked to compare their final work to their original paper and comment on the process. As part of another homework assignment for the class, they were prompted to identify how their writing had changed and to describe the parts of the course or process that had led to this change. Reflections were to be approximately one page in length.
For a rubric to accurately capture quantitative data about the quality of students' writing, a given paper should be assigned the same scores by separate evaluators. However, due to the subjective nature of grading and the brief development time for the rubric itself, it was expected that not all evaluators would assign exactly the same scores for a given paper. To measure the deviation among evaluations, the instructor-developed samples were scored by four parties: the professor of the course, the undergraduate evaluator, a graduate student, and a postdoctoral associate. The results of this calibration can be found here.
The rows in bold show the deviations of the undergraduate evaluator from the average scores. Because the rubric can only produce integer values, it may be more useful to round each average to the nearest integer and compute deviations from that. With four evaluators scoring three separate examples, the deviation of the undergraduate evaluator from the average was never more than 0.75 for any given paper or criterion (and that occurred for only one criterion in one paper), and was on average less than 0.25 points. This calibration shows that data obtained by the undergraduate evaluator carry an average uncertainty of less than 0.5 points per criterion. Because the data are discrete integer values, it is likely that when assessing a given individual student's paper, only one criterion score would differ from a group of evaluators' scores. However, this calibration does suggest that average changes of less than 0.25 across many criteria or many papers may be meaningless.
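As an illustration of the deviation calculation described above, the following sketch uses hypothetical scores (the real data lived in the project's spreadsheets), mapping the rubric categories to integers as Poor = 1, Fair = 2, Very Good = 3:

    # Hypothetical calibration scores for one example paper; one list entry
    # per rubric criterion (Poor = 1, Fair = 2, Very Good = 3).
    scores = {
        "professor":     [3, 2, 1, 3],
        "undergraduate": [3, 2, 2, 3],
        "grad_student":  [2, 2, 1, 3],
        "postdoc":       [3, 3, 1, 2],
    }

    for i in range(len(scores["professor"])):
        avg = sum(s[i] for s in scores.values()) / len(scores)
        dev = scores["undergraduate"][i] - avg
        print(f"criterion {i + 1}: average = {avg:.2f}, "
              f"undergraduate deviation = {dev:+.2f}")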