Summary of the work from the AOS 98 SLO committee
Much of the discussion at the September 20 SLO Committee meeting focused on the guidance document using-baseline-data-and-information-guidance from the Rhode Island Department of Education. The thinking in this document reflects the focus we would like to take in developing SLOs going forward. Several of its recommended practices differ from approaches taken last year; they are summarized below.
If you would like to read the entire document (only 15 pages), go here: using-baseline-data-and-information-guidance
Why Complete a Needs Assessment?
The goal of developing an SLO is to improve student learning by being intentional about what we want students to know, how we plan to accomplish this goal, and what supports we will need to help all students achieve these outcomes. Data review helps us to identify the needs of our current students. A good needs assessment is the beginning of effective differentiation for students.
What Data is Required for this Needs Assessment?
It is in this area that the thinking of the Committee has begun to change and evolve. As described in the document above, appropriate data is anything that allows us to know where our students are currently.
This data can include:
- State assessment results
- Report card grades
- Scores on formative or summative assessments
- Individual Education Plans
- Report card comments from previous years
- Observations of student behavior
  - Work habits
  - Interactions with other students and teachers
- Running records
- Quarterly writing prompts
- Fall/winter/spring benchmarks
- Performance tasks
- End-of-unit tests
- Research papers
- Lab reports
Which data is appropriate depends upon the learning outcomes being assessed, the type of assessment being used for the summative measure, and whether the data in the needs assessment will inform your preparation for the students being measured.
The data chosen should facilitate the prediction of growth goals for the students and allow one to identify the students who are likely to struggle, the students who are at grade level, and the students who are high achievers. You may not have students in all three categories, but the data reviewed should help to differentiate the students in your cohort.
What about the use of Pre-Tests?
Pre-tests are one potential source of Baseline Data. A true pre-test measures the same knowledge and skills that will be measured on the post-test, is in the same form, of the same length, given under the same conditions, etc. For a variety of reasons, high-quality pre-tests can be difficult for educators to design and administer themselves. For example:
- If the assessment is not based on knowledge or skills that progress in a linear manner (such as reading fluency), pre-tests may not be appropriate.
- To truly compare pre- and post-tests, they must be comparable in terms of difficulty, length, and the manner in which they were administered.
- If aware that a pre-test “doesn’t count,” students may consciously or unconsciously underperform, thus deflating pre-test scores.
- If the post-test too closely resembles the pre-test, scores may be inflated due simply to familiarity and exposure.
These difficulties, as well as others, threaten the validity of the test results. Therefore, it is important to use pre-tests and post-tests when appropriate, such as if an educator has set an SLO on content or skills that progress in a linear manner. The following are some best practices and recommendations for creating high-quality pre- and post-tests:
- Create both pre- and post-tests at the same time, before or at the beginning of the interval of instruction.
- Use the same instructions and testing conditions for both test administrations.
- Ensure both forms have approximately the same number of items, representing the same standards and Depth of Knowledge levels.
- If writing original items, use a “cloning process,” which involves creating an item and then creating a duplicate with similar numbers, comparable texts, etc.
- Collaborate with another educator, or have another educator review the test forms.
If an educator has set an SLO on content or skills that do not progress in a linear manner, a pre- and post-test model is likely not appropriate. For example, it may not be useful for an American History teacher to administer a pre-test on names, dates, and facts of American History prior to instruction. However, he or she might consider administering an assignment that asks students to demonstrate related skills that would impact their success in the course, such as comprehension of informational text or analysis of primary and secondary sources. Unlike a pre- and post-test design, in which the two assessments closely resemble each other, these sources of baseline data need not be an exact match for the summative evidence used for the SLO in terms of format and content.
(Using Baseline Data and Information to Set SLO Targets: A Part of the Assessment Toolkit, p. 9)
Pre-tests can play a role in some SLOs, but only if they help establish pre-knowledge or prerequisites, placement on a continuum of learning, and differentiation for the students in the cohort.
Giving a pre-test on knowledge or skills that are entirely new to students has little value for informing growth targets or instruction. If a pre-test only confirms that students do not know the material, it wastes both your time and theirs and does little by itself to help establish fair and challenging goals for all your students.
Does the needs assessment have to be the same type of assessment as the summative assessment used at the end of the unit of instruction?
No. The data used in the needs assessment should inform how you expect students to perform on the chosen targets, but it may not measure each of the learning outcomes directly. If I were to complete a needs assessment for a course in physics, I would look closely at students' performance on algebraic and trigonometric concepts as measured by previously completed classes, NWEA scores, or a pre-assessment I administer to measure mathematical fluency. I might use what I know about these students' strengths and weaknesses as observed in other courses I have had them in, or in the current course I am teaching. I might check with other teachers of those students for their observations. I might review PSAT data in math, and so on. I would probably not give them a pre-test modeled on my final exam or final performance assessment, as this would be unlikely to inform my growth targets for these students.
Instead, my growth targets would be based upon breaking my students into sub-groups according to their strengths, and upon a professional prediction of what a reasonable amount of growth, or level of proficiency to attain, would be by the end of the unit of instruction, differentiated for each sub-group in my class.
Please review the examples provided for various grades and content areas in the document above.