Part 3: What instructional approaches matter most?

Interpreting the Research

This is the third post considering John Hattie’s Visible Learning for Teachers.

Post 1 is here.

Post 2 is here.

If you have looked at the first two posts on this topic, it should be clear that identifying effective strategies using research and statistics isn’t enough.  Hattie’s list of effect sizes is subject to a variety of interpretations; it can inform teaching if we use it to identify potentially valuable instructional approaches.  The next step is to review what the approach described in the literature really looks like in the classroom.  John Hattie’s book Visible Learning for Teachers: Maximizing Impact on Learning (2012) provides the detail and, for the truly brave, the citations to the actual research papers.

If we take one of the more effective strategies, direct instruction, we can illustrate the pitfalls and the opportunities of using research to inform instruction.  In Hattie’s book for teachers, direct instruction is defined in the following manner:

  1. Before the lesson is prepared, the teacher should have a clear idea of what the learning intentions are: what, specifically, should the student be able to do/understand/care about as a result of the teaching?
  2. The teacher needs to know what success criteria of performance are to be expected, and when and what students will be held accountable for from the lesson/activity.  As importantly, the students need to be informed about the standards of performance.
  3. There is a need to build commitment and engagement in the learning task- a ‘hook’ to grab the student’s attention such that the student shares the intention and understands what it means to be successful.
  4. There need to be guides to how the teacher should present the lesson- including notions such as input, modeling, and checking for understanding.
  5. Guided practice involves an opportunity for each student to demonstrate his or her grasp of new learning by working through an activity or exercise- such that the teacher can provide feedback and individual remediation as needed.
  6. Closure involves those actions or statements that cue students that they have arrived at an important point in the lesson or at the end of the lesson, to help to organize student learning, to help to form a coherent picture, to consolidate, to eliminate confusion and frustration, and to reinforce the major points to be learned.
  7. Independent practice then follows first mastery of the content, particularly in new contexts.  For example, if the lesson is about inference from reading a passage about dinosaurs, the practice should be about inference from reading about another topic, such as whales.  The research on direct instruction suggests that the failure to follow this seventh step is responsible for most student failure to be able to apply something learned.
From Visible Learning for Teachers (2012), p. 73, by John Hattie.

The last sentence illustrates the first pitfall of applying research to instruction: implementation fidelity.  Classroom applications of research-based strategies often produce underwhelming outcomes because the approach was not implemented in the manner measured by the research.  Implementing 90% of the approach may not be sufficient if similar results are to be expected.

The second pitfall is related to the first.  This model of direct instruction should sound familiar to many long-time educators: it is closely aligned with Madeline Hunter’s ITIP model.  There are few radically new ideas in education, often only refinements, and that is the case for the research base surrounding direct instruction.  It would be easy to scan this summary of the model and miss the distinctions from Hunter’s model.  One of the most important concerns the second step.  Hunter’s model focuses on stating the lesson’s objectives to students.  The research base suggests that this is insufficient; what needs to be communicated to students is the performance expected of them in demonstrating their understanding.  Again, lapses in implementation fidelity can creep in when we are familiar with parts of a method but not the whole approach.

Another example of the importance of paying attention to subtle meaning is illustrated by the term feedback in step 5.  The research literature around both direct instruction and feedback is clear that feedback is not the same as grading.  Without going off on a tangent about the research on feedback, this step must be implemented by identifying what aspects of the performance are not adequate and providing corrective feedback.  Assigning problems in a math class to be completed and graded is not guided practice in this model.  Providing feedback on which step of the problem is wrong is.


Ease of Implementation:  The research on homework illustrates another consideration.  Hattie’s synthesis of 161 studies on the effects of homework on achievement yields an effect size of 0.29.  As noted in the first post, this suggests that homework has a low impact on achievement, similar in effect size to studies of home visits, desegregation, and exercise/relaxation during the school day.

Should we drop homework from our toolkit?  Maybe, but it can still account for as much as a 15% improvement in achievement over the course of the school year.  More importantly, it is cheap and relatively easy to implement.  Additionally, Hattie’s research reviews point out which homework practices are more effective and which are less effective.  The type and quantity of the assignment may matter more than a school-wide policy about whether or not to assign homework.  As an aside, perhaps we should be equally concerned about providing exercise and relaxation for students in addition to homework?
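To see where figures like that 15% come from, effect sizes such as Hattie’s can be translated into percentile gains, under the common assumption that they are Cohen’s d values on normally distributed achievement scores.  The sketch below is an illustrative conversion, not Hattie’s own calculation:

```python
from math import erf, sqrt

def percentile_gain(d):
    """Percentile gain of the average treated student over the control
    median, assuming normal score distributions (Cohen's U3 - 50)."""
    u3 = 0.5 * (1.0 + erf(d / sqrt(2.0)))  # standard normal CDF at d
    return 100.0 * (u3 - 0.5)              # points above the 50th percentile

# Homework (d = 0.29) vs. Hattie's 0.40 "hinge point"
print(round(percentile_gain(0.29), 1))  # about 11.4 percentile points
print(round(percentile_gain(0.40), 1))  # about 15.5 percentile points
```

So even a "low impact" effect of 0.29 moves the average student roughly 11 percentile points, which is why dropping homework outright is not an obvious call.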

Almost Everything Works: Almost every approach measured in the literature has an effect size above zero (those that do not should be eliminated immediately).  It is therefore important to draw a line in the sand; Hattie suggests an effect size of 0.4, as it represents the average improvement in achievement for a year of schooling.  Any instructional approach that provides this level of improvement is worth trying.  Given limited instructional time, depending on authors such as Hattie for guidance is a good investment.

Education is a Practice, not a Science: Even with an awareness of the issues associated with implementation fidelity, achievement is not guaranteed; it may be more likely, but reflection and revision should be the norm.  Consider the last time your doctor gave you a prescription.  The intent of the prescription was to fix a problem.  The research underlying the efficacy of the drug was scientific, but its application to your context is (in part) what the practice of medicine is about.  There is no guarantee that the issue will improve with this prescription, and you and your doctor will reflect and revise if things do not improve.  Educators, too, apply research-proven approaches, but rarely will they work in all cases, even if implemented with fidelity, just as following the directions on your prescription may not improve your condition.

Links to Hattie’s Books

Visible Learning for Teachers.

Visible Learning


About shawncarlson

Assistant Superintendent
This entry was posted in AOS 98, Instruction.

