Understanding by Design (UbD) Template:
An Educator’s Bridge From Effective Learning to Student Understanding
- Explain the six facets of understanding
- Summarize why transfer of understanding is essential to learning
- Identify the meaning behind the acronym GRASPS
- Explain the type of learning that can be measured using traditional assessments vs a grading rubric
UbD STAGE 2: DETERMINE ACCEPTABLE EVIDENCE
UbD STAGE 2: OVERVIEW
Having set the Desired Results in Stage 1, the backward design process of the Understanding by Design (UbD) template continues with Stage 2: Determine Acceptable Evidence. During this stage, designers focus on assessment of the desired goals from Stage 1 (Wiggins & McTighe, 2011). Even when the desired result goals appear to have been met, the designer needs to specify the concrete evidence that will indicate new learning and understanding has taken place. Simply passing an exam is not a concrete indicator of understanding (Wiggins & McTighe, 2011). Research has shown that assessments requiring students to answer correctly by plugging in a memorized bit of information or applying a standard formula do not guarantee meaningful understanding, as the student is simply mimicking for correctness (University of Waterloo, 2011). Further, testing a specific skill via a simple demonstration out of context can also lead to a false sense of mastery (Wiggins & McTighe, 2011).
While simple correct-versus-incorrect assessments are suitable for right-or-wrong answers such as spelling, defining terms, and simple facts, assessment for deeper understanding requires meaning-making evidence that can be transferred to new, real-life situations. If the measure of learning is a student's ability to use the new learning (understanding, knowledge, and skills) in a new situation outside of the classroom, then true evidence of understanding should be assessed in the same fashion, with students using their own voice and their own problem-solving skills much as they would in a real-world setting (Wiggins & McTighe, 2011). Instructional designers must remain cognizant of creating tasks that require the student to show deep understanding of the goals identified in the Transfer and Meaning sections of Stage 1. Stage 2 of UbD sets out to guide the assessments, keeping them aligned with the Stage 1 Desired Results goals.
Six Facets of Understanding
Students who achieve understanding as transfer can:
- Explain using their own words
- Interpret using stories or personal anecdotes
- Apply & adjust new learning outside of class
- Demonstrate perspective by considering other viewpoints
- Show empathy for others
- Demonstrate metacognition
While the six signs of understanding as transfer are not a prescription for creating performance tasks that show evidence of deep understanding, the outcomes of such tasks generally illustrate learning and understanding in one or more of the six facets of understanding (Wiggins & McTighe, 2011).
Validity of Evidence is Key
Before a designer can set tasks as evidence-markers for understanding, the UbD theory asks the designer to keep two validity questions in mind:
A well-aligned task should prompt a mostly negative response to both questions. Applying these two questions to the proposed activity will keep the assessment task aligned with the Stage 1 goals (Wiggins & McTighe, 2012).
Designing Performance Tasks
Each GRASPS task should include the following elements to ensure that it has enough detail for the student to execute it properly:
G = What is the goal of the real-world task?
R = What role will the student play in the proposed task?
A = Who is the audience for the task?
S = What is the situation, or context, in which the task is set?
P = What will the end product be or look like?
S = What are the standards or criteria for judging success?
Unlike drills or solving for simple answers, such as fill-in-the-blank homework assignments, GRASPS performance-based tasks work to engage the student in deeper, independent understanding by asking students to analyze, interpret, and create on their own, using prior knowledge and new learning to complete the task. Doing so increases the validity of the task, as it requires transfer of understanding to new situations with little to no guidance on the part of the educator (Wiggins & McTighe, 2012).
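As a loose illustration only (not part of the UbD template itself), the GRASPS elements can be thought of as required fields of a task record that a designer fills in and checks for completeness. The sketch below is hypothetical; the field names, sample task, and helper function are invented for this example:

```python
from dataclasses import dataclass, fields

@dataclass
class GraspsTask:
    """Hypothetical checklist: one field per GRASPS element."""
    goal: str       # G - real-world goal of the task
    role: str       # R - role the student plays
    audience: str   # A - audience for the task
    situation: str  # S - situation or context of the task
    product: str    # P - what the end product will look like
    standards: str  # S - criteria for judging success

def missing_elements(task: GraspsTask) -> list[str]:
    """Return the names of GRASPS elements the designer left blank."""
    return [f.name for f in fields(task) if not getattr(task, f.name).strip()]

# An illustrative, partially completed task draft.
draft = GraspsTask(goal="Advise a consumer about wine-label claims",
                   role="Wine-shop staff member",
                   audience="Retail customers",
                   situation="",
                   product="A one-page buying guide",
                   standards="")
print(missing_elements(draft))  # ['situation', 'standards']
```

The point of the sketch is simply that every GRASPS element must be filled in before the task has enough detail for a student to execute it.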
UbD STAGE #2: PERFORMANCE TASKS
(#1) In the example to the right, the left-hand side of the Performance Task section of the UbD template asks the designer to list all of the corresponding Transfer and Meaning Goals from Stage #1 that apply.
(#2) The Performance Goals are created using GRASPS (Wiggins & McTighe, 2012). In this example, the task meets three facets of understanding: explaining in the student's own words, interpreting data from a label, and independently applying learning to a consumer communication piece (Wiggins & McTighe, 2011).
(#3) On the right-hand side of the Performance Task section of the UbD template, designers should supply the Evaluative Criteria that correspond to each Performance Task. Regardless of the task assignment, Evaluative Criteria outline the most important qualities required to assess the desired results (Wiggins & McTighe, 2011).
Making the Stage #1 & Stage #2 Connection: Evaluative Criteria
The complex nature of Performance Tasks often requires more than one type of criterion to assess evidence of understanding as transfer. Clearly identified, varied criteria afford the student more feedback opportunities and help make the goals transparent for both teacher and student. Further, solid, multi-faceted criteria help educators maintain consistency when grading subjective Performance Tasks (Wiggins & McTighe, 2012).
Analytic Grading Rubric
An analytic grading rubric is a tool that marries a set of criteria to a form of measurement or analysis to enhance the evaluative nature of the task and to give better feedback to students (Wiggins & McTighe, 2012). The rubric typically offers a small number of valid criteria for evaluating a student's work, each scored independently within the rubric.
UbD STAGE #2: ANALYTIC GRADING RUBRIC
TRAIT: The top line of the rubric identifies the traits, or criteria, under consideration. The criteria should be varied to help enhance as well as evaluate learning (Wiggins & McTighe, 2012).
DEGREE: Since a rubric is not used for right-or-wrong answers, the tool must evaluate the degree of understanding. Each trait or criterion will have language describing the acceptable degree of understanding – in this case across four differing degrees: Exemplary, Sufficient, Needs Revision, and No Evidence.
WEIGHT: Depending on the task, different weights may be given to each trait in association with its relative importance.
Overall, a well-constructed rubric with multiple criteria and weighted degrees of difference will provide students with more accurate evaluation and feedback. Further, the results will provide the educator with more detailed insight into student understanding (Wiggins & McTighe, 2012).
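The weighting described above amounts to a weighted average of the criterion scores. As a hypothetical sketch (the trait names, weights, and point values below are illustrative, not prescribed by the UbD template), the calculation might look like this:

```python
# Map the four degree labels from the rubric to illustrative point values.
DEGREES = {"Exemplary": 3, "Sufficient": 2, "Needs Revision": 1, "No Evidence": 0}

# Hypothetical traits and their relative weights (more important = heavier).
RUBRIC_WEIGHTS = {"Accuracy": 3, "Interpretation": 2, "Presentation": 1}

def weighted_score(marks: dict[str, str]) -> float:
    """Return a 0-100 score from a mapping of trait -> degree label."""
    earned = sum(RUBRIC_WEIGHTS[t] * DEGREES[marks[t]] for t in RUBRIC_WEIGHTS)
    possible = sum(w * max(DEGREES.values()) for w in RUBRIC_WEIGHTS.values())
    return round(100 * earned / possible, 1)

# One student's marks: (3*3 + 2*2 + 1*1) / (6*3) = 14/18.
marks = {"Accuracy": "Exemplary", "Interpretation": "Sufficient",
         "Presentation": "Needs Revision"}
print(weighted_score(marks))  # 77.8
```

Because "Accuracy" carries three times the weight of "Presentation" here, a weak presentation costs the student far less than a weak analysis, which is exactly the fairness the weighting is meant to provide.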
Evidence of Learning: Acquisition of Knowledge & Skills
As with Stage #1 Acquisition, if the new learning is simply knowledge, such as defining terms, spelling, or simple concepts, then traditional assessment in the form of quizzes or multiple-choice tests will suffice. The same holds true for basic skills, such as following the WSET SAT guide while tasting wine. In other words, the best assessment provides proof of meeting Stage #1 goals. If that can be done with a simple quiz, then that is the best assessment for that particular learning. Performance Tasks should be reserved for deeper (enduring understanding) goals that require transfer to new settings (Wiggins & McTighe, 2011).
UbD STAGE #2: OTHER EVIDENCE
Just as with Performance Tasks, traditional or other evidence is coded to the far left using the Stage #1 Desired Results goals (outlined here in green), and the Evaluative Criteria are shown to the far right (outlined here in yellow).
The middle column, labelled Other Evidence, lists the types of traditional assessment the students will complete, such as homework and quizzes (outlined here in red).
The six facets of understanding work to guide designers as they create Performance Tasks for transferable learning. Each Performance Task is evaluated separately and is included in the UbD template. After the tasks are vetted, a grading rubric is carefully created to provide consistent evaluation across different degrees of performance. The criteria are weighted to give a fair assessment of tasks that have no right or wrong answer. An analytic grading rubric provides the educator with student assessment for more focused teaching, while providing the students with evaluation and feedback for better learning in the future (Wiggins & McTighe, 2012).
UbD STAGE #2: VIDEO TUTORIAL
For additional guidance on filling out Stage #2 of the UbD template, please view this informal video. The video, hosted by Marianne Frantz, offers a short presentation on adding performance tasks, coding, valid evidence, and rubric grading tools to your UbD template.
STAGE #2 UbD WSET: Level 2 Wines & Spirits
For easy reference, a sample UbD template is attached below. This particular template is being created for the WSET Level 2 wine course by the American Wine School. The template has been completed for Stage #1 and Stage #2 and represents a good working example of backward design.
Wiggins, G., & McTighe, J. (2011). The Understanding by Design Guide to Creating High-Quality Units. Alexandria, Virginia: Association for Supervision and Curriculum Development.
Wiggins, G., & McTighe, J. (2012). The Understanding by Design Guide to Advanced Concepts in Creating and Reviewing Units. Alexandria, Virginia: Association for Supervision and Curriculum Development.