JLBC Four Typical Online Learning Assessment Mistakes




Mercer graded students based on weekly homework assignments (10 percent of the grade), two projects (20 percent of the grade), and a final exam (70 percent of the grade). More than a third of the students got a C or lower on the final and, because the final was such a large percentage of the final grade, they received low grades for the course. Lana didn't understand why students were upset: final grade scores reflected a bell curve, so the range of grades was what she expected. See any assessment problems? (Yep, you should.)
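The arithmetic behind Mercer's scheme can be sketched in a few lines (a hypothetical illustration; only the 10/20/70 weights come from the scenario, the sample scores are invented):

```python
# Hypothetical sketch of Mercer's weighting scheme:
# homework 10%, projects 20%, final exam 70%.
def final_grade(homework: float, projects: float, final_exam: float) -> float:
    """Weighted course grade on a 0-100 scale."""
    return 0.10 * homework + 0.20 * projects + 0.70 * final_exam

# A student who performs well all term but stumbles on the final
# still ends up with a low course grade, because the final
# dominates the weighting.
grade = final_grade(homework=95, projects=90, final_exam=60)
print(round(grade, 1))  # 69.5
```

Note how the 70 percent weight lets a single assessment swamp a full term of strong work, which is exactly the problem the scenario sets up.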

Four typical mistakes

People who build instruction make typical but unfortunate mistakes when designing learning assessments. These mistakes compromise both their competence as designers of instruction and the quality of the instruction they build. They include:

1. Expecting a bell curve

2. The wrong type of assessment

3. Not valid (enough) assessments

4. Poorly written multiple-choice tests

Expecting a bell curve

Benjamin Bloom (1968), a distinguished educational psychologist, proposed that a bell curve model, with most students performing in the middle and small percentages performing very well and very poorly (i.e., a normal or bell curve), is the wrong model of expected outcomes from most instruction. The bell curve model is what might be expected without instruction. Instruction should be specifically designed to provide the instruction, practice,

By Patti Shank, Ph.D., CPT

The goal of learning assessments should be to measure whether actual learning outcomes match desired learning outcomes. Here’s an analogy. Your freezer stops keeping foods frozen, so you call the appliance repair folks. They show up on schedule and charge you exactly what they estimated on the phone. Is that enough information to know if the desired outcome (frozen food) has been achieved? No, of course not.

We use freezers to achieve specific outcomes. We build instruction to achieve specific outcomes as well. Well-written instructional objectives describe the desired outcomes of instruction and are critical to designing good courses and assessments.

A freezer that works means the food stays frozen as expected. Instruction that works means people learn as expected. Adequate learning assessments tell us whether the instruction we built works and provide us with data to adjust our efforts.

We measure whether instruction “works” by seeing if the instruction we build helps people achieve the learning objectives. I’d even argue that we cannot be considered competent builders of instruction if we can’t show that what we built helps learners learn. Some might say that’s a big “duh,” but I’m guessing a fair number of people who build instruction haven’t thought about it.

Here’s a scenario for us to consider. Lana Mercer, a new instructor, has just finished teaching her online course, Introduction to Computer Graphics. Here are the three most critical terminal objectives for this course (these are reasonably well-written, unlike most of the objectives I see, which makes it far easier to determine what assessments are needed):


• Analyze common uses for these computer graphics: 2-D representation and manipulation, 3-D representation and manipulation, animation, and image processing and manipulation.

• Describe methods for defining, modeling, and rendering 2-D and 3-D objects.

• Determine the best tools for defining, modeling, and rendering 2-D and 3-D objects.
