Wednesday, December 12, 2007

REVISING INSTRUCTIONAL MATERIALS

The following are notes from chapter 11 of The Systematic Design of Instruction, sixth edition, by Dick, Carey & Carey.

When performing revisions to your materials, you should consider changes to the content that make the materials more accurate or effective as a learning tool, and you should also consider changes to the procedures employed in using your materials.

The designer has five kinds of data he or she can analyze in order to revise instructional materials: learner characteristics and entry behavior, direct responses to the instruction, learning time, posttest performance, and responses to an attitude questionnaire, if used.

Upon analyzing data from the one-to-one evaluations, the designer should first describe the learners and their performance on any entry-behavior measures. Next, the designer should bring together all the comments and suggestions about the instruction, and then summarize the data associated with the posttest. It is also helpful to develop a table that indicates each student's pre- and posttest scores and total learning time. With all this information in hand, the designer is ready to revise the instruction.
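As a sketch of such a summary table, a small script can tabulate each learner's entry-behavior result, pre- and posttest scores, gain, and learning time. The learner names, scores, and times below are all hypothetical:

```python
# Hypothetical one-to-one evaluation data: entry-behavior result,
# pretest/posttest scores (percent), and total learning time in minutes.
learners = [
    {"name": "Learner A", "entry": "pass", "pre": 40, "post": 85, "minutes": 55},
    {"name": "Learner B", "entry": "pass", "pre": 25, "post": 70, "minutes": 70},
    {"name": "Learner C", "entry": "fail", "pre": 10, "post": 60, "minutes": 90},
]

def summary_table(rows):
    """Format each learner's scores, gain (post minus pre), and learning time."""
    lines = ["Learner    Entry  Pre  Post  Gain  Minutes"]
    for r in rows:
        gain = r["post"] - r["pre"]
        lines.append(
            f'{r["name"]:<10} {r["entry"]:<6} {r["pre"]:>3}  '
            f'{r["post"]:>4}  {gain:>4}  {r["minutes"]:>7}'
        )
    return "\n".join(lines)

print(summary_table(learners))
```

A table like this makes it easy to spot, at a glance, a learner who failed the entry-behavior measure yet still gained, or one whose learning time was far above the rest.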

Based on learner performance, the designer should first try to determine if the rubric or test items were faulty. If the items are satisfactory and the learners performed poorly, then the instruction must be changed. The designer should carefully examine the mistakes made by learners in order to identify the kinds of misinterpretations they are making and therefore the kinds of changes that might be made.

The data from small groups of eight to twenty learners are of greater collective interest than individual interest. The available data often include item performance on the pretest, posttest, response to any attitude questionnaire, learning and testing time, and comments made directly in the materials. Performance on each item must be scored as correct or incorrect. If an item has multiple parts, then each part should be scored and reported separately, so information is not lost.
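To illustrate that last point, here is a minimal sketch (with hypothetical part names) of scoring a multi-part item part by part, so that partial performance is preserved rather than collapsed into a single right/wrong judgment:

```python
# One learner's responses to a three-part item (True = part answered correctly).
responses = {"part_a": True, "part_b": False, "part_c": True}

def score_parts(parts):
    """Return a 1/0 score for each part, plus the number of parts correct."""
    scores = {name: int(correct) for name, correct in parts.items()}
    return scores, sum(scores.values())

part_scores, total_correct = score_parts(responses)
print(part_scores)    # each part reported separately
print(total_correct)  # parts correct out of three
```

Reporting the parts separately lets the designer see that, for example, part_b is the piece of the item the group consistently misses.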

The designer should do an item-by-objective analysis using a table to determine the difficulty of each item for the group, the difficulty of each objective, and the consistency with which the set of items within an objective measures learners' performance on the objective. The item-by-objective table should also provide the data for creating tables to summarize the learners' performance across tests, or even each individual learner's performance. The number of learners who master each objective should increase from pretest to posttest. Data from the tables can also be displayed through various graphing techniques.
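The difficulty and mastery computations behind such a table can be sketched as follows. The item names, the mapping of items to objectives, and the small-group results below are all made up for illustration:

```python
# Hypothetical small-group results: each row is one learner's item scores
# (1 = correct, 0 = incorrect); each item is mapped to one objective.
item_to_objective = {"i1": "obj1", "i2": "obj1", "i3": "obj2", "i4": "obj2"}
results = [
    {"i1": 1, "i2": 1, "i3": 0, "i4": 0},
    {"i1": 1, "i2": 0, "i3": 1, "i4": 1},
    {"i1": 1, "i2": 1, "i3": 1, "i4": 1},
]

def item_difficulty(rows, item):
    """Proportion of the group answering the item correctly."""
    return sum(r[item] for r in rows) / len(rows)

def objective_mastery(rows, mapping, objective, criterion=1.0):
    """Proportion of learners meeting the criterion across an objective's items."""
    items = [i for i, o in mapping.items() if o == objective]
    mastered = sum(
        1 for r in rows
        if sum(r[i] for i in items) / len(items) >= criterion
    )
    return mastered / len(rows)

print(item_difficulty(results, "i1"))                          # every learner correct
print(objective_mastery(results, item_to_objective, "obj2"))   # two of three learners
```

Running the same computations on pretest and posttest data gives the mastery-by-objective comparison discussed next; a low difficulty value on the posttest flags an item, and possibly its instruction, for revision.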

By comparing pretest with posttest scores objective by objective, the designer can assess learners' performance on each particular objective and begin to focus on specific objectives and the related instruction that appears to need revision. It may also be necessary for the designer to consider alternative strategies to use, as well as to revise the materials to make them fit within a particular time frame. Attention should also be paid to the instructional procedures, especially if there were questions about how to proceed from one step to the next.

Tips to remember: The designer should avoid responding too quickly to any single piece of data, and instead should corroborate these data with other data. And just remember that when you make changes through the revision process, you cannot assume that the remaining unchanged instruction will necessarily maintain its initial effectiveness. You may hope that your changes are for the better, but you cannot assume that they always are.

Monday, December 10, 2007

DESIGNING AND CONDUCTING FORMATIVE EVALUATIONS

The following are notes from chapter 10 of The Systematic Design of Instruction, sixth edition, by Dick, Carey & Carey.

Formative evaluation is the process designers use to obtain data for revising their instruction or materials to make them more efficient and effective. It is different from summative evaluation, which refers to the collection of data used to determine the effectiveness of the final version of the instruction.

There are three basic phases of formative evaluation. The first phase is a one-to-one evaluation, the second is a small-group evaluation, and the third is usually a field trial in which the evaluation occurs in the actual performance context.

Here are some samples of questions to ask when performing formative evaluations: Are the materials appropriate for the type of learning outcome? Do the materials include adequate instruction on the subordinate skills, and are these skills sequenced and clustered logically? Are the materials clear and readily understood by representative members of the target group? What is the motivational value of the materials? Can the materials be managed efficiently in the manner they are mediated?

The types of data you may want to collect include test data (entry-behavior, pre-, post-, and performance tests), comments by the learners, the learners' overall reactions to the instruction and their perceptions of its difficulties, the time required for the learners to complete various components of the instruction, and the reactions of the manager or supervisor who has observed the learners using the skills in the performance context.

The purpose of the one-to-one stage is to identify and remove the most obvious errors in the instruction, and to obtain the learners' initial performance indications and reactions to the content. This is accomplished through direct interaction between the designer and individual learners who are representative of the target population. The learners not only go through the instructional materials but also take the tests provided with the materials. Don't be surprised if test items that appear clear to you are totally misinterpreted by the learner. The designer will be evaluating the clarity, the impact, and the feasibility of the instruction. Only a very rough estimate of the learning time can be obtained in this phase of the evaluation. Care should also be taken not to overgeneralize data gathered from only one individual.

Small-group evaluation has the purpose of determining the effectiveness of changes made following the one-to-one evaluation and identifying any remaining learning problems the learners may have. Another purpose of this phase is to determine whether learners can use the instruction without interacting with the instructor. During this phase the instructor can determine the time required for the learners to complete both the instruction and the required performance measures. The small group should consist of eight to twenty learners. If the target population is not homogeneous, the group should include low-, average-, and high-achieving students; learners with various native languages; learners who are familiar with a particular procedure; and young, inexperienced learners as well as mature ones. The instructor should intervene as little as possible in the process.

Questions to ask during the small group phase could be: Was the instruction interesting? Did you understand what you were supposed to learn? Were the materials directly related to the objectives? Were sufficient practice exercises included? Were the practice exercises relevant? Did the tests really measure your knowledge of the objectives?

During the third stage of evaluation, the designer should determine whether the skills that have been taught are retained and used in the performance context, and whether these skills have the desired effect on the organization. The designer should get suggestions from the learners, and those they work with, about how to improve the instruction. In framing questions for this phase, the designer should be specific about which skills are of interest in the evaluation.

One concern in any evaluation of the materials is ensuring that any technical equipment is operating effectively. Also make sure you work with learners in a quiet setting. And be prepared to obtain information indicating that your materials are not as effective as you thought they would be, even after going through an extensive instructional design process.

Saturday, December 8, 2007

DEVELOPING INSTRUCTIONAL MATERIALS

The following are notes from chapter 9 of The Systematic Design of Instruction, sixth edition, by Dick, Carey & Carey.

THE DELIVERY SYSTEM AND MEDIA SELECTIONS

As a natural part of the materials development process, our choices of theoretically best practice will always run into a reality check and require some compromises. Even so, the delivery system and media selections should provide a workable educational product that fits the learning environment.

There are three factors that cause compromise in media and delivery system selection. The first is the availability of existing instructional materials. The second is the production and implementation constraints, which are usually severely underestimated by novice designers who don't realize the expertise, infrastructure, and time requirements that go into in-house production. When faced with constraints, the best step is to back down to simple media formats and produce them well, rather than sticking with complex media formats and producing them poorly. The third factor that compromises media and delivery system selection is the amount of instructor facilitation. Course dialogue (in class or online discussion) gives the learners a perception of more personal experience and feelings of group affiliation, resulting in more positive student evaluations of the course and of the instructor. Since such components as motivating learners, promoting active recall of prerequisites, providing practice with corrective feedback, and promoting transfer could be missing from the online learning experience, it is important to provide an instructor presence in the online environment, or at least join the online experience with face-to-face classroom or work group experiences.

COMPONENTS OF AN EXISTING PACKAGE

Several components make up an instructional package, including instructional materials, assessments and course management information. Instructional materials contain the content that the student will use to achieve the objectives, and any material for enhancing memory and transfer. All instructional materials should be accompanied by objective tests or performance assessments, and these may include both a pretest and a posttest. Course management information consists of an instructor's manual that provides the instructor with an overview of the materials and shows how they might be incorporated into an overall sequence. Special attention should be paid to the ease with which this information can be used by the instructor or course manager.

SELECTING EXISTING INSTRUCTIONAL MATERIALS

When you consider the cost of developing a video or multimedia presentation, it is clearly worth spending several hours examining existing materials to determine whether they meet your needs. Four criteria should be considered when evaluating materials. The first is goal-centered and involves verifying congruence between the content of the materials and the performance objectives. The second criterion is learner-centered: consider the appropriateness of the instructional materials for the target group by looking at the vocabulary used, the learners' background and experience, and so on. The third criterion is learning-centered; this is where you determine whether the existing materials are adequate as they are, or whether they need to be adapted or enhanced prior to use. The last criterion is context-centered, where the context analyses provide the foundation for judging whether existing materials can be adopted as is or adapted for your settings.

THE DESIGNER'S ROLE IN MATERIALS DEVELOPMENT AND INSTRUCTIONAL DELIVERY

When instructors design and develop materials, their role in instructional design is passive, but their role as facilitators is very active, as they monitor and guide the progress of students through the materials. An instructor who selects and adapts materials has an increased role in delivering instruction. When the designer is also the developer and the instructor, the whole process of materials development is rather informal. One advantage the instructor has in delivering all instruction according to the instructional strategy is that the instructor can constantly update and improve the instruction as changes occur in the content. When the designer is neither the developer nor the instructor, there is a need to create a team environment that requires collaboration and communication skills, along with each participant's design and development skills. If possible, the team members should conduct the onsite learner and context analyses themselves to observe a sample of the learners for whom the instruction is being designed.

DEVELOPING INSTRUCTIONAL MATERIALS FOR FORMATIVE EVALUATION

A rough draft of materials allows you to create a quick, low-cost version of your design, so that you have something not only to guide final production but also to take into formative evaluation with subject-matter experts, several learners, or a group of learners. Rapid prototyping, which amounts to building the materials several times, is another technique used to create iterative cycles of formative evaluation and revision that shape the final form of the materials.

It is best to consider the materials you develop as draft copies and expect that they will be reviewed and revised based on feedback from learners, instructors, and subject-matter experts.

For those making their first attempt at instructional design, it is always recommended that they produce self-instructional materials first and later move to instructor-led materials or some combination of both.

Thursday, December 6, 2007

DEVELOPING AN INSTRUCTIONAL STRATEGY

The following are notes from chapter 8 of The Systematic Design of Instruction, sixth edition, by Dick, Carey & Carey.

Developing an instructional strategy basically deals with the way the designer will present the instruction to the learners and the ways he or she will engage them.

There is a huge variety of teaching and learning activities that can be used in terms of instructional strategy, such as group discussions, case studies, computer simulations, lectures, etc.

Instructional strategy can be divided into pieces as follows:

SELECTION OF A DELIVERY SYSTEM

The designer can select from a variety of delivery systems, such as the traditional system of an instructor with a group of learners in a classroom, telecourse by broadcast or videotape, computer-based instruction, web-based instruction, etc. The designer will first consider the learner characteristics, learning objectives and assessment requirements before selecting the best delivery system.

CONTENT SEQUENCING AND CLUSTERING

The first step in developing an instructional strategy is identifying a sequence for the content to be presented. Usually the designer should begin with the lower-level skills and then progress to the higher-level ones. For younger children, it is advisable to keep the content and instruction in small clusters. More mature learners can handle larger clusters of content.

LEARNER COMPONENTS OF INSTRUCTIONAL STRATEGIES

Five major components should be part of an overall instructional strategy: preinstructional activities, content presentation, learner participation, assessment and follow-up activities.

Preinstructional activities consist of motivating the learners, informing them of what they will learn, and ensuring they have the prerequisite knowledge to begin the instruction. Motivation includes gaining and sustaining the attention of the learners, presenting material that is relevant, ensuring that the learners feel confident they can master the material after the instruction, and presenting it in a way that lets them derive satisfaction from the learning experience.

Content presentation should represent the totality of what is to be learned along with relevant examples, such as illustrations, demonstrations, case studies, etc.

Learner participation takes place by providing the learners with opportunities to practice what you want them to be able to do, such as trying out what they are learning at the time they are learning it. Learners should also be provided feedback on their performance.

Assessment consists of entry-behavior tests, pretests, practice tests and posttests.

Follow-up activities should take into consideration the use of memory aids to help learners recall from memory as well as how the performance context will be different from the learning context.

Although instruction may be designed for an intellectual skill, verbal information, a motor skill, or an attitude, the basic learning components of an instructional strategy should be the same.

SELECTION OF MEDIA

Media should be selected for each component. Cost-effectiveness should be taken into consideration, along with choosing the best possible interactive media (a human instructor, computer-based instruction, etc.) in order to obtain responsive feedback. In practice, the selection of media is based on logistical considerations.

Once your instructional strategy is completed, you can begin developing your instruction.

Tuesday, November 27, 2007

Delivery System

When it comes to a delivery system, it is important to question whether the delivery system was chosen because it was the most effective way to foster learning, or whether it was simply what was available.

You should consider the type of instructors you have first. Then look at the strengths and weaknesses of different media or delivery systems. The instructor may not always have to be in the teaching environment.

Studies from the National Training Laboratories in Bethel, Maine show that lecture and reading have low retention rates (5-10%), while practice by doing has a higher retention rate (75%). Immediate application of learning in a real situation and teaching others have the highest retention rates (90%).

Wednesday, October 3, 2007

Analyzing learners and context: a few thoughts

The important question to ask is if there is a real need for instruction.

The best way to conduct a learner analysis is to ask the learners, observe them, and take notes.

See if you can simulate the context. In education, the learning context is usually a classroom. We should be asking: what is the learners' core motivation for being there, what are their prior experiences, and what are their entry behaviors?

As for learning styles, there is not enough research done on the subject, but no learning style precludes learning in a different style.

Your instruction is not for everybody. The more you can focus on your population, the better off you will be.

A few more thoughts on task analysis

A few comments on The Essentials of Instructional Design, by Abbie Brown and Timothy Green.

Task analysis is one of the most critical components of instructional design.

There are four approaches to doing task analysis, but they all have the same goal: to gather information about the content and task to be learned.

You should come up with a document, either using an outline, or a flowchart diagram approach with boxes and diamonds. Diagrams may work better for visual learners. An outline may work better to break down steps into substeps.

How to evaluate your task analysis: ask a professional (a subject-matter expert) to review it, compare it with other information gathered during the instructional design process to see whether content and skills were correctly identified, or conduct a summative evaluation after the instruction has been implemented.