Wednesday, December 12, 2007

REVISING INSTRUCTIONAL MATERIALS

The following are notes from chapter 11 of The Systematic Design of Instruction, sixth edition, by Dick, Carey, and Carey.

When revising your materials, you should consider changes to the content that make the materials more accurate or effective as a learning tool, and you should also consider changes to the procedures employed in using the materials.

The designer has five kinds of data he or she can analyze in order to revise instructional materials: learner characteristics and entry behaviors, direct responses to the instruction, learning time, posttest performance, and responses to an attitude questionnaire, if one was used.

Upon analyzing data from the one-to-one evaluations, the designer should first describe the learners and their performance on any entry-behavior measures. Next, the designer should bring together all the comments and suggestions about the instruction, and then summarize the data associated with the posttest. It is also helpful to develop a table that shows each student's pretest and posttest scores and total learning time. With all this information in hand, the designer is ready to revise the instruction.
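
To make that concrete, here is a minimal sketch of such a summary table, written in Python. This is my own illustration, not something from Dick and Carey, and the learners, scores, and times are invented.

```python
# A minimal sketch (mine, not from Dick & Carey) of a one-to-one data
# summary table; learner names, scores, and times are hypothetical.

learners = [
    # (learner, pretest %, posttest %, learning time in minutes)
    ("Learner A", 40, 85, 55),
    ("Learner B", 25, 70, 62),
    ("Learner C", 55, 95, 48),
]

print(f"{'Learner':<12}{'Pretest':>8}{'Posttest':>10}{'Time (min)':>12}")
for name, pre, post, minutes in learners:
    print(f"{name:<12}{pre:>8}{post:>10}{minutes:>12}")
```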

Based on learner performance, the designer should first try to determine whether the rubric or test items were faulty. If the items are satisfactory and the learners performed poorly, then the instruction must be changed. The designer should carefully examine the mistakes learners made in order to identify the kinds of misinterpretations they are making, and therefore the kinds of changes that might help.

Data from small groups of eight to twenty learners are of greater interest collectively than individually. The available data often include item performance on the pretest and posttest, responses to any attitude questionnaire, learning and testing time, and comments made directly in the materials. Performance on each item must be scored as correct or incorrect. If an item has multiple parts, each part should be scored and reported separately so that information is not lost.

The designer should do an item-by-objective analysis, using a table to determine the difficulty of each item for the group, the difficulty of each objective, and the consistency with which the set of items within an objective measures learners' performance on that objective. The item-by-objective table should also provide the data for creating tables that summarize the learners' performance across tests, or even each individual learner's performance. The number of learners who master each objective should increase from pretest to posttest. Data from the tables can also be displayed through various graphing techniques.
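
As a rough illustration of what such an analysis might look like in code (a sketch under my own assumptions, not the book's procedure: the items, objectives, and responses are invented, and "difficulty" here is simply the proportion of learners answering correctly):

```python
# A sketch of an item-by-objective analysis. The item-to-objective mapping
# and the response data are hypothetical, not taken from the book.

# Posttest responses: 1 = correct, 0 = incorrect; one entry per learner.
responses = {
    "item1": [1, 1, 0, 1, 1],
    "item2": [1, 0, 0, 1, 1],
    "item3": [0, 0, 1, 0, 1],
}
objectives = {"objective1": ["item1", "item2"], "objective2": ["item3"]}

def proportion_correct(scores):
    """Share of learners answering correctly (higher means easier)."""
    return sum(scores) / len(scores)

for obj, items in objectives.items():
    item_diffs = {item: proportion_correct(responses[item]) for item in items}
    obj_diff = sum(item_diffs.values()) / len(item_diffs)
    # Items within one objective whose values diverge sharply may be
    # measuring the objective inconsistently and deserve a closer look.
    print(obj, f"objective difficulty = {obj_diff:.2f}", item_diffs)
```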

By comparing pretest with posttest scores objective by objective, the designer can assess learners' performance on each particular objective and begin to focus on the specific objectives, and the related instruction, that appear to need revision. It may also be necessary for the designer to consider alternative strategies, as well as to revise the materials to make them fit within a particular time frame. Attention should also be paid to the instructional procedures, especially if there were questions about how to proceed from one step to the next.
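
Here is a small sketch of that pretest-to-posttest comparison; the mastery counts below are invented purely for illustration.

```python
# Hypothetical counts of learners (out of 5) who mastered each objective.
mastery = {
    "objective1": {"pretest": 1, "posttest": 4},
    "objective2": {"pretest": 2, "posttest": 2},
}

for obj, counts in mastery.items():
    gain = counts["posttest"] - counts["pretest"]
    flag = "  <- revisit the related instruction" if gain <= 0 else ""
    print(f"{obj}: {counts['pretest']} -> {counts['posttest']} mastered{flag}")
```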

Tips to remember: The designer should avoid responding too quickly to any single piece of data and should instead corroborate it with other data. And remember that when you make changes through the revision process, you cannot assume that the remaining, unchanged instruction will maintain its initial effectiveness. You may hope that your changes are for the better, but you cannot assume that they always are.

Monday, December 10, 2007

DESIGNING AND CONDUCTING FORMATIVE EVALUATIONS

The following are notes from chapter 10 of The Systematic Design of Instruction, sixth edition, by Dick, Carey, and Carey.

Formative evaluation is the process designers use to obtain data for revising their instruction, or the materials they use, to make them more efficient and effective. It differs from summative evaluation, which refers to the collection of data used to determine the effectiveness of the final version of the instruction.

There are three basic phases of formative evaluation. The first phase is a one-to-one evaluation, the second is a small-group evaluation, and the third is usually a field trial, in which the evaluation occurs in the performance context.

Here are some sample questions to ask when performing formative evaluations: Are the materials appropriate for the type of learning outcome? Do the materials include adequate instruction on the subordinate skills, and are these skills sequenced and clustered logically? Are the materials clear and readily understood by representative members of the target group? What is the motivational value of the materials? Can the materials be managed efficiently in the manner in which they are mediated?

The types of data you may want to collect include test data (entry-behavior, pretest, posttest, and performance tests), comments by the learners, the learners' overall reactions to the instruction and their perceptions of its difficulties, the time required for the learners to complete various components of the instruction, and the reaction of the manager or supervisor who has observed the learners using the skills in the performance context.

The purpose of the one-to-one stage is to identify and remove the most obvious errors in the instruction and to obtain the learners' initial performance indications and reactions to the content. This is accomplished through direct interaction between the designer and individual learners who are representative of the target population. The learners not only go through the instructional materials but also take the tests provided with them. Don't be surprised if test items that appear clear to you are totally misinterpreted by the learner. The designer will be evaluating the clarity, the impact, and the feasibility of the instruction. Only a very rough estimate of learning time can be obtained in this phase of the evaluation. Take care not to overgeneralize data gathered from a single individual.

Small-group evaluation has two purposes: to determine the effectiveness of the changes made following the one-to-one evaluation and to identify any remaining learning problems the learners may have. Another purpose of this phase is to determine whether learners can use the instruction without interacting with the instructor. During this phase the instructor can determine the time required for the learners to complete both the instruction and the required performance measures. The small group should consist of eight to twenty learners. If the target population is not homogeneous, the group should include low-, average-, and high-achieving students; learners with various native languages; learners who are and are not familiar with the particular procedure; and young, inexperienced learners as well as mature ones. The instructor should intervene as little as possible in the process.

Questions to ask during the small group phase could be: Was the instruction interesting? Did you understand what you were supposed to learn? Were the materials directly related to the objectives? Were sufficient practice exercises included? Were the practice exercises relevant? Did the tests really measure your knowledge of the objectives?

During the third stage of evaluation, the designer should determine whether the skills that have been taught are retained and used in the performance context, and whether these skills have the desired effect on the organization. The designer should get suggestions from the learners, and from those they work with, about how to improve the instruction. In framing questions for this phase, the designer should be specific about which skills are of interest in the evaluation.

One concern in any evaluation of the materials is to ensure that any technical equipment is operating effectively. Also make sure you work with learners in a quiet setting. And be prepared to obtain information indicating that your materials are not as effective as you thought they would be, even after going through an extensive instructional design process.

Saturday, December 8, 2007

DEVELOPING INSTRUCTIONAL MATERIALS

The following are notes from chapter 9 of The Systematic Design of Instruction, sixth edition, by Dick, Carey, and Carey.

THE DELIVERY SYSTEM AND MEDIA SELECTIONS

As a natural part of the materials development process, our choices of the theoretically best practice will always run into a reality check and require some compromises; however, the delivery system and media selections should still provide a workable educational product that fits the learning environment.

There are three factors that force compromise in media and delivery system selection. The first is the availability of existing instructional materials. The second is production and implementation constraints, which are usually severely underestimated by novice designers who don't realize the expertise, infrastructure, and time that go into in-house production. When faced with constraints, the best step is to back down to simple media formats and produce them well, rather than sticking with complex media formats and producing them poorly. The third factor is the amount of instructor facilitation. Course dialogue (in-class or online discussion) gives learners the perception of a more personal experience and feelings of group affiliation, resulting in more positive student evaluations of the course and of the instructor. Since components such as motivating learners, promoting active recall of prerequisites, providing practice with corrective feedback, and promoting transfer could be missing from the online learning experience, it is important to provide an instructor presence in the online environment, or at least to combine the online experience with face-to-face classroom or work-group experiences.

COMPONENTS OF AN EXISTING PACKAGE

Several components make up an instructional package: instructional materials, assessments, and course management information. Instructional materials contain the content the student will use to achieve the objectives, along with any material for enhancing memory and transfer. All instructional materials should be accompanied by objective tests or performance assessments, which may include both a pretest and a posttest. Course management information consists of an instructor's manual that gives the instructor an overview of the materials and shows how they might be incorporated into an overall sequence. Special attention should be paid to the ease with which this information can be used by the instructor or course manager.

SELECTING EXISTING INSTRUCTIONAL MATERIALS

When you consider the cost of developing a video or multimedia presentation, it is clearly worth spending several hours examining existing materials to determine whether they meet your needs. Four criteria should be used to evaluate materials. The first is goal-centered and involves verifying congruence between the content in the materials and the performance objectives. The second criterion is learner-centered: consider the appropriateness of the instructional materials for the target group by looking at the vocabulary used, the learners' background and experience, and so on. The third criterion is learning-centered; this is where you determine whether the existing materials are adequate as they are or need to be adapted or enhanced prior to use. The last criterion is context-centered, where the context analyses provide the foundation for judging whether existing materials can be adopted as is or adapted for your settings.

THE DESIGNER'S ROLE IN MATERIALS DEVELOPMENT AND INSTRUCTIONAL DELIVERY

When instructors design and develop materials, their role in instructional delivery is passive, but their role as facilitators is very active, as they monitor and guide the progress of students through the materials. An instructor who selects and adapts materials has an increased role in delivering instruction. When the designer is also the developer and the instructor, the whole process of materials development is rather informal. One advantage the instructor has in delivering all the instruction according to the instructional strategy is that he or she can constantly update and improve the instruction as changes occur in the content. When the designer is neither the developer nor the instructor, there is a need to create a team environment that requires collaboration and communication skills, along with each participant's design and development skills. If possible, the team members should conduct the on-site learner and context analyses themselves to observe a sample of the learners for whom the instruction is being designed.

DEVELOPING INSTRUCTIONAL MATERIALS FOR FORMATIVE EVALUATION

A rough draft of the materials allows you to create a quick, low-cost version of your design, so that you have something not only to guide final production but also to take into formative evaluation with subject-matter experts, several learners, or a group of learners. Rapid prototyping, the idea of building something several times in iterative cycles of formative evaluation and revision, is another technique used to shape the final form of the materials.

It is best to consider the materials you develop as draft copies and expect that they will be reviewed and revised based on feedback from learners, instructors, and subject-matter experts.

For those making their first attempt at instructional design, it is always recommended that they produce self-instructional materials first and later move to instructor-led materials or some combination of the two.

Thursday, December 6, 2007

DEVELOPING AN INSTRUCTIONAL STRATEGY

The following are notes from chapter 8 of The Systematic Design of Instruction, sixth edition, by Dick, Carey, and Carey.

Developing an instructional strategy basically deals with the way the designer will present the instruction to the learners and the means he or she will use to engage them.

There is a huge variety of teaching and learning activities that can be used as part of an instructional strategy, such as group discussions, case studies, computer simulations, lectures, etc.

Instructional strategy can be divided into pieces as follows:

SELECTION OF A DELIVERY SYSTEM

The designer can select from a variety of delivery systems, such as the traditional system of an instructor with a group of learners in a classroom, a telecourse by broadcast or videotape, computer-based instruction, web-based instruction, and so on. The designer should first consider the learner characteristics, learning objectives, and assessment requirements before selecting the best delivery system.

CONTENT SEQUENCING AND CLUSTERING

The first step in developing an instructional strategy is identifying a sequence for the content to be presented. Usually the designer should begin with the lower-level skills and then progress to the higher-level ones. For younger children, it is advisable to keep the content and instruction in small clusters. More mature learners can handle larger clusters of content.

LEARNING COMPONENTS OF INSTRUCTIONAL STRATEGIES

Five major components should be part of an overall instructional strategy: preinstructional activities, content presentation, learner participation, assessment and follow-up activities.

Preinstructional activities consist of motivating the learners, informing them of what they will learn, and ensuring they have the prerequisite knowledge to begin the instruction. Motivation includes gaining and sustaining the learners' attention, presenting material that is relevant, building the learners' confidence that they can master the material, and presenting it in a way that lets them derive satisfaction from the learning experience.

Content presentation should represent the totality of what is to be learned along with relevant examples, such as illustrations, demonstrations, case studies, etc.

Learner participation means providing the learners with opportunities to practice what you want them to be able to do, trying out what they are learning at the time they are learning it. Learners should also be given feedback on their performance.

Assessment consists of entry-behavior tests, pretests, practice tests and posttests.

Follow-up activities should take into consideration the use of memory aids to help learners recall what they have learned, as well as how the performance context will differ from the learning context.

Although instruction may be designed for an intellectual skill, verbal information, a motor skill, or an attitude, the basic learning components of an instructional strategy should be the same.

SELECTION OF MEDIA

Media should be selected for each component. Cost-effectiveness should be taken into consideration, along with choosing the best possible interactive medium (human instructor, computer-based instruction, etc.) in order to obtain responsive feedback. In practice, the selection of media is often based on logistical considerations.

Once your instructional strategy is completed, you can begin developing your instruction.

Tuesday, November 27, 2007

Delivery System

When it comes to a delivery system, it is important to ask whether the delivery system was chosen because it was the most effective way to foster learning, or simply because it was what was available.

You should first consider the type of instructors you have. Then look at the strengths and weaknesses of different media and delivery systems. The instructor may not always have to be in the teaching environment.

Studies from the National Training Laboratories in Bethel, Maine show that lecture and reading have low retention rates (5-10%). Practice by doing has a higher retention rate (75%). Immediate application of learning in a real situation, and teaching others, have the highest retention rates (90%).

Wednesday, October 3, 2007

Analyzing learners and context: a few thoughts

The important question to ask is whether there is a real need for instruction.

The best way to conduct a learner analysis is to ask the learners, observe them, and take notes.

See if you can simulate the context. In education, the learning context is usually a classroom. We should be asking: What is the learners' core motivation for being there? What are their prior experiences and entry behaviors?

As for learning styles, there is not enough research done on the subject, but no learning style precludes learning in a different style.

Your instruction is not for everybody. The more you can focus on your population, the better off you will be.

A few more thoughts on task analysis

A few comments on the article The Essentials of Instructional Design, by Abbie Brown and Timothy Green.

Task analysis is one of the most critical components of instructional design.

There are four approaches to doing task analysis, but they all have the same goal: to gather information about the content and task to be learned.

You should come up with a document, using either an outline or a flowchart approach with boxes and diamonds. Diagrams may work better for visual learners. An outline may work better for breaking steps down into substeps.

How to evaluate your task analysis: ask a professional (a subject-matter expert) to review it, compare it with other information gathered during the instructional design process to see whether the content and skills were correctly identified, or conduct a summative evaluation after the instruction has been implemented.

Monday, October 1, 2007

Technology (educational and instructional)

In an article entitled Educational technology: A question of meaning, by Cass Gentry, I was able to get some ideas of what technology means, and a better picture of what people mean by instructional and educational technology.

Contrary to what many people might think, the word technology does not mean the use of machines. It refers mostly to a technique that applies scientific knowledge. The machine and its applications are made possible by technique. Technology also deals with processes, systems, and management and control mechanisms.

One important point to remember is that we should not use technology simply because it is available. Processes need to be thought over and improved before technology is applied.

Educational technology refers to the methodology and set of techniques used to apply instructional principles.

Instructional technology can be viewed as hardware, as well as the application of the behavioral sciences' research findings to solve the problems of instruction.

I hope this is a little helpful...

Tuesday, September 25, 2007

A few notes on task analysis

One of the reasons we do task analysis is to take the ordinary and common and examine it. The first thing we need to do is ask what the students already know, and what they need to know, so the task can be learned. We cannot assume the students already know the subject to be taught. One thing to remember is that, although the analysis may be a little complex, in the instructional phase we need to make the presentation simple.

We need to ask questions and break the task down into steps and sub-steps.

We need to establish the entry behaviors, or what the students must be able to do at the entry level. A question to ask when determining whether something is an entry behavior is whether it is worth your time to test it. An entry behavior is different from a general characteristic. A general characteristic is something found across the entire population you are working with, and it does not relate directly to the task.

Here are the steps we use in task analysis (a small sketch of one way to record the breakdown follows the list):

1- Define your goal (write a sentence)
2- Do a high-level goal analysis (5 to 15 steps)
3- Break it down into smaller steps
4- Analyze your learners and their context
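
Here is a small sketch, in Python, of one way to record such a breakdown. The goal and steps are a made-up example, not something from the class:

```python
# Purely illustrative: recording a goal analysis as nested steps and substeps.
task_analysis = {
    "goal": "Change a flat tire",
    "steps": [
        {"step": "Secure the vehicle",
         "substeps": ["Park on level ground", "Apply the parking brake"]},
        {"step": "Remove the flat tire",
         "substeps": ["Loosen the lug nuts", "Jack up the car",
                      "Remove the nuts and the tire"]},
        {"step": "Mount the spare",
         "substeps": ["Seat the spare", "Hand-tighten the nuts",
                      "Lower the car and tighten fully"]},
    ],
}

# Print the breakdown as an outline, the document format suggested earlier.
for i, step in enumerate(task_analysis["steps"], start=1):
    print(f"{i}. {step['step']}")
    for j, sub in enumerate(step["substeps"], start=1):
        print(f"   {i}.{j} {sub}")
```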

Wednesday, September 19, 2007

A little history on Instructional Design

I had the opportunity to read a good article on the history of Instructional Design. The article is entitled, A brief history of instructional development, by Sharon Shrock (1995). I will list the reference at the end of my writing. So here is a little summary of some of the ideas I got from her article.

Shrock spans the history of instructional design from the 1920's to the 1980's.

According to Shrock, before the 1920's the prevalent idea was that people could improve their mental performance simply by studying certain disciplines, "in the way that calisthenics improve muscle functioning" (Shrock, 1995, p. 12). The work of a researcher named Thorndike, from Columbia University, started giving birth to the idea that instruction should be based on prespecified, socially useful goals. Thorndike was also a strong advocate of educational measurement (cited in Snelbecker, 1974).

In the 1920's the idea of the mind as a muscle started giving way to other thoughts. Here is where we see the roots of job and task analysis, with researchers such as Franklin Bobbitt (1918) advocating that "the goals for schooling could be derived from an objective analysis of those skills necessary for successful living" (Shrock, 1995, p. 13). Here we start seeing a connection between outcomes and instruction. Several other researchers of the time contributed their ideas. Mary Ward and Frederick Burk advocated that learners should be able to progress at their own pace with little direction from their teachers. Washburne created a plan for public schools making use of self-paced, self-instructional materials that allowed students to take a self-administered test to see if they were ready for testing by the teacher. Another plan, the Dalton Plan, introduced the idea of contracts, whereby students would agree to learn something at their own pace, and only after they learned what they had agreed on could they move on to a more advanced lesson. All these plans of the 1920's emphasized individualized instruction and mastery learning (Shrock, 1995, p. 13).

The Great Depression slowed the development of instructional design in the 1930's. However, it is here that we see the birth of what we would today call formative evaluation. Ralph Tyler did a study to see whether students completing an alternative high school curriculum could have more success in college. The study showed that objectives could be clarified in terms of expected student behaviors. The term formative evaluation comes from the introduction of these objectives and their assessment, which were used to revise and refine the new curricula for the students (Shrock, 1995, p. 14).

With the advent of World War II in the 1940's, we see a rapid development of mediated instruction. The military became a good example of what education could accomplish with a well-funded research and development effort. Here we see the creation of military training films, and a new role, that of the technical expert and producer, emerged as distinct from that of the subject-matter expert (Shrock, 1995, p. 15).

The 1950's saw the influence of Skinner's research into operant conditioning and animal learning, which led him to suggest the use of controlled reinforcement for desired behaviors. Here is where we see the emergence of programmed instruction, which consisted of "clearly stated objectives, small frames of instruction, self-pacing and immediate feedback regarding the correctness of the response" (Shrock, 1995, p. 15). The term task analysis was also first used during this period, by Air Force personnel.

By the 1960's, the essence of what we know as instructional design today was already present, largely because of the support the federal government gave to the field. It was during this time that the field of audiovisual instruction gained momentum.

The 1970's are viewed as a time when numerous ID models proliferated. One important addition to the process of instructional design that took place at this time was the addition of needs assessment (Shrock, 1995, p. 17).

In the 1980's we see the advent of microcomputers and the proliferation of instructional design in businesses and other non-school agencies. The use of microcomputers facilitated the application of cognitive psychology and knowledge engineering strategies, broadening the field's theoretical and analytical bases (Shrock, 1995, p. 18).

From the 1990's to the present day, we find the same themes presented here, but in much more complex and sophisticated forms (Shrock, 1995, p. 18).

So here was a short summary of the history. Feel free to make any comments...

Reference

Shrock, S. (1995). A brief history of instructional development. In Instructional technology: Past, present and future (2nd ed., pp. 11-18). Englewood, CO: Libraries Unlimited.

Rick's Instructional Design Class

As the title indicates, I have created this blog to register some of my reflections on what I've been learning from my Instructional Design class. Here I will add a little bit of the history of instructional design, a few thoughts on technology and what instructional designers do, what I've been learning in class, and what I may be able to do with the skills and knowledge I get from the class. So here goes...