Beginner Basics > e-Learning Alphabet Soup: A Guide to Terms
by Kevin Kruse

WHAT IS E-LEARNING?

e-Learning can be a confusing topic, in part because of the alphabet soup of acronyms, technology-related buzzwords, overlapping definitions, variety of delivery options, and the converging histories of the two disciplines of technology and training. In the current marketplace, what most people really mean when they use the term e-learning (and its many synonyms) is Web-based training -- but we'll see that it really means much more.

e-Learning is really nothing more than using some form of technology to deliver training and other educational materials. It is the latest, in-vogue, all-inclusive term for training delivered by a number of means. In the past, these have included mainframe computers, floppy diskettes, multimedia CD-ROMs, and interactive videodisks. Most recently, Web technologies (both Internet and Intranet delivery) have become the preferred delivery options. In the near future, e-learning will also include training delivered on PDAs (e.g., Palm Pilots) and even via wireless devices like your cell phone. This new, mobile form of education is called, predictably enough, m-learning.

Other Terms Associated with e-Learning

Understanding what is and what isn't e-learning can be confusing because of the wealth of different terms that exist to describe the same thing. Most people prefer the word learning to training ("dogs are trained, people learn") and use technology-based learning (TBL) or "e-learning" instead of technology-based training (TBT). Other commonly used terms include computer-based training (CBT), computer-based learning (CBL), computer-based instruction (CBI), computer-based education (CBE), Web-based training (WBT), Internet-based training (IBT), Intranet-based training (also IBT), and any number of others.
Some of these, like Web-based training, can be seen as specific subsets of e-learning, while others, notably computer-based training, are less specific. Other confusion arises from technical definitions that differ from their popular use. For example, the terms CBT, CBI, and CBL are sometimes used generically to refer to all types of e-learning, but are more commonly used to describe older disk-based training. A term beginning with the word computer frequently, but not always, refers to interactive tutorials that are distributed on floppy diskettes. The term multimedia training is usually used to describe training delivered via CD-ROM. This rule of thumb is complicated by the fact that advances in Internet technology now make it possible for network-based training to deliver audio and video elements as well.

Browser-based training is the term used to describe courseware that requires a Web browser to access, but may in fact be running from the Internet or a CD-ROM. In fact, some training programs will pull content from both a Web site and a CD-ROM. These courses are sometimes called hybrids, or hybrid CD-ROMs.

Distance learning, or distance education, are other commonly used terms. They accurately describe most types of e-learning, but are most often used to describe instructor-led, Web-based education -- for either corporate training or college classes.

To further complicate matters, some theorists divide e-learning into three distinct branches: computer-aided instruction (CAI), computer-managed instruction (CMI), and computer-supported learning resources (CSLR). The first term, CAI, encompasses the portion of a given e-learning product that provides the instruction, such as the tutorials, simulations, and exercises. The second term, CMI, refers to the testing, record-keeping, and study-guidance functions of an e-learning product. The last term, CSLR, encompasses the communication, database, and performance-support aspects of e-learning.
Although these distinctions can prove useful in academic research and discussion, it is enough for most of us to know that they exist and that they all refer to parts of the greater whole, e-learning.

Finally, when it comes to course and student management, the newest descriptor is the learning management system (LMS). LMSs are typically Web-based programs that are used to enroll students, assign and launch courses, and track student progress and test scores. A close cousin to the LMS is the LCMS, which stands for Learning Content Management System. An LCMS manages chunks of reusable learning objects, known as RLOs. For more detailed definitions of the ever-changing jargon of e-learning, visit The World's Biggest e-Learning Glossary.

----------------------------------------

Beginner Basics > How to Write an e-Learning RFP
by Kevin Kruse

Training managers are often surprised when they receive vendors' proposals and see the wide disparity in prices and proposed solutions. One training manager at a pharmaceutical company received five proposals for the development of a CD-ROM to teach sales representatives about a new drug. The low bidder quoted $28,000 while the high bidder came in at $380,000 (the winning proposal was for $78,000). What explains this huge disparity? Quite simply, the training manager was not specific enough in his request to receive comparable and valid responses from the vendors.

Key Points in a Model RFP

A fair amount of analysis and planning needs to be done by the client in order to provide an appropriately specific request to a technology-based training vendor. Ideally, a needs analysis and high-level design document should be created prior to sending out RFPs to vendors. Sometimes one vendor is hired to conduct the needs analysis, outline the learning objectives, and produce a high-level design document.
Then the same vendor or others are asked to propose how to create the actual solution. Such detailed front-end work helps ensure that the end product will be an effective training tool delivered on time and on budget.

Vendors should be given at least two to three weeks to complete their proposals. Some managers believe that mandating a quick turnaround on a proposal -- three days, for example -- tests how professional and committed the vendors are. The theory is that only the best companies will be able to respond on time. In reality, the best companies are very thorough with their proposed solutions and very busy with existing clients. The companies more likely to respond to quick-turnaround RFPs are those that are desperate for new business, or are overstaffed for their current workload. A quality company might submit a proposal that is 20 to 50 pages long, with detailed design strategies, sample screen images, and perhaps even a return-on-investment analysis. The extra information you receive from such a company will be worth the wait.

The Components of an RFP

A request for proposals (RFP) is a document that explains the training need and provides details about the size and scope of the project. A complete RFP should include:

* Background on the student population.
* Outline of the content or learning objectives to be covered.
* Estimate of the total amount of learning time the finished program should include.
* Samples of any existing subject matter or a description of available subject matter expertise.
* Description of the delivery technology, whether CD-ROM or Web-based.
* Description of the types of media to be used, such as whether audio or video will be included.
* Clear requests for vendor background information.

(See the e-Learning Guru Toolbox for a sample RFP document.)
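The component list above is concrete enough to double as a completeness check before an RFP goes out the door. The sketch below is illustrative only -- the section names, the `missing_sections` helper, and the sample draft are assumptions for demonstration, not part of any standard RFP format:

```python
# Illustrative sketch: verify that a draft RFP covers every component
# listed above. Section names are hypothetical, not a formal standard.
REQUIRED_SECTIONS = [
    "student population background",
    "content or learning objectives",
    "estimated learning time",
    "existing subject matter / sme availability",
    "delivery technology (cd-rom or web)",
    "media types (audio, video)",
    "vendor background information requested",
]

def missing_sections(draft_sections):
    """Return the required RFP components absent from a draft."""
    provided = {s.strip().lower() for s in draft_sections}
    return [s for s in REQUIRED_SECTIONS if s not in provided]

# A hypothetical draft that covers only three of the seven components.
draft = [
    "Student population background",
    "Content or learning objectives",
    "Delivery technology (CD-ROM or Web)",
]
for gap in missing_sections(draft):
    print("Missing:", gap)
```

A vendor-facing RFP is of course a prose document; the point is only that the seven components are specific enough to be checked mechanically before the document is sent out.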
----------------------------------------

Beginner Basics > Evaluating e-Learning: Introduction to the Kirkpatrick Model
by Kevin Kruse

The final step in the ADDIE model is a summative evaluation, in which you measure how effectively the training program accomplished its stated goals. This step in the training process is usually ignored because of the added time and cost required. Training departments with limited budgets often assume new programs are effective and put dollars that should go into evaluation into the next program. However, as senior executives demand more accountability from training efforts, interest is certain to increase in measuring and reporting results.

The Kirkpatrick Model for Summative Evaluation

In 1975, Donald Kirkpatrick first presented a four-level model of evaluation that has become a classic in the industry:

* Level One: Reaction
* Level Two: Learning
* Level Three: Behavior
* Level Four: Results

These levels can be applied to technology-based training as well as to more traditional forms of delivery. Modified labels and descriptions of these steps of summative evaluation follow.

Level One: Students' Reaction

In this first level or step, students are asked to evaluate the training after completing the program. These are sometimes called smile sheets or happy sheets because in their simplest form they measure how well students liked the training. However, this type of evaluation can reveal valuable data if the questions asked are more complex. For example, a survey similar to the one used in the formative evaluation could also be used with the full student population. This questionnaire moves beyond how well the students liked the training to questions about:

* The relevance of the objectives.
* The ability of the course to maintain interest.
* The amount and appropriateness of interactive exercises.
* The ease of navigation.
* The perceived value and transferability to the workplace.

With technology-based training, the survey can be delivered and completed online, and then printed or e-mailed to a training manager. Because this type of evaluation is so easy and cheap to administer, it is conducted in most organizations.

Level Two: Learning Results

Level Two in the Kirkpatrick model measures learning results. In other words, did the students actually learn the knowledge, skills, and attitudes the program was supposed to teach? To show achievement, have students complete a pre-test and a post-test, making sure that test items or questions are truly written to the learning objectives. By summarizing the scores of all students, trainers can accurately see the impact the training intervention had. This type of evaluation is not as widely conducted as Level One, but is still very common.

Level Three: Behavior in the Workplace

Students typically score well on post-tests, but the real question is whether any of the new knowledge and skills are retained and transferred back on the job. Level Three evaluations attempt to answer whether students' behaviors actually change as a result of new learning. Ideally, this measurement is conducted three to six months after the training program. By allowing some time to pass, students have the opportunity to implement new skills, and retention rates can be checked. Observation surveys, sometimes called behavioral scorecards, are used. Surveys can be completed by the student, the student's supervisor, individuals who report directly to the student, and even the student's customers. For example, survey questions evaluating a sales training program might include:

* Did the representative open each customer dialogue with a product benefit statement, followed by a request to proceed?
* Was the representative able to analyze and describe to you the category of customers' objections as either valid, misinformation, or smokescreen?
* Did the representative use the appropriate model answer in response to each objection?
* Did the representative close each sales call with a request for purchase?
* If the prospect did not buy anything, did the representative end the call with specific future action steps?
* Did the representative complete call history records that include summaries of who, what, where, when, and why?

Level Four: Business Results

The fourth level in this model is to evaluate the business impact of the training program. The only scientific way to isolate training as a variable would be to identify a representative control group within the larger student population, then roll out the training program, complete the evaluation, and compare the results against a business evaluation of the non-trained group. Unfortunately, this is rarely done because of the difficulty of gathering the business data and the complexity of isolating the training intervention as a unique variable. However, even anecdotal data is worth capturing. Below are sample training programs and the type of business impact data that can be measured.

* Sales training. Measure change in sales volume, customer retention, length of sales cycle, and profitability on each sale after the training program has been implemented.
* Technical training. Measure reduction in calls to the help desk; reduced time to complete reports, forms, or tasks; or improved use of software or systems.
* Quality training. Measure a reduction in the number of defects.
* Safety training. Measure reduction in the number or severity of accidents.
* Management training.
Measure increase in engagement levels of direct reports.

----------------------------------------

Beginner Basics > The Magic of Learner Motivation: The ARCS Model
by Kevin Kruse

Motivation is the most overlooked aspect of instructional strategy, and perhaps the most critical element needed for employee-learners. Even the most elegantly designed training program will fail if the students are not motivated to learn. Without a desire to learn on the part of the student, retention is unlikely. Many students in a corporate setting who are forced to complete training programs are motivated only to "pass the test." Designers must strive to create a deeper motivation in learners for them to learn new skills and transfer those skills back into the work environment.

As a first step, instructional designers should not assume they understand the target audience's motivation. To analyze needs, the designer should ask prospective learners questions such as:

* What would the value be to you from this type of program?
* What do you hope to get out of this program?
* What are your interests in this topic?
* What are your most pressing problems?

The answers to these types of questions are likely to provide insight into learner motivation, as well as desirable behavioral outcomes.

Keller's ARCS Model for Motivation

John Keller synthesized existing research on psychological motivation and created the ARCS model (Keller, 1987). ARCS stands for Attention, Relevance, Confidence, and Satisfaction. This model is not intended to stand apart as a separate system for instructional design, but can be incorporated within Gagne's events of instruction.

Attention

The first and single most important aspect of the ARCS model is gaining and keeping the learner's attention, which coincides with the first step in Gagne's model.
Keller's strategies for attention include sensory stimuli (as discussed previously), inquiry arousal (thought-provoking questions), and variability (variance in exercises and use of media).

Relevance

Attention and motivation will not be maintained, however, unless the learner believes the training is relevant. Put simply, the training program should answer the critical question, "What's in it for me?" Benefits should be clearly stated. For a sales training program, the benefit might be to help representatives increase their sales and personal commissions. For a safety training program, the benefit might be to reduce the number of workers getting hurt. For a software training program, the benefit to users could be to make them more productive or to reduce their frustration with an application. For a healthcare program, the benefit might be teaching doctors how to treat certain patients.

Confidence

The confidence aspect of the ARCS model is required so that students feel they can succeed if they put a good-faith effort into the program. If they think they are incapable of achieving the objectives, or that it will take too much time or effort, their motivation will decrease. In technology-based training programs, students should be given estimates of the time required to complete lessons, or a measure of their progress through the program.

Satisfaction

Finally, learners must obtain some type of satisfaction or reward from the learning experience. This can be in the form of entertainment or a sense of achievement. A self-assessment game, for example, might end with an animation sequence acknowledging the player's high score. A passing grade on a post-test might be rewarded with a completion certificate. Other forms of external reward would include praise from a supervisor, a raise, or a promotion. Ultimately, though, the best way for learners to achieve satisfaction is for them to find their new skills immediately useful and beneficial on the job.
The success or failure of any e-learning initiative can be closely correlated to learner motivation. Remember the ARCS model when designing any program.

----------------------------------------

Beginner Basics > Introduction to Instructional Design and the ADDIE Model
by Kevin Kruse

What is Instructional Systems Design?

The most widely used methodology for developing new training programs is called Instructional Systems Design (ISD). It is also known as Instructional Systems Design & Development (ISDD), the Systems Approach to Training (SAT), or just Instructional Design (ID). This approach provides a step-by-step system for the evaluation of students' needs, the design and development of training materials, and the evaluation of the effectiveness of the training intervention. ISD evolved from post-World War II research in the United States military to find a more effective and manageable way to create training programs. These efforts led to early ISD models that were developed and taught in the late 1960s at Florida State University. Today, Walter Dick and Lou Carey are widely viewed as the torchbearers of the methodology, with their authoritative book, The Systematic Design of Instruction (Dick and Carey).

Why Use a Systems Approach?

A system is any set of components that work together to achieve a specified outcome or goal. Think of the cruise control system on your car. You set the desired speed (or goal) and the cruise control sets the gas injection to the proper level. An important aspect of any system is the feedback mechanism that ensures the goal is achieved or maintained. Using the cruise control analogy, the car does not just lock the gas pedal in one position.
If you begin to drive uphill, the car briefly slows down until the speedometer information is fed back to the cruise control system, which then increases the amount of gas until the desired speed is reached once again. Just as feedback makes cruise control a viable system for maintaining driving speed, so too does the systems approach, with its requisite feedback, provide a sound development method for training programs.

The ADDIE Model

There are more than 100 different ISD models, but almost all are based on the generic "ADDIE" model, which stands for Analysis, Design, Development, Implementation, and Evaluation, as illustrated below. Each step has an outcome that feeds the subsequent step.

Analysis --> Design --> Development --> Implementation --> Evaluation

During analysis, the designer develops a clear understanding of the "gaps" between the desired outcomes or behaviors and the audience's existing knowledge and skills. The design phase documents specific learning objectives, assessment instruments, exercises, and content. The actual creation of learning materials is completed in the development phase. During implementation, these materials are delivered or distributed to the student group. After delivery, the effectiveness of the training materials is evaluated.

Alternate Design Models

The ADDIE model has been criticized by some as being too systematic; that is, too linear, too inflexible, too constraining, and even too time-consuming to implement. As an alternative to the systematic approach, there are a variety of systemic design models that emphasize a more holistic, iterative approach to the development of training. Rather than developing the instruction in phases, the entire development team works together from the start to rapidly build modules, which can be tested with the student audience and then revised based on their feedback.
The systemic approach to development has many advantages when it comes to the creation of technology-based training. To create engaging metaphors or themes, artists and writers work together in a process that validates the creative approach with students early in the development cycle. Programmers and designers reach agreement on which learning activities are both effective and feasible, given the constraints of the client's computers or network.

Despite these advantages, there are practical challenges with a purely systemic design approach in the management of resources. In most cases, training programs must be developed under a fixed -- and often limited -- budget and schedule. While it is very easy to allocate people and time to each step in the ISD model, it is harder to plan deliverables when there are no distinct steps in the process. The holistic approach raises the questions, "How many iterations, and how much time, will it take to finish the program?" and "Do the contributions made by programmers and artists in the design phase, who have no formal background in instruction, warrant the extra time required and the additional compensation for this time?"

Introducing a Rapid Prototyping Phase

For best results, the development process for CD-ROM or Web-based training programs should use a modified ADDIE model, which borrows from the most valuable aspects of the systemic approach. Specifically, a rapid prototype phase is inserted after, or as an extension of, the design phase. A rapid prototype is simply a quickly assembled module that can be tested with the student audience early in the ISD process. The evaluation typically looks at things like how well the learners responded to the creative metaphor, how effective the learning activities are, and how well the program performs on the chosen technology platform. Based on the feedback, the design can be revised and another prototype developed.
This iterative process continues until there is agreement on, and confidence in, the prototype. Only after the prototype is complete does additional development work proceed. However, this work often moves more quickly after a rapid prototype than in the traditional ADDIE model. Instructional designers and writers are able to proceed more efficiently because they know exactly what the program will look like and what it will be capable of doing. Additionally, with all of the major technical issues resolved, final programming becomes a simple matter of assembling media components.

----------------------------------------

Beginner Basics > CHECKLIST: Evaluating e-Learning User Interfaces
by Kevin Kruse

When reviewing and evaluating the computer interface of your e-learning program, you should be able to answer yes to the questions below.

* Do all buttons and icons have a consistent and unique appearance?
* Are visual cues like mouse cursor changes and roll-over highlights used consistently on all buttons?
* Are buttons labeled with text descriptions (or with roll-over text)?
* Do buttons gray out or disappear when they are inactive?
* Do non-button graphics have their own design properties, distinct from those of buttons?
* Are navigation buttons displayed in exactly the same screen position every time they appear?
* Are buttons grouped logically and located where the user is likely to be looking?
* Do users have one-click access to help, exit, and the Main Menu?
* Are users returned to where they left off after closing the help window and canceling out of the exit screen?
* Does every menu have a title?
* Does every menu screen include an option to return to the previous or Main Menu?
* Are there fewer than three levels of menus?
* Do menus have nine or fewer items on them?
* Are items on menus descriptive rather than general?
* Are menu items listed in a sequential or logical order?
* Do menus indicate which items the student has completed?
* Are confirmation messages used in areas such as student registration, exit, and final exams?
* Are there clear instructions associated with menus, questions, and other tasks?
* Are error messages written in plain language?
* Are status messages displayed during delays greater than four seconds?
* Are exclamation points and sound effects used sparingly?
* Is there a bookmarking feature that enables students to exit and resume later where they left off?
* Can students move backward, as well as forward, in linear tutorials?
* Are page or screen counters used to show progress within linear lessons?
* Is the visual metaphor consistent and intuitive in non-linear simulations?
* Are all pop-up windows positioned on the screen so they do not cover up relevant information?
* Does text appear clearly and with normal margins and spacing?
* Do information input screens force all capital letters, and is the evaluation of inputs case insensitive?
* Can users interact with the program from either the keyboard or the mouse?
* Are text fonts used consistently?
* Are audio volume levels consistent?
* Do users have the option to replay video or audio narration?

----------------------------------------

Beginner Basics > Gagne's Nine Events of Instruction: An Introduction
by Kevin Kruse

Just as Malcolm Knowles is widely regarded as the father of adult learning theory, Robert Gagne is considered to be the foremost researcher and contributor to the systematic approach to instructional design and training. Gagne and his followers are known as behaviorists, and their focus is on the outcomes -- or behaviors -- that result from training.
Gagne's Nine Events of Instruction

Gagne's book, The Conditions of Learning, first published in 1965, identified the mental conditions for learning. These were based on the information-processing model of the mental events that occur when adults are presented with various stimuli. Gagne created a nine-step process called the events of instruction, which correlate to and address the conditions of learning. The table below shows these instructional events in the left column and the associated mental processes in the right column.

Instructional Event -- Internal Mental Process
1. Gain attention -- Stimuli activate receptors
2. Inform learners of objectives -- Creates a level of expectation for learning
3. Stimulate recall of prior learning -- Retrieval and activation of short-term memory
4. Present the content -- Selective perception of content
5. Provide "learning guidance" -- Semantic encoding for storage in long-term memory
6. Elicit performance (practice) -- Responds to questions to enhance encoding and verification
7. Provide feedback -- Reinforcement and assessment of correct performance
8. Assess performance -- Retrieval and reinforcement of content as final evaluation
9. Enhance retention and transfer to the job -- Retrieval and generalization of learned skill to new situations

1. Gain attention

In order for any learning to take place, you must first capture the attention of the student. A multimedia program that begins with an animated title screen sequence accompanied by sound effects or music startles the senses with auditory or visual stimuli. An even better way to capture students' attention is to start each lesson with a thought-provoking question or interesting fact. Curiosity motivates students to learn.

2. Inform learners of objectives

Early in each lesson, students should encounter a list of learning objectives. This initiates the internal process of expectancy and helps motivate the learner to complete the lesson.
These objectives should form the basis for assessment and possible certification as well. Typically, learning objectives are presented in the form of "Upon completing this lesson you will be able to...". The phrasing of the objectives themselves will be covered under Robert Mager's contributions later in this chapter.

3. Stimulate recall of prior learning

Associating new information with prior knowledge can facilitate the learning process. It is easier for learners to encode and store information in long-term memory when there are links to personal experience and knowledge. A simple way to stimulate recall is to ask questions about previous experiences, an understanding of previous concepts, or a body of content.

4. Present the content

This event of instruction is where the new content is actually presented to the learner. Content should be chunked and organized meaningfully, and typically is explained and then demonstrated. To appeal to different learning modalities, a variety of media should be used if possible, including text, graphics, audio narration, and video.

5. Provide "learning guidance"

To help learners encode information for long-term storage, additional guidance should be provided along with the presentation of new content. Guidance strategies include the use of examples, non-examples, case studies, graphical representations, mnemonics, and analogies.

6. Elicit performance (practice)

In this event of instruction, the learner is required to practice the new skill or behavior. Eliciting performance provides an opportunity for learners to confirm their correct understanding, and the repetition further increases the likelihood of retention.

7. Provide feedback

As learners practice new behavior, it is important to provide specific and immediate feedback on their performance. Unlike questions in a post-test, exercises within tutorials should be used for comprehension and encoding purposes, not for formal scoring.
Additional guidance and answers provided at this stage are called formative feedback.

8. Assess performance

Upon completing instructional modules, students should be given the opportunity to take (or be required to take) a post-test or final assessment. This assessment should be completed without the ability to receive additional coaching, feedback, or hints. Mastery of the material, or certification, is typically granted after achieving a certain score or percent correct. A commonly accepted level of mastery is 80% to 90% correct.

9. Enhance retention and transfer to the job

Determining whether or not the skills learned from a training program are ever applied back on the job often remains a mystery to training managers -- and a source of consternation for senior executives. Effective training programs have a "performance" focus, incorporating design and media that facilitate retention and transfer to the job. The repetition of learned concepts is a tried-and-true means of aiding retention, although often disliked by students. (There was a reason for writing spelling words ten times as a grade-school student.) Creating electronic or online job aids, references, templates, and wizards is another way of aiding performance.

Applying Gagne's nine-step model to any training program is the single best way to ensure an effective learning program. A multimedia program that is filled with glitz or that provides unlimited access to Web-based documents is no substitute for sound instructional design. While those types of programs might entertain or be valuable as references, they will not maximize the effectiveness of information processing -- and learning will not occur.

How to Apply Gagne's Events of Instruction in e-Learning

As an example of how to apply Gagne's events of instruction to an actual training program, let's look at a high-level treatment for a fictitious software training program.
We’ll assume that we need to develop a CD-ROM tutorial to teach sales representatives how to use a new lead-tracking system called STAR, which runs on their laptop computers.

1. Gain attention

The program starts with an engaging opening sequence. A space theme is used to play off the new software product's name, STAR. Inspirational music accompanies the opening sequence, which might consist of a shooting star or animated logo. When students access the first lesson, the vice president of sales appears on the screen in a video clip and introduces the course. She explains how important it is to stay on the cutting edge of technology and how the training program will teach them to use the new STAR system. She also emphasizes the benefits of the STAR system, which include reducing the amount of time representatives need to spend on paperwork.

2. Inform learners of objectives

The VP of sales presents students with the following learning objectives immediately after the introduction. Upon completing this lesson you will be able to:

* List the benefits of the new STAR system.
* Start and exit the program.
* Generate lead-tracking reports by date, geography, and source.
* Print paper copies of all reports.

3. Stimulate recall of prior learning

Students are called upon to use their prior knowledge of other software applications to understand the basic functionality of the STAR system. They are asked to think about how they start, close, and print from other programs such as their word processor, and it is explained that the STAR system works similarly. Representatives are asked to reflect on the process of the old lead-tracking system and compare it to the process of the new electronic one.

4. Present the content

Using screen images captured from the live application software and audio narration, the training program describes the basic features of the STAR system. After the description, a simple demonstration is performed.

5. Provide "learning guidance"

With each STAR feature, students are shown a variety of ways to access it - using shortcut keys on the keyboard, drop-down menus, and button bars. Complex sequences are chunked into short, step-by-step lists for easier storage in long-term memory.

6. Elicit performance (practice)

After each function is demonstrated, students are asked to practice with realistic, controlled simulations. For example, students might be asked to "Generate a report that shows all active leads in the state of New Jersey." Students are required to use the mouse to click on the correct on-screen buttons and options to generate the report.

7. Provide feedback

During the simulations, students are given guidance as needed. If they are performing operations correctly, the simulated STAR system behaves just as the live application would. If a student makes a mistake, the tutorial immediately responds with an audible cue, and a pop-up window explains and reinforces the correct operation.

8. Assess performance

After all lessons are completed, students are required to take a post-test. Mastery is achieved with a score of 80% or better; once it is obtained, the training program displays a completion certificate, which can be printed. The assessment questions are directly tied to the learning objectives displayed in the lessons.

9. Enhance retention and transfer to the job

While the STAR system is relatively easy to use, additional steps are taken to ensure successful implementation and widespread use among the sales force. These features include online help and "wizards," which provide step-by-step instructions for completing complex tasks. Additionally, the training program is equipped with a content map, an index of topics, and a search function. These enable students to use the training as a just-in-time support tool in the future. Finally, a one-page, laminated quick-reference card is packaged with the training CD-ROM for further reinforcement of the learning session.
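The feedback and assessment behavior described in steps 7 and 8 above can be sketched in a few lines of code. This is a minimal illustration only, assuming a click-matching simulation and a list of graded answers; the function names, messages, and data shapes are hypothetical, not part of any real STAR courseware:

```python
def check_action(clicked, expected):
    """Step 7 (provide feedback): compare the learner's click with the
    expected control. On success the simulation behaves like the live
    application and stays silent; on error it returns the corrective hint
    (formative feedback) that would accompany the audible cue and pop-up."""
    if clicked == expected:
        return True, ""
    return False, f"Not quite. Click '{expected}' to continue."


def assess_performance(answers, threshold=0.80):
    """Step 8 (assess performance): score a completed post-test.
    `answers` holds one boolean per question (True = correct); mastery
    is granted at 80% correct here, though some programs require 90%."""
    score = sum(answers) / len(answers) if answers else 0.0
    return score, score >= threshold


# Practice task: the learner clicks "Print" when "Generate Report" was expected,
# so the tutorial responds with a corrective hint rather than proceeding.
ok, hint = check_action("Print", "Generate Report")

# Post-test: 9 of 10 questions answered correctly -> 90%, mastery achieved.
score, mastered = assess_performance([True] * 9 + [False])
```

Keeping practice feedback (ungraded, immediate) separate from the post-test score mirrors the distinction the chapter draws between formative feedback and formal assessment.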