Translated Article: Flexibility and Computer-Aided Assessment Techniques (Edited Draft). Summary:

of Paris would be marked correct and anything else incorrect. For an incorrect answer, the only feedback that could be given would be the correct answer and, possibly, an indication of why the answer chosen was wrong. At higher levels the assessment is graded for degrees of correctness, and there is a considerable amount of information which can be used for meaningful feedback to candidates. Consider a question asking for a proof of a mathematical theorem. An assessor would produce an assessment based on how close the answer was to the correct answer, taking into account the method used. Feedback would consist of identifying the parts of the answer which were incorrect and feeding this information back to the candidate with explanatory comments.

Thus at the lower levels exact matching algorithms are normally used, whereas the higher levels require approximate matching algorithms, which are more complex and slower. It is the use of these approximate matching algorithms which distinguishes the assessment of higher-level skills from that of lower-level skills.

Another major difference between the assessment of higher- and lower-level skills is the data which is assessed. Lower-level exercises almost always assess the outcome of the exercise, for example the formula typed into a spreadsheet cell as part of a spreadsheet exercise. Higher-level skills can also be assessed on the outcome of an exercise, but may additionally be assessed on the method the candidate used to generate that outcome, for example the sequence of key presses and mouse clicks used. Method assessment is especially important where group-working skills are being tested.

A MODEL OF IT SKILLS ASSESSMENT

In a typical IT skills assessment, as in many other forms of assessment, the exercise or examination follows this sequence of actions:

1. The examiner prepares the exercise and model answer(s).
2. The candidate sits the exam or does the exercise.
3. 
The candidate's answer is compared to the model answer(s) to detect raw errors.
4. Raw errors are categorised according to the assessment criteria.
5. An assessment is generated from the error analysis.
6. The assessment is recorded for remedial help, mark generation, and student competence tracking.

The flexibility in the model depends on:

1. The amount of choice the examiner has in setting the exercise.
2. The amount of choice the candidate has in answering the exercise.
3. The method used to map raw errors to assessment errors.
4. The method of reporting the results.

Each of these is examined in more detail in the following section. The difficulty of the assessment depends on:

1. The complexity and difficulty of the exercise set.
2. The number of equivalent correct answers.
3. The complexity of the assessment criteria.
4. The type and amount of feedback required.

FLEXIBILITY IN PRACTICE

There are many different ways in which flexibility is built into practical skills assessors. This section categorises some of the reasons and gives examples of such categories of flexibility which have been built into IT skills assessors.

FLEXIBILITY IN THE MODEL

Flexible Question Setting

In general, many examiners wish to have the ability to set their own exercises rather than select from a set of exercises provided. Providing the examiner with the tools to set their own exercises can be problematic for the producer of an automated assessor. The reason for this is that the difficulty of assessing IT skills exercises depends to a large extent on the amount of interaction between individual errors, and this interaction between errors can be reduced to so…
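As a concrete illustration of the exact-versus-approximate matching distinction and of steps 3 to 6 of the model above, here is a minimal Python sketch for a plain-text answer. The function names, the 0.5 feedback threshold, and the choice of similarity measure (difflib's `SequenceMatcher`) are illustrative assumptions; the paper does not prescribe a particular algorithm.

```python
# Illustrative sketch only: the paper does not specify these functions,
# thresholds, or the use of difflib. Assumed for demonstration.
from difflib import SequenceMatcher


def exact_match(candidate: str, model_answers: list[str]) -> bool:
    """Lower-level skills: the answer is simply right or wrong."""
    return candidate.strip().lower() in (m.strip().lower() for m in model_answers)


def approximate_match(candidate: str, model_answers: list[str]) -> float:
    """Higher-level skills: grade for degrees of correctness by taking the
    best similarity against any of the equivalent correct answers."""
    return max(SequenceMatcher(None, candidate.lower(), m.lower()).ratio()
               for m in model_answers)


def assess(candidate: str, model_answers: list[str]) -> dict:
    """Steps 3-6 of the model: compare, categorise, assess, record."""
    if exact_match(candidate, model_answers):
        return {"score": 1.0, "feedback": "Correct."}
    score = approximate_match(candidate, model_answers)
    if score > 0.5:  # assumed threshold for "partially correct"
        feedback = "Partially correct; compare your answer with the model answer."
    else:
        feedback = "Incorrect. The model answer is: " + model_answers[0]
    return {"score": round(score, 2), "feedback": feedback}


# A lower-level, single-correct-answer question resolves by exact match;
# a near-miss falls through to the slower approximate matcher.
print(assess("Paris", ["Paris"]))            # exact match, score 1.0
print(assess("pariss", ["Paris"])["score"])  # graded score between 0 and 1
```

The exact matcher runs first because it is cheap; only when it fails does the slower approximate matcher run, mirroring the paper's point that approximate matching is what makes higher-level assessment more complex and slower.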