Scientific conference in universities or research centers
Computerized Adaptive Testing
Braeken, Johan; Magis, David; Stillwell, David
2016
 

Files

Full Text
No document available.

Annexes
UV9256-CAT(1).pdf — Publisher postprint (3.12 MB)
CATworkshopCEMO26042016.R — Publisher postprint (5.69 kB)

All documents in ORBi are protected by a user license.

Details



Abstract :
[en] Why ask a person to answer a test item when you know a priori that they won't be able to solve it? It is a waste of time and resources, and you won't gain any new information; this is both inefficient and ineffective. In contrast, computerized adaptive testing (CAT) is based on the principle that more information can be gained when the test is tailored to the level of the person being tested. Computational and statistical techniques from item response theory (IRT) and decision theory are combined to implement a test that behaves interactively during the test process and adapts to the level of the person being tested.
The implementation of such a CAT relies on an iterative sequential algorithm that searches the pool of available items (a so-called item bank) for the optimal item to administer, based on the current estimate of the person's level (and optional external constraints). The subsequent response to this item provides new information to update the person's proficiency estimate. This selection-responding-updating process continues until specified stop criteria have been reached.
The consequence of such an adaptive test administration is an individualized, tailored test that is both more efficient and more effective. Because there is less of a mismatch between the level of the test and the level of the test taker, the burden on the test taker is lower and the measurement precision higher, and this with fewer items than a traditional fixed item-set test format. Furthermore, because the test is computerized and sequential, test performance can be continuously monitored and reported directly after test completion. Item response models ensure comparable scores across these individually tailored tests by putting them on the same measurement scale and by precalibrating the psychometric parameters of the items in the item bank on which the sequential iterative algorithm operates.
The workshop intends to tackle issues encountered during the setup of a computerized adaptive test, from the initial design through to the actual delivery of a CAT.
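The selection-responding-updating loop described in the abstract can be sketched in a few lines. The sketch below is illustrative only (the workshop's own material is the annexed R script, not this code): it assumes a simple Rasch (one-parameter logistic) item model, maximum-Fisher-information item selection, an EAP ability update under a standard-normal prior, and a fixed-length stop criterion. All function and variable names here are hypothetical.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def eap_estimate(responses):
    """Expected a posteriori ability estimate under a standard-normal prior,
    computed on a fixed grid of theta values."""
    grid = [i / 10.0 for i in range(-40, 41)]
    posterior = []
    for theta in grid:
        w = math.exp(-0.5 * theta * theta)  # N(0, 1) prior kernel
        for b, x in responses:              # multiply in the likelihood
            p = p_correct(theta, b)
            w *= p if x == 1 else (1.0 - p)
        posterior.append(w)
    total = sum(posterior)
    return sum(t * w for t, w in zip(grid, posterior)) / total

def run_cat(item_bank, answer, max_items=10):
    """Adaptive loop: select the most informative unused item, record the
    response, update theta; stop after max_items (the stop criterion)."""
    responses, available = [], dict(item_bank)
    theta = 0.0
    while available and len(responses) < max_items:
        # selection: unused item with maximum information at current theta
        item = max(available, key=lambda i: item_information(theta, available[i]))
        b = available.pop(item)
        # responding: query the test taker (here, a callback)
        responses.append((b, answer(item, b)))
        # updating: re-estimate proficiency from all responses so far
        theta = eap_estimate(responses)
    return theta, responses
```

For example, a deterministic test taker who solves every item easier than 1.0 should end up with a positive ability estimate after a short adaptive test, with the loop homing in on items near that threshold. A production CAT would add content-balancing and exposure-control constraints to the selection step, which this sketch omits.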
Disciplines :
Education & instruction
Author, co-author :
Braeken, Johan
Magis, David (Université de Liège > Département des Sciences de l'éducation > Psychométrie et édumétrie)
Stillwell, David
Language :
English
Title :
Computerized Adaptive Testing
Publication date :
25 April 2016
Event name :
Workshop UiO
Event organizer :
Centre for Educational Measurement (CEMO), University of Oslo
Event place :
Oslo, Norway
Event date :
25-27 April 2016
Audience :
International
Available on ORBi :
since 27 April 2016

