
Project: Educational Theory Practice Analysis

Project Requirements

The peer-reviewed project will include five major sections, with relevant sub-sections to organize your work using the CGScholar structure tool.

BUT! Please don’t use these boilerplate headings. Make them specific to your chosen topic, for instance: “Introduction: Addressing the Challenge of Learner Differences”; “The Theory of Differentiated Instruction”; “Lessons from the Research: Differentiated Instruction in Practice”; “Analyzing the Future of Differentiated Instruction in the Era of Artificial Intelligence”; “Conclusions: Challenges and Prospects for Differentiated Instruction.”

Include a publishable title, an Abstract, Keywords, and Work Icon (About this Work => Info => Title/Work Icon/Abstract/Keywords).

Overall Project Word Length – at least 3,500 words (the concentration of words should be on theory/concepts and educational practice)

Part 1: Introduction/Background

Introduce your topic. Why is this topic important? What are the main dimensions of the topic? Where in the research literature and other sources do you need to go to address this topic?

Part 2: Educational Theory/Concepts

What is the educational theory that addresses your topic? Who are the main writers or advocates? Who are their critics, and what do they say?

Your work must be in the form of an exegesis of the relevant scholarly literature that addresses and cites at least 6 scholarly sources (peer-reviewed journal articles or scholarly books).

Media: Include at least 7 media elements, such as images, diagrams, infographics, tables, embedded videos (either uploaded into CGScholar or embedded from other sites), web links, PDFs, datasets, or other digital media. Be sure these are well integrated into your work. Explain or discuss each media item in the text of your work. If a video is more than a few minutes long, you should refer to specific points with time codes or the particular aspects of the media object that you want your readers to focus on. Caption each item sourced from the web with a link. You don’t need to include media in the references list – this should be mainly for formal publications such as peer-reviewed journal articles and scholarly monographs.

Part 3 – Educational Practice Exegesis

You will present an educational practice example, or an ensemble of practices, as applied in clearly specified learning contexts. This could be a practice in which you have been involved, one you have read about in the scholarly literature, or a new or unfamiliar practice which you would like to explore. While not as detailed as in the Educational Theory section of your work, this section should be supported by scholarly sources. There is no required minimum number of scholarly sources; six more scholarly sources in addition to those for Part 2 is a reasonable target.

This section should include the following elements:

Articulate the purpose of the practice. What problem were they trying to solve, if any? What were the implementers or researchers hoping to achieve and/or learn from implementing this practice?

Provide detailed context of the educational practice applications – what, who, when, where, etc.

Describe the findings or outcomes of the implementation. What occurred? What were the impacts? What were the conclusions?

Part 4: Analysis/Discussion

Connect the practice to the theory. How does the practice that you have analyzed in this section of your work connect with the theory that you analyzed in the previous section? Does the practice fulfill the promise of the theory? What are its limitations? What are its unrealized potentials? What is your overall interpretation of your selected topic? What do the critics say about the concept and its theory, and what are the possible rebuttals of their arguments? Are its ideals and purposes hard, easy, too easy, or too hard to realize? What does the research say? What would you recommend as a way forward? What needs more thinking in the theory and research of practice?

Part 5: References (as a part of and subset of the main References Section at the end of the full work)

Include citations for all media and other curated content throughout the work (below each image and video)

Include a references section of all sources and media used throughout the work, differentiated between your Learning Module-specific content and your literature review sources.

Include a References “element” or section using APA 7th edition with at least 10 scholarly sources and media sources that you have used and referred to in the text.

Be sure to follow APA guidelines, including lowercase article titles, uppercase journal titles (first letter of each word), and italicized journal titles and volumes.

Icon for Computer Adaptive Testing

Computer Adaptive Testing

Computer adaptive assessments have become an increasingly popular measurement tool for assessing students’ learning. My school has been using the NWEA MAP test for the past 15 years but will be switching to the iReady assessment next school year. I am going into my thirteenth year as an educator, in all of which I have administered the NWEA MAP assessment and analyzed its findings. iReady gives teachers the ability to monitor computer adaptive learning lessons, growth, and data from their students. While both NWEA MAP and iReady are computer adaptive tests, the change has prompted me to become more interested in the adaptive assessment process, its reliability, and the best ways to use these tests in my own classroom.


The objective of standardized, large-scale assessments is ultimately to measure and monitor student growth and achievement. No Child Left Behind (NCLB), authorized in 2001, set forth the goal “that every child can learn, every child can progress” (Kalantzis & Cope, 2022). Standardized testing was a required part of NCLB, implemented to ensure that schools were making adequate yearly progress; if they were not, a strict set of guidelines was in place to enforce growth. The Every Student Succeeds Act (ESSA) has since replaced NCLB with the same underlying principles. ESSA, unlike NCLB, allows the states to determine the measurement tool used to monitor student progress in multiple subjects and to set their own guidelines to support districts that are not meeting that measure.

Figure 1

Personalized Recommendation with Adaptive Testing 

Note. The figure shows the outline of using an item bank with adaptive testing, from “Personalized recommendation in the adaptive learning system: The role of adaptive testing technology” by Dai et al., 2023, Journal of Educational Computing Research, 61(3), p. 531.

Standardized assessments have changed throughout their existence. Traditional paper-and-pencil standardized assessments give all students the same questions and elicit a selected-response answer (Kalantzis & Cope, 2022). “Increased testing and the high stakes decisions surrounding standardized test scores have also led researchers to find more accurate and reliable ways to assess student ability” (Colwell, 2013, p. 50). Computer adaptive assessments gear the questions to the individual learner. Based on the student’s answer to a question, the next question is selected from an item bank. For example, if a student gets question 1 correct, the item bank will pose a comparable or more difficult question; if they get question 1 incorrect, it will move them to an easier question (Colwell, 2013). This process is shown in Figure 1 and is described in depth in the video “Making Sense of Computer Adaptive Tests” by Edmentum (2019). Item Response Theory (IRT) underpins adaptive testing because, according to Dai et al. (2023), “the measurement efficiency is the highest when the test items are neither too difficult nor too easy for examinees” (p. 527). All students are given questions that should match their learning level, which should help motivate them rather than posing questions at too high or too low a level.
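To make the selection loop concrete, here is a minimal sketch of the “correct, go harder; incorrect, go easier” logic described above. The item bank and the ask() function are hypothetical, and operational CATs such as NWEA MAP select items with IRT-based rules rather than this simple stepping:

```python
# A minimal sketch of the "correct -> harder, incorrect -> easier" selection
# loop described above. The item bank and ask() function are hypothetical;
# operational CATs use IRT-based item selection rather than this stepping rule.

def run_adaptive_test(item_bank, ask, num_items=10):
    """item_bank: questions sorted easiest to hardest.
    ask: poses one question, returns True if answered correctly."""
    index = len(item_bank) // 2          # start at medium difficulty
    step = max(1, len(item_bank) // 4)   # initial jump size
    responses = []
    for _ in range(num_items):
        item = item_bank[index]
        correct = ask(item)
        responses.append((item, correct))
        # Harder item after a correct answer, easier after an incorrect one.
        index += step if correct else -step
        index = max(0, min(index, len(item_bank) - 1))  # stay inside the bank
        step = max(1, step // 2)         # narrow as evidence accumulates
    return responses
```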

2. Theories and Pedagogy around Computer Adaptive Testing

Standards-based tests were given for many years on paper and have more recently transitioned to computer-based formats. These tests were traditionally given to all students in the same order with the same questions, an approach grounded in classical test theory (Martin & Lazendic, 2017). Didactic pedagogy and item response theory are two frameworks discussed in this section within which computer adaptive assessments, despite their technological advances, still fall by their nature.

Didactic Pedagogy

The test is viewed as a “peculiar artifact” in didactic pedagogy (Cope & Kalantzis, 2015). This is because of its insular nature: it follows a unit of learning and is taken without the use of the textbook or discussion with peers. The shift of the select-response test from paper and pencil to machine allowed tests to be made inexpensively and graded much more quickly. Select-response questions are those that allow for the selection of one answer from multiple-choice, true/false, or fill-in-the-blank question types on an assessment (Cope & Kalantzis, 2022). The data gained from the assessments may be received instantly or quickly, but are not always direct in terms of learner feedback. Cope and Kalantzis (2015) point out that due to the nature of the select-response assessment, regardless of its advanced technology, it is still testing memory and the “replicability of skills in the form of non-negotiable epistemic routines” (p. 361). Norm-based standardized assessments, Cope and Kalantzis (2015) state, “position learners in a cohort in a way that presupposed inequality, and to this extent constructs inequality. For the few to succeed, the many need to be mediocre, and some must fail. This is the mathematical logic of the normal distribution curve” (p. 362).


Item Response Theory

The adoption of computer adaptive testing (CAT) has led to the use of Item Response Theory (IRT). IRT is defined in the video “What is item response theory?” as “a statistical framework used to model and analyze the relationship between individuals’ responses to test items and their underlying ability or trait” (Test Partnership, 2023a). CAT is viewed as an effective application of IRT. Lord stated in 1968 that “measurement efficiency is the highest when the test items are neither too hard nor too easy for examinees,” which proved to be the “earliest item recommendation strategy in adaptive testing” (Lord, 1968, as cited in Dai et al., 2023, p. 527). Item response theory is “an overarching theory describing how students respond to individual test questions. In its most common form, it assumes that the probability that any student will answer any item correctly is defined by just two things: a single number describing the ability of the student (unidimensionality) and a small set of numbers (item parameters) describing the key characteristics of the item such as its difficulty and how discriminating it is” (Benton, 2021, pp. 82–83).

The score in computer adaptive testing is then calculated using the level of difficulty of the questions the student was given (Benton, 2021). Cope and Kalantzis (2016) describe this as a latent cognitive trait that encompasses the idea that the test is “varying the questions according to what the student knows or understands and the relative difficulty of the item” (p. 2). The tests provide the student with a “calibrated” score in a shorter amount of time and with a greater range of questions (Cope & Kalantzis, 2016). A large item bank must be created to ensure that there are enough questions at each level of difficulty to assess each individual student where they are. The item bank allows for the analysis of each question and answer to help determine the following question and pinpoint the areas of strength and weakness for each student.
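Benton’s description corresponds to what the psychometric literature calls the two-parameter logistic (2PL) IRT model. As a reference point, this standard formulation comes from the general IRT literature rather than from the sources cited above:

```latex
% 2PL IRT model: probability that a student with ability \theta answers
% item i correctly, where a_i is the item's discrimination and b_i its
% difficulty.
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

Here θ is Benton’s “single number describing the ability of the student,” and a_i and b_i are the “small set of numbers” describing the item. An adaptive engine tends to pick the next item whose parameters put this probability near one half, which is where the item is most informative about the student’s ability.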

Figure 2

Zone of Proximal Development Diagram

Note. The figure shows how learning shifts from what one can do on their own, to what they can do with some help, and what they cannot do even with support, from “Embracing the journey: Lifelong learning and the zone of proximal development” by L. Epp, 2023, May 1 (https://lorneepp.com/embracing-the-journey-lifelong-learning-and-the-zone-of-proximal-development/).

Zone of Proximal Development

The zone of proximal development (ZPD) theory suggests that “providing learners with challenging content can stimulate their enthusiasm and potential and improve their learning performance” (Dai et al., 2023, p. 525). This is illustrated in computer adaptive testing when the difficulty level shifts from one question to the next. The questions are intended to meet students at a level at which they can perform, activating their ZPD. This idea can also stretch higher-achieving students by posing questions they may struggle to solve in order to find their adaptive level. Epp (2023) describes the ZPD as the “sweet spot between what we can do independently and what we can achieve with guidance.” In Figure 2, Epp (2023) provides a visual of the ZPD and shows the continuum of growth for learning. When students are in the “sweet spot” for learning, they have an opportunity to be more engaged, rather than being pushed into an area where they simply cannot complete the work on their own. Integrating computer adaptive testing with a learner’s zone of proximal development should help to motivate students, too.

3. The Benefits and Drawbacks of Computer Adaptive Testing

Shortly after the 1950s, select-response tests that could be machine-read using colored-in bubbles were introduced (Cope & Kalantzis, 2015). In the 1970s, minimum competency tests were introduced as the nation worried about falling behind other countries and sought a way to establish educational accountability (Amrein‐Beardsley, 2022). Since then, the requirements and tests have shifted with policy and presidential administrations. Formalized paper-and-pencil state tests, such as the Illinois Standards Achievement Test (ISAT), and now the classic computer-based test, the Illinois Assessment of Readiness (IAR), featured the same questions for all students. These tests gave students questions geared toward the “average” student.


Benefits of CAT

As technology has advanced and computer adaptive testing (CAT) has begun to replace some of the standardized tests that schools have become accustomed to, several pros and cons have been brought to light. One benefit is that CAT produces a “more accurate and precise measurement of ability in comparison to fixed item testing” (Colwell, 2013, p. 50). A precise measurement is extremely important when assessing a learner’s ability and ensuring that testing time is well spent, with useful data for the learner, teacher, and parents. While computer adaptive tests are commonly associated with K-12 education, they are also used for credentialing, such as the NCLEX (nurses), NAPLEX (pharmacists), CPA exam, and EMT/paramedic exams, and in other areas such as employment selection, higher education admissions (GRE/GMAT), and the medical fields (Thompson, 2022).

Student motivation, which can be defined “as individuals' energy, inclination, interest, and drive to learn, work effectively and achieve to potential,” and engagement, which can be defined as “the behaviors aligned with or following from this energy, inclination, interest, and drive” (Martin & Lazendic, 2017, p. 30), have been shown to increase when students take a computer adaptive test. A positive effect in all areas of testing (achievement levels, measurement precision, motivation, engagement, and overall test experience) was significantly higher for 9th-grade students, who are often “known for lower levels of academic motivation and engagement,” with adaptive testing than with a traditional paper-and-pencil test (Martin & Lazendic, 2017, p. 38).

Computer adaptive testing can lead to data with specific measures of where a student currently is in different areas within each subject. Figure 3 shows a student’s instructional report from NWEA MAP testing. As mentioned earlier, NWEA MAP is a type of computer adaptive test that is usually given three times a year in schools. This report shows the skills that the student is ready to learn. Following the assessment, a teacher can generate the reports and use the skills listed to guide small-group instruction, address learning gaps, or provide enrichment for students who have already mastered the upcoming skills. The assessment can also be used to place students in personalized learning paths in computer adaptive learning for optimal learning. With teacher training on how to best analyze the data and use CAT, the resources for a teacher to truly understand each of their students are there. It is only a matter of taking the time to unlock the data and apply it to teaching through guided professional development and support.

Figure 3

NWEA MAP Instructional Area Report

Note. This figure shows a student’s instructional report, which identifies the specific skills, based on the adaptive test taken, that a student is developmentally ready to begin working on, from “MAP Student profile” by NWEA, 2024 (https://teach.mapnwea.org/nextgen-report/students/profile).

Computer adaptive tests generated “lower achievement rate errors,” meaning the scores were more accurate than those of other tests (Martin & Lazendic, 2017, p. 40). Using a test that measures correctly from student to student is an important factor when considering options for measuring a learner’s ability. Time is of the essence in education, and computer adaptive testing addresses that need as well. Computer adaptive assessments need only about 60 percent of the items to produce the same results as a traditional paper-based test (Benton, 2021, p. 83). This in turn allows the test to be shorter and require less time than the assessments formerly used in education. Figure 4, shown below, highlights the benefits of CAT and the process it uses to measure the student.

Figure 4

Benefits and Procedures of Computer Adaptive Testing

Note. The figure shows how computer adaptive testing adjusts based on the learner, and its benefits, from “Computer adaptive testing” by J. Schwarz, 2022 (https://www.assessmentworkshop.com/2022/04/25/computer-adaptive-testing/).

Critiques of CAT

While there are many benefits to computer adaptive testing, there are also concerns. Critics of CAT and IRT note that the student is limited mainly to select-response questions. Select-response questions make it easy for students to slip into errors or to guess the correct answer. Cope and Kalantzis (2022) discuss the use of distractor choices within multiple-choice questions. An example of a distractor choice would be 0.83 when the answer to the question is 0.083.

While motivation and engagement increase with CAT, so does anxiety, relative to a traditional test (Martin & Lazendic, 2017). The authors believe this may be due to the heightened arousal brought on by “a better match between students and test items that generates a level of personal challenge” (p. 39). Colwell (2013) compared students with high and low test anxiety taking a CAT and found that while they scored comparably on a fixed-item test, the high-anxiety students performed lower than their low-anxiety peers on the CAT (p. 52). However, Colwell (2013) also reported that when the workings of a computer adaptive test, such as the item selection procedure, were shared with test takers, they performed higher than those who did not receive the same information (p. 53). Another recommendation from Martin and Lazendic (2017) is to use adaptive tests that function in a way that lets the student review their work. As adaptive testing now stands, a student’s answer to one question determines the next question, which does not allow students to revisit a previous question.

Benton (2021) described the way in which CAT calibrates questions, which “requires an initial estimate of the difficulty and discrimination of each item” (p. 84). Benton goes on to discuss the limited pool of students used to create this baseline data for some tests, which could lead to under- or over-estimation of the difficulty of some questions. Further testing or baseline data could help address this critique going forward. Martin and Lazendic (2017) state that another “common criticism of item-level adaptive testing is that all students are not completing the same items and even if the same items are answered, they may not be in the same order for all students,” which can influence each student’s score (p. 39).

Finally, Cope and Kalantzis (2015) observe that CAT “brings continuous assessment of memory and skills into learning” and that this causes learning to “further be mechanized in a relationship between the lone learner moving forward on their learning on the basis of the test answers they give to their machine” (p. 362). In short, the device isolates the learner from their peers.

4. Analysis of Computer Adaptive Testing

Computer adaptive testing has become one of the primary forms of standardized testing in schools today. Using Item Response Theory allows students to be posed questions targeted to where they are academically. Needing fewer questions while providing accurate results is a win for all involved. The shift of CATs from assessment of learning to assessment for learning, measuring knowledge while providing learners with immediate feedback, is instrumental in education. The ability of the testing to cater to every learner brings to fruition the zone of proximal development presented by Vygotsky in 1978 (Collares & Cecilio‐Fernandes, 2018). The possibilities of what is next to come with computer adaptive learning are discussed in this section.

Figure 5

Comparison of Linear, Computer Adaptive, and Multi-Stage Computer Adaptive Tests

Note. This figure compares the differences between linear tests, computer adaptive tests, and multi-stage computer adaptive tests, from “Leveraging multi-stage computer adaptive testing for large-scale assessments,” 2020, Education Quality and Accountability Office, pp. 2–3.

Multistage Adaptive Testing

An alternative to computer adaptive testing presented in the literature is multistage adaptive testing (MAT). MAT gives students a series of questions, termed testlets or modules, to answer before determining the difficulty of the next module. This type of testing has its own set of benefits, such as allowing students to go back to a previous question within a module, requiring a smaller item bank, and enabling the use of a common text or passage with a series of questions (Martin & Lazendic, 2017). The Education Quality and Accountability Office (EQAO, 2020) states that MATs are a “balanced compromise” between linear tests and CATs, as they combine the advantages of both. A main reason for this view is that the same modules are used for multiple students as they navigate through the test. This in turn gives several students the same series of questions, though at different times; one of the drawbacks of computer adaptive testing is that most students are not asked the same questions, even if at similar levels. In Figure 5 (above), the comparison between linear tests, computer adaptive tests, and multi-stage computer adaptive tests is outlined. A linear test refers to the fixed or traditional paper-and-pencil test mentioned earlier.
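For contrast with the item-level loop sketched earlier, the following is a minimal sketch of module-level routing in MAT. The three-stage easy/medium/hard design and the routing cutoffs are hypothetical; operational designs such as EQAO’s use IRT-based routing rules:

```python
# A minimal sketch of module-level routing in multistage adaptive testing.
# The three-stage easy/medium/hard design and the 30%/70% cutoffs are
# hypothetical, for illustration only.

def run_multistage_test(modules, ask, stages=3):
    """modules: dict mapping (stage, level) -> list of questions.
    ask: poses one question, returns True if answered correctly."""
    level = "medium"                      # all students start in the same module
    results = []
    for stage in range(1, stages + 1):
        module = modules[(stage, level)]
        correct = sum(ask(q) for q in module)
        results.append((stage, level, correct, len(module)))
        # Route on performance over the whole testlet, not a single item as in
        # item-level CAT; within a module a student can revisit questions.
        share = correct / len(module)
        if share >= 0.7:
            level = "hard"
        elif share <= 0.3:
            level = "easy"
        else:
            level = "medium"
    return results
```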

Figure 6

Adaptive Learning Model

Note. The figure shows the process of combining the domain and learner models to form the adaptive learning model, from “Personalized recommendation in the adaptive learning system: The role of adaptive testing technology” by Dai et al., 2023, Journal of Educational Computing Research, 61(3), p. 526.

Computer Adaptive Learning

Computer adaptive learning has become a more prominent tool in education. CAT or MAT results are often used to determine a student’s starting point in computer adaptive learning. For example, if a 4th-grade student tests at a 5th-grade level in math on a CAT, the student would start their learning path with the 5th-grade math standards that fall into their zone of proximal development. Computer adaptive learning follows the same progression as the zone of proximal development by “providing learners with challenging content that can stimulate their enthusiasm and potential and lead to improvement” (Dai et al., 2023, p. 529). By using the student’s zone of proximal development to focus on skills the learner is ready for, without pushing them outside of their zone, adaptive learning can help build upon skills students already have. In Figure 6, the adaptive model is illustrated to show the way in which learning paths, materials, and strategies may all be adjusted based upon what is best for the learner. Dai et al. (2023) conducted a study to measure the impact of computer adaptive learning. They studied two groups of junior high students and found that the group using the computer adaptive learning model had considerably higher achievement scores and greater learning abilities than the control group.
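As a toy illustration of that placement step (the score bands and grade labels below are hypothetical, not drawn from iReady or any other product):

```python
# A toy sketch of placing a student into a learning path from an adaptive
# test result. Score bands and grade labels are hypothetical; real products
# use their own scales and placement rules.

def placement_level(score, bands):
    """bands: list of (min_score, grade_level) sorted by min_score ascending."""
    level = bands[0][1]
    for min_score, grade in bands:
        if score >= min_score:
            level = grade
    return level

# A 4th grader whose adaptive math score lands in the 5th-grade band starts
# their learning path with 5th-grade content, inside their ZPD.
bands = [(180, "Grade 3"), (200, "Grade 4"), (220, "Grade 5")]
print(placement_level(225, bands))  # -> Grade 5
```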


Possibilities with CAT, Natural Language Processing, and AIEd

With the endless possibilities of technology today, I believe that over the next ten years computer adaptive testing and multistage adaptive testing will change significantly. Currently, one of the downfalls of such testing is that it is primarily driven by select-response questions. In the coming years, with the use of Artificial Intelligence in Education (AIEd), there will likely be supply-response questions embedded in such assessments and learning platforms. Using AI in education would allow for more timely and objective feedback and less time spent grading for teachers. Instead, teachers could focus their attention on individual students and dedicated interventions (fintelics, 2023).

Intelligent tutoring systems (ITS) use “techniques of artificial intelligence to model a human tutor in order to improve learning by providing better support for the learner” (Kabudi et al., 2021, p. 2). AIEd could respond to the student in a variety of ways, as a tutor would, by scaffolding instruction, providing a different strategy to help the student, or giving support through a learning activity (Kensington, 2024). ITS use AI algorithms to “facilitate real-time communication, monitor student engagement, and provide automated feedback, creating a rich and dynamic virtual learning experience” (fintelics, 2023). This type of interactive feedback will continue to build upon the motivation and engagement seen in students using computer adaptive testing alone. The AI algorithms can similarly be used to “identify specific challenges and provide targeted interventions, fostering inclusivity and enhancing the educational experiences of students with special needs” (fintelics, 2023). While students may all be on the same platform, appearing to work on the same material, the skills or concepts may be varied based upon what each learner needs to be successful while working within their own zone of proximal development. The Principal Math Tutor (2024) compares AI adaptive learning to a “music streaming service that knows your tastes and continually adapts the playlist to suit your mood and preference.”

One notable product already using AI-adaptive learning is Duolingo, a popular app that allows the user to learn a new language through a series of questions and activities using gamification and adaptivity (Kensington, 2024). Duolingo mixes multiple-choice, fill-in-the-blank, verbal-response, and application questions to help the user become proficient and earn points to use for a variety of purposes within the game, all while assessing for mastery. Seeing AI used in this way has shown me how well it could translate to CAT, MAT, and computer adaptive learning in the coming years.

What’s Next?

The growth of computer and multistage adaptive testing makes the future possibilities with technology seem endless. The reliability of such tests as the baseline for instruction, guided through adaptive or AI learning, allows for customized instruction that meets students in their zone of proximal development. AI tutoring could provide students with one-on-one tutoring to help fill in learning gaps identified by computer adaptive assessments. This will allow teachers to use assessment for learning rather than of learning to better understand where each student in their classroom stands academically. Finding a balance between new advanced technology for education and in-person instruction by a teacher, while fostering relationships between students, will prove to be the challenge going forward for educators. We are entering a whole new realm of possibilities in education.


References

Amrein‐Beardsley, A. (2022). Using standardized tests for educational accountability: The bad idea that keeps on giving nothing in return. Journal of Policy Analysis and Management, 41(4), 1226–1232. https://doi.org/10.1002/pam.22426

Benton, T. (2021). Item response theory, computer adaptive testing and the risk of self-deception. Research Matters, (32), 82–100.

Cherry, K. (2023, October 3). A short-term memory experiment you can try at home. Verywell Mind. https://www.verywellmind.com/a-short-term-memory-experiment-2795664

Collares, C. F., & Cecilio‐Fernandes, D. (2018). When I say … computerised adaptive testing. Medical Education, 53(2), 115–116. https://doi.org/10.1111/medu.13648

Colwell, N. M. (2013). Test anxiety, computer-adaptive testing, and the Common Core. Journal of Education and Training Studies, 1(2). https://doi.org/10.11114/jets.v1i2.101

Cope, B., & Kalantzis, M. (2015). Assessment and pedagogy in the era of machine-mediated learning. Education as Social Construction: Contributions to Theory, Research, and Practice, 350–374.

Cope, B., & Kalantzis, M. (2016). Big Data comes to school: Implications for learning, assessment, and research. AERA Open, 2(2), 1–19. https://doi.org/10.1177/2332858416641907

Cope, B., & Kalantzis, M. (2022). New learning: Elements of a science of education. Cambridge University Press/Common Ground Research Networks.

Dai, J., Gu, X., & Zhu, J. (2023). Personalized recommendation in the adaptive learning system: The role of adaptive testing technology. Journal of Educational Computing Research, 61(3), 523–545. https://doi.org/10.1177/07356331221127303

Edmentum. (2019, July 24). Making sense of computer adaptive tests [Video]. YouTube. https://youtu.be/dbuiwdBl8RU?feature=shared

Epp, L. (2023, May 1). Embracing the journey: Lifelong learning and the zone of proximal development. https://lorneepp.com/embracing-the-journey-lifelong-learning-and-the-zone-of-proximal-development/

fintelics. (2023, June 13). The power of AI in education: Personalized learning, intelligent tutoring, and beyond [Video]. YouTube. https://youtu.be/Lxil-oEo-XU?feature=shared

Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2, 1–10. https://doi.org/10.1016/j.caeai.2021.100017

Kensington, J. (2024, January 7). Navigating the future of education: The role of AI adaptive learning. The Multiplication Hustle. https://www.multiplicationhustle.com/navigating-the-future-of-education-the-role-of-ai-adaptive-learning/

Education Quality and Accountability Office. (2020). Leveraging multi-stage computer adaptive testing for large-scale assessments (pp. 1–8).

Martin, A., & Lazendic, G. (2017). Computer-adaptive testing: Implications for students’ achievement, motivation, engagement, and subjective test experience. Journal of Educational Psychology, 110(1), 27–45. https://doi.org/10.1037/edu0000205.supp

NWEA. (2024). MAP Student profile. https://teach.mapnwea.org/nextgen-report/students/profile

Schwarz, J. (2022, April 26). Computer adaptive testing. The Assessment Workshop. https://www.assessmentworkshop.com/2022/04/25/computer-adaptive-testing/

Test Partnership. (2023a, March 20). What is item response theory? [Video]. YouTube. https://youtu.be/d1FTKVAPlxY?feature=shared

Test Partnership. (2023b, April 13). What are computer adaptive tests (CAT)? [Video]. YouTube. https://youtu.be/gIhirk2XAPo?feature=shared

The Principal Math Tutor. (2024, January 7). What is AI adaptive learning [Video]. YouTube. https://youtu.be/XJhqwgKD1Mg?feature=shared

Thompson, N. (2022, May 12). AI in assessment: Introduction to computerized adaptive testing [Video]. YouTube. https://youtu.be/c5RWytwscBM?feature=shared

Thompson, N. (2024, April 4). Multistage testing. Assessment Systems. https://assess.com/multistage-testing/