Episodes

  • It Depends Episode 10: Criterion, Norm, CAT & LOFT
    Nov 29 2024

This episode of the It Depends Podcast examines criterion-referenced testing, norm-referenced testing, computer adaptive testing (CAT), and linear on-the-fly testing (LOFT). Amanda and Tim explore the key differences between these assessment types, their applications, and considerations for implementation.

    Key questions answered:

    • What is the difference between criterion- and norm-referenced testing?
    • How do CAT and LOFT tests work?
    • When should organizations consider using CAT or LOFT?


    Topic Keywords:

    Assessment, Criterion Referenced, Norm Referenced, CAT, LOFT, Adaptive Testing, Test Security, Item Banking, Test Design


    Key points from the discussion:


    • Criterion-referenced testing measures performance against established standards/benchmarks
    • Norm-referenced testing compares performance against other test-takers
    • CAT requires large item banks and unidimensional content (see the sketch after this list)
    • LOFT offers test security benefits but needs substantial item development
    • Both CAT and LOFT require sophisticated testing platforms
    • Organizations should carefully evaluate their needs and resources before implementing adaptive testing
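
    To make the adaptive mechanics concrete, here is a minimal, illustrative sketch of a CAT loop; it is not any vendor's engine, and every name in it is invented. Operational CAT systems estimate ability with item response theory over a calibrated bank; the shrinking fixed-step update below is a deliberate simplification.

    ```python
    # Toy computer-adaptive testing (CAT) loop. Assumes a unidimensional,
    # calibrated item bank where ability and difficulty share one scale.
    # Real engines use IRT-based ability estimation; the shrinking step-size
    # update here is a simplification for illustration only.

    def run_cat(item_bank, respond, n_items=10, theta=0.0, step=0.5):
        """item_bank: {item_id: difficulty}; respond(item_id) -> bool."""
        remaining = dict(item_bank)
        for _ in range(min(n_items, len(remaining))):
            # Core adaptive step: administer the unused item whose difficulty
            # is closest to the current ability estimate.
            item_id = min(remaining, key=lambda i: abs(remaining[i] - theta))
            del remaining[item_id]
            # Nudge the estimate toward harder or easier content based on the
            # response, shrinking the step so the estimate settles.
            theta += step if respond(item_id) else -step
            step *= 0.8
        return theta
    ```

    The sketch also makes the item-bank requirement visible: adaptive selection only works if the bank covers difficulties across the whole ability range.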


    Register for Office Hours with Amanda Dainis and Company: https://www.dainisco.com/

    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    Ask a question: https://www.testcommunity.network/podcast-it-depends

    59 mins
  • Ep9 - Test Centre vs Online Proctoring
    Sep 20 2024

    This episode of the It Depends Podcast compares online proctoring and test centres for assessment delivery. Amanda and Tim explore the advantages and challenges of both methods, discussing candidate experience, technology reliability, security concerns, and cost factors. They debate the misconceptions surrounding each approach and emphasise the importance of considering an organisation's specific needs and budget constraints. The conversation also touches on future trends, including the impact of AI and the likelihood of both methods coexisting to meet diverse testing requirements. Overall, the episode highlights that the choice between online invigilation and test centres often "depends" on various factors unique to each testing programme.


    Key questions answered:

    • Which method is preferred or better for candidates?
    • How do the security concerns compare between online proctoring and test centres?
    • What does the future hold for online proctoring and test centres?


    Topic Keywords:

    Proctoring, Invigilation, Item banking, Psychometrics, Certification, Security, Cheating, Accessibility, Test centres


    Bonus Question: What is the future of online proctoring and test centres?


    Find the answer here: https://www.testcommunity.network/podcast-it-depends


    Register for Office Hours with Amanda Dainis and Company here: https://www.dainisco.com/


    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    Ask a question: https://www.testcommunity.network/podcast-it-depends

    50 mins
  • Ep8 - Testing on a Budget
    Aug 14 2024

    This episode of the It Depends Podcast explores testing on a budget, from test development to secure delivery. Amanda and Tim discuss the various ways organisations can implement and maintain testing programmes without necessarily investing in expensive, large-scale platforms.


    Key questions answered:

    • Are big vendor platforms always the best option for your organisation if you're testing on a budget?
    • Is it possible to do item banking on a budget?
    • Can you do secure test delivery on a budget?


    Topic Keywords:

    Item banking, test security, remote proctoring, budget testing solutions, data analysis, operational validity, accreditation requirements, scalability, hidden costs, licensing.


    Bonus Question: If you need a bigger platform, what tips are there to reduce the price?


    Find the answer here: https://www.testcommunity.network/podcast-it-depends


    Register for Office Hours with Amanda Dainis and Company here: https://www.dainisco.com/


    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    Ask a question: https://www.testcommunity.network/podcast-it-depends

    38 mins
  • Ep 7 - Performance-Based Testing
    Jul 16 2024

    This episode of the It Depends Podcast explores performance-based testing in professional certification and licensure. Amanda and Tim discuss the benefits and challenges of performance tests compared to traditional knowledge-based exams, covering various assessment types, implementation considerations, costs, and future trends in performance-based assessment.

    Key questions answered:

    • What is performance-based testing?
    • What are some advantages of performance tests?
    • What challenges come with performance-based tests?

    Topic Keywords:

    Psychometrics • Validity • Reliability • Item Response Theory • Job Task Analysis • Cut Scores • Performance-Based Assessment • Test Security • Equating • Bias and Fairness

    Bonus Question: What future trends do you see in performance testing and how should organizations prepare to adapt to these changes?

    Find the answer here: https://www.testcommunity.network/podcast-it-depends

    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    Ask a question: https://www.testcommunity.network/podcast-it-depends

    39 mins
  • Episode 6 - Exam Maintenance
    Jun 20 2024

    Top Episode Takeaways:


    📈 Test maintenance involves regularly monitoring test and item performance statistics to ensure content remains current and secure, and should be conducted annually at minimum or as frequently as monthly for large programs.


    🔍 The more frequently a test is administered, the more often maintenance should be performed, including reviewing key test-level statistics, item-level metrics, and qualitative candidate feedback to proactively identify issues.


    🛠️ Typical maintenance steps include analyzing test passing rates, reliability, item difficulty, discrimination, and distractor performance, investigating qualitative trends, and engaging subject matter experts to replace problematic items following standard development procedures.
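
    As a concrete companion to these maintenance steps, the sketch below computes two of the item-level metrics mentioned: classical difficulty (the proportion of candidates answering correctly) and a point-biserial discrimination index. The flagging thresholds are common rules of thumb rather than fixed standards, and the response data is invented.

    ```python
    # Classical test theory item statistics from a 0/1 scored response matrix
    # (rows = candidates, columns = items). Thresholds are rules of thumb only.
    import numpy as np

    def item_stats(scores: np.ndarray):
        difficulty = scores.mean(axis=0)  # p-value: proportion answering correctly
        totals = scores.sum(axis=1)
        discrimination = np.array([
            # Point-biserial: correlate each item with the total score
            # excluding that item, so an item cannot correlate with itself.
            np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
            for j in range(scores.shape[1])
        ])
        # Flag very easy/hard items and items that fail to separate stronger
        # candidates from weaker ones.
        flagged = (difficulty < 0.2) | (difficulty > 0.9) | (discrimination < 0.2)
        return difficulty, discrimination, flagged

    # Invented example: 5 candidates by 4 items.
    scores = np.array([[1, 1, 0, 1],
                       [1, 0, 0, 1],
                       [0, 1, 0, 0],
                       [1, 1, 1, 1],
                       [0, 0, 0, 1]])
    difficulty, discrimination, flagged = item_stats(scores)
    ```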

    Bonus Question: Can you list 3 key flags or indicators that would alert you to a problem with the maintenance of a test?

    Find the answer here: https://www.testcommunity.network/podcast-it-depends

    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    Ask a question: https://www.testcommunity.network/podcast-it-depends

    39 mins
  • Ep5: Test Blueprints
    May 12 2024

    Top Episode Takeaways:

    📐 Test blueprints provide the overall design and structure for an assessment. They define what content areas the test will cover and what proportion of the test each content area should comprise. Having a blueprint helps ensure the test adequately measures the intended knowledge and skills in a balanced way that reflects their importance.


    🎯 Blueprints should be created for most types of assessments, including certification/licensure exams, educational tests, and even workplace skills tests. The blueprint is developed after conducting a job task analysis, curriculum review, or other process to determine the key knowledge and skills to be assessed. Subject matter experts should be involved in validating the blueprint.


    ✅ A good test blueprint will cover all the major content areas without overemphasizing or underemphasizing any topics, allocate questions to each content area proportionately to their importance, be validated by representative subject matter experts to ensure it reflects real-world requirements, be transparently shared with stakeholders so they understand what the test measures, and be updated periodically as the field evolves to ensure the test remains relevant and valid.
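
    As a small illustration of the proportional allocation described in the last takeaway, the sketch below turns hypothetical content-area weights (e.g. from a job task analysis) into item counts for a fixed-length form. The domains and weights are invented; largest-remainder rounding keeps the counts summing exactly to the test length.

    ```python
    # Turn blueprint content-area weights into item counts for a fixed-length
    # form. Domains and weights below are invented for illustration.

    def allocate_items(weights: dict, test_length: int) -> dict:
        total = sum(weights.values())
        raw = {area: test_length * w / total for area, w in weights.items()}
        counts = {area: int(share) for area, share in raw.items()}  # floor each share
        shortfall = test_length - sum(counts.values())
        # Hand any leftover items to the areas with the largest fractional
        # remainders so the counts sum exactly to the test length.
        for area in sorted(raw, key=lambda a: raw[a] - counts[a], reverse=True)[:shortfall]:
            counts[area] += 1
        return counts

    # Example: a 100-item exam over four domains weighted by importance.
    blueprint = allocate_items(
        {"Domain A": 0.35, "Domain B": 0.30, "Domain C": 0.20, "Domain D": 0.15},
        test_length=100,
    )
    # -> {'Domain A': 35, 'Domain B': 30, 'Domain C': 20, 'Domain D': 15}
    ```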

    Bonus Question: Is there a particular platform or technology that you like to use when creating test blueprints?


    Find the answer here: https://www.testcommunity.network/podcast-it-depends


    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    Ask a question: https://www.testcommunity.network/podcast-it-depends

    28 mins
  • Ep4: Exploring Item Types
    Apr 18 2024

    Top Episode Takeaways:

    📝 Multiple choice questions are the most common item type, but they can be done well or poorly. A good multiple choice question takes effort to create but can measure a wide range of knowledge and skills.


    🎥 Technology-enhanced items (e.g., audio/video clips, drag-and-drop) can make assessments more engaging, but they should be used purposefully to measure relevant skills, not just for the "wow" factor.


    🚢 Performance-based assessments, such as simulations, are the gold standard for measuring practical skills but can be resource-intensive. The key is finding the right balance based on the content, resources, and scoring considerations.


    Episode Keywords: Item types, Multiple choice, Technology-enhanced items, Performance-based tests, Practical exams, Simulations, Rubrics, Scoring, Bias, Psychometricians

    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    Ask a question: https://www.testcommunity.network/podcast-it-depends

    38 mins
  • Ep3: Cut Scores Explained
    Mar 22 2024

    Top Episode Takeaways:

    ✂️ Cut scores are important decision points in criterion-referenced tests that categorize test takers as passing or failing. They should be set thoughtfully using a defensible process, not arbitrarily (e.g. always using 70%). Setting the cut score too high or low can have significant consequences.


    📏 The Angoff method is a common way to set cut scores. It involves having subject matter experts review test questions and estimate what percentage of minimally competent test takers would answer each question correctly. This provides a way to link the cut score to the test content and the minimum performance standard for the job or domain being tested. A small worked example follows these takeaways.


    ⚖️ Cut scores may need to be revisited and adjusted over time if the test content changes substantially. Small tweaks may not require changing the cut score, but replacing many items with ones that perform differently in terms of difficulty may necessitate an adjustment to keep the cut score fair and aligned with the intended minimum competency standard. Ongoing evaluation is important.
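
    As a small worked example of the Angoff method described above (with invented ratings): each subject matter expert estimates, for every item, the proportion of minimally competent candidates who would answer it correctly; averaging over items and then over the panel yields the raw cut score.

    ```python
    # Angoff-style cut score from SME ratings. ratings[s][i] is SME s's
    # estimate, as a proportion 0-1, that a minimally competent candidate
    # answers item i correctly. All numbers are invented for illustration.

    def angoff_cut_score(ratings):
        per_sme = [sum(sme) / len(sme) for sme in ratings]  # each SME's implied cut
        return sum(per_sme) / len(per_sme)                  # average across the panel

    # Three SMEs rating a five-item test:
    ratings = [
        [0.70, 0.55, 0.65, 0.85, 0.50],
        [0.75, 0.60, 0.60, 0.80, 0.55],
        [0.80, 0.50, 0.70, 0.90, 0.45],
    ]
    cut = angoff_cut_score(ratings)  # ~0.66: a content-derived cut score of
                                     # about 66%, rather than an arbitrary 70%
    ```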


    Episode Keywords: Validity, Reliability, Construct, Cut score, Angoff method, Item analysis, Fairness, Rubric, Feedback, Blueprint.

    Connect with us:

    Amanda Dainis, PhD, MPA: https://www.linkedin.com/in/amanda-dainis-phd-mpa-012b94119/

    Tim Burnett: https://www.linkedin.com/in/tburnett/

    A proud supporter of the Test Community Network: https://www.testcommunity.network

    38 mins