The Role of QuesTReview™ in evaluating Patient Reported Outcome (PRO) measures

By Keith Meadows, PhD, CMRS, HealthSurveySolutions, Banbury, UK

Introduction


No research paper on the development of a patient-reported outcome (PRO) measure would be considered complete or acceptable without a detailed description of its psychometric properties in support of its reliability and validity. However, this information is only half the story of the instrument’s quality and its ability to provide insightful information.


Following good questionnaire design practice is essential in the development of any health survey questionnaire, PRO measure or not, if the data collected are to be reliable and valid.
Writing good survey questions is a complex task. Every item can be worded in different ways, and it is not always clear which wording is optimal. For example, the following question appears acceptable at face value:
When sitting down, do you ever feel short of breath?
Compare it, however, with:
Do you ever feel short of breath when sitting down?
The latter question has a right-branching structure, in which the question starts with the subject followed by the modifier, so the respondent has to hold less information in memory to give a correct response. Design issues such as this are rarely reported as part of a PRO’s development process.
Question characteristics such as wording, word length, question type, reading ease and the choice of appropriate response options are well known among survey designers, and over time researchers have developed recommendations for writing questions based on evidence of how these characteristics affect outcomes.


There are also less obvious design issues that can affect the validity of respondents’ answers to a survey questionnaire: the appropriate use of balanced and unbalanced rating scales, the left- and right-branching question structures discussed above, and placing a specific question after a general question to avoid biased responses are just three examples. These features will not necessarily be reflected in the instrument’s psychometric properties, because respondents will generally provide an answer to a question regardless of its design quality. Further factors to take account of in the design of a questionnaire include maintaining respondent engagement, reducing respondent burden and long response times, and avoiding straight-lining and satisficing.


As mobile surveys become mainstream, with about 20% of all surveys now taken on a mobile device, it is important to understand best practice for mobile survey design and the expectations of mobile respondents. The design of motivational welcome screens, the number of questions per screen, inappropriate question formats, the use of scrolling, and motivational techniques such as a progress bar all need to be considered to increase the quality of information obtained from the respondent.
There are a number of methods for assessing the quality of questions prior to fielding, including expert review, laboratory-based methods such as cognitive interviews (CI), and field-based methods.1,2,3
The choice of method is shaped by many factors, with budget and development timelines the major ones. Expert review, which requires no data collection, is the least expensive, while field-based methods such as pre-testing are more costly and time-consuming.
Laboratory methods such as CI can provide evidence of the comprehensibility problems respondents might have, but CI will not necessarily pick up all the issues that expert review will. Research evidence indicates that a combination of expert review and CI provides the best predictions of question accuracy.4
This article introduces QuesTReview™, our expert review tool, which evaluates a survey questionnaire’s ability to collect reliable and unbiased data before it goes into the field.


What is QuesTReview™?
QuesTReview™ is our structured, standardized questionnaire evaluation tool, developed to evaluate the structure and effectiveness of a questionnaire and to identify question features that are likely to lead to response error.
Grounded in recognized psychological and behavioural models of how respondents complete survey questionnaires, QuesTReview™ benchmarks the questionnaire in a step-wise manner against key parameters of good questionnaire design practice (e.g. wording, question length, knowledge and memory demands) and rates whether each question exhibits features that are likely to cause problems. Where required, these ratings are combined with recommendations for correcting each potential problem as feedback to the instrument author/developer.
Each parameter comprises a number of sub-categories. For example, the clarity parameter includes word length, ambiguity, use of technical terminology, question wording (including the use of low-frequency words), sensitivity and social desirability. The resulting quantitative and qualitative feedback is provided in the following formats:

  • Traffic-light rating for each question/item by parameter, combined with a detailed qualitative description of the identified weaknesses for each question/item.
  • Average rating score across all parameters for each question, providing an overview of each question/item’s performance and need for revision.
  • Checklist for eCOA application
  • Survey completion time
  • Overall performance rating
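For illustration only, the traffic-light ratings and per-question averages described above could be modelled along the following lines. This is a minimal sketch: the numeric rating scale, parameter names and colour mapping are assumptions for the example, not QuesTReview™'s actual (proprietary) implementation.

```python
from statistics import mean

# Hypothetical ratings per question: 0 = no issue, 1 = minor defect, 2 = major defect.
RATINGS = {
    6: {"clarity": 2, "question_length": 1, "memory_demand": 2},
    7: {"clarity": 0, "question_length": 0, "memory_demand": 1},
}

def traffic_light(rating: int) -> str:
    """Map a numeric defect rating to a traffic-light colour."""
    return {0: "green", 1: "amber", 2: "red"}[rating]

def question_summary(q: int) -> tuple[float, str]:
    """Average rating across all parameters, plus the worst traffic light."""
    scores = list(RATINGS[q].values())
    return mean(scores), traffic_light(max(scores))

avg, light = question_summary(6)  # question 6 averages ~1.67 and flags red
```

A per-question average like this supports the Figure 2-style overview, while the per-parameter colours support the Figure 1-style detail.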
[Figure 1]

Examples of QuesTReview™ feedback

  • Figure 1 illustrates that, for question 6, the sub-categories of the clarity parameter have been rated as having major defects. The qualitative feedback identifies the problems and provides suggestions on how to address them.
[Figure 2]
  • Based on our proprietary QuesTAnalyzer™ scoring algorithm, Figure 2 provides an overall view of the performance of each question across all parameters, with questions 6, 8, 10 and 11 requiring major revisions in line with the qualitative feedback.
     
[Figure 3]
  • With mobile surveys becoming mainstream in collecting patient-reported outcomes and other health and quality-of-life assessments, and particularly when applying ‘bring your own device’ (BYOD) solutions, it is essential that issues specific to online and mobile surveys are included in questionnaire evaluation in addition to comprehensibility problems. Figure 3 is a checklist of some of the dos and don’ts specific to the design of online and mobile survey questionnaires.
     
[Figure 4]
  • Survey completion time can have a significant impact on respondent burden, motivation to complete the survey, item non-response and drop-out. Using a scoring algorithm based on word and character counts, feedback is provided on the questionnaire’s completion time across the main administration modes. Figure 4 shows an example in which all administration modes apart from the smartphone fall within their respective completion timeframes.
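The word-count-based completion-time idea above can be sketched as follows. The per-mode reading speeds here are placeholder assumptions for illustration; the actual QuesTAnalyzer™ algorithm and its thresholds are proprietary and not public.

```python
# Assumed words-per-minute throughput per administration mode (placeholders,
# not QuesTAnalyzer's real figures).
ASSUMED_WPM = {"paper": 200, "web": 180, "smartphone": 150}

def estimated_minutes(questionnaire_text: str, mode: str) -> float:
    """Estimate completion time in minutes from the questionnaire's word count."""
    word_count = len(questionnaire_text.split())
    return word_count / ASSUMED_WPM[mode]

# A slower assumed speed on smartphones yields a longer estimate for the same
# text, which is how one mode can exceed its timeframe while the others pass.
text = "Do you ever feel short of breath when sitting down? " * 50
paper_est = estimated_minutes(text, "paper")
phone_est = estimated_minutes(text, "smartphone")
```

Comparing each mode's estimate against a per-mode target timeframe would then produce pass/fail feedback of the kind shown in Figure 4.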

What distinguishes QuesTReview™?
We believe that what sets QuesTReview™ apart from other questionnaire evaluation methods is, first, that it provides both quantitative and comprehensive qualitative feedback on a range of less well-known good questionnaire design practices that are unlikely to be identified during cognitive interviews. Second, adapting a paper questionnaire for, or developing, a mobile survey calls for a different set of design criteria: QuesTReview™ evaluates the survey against a checklist of key design features to ensure it is mobile-friendly.
For further information on QuesTReview™ contact the author: kmeadows@dhpresearch.com
References

1. Presser S, Blair J. Survey pretesting: do different methods produce different results? Sociological Methodology 1994;24:73-104.

2. Rothgeb J, Willis G, Forsyth S. Questionnaire pretesting methods: do different techniques and different organizations produce similar results? Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique 2007;96(1):5-31.

3. Yan T, Kreuter F, Tourangeau R. Evaluating survey questions: a comparison of methods. Journal of Official Statistics 2012;28(4).

4. Maitland A, Presser S. Using pretest results to predict survey question accuracy. International Conference on Questionnaire Design, Development, Evaluation and Testing (QDET2), 2016. https://www.amstat.org/meetings/qdet2/OnlineProgram/AbstractDetails.cfm?



 
