
Guide to Analyzing Test Results for Research Studies

Last updated on May 27, 2025

After publishing your research study and collecting participant responses, Crowd offers a suite of tools to analyze and interpret your data. This guide provides a detailed, step-by-step process to access, evaluate, and apply your test results for decision-making.

Step 1: Accessing the Results Interface

Begin by navigating to the appropriate section of the Crowd dashboard to review your data:

  • From Research Studies: Go to the "Research Studies" tab, then click the "View Responses" button on the sidebar of the Research Studies homepage to enter the results area.

  • From Recruit Testers: Alternatively, access the "Recruit Testers" page and switch to the "Results" tab to transition from recruitment to analysis.

Upon entry, the interface is organized into four specialized sections: Responses, Metrics, Sessions, and AI Analysis, each designed to provide unique insights into your study’s performance.

Step 2: Overview of Responses Analytics

At the top of the results page, a dedicated bar provides an immediate overview of key analytics to assess the study’s progress and engagement:

  • Participant Sessions: Displays the total number of sessions captured for the study, representing every instance a tester interacted with the test. This figure indicates the reach and initial engagement level.

  • Responses Submitted: Shows the total number of participants who completed the study, reflecting the number of fully recorded datasets available for analysis.

  • Completion Rate: Presents the percentage of participants who finished the test relative to those who started, calculated as (Responses Submitted / Participant Sessions) × 100; the sketch after this list reproduces the calculation. A high rate suggests a well-designed, engaging test, while a low rate may indicate usability issues or excessive length.

  • Average Duration: Indicates the average time participants spent on the test, measured in minutes or seconds. This metric helps evaluate test complexity—shorter durations may imply simplicity, while longer ones could suggest thorough engagement or potential confusion.
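
Crowd calculates these figures for you, but they are straightforward to reproduce from exported data, which can help when verifying numbers or building custom reports. Below is a minimal Python sketch assuming a hypothetical export in which each session record carries a completed flag and a duration in seconds (the field names are illustrative, not Crowd's actual schema):

```python
from statistics import mean

# Hypothetical export: one record per participant session (illustrative fields).
sessions = [
    {"completed": True,  "duration_sec": 312},
    {"completed": True,  "duration_sec": 254},
    {"completed": False, "duration_sec": 48},
]

participant_sessions = len(sessions)                         # every tester interaction
responses_submitted = sum(s["completed"] for s in sessions)  # fully completed studies

# Completion Rate = (Responses Submitted / Participant Sessions) x 100
completion_rate = responses_submitted / participant_sessions * 100

# Average Duration across all captured sessions, in seconds
average_duration = mean(s["duration_sec"] for s in sessions)

print(f"Completion rate: {completion_rate:.1f}%")    # 66.7%
print(f"Average duration: {average_duration:.0f}s")  # 205s
```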

Step 3: Navigating Result Sections

Responses: Individual and Aggregate Insights

  • Aggregate Results: View cumulative data as percentages (e.g., 33% of respondents selected Option A), offering a snapshot of overall trends.

  • Individual Responses: Examine each participant’s answers with their anonymous unique tester ID (e.g., Tester ID #67890), enabling detailed tracking of individual input.

Metrics: Quantitative Performance Overview

  • Completion vs. Abandonment: Highlights the ratio of completers to drop-offs, indicating potential friction points.

  • Time Graph: Plots response times over the test period, revealing patterns in participant pacing.

  • Pie Chart Breakdowns: Visualize key demographics and engagement patterns through pie charts (reproduced in code in the sketch after this list), including:

    1. Responses by Countries: Displays the distribution of responses across different countries, helping identify geographic trends (e.g., 40% from the USA, 30% from India).

    2. Sources: Shows the proportion of participants by recruitment source (e.g., hire, shareable link, email invite, web prompt), useful for evaluating recruitment effectiveness.

    3. Devices: Breaks down responses by device type (e.g., 60% mobile, 30% desktop, 10% tablet), aiding in device-specific usability analysis.

    4. Operating Systems: Illustrates the operating systems used by participants (e.g., 50% Android, 40% iOS, 10% Windows), highlighting platform-specific performance.
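
Each pie chart is simply a percentage distribution over one field of the response data. If you export responses for your own analysis, the same breakdowns can be reproduced with a small helper; the sketch below assumes hypothetical field names (country, device, source) rather than Crowd's actual export schema:

```python
from collections import Counter

# Hypothetical export: one record per submitted response (illustrative fields).
responses = [
    {"country": "USA",   "device": "mobile",  "source": "shareable link"},
    {"country": "USA",   "device": "desktop", "source": "email invite"},
    {"country": "India", "device": "mobile",  "source": "hire"},
]

def breakdown(records, field):
    """Percentage distribution of one field, matching a pie chart's slices."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: round(count / total * 100, 1) for value, count in counts.items()}

print(breakdown(responses, "country"))  # {'USA': 66.7, 'India': 33.3}
print(breakdown(responses, "device"))   # {'mobile': 66.7, 'desktop': 33.3}
```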

Sessions: Behavioral Replay

  • Select a session and click "Play" to replay a participant’s journey, including clicks, scrolls, and pauses. This qualitative view complements the quantitative metrics and is especially useful for usability diagnostics. Sessions can also be deleted from this view.

AI Analysis: Summaries and Chat

  • Access AI-driven insights by clicking the "AI Analysis" button, which directs you to the AI Summaries section.

  • Click "Generate Summary" to produce a synthesized report detailing themes (e.g., common usability issues), sentiment (positive/negative feedback), recommendations (e.g., redesign suggestions), and limitations (e.g., small sample size).

  • Note that when new responses are added, the AI will notify you, prompting you to regenerate the summary to incorporate the latest data.

  • Alternatively, interact with your data using the AI Chat (Chat with Your Data) feature, available on your dashboard or homepage by clicking the "AI Chat" button. For detailed guidance, refer to the AI Chat Documentation.

Applying Filters for Targeted Analysis

Customize your data view by clicking the "Filters" button at the top:

  • Options include country, duration, date, recruitment method (hire, shareable link, email invite, web prompt), device type, and browser.

  • Example: Filter by "Mobile" devices to assess mobile-specific performance; the sketch after this list mirrors this logic in code.

  • For a comprehensive guide to these filters, see the dedicated filters documentation.
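
Conceptually, stacking filters narrows the view to the records that satisfy every criterion. The sketch below mirrors that behavior on exported data, assuming AND semantics and the same hypothetical field names as the earlier sketches; it illustrates the concept rather than Crowd's internal implementation:

```python
# Hypothetical records and field names, as in the earlier sketches.
responses = [
    {"country": "USA",   "device": "mobile",  "source": "shareable link"},
    {"country": "USA",   "device": "desktop", "source": "email invite"},
    {"country": "India", "device": "mobile",  "source": "hire"},
]

def apply_filters(records, **criteria):
    """Keep only records matching every supplied criterion (assumed AND semantics)."""
    return [
        r for r in records
        if all(r.get(field) == value for field, value in criteria.items())
    ]

# Example from the guide: isolate mobile responses to assess mobile performance.
mobile_only = apply_filters(responses, device="mobile")
print(len(mobile_only))  # 2

# Criteria stack: mobile responses recruited via a shareable link.
print(apply_filters(responses, device="mobile", source="shareable link"))
```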

Managing Study Settings

Access additional controls by clicking the "Settings" icon on the Responses page:

  • Edit Study: Adjust questions or settings mid-study. Refer to Editing Your Research Study for details.

  • Stop Responses: Cease new submissions to finalize data collection, ideal for time-sensitive studies.

  • Recruit Participant: Invite more testers to boost participation. Learn more in Recruiting Participants.

  • Delete: Permanently remove the study and data, requiring confirmation to prevent accidental loss.

Practical Tips for Effective Analysis

  • Iterative Review: Regularly revisit Metrics and Sessions to track progress and adjust live studies.

  • Contextual Filtering: Use device or location filters to address specific user groups.

  • Session Correlation: Cross-reference session replays with Metrics to pinpoint usability issues.

  • Secure Sharing: Password-protect shared links if sensitive data is involved.

  • Compliance: Adhere to data privacy regulations when handling participant information.

Common Challenges and Solutions

  • Result Inaccessibility: Confirm test publication status or user permissions; escalate to support if unresolved.

  • Filter Misapplication: Reset filters and reapply with narrower criteria if data is missing.

  • Session Playback Failure: Verify session recording settings and participant device compatibility.

Related Resources

  • Editing Your Research Study

  • Recruiting Participants

  • AI Chat Documentation

For further assistance, reach out via in-app chat or email support@crowdapp.io.