
Research studies

Get started with creating, publishing, and analyzing research studies to validate hypotheses and gain deeper insights from your users.
By Crowd team
14 articles

Share Your Research Studies with Participants Via Email

Crowd provides multiple methods to distribute your user research study to participants: sharing a direct URL link, embedding the study on your website, hiring testers from the participant pool, or sending email invitations. This guide focuses on the email invitation method, which lets you target specific individuals or groups directly and ensures you gather feedback from the right audience.

Benefits of Using Email Invitations

Email invitations are ideal when you need to:
- Gather feedback from specific customers, users, or stakeholders.
- Target participants with particular expertise or experience.
- Reach a known group, such as a mailing list, customer database, or internal team.
- Track exactly who has received and responded to your study invitation.

This method offers precision and control, making it well suited to personalized research outreach.

Steps to Send Email Invitations

Step 1: Access the Email Invitation Feature
Begin by accessing the email invitation feature on your Crowd dashboard. Navigate to Workspace > Research Studies > Create with Templates > Publish Your Study, then click the "Go Live" button and select "Invite with Email" to open the setup modal. For guidance on creating and publishing a study, refer to "Creating Your First Research Study in Crowd".

Step 2: Configure Your Email Campaign
In the setup modal:
1. Campaign Name: Enter a name for your campaign (visible only to you and your workspace members, e.g., "Customer Feedback Study 2025"). Note: You can create multiple campaigns.
2. Add Recipient Emails (a quick validation sketch appears after these steps):
   - Option A: Enter emails manually. Input email addresses directly, separating each with a comma (e.g., user1@example.com, user2@example.com).
   - Option B: Upload a CSV file (recommended for larger groups). Prepare a CSV file with email addresses in a single column, one per row. Click "Upload CSV", then select your file. Ensure proper formatting by downloading the Template CSV File.
3. Verify the number of email invites available, as displayed by the system (limits vary by pricing plan).

Step 3: Customize Recruitment Settings (Optional)
Tailor the participant experience by adjusting recruitment settings before sending. Options include redirecting participants to a custom URL upon completion, recording screen activity (e.g., mouse movements, clicks) for deeper insights, or removing Crowd branding for a professional look. Configure these settings to align with your study goals. Learn more about additional recruitment settings.

Step 4: Review and Send Invitations
1. Double-check your campaign name and email addresses for accuracy.
2. Use the "Click to Get a Study Email" option at the top of the modal to send a test email to yourself and preview the invitation.
3. Confirm the number of participants aligns with your expectations, noting your plan's email invite limit.
4. Click "Send Email Invite" to distribute the invitations; each recipient will receive an email containing a unique link to access your study.

Step 5: Monitor Participation
1. Track your study's performance directly on the recruit page under the performance section, where you can view key metrics: the number of emails sent, the number of people who started the test, and the number who completed it.
2. You can also manage the campaign here: edit it, stop it, or view responses.
3. Additionally, monitor participation by checking the results page for detailed response data. For more on analyzing results, see "Guide to Analyzing Test Results for Research Studies".
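The CSV option expects one address per row in a single column. As a rough illustration (not Crowd's code, and the header row expected by the Template CSV File may differ), the sketch below shows one way to deduplicate and sanity-check an email list before pasting it into the modal or saving it as a CSV:

```typescript
// Illustrative only: validate and deduplicate an email list before pasting it
// into the modal (Option A) or saving it as a single-column CSV (Option B).
// The header name "email" is an assumption; download the Template CSV File
// from the modal to confirm the expected layout.

const rawList = `
user1@example.com
user2@example.com
user1@example.com
not-an-email
`;

const emails = Array.from(
  new Set(
    rawList
      .split(/[\s,]+/)                                       // accept newline- or comma-separated input
      .map((e) => e.trim().toLowerCase())
      .filter((e) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(e))   // basic format check
  )
);

// Comma-separated string for manual entry
console.log(emails.join(", "));

// Single-column CSV for file upload (assumed header)
console.log(["email", ...emails].join("\n"));
```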
Tips for Successful Email Invitations

Maximize response rates with these best practices:
- Timing: Send emails in the morning for higher open rates.
- Subject Line: Use clear, specific subject lines (e.g., "We Need Your Feedback on Our New Product").
- Brevity: Keep the email concise, with straightforward instructions.
- Value Proposition: Explain how their feedback will make an impact (e.g., "Help us improve your experience").
- Time Commitment: Be transparent about the study duration (e.g., "This will take 5 minutes").
- Incentives: Highlight any rewards for participation (e.g., "Complete the study for a $10 voucher").
- Follow-Up: Send a polite reminder to non-responders after a few days.

Troubleshooting Common Issues
- Emails Not Sent: Verify email addresses are correctly formatted and within your plan's invite limit.
- Low Response Rates: Adjust your subject line or timing, and ensure the email clearly communicates value.
- Participants Can't Access Study: Confirm the study link is active and test it using the preview option.

Related Resources
- How to Install Crowd on My Website
- Hiring Participants from a pool of Testers
- Launching Your Study to Your Website Audience

For further assistance, contact our support team at support@crowdapp.io.

Last updated on May 28, 2025

Get High-Quality Participants with Screener

The Screener feature in Crowd allows you to filter participants for your research studies by setting specific criteria, ensuring only qualified participants proceed. This eliminates the need for manual filtering after data collection.

Current Limitations
- Conditions between screener questions are not supported.
- Be cautious when setting up panel orders with screening to ensure proper participant flow.

Why Use a Screener Instead of Multiple-Choice Questions?

While you can use multiple-choice questions with conditional logic to filter participants, the Screener feature is recommended for the following reasons:
- Comprehensive Evaluation: The Screener assesses all criteria before excluding participants, unlike multiple-choice setups where participants may be excluded step by step.
- Dedicated Rejection Screen: The Screener provides a specific rejection screen for disqualified participants, unlike a shared end screen in multiple-choice setups.
- Cost Efficiency: When using hired testers, you are only charged for qualified participants with the Screener. Excluding testers via conditional logic may still incur costs for each participant who starts the study.

Adding a Screener to Your Study

Follow these steps to add a Screener to your research study in Crowd:
1. Navigate to Research Studies:
   a. From the workspace homepage, locate the "Research Studies" section on the dashboard.
   b. Click "Research Studies" to proceed.
   c. Then click "Create with Template."
2. Select a Template or Start from Scratch:
   a. On the template page, choose a template that suits your study needs, or click "Start from Scratch" to build a study manually.
3. Enter Study Details and Enable the Screener:
   a. On the "Create a Study" page, under the "Study Details" section, enter the study name.
   b. Select the accepted device for participants: "All devices," "Desktop only," or "Mobile only."
   c. In the "Screener" section, toggle the switch to "On" to enable the screener for filtering participants. To disable it, toggle the switch to "Off."
4. Add Screening Questions: With the screener enabled, add at least two qualifying questions (two is the required minimum). You can add or delete questions as needed. For each question, choose the question type:
   a. Single Choice: Participants can select only one option. Each answer choice can be assigned one of the following:
      - Qualify: The participant meets the criteria and can proceed with the study.
      - Disqualify: The participant is excluded and directed to the rejection screen.
   b. Multiple Choice: Participants can select multiple options. Each answer choice can be assigned one of the following:
      - Qualify: The participant meets the criteria and can proceed.
      - Disqualify: The participant is excluded if this option is selected, even if other qualifying options are chosen.
      - Not Relevant: The participant can proceed only if this option is selected alongside a "Qualify" choice. Selecting only "Not Relevant" without a "Qualify" choice will disqualify the participant.
   c. Add answer choices by clicking "Add new choice" (e.g., "Option 1," "Option 2"), and assign the appropriate logic ("Qualify," "Disqualify," or "Not Relevant" for multiple-choice questions). A short sketch of this qualification rule follows these steps.
5. Preview Your Screener:
   - Ensure your criteria align with the rule: participants must select at least one "Qualify" option and zero "Disqualify" options to be eligible.
   - Click "Preview" to test the screener flow and ensure questions are clear.
6. Add Blocks to Create the Study:
   - Add additional blocks to design the rest of your study as needed.
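To make the qualification rule concrete, here is a minimal sketch (illustrative only, not Crowd's implementation) of how the Qualify, Disqualify, and Not Relevant assignments described above resolve a participant's selections:

```typescript
// Sketch of the screening rule: a participant proceeds only with at least one
// "Qualify" selection and zero "Disqualify" selections. Not Crowd's actual code.
type ChoiceLogic = "qualify" | "disqualify" | "not_relevant";

// `selected` holds the logic assigned to each choice the participant picked.
function passesScreener(selected: ChoiceLogic[]): boolean {
  const hasQualify = selected.includes("qualify");
  const hasDisqualify = selected.includes("disqualify");
  // "Not Relevant" on its own is not enough to proceed.
  return hasQualify && !hasDisqualify;
}

console.log(passesScreener(["qualify", "not_relevant"])); // true  -> proceeds
console.log(passesScreener(["qualify", "disqualify"]));   // false -> rejection screen
console.log(passesScreener(["not_relevant"]));            // false -> rejection screen
```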
Click here to view How to Create and Launch a Research Study.

Setting Up Participant Screening Logic

When configuring your Screener, you can add questions to filter participants:
- Question Types: Choose between single-select (one answer) or multi-select (multiple answers).
- Screening Logic:
  - Qualify: The participant meets the criteria and can proceed.
  - Disqualify: The participant is excluded if this option is selected.
  - Not Relevant (Multiple Choice Only): The participant can proceed only if selected alongside a "Qualify" choice. Selecting only "Not Relevant" without a "Qualify" choice leads to disqualification.

Best Practices
- Include at least two questions to create a robust screener.
- Avoid overly complex questions to reduce participant confusion.
- Preview the screener to ensure the logic works as intended.

Important Notes
- Participants cannot go back and change their answers; this prevents bias.
- Conditions between screener questions are not supported.
- Always preview your screener to ensure clarity and proper functionality.

Viewing Results

Currently, screener responses are not displayed in the Results dashboard.

Related Articles
- Creating a New Study on Crowd

Need more help? Contact our support team via the chat bubble in your dashboard or email support@crowdapp.io.

Last updated on May 26, 2025

Creating Your First Research Study

Create new projects and gather instant user insights with Crowd. Follow these steps to set up your first research study and start collecting valuable feedback.

Steps to Create a Research Study

1. Initiate Study Creation
- Click "Research Studies" on the right side panel of your dashboard.

2. Choose a Template or Start from Scratch
- Select from various pre-populated templates to streamline setup, or choose "Start from Scratch" to create a study tailored to your needs. Streamline your process or find the perfect template by selecting a category from the options on your left.

3. Enter Study Details
- Provide the necessary details, including the project name. Specify the devices for participant access: choose "All devices," "Desktop only," or "Mobile only."

4. Configure Screening Options
- After selecting a template or starting from scratch, configure your screening options to pre-qualify testers. Toggle the Screener on or off and set up questions as needed. For detailed guidance, refer to the Screener Feature Help Document.

5. Customize the Welcome Screen
- Enter your project title and description. This welcome message is the first thing testers will see, so provide clear instructions or a welcoming note.

6. Add Test Blocks
- Click "Add New Block" to select a test type. Options include Simple Survey, Design Survey, Website Evaluation, Preference Test, Card Sorting, Five Second Test, and Prototype Evaluation (coming soon).
- Add multiple test types by clicking "Add New Block" again. See the use cases for blocks.
- To reorder blocks, use the reorder icon to adjust the sequence of your sections as needed.

7. Customize the "Thank You" Screen
- Personalize the final screen testers see after completing your study. Create a message that resonates with participants and leaves a positive impression.

8. Preview Your Study
- Click the preview button to experience your study from a tester's perspective and ensure everything functions as intended.

9. Publish Your Study
- Once satisfied, click the "Publish" button. In the modal that appears, select "Go Live" to launch your study, or choose "Continue Editing" to make further adjustments.

Additional Resources

For a visual walkthrough, watch the Creating Your First Research Study demo video.

Last updated on May 26, 2025

Understanding Blocks: Simple Survey

The Simple Survey feature provides users with valuable insights into their target audience's opinions, preferences, and behaviors. By including demographic questions in the survey, Crowd users can gain a better understanding of who their customers are, what they want, and why they use their product or service. This feature also helps identify multiple subgroups of users who interact with the product differently. Tailoring the user testing process to each subgroup ensures that the feedback received is relevant and actionable. Overall, the Simple Survey feature helps Crowd users make data-driven decisions, improve their products or services, and increase customer satisfaction and retention.

Steps to Create a Simple Survey
1. Select the Simple Survey Test: Start by selecting the "Simple Survey" test from the list of block options.
2. Select Answer Type: Choose the preferred answer type for each question. Options include short text, long text, multiple choice, Yes or No, checkbox, verbal response, and linear scale. Select the answer type that best fits the question to gather accurate and relevant responses.
3. Set Question Requirements: To make a question compulsory, toggle on the required button. If this is enabled, testers must answer the question to proceed. If left off, testers can skip the question.
4. Duplicate Questions: Click the duplicate button to create a copy of a question.
5. Delete Questions: Click the delete button to remove a question.
6. Duplicate Entire Sections: To duplicate an entire section or block, click the duplicate icon in the top right corner of the block.
7. Delete Entire Sections: Click the delete icon to remove an entire block or section.
8. Add New Questions: Click the "Add New Question" button to include additional questions. Multiple questions can be added within one section.
9. Enable Conditional Logic: Toggle on conditional logic to display specific questions to users based on predefined conditions that you set. A sketch of how these options fit together follows below.
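For readers who think in data structures, the following hypothetical model (field names are illustrative, not Crowd's API) summarizes how the options above fit together on a question: the answer type, the required toggle, and an optional conditional-logic rule.

```typescript
// Hypothetical shape of a Simple Survey block; names are illustrative only.
type AnswerType =
  | "short_text"
  | "long_text"
  | "multiple_choice"
  | "yes_no"
  | "checkbox"
  | "verbal_response"
  | "linear_scale";

interface SurveyQuestion {
  prompt: string;
  answerType: AnswerType;
  required: boolean;              // testers must answer before proceeding
  conditionalLogic?: {            // show this question only when a condition is met
    dependsOn: number;            // index of an earlier question in the block
    showIfValueIs: string;
  };
}

const block: SurveyQuestion[] = [
  { prompt: "How often do you use the product?", answerType: "multiple_choice", required: true },
  {
    prompt: "What stops you from using it more often?",
    answerType: "long_text",
    required: false,
    conditionalLogic: { dependsOn: 0, showIfValueIs: "Rarely" },
  },
];

console.log(block.length); // 2 questions in this section
```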

Last updated on May 27, 2025

How to Evaluate Your Website with Crowd

The Web Evaluation feature in Crowd allows you to gather detailed user feedback on your website's performance, usability, and design. By integrating this feature into your research studies, you can obtain actionable insights to enhance user experience and optimize your site. This guide provides a step-by-step process to set up and conduct a web evaluation within the Crowd platform.

Overview

The Web Evaluation tool enables you to assess how users interact with your website by collecting their observations and preferences. After installing a tracking script, you can customize the evaluation with additional questions or instructions to target specific aspects of your site, such as navigation, content clarity, or visual appeal.

Steps to Conduct a Web Evaluation

1. Navigate to Your Dashboard
- Log in to your Crowd account.
- From the main interface, locate and click on the "Research Studies" section to access your study management area.

2. Initiate a New Research Study
- To begin creating a research study in Crowd, navigate to the "Research Studies" page and click the "Create New Test" button.
- You'll have two options: select a pre-designed template tailored for various evaluation needs, or choose to start from scratch for a fully customized study.
- Next, provide a descriptive project name (e.g., "Website Usability Test") and a description outlining the study's purpose (e.g., "Evaluating user navigation and content accessibility on our homepage"). These details clarify the study's objectives for participants and help you manage multiple projects.
- For a complete step-by-step guide, refer to the Help Document: Creating Your First Research Study in Crowd.

3. Add a Web Evaluation Block
- Click the "Add New Block" button to incorporate additional test components.
- From the list of available test types, select "Web Evaluation" to proceed with this specific evaluation method.
- Paste your website URL in the box provided, and click "Add website".

4. Install the Tracking Script and Test the Connection
- Follow the on-screen installation instructions, then click "Test Connection".
- Note: A unique code is generated for you; any code shown in the example is a placeholder.

5. Add Additional Questions or Instructions
- Once the connection is confirmed, enhance your web evaluation by adding more questions or instructions for participants.
- Use the "Add New Question" button to include items such as:
  - "How easy was it to find the product page?" (e.g., multiple choice or linear scale).
  - "Please describe any issues you encountered while navigating."
- Provide clear instructions (e.g., "Spend 2 minutes exploring the homepage and note your first impressions") to guide participant behavior.
- Customize question types (e.g., short text, multiple choice) and set requirements as needed to ensure relevant feedback.

Best Practices
- Clear Objectives: Define specific goals for the web evaluation (e.g., assessing load time or user satisfaction) to focus participant responses.
- Script Installation: Ensure the tracking script is placed in the <head> section of your website's HTML to avoid tracking errors (see the sketch below for a quick way to check).
- Participant Guidance: Offer concise, actionable instructions to maximize the quality of feedback.
- Review Feedback: Analyze results post-evaluation to identify trends and areas for improvement.
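If you want to double-check the installation before clicking "Test Connection", the sketch below is one informal way to do it (an illustration, not an official Crowd tool). It fetches a page and looks for the snippet inside the <head>; the page URL and the script hint are placeholders for your own site and your unique generated snippet.

```typescript
// Rough pre-flight check: fetch your page and confirm the tracking snippet
// appears in the <head>. Both constants below are placeholders.
const PAGE_URL = "https://www.example.com";
const SCRIPT_HINT = "crowdapp"; // substring from your unique snippet (assumed)

async function scriptLooksInstalled(): Promise<boolean> {
  const html = await (await fetch(PAGE_URL)).text();
  const head = html.split(/<\/head>/i)[0] ?? "";
  return head.toLowerCase().includes(SCRIPT_HINT);
}

scriptLooksInstalled().then((ok) =>
  console.log(ok ? "Snippet found in <head>" : "Snippet not found; reinstall and retry")
);
```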
Troubleshooting Common Issues
- Connection Failure: If the "Test Connection" step fails, verify the script is correctly installed and the URL is active. Contact support if issues persist.
- Limited Feedback: If responses are sparse, add more targeted questions or extend the study duration.
- Script Not Loading: Ensure no ad blockers or website security settings are interfering with the script; adjust settings as necessary.

Next Steps

For advanced support, contact our support team via the in-app chat or email support@crowdapp.io.

Last updated on May 27, 2025

Conducting Preference Testing

Preference Testing in Crowd is a powerful research method within the Research Studies feature, designed to evaluate user preferences among multiple design options, products, or concepts. It enables Crowd users to collect actionable insights into user opinions, tastes, and decision-making processes, facilitating informed improvements to websites, applications, or marketing materials. This guide provides detailed steps, best practices, and advanced techniques to maximize the effectiveness of Preference Testing in your research studies.

When to Use Preference Testing

Preference Testing is ideal in the following scenarios:
- Design Decision-Making: Compare multiple design variants (e.g., homepage layouts) to select the most appealing option.
- Product Development: Evaluate prototypes or features to align with user expectations.
- Marketing Optimization: Test advertisements, slogans, or branding elements to identify the most effective choices.
- User Experience Refinement: Assess usability preferences to improve navigation or interface elements.

Types of Preference Testing

Crowd supports several Preference Testing methods, each suited to different research objectives:
- Paired Comparison Test: Present two options (e.g., Design A vs. Design B) and ask users to choose their preference.
- Ranking Test: Display multiple options (e.g., 3-5 designs) and ask users to rank them in order of preference.
- Rating Scale Test: Provide options with a rating scale (e.g., 1-5) for users to score each alternative based on preference or satisfaction.
- A/B Testing Variant: Compare two versions (A and B) to determine which performs better in terms of user preference.

The choice depends on the number of alternatives and the depth of insight required.

Preparing for Preference Testing

1. Defining Research Goals
- Clearly outline your objectives (e.g., "Determine the preferred color scheme for our app").
- Identify the key metrics (e.g., preference percentage, qualitative feedback) to measure success.

2. Identifying Target Participants
- Select a representative sample of your target audience (e.g., ages 18-35, frequent website users).
- Use the Screener feature to pre-qualify participants based on demographics or behavior. See how to use this feature.

3. Selecting Testing Materials
- Prepare 2 to 4 high-quality images, designs, or descriptions of the alternatives (e.g., website mockups, product photos).
- Ensure stimuli are consistent in format and resolution for fair comparison.

Creating and Administering Preference Tests

1. Initiating a Research Study
- Navigate to the "Research Studies" page in your Crowd dashboard.
- Click "Create New Test" and choose to use a template or start from scratch (see Help Document: Creating Your First Research Study).
- Name your project (e.g., "Preference Test - App Redesign") and provide a description (e.g., "Comparing design options for app interface").

2. Adding the Preference Testing Block
- Click "Add New Block" within your research study setup.
- Select "Preference Testing" from the list of test types.

3. Configuring the Preference Testing Block
- Upload Images:
  - Click the "Upload Images" button to add visual stimuli.
  - Upload a minimum of 2 and a maximum of 4 images (e.g., design variants or product options).
  - Name each image (e.g., "Design A," "Prototype 1") in the provided field for better organization and reference.
- Preview Images:
  - Use the "Preview" button to view uploaded images and ensure they display correctly.
- Delete Images:
  - To replace an image, click the "Delete" button next to the image and upload a new one.
- Add Instructions:
  - Enter pre-upload instructions (e.g., "Please review the following designs and select your favorite") in the provided text box.
- Randomized Order Option:
  - Toggle the "Randomized Order" setting on so testers see the images in different sequences, reducing order bias and improving result reliability.
- Add Follow-Up Questions:
  - After image selection, click "Add Follow-Up Question" to include questions (e.g., "Why did you prefer this design?" with multiple-choice or text options).
  - You can add and delete as many questions as required.

4. Administering the Test
- Publish the research study and invite participants via the recruitment tools.
- Ensure instructions are clear, and consider randomizing image order to minimize bias.

Analyzing and Interpreting Test Results

1. Quantitative Analysis
- Review preference percentages or average ratings for each option in the Crowd analytics dashboard (a worked sketch appears after this section).
- Identify the most preferred alternative based on aggregated data.

2. Qualitative Insights
- Analyze open-ended responses or comments to understand the reasoning behind preferences (e.g., "Preferred Design A for its simplicity").
- Use thematic analysis to categorize feedback.

3. Comparing Preferences
- Segment results by demographics (e.g., age, location) to detect preference variations.
- Export data for detailed statistical analysis if needed.

4. Reporting and Presenting Findings
- Generate a report with charts (e.g., bar graphs of preference scores) and narrative insights.
- Share findings with stakeholders to guide design or marketing decisions.
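To make the quantitative step concrete, here is a minimal sketch of turning raw selections into preference percentages. Crowd's analytics dashboard computes this for you; the data and code below are purely illustrative.

```typescript
// Tally each selected option and report its preference share. Invented data.
const selections = ["Design A", "Design B", "Design A", "Design A", "Design B"];

const counts = new Map<string, number>();
for (const choice of selections) {
  counts.set(choice, (counts.get(choice) ?? 0) + 1);
}

for (const [option, count] of counts) {
  const share = ((count / selections.length) * 100).toFixed(1);
  console.log(`${option}: ${share}% (${count} of ${selections.length})`);
}
// Design A: 60.0% (3 of 5), Design B: 40.0% (2 of 5)
```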
Best Practices for Preference Testing
- Unbiased Design: Avoid leading questions or visually favoring one option (e.g., larger images).
- Diverse Participants: Include a balanced sample to reflect your target audience.
- Clear Instructions: Provide concise guidance (e.g., "Choose the design you find most intuitive").
- Iterative Testing: Conduct multiple rounds to refine options based on initial feedback.
- Ethical Standards: Obtain consent and ensure participant anonymity.

Troubleshooting Common Issues
- Image Upload Errors: Verify file formats (e.g., PNG, JPG) and size limits; re-upload if necessary.
- Low Participation: Adjust screener criteria or increase incentives.
- Inconsistent Results: Check for order bias and retest with randomized presentation.

Next Steps

Explore advanced Preference Testing features or integrate with other Crowd tools (e.g., Web Evaluation). For support, contact our team via in-app chat or email support@crowdapp.io.

Last updated on May 27, 2025

Testing your Designs with Crowd

The Design Survey test type in Crowd is a powerful research method within the "Research Studies" feature, designed to collect targeted user feedback on a single design element, such as a website mockup, app interface, or marketing graphic. It enables Crowd users to gather actionable insights into user opinions, impressions, and usability concerns, facilitating informed improvements to designs. This guide provides detailed steps, best practices, and advanced techniques to maximize the effectiveness of Design Surveys in your research studies.

When to Use a Design Survey

Design Surveys are ideal in the following scenarios:
- Design Validation: Assess a specific design element (e.g., a new homepage layout) to ensure it resonates with users.
- User Feedback Collection: Gather detailed opinions on visual appeal, clarity, or functionality of a design.
- Iterative Design Processes: Test a design iteration before final implementation to identify areas for improvement.
- Marketing Material Review: Evaluate the effectiveness of promotional graphics or advertisements.

Types of Design Surveys

Crowd supports several Design Survey methods, each suited to different feedback objectives:
- Open-Ended Feedback: Ask testers to provide detailed comments (e.g., "What are your thoughts on this design?").
- Rating Scale Feedback: Request ratings on specific aspects (e.g., "Rate the visual appeal from 1-5").
- Multiple-Choice Questions: Offer predefined options (e.g., "Which aspect needs improvement: Colors, Layout, Text?").
- Yes/No Feedback: Use binary questions (e.g., "Does this design feel intuitive?").

The choice depends on the depth and type of feedback required.

Preparing for a Design Survey

1. Defining Research Goals
- Clearly outline your objectives (e.g., "Gather feedback on the usability of our new homepage design").
- Identify the key metrics (e.g., user satisfaction, specific pain points) to measure success.

2. Identifying Target Participants
- Select a representative sample of your target audience (e.g., users aged 25-40, frequent website visitors).
- Use the Screener feature to pre-qualify participants based on demographics or behavior. See how to use this feature in the Get High-Quality Participants with Screener article.

3. Selecting Testing Materials
- Prepare a single high-quality image of the design element (e.g., a website mockup, banner ad).
- Ensure the image is clear and optimized for viewing (e.g., appropriate resolution, visible details).

Creating and Administering Design Surveys

1. Initiating a Research Study
- Navigate to the "Research Studies" page in your Crowd dashboard.
- Click "Create New Test" and choose to use a template or start from scratch (see Creating Your First Research Study in Crowd).
- Name your project (e.g., "Design Survey - Homepage Feedback") and provide a description (e.g., "Collecting user feedback on the new homepage design").

2. Adding the Design Survey Block
- Click "Add New Block" within your research study setup.
- Select "Design Survey" from the list of test types.

3. Configuring the Design Survey Block
- Upload Image:
  - Click the "Upload Image" button to add your design.
  - Upload a single image (e.g., a homepage mockup or ad graphic).
  - Name the image (e.g., "Homepage Design") in the provided field for better organization and reference.
- Preview Image:
  - Use the "Preview" button to view the uploaded image and ensure it displays correctly.
- Delete Image:
  - To replace the image, click the "Delete" button next to the image and upload a new one.
- Add Instructions:
  - Enter instructions for testers (e.g., "Please review this homepage design and provide your feedback") in the provided text box.
- Add Feedback Questions:
  - After image upload, click "Add Feedback Question" to include questions (e.g., "What do you think of this design?" with open-ended text, or "Rate the usability from 1-5").
  - You can add and delete as many questions as required.

4. Administering the Survey
- Publish the research study and invite participants via the recruitment tools (see How to Invite Participants to Your Study via Email).
- Ensure instructions are clear and concise to encourage meaningful feedback.

Analyzing and Interpreting Survey Results

1. Quantitative Analysis
- Review ratings or multiple-choice responses in the Crowd analytics dashboard.
- Calculate average scores for specific metrics (e.g., average usability rating).

2. Qualitative Insights
- Analyze open-ended feedback to identify recurring themes (e.g., "Testers found the layout cluttered").
- Use thematic analysis to categorize comments into actionable insights.

3. Comparing Feedback
- Segment responses by demographics (e.g., age, device type) to uncover variations in feedback.
- Export data for deeper analysis if needed.

4. Reporting and Presenting Findings
- Create a report with visual aids (e.g., bar charts for ratings, quotes for qualitative feedback).
- Share insights with stakeholders to guide design refinements.

Best Practices for Design Surveys
- Focused Feedback: Ask specific questions to target key design aspects (e.g., "Is the color scheme appealing?").
- Diverse Participants: Include a balanced sample to reflect varied user perspectives.
- Clear Instructions: Provide concise guidance (e.g., "Focus on the layout and share your thoughts").
- Iterative Testing: Use feedback to refine the design and conduct follow-up surveys if needed.
- Ethical Standards: Obtain consent and protect participant privacy during the survey.

Troubleshooting Common Issues
- Image Upload Errors: Verify file formats (e.g., PNG, JPG) and size limits; re-upload if necessary.
- Low Response Rates: Adjust screener criteria or increase incentives to boost participation.
- Vague Feedback: Refine questions to be more specific (e.g., avoid overly broad prompts).

Related Documents

Enhance your research studies with these related guides:
- Creating Your First Research Study in Crowd – A step-by-step guide to initiating a research study.
- Get High-Quality Participants with Screener – Learn how to pre-qualify testers for targeted feedback.
- How to Invite Participants to Your Study via Email – Discover how to recruit testers through email invitations.
- Web Evaluation Test – Explore another test type for assessing website performance.

Next Steps

Explore other test types like Web Evaluation (see Web Evaluation Test) or integrate with additional recruitment tools. For support, contact our team via in-app chat or email support@crowdapp.io.

Last updated on May 27, 2025

Recruiting Testers for Research Studies

Recruiting the right testers is critical to the success of your research studies in Crowd. This guide outlines four distinct methods to invite participants, enabling you to gather valuable user insights tailored to your project needs. Additionally, it covers customizable recruitment settings and provides tools to monitor tester performance across these methods.

Overview of Tester Recruitment

Crowd offers flexible recruitment options to suit various research requirements, from sharing direct links to leveraging a diverse participant pool. Each method can be configured with additional settings to enhance the testing experience, and performance metrics are available to track participation effectiveness.

Methods of Recruiting Testers

1. Inviting via Sharing Link
- Process: After configuring your research study, click "Publish" and then "Go Live" in the modal to publish the test. A unique invitation link is automatically generated and copied to your clipboard.
- Sharing: At any time, navigate to the "Recruit" page and click the "Share via Link" button to copy the link for distribution via email, social media, or other channels.
- Note: This method is ideal for quick, targeted recruitment and allows flexibility in reaching specific audiences.

2. Publishing on Website
- Process: Publish the research study on your website where the Crowd tracking script is installed. This embeds the test directly into your site, making it accessible to visitors. If you haven't integrated Crowd into your website, click here.
- Use Case: Suitable for engaging your existing website users or conducting on-site usability tests.
- Reference: For detailed instructions, see the Website Publishing Help Document.

3. Hiring from Participant Pool
- Process: Access Crowd's pool of over 140,000 testers with diverse demographics (e.g., age, location, device usage) and select participants that match your study criteria.
- Use Case: Ideal for reaching a broad or specific audience when you need representative feedback.
- Reference: For setup guidance, refer to the Participant Pool Recruitment Help Document.

4. Inviting via Email
- Process: Send personalized invitations to a list of email addresses through the "Recruit" page, allowing you to target specific individuals or groups.
- Use Case: Effective for engaging existing customers or stakeholders with tailored invitations.
- Reference: For detailed steps, consult the Email Invitation Help Document.

Customizing Recruitment Settings

On the "Recruit" page, you can enhance your study with the following customizable settings:
- Redirect on Completion: Enter a URL (e.g., "https://www.example.com/thankyou") where testers will be redirected after completing the test.
- Record Screen Activity: Toggle this option on or off to enable or disable session recording of tester interactions.
- Remove Crowd Branding: Activate this setting to remove the "Powered by Crowd" branding from the test interface.

These settings allow you to tailor the testing environment to your preferences and brand requirements.

Monitoring Tester Performance

1. Understanding Recruitment Performance Insights
The "Recruit" page includes a dedicated section to track the performance of each recruitment method. This area displays the following columns:
- Sent: The number of invitations sent to testers.
- Started: The number of testers who began the test.
- Completed: The number of testers who finished the test.
This data helps you assess the effectiveness of each recruitment source and optimize future studies.

2. Analyzing Performance
- Compare completion rates across methods to identify the most effective recruitment strategy.
- Adjust participant targeting or incentives if completion rates are low.

Best Practices for Recruiting Testers
- Targeted Selection: Use screeners or demographic filters to ensure participants align with your study goals.
- Clear Communication: Provide concise instructions in invitations or on-site prompts to encourage participation.
- Incentive Alignment: Offer appropriate rewards based on the effort required (e.g., gift cards for longer tests).
- Performance Review: Regularly monitor performance metrics to refine recruitment strategies.

Troubleshooting Common Issues
- Link Not Generated: Ensure the study is published; reattempt "Go Live" if necessary.
- Low Participation: Verify invitation delivery and consider increasing incentives or adjusting settings.
- Recording Issues: Confirm "Record Screen Activity" is enabled and compatible with participant devices.

Next Steps

Explore advanced recruitment options or integrate with other Crowd features. For further assistance, contact support via in-app chat or email support@crowdapp.io.

Last updated on May 27, 2025

Conducting a Five Second Test

The Five Second Test in Crowd is a powerful research method within the "Research Studies" feature, designed to evaluate users' first impressions of a design by displaying an image for five seconds and gathering immediate feedback. It enables Crowd users to collect insights into what users notice, recall, or feel about a design in a brief moment, facilitating improvements to user interfaces, marketing materials, or branding elements. This guide provides detailed steps, best practices, and advanced techniques to maximize the effectiveness of Five Second Tests in your research studies.

1. When to Use a Five Second Test

Five Second Tests are ideal in the following scenarios:
- First Impression Evaluation: Assess the immediate impact of a design (e.g., a homepage) to ensure it communicates the intended message.
- Branding Clarity: Test logos, taglines, or ads to determine if they are memorable or clear within a short time.
- User Interface Testing: Evaluate whether key elements (e.g., call-to-action buttons) are noticeable and intuitive.
- Marketing Material Assessment: Gauge the effectiveness of promotional graphics in capturing attention quickly.

2. Types of Five Second Tests

Crowd supports several variations of Five Second Tests, depending on the feedback you seek:
- Memory Recall Test: Ask testers to recall what they saw (e.g., "What was the main message of the design?").
- Impression-Based Feedback: Gather opinions on the design's feel (e.g., "Did this design feel professional?").
- Element Identification Test: Focus on specific elements (e.g., "Did you notice the 'Buy Now' button?").
- Rating Scale Feedback: Use ratings to measure impact (e.g., "Rate how memorable this design was from 1-5").

The choice depends on the specific insights you aim to gather.

3. Preparing for a Five Second Test

3.1 Defining Research Goals
- Clearly outline your objectives (e.g., "Determine if the main call-to-action is noticeable in five seconds").
- Identify the key metrics (e.g., recall accuracy, impression ratings) to measure success.

3.2 Identifying Target Participants
- Select a representative sample of your target audience (e.g., users aged 18-35, online shoppers).
- Use the Screener feature to pre-qualify participants based on demographics or behavior. See how to use this feature in the Get High-Quality Participants with Screener article.

3.3 Selecting Testing Materials
- Prepare a single high-quality image of the design element (e.g., a webpage, ad, or logo).
- Ensure the image is optimized for quick viewing (e.g., clear contrast, minimal clutter).

4. Creating and Administering Five Second Tests

4.1 Initiating a Research Study
- Navigate to the "Research Studies" page in your Crowd dashboard.
- Click "Create New Test" and choose to use a template or start from scratch (see Creating Your First Research Study in Crowd).
- Name your project (e.g., "Five Second Test - Homepage Impression") and provide a description (e.g., "Evaluating first impressions of the new homepage design").

4.2 Adding the Five Second Test Block
- Click "Add New Block" within your research study setup.
- Select "Five Second Test" from the list of test types.

4.3 Configuring the Five Second Test Block
- Upload Image:
  - Click the "Upload Image" button to add your design.
  - Upload a single image (e.g., a homepage mockup or ad graphic).
  - Name the image (e.g., "Homepage Design") in the provided field for better organization and reference.
- Preview Image:
  - Use the "Preview" button to view the uploaded image and ensure it displays correctly.
- Delete Image:
  - To replace the image, click the "Delete" button next to the image and upload a new one.
- Add Instructions:
  - Enter instructions for testers (e.g., "You will see an image for five seconds. Be prepared to answer questions about what you noticed.") in the provided text box.
- Add Follow-Up Questions:
  - After image upload, click "Add Follow-Up Question" to include questions (e.g., "What did you notice first?" with open-ended text, or "Did you see the main call-to-action?" with yes/no options).
  - You can add and delete as many questions as required.

4.4 Administering the Test
- Publish the research study and invite participants via the recruitment tools (see How to Invite Participants to Your Study via Email).
- Ensure instructions are clear to prepare testers for the brief viewing period.

5. Analyzing and Interpreting Test Results

5.1 Quantitative Analysis
- Review yes/no responses or ratings in the Crowd analytics dashboard.
- Calculate metrics like the percentage of testers who noticed key elements (e.g., 80% saw the call-to-action).

5.2 Qualitative Insights
- Analyze open-ended responses to identify common themes (e.g., "Most testers recalled the logo but not the tagline").
- Use thematic analysis to categorize feedback into actionable insights.

5.3 Comparing Feedback
- Segment results by demographics (e.g., age, experience level) to uncover variations in perception.
- Export data for deeper analysis if needed.

5.4 Reporting and Presenting Findings
- Create a report with visual aids (e.g., pie charts for recall rates, quotes for qualitative feedback).
- Share insights with stakeholders to guide design improvements.

6. Best Practices for Five Second Tests
- Focused Design: Ensure the image highlights the key element you want to test (e.g., minimize distractions).
- Diverse Participants: Include a balanced sample to reflect varied user perspectives.
- Clear Instructions: Prepare testers for the five-second duration (e.g., "Focus on what stands out most").
- Iterative Testing: Conduct multiple tests to refine designs based on initial feedback.
- Ethical Standards: Obtain consent and protect participant privacy during the test.

7. Troubleshooting Common Issues
- Image Upload Errors: Verify file formats (e.g., PNG, JPG) and size limits; re-upload if necessary.
- Low Recall Rates: Simplify the design or adjust instructions to focus testers' attention.
- Inconsistent Feedback: Increase the sample size or refine follow-up questions for clarity.

8. Next Steps

Explore other test types like Web Evaluation (see Web Evaluation Test) or integrate with additional recruitment tools. For support, contact our team via in-app chat or email support@crowdapp.io.

9. Related Documents

Enhance your research studies with these related guides:
- Creating Your First Research Study in Crowd – A step-by-step guide to initiating a research study.
- Get High-Quality Participants with Screener – Learn how to pre-qualify testers for targeted feedback.
- How to Invite Participants to Your Study via Email – Discover how to recruit testers through email invitations.
- Web Evaluation Test – Explore another test type for assessing website performance.

Last updated on May 27, 2025

Applying Conditional Logic to your Research Studies

Conditional Logic is a powerful feature within the "Research Studies" section, enabling users to create dynamic question flows by displaying questions based on participants' responses to previous questions. This functionality ensures that participants only see questions relevant to their answers, enhancing the efficiency and relevance of your research studies. This guide provides clear instructions, limitations, and best practices for applying conditional logic in Crowd.

1. When to Use Conditional Logic

Conditional Logic is beneficial in the following scenarios:
- Personalized Questioning: Display follow-up questions based on a participant's response (e.g., "If you answered Yes to usability, explain why").
- Streamlined Surveys: Skip irrelevant questions to improve the participant experience and reduce completion time.
- Targeted Feedback: Gather specific insights from subgroups based on their answers (e.g., advanced users vs. beginners).
- Efficient Testing: Optimize research studies by tailoring content to individual participant profiles.

2. Types of Conditional Logic

Crowd currently supports one type of conditional logic:
- Show Conditional Logic: Displays a subsequent question only if a specific condition is met based on a previous question's response. This type is limited to certain answer formats and cannot be applied to the first question in a block.

3. Creating and Administering Conditional Logic

3.1 Initiating a Research Study
- Navigate to the "Research Studies" page in your Crowd dashboard.
- Click "Create New Test" and choose to use a template or start from scratch (see Creating Your First Research Study in Crowd).
- Name your project (e.g., "Conditional Logic Survey - Usability Test") and provide a description (e.g., "Evaluating usability with conditional follow-ups").

3.2 Adding Questions with Conditional Logic
- Click "Add New Question" within your research study setup to add questions to a block.
- Ensure the first question in a block does not use conditional logic, as it requires a preceding question to reference.

3.3 Configuring Conditional Logic
- Enable Conditional Logic:
  - Locate the specific question you want to apply logic to (it must be a subsequent question in the block).
  - Toggle the "Conditional Logic" function on.
- Set the Condition:
  - Select the "Show" logic type.
  - Choose a previous question from the dropdown menu (e.g., "Question 1: Rate usability").
  - Specify the condition by selecting "this question if", then choose a "value is" option from the possible answers (e.g., "4 or 5" for an opinion scale).
  - You can add as many conditions as required. (A sketch of this logic follows section 3.4 below.)
- Supported Answer Types:
  - Conditional logic is applicable only to yes/no, single choice, multiple choice, and opinion scale answer types.
  - It cannot be used with short text, long text, or verbal responses due to the lack of selectable options.
- Preview Logic:
  - Use the "Preview" button to test the logic flow and ensure questions display correctly based on responses before launching the survey.
- Delete Logic:
  - To remove conditional logic, toggle it off, or delete the question and recreate it if needed.

3.4 Administering the Test
- Publish the research study and invite participants via the recruitment tools. See the article on recruiting testers here.
- Ensure instructions clarify the dynamic nature of the survey (e.g., "Answer each question to proceed to relevant follow-ups").
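As an illustration of the "Show" rule described in section 3.3 (a sketch under assumed field names, not Crowd's implementation): a question is displayed only when the answer to the referenced previous question matches one of the configured values, and text or verbal answers cannot drive a condition because they have no selectable values.

```typescript
// Sketch of "Show" conditional logic; names and data are illustrative only.
interface ShowCondition {
  previousQuestionId: string;
  valueIsOneOf: string[]; // e.g. ["4", "5"] on an opinion scale
}

function shouldShow(
  condition: ShowCondition | undefined,
  answers: Record<string, string>
): boolean {
  if (!condition) return true; // no logic configured -> question is always shown
  const answer = answers[condition.previousQuestionId];
  return answer !== undefined && condition.valueIsOneOf.includes(answer);
}

const answers = { q1: "5" }; // participant rated usability 5
console.log(shouldShow({ previousQuestionId: "q1", valueIsOneOf: ["4", "5"] }, answers)); // true
console.log(shouldShow({ previousQuestionId: "q1", valueIsOneOf: ["1", "2"] }, answers)); // false
```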
4. Best Practices for Conditional Logic
- Logical Flow: Design conditions that are intuitive and avoid overly complex branching.
- Supported Types: Use only compatible answer types to prevent configuration errors.
- Clear Instructions: Guide participants on how to proceed based on their answers.
- Testing Logic: Preview and test the survey to confirm conditional logic works as intended.
- Ethical Standards: Obtain consent and ensure participant data is handled securely.

5. Troubleshooting Common Issues
- Logic Not Applying: Verify the question is not the first in the block and uses a supported answer type; reconfigure if necessary.
- Preview Errors: Check for misconfigured conditions and adjust the "value is" selection.
- Low Engagement: Simplify conditions or provide clearer instructions to maintain participant interest.

6. Related Documents

Enhance your research studies with these related guides:
- Get High-Quality Participants with Screener – Learn how to pre-qualify testers for targeted feedback.
- How to Invite Participants to Your Study via Email – Discover how to recruit testers through email invitations.
- Web Evaluation Test – Explore another test type for assessing website performance.

For support, contact our team via in-app chat or email support@crowdapp.io.

Last updated on May 27, 2025

Guide to Analyzing Test Results for Research Studies

After publishing your research study and collecting participant responses, Crowd offers a robust suite of tools to analyze and interpret your data. This workflow provides a detailed, step-by-step process to access, evaluate, and leverage your test results, ensuring you extract maximum value for decision-making.

Step 1: Accessing the Results Interface

Begin by navigating to the appropriate section of the Crowd dashboard to review your data:
- From Research Studies: Go to the "Research Studies" tab, then click the "View Responses" button on the sidebar of the Research Studies homepage to enter the results area.
- From Recruit Testers: Alternatively, access the "Recruit Testers" page and switch to the "Results" tab to transition from recruitment to analysis.

Upon entry, the interface is organized into four specialized sections: Responses, Metrics, Sessions, and AI Analysis, each designed to provide unique insights into your study's performance.

Step 2: Overview of Responses Analytics

At the top of the results page, a dedicated bar provides an immediate overview of key analytics to assess the study's progress and engagement:
- Participant Sessions: Displays the total number of sessions captured for the study, representing every instance a tester interacted with the test. This figure indicates the reach and initial engagement level.
- Responses Submitted: Shows the total number of participants who completed the study, reflecting the number of fully recorded datasets available for analysis.
- Completion Rate: Presents the percentage of participants who finished the test relative to those who started, calculated as (Responses Submitted / Participant Sessions) × 100. A high rate suggests a well-designed, engaging test, while a low rate may indicate usability issues or excessive length. (See the worked sketch after the Metrics subsection below.)
- Average Duration: Indicates the average time participants spent on the test, measured in minutes or seconds. This metric helps evaluate test complexity: shorter durations may imply simplicity, while longer ones could suggest thorough engagement or potential confusion.

Step 3: Navigating Result Sections

Responses: Individual and Aggregate Insights
- Aggregate Results: View cumulative data as percentages (e.g., 33% of respondents selected Option A), offering a snapshot of overall trends.
- Individual Responses: Examine each participant's answers with their anonymous unique tester ID (e.g., Tester ID #67890), enabling detailed tracking of individual input.

Metrics: Quantitative Performance Overview
- Completion vs. Abandonment: Highlights the ratio of completers to drop-offs, indicating potential friction points.
- Time Graph: Plots response times over the test period, revealing patterns in participant pacing.
- Pie Chart Breakdowns: Visualize key demographics and engagement patterns through pie charts, including:
  1. Responses by Countries: Displays the distribution of responses across different countries, helping identify geographic trends (e.g., 40% from the USA, 30% from India).
  2. Sources: Shows the proportion of participants by recruitment source (e.g., hire, shareable link, email invite, web prompt), useful for evaluating recruitment effectiveness.
  3. Devices: Breaks down responses by device type (e.g., 60% mobile, 30% desktop, 10% tablet), aiding in device-specific usability analysis.
  4. Operating Systems: Illustrates the operating systems used by participants (e.g., 50% Android, 40% iOS, 10% Windows), highlighting platform-specific performance.
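To make the overview metrics concrete, here is a worked sketch using the completion-rate formula above; the figures are invented for illustration and Crowd's dashboard computes them for you.

```typescript
// Worked example of the overview analytics, with invented numbers.
const participantSessions = 120;  // testers who opened the study
const responsesSubmitted = 84;    // testers who completed it
const durationsInSeconds = [95, 180, 140, 210, 160];

const completionRate = (responsesSubmitted / participantSessions) * 100;
const averageDurationSec =
  durationsInSeconds.reduce((sum, d) => sum + d, 0) / durationsInSeconds.length;

console.log(`Completion rate: ${completionRate.toFixed(1)}%`);                 // 70.0%
console.log(`Average duration: ${(averageDurationSec / 60).toFixed(1)} min`);  // ~2.6 min
```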
Sessions: Behavioral Replay
- Select a session, then click "Play" to review a participant's journey, including clicks, scrolls, and pauses. This qualitative data complements the metrics, aiding in usability diagnostics. You can also delete these sessions.

AI Analysis: AI Summary
- Access AI-driven insights by clicking the "AI Analysis" button, which directs you to the AI Summaries section.
- Click "Generate Summary" to produce a synthesized report detailing themes (e.g., common usability issues), sentiment (positive/negative feedback), recommendations (e.g., redesign suggestions), and limitations (e.g., small sample size).
- Note that when new responses are added, the AI will notify you, prompting you to regenerate the summary to incorporate the latest data.
- Alternatively, interact with your data using the AI Chat (Chat with Your Data) feature, available on your dashboard or homepage by clicking the "AI Chat" button. For detailed guidance, refer to the AI Chat Documentation.

Applying Filters for Targeted Analysis

Customize your data view by clicking the "Filters" button at the top:
- Options include country, duration, date, recruitment method (hire, shareable link, email invite, web prompt), device type, and browser.
- Example: Filter by "Mobile" devices to assess mobile-specific performance.
- For a comprehensive guide to these filters, see "Guides and Relevant Use Cases for Filtering Research Results".

Managing Study Settings

Access additional controls by clicking the "Settings" icon on the Responses page:
- Edit Study: Adjust questions or settings mid-study. Refer to Editing Your Research Study for details.
- Stop Responses: Cease new submissions to finalize data collection, ideal for time-sensitive studies.
- Recruit Participant: Invite more testers to boost participation. Learn more in Recruiting Testers for Research Studies.
- Delete: Permanently remove the study and its data; confirmation is required to prevent accidental loss.

Practical Tips for Effective Analysis
- Iterative Review: Regularly revisit Metrics and Sessions to track progress and adjust live studies.
- Contextual Filtering: Use device or location filters to address specific user groups.
- Session Correlation: Cross-reference session replays with Metrics to pinpoint usability issues.
- Secure Sharing: Password-protect shared links if sensitive data is involved.
- Compliance: Adhere to data privacy regulations when handling participant information.

Common Challenges and Solutions
- Result Inaccessibility: Confirm the test's publication status or your user permissions; escalate to support if unresolved.
- Filter Misapplication: Reset filters and reapply with narrower criteria if data is missing.
- Session Playback Failure: Verify session recording settings and participant device compatibility.

Related Resources
- Creating Your First Research Study in Crowd.
- Leverage AI Analysis to inform design iterations.

For assistance, reach out via in-app chat or email support@crowdapp.io.

Last updated on May 27, 2025

Guides and Relevant Use Cases for Filtering Research Results

Filtering your research study results in Crowd allows you to focus on specific data subsets, making analysis more efficient and insights more actionable. This guide provides detailed instructions on using the Filter feature, located on the results page, to refine your data by Duration, Country, Date, Device, Browser, and Recruitment Source. Additionally, it offers practical use cases to demonstrate how filters can uncover meaningful patterns and trends.

Accessing the Filter Feature

On the results page of your research study, locate the "Filter" button at the top-right corner. Clicking this button opens a dropdown menu listing the available filters: Duration, Country, Date, Device, Browser, and Recruitment Source. Each filter comes with specific attributes to customize your data view.

Understanding Filter Types and Attributes

Below is a breakdown of each filter, its attributes, and examples to illustrate their application.

Duration
The Duration filter allows you to analyze results based on the time participants spent on the study.
- Attributes: More than, Exactly, Less than, In range
- Units: Seconds (sec), Minutes (min), Hours (hour)
- Action: Select an attribute, specify a value, choose a unit, and click "Apply".
- Examples:
  - More than 5 min: View responses from participants who took longer than 5 minutes, potentially indicating thorough engagement or difficulty.
  - Exactly 30 sec: Analyze responses completed in exactly 30 seconds, useful for quick impression tests.
  - Less than 2 min: Identify rushed responses that may lack depth.
  - In range 1 min to 3 min: Focus on responses within a typical engagement window, excluding outliers.

Country
The Country filter helps segment data by geographic location.
- Attributes: Is, Is not, Starts with, Ends with, Contains, Does not contain
- Action: Select an attribute, enter a value in the text field, and click "Apply".
- Examples:
  - Is United States: Include only responses from the United States to assess regional feedback.
  - Is not Canada: Exclude Canadian responses to focus on other regions.
  - Starts with Aus: Capture responses from countries like Australia or Austria.
  - Ends with land: Include countries like Finland or Thailand.
  - Contains rico: Target responses from Puerto Rico or Costa Rica.
  - Does not contain Asia: Exclude countries with "Asia" in their name, such as Malaysia.

Date
The Date filter enables analysis based on the recency of responses.
- Attributes: More than, Exactly, Less than, In range
- Action: Select an attribute, input a number of days ago (e.g., 5 days ago), and click "Apply".
- Examples:
  - More than 7 days ago: Review older responses to compare with recent feedback.
  - Exactly 3 days ago: Focus on responses submitted exactly three days ago for a specific campaign analysis.
  - Less than 2 days ago: Analyze the most recent responses for real-time insights.
  - In range 1 to 5 days ago: Examine responses from the past week, excluding older data.

Device
The Device filter segments data by the type of device used by participants.
- Attributes: Desktop, Smartphone, Tablet
- Action: Select a device type and click "Apply".
- Examples:
  - Desktop: Assess responses from desktop users to evaluate desktop-specific usability.
  - Smartphone: Focus on mobile user feedback for a mobile-first design study.
  - Tablet: Analyze tablet responses to identify device-specific trends.

Browser
The Browser filter allows segmentation based on the browser used by participants.
- Attributes: Is, Is not, Starts with, Ends with, Contains, Does not contain - Action: Select an attribute, enter a value in the text field, and click "Apply". - Examples: - Is Chrome: Include responses from Chrome users to check browser compatibility. - Is not Safari: Exclude Safari users to focus on other browser performance. - Starts with Fire: Capture responses from Firefox users. - Ends with Edge: Include Microsoft Edge users. - Contains fox: Target Firefox users specifically. - Does not contain IE: Exclude Internet Explorer responses to focus on modern browsers. Recruitment Source The Recruitment Source filter segments data by how participants were recruited. - Attributes: Website, Email, URL, Participant Pool - Action: Select a recruitment source and click "Apply". - Examples: - Website: Analyze responses from participants recruited via a web prompt to evaluate its effectiveness. - Email: Focus on responses from email invitations to assess campaign reach. - URL: Review responses from participants who joined via a shareable link. - Participant Pool: Examine responses from Crowd’s participant pool for broader demographic insights. Combining Multiple Filters 1. To streamline your data further, you can apply multiple filters simultaneously. For example: - Combine Country Is United States and Device Smartphone to analyze mobile responses from U.S. participants. - Use Duration Less than 1 min and Recruitment Source Email to identify quick responses from email-invited participants, possibly indicating rushed feedback. 1. After selecting your filters, click "Apply" to update the results. To remove a filter, click the "X" next to it in the filter bar. Practical Use Cases for Filtering Filters enable targeted analysis to address specific business and organizational goals. Below are robust use cases with relevant data, highlighting their objectives, findings, and actions. - Enhancing E-Commerce Mobile Experience - Objective: A retail business aims to improve mobile checkout usability to boost conversion rates. - Filters Applied: Device Smartphone, Country Is United States - Data: 300 responses, 120 (40%) from U.S. smartphones, completion rate 65%, average duration 3.2 minutes, 25% reported checkout issues. - Insight: U.S. smartphone users frequently cited small button sizes as a barrier, with 25% abandoning due to usability, impacting potential sales. - Action: Increase button size by 20% and retest with the same filter to measure improvement. - Optimizing a Global Marketing Campaign - Objective: A multinational corporation seeks to assess the effectiveness of a recent ad campaign across regions. - Filters Applied: Recruitment Source Email, Date In range 1 to 7 days ago, Country Contains Europe - Data: 500 responses, 200 (40%) from email invites in Europe over 7 days, completion rate 88%, 60% rated the ad positively. - Insight: High engagement in Europe suggests successful targeting, but 40% negative feedback highlighted unclear messaging in France. - Action: Refine messaging for France and re-survey with Country Is France. - Improving Software Compatibility - Objective: A software company aims to ensure cross-browser compatibility for a new release. - Filters Applied: Browser is Firefox, Duration is More than 4 min - Data: 180 responses, 45 (25%) from Firefox with durations over 4 minutes, completion rate 60%, 35% reported lag. - Insight: Firefox users experienced significant lag, reducing completion rates and indicating a compatibility issue. 
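If you later export your response data and want to reproduce a filter combination outside the dashboard, the underlying logic is simply an AND of conditions over each response's fields. The sketch below is a minimal, hypothetical illustration of the two example combinations above; it is not Crowd's API, and the field names (country, device, duration_sec, source) are assumptions for the sake of the example.

```python
# Illustrative only: not Crowd's API. Models the AND logic of combined filters
# over a hypothetical list of exported responses with assumed field names.
responses = [
    {"country": "United States", "device": "Smartphone", "duration_sec": 45,  "source": "Email"},
    {"country": "United States", "device": "Desktop",    "duration_sec": 210, "source": "Website"},
    {"country": "Canada",        "device": "Smartphone", "duration_sec": 95,  "source": "URL"},
]

# Country Is "United States" AND Device "Smartphone"
us_mobile = [r for r in responses
             if r["country"] == "United States" and r["device"] == "Smartphone"]

# Duration Less than 1 min AND Recruitment Source "Email"
rushed_email = [r for r in responses
                if r["duration_sec"] < 60 and r["source"] == "Email"]

print(len(us_mobile), len(rushed_email))  # prints: 1 1
```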
Practical Use Cases for Filtering

Filters enable targeted analysis that addresses specific business and organizational goals. Below are example use cases with illustrative data, highlighting their objectives, findings, and actions.

- Enhancing E-Commerce Mobile Experience
  - Objective: A retail business aims to improve mobile checkout usability to boost conversion rates.
  - Filters Applied: Device Smartphone, Country Is United States
  - Data: 300 responses, 120 (40%) from U.S. smartphones, completion rate 65%, average duration 3.2 minutes, 25% reported checkout issues.
  - Insight: U.S. smartphone users frequently cited small button sizes as a barrier, with 25% abandoning due to usability, impacting potential sales.
  - Action: Increase button size by 20% and retest with the same filter to measure improvement.
- Optimizing a Global Marketing Campaign
  - Objective: A multinational corporation seeks to assess the effectiveness of a recent ad campaign across regions.
  - Filters Applied: Recruitment Source Email, Date In range 1 to 7 days ago, Country Contains Europe
  - Data: 500 responses, 200 (40%) from email invites in Europe over 7 days, completion rate 88%, 60% rated the ad positively.
  - Insight: High engagement in Europe suggests successful targeting, but 40% negative feedback highlighted unclear messaging in France.
  - Action: Refine messaging for France and re-survey with Country Is France.
- Improving Software Compatibility
  - Objective: A software company aims to ensure cross-browser compatibility for a new release.
  - Filters Applied: Browser Is Firefox, Duration More than 4 min
  - Data: 180 responses, 45 (25%) from Firefox with durations over 4 minutes, completion rate 60%, 35% reported lag.
  - Insight: Firefox users experienced significant lag, reducing completion rates and indicating a compatibility issue.
  - Action: Optimize for Firefox and retest with the same filter.
- Tailoring Educational Content by Region
  - Objective: An e-learning platform wants to customize content for Asian markets based on device preferences.
  - Filters Applied: Country Contains Asia, Device Smartphone
  - Data: 250 responses, 100 (40%) from Asian smartphones, completion rate 80%, 70% preferred video content.
  - Insight: High smartphone usage and video preference in Asia suggest a mobile-first video strategy.
  - Action: Develop mobile-optimized video courses and re-evaluate with the same filter.
- Evaluating Corporate Training Engagement
  - Objective: A corporate HR department seeks to improve training program retention.
  - Filters Applied: Recruitment Source Participant Pool, Duration In range 10 to 20 min
  - Data: 400 responses, 150 (37.5%) from the participant pool within 10-20 minutes, completion rate 92%, 15% dropped at module 3.
  - Insight: Strong engagement overall, but a 15% drop at module 3 indicates content fatigue or complexity.
  - Action: Simplify module 3 and retest with the same duration filter.
- Diagnosing Browser-Specific Delays
  - Filters Applied: Browser Is Chrome, Duration More than 5 min
  - Data: 120 responses, 50 (42%) from Chrome with durations over 5 minutes, completion rate of 70%.
  - Insight: Chrome users experienced delays, with 30% citing loading issues, pointing to potential compatibility problems.
  - Action: Optimize for Chrome and retest.

Best Practices for Effective Filtering

- Start Broad, Then Narrow: Begin with a single filter (e.g., Country) to understand general trends, then add more filters (e.g., Device) for specificity.
- Use Relevant Combinations: Pair filters that align with your goals, such as Device and Browser for technical analysis.
- Monitor Data Volume: Ensure your filters don't exclude so much data that too few responses remain for analysis.
- Reset When Needed: If results are too narrow, remove filters and reapply with broader criteria.
- Document Insights: Note which filters yield the most actionable insights for future studies.

Troubleshooting Common Issues

- No Results After Filtering: Check whether your filters are too restrictive (e.g., Duration Exactly 10 sec); broaden the criteria, for example by using In range instead.
- Unexpected Data: Verify filter attributes (e.g., Country Is not Canada may still include unwanted responses if misapplied); recheck and reapply.
- Slow Performance: Reduce the number of filters if the dataset is large, as multiple filters may slow down processing.

Related Resources

AI Chat - Chat with Your Data: Gain deep insights into your test responses by using the AI Chat feature. This tool allows you to interactively break down, analyze, and explain your collected data across the workspace, providing a comprehensive understanding of participant responses.

For support, contact our team via in-app chat or email support@crowdapp.io.

Last updated on May 28, 2025

Recruiting Testers from Participant Pool

Crowd offers a robust participant pool to help you recruit testers for your user research studies. This guide focuses on the "Hire from our pool" method, which lets you target specific testers based on demographics and preferences, ensuring high-quality feedback for your study.

Why Recruit Testers from the Pool

1. Access to Unbiased Testers Worldwide: Reach authentic, reliable testers across regions such as North America, Europe, and Asia, without relying on your own networks.
2. Precision Targeting with Diverse Demographics: Select detailed criteria so testers closely match your study requirements, rather than settling for a generic audience.
3. Seamless Scalability for Any Study Size: Scale from a small pilot to a large-scale study without additional recruitment effort, saving time and resources.
4. Cost-Effective Flexibility with Tailored Payments: Pay $5 per tester or 1 credit per participant, keeping recruitment costs predictable and within budget.

This method simplifies recruitment while protecting both testers and recruiters through clear terms.

Steps to Hire Testers from the Pool

Step 1: Access the Hiring Feature
1. Navigate to your Crowd dashboard.
2. Go to Workspace > Research Studies > Create with Templates > Publish Your Study.
3. Click the "Go Live" button and select "Hire from our pool" to proceed to the hiring page.
4. For guidance on creating and publishing a study, refer to "Creating Your First Research Study in Crowd".

Step 2: Accept Terms and Conditions
- Upon selecting "Hire from our pool," you'll be directed to the Terms and Conditions (T&C) page.
- Review the T&C, which safeguards testers and recruiters by outlining rights, responsibilities, and data usage policies.
- Click "Accept" to proceed, confirming your agreement to the terms.

Step 3: Configure Your Order
On the order page:
1. Campaign Name: Enter a unique name for your hiring campaign (e.g., "User Experience Study May 2025").
2. Number of Participants: Specify the number of testers needed (e.g., 50 participants).
3. Study Title and Description: Provide a clear title (e.g., "Website Usability Test") and a brief description of the study's purpose.
4. Devices Supported: Select the devices testers should use (e.g., desktop, smartphone, tablet).
5. Targeting Attributes: Define your target audience by selecting demographic criteria (e.g., age range, location, interests) to ensure testers match your study requirements.

Step 4: Choose Payment Method
- Option A: Pay with Card: Pay $5 USD per tester using a credit or debit card.
- Option B: Pay with Credits: Use Crowd coins (credits earned or purchased in-app) at 1 credit per participant.
- Select your preferred method based on your budget and available credits.
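As a quick sanity check before ordering, the total cost follows directly from the pricing above: participants × $5 when paying by card, or participants × 1 credit when paying with Crowd coins. The short sketch below is an illustrative calculation only; the prices come from the options stated above, and the function name is ours, not part of Crowd.

```python
# Illustrative cost check, not part of Crowd. Prices taken from the
# payment options above: $5 per tester by card, 1 credit per participant.
CARD_PRICE_USD = 5
CREDITS_PER_PARTICIPANT = 1

def order_cost(participants: int) -> tuple[int, int]:
    """Return (card_cost_usd, credit_cost) for a given participant count."""
    return participants * CARD_PRICE_USD, participants * CREDITS_PER_PARTICIPANT

usd, credits = order_cost(50)
print(f"50 participants: ${usd} by card, or {credits} credits")  # $250 or 50 credits
```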
Step 5: Review and Place Order
- View the order summary on the right side of the page to verify the details (campaign name, participant count, study info, devices, targeting, and payment).
- Ensure all information is correct, then click "Order" to publish your study.
- The study will be distributed to testers matching your selected criteria, making it available to your targeted audience.

Step 6: Monitor Performance and Results
- Performance Section: Check the performance section on the recruit page to monitor metrics, including the number of invitations sent, testers who started the study, and those who completed it.
- You can also manage your recruitment by pausing, resuming, or stopping the order as needed.
- Result Page: Review detailed participation data and insights on the result page to assess study progress. For more on analyzing results, see the Guide to analyzing your test results.

Tips for Successful Recruitment
- Define Clear Criteria: Use targeting attributes to select testers who align with your study goals.
- Set Realistic Numbers: Choose a participant count that fits your budget and research needs.
- Communicate Value: Include a compelling study description to encourage participation.
- Monitor Regularly: Check performance metrics early to adjust if needed.

Troubleshooting Common Issues
- Testers Not Found: Ensure targeting attributes are specific but not overly restrictive.
- Payment Issues: Verify card details or credit balance before ordering.
- Study Not Published: Confirm T&C acceptance and order submission.

By following these steps, you can recruit the right participants, publish your study seamlessly, and gather valuable insights to drive your business forward.

Related Resources
1. Guide to Sharing Your Crowd Study via Email
2. How to Use Crowd Credits for Recruitment
3. Creating Your First Research Study in Crowd

Need assistance? Contact our support team at support@crowdapp.io

Last updated on May 28, 2025

Share Research Studies with Your Web Visitors

Crowd enables you to recruit testers directly from your website audience using the "Launch to Website" feature. This guide outlines the process of creating a study, publishing it, and launching a customizable web prompt to engage visitors, provided Crowd is installed on your site.

Benefits of Using Web Prompt Recruitment

Recruiting via web prompt is ideal when you need:
- To engage your existing website visitors as testers.
- A non-intrusive way to invite participation without email outreach.
- Real-time feedback from users interacting with your site.
- Flexibility to target specific pages and devices.

This method leverages your website traffic for efficient, targeted recruitment.

Steps to Recruit Testers via Web Prompt

Step 1: Create and Publish Your Study
1. Navigate to your Crowd dashboard.
2. Go to Workspace > Research Studies > Create with Templates > Publish Your Study.
3. Click the "Go Live" button and select "Launch to website" to proceed.
4. For guidance on creating and publishing a study, refer to "Creating Your First Research Study in Crowd".

Step 2: Access the Launch to Website Feature
1. From the recruitment page after publishing, click the "Launch to Website" option.
Note: This feature is only available if Crowd is installed on your website. If Crowd is not installed, a prompt will appear guiding you to install it. Learn how in How to Install Crowd on My Website.

Step 3: Customize Your Web Prompt
Upon selecting "Launch to Website," you're taken to the customization page, where you configure the widget that will appear on your site in three sections:
- Edit Prompt:
  - Campaign Name: Enter a unique name (e.g., "Website Feedback May 2025").
  - Title: Add a title (e.g., "Help Us Improve Your Experience").
  - Description: Provide a brief study description.
  - Button Text: Customize the call-to-action text (e.g., "Take Survey").
  - The preview updates in real time on the right side of the screen.
- Style Prompt:
  - Position: Choose the widget placement (bottom left or bottom right).
  - Colors: Customize the title color, description color, button text color, background color, and button color.
  - Crowd Branding: Toggle on or off to display or hide the "Powered by Crowd" branding.
- Audience:
  - Website Selection: Displays your installed website; if Crowd is not installed, a prompt to install it appears.
  - Page Selection: Choose to show the widget on all pages or on specific pages (e.g., homepage, product pages).
  - Devices Supported: Select the devices where the prompt will appear (e.g., desktop, mobile).
  - Reshow Frequency: Set how often the prompt reappears for a user: do not reshow, or reshow after 24 hours, 3 days, or 7 days.
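The Reshow Frequency options come down to a simple rule: a visitor who has already seen the prompt is only shown it again once the chosen interval has elapsed, or never if "do not reshow" is selected. The sketch below is only an illustrative model of that rule under those assumptions; it is not Crowd's implementation, and the function and variable names are our own.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative model of the Reshow Frequency options; not Crowd's implementation.
# None means "do not reshow"; otherwise the value is the wait before reshowing.
RESHOW_OPTIONS = {
    "do not reshow": None,
    "after 24 hours": timedelta(hours=24),
    "after 3 days": timedelta(days=3),
    "after 7 days": timedelta(days=7),
}

def should_show_prompt(last_shown: Optional[datetime], option: str,
                       now: Optional[datetime] = None) -> bool:
    """Return True if the prompt should be shown to this visitor again."""
    now = now or datetime.now()
    if last_shown is None:          # visitor has never seen the prompt
        return True
    interval = RESHOW_OPTIONS[option]
    if interval is None:            # "do not reshow"
        return False
    return now - last_shown >= interval

# Example: a visitor last saw the prompt two days ago.
seen = datetime.now() - timedelta(days=2)
print(should_show_prompt(seen, "after 24 hours"))  # True
print(should_show_prompt(seen, "after 7 days"))    # False
```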
Step 4: Launch the Prompt
- Review your customizations and preview them on the right side.
- Click "Launch Prompt" to activate the widget on your website.
- The prompt will appear to eligible visitors based on your settings, inviting them to participate in your study.

Step 5: Monitor Performance and Results
- Performance Section: Check the recruit page's performance section for metrics such as the number of prompts shown, users who started the study, and those who completed it.
- You can also manage your prompt by editing it, stopping it, or viewing responses here.
- Result Page: Analyze detailed participation data on the result page to evaluate study success. For more on results, see the Guide to analyzing your test results.

Tips for Successful Web Prompt Recruitment
- Clear Messaging: Use concise, compelling titles and descriptions to attract participants.
- Strategic Placement: Position the widget where it's visible but not disruptive (e.g., bottom right).
- Targeted Pages: Limit the prompt to high-traffic pages for better response rates.
- Monitor Frequency: Adjust reshow settings to avoid overwhelming users.

Troubleshooting Common Issues
- Prompt Not Showing: Ensure Crowd is installed and that pages/devices are correctly selected.
- Low Participation: Refine targeting or adjust the prompt's visibility settings.
- Customization Issues: Verify that the preview matches your intended design before launching.

Related Resources
- Guide to Sharing Your Crowd Study via Email
- Recruiting Testers from Participant Pool
- Creating Your First Research Study in Crowd

By customizing and launching your prompt effectively, you can gather valuable insights to enhance your site and business outcomes. For further assistance, contact our support team at support@crowdapp.io.

Last updated on May 28, 2025