Partner Panels

Dscout helps researchers recruit participants for user research studies. While its participant pool is known for quality, it struggled to support niche recruiting needs such as specialized technical roles or strict demographic criteria. These studies often stalled or required manual intervention from Research Ops. To close this gap, Dscout partnered with Respondent.io, a platform specializing in professional and hard-to-reach audiences. The goal of this initiative was a direct integration that lets Respondent supplement Dscout’s recruiting engine when customers need it. This was not a UI-heavy project but a systems-level design challenge focused on orchestration, trust, and scalability.
Role
Lead UX Designer
Timeline
March - August 2025
Responsibilities
Systems-level UX design, Research synthesis, Constraint definition, Model evaluation, Cross-functional alignment  
Tools
Research & Discovery
User Journey Map

I mapped the end-to-end screener creation and recruitment flows across both Dscout and Respondent.io to identify gaps, failure points, and system-level mismatches. This surfaced critical differences that directly informed integration constraints and difficult design decisions. Key issues included:

Screener Reuse vs. One-Time Screeners
  • Dscout screeners persist and can be reused across multiple studies with no expiration
  • Respondent enforces a one-to-one screener-to-study relationship, limiting reuse and scalability
  • The integration had to treat Respondent screeners as disposable, not canonical
Incentive model conflicts
  • Dscout uses estimated incentives, allowing flexibility across studies and participant invites
  • Respondent requires fixed incentives tied to a specific study
  • Incentive communication needed to preserve participant trust without exposing internal complexity
Targeting attribute structure
  • Respondent requires users to choose B2B vs. B2C up front, unlocking different targeting attributes
  • Dscout does not enforce this distinction, creating structural mismatches in audience definition
  • This discrepancy impacted feasibility expectations and participant trust
Screener capability limitations
  • Respondent did not support video or photo screener questions
  • Dscout relies on these question types to assess participant quality
  • This constrained parity and required clear fallback logic
Ideating
Decision tree

To evaluate how Respondent.io could integrate into Dscout’s recruiting engine, I mapped a decision tree outlining the possible user flows. This helped surface edge cases, risk points, and unanswered questions early, before committing to a solution. Three primary integration models emerged:



1. Respondent-First (Dedicated Screener Upfront)
  • Optimizing for: Maximum transparency, operational safety, minimal ambiguity around incentives and feasibility
  • Risks: Added friction, vendor knowledge required, pool cannibalization
  • Verdict: This approach was operationally safest. Respondent becomes an intentional recruiting path that can be used either upfront or as a supplement.
2. Deferred Opt-In (Triggered in Context)
  • Optimizing for: Contextual control, Dscout-first recruiting, reduced unnecessary exposure to complexity
  • Risks: Mid-flow complexity, trust risk during active studies, explaining behavioral differences (costs, reuse limits, incentive rules) at the right moment
  • Verdict: This model balanced control and invisibility, but still required careful communication design.
3. Full Automation (End-to-End)
  • Optimizing for: Invisible experience, no user input, no new UI; the ideal customer experience if it worked
  • Risks: Ambiguous triggers, non-convertible screeners, unclear cost and reuse rules, hard-to-communicate participant limitations
  • Verdict: While ideal in theory, full automation introduced too much ambiguity and risk without strong guardrails; full parity between the two platforms isn't feasible.
Solution and Outcome
Key Product Decisions
  • We did not fully automate recruitment, prioritizing user awareness, trust, and cost transparency
  • We did not force a single entry point, instead supporting both proactive (upfront) and reactive (supplemental) workflows
  • We did not auto-migrate underperforming screeners, avoiding unexpected costs or participant limitations
  • We did not make Respondent interchangeable with Dscout, preserving clear behavioral differences to prevent misuse
What We Shipped

To support future recruiting sources, the integration was launched as Partner Panels, a flexible recruiting option that researchers can intentionally use either:

  • Upfront, when niche recruiting needs are known
  • As a supplement, when existing screeners underperform
Designing for Clarity, Not Complexity

To maintain existing mental models and minimize cognitive load:

  • I used progressive disclosure to surface Partner Panel differences only when relevant
  • I designed a participant tag that clearly identifies Partner Panel participants
  • Key constraints (cost, participant reuse, incentive behavior) were shown contextually, not upfront
  • 40% improvement in recruitment efficiency for niche and hard-to-reach audiences
  • 90% reduction in manual recruiting work for internal ops teams through customer-facing automation
  • Expanded recruiting capabilities without adding customer friction or compromising trust
Upfront flow: select a recruiting approach
Mission details page
Targeting Attribute page to add recruiting attributes.
Mission details page
Modal shown when a mission is launched to confirm the action and ask whether the user wants to review the mission
Invite participant informational pop-up