Indiefield's response to ESOMAR's 37 Questions to Help Buyers of Online Samples, designed to help users and buyers of online samples evaluate our offering.
Q1. What experience does your company have in providing online samples for market research? How long have you been providing this service? Do you also provide similar services for other uses such as direct marketing? If so, what proportion of your work is for market research?
We recruit people for research. Only research. Never marketing. We've been doing it for decades. Every respondent we bring you is there to answer questions, not to be sold to.
Q2. Do you have staff with responsibility for developing and monitoring the performance of the sampling algorithms and related automated functions who also have knowledge and experience in this area? What sort of training in sampling techniques do you provide to your frontline staff?
Sampling here is human-led. Project managers know quotas and incidence inside out. Every team member is trained to deliver samples that work in the real world, not just on paper.
Q3. What other services do you offer? Do you cover sample-only, or do you offer a broad range of data collection and analysis services?
We do fieldwork end-to-end: online, CATI, face-to-face, car clinics, qual, quant. We don't write strategy decks. We deliver the data you can build them on.
Q4. Using the broad classifications above (panels and intercepts), from what sources of online sample do you derive participants?
We use our own invite-only network. No open sign-ups. No affiliate mills. We don't chase volume; we find people ourselves, invite them, and validate them. Recruitment is done through mixed methods (face-to-face, targeted digital, human referrals), always managed by us.
Q5. Which of these sources are proprietary or exclusive and what is the percent share of each in the total sample provided to a buyer?
Our panel is 100% proprietary.
Q6. What recruitment channels are you using for each of the sources you have described? Is the recruitment process "open to all" or by invitation only? Are you using probabilistic methods? Are you using affiliate networks and referral programs and in what proportions? How does your use of these channels vary by geography?
Our panel is not open to all. It is strictly invitation-only, direct from Indiefield. We perform this outreach ourselves and never buy traffic. We invite people in, then validate and screen them on a project-by-project basis.
Q7. What form of validation do you use in recruitment to ensure that participants are real, unique, and are who they say they are? Describe this both in terms of the practical steps you take within your own organisation and the technologies you are using. Please try to be as specific and quantify as much as you can.
ID verification, document checks (e.g. V5C for vehicle owners), consistency checks, and manual review. Fraud is screened out before it reaches the client. Humans do the final check. No shortcuts.
Q8. What brand (domain) and/or app are you using with proprietary sources? Summarise, by source, the proportion of sample accessing surveys by mobile app, email or other specified means.
Our own brand, indiefield.co.uk. Invite-only. No third parties running the shop.
Q9. Which model(s) do you offer to deliver sample? Managed service, self-serve, or API integration?
Managed service only. Every project is run hands-on by our team; we offer no self-serve platform or API.
Q10. If offering intercepts, or providing access to more than one source, what level of transparency do you offer over the composition of your sample (sample sources, sample providers included in the blend). Do you let buyers control which sources of sample to include in their projects, and if so how? Do you have any integration mechanisms with third-party sources offered?
Not applicable. We work from a single proprietary source and do not blend in third-party sample.
Q11. Of the sample sources you have available, how would you describe the suitability of each for different research applications? For example, Is there sample suitable for product testing or other recruit/recall situations where the buyer may need to go back again to the same sample? Is the sample suitable for shorter or longer questionnaires? For mobile-only or desktop only questionnaires? Is it suitable to recruit for communities? For online focus groups?
Our sample isn't just a database. It's people we've actively recruited, verified, and engaged. That makes it flexible. We can use it for product tests, recalls, communities, or follow-ups because we know who's who and how to reach them again. Short surveys, long ones, mobile or desktop. We match the method to the respondent, not the other way round. And yes, it works for online qual too, because these are real people who actually want to take part.
Q12. Briefly describe your overall process from invitation to survey completion. What steps do you take to achieve a sample that "looks like" the target population? What demographic quota controls, if any, do you recommend?
We hand-pick and invite the right people, screen them carefully, and confirm they're who they say they are. Throughout fieldwork we monitor quotas in real time, adjusting as we go so the sample reflects the audience. Quotas are set on the variables that matter most for the brief, and we recommend only what drives accuracy, not clutter. The result: respondents who look like the market, and data that behaves like it too.
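To make "monitoring quotas in real time" concrete, here is a minimal sketch of the kind of check a fieldwork system can run during data collection. It is illustrative only; the cells and thresholds are hypothetical examples, not our production configuration.

```python
# Illustrative only: a minimal sketch of in-field quota monitoring.
# Cell definitions and thresholds are hypothetical.

QUOTA_TARGETS = {("female", "18-34"): 50, ("female", "35+"): 50,
                 ("male", "18-34"): 50, ("male", "35+"): 50}

def quota_status(completes):
    """Compare live completes against targets and flag cells to act on."""
    counts = {}
    for r in completes:
        cell = (r["gender"], r["age_band"])
        counts[cell] = counts.get(cell, 0) + 1
    for cell, target in QUOTA_TARGETS.items():
        filled = counts.get(cell, 0)
        if filled >= target:
            yield cell, "close"          # stop inviting into a full cell
        elif filled / target < 0.5:
            yield cell, "boost invites"  # under-pacing: widen outreach
        else:
            yield cell, "on track"
```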
Q13. What profiling information do you hold on at least 80% of your panel members plus any intercepts known to you through prior contact? How does this differ by the sources you offer? How often is each of those data points updated? Can you supply these data points as appends to the data set? Do you collect this profiling information directly or is it supplied by a third party?
We hold the basics (location, contact information). Beyond that, we build detail as projects demand. All data comes directly from respondents, never bought in from a third party. Updates happen constantly because we re-ask key information on a project-by-project basis. And yes, we can append profiling data back to the dataset: clean, current, and directly sourced.
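As an illustration of what an append looks like in practice, the sketch below shows a simple left-join in pandas; the column names are hypothetical.

```python
# A minimal sketch of appending profiling data to a delivered dataset,
# assuming pandas and hypothetical column names.
import pandas as pd

survey = pd.DataFrame({"respondent_id": [101, 102], "q1": ["yes", "no"]})
profile = pd.DataFrame({"respondent_id": [101, 102],
                        "region": ["London", "Leeds"],
                        "last_verified": ["2024-05-01", "2024-06-12"]})

# Left-join so every interview keeps its row; profiling rides along as appends.
delivered = survey.merge(profile, on="respondent_id", how="left")
```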
Q14. What information do you need about a project in order to provide an estimate of feasibility? What, if anything, do you do to give upper or lower boundaries around these estimates?
We need to know who you want, where they are, how many you need, and how long you want with them. With that, we can run the numbers. We set boundaries by being honest: if it's tight, we'll tell you; if it's easy, we won't pretend it's hard. Feasibility is never a guess; it's an informed calculation, and we'd rather set a realistic range than overpromise.
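For illustration, a feasibility range can be built from three inputs: the reachable audience, the incidence rate, and an expected response rate. The sketch below uses hypothetical numbers; real estimates also weigh field time, survey length, and quota complexity.

```python
# A simplified illustration of how a feasibility range can be built.
# All inputs here are hypothetical.

def feasibility_range(reachable, incidence, resp_rate, margin=0.2):
    """Expected completes, with honest lower/upper boundaries."""
    expected = reachable * incidence * resp_rate
    return expected * (1 - margin), expected, expected * (1 + margin)

low, mid, high = feasibility_range(reachable=5000, incidence=0.30, resp_rate=0.25)
print(f"Feasible completes: {low:.0f}-{high:.0f} (point estimate {mid:.0f})")
```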
Q15. What do you do if the project proves impossible for you to complete in field? Do you inform the sample buyer as to who you would use to complete the project? In such circumstances, how do you maintain and certify third party sources / sub-contractors?
If a project hits a wall, we don't. We move offline, into our community-based recruiters, whatever it takes. No stone is left unturned until every avenue is tested. If it still proves impossible, we tell you straight. If we bring in trusted partners, we take full responsibility for their work. Our name is on the project, so the quality stays ours.
Q16. Do you employ a survey router or any yield management techniques? If yes, please describe how you go about allocating participants to surveys. How are potential participants asked to participate in a study? Please specify how this is done for each of the sources you offer.
No. We don't run a router system. Respondents are invited to specific studies only.
Q17. Do you set limits on the amount of time a participant can be in the router before they qualify for a survey?
Not applicable.
Q18. What information about a project is given to potential participants before they choose whether to take the survey or not? How does this differ by the sources you offer?
People deserve to know what they're saying yes to. We tell them what the study is about (in plain language), how long it will take, what's expected of them, and what they'll get for taking part. No tricks, no burying the detail. Whether they come through our panel or community recruiters, the principle is the same: full transparency before commitment. That's how you get engaged, reliable participants.
Q19. Do you allow participants to choose a survey from a selection of available surveys? If so, what are they told about each survey that helps them to make that choice?
Yes. We believe in choice, but with rigour. Participants see the essentials up front (topic, length, reward) so they know what they're signing up for. Then we ask for proof and validate: receipts for product usage, photos, documents, whatever it takes. It's how we keep surveys honest and results real.
Q20. What ability do you have to increase (or decrease) incentives being offered to potential participants (or sub-groups of participants) during the course of a survey? If so, can this be flagged at the participant level in the dataset?
If a niche audience needs more to take part, we adjust, and fast. Every change is tracked at respondent level and can be flagged in the dataset. No smoke, no mirrors. Just fair pay for the right people, and full transparency for you.
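As a sketch of what respondent-level flagging can look like, the example below logs each incentive change and exposes a per-respondent flag for the dataset; the structure is illustrative, not our production system.

```python
# Hypothetical sketch of incentive-change flagging at respondent level.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncentiveLog:
    changes: list = field(default_factory=list)

    def adjust(self, respondent_id, old_amount, new_amount, reason):
        """Record one mid-field incentive change, with an audit trail."""
        self.changes.append({
            "respondent_id": respondent_id,
            "old": old_amount, "new": new_amount,
            "reason": reason,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def flag_for_dataset(self, respondent_id):
        """True if this respondent's incentive was changed mid-field."""
        return any(c["respondent_id"] == respondent_id for c in self.changes)
```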
Q21. Do you measure participant satisfaction at the individual project level? If so, can you provide normative data for similar projects (by length, by type, by subject, by target group)?
Yes. After every project we ask participants how it went. We track satisfaction across type, length, and audience. If a study drags, we'll know. If an incentive feels fair, we'll know. And we use that feedback to keep standards high. Happy respondents give better data.
Q22. Do you provide a debrief report about a project after it has completed? If yes, can you provide an example?
Always. Every project ends with a clear report: what worked, what didn't, and what we learned. No fluff, no filler. Just the facts you need to trust the data and run your next project better.
Q23. How often can the same individual participate in a survey? How does this vary across your sample sources? What is the mean and maximum amount of time a person may have already been taking surveys before they entered this survey? How do you manage this?
No endless survey-takers here. We control frequency, rotate participants, and cut out professional respondents. Some join us fresh; others have taken part before, but never too often, and always validated. We manage engagement so every response is genuine, not rehearsed.
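To show what frequency control can mean in code, here is a minimal sketch; the monthly cap shown is a hypothetical example, not a published Indiefield rule.

```python
# Illustrative frequency control: cap how often one person can take part.
from datetime import datetime, timedelta

MAX_SURVEYS_PER_MONTH = 4   # hypothetical cap for illustration only

def may_invite(participation_dates, now=None):
    """Allow an invite only if the respondent is under the recent cap."""
    now = now or datetime.now()
    recent = [d for d in participation_dates if now - d <= timedelta(days=30)]
    return len(recent) < MAX_SURVEYS_PER_MONTH
```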
Q24. What data do you maintain on individual participants such as recent participation history, date(s) of entry, source/channel, etc? Are you able to supply buyers with a project analysis of such individual level data? Are you able to append such data points to your participant records?
We track everything that matters: participation history, entry dates, recruitment channel, validation checks. We know where they came from and what they've done. We can append those details to records and supply project-level analysis when needed. Clean, transparent, accountable.
Q25. Please describe your procedures for confirmation of participant identity at the project level. Please describe these procedures as they are implemented at the point of entry to a survey or router.
Identity isn't assumed. It's checked. At recruitment, we verify with documentation, receipts, or photos where required. At entry, we layer in digital fingerprinting, VPN checks, and geolocation. Every respondent is cross-checked, every time. No ghost accounts, no mystery participants. Only real people, proven.
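The sketch below illustrates how layered entry checks of this kind can be combined. The function and field names are hypothetical stand-ins, and, as above, automated flags route to human review rather than auto-rejecting.

```python
# A minimal sketch of layered survey-entry checks. The check names mirror
# the description above (fingerprinting, VPN detection, geolocation); the
# fields and logic are hypothetical, not our actual stack.

def entry_checks(session, seen_fingerprints, target_country):
    reasons = []
    if session["device_fingerprint"] in seen_fingerprints:
        reasons.append("duplicate device")        # same device, new identity
    if session["is_vpn"]:
        reasons.append("VPN or proxy detected")   # masked location
    if session["geo_country"] != target_country:
        reasons.append("outside target geography")
    # Automated flags go to a human reviewer; machines flag, people decide.
    return ("refer to reviewer", reasons) if reasons else ("admit", [])
```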
Q26. How do you manage source consistency and blend at the project level? With regard to trackers, how do you ensure that the nature and composition of sample sources remain the same over time? Do you have reports on blends and sources that can be provided to buyers? Can source be appended to the participant data records?
We don't leave blends to chance. Every source is logged, monitored, and controlled so composition stays consistent, especially on trackers where stability matters most. If a change is made, it's documented and shared. Source data can be appended to participant records, giving clients full transparency on where respondents came from and how they ended up completing the survey.
Q27. Please describe your participant / member quality tracking, along with any health metrics you maintain on members / participants, and how those metrics are used to invite, track, quarantine, and block people from entering the platform, router, or a survey. What processes do you have in place to compare profiled and known data to in-survey responses?
Quality is never left to chance. Every respondent carries a history (participation, completion, consistency). We track it all. Anyone showing suspicious patterns (speeders, contradictions, duplicates) is removed and blocked. We cross-check what people tell us against what we already know: profile data, past answers, even receipts or photos. If it doesn't add up, they're out. Our digital tools flag the signals, but it's the human review that makes the final call. The result? A clean, healthy panel where real people stay engaged.
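As a simplified picture of how health metrics can drive invite, quarantine, and block decisions, consider the sketch below; the thresholds are illustrative, and borderline cases go to human review as described above.

```python
# Hypothetical health-metric sketch: thresholds are illustrative only,
# and the final call on any borderline case sits with a human reviewer.

def member_status(history):
    """history: dict with counts of completes and quality flags."""
    flags = history["speeding_flags"] + history["contradiction_flags"]
    completes = max(history["completes"], 1)
    flag_rate = flags / completes
    if flag_rate > 0.5:
        return "block"        # persistent bad signals: removed outright
    if flag_rate > 0.2:
        return "quarantine"   # held back pending manual review
    return "invite"           # healthy history: eligible for new studies
```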
Q28. For work where you program, host, and deliver the survey data, what processes do you have in place to reduce or eliminate undesired in-survey behaviours, such as (a) random responding, (b) illogical or inconsistent responding, (c) overuse of item nonresponse (e.g., "Don't Know") (d) inaccurate or inconsistent responding, (e) incomplete responding, or (f) too rapid survey completion?
We build surveys that fight back against bad behaviour. Timers flag the speeders. Logic traps catch the contradictions. Too many "Don't Knows"? You're out. Half-finished answers? Removed. Every keystroke leaves a trail and we track it. Our system is designed to protect the data, but it's not just machines. Every project is overseen by real researchers who know the difference between an engaged respondent and someone clicking blindly. Result: interviews that hold up, insights you can trust.
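For a concrete flavour of these checks, the sketch below flags speeders, overuse of "Don't know", and incomplete answers; the cut-offs are hypothetical examples rather than our production rules, and flagged interviews go to a researcher, not straight to the bin.

```python
# Illustrative post-field cleaning pass matching the checks described
# above. Field names and cut-offs are hypothetical examples.

def flag_interview(answers, duration_sec, median_duration_sec):
    flags = []
    if duration_sec < 0.4 * median_duration_sec:
        flags.append("speeder")                   # too-rapid completion
    dk = sum(1 for a in answers.values() if a == "Don't know")
    if answers and dk / len(answers) > 0.3:
        flags.append("item nonresponse overuse")  # too many Don't Knows
    if any(a in (None, "") for a in answers.values()):
        flags.append("incomplete")                # unanswered items
    return flags  # flagged interviews route to human review
```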
Q29. Please provide the link to your participant privacy notice (sometimes referred to as a privacy policy) as well as a summary of the key concepts it addresses. (Note: If your company uses different privacy notices for different products or services, please provide an example relevant to the products or services covered in your response to this question).
https://indiefield.co.uk/privacy-notice
Q30. How do you comply with key data protection laws and regulations that apply in the various jurisdictions in which you operate? How do you address requirements regarding consent or other legal bases for the processing personal data? How do you address requirements for data breach response, cross-border transfer, and data retention? Have you appointed a data protection officer?
Privacy isn't a policy, it's a principle. We comply with GDPR in the UK and Europe and hold ourselves to the same standard everywhere. Consent is always clear, data is transferred only through secure channels, and we keep it only as long as needed before deletion. Our Data Protection Officer oversees compliance, breach response, and regulatory requirements, and if there's ever an issue, we act fast and tell the truth. No tricks, no shortcuts, just trust.
Q31. How can participants provide, manage and revise consent for the processing of their personal data? What support channels do you provide for participants? In your response, please address the sample sources you wholly own, as well as those owned by other parties to whom you provide access.
Consent is clear, upfront, and can be changed anytime. One click, one call, one email, and you're out. No questions, no barriers. On our own panel we manage the process directly; with partners we work only to the same gold standard: explicit consent, easy exit, full transparency. Support is always human, by email or phone, answered by a real person. Respect for people's data isn't optional; it's how trust is earned.
Q32. How do you track and comply with other applicable laws and regulations, such as those that might impact the incentives paid to participants?
We don't cut corners with compliance. Incentives aren't cash-in-hand guesswork; they're tracked, transparent, and fully aligned with UK tax and employment rules. Every payment is logged, every transaction auditable. We work within HMRC guidance, GDPR, and the MRS and ESOMAR codes, and adapt to local rules when projects go global. Our system flags the limits, our team checks the details, and respondents get what they're promised: fairly, legally, and on time. Incentives done properly aren't just payments. They're part of the respect we show the people who make research possible.
Q33. What is your approach to collecting and processing the personal data of children and young people? Do you adhere to standards and guidelines provided by ESOMAR or GRBN member associations? How do you comply with applicable data protection laws and regulations?
Children aren't just another audience. They are the most protected people we deal with. We follow ESOMAR, MRS, and GRBN guidelines, and add one rule of our own: at Indiefield, anyone under 18 is a child. Parental consent is always verified before we start. Age checks are built into recruitment, questions are respectful, and data is stored securely, then deleted. Nothing is sold, shared, or exploited. The rule is simple: with kids, there are no shortcuts.
Q34. Do you implement "data protection by design" (sometimes referred to as "privacy by design") in your systems and processes? If so, please describe how.
Yes. We build privacy in from the start, collecting only what's needed, encrypting what we hold, and limiting access. It's not bolted on. It's built in.
Q35. What are the key elements of your information security compliance program? Please specify the framework(s) or auditing procedure(s) you comply with or certify to. Does your program include an asset-based risk assessment and internal audit process?
Security isn't a box-tick, it's a discipline. Our IT systems are formally audited, our assets risk-assessed, and our processes stress-tested. Internal audits keep us alert; external scrutiny keeps us honest. Every piece of data is treated as if it were our own.
Q36. Do you certify to or comply with a quality framework such as ISO 20252?
Yes. We are ISO 20252 certified. It's not paperwork for the shelf, it's how we run every project, every day. Quality isn't an aspiration here, it's a system we live by.
Q37. Which of the following metrics are you able to provide to buyers, in aggregate and by country and source? Average qualifying or completion rate (trended by month); percent of paid completes rejected per month / project (trended by month); percent of members / accounts removed or quarantined (trended by month); percent of paid completes from 0–3 months tenure (trended by month); percent of paid completes from smartphones (trended by month); percent of paid completes from owned / branded member relationships versus intercept participants (trended by month); average number of dispositions per member (survey attempts, screenouts, and completes, trended by month, potentially by cohort); average number of paid completes per member (trended by month, potentially by cohort); active unique participants in the last 30 days; active unique 18–24 male participants in the last 30 days; maximum feasibility in a specific country with nat rep quotas, seven days in field, 100% incidence, 10-minute interview; percent of quotas that reached full quota at time of delivery (trended by month).
We don't run a panel stuffed with usernames and churn stats. We find people, invite them, and verify them every time. That means we don't talk about "tenure" or "opt-ins"; instead we talk about how many real people we recruited, how many qualified and completed, how quotas are filling live in field, and proof that every respondent is who they say they are. This is recruitment you can trust and data you can use.