It's nothing to do with SureFire - it's about polls. I've had to arrange several feedback questionnaires and found during trial runs that you get what you ask for (which may not be what you want to know - which is my point).
For example, in this poll it wasn't made clear whether I could answer both questions:
The first question is about whether I have any SureFires and, if I do, whether I have ever had to send any back to SureFire. To this I would answer: yes I do, and yes I have.
The second question is about whether my experience of SureFire's service (if I had used it) was good or not. To this I would answer: yes, it was.
But either because I could select only one of the four tick boxes, or because it wasn't made clear that I could select two, the data gathered from me is incomplete.

When using data gathered from polls it is important to recognise that the data is a sample from a specific time and place, and from the people who were there. Of those people, only some responded. Why? And is the poll biased, or made less representative, as a result? Then there are the questions and the structure of the poll itself. Could the way a question was asked influence the answers?
If we take a poll about flashlights posted in a flashlight forum, we have people interested in flashlights, but these people may not be representative of the majority of flashlight users. Further, people visiting and posting in a flashlight forum may be more likely to be there because they either love flashlights or have had problems they'd like to solve. The sample is taken from a polarised population, and since the poll was open to all, perhaps only those with really strong opinions (or those who think they have something worth contributing, etc.) are likely to respond.
So over 61% of those who are interested in flashlights, and who consider it important enough to respond, have at least one SureFire and have never sent any of them back. Right?
Wrong. The data collected actually shows over 74% (the total number of responses to the first question being the sum of the first two options).

But is that valid? How many opted not to answer the first two options but to rate their satisfaction instead?
Should the percentage really be the sum of the 1st, 3rd & 4th options compared to the 2nd? That would mean over 54% of those who are interested in flashlights, and who consider it important enough to respond, have at least one SureFire and have had to send at least one of them back.
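To see how the same four counts can give very different headline percentages depending on how you group the options, here's a short sketch. The vote counts and the option ordering below are entirely hypothetical (the real figures aren't quoted here); the point is only that each grouping answers a different question:

```python
# Hypothetical vote counts for the four poll options -- the real counts
# are not quoted in the post, so these numbers are illustration only.
# Assumed option order: 1) have SureFires, have sent one back;
# 2) have SureFires, never sent one back; 3) service was good;
# 4) service was poor.
votes = [35, 100, 40, 8]
returned, never_returned, service_good, service_poor = votes
total = sum(votes)

# Reading 1: "never returned" over ALL responses, mixing both questions.
naive = 100 * never_returned / total

# Reading 2: "never returned" over first-question responses only
# (options 1 + 2).
within_q1 = 100 * never_returned / (returned + never_returned)

# Reading 3: treat options 3 and 4 as implicit "I sent something back"
# answers, so returners = options 1 + 3 + 4.
returned_any = 100 * (returned + service_good + service_poor) / total

for name, pct in [("naive", naive), ("within Q1", within_q1),
                  ("returned any", returned_any)]:
    print(f"{name}: {pct:.1f}%")
```

Three defensible groupings, three different percentages from exactly the same ballots - which is why a poll that doesn't say whether the two questions share one tick box can't really be summarised by a single figure.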
Confused?
The data is.
IMHO at least.
Al