The goal at first was to just see what numbers people gravitate toward. I've heard lots of conjecture about how people pick 7 or 3 or 4 more than others, and for a variety of reasons, but had a hard time finding actual demonstration of this. Then, while implementing a choosing system, the problem became: how do you present the information so as not to bias it? This is why there are four different ways of picking. There are also a couple other metrics being measured, including a difference in phrasing (Pick a number… vs Pick a random number…) which may be interesting.
Apologies for any bugs or general wonkiness. The whole thing was a ~2 hour impulse project.
PS: The data will absolutely be shared! Just need time to do a breakdown of all the different permutations.
Interesting idea, but the slider may not be a great interface. It stops the experiment from being purely about numbers - I found I picked '5' because it was close to being nice and symmetrical.
I think that is a good datapoint, myself. If there is a predisposition for people to choose "5" under those circumstances, we will have learned something.
This would be a fantastic app to run A/B-style testing on, to determine whether, say, a circular "picker" gives different results than a horizontal one. You could even tighten it down to see whether a random ordering of the numbers between 1 and 10 gives different results than a sequential ordering.
It fits the model perfectly and, at least from my perspective, very clearly.
I also picked 5 because it was close. Also, since I know people tend to pick 7, I tend not to pick 7 when I'm asked for one :p You could maybe "randomize" the order of the numbers, or mix them up so that, for instance, 7 is far left and 3 is far right.
If there are enough data points (and it seems there are), and each UI is shown a random 1/4 of the time, then it will be very easy to isolate the effect of the slider UI. Problems would arise if, say, the slider were shown 60% of the time in Firefox but only 30% of the time in Opera, and Opera had a 20% higher population of hackers whose picks centered around 6 instead of the aggregate population's 5. As it is, it sounds like there's plenty of random sampling to wash out these complications.
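To make that point concrete, here's a toy Python simulation (the UI names, group biases, and all numbers are made up, not from the actual site): when each visitor is assigned one of four UIs uniformly at random, even a population mixing two differently biased groups still yields comparable per-UI averages.

```python
import random
import statistics

# Toy simulation: each visitor is shown one of four UIs uniformly at
# random. Random assignment spreads every sub-population (e.g. browser
# userbases with different biases) evenly across the UIs, so the per-UI
# averages stay comparable even for a mixed population.

UIS = ["slider", "buttons", "textbox", "circle"]  # hypothetical UI names

def simulate_visit(rng):
    ui = rng.choice(UIS)             # the random 1/4 assignment
    bias = rng.choice([0, 1])        # two sub-populations with shifted means
    pick = min(10, max(1, round(rng.gauss(5 + bias, 2))))  # clamp to 1-10
    return ui, pick

rng = random.Random(42)
picks = {ui: [] for ui in UIS}
for _ in range(20_000):
    ui, pick = simulate_visit(rng)
    picks[ui].append(pick)

means = {ui: statistics.mean(v) for ui, v in picks.items()}
# All four per-UI means land close together despite the mixed population.
```

Dropping the random assignment (say, always giving Opera the slider) is exactly the scenario where the per-UI means would drift apart for reasons unrelated to the UI itself.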
No, the UI is different every time you refresh the page. There are 4 UIs, randomly presented.
I'll be fascinated to see how the different UIs skew the results. Of course, there are now so many variables in this "experiment" it'll be hard to make any concrete determinations.
Honestly I wasn't expecting to get so many number picks so fast. The plan was to start putting together an analysis page once I got enough data, which I thought would be a couple weeks. I'm going to try and get one together as soon as possible.
Edit: Just had to enable billing in AppEngine so it can keep going! Way more data than I ever expected.
An alternative idea would be to display a link to a page where the data will be displayed once it is ready. I agree that just getting a "Science thanks you" message feels a little like being cheated, but it didn't bother me because I knew I'd be coming back here and waiting for info about results.
Hey, so I submitted it to reddit thinking it might get you a few more responses. It kinda ended up being the number one link on the front page. Hope that didn't hurt too much.
Thanks for submitting! The number of responses is vastly more than I expected, pushing 100k uniques now. AppEngine has handled it wonderfully. It blew past the free quota, so I did have to enable billing. But, it's still only pennies so far, and a quota reset is coming up soon.
The amount of data has been a challenge, since I have to put together more efficient stats tracking. A good problem to have, of course.
I was expecting to have to spam the crap out of my Twitter feed and leave links everywhere to get even 100 participants. Glad that wasn't the case!
This is where AppEngine shines - it scales painlessly, so there won't be any hurt (apart from being billed for the resources).
This scalability comes at the cost of a wee paradigm shift from relational database to datastore mindsets, but is well worth it. Developers can do what they do best instead of having to become system architects and admins.
I would imagine you'd get a distribution favoring 3 and 7. We did similar research during a Cognitive Science class: we asked for a number from 1 to 4, and over 40% of the answers were 3.
We also did research to find favored Mastermind patterns. Bias was a large problem there too. When presented with colors, people would pick a single color more often or place the same colors next to each other. When presented with letters, people would try to spell out words.
Peculiar: in product pricing and conversion testing, prices with 7's and 9's seem to produce more favorable results. I believe this is akin to the favorite-color bias we happened upon (7 is my lucky number!), mixed with the slightly confusing nature of calculating/rounding down a price ending in 7 or 9 (hey, it's still $2999, so just 2 grand and then some).
Does your university have a significant Asian population?
Three (homonym with "alive") is a lucky number in Chinese culture. Four (homonym with "death") is an unlucky number. Apartment buildings built for Chinese persons often skip all floor numbers with "4" in them as well as all apartment numbers with "4" in them.
Don't know about 3, but if a price includes lots of 8s, the seller is definitely aiming at Chinese buyers. Seeing this left and right here in Vancouver, and it looks corny at best. Like trying to lure Russians with a picture of vodka, or Americans with that of a cowboy hat.
"I’m staying at a hotel right now, there’s no 13th floor because of superstition. But come on man, the people on the 14th floor, you know what floor you’re really on. If you jump out of the 14th floor hoping to kill yourself, you will die earlier." ~Mitch Hedberg
He makes a fair point, though: if you're so superstitious that you're uncomfortable with the thirteenth floor, shouldn't you be uncomfortable about it regardless of its nomenclature?
I would also guess that it's biased more toward 7 than toward 3. That's the conventional wisdom, isn't it? Below 4, 3; below 10, 7; below 20, 17; below 40, 37. Or has this bit of pop-math folklore been lost in translation?
Whenever I see these sorts of things I try to psych them out. In this case I deliberately thought of a number, discarded it, and repeated this process several times (rather than putting down 7, which was my first thought). So, unless you're trying to measure frequency of picked numbers when the pickers are trying to game the system, I don't think this will prove much. Nice interface though.
That's why one of the metrics is time, both from start of page to picking a number, then start to hitting choose. It's not perfect, but it'll hopefully be possible to differentiate between people who pick right away, and people who think about it.
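A rough sketch of how those two deltas might be used downstream, in Python. The function name, the millisecond fields, and the 1.5-second threshold are all my guesses for illustration, not values from the actual experiment.

```python
# Hypothetical classifier splitting responses into snap answers versus
# deliberate ones, based on the two recorded deltas in milliseconds:
# page load -> first pick, and page load -> hitting "choose".
# The 1500 ms threshold is an arbitrary illustration.

def classify(ms_to_pick: int, ms_to_submit: int, snap_ms: int = 1500) -> str:
    if ms_to_pick <= snap_ms and ms_to_submit <= 2 * snap_ms:
        return "snap"        # picked and confirmed almost immediately
    return "deliberate"      # hesitated, possibly second-guessing
```

Under this sketch, `classify(400, 900)` counts as a snap answer, while someone who sat on the page for six seconds before picking does not.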
This would vary drastically.
When person A tells person B to pick a number, various things go on in B's mind. Once it's narrowed to picking from 1-10, the choices come down to single digits (plus 10). Hardly anyone would choose 1 or 10. If someone's birthday falls among those numbers, the choice becomes obvious; otherwise it moves to a favorite or lucky number.
Many believe 7 is the universal lucky number, and for numerology believers 3 is a common lucky one, so those choices make sense. Not sure why 4, though.
Anyway, the reasons vary a lot.
Do keep us updated with the results; I'd like to know how the mind works over numbers for everyone.
The only problem is that many people will refresh and choose every number. Why not add a simple cookie or IP restriction so each visitor can choose only once? That would cut out a lot of fake entries.
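A minimal sketch of that idea in Python, assuming each submission arrives with the visitor's IP and a random cookie token (both field names are hypothetical); hashing the pair means raw IPs never need to be stored.

```python
import hashlib

# Accept only the first submission per (ip, cookie) pair. This is a
# best-effort screen, not real protection: clearing cookies or switching
# networks defeats it, but it filters casual repeat refreshes.

seen: set[str] = set()

def accept(ip: str, cookie_token: str) -> bool:
    key = hashlib.sha256(f"{ip}|{cookie_token}".encode()).hexdigest()
    if key in seen:
        return False         # duplicate: same visitor, same cookie
    seen.add(key)
    return True              # first submission from this pair
```

Keying on the hash of both values (rather than IP alone) avoids rejecting distinct people behind one NAT, at the cost of being easier to evade.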
I presume this is your site? Your design has a serious flaw: the number "1" is "selected" when you first view the page, which biases the results. You apparently tried to "fix" this by forcing the user to move the slider before the choice would be accepted, but that also biases the results.
The only way to really make it unbiased is with a text box. The next best thing would have been ten buttons in a row (but then you would have had to make sure that they didn't go off the right side of the screen).
I'd actually be interested in how different ways of presenting the choice affect the number chosen. It would be interesting if the author ran an A/B test with different methods of selection (text box, row of buttons 1-10, row of buttons in random order, slider, etc.) to see what the differences would be.
I wonder if adjusting the order of the numbers would influence the choice. Instead of listing them 1-10, maybe try a run with the numbers not listed in sequential order.