A recent experiment asks: who's better, "the crowd" or the computer?
Researchers at the review site Yelp wanted to see whether Amazon's Mechanical Turk was really better at cataloging than a machine algorithm. They asked 4,660 Turkers to identify the proper category for a business in a multiple-choice test; only 79 passed. Those 79 were then split into groups of three, and each group was given a second task of categorizing businesses. The Turkers scored around 62% accuracy. The same set of tasks was then sent to a supervised learning algorithm (a Naive Bayes classifier, to be precise), which succeeded about 80% of the time, handily beating the Turkers.

The results suggest one of two things, or possibly both: the Mechanical Turk process needs refinement to encourage better work, or crowdsourced cataloging is increasingly threatened by the growing sophistication of machine algorithms. Either way, the future of work is likely to be shaped by more such challenges pitting humans against their machine counterparts.
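The article doesn't describe Yelp's actual implementation, but the idea behind a Naive Bayes classifier for business categorization can be sketched. This toy version (the categories and training snippets below are invented for illustration, not Yelp's data) scores each candidate category by its prior probability times the probability of each word in the description given that category, with add-one smoothing for unseen words:

```python
import math
from collections import Counter, defaultdict

# Hypothetical training data: (business description, category).
# A real system would train on many thousands of labeled listings.
TRAIN = [
    ("espresso latte pastries coffee", "cafe"),
    ("coffee beans roast brew", "cafe"),
    ("oil change brakes tires repair", "auto"),
    ("engine transmission tires", "auto"),
    ("haircut shampoo stylist salon", "salon"),
    ("manicure stylist color salon", "salon"),
]

def train_nb(examples):
    """Count word frequencies per category and category frequencies."""
    word_counts = defaultdict(Counter)
    cat_counts = Counter()
    vocab = set()
    for text, cat in examples:
        words = text.split()
        word_counts[cat].update(words)
        cat_counts[cat] += 1
        vocab.update(words)
    return word_counts, cat_counts, vocab

def classify(text, word_counts, cat_counts, vocab):
    """Pick the category maximizing log P(cat) + sum of log P(word|cat),
    using Laplace (add-one) smoothing so unseen words don't zero out."""
    total = sum(cat_counts.values())
    best_cat, best_score = None, float("-inf")
    for cat in cat_counts:
        score = math.log(cat_counts[cat] / total)
        denom = sum(word_counts[cat].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[cat][word] + 1) / denom)
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat

word_counts, cat_counts, vocab = train_nb(TRAIN)
print(classify("fresh coffee and pastries", word_counts, cat_counts, vocab))  # cafe
```

The "naive" assumption is that words are independent given the category; it is rarely true of real text, yet the classifier is cheap to train and, as the Yelp result suggests, surprisingly competitive at this kind of categorization task.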
[via Technology Review]