9/09/2008

Crowdsourcing: Your Life May Depend On It

In the September 4th article "Following the crowd", The Economist revisits a popular topic in Internet computing. When we hear of crowd innovation we typically think of Wikipedia - the crowd-crafted authority on knowledge (though the courts might not agree). Some argue that a large crowd produces such a multitude of voices that it becomes hard to find the needle in the haystack (see James Todhunter's article on "Crowd Innovation").

On the other hand, the phenomenon known as emergence was vividly described by Steven Johnson in "Emergence: The Connected Lives of Ants, Brains, Cities, and Software". Scientists discovered long ago that the ants in a swarm (individually dumb) use pheromones to mark their trails. The first ant to return "home" with food reinforces her pheromone trail. Other ants follow the strongest-"smelling" trail, and before long you end up with a single line of ants traversing the shortest path to the food.
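The mechanism is easy to see in a toy simulation. Below is a minimal Python sketch of that feedback loop; the trail lengths, evaporation rate and deposit amount are illustrative assumptions of mine, not figures from Johnson's book. Ants pick between a short and a long trail in proportion to pheromone strength, shorter round trips get completed (and reinforced) more often per unit time, and evaporation erodes whatever is not reinforced.

import random

# Toy model of pheromone reinforcement: ants choose between a short and a
# long trail with probability proportional to pheromone strength. Shorter
# round trips are completed more often, so the short trail is reinforced
# at a higher rate and eventually dominates.
SHORT, LONG = 2, 5              # round-trip lengths in time steps (assumed)
pheromone = {SHORT: 1.0, LONG: 1.0}
EVAPORATION = 0.02              # fraction of pheromone lost per step (assumed)
DEPOSIT = 1.0                   # pheromone laid down per completed round trip

def choose_trail():
    total = pheromone[SHORT] + pheromone[LONG]
    return SHORT if random.random() < pheromone[SHORT] / total else LONG

for step in range(2000):
    trail = choose_trail()
    pheromone[trail] += DEPOSIT / trail   # shorter trail -> more trips -> more deposit
    for t in pheromone:
        pheromone[t] *= (1 - EVAPORATION)

share = pheromone[SHORT] / sum(pheromone.values())
print(f"Share of pheromone on the short trail after 2000 steps: {share:.2f}")

Run it a few times and the short trail ends up holding nearly all of the pheromone - the "single line of ants" emerges without any individual ant knowing the map.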

When we outsource complex questions to a crowd of individually inferior "experts", will they converge on the most accurate answer? What if your life depended on it?

Companies have long relied on the power of the crowd to poll for ideas and to test new product concepts ("what new themed restaurant would you want to create?"). Crowds have been used to track war criminals on Google Maps, to spot CIA airplanes by their call signs as they travel around the world, and to report on traffic conditions.

Alpheus Bingham founded a marketplace for crowdsourcing complex questions. His site - innocentive.com - invites individuals to compete to solve some of science's most perplexing challenges by connecting companies, academic institutions and NPOs "with a network of 160,000 engineers, scientists, inventors and business people."

If we are to agree that the crowd is greater than the sum of its parts, how would you like to outsource your medical diagnosis to 1,000 individuals around the globe? A reader of the blog "Crowdsourcing" suggests just that. Having endured conflicting interpretations of her own MRI scans, she puts her trust in a crowd's ability to read the images.

Formerly I owned and managed a diagnostic imaging facility where we provided nuclear imaging, echocardiography and bone density testing. There are many factors that impact the outcome of a "read" (the interpretation of test results). For example, in our myocardial perfusion imaging (the nuclear cardiac stress test) there were at least three:
  1. The imaging process: we used a two-day procedure that allowed us to give two identical doses of radioactive tracer in two separate sessions (one for rest imaging and one for post-stress imaging). Most hospitals used a single-day procedure with Thallium or Cardiolite. In a same-day procedure you follow a low dose with a high dose in order to mask the remnants of the first injection. If the time between the two doses is too short, residual isotope from the first injection interferes with the second and greatly impacts how the images get "read".
  2. Imaging quality: The radiation technologist can help or hinder the correct interpretation of the results. After the images are taken, various filters are applied to enhance them and the images are arranged on "film". If the procedure was a single-day process, the technologist might over-compensate for the difference in dose sizes (see #1), producing a false positive or a false negative.
  3. Quality of the "reader": Many physicians are installing diagnostic equipment in their offices as a way to increase revenue. Their experience as clinical diagnosticians differs from that of the radiologists who typically "read" these images. Accuracy runs 70-90% depending on the expertise of the reader.

Crowdsourcing the diagnostic imaging neglects the other factors that go into an accurate determination: do you have an inferior wall defect or do you not?

The classic value chain here is:
Technician prepares the images -> radiologist reads them and sends them on -> physician makes the diagnosis

Does a crowd "vote" help improve the odds of a correct diagnosis?
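
As a rough sanity check on that question, here is a hedged Python sketch; the 70-90% accuracy range comes from factor #3 above, while the reader counts and the number of trials are arbitrary assumptions of mine. It simulates a simple majority vote among n readers who are each independently correct with probability p.

import random

# Monte Carlo estimate of how often a simple majority vote of n readers,
# each independently correct with probability p, gets the read right.
# The independence assumption is the crux: upstream problems (dose timing,
# the technologist's filtering) bias every reader looking at the same images,
# so real-world errors are correlated and the benefit shrinks accordingly.
def majority_correct(p, n, trials=20000):
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n))
        if correct_votes > n / 2:
            wins += 1
    return wins / trials

for p in (0.70, 0.80, 0.90):
    print(f"single reader: {p:.2f}   "
          f"vote of 11: {majority_correct(p, 11):.3f}   "
          f"vote of 101: {majority_correct(p, 101):.3f}")

Under the independence assumption the vote helps dramatically - a crowd of 101 readers who are individually only 70% accurate is right almost every time. But that assumption is exactly what the value chain above breaks: every "reader" in the crowd sees the same images, produced by the same protocol and the same technologist, so their errors are correlated and the crowd inherits the upstream mistakes.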

