Last week DuckDuckGo published a study examining the amount of Google bias in products like Google Search. CEO Gabriel Weinberg then told Business Insider:
What [our study] does reveal, or at least suggests, is that Google’s collection and use of personal data, including location, which is then used to filter specific search results, is having an effect akin to the effects of a political bias. That is an important nuance often missed in these discussions.
First, there’s a big difference between what a study suggests and what it reveals. Language is important. Second, like other studies, this first one is interesting, but more studies need to be done and subjected to peer review. I’d be interested to see one from an independent party that doesn’t have Google as a competitor.
Check It Out: What’s the Amount of Google Bias in Search? DuckDuckGo Finds Out
Andrew:
Your qualifiers regarding DuckDuckGo’s study are appropriate and important. Thanks for taking the time to point those out.
Apart from the findings themselves, there are a number of issues with this study that limit the inferences that can be drawn from it and the authority it carries.
First is the obvious conflict of interest. In science, whenever one so much as writes an editorial, let alone gives a formal presentation at a conference, contributes to a textbook, or submits a manuscript for peer review, one has to state all of one’s potential conflicts of interest (note: not necessarily actual, but even potential conflicts of interest, which is where otherwise smart people get into trouble). This is especially relevant if you are in competition with another team, company, product, treatment regimen, or what have you.
Second, the stated methodology is vague on detail. Today, for any human subjects trial submitted for peer-reviewed publication, the full protocol must be available for review; it must have been published online for public viewing on ClinicalTrials.gov. The reason for this is self-evident: it allows independent assessment by the community of whether the methodology is valid for the stated objectives and findings. Are the methods appropriate? Are the methods standardised, meaning were they applied consistently throughout the study? Are the findings justified by the methods?
Which leads to the third issue: interpretation and conclusions. Without knowing the methods, it’s impossible not only to assess the findings, but to interpret what those findings mean, what questions they answer, and what questions remain. It is commonplace for enthusiastic but otherwise honest teams to over-reach. This is where senior team members provide mentorship to junior scientists: sticking to what the data actually show, taking the most conservative interpretation of the outcomes, and ensuring that any speculation remains grounded in the actual evidence, including other published studies. This is also where review by one’s peers, fellow experts in the field, can be particularly brutal, but necessary.
Little, if any, of that due diligence and transparency is apparent here. This does not invalidate, by any means, what the article conveys, but it should limit and qualify how much weight the reader assigns it. As you’ve mentioned, what is in order is a peer-reviewed investigation and analysis conducted by a disinterested team with no obvious conflicts of interest, using appropriate and publicly available methodology.
The topic could not be more timely.
Thanks for your comment; there’s a lot of insightful stuff there. Yeah, I definitely think a more independent company should do another study, one without any conflict of interest with Google. I hate to throw DDG’s study out just because of bias, but it’s certainly a good starting point on the issue.
This dovetails with the documentary I just watched this week: The Creepy Line (on Amazon Prime).