Why SourceFed is Wrong About Google and Hillary Clinton

June 10, 2016


Digital publisher SourceFed is leveling a pretty serious allegation against Google – namely, that the search giant is consciously manipulating Google Instant search results to favor Hillary Clinton in her campaign for the Democratic nomination. You can watch their explanation video, featuring SourceFed host/writer Matt Lieberman, here:

It’s entirely possible that the election process is being manipulated via social media – but this video is in no way evidence of any conscious manipulation. Identifying that sort of pattern as an end user is extraordinarily difficult (which is the scary part) – and it’s certainly not achievable by a few people searching from a handful of devices in a Los Angeles office. Hopefully SourceFed relied on more than that – but thus far they’ve published none of their methodology. Other media outlets are now fact-checking SourceFed.

Update: Matt Cutts has weighed in and debunked the claims.

Here’s the problem: in order to make this claim, they would need verification from the same experiment replicated on hundreds of different computers (also browsers, devices, and time periods).

They would also need to standardize their measures, which would be nearly impossible because so many people use Google services and have individually-tailored instant search results delivered to them based on what Google knows about them through their Gmail content, YouTube comments and preferences, etc. Essentially they would need access to a random sample of hundreds (if not thousands) of users’ Google accounts to conduct searches from.
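To make that concrete, here’s a minimal sketch of the kind of cross-user consistency check the claim would require. The suggestion lists below are invented placeholders, not real Google data – in practice each list would be collected from a distinct signed-in account, device, location, and time window.

```python
# Sketch: measuring agreement between autocomplete suggestion lists
# collected from different (hypothetical) users.
from itertools import combinations

def jaccard(a, b):
    """Overlap between two suggestion lists, ignoring order."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def mean_pairwise_agreement(samples):
    """Average Jaccard similarity across every pair of collected samples.
    A value near 1.0 would suggest uniform (possibly hand-curated) results;
    lower values are consistent with per-user personalization."""
    pairs = list(combinations(samples, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical suggestion lists "collected" from three different users:
samples = [
    ["crime reform", "crisis"],
    ["crime bill 1994", "crisis"],
    ["criticism", "crime reform"],
]
print(round(mean_pairwise_agreement(samples), 3))
```

With hundreds of real samples, a score near 1.0 despite personalization would be the kind of anomaly worth investigating – a handful of searches from one office can’t produce that signal.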

You can test this now if you’re signed into Google: search for “hillary clinton cri” and see what shows up. If Google is consciously manipulating the results, one would expect the same favorable treatment of Clinton to show up for every user (despite their individually-tailored preferences) – and yet this is not the case. My results differ:


Even if a user isn’t signed in, Google (and Yahoo, Bing, Ask, etc.) still know a great deal about the user and make recommendations based on that information. For example, they know:

  • the user’s IP address (and thus their location and ISP)
  • what time of day it is
  • the browser and version they’re using
  • whether they’re connecting via desktop or mobile
  • what the user’s keystrokes were (including capitalization of proper names and how long it took to strike each key)
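Here’s a toy illustration of how much a server can infer from a single signed-out request. This is deliberately crude and the field names are my own invention – it is not Google’s actual logic:

```python
# Sketch: deriving non-account signals from a bare HTTP request.
from datetime import datetime, timezone

def coarse_profile(ip, user_agent, received_at):
    """Build a rough visitor profile from request metadata alone."""
    return {
        "ip": ip,                             # implies location and ISP
        "hour_utc": received_at.hour,         # time of day
        "is_mobile": "Mobile" in user_agent,  # desktop vs. mobile
        "browser": user_agent.split("/")[0],  # browser family (crude parse)
    }

profile = coarse_profile(
    ip="203.0.113.7",  # documentation-range address, for illustration
    user_agent="Chrome/51.0 Mobile",
    received_at=datetime(2016, 6, 10, 14, 30, tzinfo=timezone.utc),
)
print(profile)
```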

Even the information Google CAN’T get is a signal that Google uses to tailor searches to the user. That’s actually what we see in the SourceFed video – a Chrome browser with no user signed in:

Screenshot: “Did Google Manipulate Search for Hillary” – YouTube

Still with me? There’s more. Any results originally achieved are now going to be affected by the fact that SourceFed is influencing the results by encouraging people to test out their hypothesis (Observer Effect). Thousands of people are typing in “hillary clinton cri” and waiting (yet another signal to Google).

These are just SOME of the problems any researcher would need to overcome in order to detect this sort of bias as an end user.

The point SourceFed raises is legitimate – our digital media giants could be manipulating what we see for nefarious effect and we likely wouldn’t know about it until after the fact (if at all). However, what they’ve done is not evidence of such activity.

Update: After being rebutted by Google and SEO experts everywhere, SourceFed is doubling down on its lazy research, insisting in this response video that, as “comedians,” they get a pass on the requirement for rigorous research methodology. They admit to problems in their analysis, but claim they’re “just asking questions, man” – and they’ve left up the original video. What is particularly galling, however, is that they continue to promote the original [seriously flawed] video – even as recently as today, where it remains a pinned tweet on Twitter and a reshared post on Facebook mere minutes before this update was posted:

So “sorrynotsorry,” I guess. It’s kind of disturbing that SourceFed is owned by Discovery Communications and they tolerate this level of journalistic malpractice.

UC Davis and Social Media Permanence

April 14, 2016

You know the story. Back in 2011 University of California, Davis campus police officer Lieutenant John Pike was videotaped and photographed casually pepper-spraying students peacefully protesting tuition hikes. It quickly turned into a meme:


The Sacramento Bee just uncovered (through FOIA requests) that UC Davis paid a firm by the name of Nevins & Associates upwards of $175,000 to scrub references to the incident from the Internet.

Nevins & Associates proposed:

[…] Objectives:

  • Launch an aggressive and comprehensive online campaign to eliminate the negative search results for UC Davis and the Chancellor through strategic modifications to existing and future content and generating original content as needed
  • […] Advise and support UC Davis’ adoption of Google platforms to expedite the eradication of references to the pepper spray incident in search results on Google for the university and the Chancellor […]

…which is impossible. You can’t scrub something from the Internet (which should be common knowledge, as many people have tried).

They have failed miserably, as a quick search of Google Images illustrates:

Pepper Spray Cop Google Image Search Results

(Sidebar: I’m dying to see one of the monthly reports they promised to deliver the university, measuring the progress of their efforts.)

Not only that, but they’ve made the situation vastly worse because the attempt to stifle free speech in the name of sanitizing the reputation of the Chancellor and the university has erupted into a firestorm. As I write this, the Wikipedia page for UC Davis is already in the process of being updated with the details about the attempt to scrub references to the incident:

UC Davis Wikipedia Page in the Process of Being Updated About the Pepper Spray Incident

I’m completely dumbfounded that Nevins & Associates actually claimed they could deliver this result. I’m loath to criticize colleagues in PR/digital, but this is the kind of behavior that gives digital professionals a bad name.

They should have told UC Davis the cold truth: you can’t erase anything online – and trying to do so inevitably worsens the situation AND alerts more of the public to it. There’s actually a name for this type of situation: the Streisand Effect.

Had they proposed that they would try to MINIMIZE the impact of the wealth of content about the pepper spray incident by amplifying positive messages, that would be fine. But words like “eliminate” and “scrub” should not be in the vocabulary of anyone who works in the digital world.

I’m also agog that the (presumably) college-educated leadership of UC Davis actually thought this was possible. Moreover, that they thought a firm with such a small digital footprint itself was capable of making it happen:


If your organization faces a crisis of perception, the only principles to follow are Arthur W. Page’s. The radically transparent era we live in has made them more relevant now than they were decades ago when he articulated them. Anyone who tries to convince you otherwise is not serving you well.

Nonprofit Resources for Scraping by on Social Media

September 11, 2015

Thanks to everyone who attended the session at the 2015 WMPRSA Nonprofit Workshop. Make sure you apply to be WMPRSA’s PR4Good client. If you have any questions, please feel free to contact me at ddevries@lambert-edwards.com or post them in the comments section below.

Social Media Platforms by Monthly Active Users (Graphic) – 2015 Update

April 8, 2015

Every so often I try to take the temperature of the social media world and dig up the latest numbers for how many monthly active users (MAUs) are on the dominant social media platforms. Here’s the latest result (compiled on 1/31/2015).

Social Media Platforms by Monthly Active Users 2015 Graphic

Here’s a version with a transparent background in case you’d like to use it for slideware.

Some notes:

  • This measure isn’t entirely scientific; it’s based on reports on blogs and in the news media. I believe in all cases the numbers are self-reported – so take them with a grain of salt (I’m not aware of any sort of independent body that actually goes in to look at the data and verify any of these numbers). Given that many of these companies are publicly-traded, one can easily imagine the temptation to tweak the numbers to eke out a positive quarter.
  • For these graphics, I focus primarily on platforms that businesses are currently using. There are some new additions in this arena as companies and organizations are dipping their toes in (chiefly WhatsApp and Snapchat).
  • In some cases it’s difficult to get the latest numbers (I suspect because the platforms are experiencing stagnant or declining use), so I’ve pushed those platforms off the main chart and tagged them with “unknown.” Foursquare, for example, split itself into two separate platforms (Foursquare, offering Yelp-like reviews, and Swarm, retaining the “check-in” feature of Foursquare) and the company doesn’t discuss its user base except to say that it has 55M users, which is too vague.
  • Though they boast 347M “total users,” LinkedIn is likely flat in MAU growth; the 187M MAU figure dates to February of 2014. LinkedIn is a platform people tend to use on the “Ronco Oven” model – “set it and forget it” (creating a profile and then neglecting to interact with or update it until they’re searching for a new job).
  • You can find the previous version of this graphic (from 2014) here.

Why the Lost Finale Sucked a Rancid Tub of Expired Dharma Ranch Dressing

April 8, 2015

Happy Lost Day Everyone! Here’s my (oddly-popular) review of the Lost series finale from 2010. Spoiler alert, obvs.

Derek DeVries - Imprudent Loquaciousness

[Update: If you haven’t seen this video, you need to check it out (I’m not the only one that feels this way).]

I was really disappointed by the Lost season finale.

From Season 2 of Lost: a Screen Capture of the Hydra Logo on the Tail of a Shark Swimming Past the Camera

From the start, Lost thrived on setting up curious questions and then answering them in a way that only posed more questions. Not only was that the theme for the show – but the entire social media-driven marketing apparatus around the show catered to that aspect:

  • the creators set up fake show-related websites and 800 numbers (grabbed by astute fans who analyzed screen captures from the show that flashed by business cards or papers tacked to walls) with curious pre-recorded messages – all of which were part of two separate alternate reality games (The Lost Experience and Find 815).
  • the network’s website for the show (laden with hidden multimedia content) was filled with seething, writhing fan…

View original post 431 more words

What Law Enforcement can Learn From the Reaction to an Amber Alert

March 31, 2015

Screen Capture of the 3/28 Amber Alert

On Saturday, March 28, 2015 around 5:30 a.m., Michigan residents were jolted awake to an ominous alert on their mobile phones. The warning sounded like the sort of alarm one hears at the end of a James Bond movie, as the arch-villain’s lair is about to collapse on the minions running frantically in the background as 007 and his female counterpart zipline to safety.

A six-year-old child was abducted by her father from a small town near Flint, Michigan. The Michigan State Police (MSP) feared she was in immediate physical danger and had a solid lead on her abductor, so they made the decision to send an immediate message using the Wireless Emergency Alerts program which can deliver messages to any mobile phones in a geographic area. This system is different from text messages (it receives priority over other data sent to phones so that it can go out more swiftly).

What I’ve learned from my years in public relations and the crises I’ve worked on is that nothing teaches you more about crisis communications than an actual crisis event. This Amber Alert was no different – and here are some of the insights I’ve gathered:

Citizens are Customers First

Customers have come to expect options and transparency from every brand they interact with – and the Michigan State Police brand is no different. While it would be great if everyone treated the loss of sleep caused by the Amber Alert as a minor sacrifice all citizens should make for the safety of the whole, not everyone shares that viewpoint – and they have the power to opt out of these warnings. Because citizens have that choice, law enforcement needs to think of them more as consumers and recognize an obligation to persuade (even sell) them on the benefits of opting in to the alerts.

Unfortunately the tone thus far from the MSP has come off as insensitive to the “customers” that were startled awake by the alert. To wit: “The Michigan State Police’s AMBER Alert coordinator told 24 Hour News 8 Monday she doesn’t regret sending out a loud, early morning text alert over the weekend and that she would do it again if it would help a child.”

This leads to another insight…

Framing the Message is More Important Than You Think

Instead of adopting a defensive stance, this Amber Alert could have been treated as a heartwarming and concrete example of the effectiveness and importance of the alert system. It’s an opportunity to position the role of everyone who received the alert (to make them more inclined to remain opted-in): selfless heroes whose noble sacrifice helped return a child in danger to her home. Sure it’s hyperbolic – but it employs a tried-and-true customer service tactic: thanking upset customers changes the trajectory of an interaction by helping disarm anger.

Your Audience is Larger Than You Think

In talking to the Grand Rapids news media, I found that the MSP were treating this case as a local story in Flint and primarily giving interviews and comments to the reporters there. Yet the alert went out state-wide. That means it’s a local story in EVERY locality. The MSP should have braced for the deluge of both local and state-wide interest from the news in this story and used it as an opportunity to tell the success story of the girl’s recovery and shore up support for the Amber Alert system.

This is one of the things I love about social media monitoring in crisis situations – it can help you identify blind spots you would never have considered otherwise. Its reach and two-way nature means that you can be made aware of stakeholders, agendas, and questions you didn’t know existed.

In a Crisis, More Communication is Better

The actual text of the alert was very simple, and some recipients were confused by what it meant. Here’s what it read:

“Emergency alert

Bancroft, MI AMBER Alert: LIC/7KJC97 (MI) 2000 Teal Ford F-250 Pickup

Type: Amber Alert”

Miscommunication frequently occurs when we make assumptions about what the audience knows. For example, one Twitter user mistook the message for an alert about a stolen vehicle, since the missing child is not explicitly mentioned in the text of the alert – it assumes everyone knows that an “Amber Alert” means a child is in danger. It’s not an unreasonable mistake to make (and it’s exacerbated by the fact that the vehicle, not the perpetrator or victim, is the focus of the investigation).

The messaging protocol likely has a character limit, which is why details were so sparse (and why no information about the victim or the perpetrator was included). This is a challenge, but one that can be overcome with more communication through other channels. This leads to another important lesson…
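That character budget can be sketched concretely. The 90-character cap below matches the limit Wireless Emergency Alerts carried at the time, but the idea of ranking fields by priority (so the most essential information survives) is my own illustration, not the actual protocol:

```python
# Sketch: fitting an alert into a hard character budget by dropping
# the lowest-priority fields rather than truncating mid-word.
WEA_LIMIT = 90  # original WEA message limit, in characters

def compose_alert(fields, limit=WEA_LIMIT):
    """Join fields in priority order, keeping only those that fit."""
    msg = ""
    for field in fields:
        candidate = field if not msg else msg + "; " + field
        if len(candidate) > limit:
            break
        msg = candidate
    return msg

fields = [
    "AMBER Alert: Bancroft MI",         # what and where
    "child abducted",                   # the context recipients were missing
    "LIC/7KJC97 2000 Teal Ford F-250",  # vehicle details
    "Call 911 if seen",                 # call to action
]
msg = compose_alert(fields)
print(len(msg), msg)
```

Notice how quickly the budget runs out – the call to action doesn’t fit – which is exactly why the surrounding channels (web, social) have to carry the full story.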

Use Social Media Engagement to Your Advantage in a Crisis

The good thing is that after the initial alert, users (customers) took to Google and social media to find more information – which is now the standard practice for every significant event. Unfortunately they would have found little on social media to fill in the gaps in their knowledge; there were no tweets, Facebook posts, or pages on the MSP’s website with the full story. That’s a missed opportunity to both (1) empower citizens to help, and (2) explain to citizens who are upset what the justification for the intrusion was.

The good thing is that there is still an opportunity to use social media to build the credibility and trust in both the MSP and the Amber Alert system (though the window is quickly closing). Were I to advise them on next steps – here they are:

  • Respond to every user you can find on social media that tweeted or posted about the alert. There are probably a few hundred of them, so it will take an investment of time – but realize that it’s time well spent. For every user you reach, you tap into their social graph (all the people they’re connected to) which exponentially expands the reach of your message.
    • For the Positive Comments: Thank the user for helping share the information that ultimately saved this child’s life and for their continued commitment to public safety.
    • For the Negative Comments: Apologize for the intrusion, tell them the success story, and tell them you hope they’ll consider remaining opted-in to the alerts.
  • Use this case to create an informational campaign about Amber Alerts while the event is still fresh in the mind of the public. You have their attention – USE IT:
    • Speak openly and transparently about the criteria used to make the decision to issue this alert (to assuage their concerns that this will now happen every weekend).
    • Inform them about how the technology works.
    • Empower the audience by telling users where they can find more information about these alerts (which isn’t easy to find – it varies by carrier and by phone).
    • Show them other examples from around the country where the system has saved missing children.
    • Expand your message to talk about missing & exploited children, public safety, and any other relevant topics you’d like the public to know about.

Understand Conventions

Were I to make a recommendation to authorities and wireless carriers, it would be to change the tone currently used for the Amber Alert. The particular alarm that is employed already has a clearly-defined psychological implication to the general public: immediate physical peril to oneself. It’s effective in the case of severe weather or a national emergency, but an Amber Alert doesn’t fit that convention. It’s a crisis only to the victim and their immediate family – the rest of us are spectators and only a tiny handful of people who receive the alert have relevant information to help the investigation. Something as simple as changing the tone could do wonders to temper the negative reaction to being woken from sleep to an alert.

Another convention that customers are now well-accustomed to is being able to silence all of their messages for a given period of time (such as when they are asleep or in meetings). This message broke that convention, so the anger seen on Twitter was in part about the larger issue of a loss of control. That has to be factored in to the decisions about how to communicate with the audience.

In the big picture, this is an opportunity for wireless carriers and phone makers (perhaps even app developers) to use technology to mitigate the intrusion and keep users from opting out of Amber Alerts. What if, for example, users could set alerts to be delayed when the phone hasn’t moved in a given period of time (i.e. when someone is away from it, or it’s sitting on a nightstand while they sleep)? That might entice more people to stay in the system – which is only effective if a large number of people do.
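As a sketch of that idea – the idle threshold and the severity tiers below are my own assumptions, not any carrier’s actual policy:

```python
# Sketch: defer non-imminent alerts when the device appears idle.
IDLE_MINUTES = 120  # phone untouched this long => owner likely asleep/away

def should_defer(alert_type, minutes_since_motion, idle=IDLE_MINUTES):
    """Never defer alerts about imminent personal danger; defer the rest
    when the device hasn't moved recently."""
    imminent = {"tornado", "flash flood", "national emergency"}
    if alert_type in imminent:
        return False
    return minutes_since_motion >= idle

print(should_defer("amber", minutes_since_motion=300))    # hold it
print(should_defer("tornado", minutes_since_motion=300))  # wake them anyway
```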

Consider the Context

The Wireless Emergency Alerts protocol has been available to law enforcement since 2012, and this was the first time the MSP opted to use it for an Amber Alert. That lends more significance to the event than one might think. This means two important things:

  1. Any communication about the Amber Alert protocol will have been forgotten or ignored by the vast majority of people because it didn’t immediately affect them at the time (now it does – which is the most important time to communicate).
  2. This case may be your ONE AND ONLY opportunity to convince the average person to remain opted-in to the alert messages. You have to seize it.

Hopefully someone in law enforcement finds this valuable and is able to put it to use in the next event.

For what it’s worth, below is a curated list of example social media posts I gathered from the West Michigan area that illustrate the conversation that took place after the alert went out.

The Big Mistake Mark Cuban Doesn’t Know He’s Making on Social Media

December 20, 2014


Recently entrepreneur and noted Twitter user Mark Cuban discovered that companies are collecting data about activity of social media users, which was apparently a revelation to Inc Magazine and its readership.

In the hilarious fear-mongering advertorial, Cuban postulates that our digital histories will someday be used against us in court or for job interviews.

This is perhaps only a revelation to Cuban and Inc. The rest of the Internet-using public has been aware of this reality for more than a decade.

In fact, the well of data social media makes available to advertisers was one of the first concerns raised by observers of the then-nascent “social networking” phenomenon when it appeared in the early 2000s. I did a quick search of databases for early studies on this topic; to wit, this quote is from a 2005 report by the Annenberg Public Policy Center at the University of Pennsylvania:

“Most internet-using U.S. adults are aware that companies can follow their behavior online.” (Turow et al, 2005, p. 4)

That same study went on to reference the 2002 Tom Cruise blockbuster “Minority Report” (which Cuban also references in the Inc Magazine interview).

An even older Annenberg report (from 2003) detailed the pitch of the now-defunct Gator Corporation, which embedded tracking software on social networking software services like KaZaA (remember KaZaA?):

“Let’s say you sell baby food. We know which consumers are displaying behaviors relevant to the baby food category through their online behavior. Instead of targeting primarily by demographics, you can target consumers who are showing or have shown an interest in your category. … Gator offers several vehicles to display your ad or promotional message. You decide when and how your message is displayed to consumers exhibiting a behavior in your category.” (Turow et al, 2005, p. 6)

So it’s not a revelation that algorithmic data is mined and analyzed by marketers. What I do find revelatory is that Cuban thinks he has the power to do something about it.

There are two major problems with the claims made by Cuban about two upcoming apps, Cyber Dust (a ripoff of Snapchat with a 30-second window) and Xpire:

  1. They can’t possibly hide or delete a user’s social media activity from advertisers.
  2. What a person DOESN’T do on social media can be just as valuable to marketers as what conscious actions they take.

Allow me to explain.

First, one can’t truly delete one’s social media activity to remove it from the prying eyes of marketers using it to produce an algorithmic profile.

You can delete the post from your timeline, sure, but that doesn’t actually mean it’s “deleted.” As far back as 2010, for example, it has been public knowledge that Facebook caches a server-side copy of all of your content. In order to truly delete all of your posts and photos from the prying eyes of advertisers, you would need to hack into Facebook and remove it from the inside (which would be illegal).

Moreover, even if we discount the server-side caching that takes place on social media platforms, simply viewing a social media site like Facebook creates a trail of data that feeds the digital profiles sites like Facebook build for each of us. At the most basic level, Facebook tracks what you scroll past (counted as “impressions”), the time you spend on content, and what you search for.
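Here’s a minimal sketch of what that passive collection looks like from the client’s side. The event names and shapes are hypothetical – this is an illustration of the concept, not Facebook’s actual telemetry:

```python
# Sketch: aggregating per-post impressions and dwell time from raw
# viewport events. Each event: (post_id, seconds_in_viewport).
def log_viewport_events(events):
    """Count how often each post scrolled into view and for how long."""
    stats = {}
    for post_id, seconds in events:
        impressions, dwell = stats.get(post_id, (0, 0.0))
        stats[post_id] = (impressions + 1, dwell + seconds)
    return stats

# A user scrolls past post_a twice and lingers on post_b -- no clicks,
# no likes, yet the difference in attention is already measurable:
events = [("post_a", 1.2), ("post_b", 6.5), ("post_a", 0.4)]
print(log_viewport_events(events))
```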

Apps (like Snapchat or Cuban’s Cyber Dust) that purport to delete content within a certain time window are fatally flawed in concept because of the many touchpoints the content crosses as it travels from one user to another. If you “snap” a compromising photo, that data can be accessed at many points between Person A and Person B – here are just a few:

  • From the data cached on Person A’s phone (tracked by mobile phone carriers).
  • Intercepted between the phone and whatever Internet connectivity point is used to send the message (be it wi-fi or cellular).
  • From the server used to pass the content through to the app’s (Snapchat’s) servers.
  • From the app’s (Snapchat’s) servers.
  • From the server receiving the content from the app’s (Snapchat’s) servers.
  • From the data cached on Person B’s phone (or by Person B if they decide to take a screen capture of the photo and publish it to the web, which has been the downfall of several Snapchat users recently).

Further, the above scenario assumes you don’t have one app integrated with another (which adds an additional layer of touchpoints upon which this data can reside).
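The list above can be boiled down to a toy pipeline. The hop names mirror the touchpoints and the caching is simulated, but it shows why “deleting” at the endpoints guarantees nothing:

```python
# Sketch: every hop a message crosses is a place a copy can persist.
def send(photo, hops):
    """Pass content hop to hop; each hop keeps its own copy
    (a cache, a log, an intercept)."""
    caches = {}
    for hop in hops:
        caches[hop] = photo  # each touchpoint retains the bytes
    return caches

hops = [
    "sender phone cache",
    "wi-fi/cellular link",
    "ingress server",
    "app (Snapchat-like) servers",
    "egress server",
    "recipient phone cache",
]
caches = send("compromising.jpg", hops)

# The app "deletes" the message at both endpoints after its window...
del caches["sender phone cache"], caches["recipient phone cache"]
# ...yet copies remain at every intermediate touchpoint:
print(len(caches))  # 4 copies survive
```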

Second, the actions you DON’T take can be just as valuable to marketers as the actions you DO take. This reality plays out in a couple of different ways:

Facebook Caches Unposted Data: In 2013, the public became aware that Facebook tracks and saves posts that users delete at the last minute without posting. Re-read that sentence. Facebook is caching the keystrokes you enter – even if you decide not to publish them.

That data, analyzed by a PhD student from Carnegie Mellon University and a Facebook researcher, was used to produce a report revealed at the Association for the Advancement of Artificial Intelligence. Here’s one of the key findings:

“Our results indicate that 71% of users exhibited some level of last-minute self-censorship in the time period, and provide specific evidence supporting the theory that a user’s “perceived audience” lies at the heart of the issue: posts are censored more frequently than comments, with status updates and posts directed at groups censored most frequently of all sharing use cases investigated.” (Das and Kramer, 2013, p. 1)

“Escher Fish Theory”: I’m loath to coin a term, but there isn’t really an existing shorthand (that I’m aware of) for the value of observing the gaps in our social graphs. For example, who we’re not connected to (interests we don’t have, posts we don’t like, updates we don’t comment on) can be a valuable insight now that we have the computing power to crunch those petabytes of data. The tessellations of M.C. Escher provide a good illustration of this concept (recognizable patterns exist in between other patterns):

M. C. Escher Tessellation

The only way to stop social media platforms from gathering this data would be to try to clog the datastream with phony likes, shares and comments.
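A small sketch of the “gaps” idea: treat the categories a user *didn’t* engage with as features alongside the ones they did. The category list and user data are invented for illustration:

```python
# Sketch: encoding both presence and absence of engagement per category.
CATEGORIES = {"parenting", "fitness", "gaming", "cooking", "travel"}

def feature_vector(liked):
    """+1 for engaged categories, -1 for the negative space around them."""
    return {cat: (1 if cat in liked else -1) for cat in sorted(CATEGORIES)}

user_likes = {"fitness", "travel"}
vec = feature_vector(user_likes)
print(vec)
# The -1 entries (parenting, gaming, cooking) are exactly the gaps a
# marketer can target against -- e.g. excluding this user from baby-food ads.
```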

Cuban’s premise is flawed for another reason – namely, the idea that out-of-context messages will be used to incriminate us. This is pointedly absurd because the same systems that cache all of this data also track versions of that data over time, which would provide exculpatory evidence in the event someone modified a message to distort what we posted.

Even if we were to assume that Cuban’s apps worked as intended (they won’t), they could conceivably produce the opposite of their intended result. A social media user with a completely sanitized history could actually create suspicion. A benign and mundane history of digital activity draws less attention than a blank page.

Our privacy is certainly going through dramatic changes – and so are our notions of privacy. The reason social media platforms continue to grow in both the number of monthly active users and the volume of content those users create is that they provide a benefit that transcends the loss of privacy we’re experiencing. No one has a comprehensive solution of how to balance privacy and the utility derived from transparency, least of all Mark Cuban.


Das, S., & Kramer, A. (2013). Self-Censorship on Facebook. In Association for the Advancement of Artificial Intelligence. Retrieved from http://bit.ly/1r9EZ6A

Turow, J. (2003). Americans & Online Privacy: The System is Broken. In Annenberg Public Policy Center. Retrieved from http://bit.ly/1HfZOPV

Turow, J., Feldman, L., & Meltzer, K. (2005). Open to Exploitation: American Shoppers Online and Offline. In Annenberg Public Policy Center. Retrieved from http://bit.ly/1r9Et8M