Google Focuses on Ridding Their Index of Duplicate Content

In a recent blog post, Matt Cutts, a Google engineer and high-profile personality in my industry, stated:

“My post mentioned that ‘we’re evaluating multiple changes that should help drive spam levels even lower, including one change that primarily affects sites that copy others’ content and sites with low levels of original content.’ That change was approved at our weekly quality launch meeting last Thursday and launched earlier this week.” Read the full article.

That’s big news for legitimate businesses like mine that are commonly scraped, or that have content stolen and republished on other websites without permission. Several years ago Google filed a patent with identification of duplicate content as its core technology. I wondered at the time how Google would determine who the original content owner was, and whether a new meta tag would be introduced that let us tag ourselves as owners of original content.

As of today, no such meta tag has surfaced, but we do advocate one best practice to our clients: post your content on your own website first.

What Matt Cutts is addressing here is not duplicate content on article networks, but rather what we in my industry call scrapers. Scrapers are automated robot tools, or the disreputable website owners who run them, that simply copy your content and republish it on their own sites.
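As an aside, near-verbatim scraped copies are straightforward to flag programmatically. Here is a minimal sketch (my own illustration, not anything Google has described using) that compares two page texts with Python's standard-library difflib; the sample strings are hypothetical:

```python
from difflib import SequenceMatcher

def similarity(original: str, suspect: str) -> float:
    """Return a 0..1 ratio of how closely the suspect text matches the original."""
    return SequenceMatcher(None, original.lower(), suspect.lower()).ratio()

# Hypothetical page texts for illustration.
original = "Google focuses on ridding their index of duplicate content."
scraped = "Google focuses on ridding their index of duplicate content!"
unrelated = "A recipe for sourdough bread with a long cold proof."

print(similarity(original, scraped))    # near 1.0 for a scraped copy
print(similarity(original, unrelated))  # much lower for unrelated text
```

A real duplicate-detection system would of course work at web scale and account for partial copying and rewording, but the idea of scoring textual overlap is the same.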

In many cases these scrapers steal content to beef up AdSense advertisement websites, meaning they are not really in competition with you and your services; they simply hope to make money when someone visiting their site to read your stolen content clicks an ad.

Thank goodness Google is finally addressing this issue. I, for one, believe my site will benefit from this action.
