Sun. Jan 23rd, 2022

I have noticed a huge trend for Google to move duplicate syndications of articles into its supplemental index at the blink of an eye. This cannot be good for the article directories because, as with my own at Moxie Drive Expressions, it causes a dramatic decrease in AdSense income and in the article's exposure across the internet. My question to Google is this: when you decide it is time to go to a bookstore to peruse the volumes, do you always go to the same one? The answer is no. We probably go to the one that is closest. If I as an author have published a book, I want my readers to be able to find it in whatever bookstore they choose, and I would like it to show up on every shelf in every bookstore in the world.

Google is not thinking this way. They want one “original” copy, from one article directory, showing up in their search engine, while the rest of us with a syndicated copy get thrown into the “Google Dungeon”. This makes room for all of the other “videos, blogs, images, news articles, and other media available online”.

Apparently Google does not see that article directories still have value with respect to content on the web. At least someone thinks so, as scrapers still abound, and many seem to depend on AdSense as a source of income. The article directory exists for that reason: to make money from AdSense, as well as to give authors exposure for their articles. I believe that Google has mistakenly lumped article directories into the category of MFA (made-for-AdSense) web pages, and that is what has caused the precipitous fall into the supplemental index for all article directories and more.

Matt Cutts says that the solution to this problem is quality content (no duplicates) and back-linking. First, there is no such thing as an original unless the author submits an article to one and only one article directory. Doing so would mean much less “direct” traffic, that is, non-search-engine traffic coming straight from the article directory. This was the original method of getting traffic on the internet before the advent of the search engines. Second, even the author has no control over which copy of a syndicated article Google chooses as the “original”. It seems to be random. Go ahead: submit the same article, with the same author's box, to half a dozen article directories and see if you can guess which one Google doesn't throw into the supplemental index. Third, how do you get back-links for the thousands of articles submitted to an article directory? Socializing them is a great risk because of what is called “source hopping”. You may not be socializing the “original” copy of the article, subsequently pissing the author off, and that will get you banned from the social sites. The only option is to socialize only your own personal content.
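To make the duplicate problem concrete: Google has never published how it detects syndicated copies, but the standard textbook technique is shingle-based near-duplicate detection, where two pages sharing most of their overlapping word windows are treated as copies of one document. The sketch below is purely illustrative, not Google's actual method, and the sample article titles are invented for the example:

```python
# Minimal sketch of shingle-based near-duplicate detection, the general
# technique search engines are believed to use for spotting syndicated
# copies. Purely illustrative; Google's real algorithm is not public.

def shingles(text, k=4):
    """Return the set of k-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity: |A & B| / |A | B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical sample texts for the demonstration.
original   = "how to grow tomatoes in a small garden with little effort"
syndicated = "how to grow tomatoes in a small garden with little work"
unrelated  = "a review of the latest laptop models released this year"

# A syndicated copy shares nearly all shingles with the original,
# while an unrelated page shares almost none.
print(jaccard(shingles(original), shingles(syndicated)))
print(jaccard(shingles(original), shingles(unrelated)))
```

Once a search engine groups pages this way, only one member of the cluster is shown in the main results, which is exactly the "one original copy" behavior described above.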

As marketers, what can we do? I would say change, but how when it seems unclear what Google is up to? Then we wait and grit our teeth as we watch our content drop into the supplemental index, our Google PR disappear, and our traffic statistics go the way of the dinosaur.

I have a forum friend who has experienced an 85% reduction in AdSense income since this started happening, which I put at around January 2007. He was making a few thousand a month. This mentor suggests Web 2.0 tactics to boost your readership and increase visitors to your original content. Swapping original blog posts and articles would also be a great plan.

There has been a lot of talk about the latest Google algorithm update. Known as Panda (or Farmer, due to its apparent targeting of ‘content farms’), the change has been implemented to improve overall search quality by weeding out low-quality websites. But what does this mean for SEO, and how can you use it to your advantage?

First, let’s recap. Google changes its algorithm all the time, but most changes are so subtle that they are hardly noticed. Panda is different, though. In February, Panda went live in the US, affecting around 12% of search queries. Now, as of mid-April, the update has been rolled out to all English-language queries worldwide, and is expected to make a similar impact.

The idea behind the update is to help the searcher find information with real value to them, not just the information that has been best optimised for their search phrases. In particular, this means weeding out websites with large amounts of shallow, low-quality content, known derisively as ‘content farms.’ These sites mass-produce content that specifically targets popular search queries. Hundreds or even thousands of articles are written daily, but little effort is put into quality assurance. Usually, these articles are written by poorly-paid writers, often based in developing countries, so the standard is generally low, with poor English usage and shallow information.

The problem has been that these sites often rank higher than well-researched, well-written and authoritative information; in other words, higher than content that would actually have much greater value to the searcher. Panda was designed to change this, rewarding sites with high-quality content and lowering the visibility of less useful content.

The effect on site rankings
Since the update was rolled out, the web has been buzzing about its implications. Many sites have noticed a considerable difference in their search rankings, some for better and some for worse. Content farms have suffered noticeable rankings drops, and scraper websites (sites that do not publish original content, but instead copy content from elsewhere on the internet) are also being punished. On the upside, some established sites with high-quality information have been rewarded with higher rankings. You can see some interesting data on the biggest winners and losers here.

By admin
