The Panda update took most of us by surprise; few saw it coming. Panda didn't change everything about search engine marketing, but it certainly delivered a loud wake-up call on the SEO issues we've ignored for too long. So let's take a moment to consider duplicate content as it stands now, with Panda in the picture. Learn to rank your website on the first page of Google.
First, let's examine what duplicate content is. Duplicate content occurs when two or more pages have identical content. That sounds pretty simple, doesn't it? Then why all the trouble grasping it? Most of the confusion comes from thinking of a web page as a document on the server. To Googlebot, however, a page is a unique URL that it finds through links, internal or external. It's actually not hard to produce two URLs that land on the same content.
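To see how easily this happens, here's a minimal Python sketch (the URLs are hypothetical) of several distinct URLs that a crawler treats as separate pages, even though a server would typically serve the same content for all of them:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def normalize(url):
    """Collapse common URL variations (www prefix, trailing slash,
    tracking parameters) into one canonical form."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    path = parts.path.rstrip("/") or "/"
    # Drop tracking parameters that don't change the page content.
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if not k.startswith("utm_")])
    return urlunsplit((parts.scheme, host, path, query, ""))

# Each of these hypothetical URLs would serve the same article,
# but to a crawler each one is a distinct page:
variants = [
    "http://www.example.com/article/",
    "http://example.com/article",
    "http://example.com/article?utm_source=newsletter",
]
print({normalize(u) for u in variants})  # all collapse to one URL
```

Three URLs, one piece of content: that's duplicate content from the crawler's point of view, even though a human would say there's only one page.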
Duplicate content had become a search engine optimization issue well before Panda, and Google's handling of it has come through many algorithm updates over the years. Long before Panda there was an ongoing debate over whether duplicate content was actually penalized. That said, there is a difference between a penalty and a filter, although at the end of the day the webmaster doesn't really know or care which it is. They just know their page(s) are not being indexed.
Then along came Panda, which now treats duplicate content with far more severity, at least in some instances. What Panda did was fold duplicate content into a broader quality-control equation. Instead of duplicate content affecting only specific pages, it can now affect your whole site; even pages that aren't duplicated can take a hit to their ranking power, or worse, not rank at all. Duplicate content is no longer treated as an isolated issue.
There are three different types of duplicate content. Let's take a look at them.
1. True Duplicates - this is when 100% of the content is identical to another page and the only thing different is the URL.
2. Near Duplicates - this occurs when only a small amount of content differs, for example an image or a block of text.
3. Cross-Domain Duplicates - this occurs when a true or near duplicate (#1 or #2) appears on another domain, and cross-domain duplicates are a problem for legitimate sites too.
There are several tools available to help you fix duplicate content.
1. 404 (Not Found) - the simplest way to deal with duplicate content is to remove it and have the URL return a 404 error.
2. 301 Redirect - use a 301 redirect to tell visitors and bots that the page has moved to another location. This is seamless for visitors, and most of the inbound link value will pass to the new page.
3. Robots.txt - here you leave the duplicate content available for your visitors to see, but block the crawlers from having access. It's a very simple method to implement.
4. Meta Robots - this lets you control how the crawlers act on your pages, including telling the bots not to index certain pages.
5. Rel=Canonical - this tag lets webmasters specify a canonical version of any page by placing a tag in the header. When the crawler arrives it sees the canonical URL, no matter which actual URL it came in on.
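As a concrete illustration of options #1 and #2, here's a minimal Apache .htaccess sketch using the mod_alias Redirect directive (the paths are hypothetical, and your server may be configured differently):

```apache
# Option 1: retire a duplicate page outright. "Redirect gone"
# returns a 410 (Gone); simply deleting the file returns a 404,
# and either tells crawlers the page should be dropped.
Redirect gone /old-duplicate-page.html

# Option 2: permanently redirect a duplicate URL to the preferred
# version, passing most of the inbound link value along with it.
Redirect 301 /printer-friendly/article.html /article.html
```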
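Option #3 is just a plain-text file at the site root. A sketch, with hypothetical directory names standing in for wherever your duplicate content lives:

```
# robots.txt at the site root
User-agent: *
Disallow: /print/
Disallow: /archive/duplicates/
```

Note that robots.txt blocks crawling, not indexing; a blocked URL can still appear in results if other sites link to it, which is why the meta robots and canonical tags below are often the better tools.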
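Options #4 and #5 are both single tags in the page's head. They're shown together here only for illustration (with a hypothetical URL); in practice you'd use one or the other on a given page, since telling a crawler "don't index me" and "index the canonical version of me" at the same time sends mixed signals:

```html
<head>
  <!-- Option 4: keep this page out of the index, but let
       crawlers still follow its links -->
  <meta name="robots" content="noindex, follow">

  <!-- Option 5: point crawlers at the preferred version
       of this content -->
  <link rel="canonical" href="http://example.com/article.html">
</head>
```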
That's a good start toward understanding duplicate content and the ways to deal with it. There are plenty of other ways to handle duplicate content, but these will get you going. Find the best SEO on the Gold Coast and Web Design on the Gold Coast and get your website and business up and ranking.