For some time I was concerned about my own SEO and website-building practices because of all the talk about duplicate content, but I don’t worry so much about it anymore. Matt Cutts says so.
After doing some reading on this, I’ve discovered that there really never was a “duplicate content penalty,” and that makes sense to me. What there is is a duplicate content “filter,” which lets Google present the most relevant of all the duplicates on the first page of the SERPs. Whichever of your duplicate pages is most relevant to what the searcher is looking for is the one Google wants to display. So if you have 10 pages of identical content where the only difference is the name of the city, because you offer the same service in each of those cities, Google isn’t going to penalize your site. It’s going to filter out the 9 least relevant of those 10 nearly identical pages, which will most likely be the ones for the cities that aren’t in the searcher’s query. Makes sense, huh?
As for a penalty for plagiarism, Google doesn’t really have a system for that yet, and won’t until it can crawl the whole internet every 5 seconds or less. The problem is determining which of several identical pieces of content was published first, or which is otherwise the original that got plagiarized. That may seem simple, but suppose I publish an article on my blog, which gets crawled weekly, and the same day someone copies it to their blog, which gets crawled hourly. Their copy will show up in Google’s index before my original does, so they might end up getting the credit, and I’d get the penalty if there were one. That just wouldn’t be fair, even by Google’s standards.