Information websites, because of their sheer volume of content, and especially because so much of their news is gathered by scraping other sites, accumulate dead links very easily. Besides scraping, deleting content such as expired advertising pages and old articles also tends to produce dead links. If they are not handled in time, they will not only hinder spiders from crawling the site smoothly, but also seriously damage the site's brand image. Compared with other types of sites, the dead-link problem on information websites is much more prominent, so the risk should be dealt with sooner rather than later. Here is my analysis of how to avoid it.
From the picture, we can only see how many dead links our website has, but not which pages they are on. So this method does not help us much in actually fixing the dead links; it only tells us how healthy the site's links are overall.
I think the best way to detect an information site's dead links is to examine the IIS logs. Some webmasters may assume that reading IIS logs requires expert knowledge, but this is a misunderstanding: IIS logs are actually very easy to read. For example, if a logged request returns status 200, that page is healthy; if it returns 404, that is a dead link; there will also be many 304 responses, which simply mean the page has not been updated. On an information site, I believe the 404 entries will not be few. Webmasters can easily find dead links just by checking the IIS log.
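To make the log check described above less tedious, the 404 entries can be pulled out automatically. Here is a minimal sketch in Python, assuming the log uses the W3C extended format that IIS writes by default (the field names `cs-uri-stem` and `sc-status` come from that format; the sample log lines are hypothetical):

```python
def find_dead_links(log_lines):
    """Scan IIS W3C-format log lines and return the URIs that returned 404."""
    uri_idx = status_idx = None
    dead = []
    for line in log_lines:
        if line.startswith("#Fields:"):
            # The #Fields header tells us which column holds the URI and status.
            fields = line.split()[1:]
            uri_idx = fields.index("cs-uri-stem")
            status_idx = fields.index("sc-status")
            continue
        if line.startswith("#") or not line.strip() or uri_idx is None:
            continue  # skip other comment lines and anything before the header
        parts = line.split()
        if len(parts) > max(uri_idx, status_idx) and parts[status_idx] == "404":
            dead.append(parts[uri_idx])
    return dead

# Hypothetical log excerpt: one healthy page, one dead link, one unchanged page.
sample = [
    "#Fields: date time cs-method cs-uri-stem sc-status",
    "2023-05-01 09:00:01 GET /news/1001.html 200",
    "2023-05-01 09:00:02 GET /news/1002.html 404",
    "2023-05-01 09:00:03 GET /news/1003.html 304",
]
print(find_dead_links(sample))  # ['/news/1002.html']
```

In real use, the lines would come from reading the day's log file instead of a hard-coded list; the status values 200, 404, and 304 are exactly the ones discussed above.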
Use webmaster tools for testing. The main purpose of this approach is to see whether the site has any dead links at all, as shown in the figure.
To fix an information website's dead links, you must first learn to find them. Among the hundreds of thousands or even millions of links on an information site, how do you locate the small number that are dead? It is impossible for a webmaster to monitor every link on the site manually, clicking them one by one to see whether they load; the workload would be far too large. So is there a better way to solve this?
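Instead of clicking links one by one as described above, the check can be automated: collect the links from a page and test each one's HTTP status. A minimal sketch, with the link extraction done by Python's standard `html.parser`; the `fetch_status` callable is an assumption of this sketch (in production it could issue a HEAD request with `urllib.request`, while the test below uses a stub):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(html, fetch_status):
    """Return the links on the page whose HTTP status is 404 (dead links)."""
    parser = LinkCollector()
    parser.feed(html)
    return [url for url in parser.links if fetch_status(url) == 404]

# Hypothetical page with one live and one dead link; statuses are stubbed.
statuses = {"/news/a.html": 200, "/news/b.html": 404}
page = '<a href="/news/a.html">A</a> <a href="/news/b.html">B</a>'
print(check_links(page, statuses.get))  # ['/news/b.html']
```

Run over every page of the site, this turns the impossibly large manual task into a batch job that reports only the broken URLs.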
One: finding the website's dead links.
Two: methods to handle an information website's dead links and the spiders that keep crawling these links.
Here I do not advocate using robots.txt to block the site's dead links, because on an information site the number of dead links will never be just a dozen; it can run to hundreds or thousands. Adding them to the robots file one by one would be an enormous workload, and the approach is simply not practical. So I think that for information websites, two other solutions work best.
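For reference, this is what the rejected robots.txt approach would look like; every dead URL needs its own Disallow line (the paths here are hypothetical), which is why it does not scale to thousands of dead links:

```
User-agent: *
Disallow: /news/1002.html
Disallow: /news/1587.html
Disallow: /ads/expired-promo.html
```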
The first is the 301 redirect. This approach has both an advantage and a drawback: the advantage is that the weight of the original page is passed on instead of being lost with the dead link
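As one common way to set up such a redirect, an Apache .htaccess file can map a dead URL permanently to a live replacement with the mod_alias `Redirect` directive (the URLs here are hypothetical):

```
# Permanently redirect a dead article URL to its live replacement,
# so visitors, spiders, and the page's weight all move to the new page.
Redirect 301 /news/old-article.html http://www.example.com/news/new-article.html
```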