Robots.txt Fetch Error

This article is about the robots.txt fetch error.


◕ What is a robots.txt fetch error?

- Before crawling our website, a search engine first tries to fetch the robots.txt file from the site's root directory, to determine whether we are blocking it from crawling any page or URL of our site.

When a search engine requests the robots.txt file, it expects either a 200 or a 404 HTTP status code. If it receives one of these, it proceeds with the crawl; otherwise it gives up crawling our website, because it does not want to risk crawling pages that we may not want crawled.

- For example, suppose there is a robots.txt file in my root directory, but the search engine does not receive a 200 or 404 HTTP status code when it requests the file (say the server answers with a 5xx error or times out). The search engine then leaves my website without crawling it. This is called a robots.txt fetch error. A quick way to check what a crawler sees is sketched below.
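
To check the status code ourselves, we can request the file directly. Here is a minimal Python sketch; the domain https://example.com is a placeholder assumption, so substitute your own site:

    import urllib.request
    import urllib.error

    # Report the HTTP status code a crawler would see for robots.txt.
    # "base_url" is your site's root; https://example.com below is a
    # placeholder, not a real site from this article.
    def robots_txt_status(base_url):
        url = base_url.rstrip("/") + "/robots.txt"
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.status   # 200: file found and served
        except urllib.error.HTTPError as e:
            return e.code                # 404: no file; 5xx: server error
        except urllib.error.URLError:
            return None                  # timeout, DNS failure, etc.

    status = robots_txt_status("https://example.com")
    if status in (200, 404):
        print(f"Got {status}: search engines will go on crawling the site.")
    else:
        print(f"Got {status}: a robots.txt fetch error; crawling may stop.")

Running this against our own domain tells us at once whether crawlers see a 200, a 404, or something that would trigger a fetch error.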



◕ How can we solve a robots.txt fetch error?

We can solve this problem easily.
Suppose we want search engines to index everything on our website. In that case we do not need a robots.txt file at all, and we can delete it completely. Our server will then return a 404 HTTP status code when the file is requested, and as a result the search engine will continue crawling our website.

We should remember that we need a robots.txt file only when we want to block search engines from crawling some of our content; in that case the file must exist and be served correctly, as shown below.
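
If we do need to block something, the fix is to make sure the file exists and is served with a 200 status code. As a sketch (the /private/ directory is only a hypothetical example), a robots.txt that keeps all crawlers out of one directory looks like this:

    User-agent: *
    Disallow: /private/

Every other URL on the site stays crawlable, and because the file itself is fetched successfully, no fetch error occurs.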



◕ Related articles:
► How to make XML Sitemap?
► Robots.txt Fetch Error
► How to find URL of my Twitter Account?
► Difference between .html & .htm
► List of wildcard used in MySQL
► MySQL match a string




