A customer receives the message "No information is available for this page" when running a Google search for their site.
Clicking the "Learn why" link opens a Google Support page that explains the message: the site's robots.txt file is blocking Google from indexing the site.
The robots exclusion standard, also known as the robots exclusion protocol or simply robots.txt, is a standard used by websites to communicate with web crawlers and other web robots. The standard specifies how to inform a web robot about which areas of the website should not be processed or scanned. This message typically appears when robots.txt is set to:
User-agent: *
Disallow: /
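The effect of these two directives can be verified with Python's standard-library robots.txt parser. This is a minimal sketch; the example.com URLs are placeholders, and the rules fed to the parser mirror the blocking configuration shown above:

```python
from urllib import robotparser

# Parse the same rules as the blocking robots.txt above.
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Under this configuration, every path is disallowed for every crawler.
print(rp.can_fetch("Googlebot", "https://example.com/"))       # False
print(rp.can_fetch("Googlebot", "https://example.com/about"))  # False
```

Because the wildcard user-agent combined with `Disallow: /` covers the entire site, no crawler that honors the standard will index any page.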
This configuration disallows all crawlers from indexing any page on the website. To change it, follow the instructions in the Microsoft IIS article, Managing Robots.txt and Sitemap Files, to allow access to the site pages that should be indexed.
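As an illustrative sketch of a corrected configuration (the /private/ path and sitemap URL are hypothetical examples, not values from the customer's site), a robots.txt that lets crawlers index the site while keeping one directory out of search results might look like:

User-agent: *
Disallow: /private/
Sitemap: https://example.com/sitemap.xml

Here only the listed directory is excluded; everything not matched by a Disallow rule is crawlable by default, so the rest of the site can be indexed.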
Note: Although crawlers are technically able to crawl all public pages and directories, this standard serves as an agreed baseline that lets website owners tell crawlers what to list and what not to, and the majority of crawlers respect it.
Content Author: Hamid Waqas