Indexing Your Blog | SEO Blogger

- A warning: incorrect use of this feature can result in your blog being ignored by search engines entirely, so we will test the result to make sure it is set up correctly.
- A custom robots.txt file is a way for you to instruct search engines not to crawl certain pages of your blog.
- “Crawl” means that crawlers (bots such as Googlebot) fetch your content so that it can be indexed and found when people search for it.
- Go to blogger.com
- Open the blog you’re working with
- Click on settings
- Click on Search Preferences
- Under the heading ‘Crawlers and indexing’, you will see ‘Custom robots.txt’
- Click ‘Edit’
- Select ‘Yes’ to Enable Custom robots.txt content
- Copy and paste the following robots.txt code:

```
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /

Sitemap: http://EXAMPLE.blogspot.com/feeds/pos...
```
- Replace ‘EXAMPLE’ with your blog’s name so that the URL matches your blog’s address
- Click ‘Save Changes’
- Now go to your Google Webmaster Tools account and test the new robots.txt there
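Before testing in Webmaster Tools, you can also sanity-check what these rules do with Python’s standard-library robots.txt parser. This is just a local sketch: the label URL and post URL below are made-up examples, and the Sitemap line is omitted because it does not affect crawl permissions.

```python
from urllib.robotparser import RobotFileParser

# The rules from the steps above (Sitemap line omitted; it does not
# change what crawlers are allowed to fetch).
rules = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Label/search pages are blocked for ordinary crawlers...
print(parser.can_fetch("Googlebot", "/search/label/seo"))        # False
# ...but regular post pages stay crawlable...
print(parser.can_fetch("Googlebot", "/2024/01/my-post.html"))    # True
# ...and the AdSense crawler (Mediapartners-Google) may fetch everything.
print(parser.can_fetch("Mediapartners-Google", "/search/label/seo"))  # True
```

This confirms the intent of the file: keep search/label result pages out of the index while leaving posts and AdSense crawling untouched.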

- all: index the page and follow its links (the default).
- noindex: prevents indexing of the page.
- nofollow: prevents Googlebot from following the links on the page.
- none: shortcut for noindex, nofollow, noarchive.
- noarchive: prevents Google from displaying the ‘Cached’ link for the page.
- nosnippet: prevents the display of an excerpt in the search results, so only the page title (and in some cases the sitelinks) is shown (Google, Yahoo!).
- noodp: prevents the use of a replacement description drawn from the Open Directory Project (ODP/DMOZ).
- notranslate: prevents your page from being offered for translation in search results.
- noimageindex: specifies that you do not want your page to appear as the source of an image in Google search results.
- unavailable_after: [date]: lets you specify the exact date and time after which crawling and indexing of the page must stop.
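Note that these directives do not go in robots.txt; they are values for the robots meta tag placed in a page’s `<head>` (they can also be sent as an X-Robots-Tag HTTP header). A minimal sketch, with an arbitrary example date for unavailable_after:

```html
<head>
  <!-- keep this page out of the index and don't follow its links -->
  <meta name="robots" content="noindex, nofollow">

  <!-- or: stop crawling and indexing after a specific date
       (the date below is only an example) -->
  <meta name="robots" content="unavailable_after: 25-Jun-2026 15:00:00 EST">
</head>
```

In practice you would use only one of the two tags above, depending on whether the page should never be indexed or should simply expire.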