
Full Guide Tutorial: How to Create a Robots.txt File (Understanding & Use of Robots)

Hey folks, hope you are doing great. In today's tutorial I will show you how to create a robots.txt file, understand what it does, and implement it on your website or blog.

1. So what's the robots.txt file?
The robots.txt file is simply a text file placed on the web server that tells web crawlers whether or not they may access a file.

2. What's the point of using robots.txt?
Robots.txt is a very powerful way to keep pages without quality content out of the index. For instance, suppose you have two versions of a web page: one for viewing in browsers and one for printing. You would prefer the printing version to be excluded from crawling; otherwise you would risk a duplicate content penalty.

See robots.txt examples:
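A minimal sketch of such a file, assuming for illustration that the printable page versions live under a /print/ path:

User-agent: *
Disallow: /print/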


N/B: For this to work, robots.txt must be placed in the top-level directory of the web server, i.e. https://yoursite.com/robots.txt. For a Blogger blog, follow these steps.

3. How to create a robots.txt file
Don't get scared when you hear the term robot and think you have to build yourself an actual robot. A robots.txt file is just a text file, so you can create it in Notepad or any other plain text editor, write it in a code editor, or even copy and paste it from somewhere else.

Importantly, don't put too much weight on the idea that you are creating a robots.txt file; think of it as writing a simple note, since the procedure is pretty much the same. A robots.txt file can be created either manually or by using online services.

Manual method: As mentioned earlier, a robots.txt file can be created with any plain text editor. Write the content your site requires, then save it as a plain text file named robots.txt.

Online method: There are plenty of online robots.txt generator tools. You may use whichever tool you prefer, but check the resulting file carefully for rules that could hurt your blog's performance. Robots.txt is a somewhat delicate file, so the online method is not as safe as the manual one.

4. How to set up a robots.txt file 
A properly configured robots.txt file prevents private information from being found by search engines and displayed to the public. However, remember that robots.txt commands are not full protection, only a guide for crawlers. Googlebot follows the instructions in robots.txt, but other robots can easily ignore them, so to get the desired result you have to understand and use robots.txt correctly. A correct robots.txt begins with the directive "User-agent", naming the robot that the following directives apply to.

See example below:
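A minimal sketch of such a file (the paths are only illustrative):

User-agent: *
Disallow: /*utm

User-agent: Googlebot
Disallow: /print/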


N/B: This means each robot follows only the directives in the block whose user-agent name matches it, as shown in the example given below:
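A sketch of that behaviour, with illustrative paths; here Googlebot would obey only the block addressed to it and ignore the block meant for all other robots:

User-agent: *            # all robots not named below
Disallow: /search

User-agent: Googlebot    # Googlebot uses only this block
Disallow: /*utm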


The User-agent directive addresses its tasks to one particular robot, and right after the directive come the tasks for that named robot. In the example above you can see the use of the prohibiting directive "Disallow" with the value "/*utm", which is how you close pages with UTM marks from crawling.

See an example of an incorrect fragment in robots.txt below:
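A sketch of such a mistake, assuming the error is a rule that appears before any User-agent line has been declared:

Disallow: /*utm          # no User-agent declared before the rule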


And below is an example of a correct fragment in robots.txt:
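The same rule, this time declared after a User-agent line:

User-agent: *
Disallow: /*utm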


As seen in the example above, the rules in robots.txt come in blocks. Every block contains the instructions either for one certain robot or for all robots ("*"). Furthermore, it is very important to keep the right order of the rules when using the directives "Allow" and "Disallow" together.

"Allow" is the permission directive, while "Disallow" is the opposite directive, restricting the robot's access.

Here below is an example of using both directives:
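A sketch of such a combination, with the permitting rule listed first:

User-agent: *
Allow: /contact/page
Disallow: /contact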


The example above forbids all robots to index the pages beginning with "/contact" while permitting them to index the pages beginning with "/contact/page".

So let's see the same example again, this time in the right order:
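The same rules with the forbidding directive first:

User-agent: *
Disallow: /contact
Allow: /contact/page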


As you can see in the example above, first we forbid the whole section and then we permit some of its parts. Below is another way to use both directives together:
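One more variant, sketched here: explicitly allow everything, forbid the section, then re-allow part of it:

User-agent: *
Allow: /
Disallow: /contact
Allow: /contact/page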


The directives "Allow" and "Disallow" can also be used without a value, in which case they are read as the opposite of the same directive with the value "/".

Below is an example of a directive used without a value:
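A sketch, following the rule just described:

User-agent: *
Disallow:          # empty value, read as "Allow: /", i.e. allow the whole site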


It is up to you which form to use, since both variants are appropriate. Just be attentive: set the priorities correctly and state exactly what must be forbidden in the directive values.

5. Robots.txt syntax 
Search engine robots execute the commands found in robots.txt, but every search engine may read the robots.txt syntax slightly differently. Check the set of rules below to prevent the common robots.txt mistakes:

  • Every directive should begin on a new line. 
  • Avoid putting more than one directive on a line. 
  • Avoid putting spaces at the very start of a line. 
  • The directive value must stay on one line. 
  • Do not put the directive value in quotes. 
  • Do not put a semicolon after a directive. 
  • A robots.txt command must have the form [Directive_name]:[optional space][value][optional space]. 
  • Comments must be added after the hash mark #. 
  • An empty line is read as the end of the current User-agent block. 
  • The directive "Disallow" with an empty value is equal to "Allow: /", which means allow everything. Only one value may be given in an "Allow" or "Disallow" directive. 
  • The file name is case sensitive, so uppercase letters are not allowed; e.g. Robots.txt or ROBOTS.TXT is not correct. 
  • The robots.txt directives themselves are not very case-sensitive, while the names of files and directories are very much case-sensitive. 
  • If the directive value is a directory, put a slash "/" before the directory name, e.g. Disallow: /category. 
  • An oversized robots.txt (exceeding 32 KB) is read as fully permissive, equal to "Disallow:". 
  • An unavailable robots.txt may be read as fully permissive. 
  • An empty robots.txt is read as fully permissive. 
  • Several "User-agent" directives listed without empty lines between their blocks will be ignored, except the first one. 
  • The use of national (non-Latin) characters is not allowed in robots.txt.

Different search engines can read the robots.txt syntax in their own way, so some rules may be ignored by some of them. As a rule, put only the necessary content into robots.txt: the fewer lines you have, the better the result, and pay attention to your content quality.

6. Testing your robots.txt file 
In order to check whether the syntax and file structure are correct, you may use online tools such as the one Google provides: https://www.google.com/webmasters/tools/siteoverview?hl=ru

Googlebot is the robot Google uses to index websites in its search engine, and it understands a few more instructions than other robots. To check your robots.txt file online, put robots.txt in the root directory of the website; otherwise the checker will not detect it. It is recommended to verify that your robots.txt is available, e.g. at yoursite.com/robots.txt. There are many different online robots.txt validators; which one you use is up to you.

7. Robots.txt Allow 
Allow is the opposite of Disallow, and the directive uses a syntax similar to "Disallow", as the example below shows:
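A sketch matching the description that follows:

User-agent: *
Disallow: /
Allow: /page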


Indexing of the whole website is forbidden except for the pages whose paths begin with /page.

Below is an example of Allow and Disallow with empty values:
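Two sketches, using the equivalence rules listed earlier:

User-agent: *
Allow:             # empty value, read as "Disallow: /" (forbid everything)

and

User-agent: *
Disallow:          # empty value, read as "Allow: /" (allow everything)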


8. Robots.txt Disallow 
Disallow is the prohibitive directive used in the robots.txt file. "Disallow" forbids indexing of the website or some of its parts, depending on the path given as the directive value.

See the example below of forbidding indexation of the whole website:
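A minimal example of a fully closed site:

User-agent: *
Disallow: /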

The example above closes the website from indexing by all robots.

The special symbols * and $ are allowed in the Disallow directive value.

* – matches any quantity of any characters. For example, the value /page* matches /page, /page1, /page-about-me and /page/good-food.

$ – anchors the match to the end of the path. With it, the directive Disallow will prohibit /page itself, while /page1, /page-about-me or /page/good-food will still be allowed to be indexed.
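A sketch of such a rule:

User-agent: *
Disallow: /page$    # blocks exactly /page; /page1 and /page/good-food stay allowed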


When a website is closed from indexation, search engines may show the message "url restricted by robots.txt" for its pages. If you need to prohibit indexation of a page, you can use not only robots.txt but also similar HTML tags, as shown below:

  • <meta name="robots" content="noindex"/> – do not index the page content; 
  • <meta name="robots" content="nofollow"/> – do not follow the links; 
  • <meta name="robots" content="none"/> – do not index the page content and do not follow the links; 
  • <meta name="robots" content="noindex, nofollow"/> – equal to content="none".

9. Robots.txt sitemap 
The "Sitemap" directive is used to point to the location of sitemap.xml from the robots.txt file. See the example below of a robots.txt containing a sitemap.xml entry:
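A sketch, using the same yoursite.com placeholder as earlier (the Disallow path is only illustrative):

User-agent: *
Disallow: /search

Sitemap: https://yoursite.com/sitemap.xml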


Pointing to sitemap.xml through the Sitemap directive in your robots.txt lets the crawler learn that a sitemap exists and then start indexing it.

10. Directive Clean-Param 
The "Clean-param" directive allows excluding pages with dynamic parameters from indexing. Such pages serve the same content under different URLs, so one page effectively becomes available at several addresses. The main task is to get rid of all the extra dynamic addresses, of which there can be a great many; to do this we eliminate the dynamic parameters with the robots.txt directive "Clean-param". See the example shown below:
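Clean-param is a non-standard directive (historically supported by Yandex). A sketch of its use, with hypothetical parameter names, might look like this:

User-agent: *
Clean-param: utm_source&utm_medium /page.html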


Let's consider the example of a page with the following URL:
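A purely hypothetical address of that kind:

https://yoursite.com/page.html?utm_source=news&utm_medium=email

With the Clean-param rule sketched above, the robot would reduce such addresses to the single canonical https://yoursite.com/page.html.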




11. Directive Crawl-delay 
This directive helps avoid server overload when web crawlers visit your site too often; it is mainly useful for sites with a huge number of pages. Below is our robots.txt "Crawl-delay" example:
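A sketch of the format described below (support for Crawl-delay varies between search engines):

User-agent: Googlebot
Crawl-delay: 3      # ask for no more than one request every three seconds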


In the example above, we simply asked Google's robots to download the pages of our website no more than once every three seconds. Some search engines also accept the "Crawl-delay" value in fractional form, as a guideline parameter.

12. Comments in robots.txt file 
Comments in robots.txt begin with the hash sign # and are valid until the end of the current line; they are ignored by robots, as shown below:
User-agent: *
# Comment can start the line
Disallow: /page # Comment can also continue the line
#robots
#ignore
#comments
Host: www.yoursite.com

The Common Mistakes 

1. Mistakes in syntax (the values of User-agent and Disallow are swapped):

Wrong:
User-agent: /
Disallow: Yahoo

Correct:
User-agent: Yahoo
Disallow: /

2. Several "Disallow" values in one line:

Wrong:
Disallow: /css/ /cgi-bin/ /images/

Correct:
Disallow: /css/
Disallow: /cgi-bin/
Disallow: /images/

3. Wrong file name:

Wrong:
Robots.txt (improper case)
robot.txt (missing 's')
ROBOTS.TXT (caps lock)

Correct:
robots.txt


The robots.txt file is one of the most important SEO tools, as it has a direct impact on how your website gets indexed.

SEO Friendly and Valid HTML5 Meta Tags

How do you create powerful SEO meta tags for Blogspot? Most bloggers don't know what meta tags are or why they matter, so to put it simply: the function of meta tags is to provide information, in the form of metadata, associated with HTML and XHTML documents. Let's go ahead and break things down a little for better understanding.

Meta tags are one of the optimization methods used to identify blog posts and articles in search engines. In other words, your posts will be more easily found in browsers and search engines such as Google and Bing, and therefore more easily sorted in the SERP (search engine results page).

Meta tags basically consist of several key elements, such as:

  • Meta description tag: used to give a general overview of the contents of your blog page. It should not be more than 200 characters. See the code below.

<meta content='Enter Your Blog Description Here'  name='description'/>

  • Meta keywords tag: very important for defining and determining the keywords under which your page can be found in search engines. It is also good for AdSense optimization.

<meta content="Enter Keywords Here'' name="keywords"/>
  • Meta robots tag: its main function is to determine which blog pages will be indexed by search engines and how. This tag is very useful, especially if a blog uses frames for navigation.
<meta content="index follow" name="robots"/>
Combining the three components mentioned above, you get the code shown below:
<meta content='Enter Your Blog Description' name='description'/>
<meta content='Enter Keywords Here' name='keywords'/>
<meta content='index, follow' name='robots'/>
Okay, with that noted, let's see how to deal with the 'xxxx' placeholder values that you normally see in your blog template.

SEO Friendly and Valid HTML5 Meta Tags

Since nowadays we have newly modified templates, I will use the example below to show you how to change the code in your template. Note that not every blog template has the same features, but at least you should get an idea of how to do it. Don't worry, I will answer your questions when needed. The code below is installed after the opening <head> tag and before </head>.
<!-- [ Meta Tag SEO ] -->
<include expiration='7d' path='*.css'/>
<include expiration='7d' path='*.js'/>
<include expiration='3d' path='*.gif'/>
<include expiration='3d' path='*.jpeg'/>
<include expiration='3d' path='*.jpg'/>
<include expiration='3d' path='*.png'/>
<meta content='Sat, 02 Jun 2020 00:00:00 GMT' http-equiv='expires'/>
<meta charset='utf-8'/>
<meta content='width=device-width, initial-scale=1' name='viewport'/>
<meta content='blogger' name='generator'/>
<meta content='text/html; charset=UTF-8' http-equiv='Content-Type'/>
<link href='http://www.blogger.com/openid-server.g' rel='openid.server'/>
<link expr:href='data:blog.homepageUrl' rel='openid.delegate'/>
<link expr:href='data:blog.url' rel='canonical'/>
<b:if cond='data:blog.pageType == &quot;index&quot;'>
<title><data:blog.pageTitle/></title>
<b:else/>
<b:if cond='data:blog.pageType != &quot;error_page&quot;'>
<title><data:blog.pageName/> - <data:blog.title/></title>
</b:if></b:if>
<b:if cond='data:blog.pageType == &quot;error_page&quot;'>
<title>Page Not Found - <data:blog.title/></title>
</b:if>
<b:if cond='data:blog.pageType == &quot;archive&quot;'>
<meta content='noindex' name='robots'/>
</b:if>
<b:if cond='data:blog.searchLabel'>
<meta content='noindex,nofollow' name='robots'/>
</b:if>
<b:if cond='data:blog.isMobile'>
<meta content='noindex,nofollow' name='robots'/>
</b:if>
<b:if cond='data:blog.pageType != &quot;error_page&quot;'>
<meta expr:content='data:blog.metaDescription' name='description'/>
<script type='application/ld+json'>{ &quot;@context&quot;: &quot;http://schema.org&quot;, &quot;@type&quot;: &quot;WebSite&quot;, &quot;url&quot;: &quot;<data:blog.homepageUrl/>&quot;, &quot;potentialAction&quot;: { &quot;@type&quot;: &quot;SearchAction&quot;, &quot;target&quot;: &quot;<data:blog.homepageUrl/>?q={search_term}&quot;, &quot;query-input&quot;: &quot;required name=search_term&quot; } }</script>
<b:if cond='data:blog.homepageUrl != data:blog.url'>
<meta expr:content='data:blog.pageName + &quot;, &quot; + data:blog.pageTitle + &quot;, &quot; + data:blog.title' name='keywords'/>
</b:if></b:if>
<b:if cond='data:blog.url == data:blog.homepageUrl'>
<meta content='BLOG-DESCRIPTION' name='keywords'/></b:if>
<link expr:href='data:blog.homepageUrl + &quot;feeds/posts/default&quot;' expr:title='data:blog.title + &quot; - Atom&quot;' rel='alternate' type='application/atom+xml'/>
<link expr:href='data:blog.homepageUrl + &quot;feeds/posts/default?alt=rss&quot;' expr:title='data:blog.title + &quot; - RSS&quot;' rel='alternate' type='application/rss+xml'/>
<link expr:href='&quot;http://www.blogger.com/feeds/&quot; + data:blog.blogId + &quot;/posts/default&quot;' expr:title='data:blog.title + &quot; - Atom&quot;' rel='alternate' type='application/atom+xml'/>
<b:if cond='data:blog.pageType == &quot;item&quot;'>
<b:if cond='data:blog.postImageThumbnailUrl'>
<link expr:href='data:blog.postImageThumbnailUrl' rel='image_src'/>
</b:if></b:if>
<link expr:href='data:blog.url' hreflang='x-default' rel='alternate'/>
<link href='/favicon.ico' rel='icon' type='image/x-icon'/>
<link href='https://plus.google.com/GOOGLE-PLUS-USER/posts' rel='publisher'/>
<link href='https://plus.google.com/GOOGLE-PLUS-USER/about' rel='author'/>
<link href='https://plus.google.com/GOOGLE-PLUS-USER' rel='me'/>
<meta content='GOOGLE-WEBMASTER-VALIDATION-CODE' name='google-site-verification'/>
<meta content='BING-WEBMASTER-VALIDATION-CODE' name='msvalidate.01'/>
<meta content='Tanzania' name='geo.placename'/>
<meta content='ADMIN-NAME' name='Author'/>
<meta content='general' name='rating'/>
<meta content='tz' name='geo.country'/>
<!-- [ Social Media Meta Tag ] -->
<b:if cond='data:blog.pageType == &quot;item&quot;'>
<meta expr:content='data:blog.pageName' property='og:title'/>
<meta expr:content='data:blog.canonicalUrl' property='og:url'/>
<meta content='article' property='og:type'/>
</b:if>
<meta expr:content='data:blog.title' property='og:site_name'/>
<b:if cond='data:blog.url == data:blog.homepageUrl'>
<meta expr:content='data:blog.metaDescription' name='description'/>
<meta expr:content='data:blog.title' property='og:title'/>
<meta content='website' property='og:type'/>
<b:if cond='data:blog.metaDescription'>
<meta expr:content='data:blog.metaDescription' property='og:description'/>
<b:else/>
<meta expr:content='&quot;Silakan kunjungi &quot; + data:blog.pageTitle + &quot; Untuk membaca postingan menarik.&quot;' property='og:description'/>
</b:if>
</b:if>
<b:if cond='data:blog.postImageUrl'>
<meta expr:content='data:blog.postImageUrl' property='og:image'/>
<b:else/>
<b:if cond='data:blog.postImageThumbnailUrl'>
<meta expr:content='data:blog.postImageThumbnailUrl' property='og:image'/>
<b:else/>
<meta content='Put the URL of your blog logo here' property='og:image'/>
</b:if>
</b:if>
<meta content='https://www.facebook.com/FACEBOOK-PROFILE' property='article:author'/>
<meta content='https://www.facebook.com/FB-FUN-PAGE' property='article:publisher'/>
<meta content='FB-APP-CODE' property='fb:app_id'/>
<meta content='FB-ADMIN-CODE' property='fb:admins'/>
<meta content='en_US' property='og:locale'/>
<meta content='en_GB' property='og:locale:alternate'/>
<meta content='id_ID' property='og:locale:alternate'/>
<meta content='summary' name='twitter:card'/>
<meta expr:content='data:blog.pageTitle' name='twitter:title'/>
<meta content='USER-TWITTER' name='twitter:site'/>
<meta content='USER-TWITTER' name='twitter:creator'/>

Information: use the table below to learn how to change the placeholder values in the meta tags.

DETAIL TO CHANGE – DESCRIPTION
  • BLOG-DESCRIPTION – the description or keywords of your blog 
  • GOOGLE-PLUS-USER – your Google+ admin link, e.g. +amsamuel 
  • GOOGLE-WEBMASTER-VALIDATION-CODE – your Google validation code obtained from Search Console 
  • BING-WEBMASTER-VALIDATION-CODE – your Bing validation code 
  • ADMIN-NAME – the author's name 
  • tz – the country abbreviation code of where the blog is published from, e.g. ke for Kenya, tz for Tanzania 
  • FACEBOOK-PROFILE – your Facebook profile link 
  • FB-FUN-PAGE – your Facebook fan page link 
  • FB-APP-CODE – your Facebook application code; get it from https://developers.facebook.com 
  • FB-ADMIN-CODE – your Facebook profile code; use http://findmyfbid.com 
  • USER-TWITTER – your Twitter user name


N/B: Delete any meta tag that you are not using (optional).

Re-customize the placeholder values with your own, following the examples in the table above. Now that you are done and all set, tell me how it goes; I will be there to guide you step by step until you get it right. Okay, don't forget to share with friends too.
