Friday, 29 September 2017

Fear is real and you have to face it.



Hello everyone, it's been a long time since I posted something. To get you all motivated, I am sharing a short message my teacher sent me while I was preparing for GATE. Hope it helps.

Dear all,

     Fear is real, and we all have to face it every day. What makes it even more difficult is when everyone else around you is following the trend and you are following your heart. You might be in one of the following situations:
1. Your friends might be preparing for semester exams (everywhere you look there are books and lab records, and people make you feel that they are working hard and learning more than you, even though you know they will get marks but not knowledge), while you have completely given up on semester exams and are preparing for All India Rank 1 in GATE.


2. Your classmates are all preparing for service-based companies (and they make it look as if they will die if they don't get placed), and you don't give a damn about it because you want to get into the IITs and go for the top-league product-based companies.


3. All your colleagues at the office are thinking about settling down by getting married and buying a house and car on loan (and every day during lunch they make you feel you are doing wrong by deciding to go for a master's now, when they are all learning to limit their lives to the size of their earnings), and you don't even think about it because you know we have one life and you want to live it king-size.


4. All your cousins are earning, and their parents call your dad to talk about the 5000-rupee hike they got this year and the 10000-rupee bonus for Durga Puja (for working a meaningless 9-to-6 job), while you have dropped a year to learn and become an entrepreneur.

If any of these circumstances rings a bell, I can tell you that you are really great. Great to have taken the decision to stand out and not fit in. Great to have had the guts to follow your heart. Great, because you face the criticism every single day of your life and have the courage to smile and continue. Great that you have a big heart to forgive everyone who makes you the butt of the joke every day. Great that you have not given up yet, and great that you are going to see it through till the end.

Now is the time to put in more hours and increased focus. It is the time to gather even more courage and convert panic time into preparation time. I have seen people get into IITs in just 2 months. They are normal students like you; the only difference is that they believed it was possible. Please believe in yourself. Never let your friends, parents or colleagues decide what you have to do with your life. You know what you want, and you decided to take action. You have come too far to turn back now. Finish it strong. Finish it great. Do it now. Your future self will thank you for what you are doing now. Don't give up.

This was all about this topic. As always, if you like the post, share it. If you have any queries or want to suggest something, you can always ping me on Facebook or mention it in the comments section below. I will be happy to help. You can follow me on Facebook or Google+. Don't forget to follow the blog.


Tuesday, 11 July 2017

Virtual machine, statically typed vs dynamically typed, WORA, machine code vs byte code, compiled vs interpreted languages



1. Compiled vs interpreted languages:

Compiled languages: A compiled language is one whose source code is translated directly into the target machine's format. For example, if we have
a = b + c
in our source program, it can be directly converted into machine-level instructions like
ADD B,C
MOV B,A
A language like this need not be compiled again and again for execution. It can be compiled once and executed any number of times.
Some of the languages which fall into this category are C and C++.


Interpreted languages: The languages which are not directly converted into the destination machine's language are interpreted languages. In these languages, the source code is converted into an intermediate representation by the interpreter, and only then into the machine format.
Take the example of adding b and c, i.e. "b + c". This high-level code is first read by the interpreter, which converts it into an equivalent internal form, say add(b, c). This function would in turn invoke the machine instruction ADD.
The source code of an interpreted language is not executed in one shot. Every line is interpreted and executed independently by the interpreter.
Every time the program needs to run, it needs to be interpreted again, so it is slow compared to compiled languages.


Strictly speaking, languages themselves are neither compiled nor interpreted; it is their implementations that are. But if we are to classify languages this way, the above is the usual explanation.
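CPython illustrates that last point nicely: it first compiles Python source to bytecode and then interprets that bytecode. A minimal sketch using the built-ins compile() and exec() exposes the two steps:

# source text -> bytecode object -> execution
code = compile("a = b + c", "<string>", "exec")
namespace = {"b": 2, "c": 3}
exec(code, namespace)          # the interpreter runs the bytecode
print(namespace["a"])          # 5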

2. Machine code and byte code:

Machine code: Machine code is the set of instructions that can be directly executed by the CPU. It is hardware dependent, and every machine has its own machine-code instruction set.
Machines which share the same architecture have the same instruction set and therefore the same interpretation of the same compiled program.
For example, an Intel processor might have the following instruction for addition -> 00101010,
while a Motorola processor will have a different instruction for addition, something like -> 10100010.
Machine code built for an Intel machine cannot run on a Motorola machine.


Byte code: Byte code is an intermediate representation of Java source code. Java is both a compiled and an interpreted language. When we compile a Java program (a .java file), it is converted into an intermediate representation called bytecode. The .class file is the byte code.


The reason we call it bytecode is that every opcode in the .class file is one byte long. This .class file is given to the Java interpreter, which is specific to a machine. The interpreter takes the .class file and executes it instruction by instruction. javac is the compiler and java is the interpreter.
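You can inspect Java bytecode yourself with javap -c on a .class file. If you have Python handy instead, its dis module shows the same flavour of one-byte, stack-based opcodes (Python bytecode, of course, not Java's):

import dis

def add(b, c):
    return b + c

dis.dis(add)   # prints opcodes such as BINARY_OP (BINARY_ADD on older Pythons)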

3. Virtual machine:

A virtual machine is a computer file that behaves like an actual computer. It is like creating a computer inside another computer: one operating system can run another operating system, giving the user the feeling of having a dedicated machine.
Using a virtual machine, the guest can use the hardware of the machine just as the parent operating system does.
For example, we can use Windows as the base operating system and install VMware on top of it, which creates a virtual instance of the machine's hardware. That instance can then run Ubuntu, and Ubuntu works as if it had the machine to itself.

4.WORA:

WORA stands for write once, run anywhere. This is one of the key features of Java and a big reason for its popularity. Java source code is compiled into an intermediate form called bytecode.
This bytecode is then interpreted by an interpreter that is specific to the machine. So to run a Java program on a given platform, we only need the Java interpreter for that platform. It is because of this that Java is WORA.

5. Statically typed and dynamically typed languages:


Statically typed:
In a statically typed language, the type of every variable is known at compile time. It is the responsibility of the programmer to specify which type each variable belongs to; if this is not done, the compiler raises an error.
Some of the statically typed languages are C, C++ and Java.
Dynamically typed: In a dynamically typed language, types are associated with run-time values, not with variables, so a variable can be assigned a value of any type in a program. For example:
Employee_name = "Himanshu"
Employee_name = 48
There will be no compilation error in the above scenario in a dynamically typed language,
but it is not allowed in a statically typed language.
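Python, being dynamically typed, lets you watch this happen:

employee_name = "Himanshu"    # the name is bound to a string value
print(type(employee_name))    # <class 'str'>
employee_name = 48            # rebinding the same name to an int is fine
print(type(employee_name))    # <class 'int'>

The equivalent reassignment in Java would be rejected at compile time, because the variable's declared type is fixed.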

This was all about this topic. As always, if you like the post, share it. If you have any queries or want to suggest something, you can always ping me on Facebook or mention it in the comments section below. I will be happy to help. You can follow me on Facebook or Google+. Don't forget to follow the blog.


Sunday, 25 June 2017

What is paging and why do we need it?


As I mentioned in my previous posts on the disadvantages of fixed and dynamic partitioning, a process in memory is not allowed to be split across different places; it has to be contiguous. It was because of this very reason that we faced internal and external fragmentation.

Paging is one of the most popular methods and is still in use. In this technique, we divide the process into small parts of equal size, called pages. We then divide the main memory into small parts called frames. The division is such that the size of a page equals the size of a frame.

Whenever a process wants to execute, we transfer its pages from secondary storage to main memory. But we don't transfer all the pages; only the pages which are currently required are brought in. When the process finishes execution, its pages are wiped off the frames. But a question arises: how do we keep track of the pages of each process?

For this, we use a page table. Every process has its own page table. The page table tells which page is stored in which frame. To understand it better, let's look at an illustration.

Consider the following diagram. In it, the size of the process is 1 KB and the main memory size is 1 MB. Assume the page size allowed on our computer is 1 byte.


Now we divide the process into pages. Since each page is 1 byte, the process will contain 1024 pages of 1 byte each. Similarly, when we divide the main memory into frames, it will contain 2^20 frames, each of size 1 byte.
Now, as and when pages are requested, they are brought into main memory and stored in frames. They need not be stored contiguously; they can be placed anywhere in memory except the reserved space.

The page table keeps track of the pages of the process. In the above diagram, we can see page table 1 keeps track of process 1 pages and page table 2 keeps track of process 2 pages. The entries in the page table keep on updating as and when the pages move in and out of the memory.
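Address translation with a page table is a one-line formula: split the logical address into a page number and an offset, look the page up, and glue the frame number onto the offset. A minimal sketch with a hypothetical page table and a 4-byte page size (1-byte pages, as in the example above, would make the offset disappear):

PAGE_SIZE = 4
page_table = {0: 5, 1: 9, 2: 2}   # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE     # which page of the process
    offset = logical_address % PAGE_SIZE    # position inside that page
    frame = page_table[page]                # the page table lookup
    return frame * PAGE_SIZE + offset       # physical address

print(translate(6))   # page 1, offset 2 -> frame 9 -> address 38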

Now because of paging many of the disadvantages of fixed and dynamic partitioning are removed.

  • There is no external fragmentation.
  • The degree of multiprogramming increases.
There is internal fragmentation, but it is negligible: the space wasted is at most about one page per process, since only the last page may be partially filled.

There is a lot more to be discussed about paging: how does paging work with virtual memory? How are frames allocated to pages? All this will be explained in the next post.

This was all about this topic. As always if you like the post, share it. If you have any queries, or you want to suggest something you can always ping me on Facebook or mention in the comments section below. I will be happy to help. You can follow me on Facebook or google+. Don't forget to follow the blog.


Tuesday, 13 June 2017

Dynamic partitioning in operating system



In my previous post, I discussed fixed partitioning. You might want to check that out first, because this post is a continuation of it.
If you remember, the major drawback of fixed partitioning was that it could fail to allocate memory to a process even when enough free memory existed, all because of the fixed partitions created in advance. To overcome this problem, people came up with another solution, called dynamic partitioning.

In this method, memory is not divided until a process asks for it. Whenever a new process arrived, the operating system would find out the size of the process and allocate exactly that much space to it. To illustrate this, let's look at the following scenario.

Say we had 20 MB of memory available. When process P1 arrived, we assigned it 1 MB of memory. Similarly, when process P2 arrived, we assigned it 2 MB, and so on. In this approach we didn't waste any memory, which means there was no internal fragmentation.

But even this method had some flaws. Can you think of one? The limitation is that once a process completes its execution and empties its memory slot, we are left with a hole. The problem arises when processes at different places in memory finish execution and create holes. The following scenario explains this drawback of dynamic partitioning.

Assume that after execution, processes P1, P3 and P5 leave. Now we have holes of sizes 1 MB, 3 MB and 5 MB. Another process, say P6, comes in and asks for 6 MB of space. You can see we have more than 6 MB of free space in total, but we are still not able to allocate space to the process. This is called external fragmentation.
One solution proposed to overcome this problem was compaction. In this technique, the free spaces, or holes, are brought together to form one bigger hole. But this process is tedious and consumes a lot of time. So dynamic partitioning was also not a big hit, though it was much better than fixed partitioning.
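A toy free-list model makes the problem concrete (a sketch; first-fit allocation assumed, sizes in MB, holes taken from the scenario above):

holes = [1, 3, 5]                    # left behind by P1, P3 and P5

def first_fit(holes, request):
    for i, h in enumerate(holes):
        if h >= request:
            holes[i] -= request      # carve the request out of this hole
            return True
    return False

print(sum(holes))                    # 9 MB free in total...
print(first_fit(holes, 6))           # ...yet False: no hole fits P6's 6 MB

holes = [sum(holes)]                 # compaction merges the holes into one
print(first_fit(holes, 6))           # True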
So altogether, dynamic partitioning had the following advantages and disadvantages:

Advantages:


  • The degree of multiprogramming is dynamic as compared to static partitioning.
  • The size of the process is not limited by the size of the partition.
  • It doesn't suffer from internal fragmentation.

Disadvantages:

  • Allocation and deallocation of memory is tedious. 
  • It suffers from external fragmentation if compaction is not applied.

This was all about this topic. As always, if you like the post, share it. If you have any queries or want to suggest something, you can always ping me on Facebook or mention it in the comments section below. I will be happy to help. You can follow me on Facebook or Google+. Don't forget to follow the blog.

Thank you!


Monday, 12 June 2017

Fixed partitioning in operating system



Main memory in a computer is a very useful resource; it is needed by each and every process at some point in time. In this post, we will discuss a technique which was used to assign memory to processes. But as always, we will not jump directly into the discussion of what fixed partitioning is.
We will go step by step and find out how we reached fixed partitioning, and what made us opt for it.

Before main memories existed, relays and delay lines were used for main-memory functions. But the problem with these devices was that they would reproduce data only in the order in which the data was written into them. This made it very difficult to access the data randomly.

History of main memory:

Even if you wanted to read only the end of the data, you had to go through it sequentially to reach the end. This problem led to the invention of drum memory, but to retrieve the data efficiently you had to know the physical layout of the drum, so even this was not very popular.
Drum memory
Soon technology advanced, and with transistors people started making small-capacity, high-speed storage devices such as registers. But registers at that time were not like the ones we see today: they were large and very costly, so they could not be used for large amounts of data.

The first practical form of random-access memory was the Williams tube, starting in 1947. It stored data as electrically charged spots on the face of a cathode-ray tube (CRT). The electron beam of the CRT could read and write the spots on its face in any order, so access was random. That is where the first random-access memory was built.
The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster and more power-efficient than using individual vacuum-tube latches. A few years later, we had fully random-access memory which was much more efficient.

Fixed partitioning:

After the invention of main memory, the next challenge was how to use it. Since many processes share the memory, some scheme of memory allocation was clearly needed, and fixed partitioning was introduced.
In this technique, main memory is divided into unequal parts. If our memory was 10 MB, the division might be something like 1 MB, 2 MB, 3 MB and 4 MB.

The reason for this kind of division was the varying sizes of processes. Since processes come in different sizes, so did the partitions. The following diagram shows fixed partitioning.
Whenever a process makes a request, the operating system looks into primary memory and finds an appropriate hole for it. How it finds the appropriate hole is a different issue, which we will look into in some other tutorial. For now, assume it somehow finds a correct spot for the process.

But the problem comes in with memory wastage. It is very difficult to find a slot which is exactly the same size as the process; most of the time the slot is either larger or smaller. This leads to a very big problem called internal fragmentation.

To illustrate this problem, let's assume the above three slots of memory are occupied and another process, of size 1 MB, comes in. We have no choice but to allocate it the 4 MB slot. This leads to a wastage of 3 MB. That 3 MB cannot be occupied by any other process, even if the process is of size <= 3 MB.
Process allocation in fixed partitioning
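A toy model of the same situation (a sketch; partition sizes from the example above, in MB):

partitions = [1, 2, 3, 4]
free = [False, False, False, True]    # only the 4 MB slot is still empty

def allocate(process_size):
    for i, size in enumerate(partitions):
        if free[i] and size >= process_size:
            free[i] = False
            return i, size - process_size   # slot index, internal waste
    return None                             # no partition fits

print(allocate(1))   # (3, 3): the 1 MB process gets the 4 MB slot, 3 MB wasted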

This was a major drawback of fixed partitioning and a major reason for its rejection. There are a few more drawbacks:

  • Processes were not allowed to expand over the partition area.
  • The size of the processes was limited by partition sizes.
  • It suffered from internal and external fragmentation.
  • The degree of multiprogramming was limited.  
The reason for the limited degree of multiprogramming is that it can only accommodate as many processes as there are slots or partitions. Even if we have memory left over because of internal fragmentation, we still cannot load more processes into memory.

This was all about this topic. As always, if you like the post, share it. If you have any queries or want to suggest something, you can always ping me on Facebook or mention it in the comments section below. I will be happy to help. You can follow me on Facebook or Google+. Don't forget to follow the blog.

Thank you!


Friday, 9 June 2017

Retransmissions in TCP



Whenever you send a packet, there is no guarantee that the packet will be delivered to the destined node. The concept of retransmission comes into the picture when you send a packet and don't receive any acknowledgement for it.
But the question is: when do you decide that a packet is lost and will not be delivered to the destined host? When do you retransmit the lost packet? This is where two concepts of TCP come into play:
  • Retransmission after the timeout.
  • Retransmission after 3 duplicate acknowledgements.

Retransmission after the timeout:

Since TCP is a connection-oriented protocol, whenever it sends a packet it waits for the acknowledgement of that packet. But for how long? For that, it uses a timeout timer. The timeout timer is started as soon as the packet is transmitted. If the acknowledgement is not received within this time, the sender retransmits the packet.
The following diagram shows the retransmission when the timeout occurs.

If this kind of scenario occurs, it means there is severe congestion in the network.
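TCP runs this timer inside the kernel, but the logic is easy to sketch in user space. Here is a minimal stop-and-wait sender over UDP; the receiver address is hypothetical, and a real receiver would have to echo an ACK for each packet:

import socket

RECEIVER_ADDR = ("127.0.0.1", 9999)   # hypothetical receiver
TIMEOUT = 1.0                         # seconds to wait for the ACK

def send_reliably(payload: bytes, max_retries: int = 5) -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)          # the "timeout timer"
    for attempt in range(1, max_retries + 1):
        sock.sendto(payload, RECEIVER_ADDR)
        try:
            ack, _ = sock.recvfrom(1024)
            return ack                # acknowledgement arrived in time
        except socket.timeout:
            print(f"timeout, retransmitting (attempt {attempt})")
    raise RuntimeError("no acknowledgement after retries")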

Retransmission after 3-duplicate acknowledgements:

Let's say host A wants to send 5 packets to host B. Also assume that it can transmit all the packets one after the other continuously. The following diagram shows the complete scenario. 

Once the packets are transmitted, a timeout timer is started for each individual packet. In the above scenario, packets 1, 3, 4 and 5 are safely delivered to B, but packet 2 is lost on the way. So when the receiver gets packet 1, it sends back an acknowledgement carrying the next expected sequence number, i.e. 2.
Now, on receiving packet 3, it again sends back sequence number 2. Remember, it cannot send sequence number 4 or 5, because TCP uses cumulative acknowledgements (it is roughly 75% Selective Repeat and 25% Go-Back-N). Sending sequence number 4 would mean that all packets up to sequence number 3 have been received and sequence number 4 is now expected.

The same thing happens for packet 4 and packet 5: for these packets too, the receiver sends back acknowledgements with sequence number 2. Now the sender has received three duplicate acknowledgements from the receiver. The acknowledgement of the first packet is not counted in this.

Now, if we receive three duplicate acknowledgements and the timeout timer has not yet expired, we don't wait for the timer to complete. We immediately retransmit packet 2.

Receiving 3 duplicate acknowledgements means the network is not congested yet, but it is about to get congested.
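The sender-side bookkeeping for this rule is tiny: remember the last ACK number and count repeats. A sketch, fed with the four ACKs from the scenario above:

def sender_ack_handler():
    last_ack, dup_count = None, 0
    while True:
        ack = yield                   # next ACK number from the receiver
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:        # third duplicate: fast retransmit
                print(f"fast retransmit of segment {ack}")
                dup_count = 0
        else:
            last_ack, dup_count = ack, 0

handler = sender_ack_handler()
next(handler)                 # prime the generator
for ack in [2, 2, 2, 2]:      # ACK for packet 1, then 3 duplicates
    handler.send(ack)         # prints on the third duplicate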

This was all about this topic. As always, if you like the post, share it. If you have any queries or want to suggest something, you can always ping me on Facebook or mention it in the comments section below. I will be happy to help. You can follow me on Facebook and Google+. Don't forget to follow the blog.


Thursday, 8 June 2017

TCP congestion control



This post is more problem-oriented: less theory and more problem solving. Before I explain the TCP congestion control algorithm, let's see why people came up with it.

History of congestion control:

Back in the late 1980s, when the ARPANET was adopting TCP/IP and researchers were building networks of networks (later called the internet), a severe problem occurred. Whenever a node wanted to send some data, it would transmit it without any restrictions, and because of this the entire internet would come down. To understand this more clearly, let's take an example.
Assume a node named S wants to send a 1 GB file to a node named D, and assume the packet size is 1 MB. This means node S cannot send the entire 1 GB file in a single piece; it needs to be divided into smaller chunks (packets) of size 1 MB. Dividing 1 GB by 1 MB gives 1024 packets.

Since there was no congestion control algorithm back then, all nodes had full freedom to send as many packets as they wanted. So a node like S would send all its packets at once; in this case, S would transmit all 1024 packets in one go.

1024 is a small number now, but back then, when bandwidth was scarce and resources were limited, this number was significant. And here we are talking about just one node. Imagine the amount of traffic generated at every router if even a quarter of all nodes transmitted 1024 packets each.

It was because of this situation that the internet would come down very often. The problem was called congestion collapse. To overcome it, the TCP congestion control algorithm was developed; it acts as a protective layer over the internet.
Before we discuss the algorithm, there are a few terms you should know:
  • MSS: maximum segment size. A packet at the transport layer is called a segment, so MSS is the maximum size of segment that the sender can put onto the internet.
  • Ws: the sender window size. It is the maximum amount of data that the sender can have outstanding at any instant of time. You can think of it as the buffer present on the sender's side.
  • Wr: the receiver window size. It is the maximum amount of data that the receiver can store at its end at any given instant of time. It is the buffer present on the receiver's end.
  • Wc: the congestion window. It is the maximum amount of data that the internet can hold at any given instant of time.

Congestion control algorithm:

The algorithm is divided into 3 phases - 
  • Slow start phase. 
  • Congestion avoidance phase. 
  • Congestion detection phase.
As I said, the algorithm puts restrictions on otherwise unrestricted traffic. If a host has data to send, we don't allow it to send all the data at once.
During the slow start phase we grow exponentially until a point called the threshold (we will see how to calculate it). That is, the rate at which we send packets grows exponentially in this phase.

Initially we send 1 segment, then 2 segments, then double that to 4 segments, then 8, and so on until we reach the threshold value.
You might wonder why we call it the slow start phase. It is because we start with a very small value, such as 1 or 2.

After this we grow linearly until we reach Ws. This phase is called the congestion avoidance phase. We grow linearly here because we have crossed the threshold, and if we kept sending packets at an exponential rate there would be a risk of congestion and network breakdown.

We increase the number of segments linearly: if the threshold is 10, then next time we send 11 segments, then 12, then 13, and so on, until we reach Ws.

Once we reach Ws, we keep sending packets at the same rate, because we cannot have more than Ws outstanding; that is the most data we can generate at our end.
Now coming to the congestion detection phase. This phase is active all the time: congestion can be detected at any instant of time and in any phase, so it is not represented in the graph shown above. Congestion is detected in two ways, and each is handled differently:
  • Whenever a timeout occurs, the new threshold becomes half of the current window size, and we start again from the slow start phase.
  • Whenever 3 duplicate acknowledgements occur, the new threshold is again half of the current window size, but we restart from the congestion avoidance phase, that is, from the threshold value.
Now let's take an example and understand it more clearly.
Example: We are given the following data -
Wr: 64 KB.
MSS: 1 KB.
Start with 1 MSS initially.
Find out after how many transmissions the window reaches its maximum value, if the sender hits a timeout after the 8th transmission and 3 duplicate acknowledgements after the 17th transmission.
↦ To answer this question we first need to express Wr in segments and compute the threshold.
Wr = (Wr in bytes)/(MSS size) = 64 KB / 1 KB = 64 MSS.
Threshold = Wr/2 = 64/2 = 32.
Now, according to the algorithm, we start with 1 MSS.
1, 2, 4, 8, 16, 32, 33, 34 ⨯ (timeout after the 8th transmission; new threshold = 34/2 = 17)
1, 2, 4, 8, 16, 17, 18, 19, 20 ⨯ (3 duplicate acknowledgements after the 17th transmission; new threshold = 20/2 = 10)
10, 11, 12, 13, 14, ..., 64 MSS.
So after 8 + 9 + 55 = 72 transmissions the window reaches its maximum value of 64 MSS, i.e. Ws.
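It is easy to check this walk-through with a small simulation. A sketch, assuming window sizes in MSS units, the thresholds above, a timeout after the 8th transmission and 3 duplicate ACKs after the 17th:

def congestion_window_trace(max_window=64, threshold=32,
                            timeout_at=8, dup_ack_at=17):
    trace, window, t = [], 1, 0
    while True:
        t += 1
        trace.append(window)           # window used for transmission t
        if window >= max_window:       # receiver window reached
            break
        if t == timeout_at:            # timeout: halve threshold, slow start
            threshold = max(window // 2, 1)
            window = 1
        elif t == dup_ack_at:          # 3 dup ACKs: halve threshold,
            threshold = max(window // 2, 1)
            window = threshold         # resume from the threshold
        elif window < threshold:       # slow start: exponential growth
            window = min(2 * window, threshold)
        else:                          # congestion avoidance: +1 MSS
            window += 1
    return trace

trace = congestion_window_trace()
print(trace)        # 1, 2, 4, 8, 16, 32, 33, 34, 1, 2, ..., 64
print(len(trace))   # 72 transmissions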

So this was about TCP congestion control. If I missed something or you want to suggest something, mention it in the comments section below. Tell me how you liked the post. Don't forget to follow the blog. You can also follow me on Facebook and Google+.

Thank you!


    Wednesday, 7 June 2017

    Trace route application of ICMP



    If you missed my earlier post on ICMP, you can check it out here. In that post, I explained ICMP in depth.
    Now coming to the traceroute application of ICMP. This is one of the most interesting applications of ICMP, and you can get hands-on experience with it. Before I explain how it works, let's first see what it is.

    What is traceroute?

    I know the answer to this question is very simple but, believe me, this thing is important and has a lot of applications. So, just for the sake of completeness, I will answer what traceroute is.
    Traceroute is a computer network diagnostic tool for displaying the route (path) and measuring transit delays of packets across an Internet Protocol (IP) network.
    There is an alternative to traceroute called record route. Unfortunately, this option is not for everyone; it can be used by network administrators only. You might wonder why, if both commands do the same thing, record route is not allowed for the general public. It is because traceroute is less reliable than record route, which gives accurate results: traceroute is a way of tricking the routers and the destination into giving up the information. You will understand this more clearly once you see how traceroute works.

    Now let's find out how to use the traceroute command. Its usage depends on your operating system; different operating systems have different commands to trace the route.

    If you are using a Linux-based system, the command is simply traceroute; on Windows it is tracert. The following snapshot is from Ubuntu and shows the traced route to google.com. When you type the command traceroute www.google.com, this is what Ubuntu shows.
    The command gives the IP addresses of all the hops/routers crossed to reach Google's server. It also shows the respective times taken to reach these routers and google.com.
    There is a lot of information given by the traceroute command, but for the time being I want you to remember the last 3 words of the first line, i.e. 60-byte packets. You might wonder why it is packets instead of a packet. I will explain it clearly and in depth; in fact, that line forms the basis of this tutorial.
    Now let's come to the most interesting part of our tutorial, which is how it is done?
    Whenever a user gives the traceroute command, the machine initially creates an IP packet. Inside it is one very important thing: a UDP packet with a dummy port number. You will soon see why.
    The TTL of the first IP packet is set to 1, which means the packet can travel at most 1 hop before being discarded. So after 1 hop the packet reaches router R1, where the TTL expires and the packet is discarded. The router sends back an IP packet with an ICMP packet embedded in it. The ICMP message contains information such as who discarded the packet, why it was discarded and when. When this packet reaches the source S, it gives away all this information, so the source now has the IP address of router R1, the first router on the packet's path.

    Now the source creates one more IP packet with the same content but with TTL = 2. TTL = 2 means this packet can cross at most 2 hops. Whichever router decrements the TTL to 0 discards the packet and sends an ICMP packet back to the source. In this case router R2 discards the packet and sends back an IP packet with an ICMP packet inside it, and the source learns the IP address of the second router.
    The same thing happens with routers R3 and R4, and the source finds out their IP addresses too. The remaining problem is how to make the destination itself send an ICMP back to us. Remember, a packet that reaches the destination is accepted by the destination. Tricky, right?

    This is where our UDP packet comes into the picture. I told you we send a UDP packet inside the IP packet with a dummy port number (a port number which is invalid). That dummy port number helps us here. The source sends an IP packet with TTL = 4. The packet crosses all the routers and reaches the destination, which happily accepts it. But before accepting the packet at the transport layer, D checks which port number the packet has to be delivered to, and finds that no such port number as the one mentioned in the UDP packet is open.

    The destination therefore discards the packet and sends an ICMP message back to source S. This packet includes the IP address of the destination along with other relevant information, so the source finally gets the destination's IP address as well.
    After getting all the IP addresses, the traceroute program gives us the output as shown above. 
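    You can sketch this whole procedure in a few lines of Python. This is a toy version, not the real tool: it needs root privileges (ICMP replies arrive on a raw socket), it sends UDP probes to the conventional dummy port 33434, and it does no timing:

    import socket

    def traceroute(dest_name, max_hops=30, port=33434):
        dest_addr = socket.gethostbyname(dest_name)
        for ttl in range(1, max_hops + 1):
            # raw socket to catch the ICMP replies, UDP socket for the probe
            recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                      socket.getprotobyname("icmp"))
            send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            recv_sock.settimeout(2.0)
            send_sock.sendto(b"", (dest_addr, port))   # probe to a dummy port
            addr = None
            try:
                # either time-exceeded from a router on the way,
                # or port-unreachable from the destination itself
                _, addr = recv_sock.recvfrom(512)
                print(ttl, addr[0])
            except socket.timeout:
                print(ttl, "*")
            finally:
                send_sock.close()
                recv_sock.close()
            if addr and addr[0] == dest_addr:          # destination reached
                break

    traceroute("www.google.com")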

    But if you remember, I told you that traceroute is not 100% reliable compared to record route. Can you tell why it is not reliable? Give it a try, or ping me if you need an explanation.

    As always, thank you for your time. If you have any doubts regarding this or any other tutorial, you can always send me a message, or even better, put them in the comments section below.
    Follow our Facebook page, follow me on Google+, and don't forget to follow the blog.


    Tuesday, 6 June 2017

    ICMP - Path MTU discovery



    Before you start with this tutorial I recommend you to read my first tutorial on ICMP.

    Every network on the internet has an MTU. MTU stands for Maximum Transmission Unit, defined as the maximum size of packet that the network can carry. If a packet larger than the MTU arrives, it is discarded by the default router, the router at the entry point of the network.

    A router is able to do this because it has information about every network it is connected to. Not only that: a router has as many IP addresses as it has interfaces. If a router is connected to 4 different networks through 4 different interfaces, it has 4 different IP addresses, one for each network it is connected to.

    This might feel a little strange, but the same holds for your computer: it can have as many IP addresses as it has interfaces.

    Let's now discuss the path MTU (PMTU) problem and how ICMP helps solve it. Consider the following scenario, where we have two nodes, S the source and D the destination. They are in different networks with different MTU sizes, and between S and D there are two intermediate routers and a network with MTU = 500 bytes.

    The source should know the minimum MTU among the networks on the path of a packet from S to D. If it doesn't, the packet will be discarded somewhere along the way every time, and we will never be able to deliver it from S to D. The solution to this problem is ICMP's path MTU discovery technique.


    Let's say we want to send some data from S to D, and the packet size is 1500 bytes. We intentionally set DF = 1 (DF means don't fragment; you will soon see why we set it). When this packet is transmitted by S onto the network, there are two possibilities: 1. the packet is too big, cannot be forwarded by the network and is eventually discarded; 2. the packet is transmitted successfully.

    In this case, the packet is transmitted successfully. That is because the first network is our own, we know its MTU, and we would never create packets bigger than our own network's MTU.

    The problem comes in when the packet crosses the default router of the network. When the packet reaches router R1, the router finds that the network on the other side has a smaller MTU and the packet cannot be forwarded. One obvious choice would be to fragment the packet and transmit the fragments, but we have set DF = 1, which means the packet cannot be fragmented.

    For this very reason the packet is discarded at the router, and an ICMP message is sent back to S carrying the message that the destination is unreachable because the MTU of the next network is 500 bytes. On reading this message, S creates three new packets of 500 bytes each from the 1500-byte packet and transmits them again.

    Now the packets are rejected by router R2, because the MTU of the next network is smaller than the packet size. Router R2 creates an ICMP message and sends it back to S. The sender now learns that there is a network on the way whose MTU = 300 bytes, so it breaks the 1500 bytes into 5 equal packets of 300 bytes each.

    These 5 packets are transmitted by S again. When they reach router R2 they are accepted and delivered to D. This is how ICMP helps deliver packets across networks with different MTU sizes.
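    The back-and-forth above is easy to simulate. A toy sketch of the discovery loop (sizes in bytes, header overhead ignored, hop MTUs taken from the scenario above):

    def path_mtu_discovery(packet_size, path_mtus):
        size = packet_size
        while True:
            for mtu in path_mtus:
                if size > mtu:    # this router would discard the DF packet
                    print(f"ICMP: fragmentation needed, next-hop MTU = {mtu}")
                    size = mtu    # sender re-packetises at the reported MTU
                    break
            else:
                return size       # every hop accepted the packet

    print(path_mtu_discovery(1500, [1500, 500, 300]))   # prints 300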
    Now you will have understood why we set DF = 1. Had we not set it, the packet would have been fragmented and happily transmitted, and we would never have learned the path MTU.
    You might still ask why we bother with all this. Can't we leave DF = 0 and let the routers do the fragmentation and transmission? Why do all this tedious stuff?
    Well, it might look tedious, but it's the best choice we have.

    Assume we have a packet of 1000000 bytes. Imagine how much work the routers would have to do to send it across the network shown above. To make it worse, say there are 10,000 packets of the same size. Fragmenting that many packets at the routers and then transmitting them is very inefficient.

    The better option is to find out the least MTU on the path, generate packets of that size, and transmit those. This helps the routers process packets faster, and packets are not discarded due to overload.
    So this was another application of ICMP, one which is used everywhere.

    Hope this was helpful. If you have any questions related to any of my posts, feel free to ask; I will be happy to help you. At the same time, if you have any suggestions for me, you can put them in the comments section below. Do follow us on Facebook and Google+ to get the latest updates.

    Thank you!


    ICMP (Request and reply)



    ICMP Request and Reply:

    1. Router solicitation:


    Whenever a host joins a network, the first thing it needs to know is the IP address of its default router. To find that out, it sends an ICMP packet to all the routers it is connected to. Whichever router wants to be its default router sends back another ICMP packet containing its IP address.

    2. Router advertisement:

    Whenever a router joins a new network, it advertises itself by sending an ICMP packet to all the hosts in the network, effectively saying: I am a new router in the network; whoever wants to make me their default router is welcome.

    3. Network mask Req and reply:

    If a host in the network doesn't know its network mask, it can use ICMP to find it out. The host sends an ICMP packet to its default router, and the default router sends back another ICMP packet containing the host's network mask.

    4. Timestamp Req and reply:

    Whenever two hosts are at two different parts of the world, there can obviously be synchronisation issues related to time. One of the hosts can send an ICMP packet to the other host to find out the time on the other host's machine, and both can synchronise their clocks to communicate properly.
    This technique is not used anymore; newer and better time-synchronisation protocols are used now.
    There is one more application of ICMP Req & reply, Traceroute. I have covered this topic in-depth in another post. You can read about it here.

    Hope this was helpful. If you have any questions related to any of my posts, feel free to ask; I will be happy to help you. At the same time, if you have any suggestions for me, you can put them in the comments section below. Do follow us on Facebook and Google+ to get the latest updates.

    Thank you!


    ICMP (Error handling or feedback messaging)



    Before you start with this tutorial, I recommend you read my earlier overview of ICMP, where I gave some insight into this topic. You can check it out here.

    As I have already mentioned where ICMP feedback messaging is used, it is now time to study it in detail. The following diagram gives the complete picture of this post.

    ICMP Feedback messaging:

    1. TTL exceed:

    There is a field called TTL in the IPv4 packet (Hop Limit in IPv6). It specifies how many hops, at most, the packet may cross to reach the destination. If the TTL value reaches 0 before the packet reaches the destination, the packet is discarded by the router and an ICMP packet is generated.
    The sender A sends a packet with TTL = 2, but the TTL reaches zero at router R2 before the packet could reach destination B. Router R2 generates an ICMP message back to sender A, which contains the reason for discarding the packet.

    2. Source quench:

    This is the situation where a source generates a lot of data and pushes all of it onto the network. A router on the way might not be able to handle such a huge amount of data and might get congested.
    In this situation, the congested router sends an ICMP packet back to the sender, effectively saying: please stop until I process the previous packets. To quench means to stop.

    3. Parameter problem:

    If you know strict source routing, you will understand this. In strict source routing, we predefine the path in the packet itself by providing the next-hop information. If I want to send a packet from A to B through many routers, but want the packet to follow a specific path, I can mention the path in the packet, such as A→R1→R4→R7→B.

    Now, if a packet travelling along that path runs into a problem, say there is no path from router R4 to router R7, this is called a parameter problem.
    A parameter problem means that the parameters provided for strict source routing are wrong. Router R4 then generates an ICMP packet and sends it back to A.

    4. Destination unreachable:

    There are two types of destination unreachable problems:
    • Destination host unreachable
    • Destination port unreachable
    Destination host unreachable: This problem occurs when we send a packet and the targeted host is down, or when the link between the router and the destination is down. In both cases, the router sends an ICMP message back to the source specifying destination host unreachable.

    Destination port unreachable: This problem occurs when the packet has reached the destined host, but the port number to which it has to be delivered is not open. In that case the destination host sends an ICMP message back to the source, specifying destination port unreachable.

    5. Source redirect:

    When A sends a packet to B, it first reaches router R0. If R0 by mistake forwards the packet to R3 instead of R1, then R3 sends an ICMP message back to R0, saying that there is a shorter path through R1 and it should be used.
    Note that the first packet is still forwarded by R3; the redirect only helps the packets that follow.
    There is one more application of feedback messaging: PMTU, i.e. path MTU discovery. That topic is big and needs a lot of explaining, so I have created a separate post for it. You can check it out here.

    Hope this was helpful. If you have any questions related to any of my posts, feel free to ask; I will be happy to help you. At the same time, if you have any suggestions for me, you can put them in the comments section below. Do follow us on Facebook and Google+ to get the latest updates.

    Thank you!


    Monday, 5 June 2017

    ICMP (Internet Control Message Protocol)



    The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating, for example, that a requested service is not available or that a host or router could not be reached.
    ICMP works with IP at the network layer and is often called the companion of IP. The following diagram shows where ICMP sits.
    ICMP is mainly used for two purposes: error handling, and request and reply. In this post I will give a brief overview of these two concepts. If you want to read about them in depth, you can do so here (ICMP - error handling) and here (ICMP - request and reply).

    Error handling or feedback messaging:

    Error handling, or feedback messaging, is used to find faults in the network and to get feedback about packets travelling on the network or packets which have already reached the destination. Feedback messages are one-way messages: they are generated at one end (the router or host that detected the problem), they die at the other end, and no reply is ever sent for a feedback message.

    To make this clearer, let's take an example. Consider the figure given below, where we have 2 intermediate routers between S and R. If we send an IP packet from S to R and it is discarded at the second router because of buffer overflow, there has to be some mechanism to convey to the source that its packet was discarded and needs to be resent. This is where ICMP feedback messaging comes into the picture.

    Whichever router discards the IP packet makes an ICMP packet, puts it inside an IP packet and sends it back to the sender, in this case S. Sender S can read the content of the ICMP packet and find out what happened to its packet. An ICMP message is sent only when an IP packet is discarded and that packet was not itself carrying an ICMP message. When an ICMP packet is discarded, no new ICMP packet is generated for it.
    To understand why this happens consider the following scenario.
    Let's say the IP packet from S is discarded by the second router. Imagine the second router is heavily loaded and cannot process packets, so it sends an ICMP packet back towards S via router 1. Now suppose this ICMP packet is discarded by router 1. Should we generate an ICMP packet for the lost ICMP packet?
    If we did, we could fall into an infinite loop. Say router 1 generates an ICMP and sends it to router 2. Router 2, being overloaded, discards it and generates another ICMP for router 1. If router 1 again discards that ICMP, one more ICMP is generated and sent to router 2, and so on forever.
    So now you know why we don't generate ICMP for ICMP.

    Request and reply:

    This service of ICMP is used to get information about the network a packet travels through, or about the nodes in between. You might have heard of the program called ping; the name is often expanded as Packet INternet Groper. It is a classic program implemented using the ICMP echo request and reply technique.

    Ping is a computer-network administration utility used to test the reachability of a host on an Internet Protocol network. To check the availability of a host you need the IP address of the host and the ping program on your computer; almost every OS has it preinstalled. Whenever you type ping followed by the IP address of the target (e.g. ping 192.168.100.39), your shell, which is a child process created by the kernel, creates one more child process running the ping program and passes the target IP address to it as a command-line argument.

    When ping receives the arguments, it initiates an ICMP echo request from the network layer of the device. This request travels through the network, passing through many intermediate routers. Finally it reaches the destination, and the reply comes back from the network layer of that device. Remember that the exchange only goes up to the network layer, not the application layer; because of this, the target host's applications do not know anything about it.
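    If you ever want to script such a reachability check, the simplest way is to drive the system's ping program. A small sketch (the -c count flag is for Linux/macOS; Windows uses -n):

    import subprocess

    def is_reachable(host: str) -> bool:
        # exit status 0 means at least one echo reply came back
        result = subprocess.run(["ping", "-c", "1", host],
                                capture_output=True)
        return result.returncode == 0

    print(is_reachable("192.168.100.39"))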

    Hackers have used this command to bring servers down. They send repeated ping requests to the server, which gets so busy responding to them that it finally collapses. Worst of all, the victim may not even notice, because nothing is shown at the application layer.
    Note that ping is not a client-server application, precisely because it works only at the network layer.

    So this is all about ICMP. If you liked the post, please show your support by following the blog, and like us on Facebook and Google+.

    Thank you!


    Data link layer - Framing



    Before we discuss framing, there is a small prerequisite: you should know the structure of a frame. You can read about it here.

    How does a frame travel in a network:

    Whenever a frame is transmitted by a node in a network, it is very important that the other nodes know that a frame is travelling on the medium. For this very reason there is an important field at the start of the frame: the SFD, or start frame delimiter. The SFD is a sequence of alternating 1s and 0s ending in 11, such as 10101011.

    10101011(SFD)
    (0+1)*(rest of the frame)

    To understand how SFD is used let's consider the following arrangements of the nodes.

    A terminal is used to connect a node to the network. Every terminal has a sequential circuit in it. Whenever a frame passes through the terminal, the sequential circuit detects that a frame is starting. You might be curious to know how it does that.

    Initially, we come up with a regular expression that represents the pattern of the SFD. That regular expression is converted to an NFA, the NFA is converted to a DFA using the appropriate algorithms from automata theory (you may know the algorithm), and the DFA is then realised as a sequential circuit. So the entire credit for constructing sequential circuits doesn't go to electrical engineers; software engineers play a role too.

    Now that we understand how a sequential circuit detects the start of a frame, the next thing it should know is the end of the SFD. For this very reason, every SFD ends with 11. Whenever the sequential circuit sees 11, it knows the SFD is ending and everything after it is useful data.
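    In software, the job of that sequential circuit amounts to sliding an 8-bit window over the incoming bits. A toy sketch:

    SFD = "10101011"

    def detect_sfd(bitstream):
        window = ""
        for i, bit in enumerate(bitstream):
            window = (window + bit)[-8:]   # keep only the last 8 bits seen
            if window == SFD:
                return i + 1               # frame data starts after this bit
        return -1                          # no frame on the medium

    print(detect_sfd("0011010101011" + "1010001010110100"))   # prints 13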

    Now let me describe the entire process briefly. Every station has a terminal at its end, and these terminals have sequential circuits. Whenever a frame passes the terminal, a signal is sent to the node's NIC, telling it that a new frame is passing through the terminal and that it should check whether the frame is meant for it. The NIC then reads the content of the frame, such as the destination address, and finds out whether the frame is intended for it or not.

    If the frame's destination address matches the node's MAC address, the NIC reads the frame and then destroys it. Now, having understood the complete outline of framing, let's understand framing in depth.

    What is Framing?

    Framing is defined as the process of wrapping up the data coming from the network layer and putting it into the data link layer frame. This is an informal definition. Framing is of two types:
    • Fixed length framing
    • Variable length framing
    Fixed length framing means that the length of the frame stays the same no matter how much data is delivered by the network layer; the data link layer passes a fixed amount of data to the physical layer. Variable length framing, on the other hand, means that the length of the frame at the data link layer varies: the amount of data passed on to the physical layer depends on the data received from the network layer.

    Fixed length framing:

    Fixed length framing is not important and is not used in practice, because it is very inefficient. It was proposed because it makes it very easy for stations to find the end of a frame: since the frame size is fixed, a station can add the fixed length to the position of the SFD to get the end of the frame.

    Drawback:

    Assume the frame length is fixed at 1000 bytes. If we want to send 900 bytes of data, we can easily do so, as 900 bytes fit in 1000 bytes. If the data is larger, say 2000 bytes, we can divide it into 2 equal parts of 1000 bytes each and send those. The problem comes in when we don't have enough data to send.
    Say we have to send 10 bytes of data. The 10 bytes easily fit into a 1000-byte frame, but the remaining 990 bytes go to waste, and we unnecessarily occupy extra bandwidth in the network. Because of this major drawback, fixed length framing was abandoned in favour of variable length framing.

    Variable length framing:

    Variable length framing is widely used and is the topic of discussion here. You may now wonder how we find the length of the frame, and how we recognise its end. To address this, two new things were introduced in the frame:
    • The length of the frame: introduced to find the total length of the frame.
    • End delimiter: introduced to find the end of the frame.
    But there are certain issues with this as well. Our frame might get corrupted, so that the length field is not correct. The CRC cannot help us here, because by the time we read the CRC we have already read the wrong length. That is why we have the end delimiter: it can tell us where the frame ends.
    Next, the end delimiter might happen to match part of the data. The solution is to transform the data so that no part of it can match the end delimiter. There are two different ways of implementing this:
    • Character stuffing or byte stuffing: a character or byte is stuffed in between the data to stop it matching the end delimiter.
    • Bit stuffing: instead of inserting a whole character or byte into the data, we insert a single bit to stop it matching the end delimiter.

    Character stuffing or byte stuffing:

    The structure of a frame is shown below -

    SFD | Source | Destination | Data | End delimiter

    Let's say our data is 1010001010110100. To send this data with SFD and end delimiter we have the following packet structure:

    10101011 | A | B | 1010001010110100 | $

    When a station encounters $ it understands that it is the end of the frame and stops reading. The problem comes in when the data itself matches $.

    Until the late 1980s this concept worked fine, because the data used to be simple, mainly characters and digits. But as technology advanced, the kinds of data sent changed: people started sending music, pictures and videos. This caused problems; data that never used to match the end delimiter now started matching it.
    To overcome this problem, people came up with another solution: if a piece of the data matched the end delimiter, they appended a NULL to it. For example, if the data is 10110101$00 and the end delimiter is $, then we append a NULL after the $ and the data sent is 10110101$'\0'00. When the frame reaches the receiver, it sees the NULL in the data part and understands that the data matched the ED, so the sender appended a NULL. It removes the NULL and then reads the data.
    Question: what happens if a NULL itself is to be included in the data? Then we append another NULL for that NULL. Example: if the data is 1010'\0'1, the data sent is 1010'\0''\0'1.
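    In code, the scheme described above looks something like this (a toy sketch, using '$' as the end delimiter and NUL as the marker byte):

    ED, NUL = "$", "\0"

    def byte_stuff(data):
        out = ""
        for ch in data:
            out += ch
            if ch in (ED, NUL):   # this '$' or NUL is data, so mark it
                out += NUL
        return out

    def byte_unstuff(received):
        out, i = "", 0
        while i < len(received):
            out += received[i]
            if received[i] in (ED, NUL):
                i += 1            # drop the marker NUL that follows
            i += 1
        return out

    print(byte_stuff("10110101$00"))    # 10110101$\x0000, as in the example
    print(byte_unstuff(byte_stuff("1010\x001")) == "1010\x001")   # True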
    But this method was tedious and was soon ruled out after bit stuffing came into the picture.

    Bit stuffing:

    In this case, instead of using NULL or some other symbol to stop the data matching the ED, we use a single bit. We decide on a fixed end delimiter, say 01111, and send it with the data. Again, there is a chance that the data matches our ED, and to handle this we use bit stuffing.

    To understand it more clearly, let's take an example. Assume we have some data (011110) to be sent along with the ED (01111). The data will not be sent as-is: we first check whether any part of the data matches the ED, and if so we modify that part.
    In the above example the data clearly matches the ED. To unmatch them we have many choices, but we'll follow a simple technique: whenever a part of the data matches the ED, we insert a '0' just before the last 1 of the matched pattern (giving 011101). In the example, the first 5 bits match the ED, so while sending the data we insert a '0' just before the last 1, and we get 0111010 as the data.
    To make it clearer, let's take one more example. Assume the data is 01111011110 and the ED is 01111. Then the data sent will be 0111010111010.
    When this data reaches the receiver (followed by the ED), the receiver examines it. The rule at the receiver is: whenever it sees a 0 followed by three 1s, it removes the next bit and then continues processing. These rules are known in advance to both sender and receiver. In the above case, when the data (0111010111010) reaches the receiver, it removes the zeroes at the 5th and 11th positions, and the data at the receiver is finally 01111011110.
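    Here is a sketch of this scheme. One detail is made explicit: the sender stuffs a '0' as soon as the first four bits of the ED (0111) appear in the data, which is exactly what the receiver's skip rule expects, and which reproduces both examples above:

    def stuff(data):
        out, recent = "", ""
        for bit in data:
            out += bit
            recent = (recent + bit)[-4:]   # last 4 data bits seen
            if recent == "0111":           # one bit short of the ED 01111,
                out += "0"                 # so stuff a '0' here
        return out

    def unstuff(received):
        out, i = "", 0
        while i < len(received):
            out += received[i]
            i += 1
            if out.endswith("0111"):       # a stuffed '0' always follows
                i += 1                     # this pattern, so skip it
        return out

    print(stuff("011110"))            # 0111010, as in the first example
    print(stuff("01111011110"))       # 0111010111010, as in the second
    print(unstuff("0111010111010"))   # 01111011110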
    This is the entire concept of framing at the data link layer.

    If I missed something or you have suggestions, put them in the comments section below. I will be happy to answer your queries. And yes, if you found this post useful, please show your support by following the blog, and follow us on Facebook and Google+.

    Thank you.
