Python code which performs a full discrete-event simulation of a very simple Internet router. The code should include:
1. Random arrival of packets
2. Random distribution of packet sizes
3. Multiple router output interfaces (‘servers’)
4. Serving schedule – the simulation must support both first-in-first-out (FIFO) serving and priority serving
The Python code is split into four commands (A, B, C, D), which the user can choose from.
Command A: A discrete-event simulation of an Internet router with a single input queue and a single output interface. The packets arrive following a random Poisson process, with a pre-defined average time between two arriving packets. The Poisson process is characterized by an exponential distribution of inter-arrival times. The packets have random sizes – the packet size should be calculated by the simulator at the packet arrival. The packet size should be considered a random variable with exponential distribution. If a packet arrives and cannot be served immediately, it must wait for the first available opportunity to be served. The output interface (the server) should operate without any pauses, and should operate at a constant serving rate (this means that the packet size determines the service time duration).
The code should request the average system parameters - average packet size and average interarrival time for the packets, and should calculate the following parameters:
- Average waiting time
- Average queue size
- Probability that the queue size is 0 at arrival of a new packet
- Probability that the queue size (total bytes waiting) is greater than 5 times the average packet size.
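One possible minimal sketch of Command A is below. All names (e.g. `simulate_mm1`) are placeholders of my own choosing, and the serving rate is assumed to be 1, so a packet's service time equals its size; because the queue is FIFO with a single server, departure times stay sorted and the queue seen by each arrival can be counted with a binary search.

```python
import bisect
import random

def simulate_mm1(avg_interarrival, avg_pkt_size, num_packets=5000, seed=1):
    """Single FIFO queue, one server; exponential inter-arrivals and sizes.

    Returns (average waiting time, average queue size seen at arrival,
    probability that the queue is empty at arrival).
    """
    rng = random.Random(seed)
    t = 0.0                 # arrival clock
    server_free_at = 0.0    # time at which the server next becomes idle
    departures = []         # departure times; sorted, since FIFO + 1 server
    waits = []              # per-packet waiting times
    queue_seen = []         # packets waiting when each new packet arrives

    for _ in range(num_packets):
        t += rng.expovariate(1.0 / avg_interarrival)   # Poisson arrivals
        size = rng.expovariate(1.0 / avg_pkt_size)     # exponential sizes
        # packets still in the system (waiting or in service) at time t
        in_system = len(departures) - bisect.bisect_right(departures, t)
        queue_seen.append(max(in_system - 1, 0))       # exclude one in service
        start = max(t, server_free_at)                 # FIFO: wait if busy
        waits.append(start - t)
        server_free_at = start + size                  # serving rate = 1
        departures.append(server_free_at)

    n = num_packets
    return (sum(waits) / n,
            sum(queue_seen) / n,
            sum(1 for q in queue_seen if q == 0) / n)
```

The byte-backlog probability from the last bullet is omitted here for brevity; computing it would additionally require tracking the sizes of the packets still waiting at each arrival.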
Command B: The code from Command A should be modified to support four separate servers (four router output interfaces). In the simulation, upon the arrival of a packet, a server (output interface) should be randomly selected for that packet, regardless of the queue sizes.
The code should request the average system parameters – average packet size and average interarrival time for the packets – and should calculate the same simulation results as in Command A.
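The single-server sketch extends naturally to Command B: keep one idle-time per interface and pick the interface uniformly at random at each arrival. Again, this is a sketch under my own naming, with serving rate assumed to be 1.

```python
import random

def simulate_multi_server(avg_interarrival, avg_pkt_size, num_servers=4,
                          num_packets=5000, seed=1):
    """Each arriving packet is sent to a uniformly random output interface,
    regardless of queue sizes; FIFO order within each interface.

    Returns the overall average waiting time.
    """
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * num_servers       # per-interface idle times
    waits = []
    for _ in range(num_packets):
        t += rng.expovariate(1.0 / avg_interarrival)
        size = rng.expovariate(1.0 / avg_pkt_size)
        s = rng.randrange(num_servers)  # random interface choice
        start = max(t, free_at[s])      # FIFO within that interface
        waits.append(start - t)
        free_at[s] = start + size       # serving rate assumed = 1
    return sum(waits) / len(waits)
```

The queue-size statistics from Command A can be collected per interface in the same way as in the single-server version.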
Command C: The simulator should operate as in the previous commands, but this time each packet should be assigned a class (“priority” or “economy”) at its generation (“arrival”). The class assignment should be random, but on average 20% of the packets should be “priority” and 80% should be “economy”. The serving policies should be as follows:
- Policy A: one server should be dedicated to priority packets only, and one to economy packets. If there are no priority packets, the priority server should remain idle.
- Policy B: one server should be dedicated to priority packets, and one to economy packets. If there are no priority packets waiting, the priority server should serve the first waiting economy packet.
The code should request the average system parameters – average packet size and average interarrival time for the packets, probability of the priority packets (default = 20%) and should calculate the following parameters:
- Average waiting time for both traffic classes
- Average queue size for both traffic classes
- Probability that the queue size for each of the classes is more than 5
- Probability that the queue size is 0 at arrival of a new packet
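Under Policy A the two classes never share a server, so Command C reduces to two independent FIFO queues fed by a randomly split arrival stream. The sketch below (my own names, serving rate assumed 1) shows the class assignment and Policy A only; Policy B would need a genuine event queue (e.g. built on `heapq`), because the priority server must check the economy queue whenever it goes idle.

```python
import random

def simulate_policy_a(avg_interarrival, avg_pkt_size, p_priority=0.2,
                      num_packets=5000, seed=1):
    """Command C, Policy A: one server serves only "priority" packets,
    the other only "economy"; the class is drawn at arrival.

    Returns the average waiting time per class.
    """
    rng = random.Random(seed)
    t = 0.0
    free_at = {"priority": 0.0, "economy": 0.0}   # per-class server idle time
    waits = {"priority": [], "economy": []}
    for _ in range(num_packets):
        t += rng.expovariate(1.0 / avg_interarrival)
        size = rng.expovariate(1.0 / avg_pkt_size)
        # random class: on average p_priority of packets are "priority"
        cls = "priority" if rng.random() < p_priority else "economy"
        start = max(t, free_at[cls])              # FIFO within the class
        waits[cls].append(start - t)
        free_at[cls] = start + size               # serving rate assumed = 1
    return {c: sum(w) / len(w) if w else 0.0 for c, w in waits.items()}
```

Per-class queue sizes and the two probability estimates can be collected exactly as in Command A, once per class.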
Command D: Generate graphs representing the results of the simulation. These graphs should show how the parameters calculated during the simulation (the average waiting time, the average queue size, etc.) change over the simulation time – the x-y graphs should have simulation time on the x-axis and the measured parameter on the y-axis.
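One way to realize Command D is to record a `(time, value)` sample at every arrival and plot the running average of the metric against simulation time. The sketch below assumes matplotlib is among the allowed libraries (the allowed-library list is not given here); the helper names are my own.

```python
def running_average(values):
    """Cumulative running average: element i is the mean of values[:i+1]."""
    out, total = [], 0.0
    for i, v in enumerate(values, 1):
        total += v
        out.append(total / i)
    return out

def plot_metric(times, values, ylabel):
    """Plot a measured parameter against simulation time (Command D).

    `times` are the simulation timestamps of the samples; the running
    average is plotted so the curve shows how the estimate evolves.
    """
    import matplotlib.pyplot as plt   # assumed to be an allowed library
    plt.plot(times, running_average(values))
    plt.xlabel("simulation time")
    plt.ylabel(ylabel)
    plt.grid(True)
    plt.show()
```

For example, collecting each packet's arrival time and waiting time during the simulation and calling `plot_metric(arrival_times, waits, "average waiting time")` produces the required time-on-x-axis graph.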
Please add comments explaining each part of the code, so that it is easy to understand, tweak, and change.
Libraries allowed to use: