
Abstract


Introduction

In modern society, the Internet is an integral part of every person's life. The quality of its operation depends on many factors; this work considers one of them: the delay in transmitting packets across the network.

1. Theme urgency

The volume of network traffic grows every year, which in turn increases the load on the network. Reducing delays is one of the software-based ways to raise effective network throughput: it improves the efficiency of transport-layer protocols, which in turn has a positive impact on the quality of applications that transmit data over the network.

2. Goal and tasks of the research

The purpose of this research is to study the possibility of increasing network throughput by reducing packet transmission delays in the protocols of a computer network.

Main objectives of the study:

  1. Survey existing methods for reducing delays in protocols and identify their characteristics.
  2. Study the features and differences of the various transport protocols.

Research object: delays in data transmission protocols in computer networks.

Research subject: mechanisms for reducing delays in data transmission protocols.

3. Delays in protocols in computer networks

3.1 Defining delays

Network latency is defined as the amount of time it takes a packet to pass through the network from the device that created the packet to the destination device and back.

The vast majority of network traffic belongs to one of two protocols: UDP (User Datagram Protocol) or TCP (Transmission Control Protocol). Most of this traffic is usually TCP [9].

3.2 Reasons for delays

End-to-end delay is the combined effect of the individual delays along the entire network path from the workstation to the servers. The most significant of these delay components is described below:

Network routers are the devices that introduce the most delay of any device on the end-to-end path, and they are present in every segment of that path. Packet queuing caused by link congestion is the most common source of long delays inside a router. Some network technologies, such as satellite links, add considerable delay because of the time it takes a packet to traverse the channel. Since delay is cumulative, the more links and router hops a packet passes through, the greater the end-to-end delay.

3.3 How the delay is measured

Measuring the data transfer delay takes only a few simple steps [10]. Record the time the packet is sent and the time its response arrives, then subtract the send time from the arrival time. The result is the delay value in milliseconds.
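
As an illustration, here is a minimal Python sketch of this measurement; the target host and port are placeholder assumptions, not taken from the text. It times a TCP connection attempt: connect() returns once the handshake reply arrives, so the elapsed time approximates one round trip.

    import socket
    import time

    HOST = "example.com"  # hypothetical target host, chosen for illustration
    PORT = 80             # assumed to be reachable

    start = time.perf_counter()                  # time of dispatch
    with socket.create_connection((HOST, PORT), timeout=5.0):
        pass                                     # returns once the reply arrives
    elapsed_ms = (time.perf_counter() - start) * 1000.0  # arrival minus dispatch
    print(f"Round-trip delay: {elapsed_ms:.1f} ms")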

3.4 UDP delay effects

UDP is a protocol that defines how messages sent over IP are formed. A device that sends UDP packets simply assumes they reach their destination: there is no mechanism to notify the sender that a packet has arrived. UDP is typically used for streaming media applications, where the occasional loss of a packet does not matter.
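
A minimal sketch of this fire-and-forget behaviour in Python (the address and payload are illustrative assumptions): sendto() returns as soon as the datagram is handed to the operating system, and no acknowledgment ever comes back.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # sendto() hands the datagram to the OS and returns immediately;
    # UDP has no session setup and no delivery confirmation.
    sock.sendto(b"media-frame-0001", ("192.0.2.10", 5005))  # hypothetical receiver
    sock.close()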

Since the sender of UDP packets needs no confirmation that the receiver got them, UDP is relatively insensitive to latency. The only effect that delay has on a UDP stream is an increased delay of the stream as a whole. Second-order effects such as jitter can negatively affect some UDP applications, but those issues are beyond the scope of this work.

It is important to note that, for UDP traffic, latency and throughput are completely independent: whether the delay increases or decreases, UDP throughput remains unchanged. This point matters most when contrasted with the impact of delay on TCP traffic.

With UDP traffic, delays do not affect the sending device. The receiver, however, may need to buffer UDP packets for longer as jitter grows in order to keep the application running smoothly.

3.5 TCP delay effects

TCP is more complicated than UDP [10]. TCP is a guaranteed-delivery protocol, which means the sending device is informed whether or not its packets have reached the destination host. For this to work, the sender must first establish a session with the destination host. Once the session is established, the receiver reports which packets it has received by sending acknowledgment packets back to the sender. If the sender does not receive an acknowledgment for some packets within a certain time, those packets are retransmitted.

In addition to guaranteeing packet delivery, TCP can adapt to the available network bandwidth by adjusting its window size. The TCP window is the amount of data the sender may transmit before an acknowledgment is expected. Each time an acknowledgment is received, the window grows. As the window grows, the sender may begin transmitting faster than the end-to-end path can handle, resulting in packet loss. Once packet loss is detected, the sender responds by halving its transmission rate; the process of growing the window then starts again as further acknowledgments arrive.
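
This behaviour can be pictured with a toy model in Python; it is a sketch of the idea, not real TCP code, and the growth step and starting window are illustrative assumptions. The window grows while acknowledgments arrive and is halved when a loss is detected.

    def next_window(cwnd: int, loss_detected: bool) -> int:
        """Toy model of the TCP window adjustment described above."""
        if loss_detected:
            return max(1, cwnd // 2)  # halve the rate after packet loss
        return cwnd + 1               # grow the window per acknowledgment

    cwnd = 4
    for loss in (False, False, False, True, False):
        cwnd = next_window(cwnd, loss)
        print(cwnd)  # prints 5, 6, 7, 3, 4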

As the end-to-end delay increases, the sender spends more and more time waiting for acknowledgments instead of sending packets. In addition, the window-sizing process itself becomes slower, since it depends on receiving those acknowledgments.

Because of this inefficiency, delay has a profound effect on TCP throughput. Unlike UDP, TCP exhibits an inverse relationship between delay and throughput: as the end-to-end delay increases, TCP throughput decreases. This can be demonstrated with a delay generator placed between two PCs connected via Fast Ethernet (100 Mbit/s): TCP throughput drops dramatically as the delay increases.
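
The inverse relationship follows from the well-known upper bound on TCP throughput: at most one window of data per round trip. The window size and RTT values below are illustrative assumptions, not measurements from the experiment mentioned above.

    WINDOW_BYTES = 64 * 1024  # assumed 64 KiB TCP window

    for rtt_ms in (10, 50, 100, 200):
        # throughput <= window / RTT
        mbit_per_s = WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1_000_000
        print(f"RTT {rtt_ms:3d} ms -> at most {mbit_per_s:5.1f} Mbit/s")

Doubling the delay halves the achievable throughput, which mirrors the dramatic decrease described above.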

As the delay increases, the sender may sit idle while waiting for acknowledgments from the receiver. The receiver, meanwhile, must buffer packets until they can be assembled into a complete TCP message. If the receiver is a server, this buffering is compounded by the large number of sessions the server may be handling, and the extended use of buffer memory can degrade the server's performance.

Packet loss exacerbates all the problems that latency creates for TCP. Lost packets shrink the TCP window, which, under high latency, leaves the sender idle even longer while it waits for acknowledgments. Acknowledgments themselves can also be lost, forcing the sender to wait until the timeout for the missing acknowledgment expires; when that happens, the corresponding packets are retransmitted even though they may have been delivered correctly. As a result, packet loss can further reduce TCP throughput.

Some packet loss is inevitable. Even if one's own network works well and drops no packets, it cannot be assumed that the other networks along the path behave as well.

Whatever the situation, keep in mind that packet loss and latency have an extremely negative impact on TCP throughput and should be kept to a minimum.

4. Slow start

TCP slow start is an algorithm that balances the speed of a network connection [11, 12]. Slow start gradually increases the amount of data transmitted until the maximum network bandwidth is found.

TCP slow start is one of the first stages of congestion control. It balances the amount of data the sender may transmit (known as the congestion window) against the amount of data the receiver can accept (known as the receiver window). The lower of the two values becomes the maximum amount of data the sender is allowed to transmit before receiving an acknowledgment from the receiver.

Step by step, slow start works as follows:

The sender attempts to contact the receiver. The sender's initial packet carries a small congestion window, determined from the sender's maximum window.

The receiver acknowledges the packet and responds with its own window size. If the receiver does not respond, the sender knows not to continue sending data.

After receiving the acknowledgment, the sender increases the window size for the next packets. The window keeps growing until the receiver can no longer acknowledge every packet, or until the limit of either the sender's or the receiver's window is reached.

Once that limit has been determined, slow start is complete, and other congestion control algorithms take over to maintain the connection speed.
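
The steps above can be summarized in a toy simulation; it is a sketch under assumed values, not real kernel code. The congestion window doubles each acknowledged round until it reaches the receiver's advertised window, at which point slow start ends.

    def slow_start(initial_cwnd: int, receiver_window: int) -> list[int]:
        """Toy model of the slow-start phase described above."""
        cwnd, history = initial_cwnd, [initial_cwnd]
        while cwnd < receiver_window:
            # the window doubles per round of acknowledgments, capped by
            # the lower of the sender's and receiver's limits
            cwnd = min(cwnd * 2, receiver_window)
            history.append(cwnd)
        return history

    print(slow_start(1, 64))  # [1, 2, 4, 8, 16, 32, 64]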

Conclusions

Reducing delays in the data transfer protocols of computer networks is an important problem. Progress in this area will not only make the life of a modern person more comfortable, but will also open up new opportunities in the development of network technologies.

At the time of writing this abstract, the master's work is not yet finished. Expected completion: May 2020. The full text of the work and related materials can be obtained from the author or his supervisor after that date.

References

  1. N. Dukkipati, M. Mathis, Y. Cheng, M. Ghobadi – Proportional Rate Reduction for TCP [Electronic resource]. – Access mode: https://ai.google...
  2. G. Huston – Latency and IP [Electronic resource]. – Access mode: http://www.potaroo.net...
  3. T. M. Tukade – Data transfer protocols in IoT – an overview [Electronic resource]. – Access mode: https://www.researchgate.net...
  4. A. I. Мiночкiн, В. А. Романюк, О. Я. Сова – Шляхи вдосконалення TCP-протоколiв у мережах MANET (Ways of improving TCP protocols in MANET networks) [Electronic resource]. – Access mode: http://www.viti.edu.ua...
  5. Сирант Андрей Васильевич – Исследование эффективности сетевых протоколов в клиент-серверных приложениях (Research on the efficiency of network protocols in client-server applications) [Electronic resource]. – Access mode: http://masters.donntu.ru/2017/fknt/sirant/
  6. Щитникова Анастасия Николаевна – Разработка метода оценки параметров трафика мультисервисной сети (Development of a method for estimating the traffic parameters of a multiservice network) [Electronic resource]. – Access mode: http://masters.donntu.ru/2004/kita/schitnikova/
  7. Кузнецов Алексей Дмитриевич – Исследование передачи видеопотока по сетям передачи данных (Research on video stream transmission over data networks) [Electronic resource]. – Access mode: http://masters.donntu.ru/2005/kita/kuznetsov/
  8. Кравчук Василий Анатольевич – Исследование и усовершенствование протокола передачи данных по линиям электроснабжения 220В, 50Гц для SCADA-систем (Research and improvement of a data transfer protocol over 220 V, 50 Hz power supply lines for SCADA systems) [Electronic resource]. – Access mode: http://masters.donntu.ru/2008/kita/kravchuk/
  9. Steve – TCP vs UDP – What’s The Difference? [Electronic resource]. – Access mode: http://www.steves-internet-guide.com...
  10. Boris Rogier – Measuring Network Performance: Links Between Latency, Throughput and Packet Loss [Electronic resource]. – Access mode: https://accedian.com...
  11. Ilya Grigorik – Внутренние механизмы ТСР, влияющие на скорость загрузки (TCP internals that affect download speed) [Electronic resource]. – Access mode: https://habr.com...
  12. Robert Gibb – What is TCP Slow Start? [Electronic resource]. – Access mode: https://blog.stackpath.com...