Research and development of a method for optimizing the transport infocommunication network using multi-criteria optimization
Content
- INTRODUCTION
- 1. ANALYSIS OF EXISTING METHODS FOR EVALUATING RELIABILITY
- 2. RELIABILITY ASSESSMENT
- CONCLUSION
- LIST OF SOURCES
INTRODUCTION
Connecting subscriber systems to the information and computing network (ICN) is a function of subscriber access networks. Subscriber systems are connected to the ICN either directly or through a subscriber access network. The uncertainty of the circumstances encountered in practice allows the ICN to be classified as a complex system.
In the process of establishing a connection, the source sends a call that travels to the destination over one of many alternative routes built from virtual channels (VC). A VC consists of the paths connecting the switching nodes included in the route. With respect to completing a station-to-station call, each path is binary: either it is busy and the call cannot pass through it (state 1), or it is free and the call will pass through it (state 0). It is possible that the call:
• will reach the destination, and the connection will be established in no more than the allowed time;
• will reach the destination, and the connection will be established, but in a time exceeding the allowed one;
• will not reach the destination because all paths are loaded or inoperable.
The idea of the master's thesis is to analyze existing methods for assessing reliability and create a new method based on the information obtained during the analysis.
1. ANALYSIS OF EXISTING METHODS FOR EVALUATING RELIABILITY
The first chapter presents, based on a review of information sources, the basic concepts and an analysis of the principles of reliability assessment.
The emergence of a global network of Internet networks and the growing number of its users at an enormous pace is becoming a planetary phenomenon that can even lead to social changes. In other words, the world community is approaching such a degree of dependence of its existence on the functioning of information networks, which is comparable to dependence on electricity supply systems. This, in addition to the obvious advantages, has a downside. Failure of the communication network can have consequences that exceed the consequences of power system accidents. In this regard, the problem of assessing and ensuring the reliability of networks is relevant.
Telecommunications is any form of communication that transmits information over long distances. The term also covers the processes of transmitting, receiving and processing information at a distance using electronic, electromagnetic, network, computer and information technologies.
The main branches of telecommunications today are: Internet, mobile communication, data transmission networks (wireless, fiber-optic, etc.), satellite communication systems, digital and analog television, telephone communication, electronic banking (figure 1).
According to UNESCO, more than half of the working-age population in developed countries is now directly or indirectly involved in the production and distribution of information. The three leading sectors of the information sector of public production (computer technology, industrial electronics and communications) now play for these countries the same role that heavy industry played at the stage of their industrialization.
Reliability is a property of an object (system), which consists in its ability to perform specified functions under certain operating conditions. Quantitatively, reliability is characterized by a number of indicators, the composition and method of determining which depend on the type of system being analyzed.
Reliability theory is the basis of engineering practice in the field of reliability of technical products. Reliability is often defined as the probability that a product will perform its functions for a certain period of time under specified conditions. Mathematically, this can be written as follows:

R(t) = Pr{T > t} = ∫_t^∞ f(x) dx,

where f – the density function of the time to failure T, and t – the length of the time period of device operation, assuming that the product starts working at time t = 0.
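As a simple numerical illustration of this relationship, one can assume an exponential time-to-failure density (the exponential law and the failure-rate value are assumptions of this sketch, not stated in the text), for which the integral evaluates to R(t) = exp(−λt):

```python
import math

# Sketch: reliability under an exponential time-to-failure density
# f(t) = lam * exp(-lam * t). Then R(t) = integral from t to infinity
# of f(x) dx = exp(-lam * t).
def reliability(t, lam):
    """Probability that the product survives past time t."""
    return math.exp(-lam * t)

# Example: failure rate 1e-4 per hour, operating period 1000 hours
print(round(reliability(1000.0, 1e-4), 4))  # 0.9048
```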
Reliability theory assumes the following four basic assumptions:
• Failure is treated as a random event. The causes of failures and the relationships between them (beyond the fact that the probability of failure is a function of time) are given by the distribution function. The engineering approach to reliability considers the probability of failure-free operation as an estimate at a certain statistical confidence level.
• System reliability is closely related to the concept of the system's specified function. Primarily, failure-free operation is considered. However, if individual parts of the system have no failures but the system as a whole does not perform its specified functions, this relates to the technical requirements of the system rather than to its reliability indicators.
• System reliability can be considered for a certain period of time. In practice, this means that the system is likely to function during this time without failures. Reliability indicators guarantee that components and materials will meet the requirements for a given period of time. In general, reliability is tied to the concept of operating time, which, depending on the purpose of the system and the conditions of its application, determines the duration or amount of work. Operating time can be either a continuous value (duration of operation in hours, mileage in miles or kilometers, etc.) or an integer value (number of operating cycles, launches, weapon shots, etc.).
• By definition, reliability is considered relative to the specified modes and conditions of use. This restriction is necessary because it is impossible to create a system that can work in any environment; the external operating conditions of the system must be known at the design stage.
Reliability assessment methods have been around for a long time. Consider some of them:
1. Structural methods for calculating reliability. These are the main methods for calculating reliability indicators when designing objects that can be decomposed into elements whose characteristics are known, or can be determined by other methods, at the time of calculation. Calculating reliability indicators by structural methods generally includes:
• representation of the object as a block diagram describing the logical relationships between the states of the elements and of the object as a whole, taking into account the structural and functional relationships and interaction of the elements;
• description of the object's reliability block diagram by an adequate mathematical model that, within the accepted assumptions, allows the object's reliability indicators to be calculated from data on the reliability of its elements under the conditions considered.
The following can be used as structural reliability schemes:
• the schemes of functional integrity;
• structural block diagrams of reliability;
• fault trees;
• state and transition graphs.
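A minimal sketch of a structural (block-diagram) calculation, assuming independent elements with known failure-free operation probabilities (the numbers below are illustrative, not taken from the text): series blocks multiply probabilities, while parallel (redundant) blocks fail only when every branch fails.

```python
# Reliability block-diagram combinators, assuming independent elements.
def series(*probs):
    """Series connection: all elements must work."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def parallel(*probs):
    """Parallel (redundant) connection: at least one element must work."""
    fail = 1.0
    for p in probs:
        fail *= (1.0 - p)
    return 1.0 - fail

# Two 0.9 elements in series, backed up by a redundant 0.8 branch
print(round(parallel(series(0.9, 0.9), 0.8), 4))  # 0.962
```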
2. Logical-probabilistic method.
In logical-probabilistic methods (LPM), the original problem statement and the model of the functioning of the system, object or process under study are built with the structural and analytical tools of mathematical logic, while the reliability, survivability and safety properties are measured by means of probability theory.
LPM is a methodology for analyzing structurally complex systems and solving system problems of organized complexity, including evaluating and analyzing the reliability, safety and risk of technical systems. LPMs are convenient for the initial formalized statement of problems in the form of a structural description of the investigated properties of the functioning of complex, high-dimensional systems. LPM provides well-developed procedures for converting the original structural models into the required computational mathematical models, which allows them to be algorithmized and implemented on a computer.
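The core LPM idea, describing system operability as a logic function of binary element states and then passing to probabilities, can be illustrated by brute-force enumeration of element states (the structure function and probabilities below are illustrative assumptions, not from the text):

```python
from itertools import product

# Sum the probabilities of all element-state vectors for which the
# logical structure function evaluates to True (system operable).
def system_probability(structure, element_probs):
    total = 0.0
    for states in product((0, 1), repeat=len(element_probs)):
        if structure(states):
            p = 1.0
            for s, q in zip(states, element_probs):
                p *= q if s else 1.0 - q
            total += p
    return total

# Illustrative structure: element 0 in series with the parallel pair (1, 2)
works = lambda x: bool(x[0] and (x[1] or x[2]))
print(round(system_probability(works, [0.9, 0.8, 0.8]), 4))  # 0.864
```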
3. General logical-probabilistic method.
The need to extend LPM to non-monotonic processes led to the creation of the general logical-probabilistic method (GLPM). In GLPM, the apparatus of mathematical logic is used for the primary graphical and analytical description of the conditions under which individual groups of elements in the designed system perform their functions, while the methods of probability theory and combinatorics are used to quantify the reliability and/or hazard of the functioning of the designed system as a whole. To use GLPM, special structural schemes of the functional integrity of the systems under study, the logical criteria of their functioning, and the probabilistic and other parameters of the elements must be specified.
The so-called event-logical approach is the basis for stating and solving all problems of modeling and calculating system reliability with GLPM. This approach provides for the sequential implementation of the following four main stages:
• stage of structural and logical problem statement;
• the stage of logical modeling;
• probabilistic modeling stage;
• stage of performing calculations of reliability indicators.
2. RELIABILITY ASSESSMENT
For a communication line between two interacting objects, the following definition of reliability is applicable: reliability is the property of a communication system (CS) to maintain over time, within established limits, the values of the parameters that characterize its ability to perform the required functions in the specified modes and conditions of use. However, to give a comparative assessment of the reliability of various products, it is necessary to quantify the reliability of different systems and their elements. The most universal quantitative characteristic of reliability is the availability factor, which is uniquely related to the forced downtime coefficient (unavailability coefficient).
The availability factor K_r is the probability that the system will be operational at an arbitrarily chosen moment of time; it is calculated by formula (4):

K_r = T_0 / (T_0 + t_v),   (4)

where the initial data for the calculation are:
N – the number of failures on the communication line during the given period of time (N = 5);
K – the number of years over which the N failures occurred (K = 7 years);
L – the length of the projected communication line (L = 101.5 km);
t_v – the average connection recovery time (t_v = 2.3 hours).

The mean time between failures T_0 can be determined from the expression

T_0 = 8760 · K / N = 8760 · 7 / 5 = 12 264 hours.

Availability factor:

K_r = 12 264 / (12 264 + 2.3) ≈ 0.9998.
The forced downtime coefficient (unavailability coefficient) K_n is the probability that the system will not be operational at an arbitrarily chosen moment of time; it is calculated by the following formula:

K_n = 1 − K_r = t_v / (T_0 + t_v).
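With the line statistics given above (N = 5 failures over K = 7 years, average recovery time 2.3 hours), the availability calculation can be sketched in code. The MTBF estimate T0 = 8760·K/N, i.e. the observation period in hours divided by the number of failures, is an assumption of this sketch.

```python
# Sketch of the availability calculation for the line described above.
N = 5          # failures on the line over the observation period
K = 7          # observation period, years (8760 hours per year)
t_v = 2.3      # average recovery time, hours

T0 = 8760 * K / N          # mean time between failures, hours (assumption)
K_r = T0 / (T0 + t_v)      # availability factor
K_n = 1 - K_r              # forced downtime (unavailability) factor

print(round(T0, 1), round(K_r, 5), round(K_n, 5))
```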
Despite the simplicity of formula (4), its practical use depends on being able to determine the parameters that enter it: the average time to failure and the average time to restore the operational state. While the average operating time of individual components is specified by the manufacturer, the recovery time depends on many specific operating conditions. Note that the availability factors (K_r) of individual components and of the communication network as a whole are different but interrelated: if the reliability (availability factor) of the system's components is low, the reliability of the entire system will be lower than when more reliable components are used. The international standard ITU-T G.602 characterizes the availability of an optical link channel by referring it to a reference hypothetical transmission system with an optical cable length of 2500 km in one direction (taking possible redundancy into account); in this case, the availability factor must be at least 0.996. For Russian communication lines, it is recommended to recalculate the availability factor for a national hypothetical line 13,900 km long. The availability factor of such a line must be at least 0.98 (without redundancy), which, when recalculated, corresponds to the international norm. Four factors affect the availability factor:
- hardware fault tolerance;
- automatic protective switching;
- methodology and technological discipline of operation;
- nature of the route and protective measures.
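The consistency of the international and national norms quoted above can be checked under the common assumption that unavailability scales linearly with line length (an assumption of this sketch, not a normative rule):

```python
# Unavailability of the 2500-km reference path (0.996 availability norm)
ref_len_km = 2500
ref_unavail = 1 - 0.996                      # 0.004

# Scale linearly to the 13,900-km national hypothetical line
nat_len_km = 13900
nat_unavail = ref_unavail * nat_len_km / ref_len_km
nat_avail = 1 - nat_unavail

print(round(nat_avail, 3))  # about 0.978, close to the 0.98 norm
```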
The main way to improve the reliability of the fiber-optic communication network as a whole is redundancy. In the event of an accident, it is necessary to automatically switch to backup communication lines.
A universal criterion for assessing the quality of a digital communication system is the bit error ratio (BER), defined by formula (5) as the ratio of the number of erroneous bits N_err to the total number of transmitted bits N:

BER = N_err / N.   (5)

The following quality criteria for the communication system are recommended:
- normal: BER < 10^(-10);
- reduced quality: 10^(-10) < BER < 10^(-6);
- degraded: 10^(-6) < BER < 10^(-3);
- failure: BER > 10^(-3).
Since the system cannot function under normal conditions with BER > 10^(-3), this value can be used as the criterion of the system's inoperable state. At this error level, the system automatically switches off the equipment.
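The BER thresholds above can be expressed as a simple classifier (a sketch; the state names follow the list above, and the sample bit counts are illustrative):

```python
# Classify the link state by BER according to the criteria above.
def ber(errored_bits, total_bits):
    """Bit error ratio, formula (5): erroneous bits / transmitted bits."""
    return errored_bits / total_bits

def link_state(b):
    if b < 1e-10:
        return "normal"
    if b < 1e-6:
        return "reduced quality"
    if b < 1e-3:
        return "degraded"
    return "failure"  # the equipment is switched off automatically

print(link_state(ber(3, 10**9)))  # reduced quality
```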
As the complexity of the communication system increases, the probability of failure of any of its components increases. If there is no redundancy in the system, the system availability factor decreases. Modern communication systems use a large number of elements, which makes it necessary to use redundancy and bypass routes to increase the availability of the communication system as a whole.
Systems without redundancy can be used when the repair time is no more than 2 hours, while full double or even triple redundancy is needed to achieve the required availability factor when a critical communication node is located at a hard-to-reach point. For most calculations of the system availability factor under normal access conditions, an average repair time of 4 hours is generally considered acceptable for electronic components. Restoration of fibers or cables can take much longer (various standards set the recovery time of the optical line at from 5 to 48 hours). The reliability of fiber-optic systems depends on the reliability of the components (optical lines, multiplexers, switches, routers, etc.), on the presence of additional sources of failures, and on the selected protection scheme.
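The effect of repair time and duplication on availability can be sketched as follows, assuming independent units with a common MTBF (the MTBF figure is an illustrative assumption, not from the text); a fully duplicated (1+1) unit is down only when both units are down:

```python
# Availability of a single unit and of a 1+1 duplicated pair.
def availability(mtbf_h, repair_h):
    return mtbf_h / (mtbf_h + repair_h)

def duplicated(a):
    """1+1 redundancy: the pair is down only when both units are down."""
    return 1.0 - (1.0 - a) ** 2

a_single = availability(10_000, 4)            # electronics, 4-hour repair
a_dup = duplicated(availability(10_000, 48))  # cable, 48-hour repair, 1+1
print(round(a_single, 6), round(a_dup, 6))
```

Even with a much slower 48-hour cable repair, the duplicated pair achieves higher availability than a quickly repaired single unit, which is why redundancy is preferred for hard-to-reach nodes.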
A key method for improving the reliability of the communication network is redundancy. The most reliable, but also the most expensive, solution is full duplication, when a complete set of standby equipment is kept unused. At the same time, from the point of view of increasing reliability, it is advisable to place redundant optical fibers on geographically separated routes. A cost-effective solution is redundancy according to the 1:N scheme, in which one line serves as a backup for N lines. In communication systems with dense wavelength-division multiplexing (DWDM), the use of tunable lasers reduces the cost of spare equipment. Unfortunately, a well-developed network infrastructure is required to reserve optical cable lines (the most vulnerable element of the communication system) according to the 1:N scheme.
The reliability of the cable line is determined by the reliability characteristics of the cable and its operating conditions. The most important operating conditions are environmental impacts and impacts associated with human activity. Economic activity, mainly mechanized earthworks, is often the main source of damage to underground cables. Aerial cable lines are more exposed to adverse external conditions (lightning, snow sticking, icing, strong wind, etc.). Measuring and calculating the actual reliability of a cable system under specific operating conditions is a very difficult task; in practice, obtaining a reliable value requires accumulating experimental data over a significant period of time. Common causes of damage for both underground and aerial laying methods are the following:
- vandalism;
- hidden defects in the production of optical cable (OK);
- poor-quality construction work or installation;
- design errors (incorrect choice of cable type, non-compliance of technical requirements with operating conditions).
Vandalism is one of the most common causes of damage. Cables with metal elements (which can be detected by metal detectors) and aerially installed OK are most subject to it. A hidden manufacturing defect in OK is currently unlikely, since almost all OK manufacturers are certified under the ISO system and quality control is much easier to organize at the production stage than at later stages. Low-quality construction or installation work is usually revealed when the fiber-optic line is put into operation and in most cases can be corrected in a relatively short time. Design errors have the most serious consequences: they are difficult to diagnose at the stage of putting the communication system into operation and manifest themselves only after some time.
The main causes of damage to underground cable lines are the following:
- mechanical damage to the OK during construction and installation work by third-party organizations within the security zones of the cable line;
- mechanical damage caused by soil movement (landslides, heaving, mudflows, etc.), usually within one or two construction lengths of the optical cable;
- damage to the OK due to aging or moisture entering the cable core;
- damage to cables from thunderstorms (if there are metal elements in the optical cable structure);
- mechanical damage to the OK with the breakage of optical fibers, not related to damage to the elements of the supporting structure;
- deformation of the support element that caused the OK to break;
- the fall of one or more supports, causing the OK to break;
- OK breakage or spontaneous breakage of the optical fiber.
CONCLUSION
In conclusion, it can be said that an accident is easier to prevent than to eliminate, but the practice of building fiber-optic communication lines (FOCL) shows that this truth is often neglected. The reliability of the future system is laid down at the very first stages of the project. A practical recommendation: do not economize excessively on the preparatory stages of the project, which include:
- pre-project survey;
- preparation of technical specifications (TU) and technical requirements (TT) for the future system;
- preparation of requirements for the project organization;
- conducting a tender for the selection of a project organization.
It is highly desirable to take measurements at least twice a year. However, at present many organizations that operate FOCL do not carry out operational measurements or maintenance at all, a saving that eventually turns into economic losses.
LIST OF SOURCES
- Правила проектирования, строительства и эксплуатации волоконно-оптических линий связи на воздушных линиях электропередачи напряжением 0,4–35 кВ // Министерство топлива и энергетики РФ, РАО «ЕЭС России», 2002.
- Павлова Е.Г. Внедрение перестраиваемых лазеров и мультиплексоров в телекоммуникационные сети // Lightwave Russian Edition, 2004, № 4, с. 20.
- Кабыш С. Надежность прежде всего // Сети и телекоммуникации, 2004, № 3, с. 74–79.
- Спиридонов В.Н. Приемка оптических кабелей на заводах-изготовителях // Lightwave Russian Edition, 2003, № 2, с. 35–37.
- Спиридонов В.Н. Оптические волокна и кабели для протяженных линий связи // Lightwave Russian Edition, 2003, № 1, с. 31–35.
- Спиридонов В.Н. Двенадцать характерных ошибок при строительстве ВОЛС // Lightwave Russian Edition, 2004, № 3, с. 34–37.
- Зацаринный А.А., Гаранин А.И., Козлов С.В. Некоторые методические подходы к оценке надежности элементов информационно-телекоммуникационных сетей // Системы и средства информатики, 2011.
- Быховский М.А. Об одной возможности повышения пропускной способности широкополосных систем связи // Мобильные системы, май 2006.
- Парнес М. Адаптивные антенны для систем связи WiMax // Компоненты и технологии, апрель 2007.
- Ларсон Д., Мерти Р., Ци Э. Адаптивный подход к оптимизации производительности беспроводных сетей // Technology at Intel, март 2004.
- Принципы организации сотовой сети мобильной связи [Электронный ресурс]. – Режим доступа: http://afu.com.ua/gsm/obshchie-polozheniya