
Dynamic Routing in ATM Networks.

P. Aranzulla, J. Mellor and Professor P. Mars.

1. Introduction.

Network routing strategies have always been closely aligned with the capabilities of technology. To aid manual switching in early telephone networks, the switching centres were ordered in a fixed hierarchical fashion. A fixed or static routing algorithm might choose from a number of possible paths to a destination, but the probability of choosing a path did not vary with time. In hierarchical routing, nodes are grouped into domains, so that a node needs to know only the next node to which it should send packets destined for a node in a remote domain. The routing table size is therefore reduced, facilitating manual operation. With high connectivity throughout the network the number of switches crossed by a call was limited, even for long distance calls. A hierarchical routing pattern also avoids looping calls back and using an excessive number of trunks on a call. Its use continued after the automation of the switching function because the scheme was simple, the routing decisions of the early electro-mechanical switches being limited. This basic routing strategy remained unchanged in the AT&T network until it was replaced by Dynamic Nonhierarchical Routing (DNHR) in the 1980s.

Dynamic routing allows switch routing tables to be changed either in a pre-planned, time-varying manner or on-line in real time. Pre-planned changes might occur hourly or several times a day, and real-time changes might occur on a call-by-call basis. Dynamic traffic routing increases network utilisation because the routing patterns are varied according to the varying traffic patterns, giving a lower blocking probability and call set-up delay.

AT&T devised various routing schemes. Dynamic Hierarchical Routing (DHR) keeps a fixed hierarchical routing structure but includes three subgroups to satisfy hourly demands, the number of available trunks in each subgroup varying dynamically. DNHR Progressive sends a call from one switch to the next towards the destination, not allowing backtracking (i.e. returning to a previous switch). DNHR Multilink Path uses predefined paths from the source to the destination node, the source node cycling through the various possibilities until the call is accepted, otherwise the call is blocked. Fixed and Dynamic Nonhierarchical Two-Link Path Routing (FNHR and DNHR 2-Link Path) restrict paths to one or two links, which reduces switching costs. Table 1 shows that, far from decreasing throughput, this restriction increased it. This is because about 98% of traffic is normally routed on one- and two-link paths anyway (owing to the high connectivity of telecommunication networks), so the restriction to a two-link maximum decreases the average path length (and hence the number of links utilised) whilst giving a similar blocking probability.

Real-Time Dynamic Traffic Routing (RTDTR) can be traffic-dependent, event-dependent or state-dependent. Traffic-dependent strategies change routing patterns every few minutes based on projected traffic load estimates (requiring large computation and centralised control of routing tables) or on call completion statistics. Event-dependent strategies might use blocking as an event, routing calls along one path until it blocks and only then selecting another path. More efficient strategies are state-dependent, changing the routing patterns every few seconds or on every call.
The AT&T RTNR (Real Time Network Routing) strategy calculates the optimum one- or two-link path on a call-by-call basis, the computation being performed by each source node with the information obtained via call set-up signalling.
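
As an illustration of state-dependent, call-by-call routing, the Python sketch below prefers the direct path when trunks are free and otherwise takes the least loaded two-link path. The select_path function and the free-trunk table are invented for illustration; this is not the actual RTNR implementation, which obtains its state via call set-up signalling.

# Hypothetical sketch of state-dependent path selection in the spirit of RTNR:
# prefer the direct (one-link) path, otherwise the least loaded two-link path.
# The free-trunk table is invented for illustration.

def select_path(src, dst, nodes, free_trunks):
    """Return [src, dst] or [src, via, dst], or None if the call is blocked."""
    if free_trunks.get((src, dst), 0) > 0:
        return [src, dst]                       # direct path has a free trunk
    best_via, best_free = None, 0
    for via in nodes:
        if via in (src, dst):
            continue
        # A two-link path is only as good as its more congested link.
        free = min(free_trunks.get((src, via), 0), free_trunks.get((via, dst), 0))
        if free > best_free:
            best_via, best_free = via, free
    return [src, best_via, dst] if best_via else None

# Example: the direct A-C trunks are exhausted, so the call is routed via B.
nodes = ["A", "B", "C", "D"]
free_trunks = {("A", "C"): 0, ("A", "B"): 3, ("B", "C"): 5,
               ("A", "D"): 7, ("D", "C"): 2}
print(select_path("A", "C", nodes, free_trunks))    # ['A', 'B', 'C']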

2. Routing in ATM Networks.

The ATM protocol has been designed to increase network speed by reducing the complexity of the switching function, reducing the routing table size, and removing flow control and maintenance from the switch level. For routing purposes ATM can be viewed as a packet-switched, connection-oriented network in which each packet (cell) has a fixed length of 53 bytes. Virtual Channel Connections (VCCs) are allocated to users to establish end-to-end connections; each VCC comprises one or more Virtual Channel (VC) links, each link having a Virtual Channel Identifier (VCI). Routing actually occurs on two levels in ATM: at the VC level and at the Virtual Path (VP) level. Each VC link is carried over a Virtual Path Connection (VPC), which comprises one or more VP links (VPLs); these correspond to physical links between nodes, with each VPL having a Virtual Path Identifier (VPI).

The concept of virtual paths was introduced in ATM networks to reduce the size of the routing tables and the complexity of the network control functions. Each VPC represents a bundle of VC links, so that at the intermediate nodes of the VPC the VCI is ignored and switching uses only the VPI field in the cell header. This reduces the routing table size, because the many different VC links 'carried' by the VPC are switched using only one entry in the routing table, and because only half of the cell header address (the VPI alone) requires processing, the switching function is also faster.

An ATM network can therefore be thought of as a physical network, with a topology comprising links and nodes, overlaid by a logical network. The logical network consists of logical links (the VPCs) and logical nodes, the end nodes of the VPCs, since the intermediate nodes are not accessed or switched directly at the VC level. A connection is made at the VC level and is routed over logical links or VC links, these being VPCs. The routing of the connection is thus abstracted from the actual physical network topology, since only the defined logical network topology can be accessed.
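
The following Python sketch illustrates why virtual paths shrink the routing tables: an intermediate node of a VPC switches on the VPI alone, so all VC links bundled into the VPC share a single table entry, whereas a node terminating the VPC needs one entry per VC link. The table contents, port numbers and identifier values are invented for illustration.

# Sketch of VP-level versus VC-level switching with invented table entries.
# At an intermediate node of a VPC only the VPI is looked up and translated,
# so every VC link bundled into the VPC shares one routing table entry.

# VP switch: (in_port, in_VPI) -> (out_port, out_VPI)
vp_table = {
    (1, 10): (2, 20),                   # one entry for the whole VPC
}

# VC switch at the VPC end node: (in_port, in_VPI, in_VCI) -> (out_port, out_VPI, out_VCI)
vc_table = {
    (2, 20, 101): (3, 30, 201),         # one entry per VC link terminating here
    (2, 20, 102): (3, 31, 202),
}

def vp_switch(port, vpi, vci):
    out_port, out_vpi = vp_table[(port, vpi)]
    return out_port, out_vpi, vci       # the VCI passes through untouched

def vc_switch(port, vpi, vci):
    return vc_table[(port, vpi, vci)]   # both identifiers are translated

# Two cells belonging to different VCCs on the same VPC: the VP switch serves
# both from a single entry, the VC switch needs an entry for each connection.
print(vp_switch(1, 10, 101))            # (2, 20, 101)
print(vp_switch(1, 10, 102))            # (2, 20, 102)
print(vc_switch(2, 20, 101))            # (3, 30, 201)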

2.1. Defining the Routing Problem.

For any network technology, the routing function determines over which physical links the information will be sent or routed. In circuit-switched telephone networks this is a single function, but, as shall be shown, this is not the case for ATM networks.

The first part of the routing problem in ATM networks concerns the routing of VCCs over the VPC topology that has been set up. Suppose there is an ATM network with the physical topology shown in Figure 1, and suppose two VPCs have been set up, one along path A-B-C and the other along path A-D-C. Connections (VCCs) can be routed either over the first or the second VPC to reach destination node C. According to this decision, the packets or cells are routed over different physical links at node A and so use different physical network resources. The VCC routing decision is therefore part of the routing function in ATM networks.

The second part of the routing problem concerns the VPC topology set-up. Suppose instead that the two VPCs are both set up along path A-B-C. Connections can again be routed over either VPC to reach destination node C, but whatever VCC routing decision is taken, the cells will be routed along the same physical links. The VPC topology set-up thus determines over which physical links the cells can be sent, and so is also part of the routing function.

The third part concerns the bandwidth allocation to the VPCs. Returning to the first example, if a relatively small bandwidth is assigned to the first VPC (A-B-C) and a large bandwidth to the second (A-D-C), then most VCCs will be routed along the second VPC and most cells will be routed along its physical links. The bandwidth allocation process therefore also affects the routing of the cells and can be considered part of the routing function.

It is felt that insufficient research has occurred in this area of ATM traffic control, with few algorithms having been proposed to perform any of the three routing functions. Initial effort concentrates on the first part of the routing function, since routing decisions will be required most frequently at the VC level, when trying to route a new connection over the pre-defined logical network topology.
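
The Python sketch below puts the three functions together for the Figure 1 example: the VPC topology and its bandwidth allocation are fixed first, and the VCC routing decision then selects a VPC with sufficient spare capacity, which in turn determines the physical links the cells traverse. The route_vcc function and all bandwidth figures are invented for illustration.

# Sketch of the three routing functions on the Figure 1 topology, with
# invented bandwidth figures. The VPC topology set-up and the bandwidth
# allocation fix the logical network; VCC routing then picks one VPC, and
# that choice determines which physical links carry the cells.

vpcs = [
    {"id": 1, "path": ["A", "B", "C"], "allocated": 20.0, "used": 18.0},
    {"id": 2, "path": ["A", "D", "C"], "allocated": 150.0, "used": 40.0},
]

def route_vcc(src, dst, effective_bw):
    """VCC routing: the first VPC from src to dst with enough spare bandwidth."""
    for vpc in vpcs:
        if vpc["path"][0] == src and vpc["path"][-1] == dst:
            if vpc["allocated"] - vpc["used"] >= effective_bw:
                vpc["used"] += effective_bw
                return vpc
    return None                          # connection blocked

chosen = route_vcc("A", "C", effective_bw=5.0)
print(chosen["id"], chosen["path"])      # 2 ['A', 'D', 'C']: cells use links A-D and D-C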

2.2. Dynamic VCC Routing.

A good routing algorithm has the following properties: robustness, stability, fairness and optimality. Optimality and fairness are often not compatible. In the case of ATM networks fairness is guaranteed at call set-up time, since a call is accepted only if its Quality of Service (QoS) can be guaranteed by the network; for this study the QoS measure was taken to be the cell loss probability. An optimal algorithm will maximise network throughput, and so minimise the network connection blocking probability.

Assuming that the Call Admission Control function operates using the effective bandwidths of the new and existing connections, a new connection will only be admitted if the sum of the effective bandwidths is equal to or less than the VPC's allocated bandwidth. An expression for the effective bandwidth of each connection is obtained from [[iv]] as

    c = Rpeak [ (y - x) + sqrt( (y - x)^2 + 4xry ) ] / (2y),   with y = a b (1 - r) Rpeak and a = ln(1/e)

where c is the effective bandwidth for the single connection, x is the buffer size, b is the mean burst duration, Rpeak is the peak rate of the connection, r is the utilisation and e is the buffer overflow probability. Performing calculations in which a is varied and all other values are kept fixed shows that a lower buffer overflow probability results in a higher effective bandwidth, as expected.

This result is important when considering VCC routing over VPCs. For a connection to be guaranteed a certain cell loss probability, the sum of the losses over the various logical links traversed must not exceed the specified cell loss probability. The overall cell loss probability must therefore be divided by the number of logical links traversed to obtain the cell loss probability required on each logical link or VPC, which means that a higher hop count results in a higher effective bandwidth requirement for the connection. To maximise network throughput it is therefore necessary to minimise the hop count as far as possible, whilst not blocking calls for lack of resources on the shortest routes when alternate routes of a slightly higher hop count are available.
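
A short numerical sketch of this effect is given below in Python, using the effective bandwidth expression reconstructed above; the effective_bandwidth function name and all parameter values are purely illustrative. As the end-to-end cell loss target is split across more logical links, the per-link overflow probability e falls and the effective bandwidth required on each VPC rises.

import math

def effective_bandwidth(x, b, r_peak, r, e):
    """Effective bandwidth of one connection for buffer size x (bits), mean
    burst duration b (s), peak rate r_peak (bit/s), utilisation r and buffer
    overflow probability e, using the expression given above."""
    a = math.log(1.0 / e)
    y = a * b * (1.0 - r) * r_peak
    return r_peak * (y - x + math.sqrt((y - x) ** 2 + 4.0 * x * r * y)) / (2.0 * y)

# Illustrative source: 10 Mbit/s peak rate, 20% utilisation, 1 ms mean bursts,
# a 100 kbit buffer per VPC and an end-to-end cell loss target of 1e-9.
x, b, r_peak, r, loss_target = 1.0e5, 1.0e-3, 10.0e6, 0.2, 1.0e-9

for hops in (1, 2, 3):
    e_per_link = loss_target / hops          # split the loss budget over the VPCs
    c = effective_bandwidth(x, b, r_peak, r, e_per_link)
    print(f"{hops} hop(s): {c / 1e6:.2f} Mbit/s required per logical link")
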
Previous work in this area has attempted to adapt routing algorithms currently used in circuit-switched networks, such as AT&T's RTDTR, to ATM networks [[v]]. However these algorithms assume a highly connected network structure and cannot operate with sparsely connected topologies. Telephone networks are indeed highly if not fully connected, but other networks, such as the US Internet backbone, are known to be more sparsely connected. Another type of algorithm, Minimum Sum of Loads (MSL) [[vi]], has therefore been proposed, which operates over a more general network topology. MSL routes connections over the least loaded routes but does not take the length of the path into consideration. It can therefore choose long paths which require a higher effective bandwidth, over-using network resources to the greater blocking of future connection requests. To overcome this problem, the Adaptive Minimum Hop Routing (AMH) algorithm was shown to outperform MSL in most cases [[vii]]. AMH calculates the loading on each of the minimum hop routes and chooses the least loaded. However, because it can choose only from the minimum hop routes, its usefulness is greatly hampered in highly connected topologies. To overcome this limitation a new algorithm, Alternate Adaptive Minimum Hop (AAMH), is proposed. AAMH is a superset of AMH: rather than only the minimum hop paths being available, the next minimum set of paths is also permissible.

For a fully connected topology this causes AAMH to perform as RTDTR, which will out-perform AMH since AMH operates only on the direct routes. With a sparse network topology AAMH is expected to out-perform AMH because the second set of possible routes becomes permissible when the connection cannot be routed using the first set.
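
The Python sketch below illustrates the AAMH idea as described above: choose the least loaded feasible route among the minimum-hop paths (the AMH rule) and, only when none of these can accept the connection, fall back to the set of paths in the next minimum hop count. The route function, the load measure and the admission test are simplified placeholders rather than the published algorithm.

from itertools import groupby

def route(paths, effective_bw, capacity, used):
    """paths: candidate routes as lists of VPC identifiers for one source-
    destination pair. Chooses the least loaded feasible minimum-hop path;
    if none is feasible, the set of paths one hop longer is also permitted."""
    def load(path):
        return max(used[vpc] / capacity[vpc] for vpc in path)    # bottleneck utilisation
    def admits(path):
        return all(capacity[vpc] - used[vpc] >= effective_bw for vpc in path)

    groups = [list(g) for _, g in groupby(sorted(paths, key=len), key=len)]
    for group in groups[:2]:            # minimum-hop set, then the next-minimum set only
        feasible = [p for p in group if admits(p)]
        if feasible:
            best = min(feasible, key=load)                       # the AMH choice within a set
            for vpc in best:
                used[vpc] += effective_bw
            return best
    return None                         # connection blocked

# Example: the one-hop VPC "AC" cannot accept the call, so the connection is
# routed over the two-hop alternative "AB" + "BC".
capacity = {"AC": 10.0, "AB": 10.0, "BC": 10.0}
used = {"AC": 9.5, "AB": 2.0, "BC": 4.0}
print(route([["AC"], ["AB", "BC"]], effective_bw=1.0, capacity=capacity, used=used))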