Understanding the MTP congestion control mechanisms and developing an effective application congestion control strategy is a critical step in building a reliable system.
MTP tracks and controls both inbound and outbound congestion. When outbound congestion occurs, upper layers bound to MTP are notified, enabling them to take corrective action, such as reducing the traffic load that they generate, before the congestion becomes severe and impacts the operation of the service. MTP also takes action on its own during both inbound and outbound congestion to ensure that application problems or very high network traffic do not overwhelm normal operation.
In most places, MTP uses a four-level congestion control strategy, where level 0 indicates that the destination is not congested and level 3 indicates the most congested state. This strategy matches the ANSI standards.
In MTP, there are two possible causes of outbound congestion:
- The application generates MTP traffic at a rate greater than the capacity of the SS7 links or downstream network, resulting in network overload.
- The application generates MTP requests faster than the MTP layer can process them, resulting in the MTP service send queue building up beyond pre-determined thresholds (MTP service congestion).
Network overload occurs when the MTP layer outbound queues build up beyond configured limits due to a traffic load exceeding the capacity of the available signaling links or to receipt of a transfer controlled message (TFC) regarding a congested destination.
When outbound traffic exceeds the total link capacity, transmission queues in the MTP 2 layer begin to build up. When the MTP 2 transmission queue for a link becomes full (layer 2 configuration parameter L2_TXQ_THRESH2), the MTP 3 layer stops sending to the congested link, causing MTP 3 queues to build up in turn. There are four configurable queue-length thresholds; when the MTP 3 queue for a link crosses a given level's threshold, a congestion indication of that priority is generated. The following table lists the parameters and defaults from the LINK section in the MTP 3 configuration file:
| Parameter | Default |
| --- | --- |
| P0QUE_LENGTH | 16 |
| P1QUE_LENGTH | 32 |
| P2QUE_LENGTH | 64 |
| P3QUE_LENGTH | 128 |
Note: These MTP 3 queues are in addition to the L2_TXQ_THRESH2 messages queued at layer 2.
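As a rough illustration only (the function and its logic are not part of the Natural Access API), the default LINK thresholds above can be read as mapping a link's MTP 3 transmit queue depth to a congestion level, assuming that crossing the level-n threshold raises the congestion level to n:

```c
#include <stddef.h>

/* Illustrative sketch, not Natural Access API: map an MTP 3 link transmit
 * queue depth to a congestion level using the default LINK thresholds
 * (P0QUE_LENGTH..P3QUE_LENGTH = 16/32/64/128). Crossing the level-n
 * threshold is assumed to raise the congestion level to n. */
static int link_congestion_level(size_t queue_depth)
{
    static const size_t threshold[4] = { 16, 32, 64, 128 };
    int level = 0;
    for (int n = 1; n <= 3; ++n)
        if (queue_depth >= threshold[n])
            level = n;
    return level;
}
```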
Whether MTP 3 transmit queues build up due to network overload or because a transfer controlled (TFC) was received about a remote destination, the application is notified with an MTP status indication. This indication is the Natural Access event MTP3EVN_DATA with a message code of MTP3_STAT_IND and a status of StatCongested. The indication contains the affected point code and the current congestion level (0 through 3). The application must reduce its traffic load toward the affected destination until the congestion abates.
In ANSI networks and in other national networks employing multiple congestion levels, the application must not generate any new traffic toward the affected destination with a priority lower than the current destination congestion level; such traffic is discarded at the MTP layer.
For the international signaling network and other ITU-based networks without multiple congestion priorities, the application must reduce the traffic load toward the affected destination, as the MTP layer discards outgoing packets only in cases of excessive queuing of traffic to congested signaling links. If the application fails to reduce its traffic load toward the congested destination, the congestion condition can escalate.
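For the ANSI multiple-priority case, the discard rule described above amounts to a simple comparison. The following sketch (the function name is hypothetical, not a Natural Access call) shows the check an application might apply before submitting a message:

```c
#include <stdbool.h>

/* Illustrative sketch of the ANSI multiple-priority rule: a message whose
 * priority is lower than the destination's current congestion level would
 * be discarded at the MTP layer, so the application should not submit it.
 * The function name is hypothetical. */
static bool ansi_may_send(int msg_priority, int dest_congestion_level)
{
    return msg_priority >= dest_congestion_level;
}
```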
When the network overload condition ceases, the application receives the Natural Access event MTP3EVN_DATA with a message code of MTP3_STAT_IND and a status of StatCongestionEnds, indicating that the application can resume normal traffic toward the affected destination.
MTP 3 service congestion occurs when an application generates traffic faster than the MTP layer can accept it, resulting in the MTP 3 service transmission queue building beyond pre-determined thresholds. This situation applies only to applications that use the MTP 3 service to directly communicate with MTP. The application is notified of congestion with an MTP3EVN_CONGEST Natural Access event that includes the current MTP 3 congestion level (0 through 3, where 0 indicates that congestion ceased). As the MTP 3 congestion level increases, the application is expected to reduce its traffic load proportionately until the congestion ceases.
By default, the MTP 3 service allocates a buffer pool for up to 256 requests to be queued to the MTP layer. If the application fails to reduce its traffic load enough to ease the congestion, eventually the MTP 3 service buffer pool becomes depleted and the MTP 3 send functions fail with a CTAERR_OUT_OF_MEMORY return code. The application can increase the number of buffers in the pool by setting service argument array element six to a number between 128 and 1024 when opening the MTP service. The increased number of buffers allows a larger burst of traffic to be absorbed without triggering congestion at the cost of using more host memory. For more information, refer to Using ctaOpenServices.
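Since the documented range for service argument array element six is 128 through 1024, an application might clamp a requested pool size before opening the service. This helper is hypothetical (the ctaOpenServices call itself is not shown):

```c
/* Hypothetical helper: clamp a requested MTP 3 service buffer pool size to
 * the documented 128..1024 range before supplying it as service argument
 * array element six to ctaOpenServices (call not shown). */
static unsigned clamp_pool_size(unsigned requested)
{
    if (requested < 128)
        return 128;
    if (requested > 1024)
        return 1024;
    return requested;
}
```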
Congestion onset and abatement thresholds are always set to a fixed percentage of the buffers in use (queued to the MTP layer) regardless of the total size of the pool, as shown in the following table:
| Congestion level | Onset threshold | Abatement threshold |
| --- | --- | --- |
| 1 | Greater than 75% of pool in use | Less than 50% of pool in use |
| 2 | Greater than 85% of pool in use | Less than 80% of pool in use |
| 3 | Greater than 95% of pool in use | Less than 90% of pool in use |
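Because onset and abatement thresholds differ, the congestion level changes with hysteresis. The following sketch (illustrative only, not part of the Natural Access API) models how the level could be tracked from the fraction of the pool in use:

```c
#include <stddef.h>

/* Illustrative sketch, not Natural Access API: track the MTP 3 service
 * congestion level with the hysteresis implied by the thresholds above,
 * with onset at more than 75/85/95% of the pool in use and abatement at
 * less than 50/80/90%. */
static int service_congestion_level(int level, size_t in_use, size_t pool_size)
{
    static const double onset[4] = { 0.0, 0.75, 0.85, 0.95 };
    static const double abate[4] = { 0.0, 0.50, 0.80, 0.90 };
    double frac = (double)in_use / (double)pool_size;

    /* Escalate while usage exceeds the next level's onset threshold. */
    while (level < 3 && frac > onset[level + 1])
        ++level;
    /* Abate while usage falls below the current level's abatement threshold. */
    while (level > 0 && frac < abate[level])
        --level;
    return level;
}
```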
MTP inbound congestion is caused by the inability of the MTP application to read incoming messages as fast as they are generated by the network, resulting in a build-up of the user SAP queue or a depletion of the layer 1 limited pools used to receive incoming messages on each link.
Unlike outbound congestion, the MTP application is not directly notified of inbound congestion level changes to prevent escalation of the congestion condition. However, an alarm is always generated when a change occurs in the inbound congestion level for an MTP user SAP.
For inbound congestion, the MTP layer cannot rely on the application to reduce its traffic load to ease the congestion, as the source of the traffic bursts is generally other network nodes. The MTP layer acts directly to control inbound congestion in two ways:
- As the SAP queues to upper layers build up, configurable thresholds are crossed that set the SAP congestion priority. For national networks, MTP discards inbound messages with a priority lower than the current SAP congestion level, ensuring that a slow or dead upper layer cannot starve out other upper layers. No such discarding is done for international networks, which have no message priorities; the limited pools ensure that all board memory is not exhausted in that case, but a slow or dead upper layer could still starve out other upper layers.
- After the level 1 limited pools reach a defined threshold of allocated buffers, MTP 2 begins generating SIBs (status indication busy) on that link. Enough buffers remain in the limited pool to handle an additional window of 127 messages after SIBs are started. These limited pools prevent a runaway link from consuming all of MTP's memory.
There are four configurable thresholds at which discarding of lower-priority national messages occurs. The following table lists the parameters and defaults from the SAP section in the MTP 3 configuration file:
| Parameter | Default |
| --- | --- |
| P0QUE_LENGTH | 0 |
| P1QUE_LENGTH | 512 |
| P2QUE_LENGTH | 798 |
| P3QUE_LENGTH | 896 |
For example, when there are 512 messages queued to an upper layer, the SAP congestion priority becomes 1, and thereafter national messages of priority 0 are discarded rather than queued. Similarly, when the queue size reaches 798, messages of priorities 0 and 1 are discarded.
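The example above can be sketched in code. This is illustrative only (not part of the Natural Access API): it derives the SAP congestion priority from the queue depth using the default SAP thresholds, then applies the national-network discard rule:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative sketch, not Natural Access API: compute the SAP congestion
 * priority from the inbound queue depth using the default SAP thresholds
 * (P0QUE_LENGTH..P3QUE_LENGTH = 0/512/798/896). */
static int sap_congestion_priority(size_t queued)
{
    static const size_t threshold[4] = { 0, 512, 798, 896 };
    int level = 0;
    for (int n = 1; n <= 3; ++n)
        if (queued >= threshold[n])
            level = n;
    return level;
}

/* In national networks, an inbound message is discarded when its priority
 * is lower than the current SAP congestion priority. */
static bool sap_discards_inbound(size_t queued, int msg_priority)
{
    return msg_priority < sap_congestion_priority(queued);
}
```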
The number of inbound messages discarded due to SAP congestion can be determined with Mtp3NSapStatus.
An application that cannot keep up with incoming messages can use flow control to force queuing of messages in the lower layers until it can again accept incoming data. The following illustration shows how MTP3Flow is used:
When flow control is in effect, messages build up in the MTP 3 SAP queue and are handled as described in Inbound congestion.