Early crossbar systems were slow in call processing as they used electromechanical components for common control subsystems. Efforts to improve the speed of control and signalling between exchanges led to the application of electronics in the design of control and signalling subsystems. In the late 1940s and early 1950s, a number of developmental efforts made use of vacuum tubes, transistors, gas diodes, magnetic drums and cathode ray tubes for realising control functions. Circuits using gas tubes were developed and employed for timing, ring translation and selective ringing of party lines. Vacuum tubes were used in single frequency signalling and transistors in line insulation test circuits. These efforts were precursors to the modern electronic digital computers. Switching engineers soon realised that, in principle, the registers and translators of the common control systems could be replaced by a single digital computer.

Figure 1. Block Diagram of Common Control Switching System


1. Stored Program Control Exchanges

Modern digital computers use the stored program concept. Here, a program or a set of instructions to the computer is stored in its memory and the instructions are executed automatically one by one by the processor. Carrying out the exchange control functions through programs stored in the memory of a computer led to the nomenclature stored program control (SPC). An immediate consequence of program control is the full-scale automation of exchange functions and the introduction of a variety of new services to users including:

(a)  Common channel signalling (CCS)

(b)  Centralised maintenance

(c)  Automatic fault diagnosis

(d)  Interactive human-machine interface


Introducing a computer to carry out the control functions of a telephone exchange is not as simple as using a computer for scientific or commercial data processing. A telephone exchange must operate without interruption, 24 hours a day, 365 days a year and for say, 30-40 years. This means that the computer controlling the exchange must be highly tolerant to faults. Fault tolerant features were unknown to early commercial computers and the switching engineers were faced with the task of developing fault tolerant hardware and software systems. In fact, major contributions to fault tolerant computing have come from the field of telecommunication switching.


Attempts to introduce electronics and computers in the control subsystem of an exchange were encouraging enough to spur the development of full-fledged electronic switching systems, in which the switching network is also electronic. After about 10 years of developmental effort and field trials, the world's first electronic switching system, known as No. 1 ESS, was commissioned by AT&T at Succasunna, New Jersey, in May 1965. Since then, the history of electronic switching systems and stored program control has been one of rapid and continuous growth in versatility and range of services. Today, SPC is a standard feature of all electronic exchanges. However, attempts to replace space division electromechanical switching matrices by semiconductor crosspoint matrices have not been greatly successful, particularly in large exchanges, and switching engineers were forced to return to electromechanical miniature crossbars and reed relays, but within a completely electronic environment. As a result, many space division electronic switching systems use electromechanical switching networks with SPC. Nonetheless, private automatic branch exchanges (PABX) and smaller exchanges do use electronic switching devices. The two types of space division electronic switching systems, one using an electromechanical switching network and the other an electronic switching network, are depicted in Figure 1. Both types qualify as electronic switching systems, although only one of them is fully electronic. With the evolution of time division switching, which is done in the electronic domain, modern exchanges are fully electronic.



There are basically two approaches to organising stored program control: centralised and distributed. Early electronic switching systems (ESS) developed during the period 1970-75 almost invariably used centralised control. Although many present day exchange designs continue to use centralised SPC, with the advent of low cost powerful microprocessors and very large scale integration (VLSI) chips such as programmable logic arrays (PLA) and programmable logic controllers (PLC), distributed SPC is gaining popularity.



In centralised control exchanges, all the control equipment is replaced by a single processor, which must be quite powerful. It must be capable of processing 10 to 100 calls per second, depending on the load on the system, while simultaneously performing many other ancillary tasks. A typical control configuration of an ESS using centralised SPC is shown in Figure 2.


Figure 2. Organisation of Centralised SPC Exchange


A centralised SPC configuration may use more than one processor for redundancy purposes. Each processor has access to all the exchange resources like scanners and distribution points and is capable of executing all the control functions. A redundant centralised structure is shown in Figure 3.  Redundancy may also be provided at the level of exchange resources and function programs. In actual implementation, the exchange resources and the memory modules containing the programs for carrying out the various control functions may be shared by processors, or each processor may have its own dedicated access paths to exchange resources and its own copy of programs and data in dedicated memory modules.


Figure 3. Block Diagram of a Redundant Centralized SPC Exchange


Most electronic switching systems using centralised control employ only a two-processor configuration. A dual processor architecture may be configured to operate in one of three modes:

1. Standby mode

2. Synchronous duplex mode

3. Load sharing mode.


1. Standby Mode

Standby mode of operation is the simplest of the dual processor configurations. Normally, one processor is active and the other is on standby, both hardware and software wise. The standby processor is brought online only when the active processor fails. An important requirement of this configuration is the ability of the standby processor to reconstitute the state of the exchange when it takes over control, i.e. to determine which of the subscribers and trunks are busy or free, which paths are connected through the switching network, etc. In small exchanges, this may be possible by scanning all the status signals as soon as the standby processor is brought into operation. In that case, only the calls being established at the time of failure of the active processor are disturbed. In large exchanges, it is not possible to scan all the status signals within a reasonable time. Here, the active processor copies the status of the system periodically, say every five seconds, into a secondary storage. When a switchover occurs, the newly active processor loads the most recent update of the system status from the secondary storage and continues the operation. In this case, only the calls which changed status between the last update and the failure of the active processor are disturbed. Figure 4 shows a standby dual processor configuration with a common backup storage. The shared secondary storage need not be duplicated; simple unit level redundancy suffices.
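The checkpoint-and-takeover scheme described above can be sketched in a few lines of Python. All the names here (ExchangeStatus, SecondaryStore, and so on) are illustrative assumptions, not part of any real exchange software; the sketch only shows the flow of periodic status copies and switchover recovery:

```python
import time

CHECKPOINT_INTERVAL = 5  # seconds, as suggested in the text


class ExchangeStatus:
    """Snapshot of which subscribers and trunks are busy (hypothetical model)."""
    def __init__(self, busy_subscribers=None, busy_trunks=None):
        self.busy_subscribers = set(busy_subscribers or [])
        self.busy_trunks = set(busy_trunks or [])


class SecondaryStore:
    """Shared backup storage; per the text, unit-level redundancy suffices."""
    def __init__(self):
        self.snapshot = None
        self.timestamp = None

    def write(self, status):
        self.snapshot = status
        self.timestamp = time.time()

    def read(self):
        return self.snapshot


def active_processor_tick(status, store):
    # The active processor periodically copies the exchange status to backup.
    store.write(status)


def standby_takeover(store):
    # On switchover, the standby loads the most recent snapshot; only calls
    # that changed state after the last checkpoint are disturbed.
    return store.read() or ExchangeStatus()
```

Only calls set up or released between the last `active_processor_tick` and the failure would be inconsistent after `standby_takeover`, which matches the behaviour described above.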


Figure 4. Standby Dual Processor SPC Configuration.


2. Synchronous Duplex Mode

In synchronous duplex mode of operation, hardware coupling is provided between the two processors which execute the same set of instructions and compare the results continuously. If a mismatch occurs, the faulty processor is identified and taken out of service within a few milliseconds. When the system is operating normally, the two processors have the same data in their memories at all times and simultaneously receive all information from the exchange environment. One of the processors actually controls the exchange, whereas the other is synchronised with the former but does not participate in the exchange control. The synchronously operating configuration is shown in Figure 5. If a fault is detected by the comparator, the two processors P1 and P2 are decoupled and a check-out program is run independently on each of the machines to determine which one is faulty. The check-out program runs without disturbing the call processing which is suspended temporarily. When a processor is taken out of service on account of a failure or for maintenance, the other processor operates independently. When a faulty processor is repaired and brought into service, the memory contents of the active processor are copied into its memory, it is brought into synchronous operation with the active processor and then the comparator is enabled.
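The compare-and-checkout cycle above can be illustrated with a small sketch. The functions below are hypothetical stand-ins for the hardware comparator and the check-out program, with each "processor" modelled as a function from inputs to results:

```python
def execute_in_lockstep(p1_step, p2_step, inputs):
    """Run the same instruction on both processors and compare the results,
    as the hardware comparator does. Returns (result, fault_detected)."""
    r1 = p1_step(inputs)
    r2 = p2_step(inputs)
    if r1 != r2:
        # Mismatch: the processors are decoupled and the check-out
        # program is run on each to identify the faulty one.
        return None, True
    return r1, False


def checkout(processor, test_vectors, expected_results):
    """Check-out program run independently on each machine after a mismatch
    (call processing is suspended temporarily while it runs)."""
    return all(processor(v) == e
               for v, e in zip(test_vectors, expected_results))
```

A mismatch by itself does not say which processor is at fault; only the independent check-out run does, which is why a transient fault that fails to reappear leaves the decision ambiguous, as discussed next.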

Figure 5. Synchronous Duplex Operation of Dual Processor SPC Exchange


It is possible that a comparator fault occurs on account of a transient failure which does not show up when the check-out program is run. In such cases, the decision as to how to continue the operation is arbitrary and three possibilities exist:


1. Continue with both the processors.

2. Take out the active processor and continue with the other processor.

3. Continue with the active processor but remove the other processor from service.


Strategy 1 is based on the assumption that the fault is a transient one and may not reappear. Often, however, transient faults are the forerunners of an impending permanent fault, which can be detected by an exhaustive diagnostic test of the processor under marginal voltage, current and temperature conditions. Strategies 2 and 3 are based on this hypothesis. In these cases, the processor that is taken out of service is subjected to extensive testing to identify a marginal failure. The decision to use strategy 2 or 3 is somewhat arbitrary.


3. Load Sharing Mode

In load sharing operation, an incoming call is assigned randomly or in a predetermined order to one of the processors, which then handles the call through to completion. Thus, both processors are active simultaneously and share the load and the resources dynamically. The configuration is shown in Figure 6. Both processors have access to the entire exchange environment, which is sensed as well as controlled by these processors. Since the calls are handled independently by the processors, they have separate memories for storing temporary call data. Although programs and semi-permanent data can be shared, they are kept in separate memories for redundancy purposes. There is an inter-processor link through which the processors exchange information needed for mutual coordination and for verifying the 'state of health' of the other. If the exchange of information fails, the processor which detects the failure takes over the entire load, including the calls already set up by the failing processor. However, the calls that were being established by the failing processor are usually lost. Sharing of resources calls for an exclusion mechanism so that both processors do not seize the same resource at the same time. The mechanism may be implemented in software or hardware or both. Figure 6 shows a hardware exclusion device which, when set by one of the processors, prohibits access to a particular resource by the other processor until it is reset by the first processor.
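The set-until-reset behaviour of the hardware exclusion device can be modelled in software as a simple ownership flag. This is only a sketch of the semantics described above, not a model of any particular exchange's hardware; the class and method names are assumptions:

```python
import threading


class ExclusionDevice:
    """Software model of the hardware exclusion device: whichever processor
    sets it holds sole access to a resource until that same processor
    resets it."""

    def __init__(self):
        self._owner = None
        self._lock = threading.Lock()  # makes set/reset atomic in this model

    def try_set(self, processor_id):
        # Succeeds only if no processor currently owns the resource.
        with self._lock:
            if self._owner is None:
                self._owner = processor_id
                return True
            return False  # the other processor holds the resource

    def reset(self, processor_id):
        # Only the owning processor can release the resource.
        with self._lock:
            if self._owner == processor_id:
                self._owner = None
```

For example, if P1 sets the device for a trunk, P2's attempt to set it fails until P1 resets it, which is exactly the prohibition the text describes.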

Figure 6. Load Sharing Configuration for SPC Exchanges


Under normal operation, each processor handles one-half of the calls on a statistical basis. The exchange operators can, however, send commands to split the traffic unevenly between the two processors. This may be done, for example, to test a software modification on one processor at low traffic, while the other handles the majority of the calls. The load sharing configuration gives much better performance in the presence of traffic overloads than the other operating modes, since the capacities of both processors are available to handle overloads. Load sharing increases the effective traffic capacity by about 30 per cent when compared to synchronous duplex operation. Load sharing is a step towards distributed control.




One of the main purposes of a redundant configuration is to increase the overall availability of the system. A telephone exchange must show more or less continuous availability over a period of perhaps 30 or 40 years. We now compare the availability figures of a single processor and a dual processor system. The availability A of a single processor system is given by:

    A = MTBF / (MTBF + MTTR)

where MTBF is the mean time between failures and MTTR is the mean time to repair. Unavailability is given by:

    U = 1 - A = MTTR / (MTBF + MTTR)

If MTBF >> MTTR, then

    U ≈ MTTR / MTBF
For a dual processor system, the mean time between failures, MTBFD, can be computed from the MTBF and MTTR values of the individual processors. A dual processor system is said to have failed only when both processors fail and the system is totally unavailable. Such a situation arises only when one of the processors has failed and the second processor also fails while the first one is being repaired. In other words, this is related to the conditional probability that the second processor fails during the MTTR period of the first processor, given that the first processor has already failed.
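This conditional-failure argument can be made concrete with a short numerical sketch. The MTBF and MTTR figures below are assumed purely for illustration, and MTBFD = MTBF^2 / (2 x MTTR) is the standard result that follows from the argument (the dual system fails at the rate at which one processor fails multiplied by the chance the other fails during its repair):

```python
def availability(mtbf, mttr):
    # A = MTBF / (MTBF + MTTR)
    return mtbf / (mtbf + mttr)


def unavailability(mtbf, mttr):
    # U = 1 - A = MTTR / (MTBF + MTTR); roughly MTTR/MTBF when MTBF >> MTTR
    return mttr / (mtbf + mttr)


def dual_mtbf(mtbf, mttr):
    # The dual system fails only when the second processor fails while the
    # first is under repair, giving MTBFD = MTBF^2 / (2 * MTTR).
    return mtbf ** 2 / (2 * mttr)


# Assumed figures for illustration: MTBF = 2000 hours, MTTR = 4 hours
mtbf, mttr = 2000.0, 4.0
print(round(availability(mtbf, mttr), 6))    # 0.998004
print(round(unavailability(mtbf, mttr), 6))  # 0.001996
print(dual_mtbf(mtbf, mttr))                 # 500000.0
```

With these assumed figures the duplicated system's mean time between total failures jumps from 2000 hours to 500,000 hours, which is why duplication is worthwhile despite the doubled hardware cost.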

                                                                                                                                                        ©  Prof. Ambani Kulubi May-Aug 2015