
Mobile Voice, Origin To Extinction - Bath 12 April 2016: Summary & Comments

Peter Shiret set himself a considerable task in explaining that many of the concepts and constraints surrounding modern mobile telephony had their origins and parallels in fixed-line electrical telegraphy.

From the beginning there had been the desire to transmit the spoken word, albeit in symbolic form. Inventors such as Cooke and Wheatstone produced multi-wire, multi-needle ‘speaking telegraphs’. Morse came up with the idea of a binary ‘dot and dash’ system using a two-wire circuit, later simplified to a single line wire and earth return. As circuits got longer the signal got weaker but could be regenerated by repeating relays. Digital regeneration had preceded analogue amplification.

Other inventors found ways of using a single ‘circuit’ for two then four independent channels. Delaney took that a stage further by adding ‘time division’ multiplexors at each end, the combined techniques producing 24 channels over the single circuit.
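Delany-style time division can be illustrated with a toy sketch (my own Python, not from the talk): a multiplexor interleaves symbols from several channels onto the shared line, and the far end separates them again by taking every n-th symbol.

```python
# Toy sketch of time-division multiplexing: interleave symbols from
# several telegraph channels onto one shared line, then separate them
# again at the far end. Channel count and messages are invented.

def tdm_mux(channels):
    """Round-robin interleave equal-length channel streams onto one line."""
    return [sym for frame in zip(*channels) for sym in frame]

def tdm_demux(line, n_channels):
    """Recover each channel by taking every n-th symbol off the line."""
    return [line[i::n_channels] for i in range(n_channels)]

channels = [list("AAAA"), list("BBBB"), list("CCCC"), list("DDDD")]
line = tdm_mux(channels)             # one shared circuit
assert tdm_demux(line, 4) == channels
```

Delany's 24-channel system worked on the same round-robin principle, with synchronised rotating distributors at each end playing the part of `tdm_mux` and `tdm_demux`.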

The invention of the telephone by Elisha Gray and Alexander Bell enabled the ‘real time’ transmission of voice, without the need for a trained telegraph operator at each end. The rapid take-up of the telephone produced its own problems as the maze of overhead wires converged on the telephone exchanges. At that time overhead wires were also being used for power distribution and electric tramways. This resulted in interference in the telephone circuits.

Solutions to these problems included reversion to twin-wire balanced circuits, line-crossovers and twisted-pair cables run underground. New solutions produce new problems and it was explained how the increased capacitance between the wires of the pair caused the signal to attenuate rapidly as the circuits got longer. The theoretical work of Oliver Heaviside led to the use of the loading coil that enabled telephone circuits to extend to 1,600 miles without amplification. The downside was that the line now acted as a low-pass filter, cutting off at about 4 kHz, and thereby setting the telephone speech standard for the next hundred years.

Our speaker then gave us a brief ‘digital primer’, showing that in line with the theories of Nyquist and Shannon, a waveform can be sampled at twice the channel bandwidth, quantised, coded, transmitted, regenerated, received, decoded and re-created. Once the signal is in the coded form it can be multiplexed for transmission and moved between time slots to achieve circuit switching. Many of these techniques owe their original conception to the early telegraph engineers.
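The sample/quantise/decode chain can be sketched in a few lines of Python (my own illustration, not the speaker's figures): a tone is sampled at 8 kHz, i.e. twice the roughly 4 kHz channel bandwidth, quantised to 8-bit codes, and rebuilt at the far end with only a small quantisation error.

```python
import math

# Illustrative PCM chain: sample at 8 kHz (twice the ~4 kHz channel
# bandwidth), quantise to 8-bit codes, then decode at the far end.
# A simple linear quantiser is used here; real telephony uses the
# companded A-law/mu-law variants.

SAMPLE_RATE = 8000   # samples per second
LEVELS = 256         # 8-bit quantiser

def sample(f_hz, n):
    """Take n samples of a sine tone at f_hz."""
    return [math.sin(2 * math.pi * f_hz * i / SAMPLE_RATE) for i in range(n)]

def quantise(x):
    """Map sample values in -1..1 to integer codes 0..255."""
    return [min(LEVELS - 1, int((v + 1) / 2 * LEVELS)) for v in x]

def decode(codes):
    """Rebuild approximate sample values from the codes."""
    return [(c + 0.5) / LEVELS * 2 - 1 for c in codes]

tone = sample(1000, 16)              # 1 kHz test tone
rebuilt = decode(quantise(tone))
# the reconstruction error is bounded by one quantiser step (2/256)
assert all(abs(a - b) < 2 / LEVELS for a, b in zip(tone, rebuilt))
```

Once the samples are integer codes like these, the interleaving and time-slot switching mentioned above become straightforward operations on streams of numbers.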

For the second part of the talk we were given a comprehensive overview of the various techniques used in the mobile telephone networks. The first generation (1G) networks used analogue signals between the handset and the base station with a 25 kHz bandwidth, i.e. considerably better than the conventional telephone; however, this was throttled to the ‘Heaviside standard’ beyond the base station. Second generation (2G) technology brought in digital coding between the handset and the base station. This provided eight PCM (pulse code modulation) channels, either at ‘full-rate’ 22 kbit/s or ‘half-rate’, the latter being of poor quality. Third generation (3G) technology moved away from the idea of channels (analogue for 1G, digital for 2G) and introduced the concept of coded signals: all handsets working to a base station receive the same signals but ignore those without the appropriate header.
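One way to picture the 3G coded-signal idea is code-division spreading (my own toy sketch, not the speaker's example): each handset is assigned an orthogonal code, every transmission shares the same air, and correlating the combined signal against your own code recovers only your bit while the others cancel out.

```python
# Toy sketch of code-division: each handset has an orthogonal 4-chip
# spreading code; all transmissions add together on the shared channel,
# and correlating with one code recovers only that handset's bit.

CODES = {"A": [1, 1, 1, 1], "B": [1, -1, 1, -1]}   # orthogonal Walsh codes

def spread(bit, code):
    """Multiply one bit (+1 or -1) across the chips of a code."""
    return [bit * c for c in code]

def combined(bits):
    """All handsets' chip streams add together on the shared channel."""
    chips = [spread(b, CODES[user]) for user, b in bits.items()]
    return [sum(col) for col in zip(*chips)]

def despread(signal, code):
    """Correlate with one code: the other (orthogonal) codes cancel."""
    return sum(s * c for s, c in zip(signal, code)) // len(code)

air = combined({"A": 1, "B": -1})    # both transmit at the same time
assert despread(air, CODES["A"]) == 1
assert despread(air, CODES["B"]) == -1
```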

Fourth generation (4G) handsets provide a data-only channel, the ‘smart phone’, relying on 3G or 2G technology to make the voice connection. The extra computing power available in these handsets allows elaborate combinations of amplitude and phase modulation to be used, increasing the channel capacity.
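Combining amplitude and phase modulation can be sketched as 16-QAM (my own illustration; the talk did not name a specific scheme): each 4-bit group selects one of 16 amplitude/phase points, so every transmitted symbol carries four bits instead of one, quadrupling the channel capacity for the same symbol rate.

```python
# Toy 16-QAM sketch: the in-phase and quadrature amplitudes each take one
# of four levels, giving 16 combined amplitude/phase points, so every
# symbol carries four bits.

CONSTELLATION = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]

def modulate(bits):
    """Map each 4-bit group to a constellation point."""
    symbols = []
    for k in range(0, len(bits), 4):
        index = int("".join(str(b) for b in bits[k:k + 4]), 2)
        symbols.append(CONSTELLATION[index])
    return symbols

def demodulate(symbols):
    """Pick the nearest constellation point and recover its bits."""
    bits = []
    for s in symbols:
        index = min(range(16), key=lambda n: abs(CONSTELLATION[n] - s))
        bits.extend(int(b) for b in format(index, "04b"))
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(payload)) == payload
```

The "extra computing power" in the handset goes into the `demodulate` step: picking the right point out of a dense constellation in the presence of noise is where the hard arithmetic lives.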

Various techniques were described for improving the voice channels, such as switching rapidly between codecs (coding/decoding algorithms) depending on signal conditions, suppressing ‘silences’ rather than transmitting them and filling them in with ‘comfort noise’ at the receiving end, and increasing the bandwidth to 14 kHz.
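The silence-suppression idea is simple enough to sketch (my own Python; the threshold and noise level are invented for illustration): frames whose energy falls below a threshold are never sent, and the receiver fills the gaps with locally generated low-level noise so the line does not sound dead.

```python
import random

# Toy sketch of silence suppression: frames whose energy falls below a
# threshold are not transmitted; the receiver substitutes low-level
# 'comfort noise'. Threshold and noise level are invented values.

THRESHOLD = 0.01          # mean-square energy below this counts as silence
FRAME_LEN = 4             # samples per frame in this toy example

def transmit(frames):
    """Send only active frames; None marks a suppressed (silent) frame."""
    def energy(frame):
        return sum(v * v for v in frame) / len(frame)
    return [frame if energy(frame) > THRESHOLD else None for frame in frames]

def receive(sent):
    """Replace suppressed frames with locally generated comfort noise."""
    def comfort_noise():
        return [random.uniform(-0.005, 0.005) for _ in range(FRAME_LEN)]
    return [frame if frame is not None else comfort_noise() for frame in sent]

speech = [[0.5, -0.4, 0.3, -0.2],       # active frame
          [0.0, 0.001, 0.0, -0.001],    # near-silence: suppressed
          [0.6, -0.5, 0.4, -0.3]]
sent = transmit(speech)
assert sent[1] is None                  # the silent frame never leaves the handset
assert len(receive(sent)) == 3          # but the listener hears a full stream
```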

The talk ended with a brief discussion as to whether the mobile operators would be able to ‘keep the customer’ by offering better voice/music quality, possibly accompanied by video, or would merely provide a mobile data channel for third-parties to deliver the ‘experience’ that the customer wants.

I must say that this talk was unusual in the depth to which the history of the technology was covered. Personally I think that is no bad thing as it is good to be reminded, (and maybe inspired?), of the innovations of the past. The history of telecommunications is full of parallels with modern technology. In this country the railway companies simplified the telegraph even further, the block bell system using codes rather than letter-by-letter ciphers as in the Morse system. Not mentioned in the talk was the telex/teleprinter network, a lot closer to modern binary than Morse code and adding routing codes to the head of messages, anticipating networks and coded multiplexes.

It was fascinating to hear about the various schemes used by mobile phones. I rather suspect that the unsung heroes here are the mathematicians!

Whither voice? Watching young users of smart phones, they browse, they ‘Facebook’, they text - they don’t talk. They spend 20 minutes doing something ‘online’ that they could have done on the phone in 20 seconds, except that maybe the number could be engaged and they would have to explain to a real person what they want! As my ‘communications’ lecturer said, communication systems have to be engineered to suit the characteristics of the channel and the physiology of the users. I think the changing psychology of the users could be just as important.
  • An excellent talk, during which Peter gave a comprehensive overview of the evolution of voice telecommunication. It is a subject that he clearly has deep knowledge and experience of. With regards to the future, it was also very timely given T-Mobile’s announced launch of their “next-gen” voice service in the States. Unfortunately Europe does appear to be lagging in the deployment of such services, with for example limited 4G VoLTE availability as compared to the States and Asia. An area that would have been interesting to discuss further is WiFi calling and the different operators’ approaches to that (link). Services such as this and the operators’ ability to more closely manage a user’s quality of experience become very interesting in the debate on the alternative voice services offered by OTT providers.
  • Agreed, Walter, Peter did a great job. However I wonder if mobile, (and fixed?), telephony has had its day.


    Whatever it is called surely the way to go is IP to the mobile device. I say device because it is no longer just a phone.


    Some sort of 'universal' wireless IP delivery will be needed for the full implementation of 'the internet of things' and 'smart cities' and to piggy-back that onto a telephony service long-term is daft. In the fixed telephony model we have stuck to what I termed the 'Heaviside standard', it having been decided that it is just too expensive to change subscriber apparatus, line plant and, to some extent, switching and transmission equipment. The mobile telephony model is one of 'must have' the latest kit so it could be phased out without substantial user resistance.


    Is this new future desirable? Will we have the server/backbone capacity? Is it really 'smart' to put 'everything' onto one network? Yes it allows for optimisation, 'intelligent' methods etc. but the security and resilience implications are frightening. Optimised systems have no 'slack'. Imagine a local 'disaster': everyone reaches for their mobile device and the traffic light and utility systems promptly shut down as network demand is no longer 'random' or 'diverse' but highly correlated.