Parallel processing is a well-known approach to enhancing the performance of communication subsystems, and several forms of parallelism embedded in communication protocols have been applied to the Open Systems Interconnection (OSI) protocol stack. However, the layered architecture of the OSI protocol stack imposes sequential processing, preventing the layers from processing data immediately as it arrives from the network. This becomes a limiting factor in performance.
The most time-consuming part of the OSI protocol stack is the encoding and decoding of Abstract Syntax Notation One (ASN.1)/Basic Encoding Rules (BER) performed in the presentation layer. The conventional ASN.1/BER decoding scheme operates on a whole Presentation Protocol Data Unit (PPDU) that has been completely reassembled by the session layer and the layers below. The delay between receiving the beginning of a PPDU and starting ASN.1/BER decoding is therefore a crucial factor in the performance of the OSI protocol stack.
This thesis proposes a new ASN.1/BER decoding scheme called Partial Decoding. The Partial Decoding scheme allows ASN.1/BER decoding to start immediately on a Session Protocol Data Unit (SPDU), a Transport Protocol Data Unit (TPDU), or a Network Protocol Data Unit (NPDU) before a whole PPDU has been received. This enables the presentation layer and one or more lower layers to execute simultaneously on the same data unit; the Partial Decoding scheme thus achieves both parallel processing and immediate protocol processing. We explore two parallel approaches to OSI processing from the network layer through the presentation layer: pipelining and Multiple Instruction Single Data (MISD).
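The core idea behind Partial Decoding can be illustrated with a minimal sketch of an incremental BER tag-length-value (TLV) decoder. This is a hypothetical illustration, not the thesis implementation: the `PartialBERDecoder` class and its methods are invented for this example, it handles only short-form lengths and primitive encodings, and it stands in for a decoder that consumes each arriving SPDU/TPDU/NPDU fragment as it comes rather than waiting for a fully reassembled PPDU.

```python
class PartialBERDecoder:
    """Sketch of incremental ("partial") BER decoding.

    Instead of buffering a complete PPDU, the decoder accepts the bytes
    of each lower-layer data unit as it arrives and decodes every TLV
    triple that is already complete, suspending mid-TLV when bytes are
    still missing. (Hypothetical example; short-form lengths only.)
    """

    def __init__(self):
        self.buf = bytearray()   # bytes received but not yet decoded
        self.fields = []         # decoded (tag, value) pairs

    def feed(self, chunk: bytes):
        """Feed the payload of one arriving SPDU/TPDU/NPDU fragment."""
        self.buf.extend(chunk)
        self._decode_available()

    def _decode_available(self):
        # Decode as many complete tag-length-value triples as possible;
        # an incomplete trailing triple stays buffered until more data
        # arrives via feed().
        while True:
            if len(self.buf) < 2:
                return                      # tag or length byte missing
            tag = self.buf[0]
            length = self.buf[1]            # short-form length, for brevity
            if len(self.buf) < 2 + length:
                return                      # value not fully received yet
            value = bytes(self.buf[2:2 + length])
            self.fields.append((tag, value))
            del self.buf[:2 + length]       # consume the decoded triple
```

Feeding a BER INTEGER split across two fragments shows the overlap this enables: the decoder inspects the first fragment as soon as it arrives, and completes the value when the second fragment is delivered, so presentation-layer decoding proceeds concurrently with lower-layer reassembly.

```python
d = PartialBERDecoder()
d.feed(bytes([0x02, 0x03, 0x01]))   # INTEGER tag, length 3, first value byte
# d.fields is still [] -- the value is incomplete, decoding is suspended
d.feed(bytes([0x02, 0x03]))         # remaining value bytes arrive
# d.fields is now [(0x02, b'\x01\x02\x03')]
```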