Parallel processing has been studied by many computer scientists as a way to provide faster execution environments for applications, and various kinds of parallel computers have been developed through a great deal of technological and theoretical effort. However, when an application runs on a parallel system, overall performance is determined not only by the capacity of the parallel system but also by the load balancing of the program's tasks. To take advantage of the inherent parallelism of these architectures, efficient allocation and scheduling methods have to be developed.
This thesis is concerned with allocating and scheduling program tasks on parallel systems. The tasks are the consumers and are represented by directed graphs called data flow graphs. The processing elements are the resources, and their interconnection networks are represented by undirected graphs. First, we modeled parallel programs, parallel systems, and communication costs. Then we developed algorithms that allocate and schedule tasks to maximize performance for a given set of program tasks and a target machine. In this thesis, we introduce four algorithms: (1) an optimal scheduling algorithm for cyclic synchronous tasks in fully connected multiprocessors; (2) an optimal task scheduling algorithm for cyclic synchronous tasks in general multiprocessor networks; (3) an optimal algorithm for minimizing the computing period of control tasks in multiprocessors; and (4) a polynomial algorithm for minimizing the computing period of control tasks in multiprocessors.
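The graph models above can be sketched in a few lines of code. This is an illustrative representation under assumed conventions, not the thesis's exact formalism: task names, costs, and processor labels are hypothetical.

```python
# Sketch (assumed representation): a parallel program as a directed
# data-flow graph with computation costs, and a target machine as an
# undirected processor-interconnection graph.

# Task graph: task -> list of successor tasks (directed edges = data flow).
task_graph = {
    "t1": ["t2", "t3"],
    "t2": ["t4"],
    "t3": ["t4"],
    "t4": [],
}

# Computation cost of each task (arbitrary illustrative units).
comp_cost = {"t1": 2, "t2": 3, "t3": 1, "t4": 2}

# Processor network: undirected links between processing elements.
proc_nodes = ["p1", "p2", "p3"]
proc_edges = [("p1", "p2"), ("p2", "p3"), ("p1", "p3")]

def fully_connected(edges, nodes):
    """True if every pair of PEs shares a direct link (undirected)."""
    pairs = {frozenset(e) for e in edges}
    return all(frozenset((a, b)) in pairs
               for i, a in enumerate(nodes) for b in nodes[i + 1:])

print(fully_connected(proc_edges, proc_nodes))  # True: fully connected machine
```

A query like `fully_connected` distinguishes the two machine classes the algorithm list refers to: fully connected multiprocessors versus general multiprocessor networks, where communication cost depends on the route between processing elements.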
We verified that our optimal scheduling algorithms provide optimal solutions and showed that their processing time is reasonable for normal-size applications. To handle very large scheduling problems, we developed a heuristic polynomial-complexity algorithm that divides the task allocation and scheduling process into four stages.
These algorithms can be applied directly to task schedulers in operating systems or compilers supporting m...