
Pthreads

 

 An overview of Pthreads

 Introduction

Thread: A thread is a ‘lightweight process’: a stream of instructions that can be scheduled as an independent unit. A thread exists within a process and uses that process’s resources. Because threads are much smaller than processes, thread creation is relatively cheap in terms of CPU cost; and because threads share their process’s resources rather than carrying their own resource bundle, they are also frugal with memory. A process can contain multiple threads, so a multithreaded program may have several threads running through different code paths “simultaneously”.
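
A minimal sketch of this idea, assuming a POSIX system (the names worker and shared_message are illustrative, not from the original text): two threads created inside one process read the same global variable, showing that threads share the process’s resources. Typically built with gcc threads.c -pthread.

    /* Sketch: threads within one process share the process's data. */
    #include <pthread.h>
    #include <stdio.h>

    const char *shared_message = "set once by main, visible to every thread";

    static void *worker(void *arg)
    {
        long id = (long)arg;                     /* per-thread argument */
        printf("thread %ld reads: %s\n", id, shared_message);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);                  /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }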

Shared Memory Programming

Shared-memory systems typically provide both static and dynamic process creation. That is, processes can be created at the beginning of program execution by a directive to the operating system, or they can be created during the execution of the program. The best-known dynamic process creation function is fork: a typical implementation allows a process to start another, or child, process by calling fork. Three mechanisms typically coordinate processes in shared-memory programs. The first lets the starting, or parent, process wait for the termination of a child process by calling join. The second prevents processes from improperly accessing shared resources. The third provides a means for synchronizing the processes.
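
A minimal sketch of the fork/join pattern described above, assuming a POSIX system where waitpid plays the role of join; error handling is deliberately omitted:

    /* Sketch: dynamic process creation with fork, and a parent "join" via waitpid. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();                 /* parent creates a child process */
        if (child == 0) {
            printf("child  %d: doing work\n", (int)getpid());
            return 0;                         /* child terminates */
        }
        waitpid(child, NULL, 0);              /* parent waits for the child to finish */
        printf("parent %d: child has terminated\n", (int)getpid());
        return 0;
    }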

The shared-memory model is similar to the data-parallel model in that it has a single (globally named) address space. It is similar to the message-passing model in that it is multithreaded and asynchronous. However, data reside in a single, shared address space and thus do not have to be explicitly allocated. The workload can be allocated either explicitly or implicitly. Communication is done implicitly, through shared reads and writes of variables; synchronization, however, is explicit.
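
A small Pthreads sketch of that distinction (the names shared_total and add_one_hundred are illustrative): the two threads communicate implicitly by writing and reading a shared variable, while the synchronization, the mutex, has to be explicit.

    /* Sketch: implicit communication via a shared variable, explicit synchronization via a mutex. */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_total = 0;                       /* shared data, never explicitly "sent" */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *add_one_hundred(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100; i++) {
            pthread_mutex_lock(&lock);                 /* explicit synchronization */
            shared_total++;                            /* implicit communication: a shared write */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, add_one_hundred, NULL);
        pthread_create(&b, NULL, add_one_hundred, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared_total = %d\n", shared_total);   /* 200, thanks to the mutex */
        return 0;
    }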

Shared-variable programs are multithreaded and asynchronous, and they require explicit synchronization to maintain the correct execution order among the processes. Parallel programming based on the shared-memory model has not progressed as far as message-passing parallel programming; one indicator is the lack of a widely accepted standard such as MPI or PVM for message passing. The current situation is that shared-memory programs are written in platform-specific languages for multiprocessors (mostly SMPs and PVPs). Such programs are not portable even among multiprocessors, let alone multicomputers (MPPs and clusters). Three platform-independent shared-memory programming models are X3H5, Pthreads, and OpenMP.

 Pthreads:

The Pthreads library is a POSIX C API thread library with standardized functions for using threads across different platforms. Historically, hardware vendors implemented their own proprietary versions of threads. These implementations differed substantially from each other, making it difficult for programmers to develop portable threaded applications. To take full advantage of the capabilities provided by threads, a standardized programming interface was required. For UNIX systems, this interface is specified by the IEEE POSIX 1003.1c standard (1995). Implementations that adhere to this standard are referred to as POSIX threads, or Pthreads. Most hardware vendors now offer Pthreads in addition to their proprietary APIs. Pthreads are defined as a set of C language programming types and procedure calls; vendors usually provide a Pthreads implementation in the form of a header/include file and a library that you link with your program.
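
The shortest possible illustration of that interface, included here as a sketch rather than a definitive example: the pthread.h header supplies the types and procedure calls, and the library is typically linked by compiling with gcc hello_pthread.c -pthread.

    /* Sketch: the core Pthreads calls, pthread_create and pthread_join. */
    #include <pthread.h>
    #include <stdio.h>

    static void *say_hello(void *arg)
    {
        printf("hello from thread (arg = %s)\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        int rc = pthread_create(&tid, NULL, say_hello, "spawned by main");
        if (rc != 0) {                        /* pthread_create returns an error number */
            fprintf(stderr, "pthread_create failed: %d\n", rc);
            return 1;
        }
        pthread_join(tid, NULL);              /* wait for the thread before exiting */
        return 0;
    }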

Why Pthreads?

  • The primary motivation for using Pthreads is to realize potential program performance gains.
  • When compared to the cost of creating and managing a process, a thread can be created with much less operating system overhead. Managing threads requires fewer system resources than managing processes.
  • All threads within a process share the same address space. Inter-thread communication is more efficient and, in many cases, easier to use than inter-process communication.
  • Threaded applications offer potential performance gains and practical advantages over non-threaded applications in several other ways:
  • Overlapping CPU work with I/O: for example, a program may have sections where it performs a long I/O operation. While one thread waits for an I/O system call to complete, other threads can perform CPU-intensive work (see the sketch after this list).
  • Priority/real-time scheduling: tasks that are more important can be scheduled to supersede or interrupt lower priority tasks.
  • Asynchronous event handling: tasks that service events of indeterminate frequency and duration can be interleaved. For example, a web server can both transfer data from previous requests and manage the arrival of new requests.
  • Multi-threaded applications work on a uniprocessor system, yet naturally take advantage of a multiprocessor system without recompiling.
  • In a multiprocessor environment, the most important reason for using Pthreads is to take advantage of potential parallelism; this is the focus of the remainder of this tutorial.
  • Designing threaded programs: for a program to take advantage of Pthreads, it must be possible to organize it into discrete, independent tasks that can execute concurrently.
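
Referring back to the overlapping-I/O item above, here is a hedged sketch (the sleep call merely stands in for any slow read or write): one thread blocks in a simulated I/O operation while the main thread keeps computing.

    /* Sketch: overlapping CPU work with I/O using a second thread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *slow_io(void *arg)
    {
        (void)arg;
        sleep(2);                                  /* pretend this is a long, blocking I/O call */
        printf("I/O thread: operation complete\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t io_thread;
        pthread_create(&io_thread, NULL, slow_io, NULL);

        unsigned long long sum = 0;
        for (unsigned long long i = 0; i < 100000000ULL; i++)
            sum += i;                              /* CPU-intensive work proceeds meanwhile */
        printf("main thread: computed sum = %llu\n", sum);

        pthread_join(io_thread, NULL);             /* then wait for the I/O to finish */
        return 0;
    }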
