- A thread is the basic unit of CPU utilization, also called a lightweight process (LWP). It is a flow of execution through process code, with its own program counter, registers, and stack.
- Threads improve application performance through parallelism.
Parallelism vs Concurrency : Parallelism is a technique used to make a program run faster by performing certain computations in parallel (example: GPU computations). It mainly focuses on reducing data dependencies so that two calculations can be performed without communication between them.
Concurrency refers to techniques that make a program more usable. When an operating system is called a multi-tasking operating system, this means it supports concurrency.
- Each thread belongs to exactly one process and can't belong to more than one, but a process can have more than one thread.
- In an application, multiple threads allow multiple requests to be satisfied simultaneously, without having to service requests sequentially or fork off a separate process for every incoming request.
- Threads reduce context-switching time, which increases responsiveness.
- Threads are economical: creating and managing a thread is much faster than doing the same for a process.
- By default, threads share common data, code, and other resources, so multiple tasks can be performed within the same address space.
- Unlike process switching, thread switching does not require switching the address space, which makes it much cheaper.
Types of threads :
- User-level threads : They are supported above the kernel and implemented by a thread library at user level. The kernel is not aware of these threads. A multi-threaded application using them cannot take advantage of multiprocessing. They are generic and run on any OS.
- Kernel threads : The OS performs their creation, scheduling, and management, so they are slower to create and manage. If one thread in a process is blocked, the kernel can schedule another thread of the same process. They are specific to an OS.
Multithreading Models :
In a specific implementation, the user threads must be mapped to kernel threads, using one of the following strategies.
Many to one model :
- Many user-level threads are all mapped to a single kernel-level thread.
- Thread management is handled by thread library in user space.
- When a thread makes a blocking call, the entire process blocks.
- An individual process cannot be split across multiple CPUs, since a single kernel thread can run on only one CPU at a time.
- Green threads, a thread library available on early Solaris systems, used this model.
One to one model :
- A separate kernel thread handles each user thread.
- Overcomes the blocking problem and allows a process to be split across multiple CPUs.
- Creating a kernel thread for every user thread slows down the system, and there is a limit on how many such threads can be created.
- Linux and Windows (from 95 through XP) use this model.
Many to many model :
- Any number of user threads are mapped to an equal or smaller number of kernel threads.
- No restriction on the number of threads created, and no blocking or splitting issues.
- Individual processes may be allocated variable numbers of kernel threads, depending on the number of CPUs present and other factors.
- Tru64 UNIX uses this model.
Threading Issues :
- The fork( ) and exec( ) System Calls : fork() creates a new process that is an exact copy of the parent. exec() replaces a process's image with a program loaded from a binary, so the result is not the same as the parent. When a multi-threaded process calls fork(), whether all threads are duplicated or the new process is single-threaded depends on the system.
- Signal Handling : A signal is used to notify a process that a particular event has occurred. In a multi-threaded process there are four major options for where a signal is delivered : deliver it to the thread to which the signal applies, deliver it to every thread in the process, deliver it to certain threads in the process, or assign a specific thread to receive all signals for the process.
- Thread Cancellation : A thread that is no longer needed (the target thread) can be cancelled by another thread in one of two ways :
- Asynchronous Cancellation terminates the target thread immediately; shared resources and data may be left in an inconsistent state.
- Deferred Cancellation sets a token indicating the thread should cancel itself when it is convenient.
- Thread Pools : Creating a new thread for every request is costly, and the number of threads that might be needed is unbounded. It is better to create a fixed number of threads ahead of time and put them in a pool, from which a thread is taken when required and returned when its task is done. Win32 provides thread pools through the QueueUserWorkItem() function. Java also provides support for thread pools through the java.util.concurrent package, and Apple supports thread pools under the Grand Central Dispatch architecture.