Guide to the POSIX Threads Library

2.3.2.2 Setting the Scheduling Policy Attribute

The scheduling policy attribute describes how new threads are scheduled for execution relative to the other threads in the process.

A thread has one of the following scheduling policies:

  • SCHED_FIFO (first-in/first-out, or FIFO)---The highest-priority thread runs until it blocks. If more than one thread shares the highest priority, the first of those threads to begin running continues until it blocks. If a thread with this policy becomes ready and has a higher priority than the currently running thread, the current thread is preempted and the higher-priority thread immediately begins running.
  • SCHED_RR (round-robin, or RR)---The highest-priority thread runs until it blocks; however, threads of equal priority are time sliced. If a thread with this policy becomes ready and has a higher priority than the currently running thread, the current thread is preempted and the higher-priority thread immediately begins running.
    On a multiprocessor, threads of varying policy and priority may run simultaneously. A high-priority thread is not guaranteed exclusive use of a multiprocessor system; you must use synchronization, not scheduling attributes, to ensure exclusive access.
  • SCHED_OTHER (Foreground or "throughput"; also known as SCHED_FG_NP )---This is the default scheduling policy. All threads are time sliced, and no thread with this policy will completely starve any other thread with this policy, regardless of any thread's priority. (Time slicing is a mechanism that ensures that every thread is allowed time to execute by preempting running threads at fixed intervals.) However, higher-priority threads tend to receive more execution time than lower-priority threads, if the threads behave similarly.
    Threads with this scheduling policy can be denied execution time by first-in/first-out (FIFO) or round-robin (RR) threads. Threads in this policy do not preempt other threads.

Section 2.3.6 describes and shows the effect of the scheduling policy on thread scheduling.

2.3.2.2.1 Techniques for Setting the Scheduling Policy Attribute

Use either of two techniques to set a thread attributes object's scheduling policy attribute:

  • Set the scheduling policy attribute in the attributes object, which establishes the scheduling policy of a new thread when it is created. To do so, call the pthread_attr_setschedpolicy() routine. This allows the creator of a thread to establish the created thread's initial scheduling policy. (Note that this value is used only if the attributes object is set so that the created thread does not inherit its scheduling policy from the creating thread, as described in Section 2.3.2.1. Inheriting the scheduling policy is the default behavior.)
  • Change the scheduling policy of an existing thread (and, at the same time, the scheduling parameters) by calling the pthread_setschedparam() routine. This routine allows a thread to change its own scheduling policy and/or scheduling priority, but has no effect on the corresponding settings in the thread attributes object.

When you change the scheduling policy attribute, be sure that the scheduling parameters attribute is compatible with the new policy before using the attributes object to create a thread.
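
The following sketch illustrates the first technique using standard POSIX calls. It is not taken from this guide: error handling is abbreviated, and the choice of SCHED_RR with a mid-range priority is only an example of keeping the policy and parameters compatible.

  #include <pthread.h>
  #include <sched.h>

  static void *worker(void *arg)
  {
      /* ... thread work ... */
      return arg;
  }

  int main(void)
  {
      pthread_attr_t attr;
      struct sched_param param;
      pthread_t thread;

      pthread_attr_init(&attr);

      /* Use the attributes object's settings instead of inheriting them. */
      pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

      /* Request round-robin scheduling with a priority that is valid for
         that policy, keeping the policy and parameters compatible. */
      pthread_attr_setschedpolicy(&attr, SCHED_RR);
      param.sched_priority = (sched_get_priority_min(SCHED_RR) +
                              sched_get_priority_max(SCHED_RR)) / 2;
      pthread_attr_setschedparam(&attr, &param);

      if (pthread_create(&thread, &attr, worker, NULL) != 0)
          return 1;    /* Real-time policies may require privileges. */

      pthread_join(thread, NULL);
      pthread_attr_destroy(&attr);
      return 0;
  }

Note that on many systems, creating a thread with a real-time policy may require appropriate privileges, so always check the status returned by pthread_create().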

2.3.2.2.2 Comparing Throughput and Real-Time Policies

The default throughput scheduling policy is intended to be an "adaptive" policy, giving each thread an opportunity to execute based on its behavior. That is, the Threads Library tends to give a thread that does not execute often preferential access to the processor, because it is not greatly affecting other threads. Conversely, the Threads Library tends to give less preference to compute-bound threads with the throughput scheduling policy.

This yields a responsive system in which all threads with throughput scheduling policy get a chance to run fairly frequently. It also has the effect of automatically resolving priority inversions, because over time any threads that have received less processing time (among those with throughput scheduling policy) will rise in preference while the running thread drops, and eventually the inversion is reversed.

The FIFO and RR scheduling policies are considered "real-time" policies, because they require the Threads Library to schedule such threads strictly by the specified priority. Because threads that use real-time scheduling policies require additional overhead, the incautious use of the FIFO or RR policies can cause the performance of the application to suffer.

If relative priorities of threads are important to your application---that is, if a compute-bound thread really requires consistently predictable execution---then create those threads using either the FIFO or RR scheduling policy. However, use of "real-time" policies can expose the application to unexpected performance problems, such as priority inversions, and therefore their use should be avoided in most applications.

2.3.2.2.3 Portability of Scheduling Policy Settings

Only the SCHED_FIFO and SCHED_RR scheduling policies are portable across POSIX-conformant implementations. The other scheduling policies are extensions to the POSIX standard.

Note

The SCHED_OTHER identifier is portable, but the POSIX standard does not specify the behavior that it signifies. For example, on non-Compaq platforms, the SCHED_OTHER scheduling policy could be identical to either the SCHED_FIFO or the SCHED_RR policy.

2.3.2.3 Setting the Scheduling Parameters Attribute

The scheduling parameters attribute specifies the execution priority of a thread. (Although the terminology and format are designed to allow adding more scheduling parameters in the future, only priority is currently defined.) The priority is an integer value, but each policy can allow only a restricted range of priority values. You can determine the range for any policy by calling the sched_get_priority_min() or sched_get_priority_max() routines. The Threads Library also supports a set of nonportable symbols designating the priority range for each policy, as follows:

Low               High
PRI_FIFO_MIN      PRI_FIFO_MAX
PRI_RR_MIN        PRI_RR_MAX
PRI_OTHER_MIN     PRI_OTHER_MAX
PRI_FG_MIN_NP     PRI_FG_MAX_NP
PRI_BG_MIN_NP     PRI_BG_MAX_NP

Section 2.3.6 describes how to specify a priority between the minimum and maximum values, and it also discusses how priority affects thread scheduling.

Use either of two techniques to set a thread attributes object's scheduling parameters attribute:

  • Set the scheduling parameters attribute in the thread attributes object, which establishes the execution priority of a new thread when it is created. To do so, call the pthread_attr_setschedparam() routine. This allows the creator of a thread to establish the created thread's initial execution priority. (Note that this value is used only if the thread attributes object is set so that the created thread does not inherit its priority from the creating thread. Inheriting priority is the default behavior.)
  • Change the scheduling parameters of an existing thread by calling the pthread_setschedparam() routine and requesting the current policy with the new parameters. This routine allows a thread to change its own scheduling policy or scheduling priority, but has no effect on the corresponding settings in the thread attributes object.
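
For example, a sketch of the second technique using only standard POSIX calls (the one-step priority adjustment is illustrative):

  #include <pthread.h>
  #include <sched.h>

  /* Lower the calling thread's priority by one step while keeping its
     current scheduling policy.  Returns 0 on success or an error number. */
  static int lower_own_priority(void)
  {
      int policy;
      struct sched_param param;
      int status;

      /* Fetch the current policy and parameters... */
      status = pthread_getschedparam(pthread_self(), &policy, &param);
      if (status != 0)
          return status;

      /* ...then request the same policy with the new parameters. */
      if (param.sched_priority > sched_get_priority_min(policy))
          param.sched_priority -= 1;

      return pthread_setschedparam(pthread_self(), policy, &param);
  }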

Note

On Tru64 UNIX Systems:
There are system security issues for threads running with system contention scope. High priority threads may prevent other users from accessing the system. A system contention scope thread cannot have a priority higher than 19 (the default user priority). A system contention scope thread with SCHED_FIFO policy, because it will prevent execution by other threads of equal priority, cannot have a priority higher than 18.

2.3.2.4 Setting the Stacksize Attribute

The stacksize attribute represents the minimum size (in bytes) of the memory required for a thread's stack. To increase or decrease the size of the stack for a new thread, call the pthread_attr_setstacksize() routine and use the specified thread attributes object when creating the thread and stack. You must specify at least PTHREAD_STACK_MIN bytes.

After a thread has been created, your program cannot change the size of the thread's stack. See Section 3.4.1 for more information about sizing a stack.
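
For example, a minimal sketch (the 512 KB figure is arbitrary; PTHREAD_STACK_MIN is provided by <limits.h> on most systems):

  #include <pthread.h>
  #include <limits.h>    /* PTHREAD_STACK_MIN */

  /* Request a stack of at least 512 KB, or the platform minimum if that
     is larger, for threads created with this attributes object. */
  static int set_larger_stack(pthread_attr_t *attr)
  {
      size_t size = 512 * 1024;

      if (size < PTHREAD_STACK_MIN)
          size = PTHREAD_STACK_MIN;

      return pthread_attr_setstacksize(attr, size);
  }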

2.3.2.5 Setting the Stack Address Attribute

The stack address attribute represents the location or address of a region of memory that your program allocates for use as a thread's stack. The value of the stack address attribute represents the origin of the thread's stack (that is, the initial value to be placed in the thread's stack pointer register). However, please be aware that the actual address you specify, relative to the stack memory you have allocated, is inherently nonportable.

To set the address of the stack origin for a new thread, call the pthread_attr_setstackaddr() routine, specifying an initialized thread attributes object as an argument, and use the thread attributes object when creating the new thread. Use the pthread_attr_getstackaddr() routine to obtain the value of the stack address attribute of an initialized thread attributes object.

After a thread has been created, your program cannot change the address of the thread's stack.

Code using this attribute is nonportable because the meaning of "stack address" is undefined and untestable. Most implementations, including the Threads Library, are likely to assume that you have specified the initial stack pointer; however, the standards do not require this. Furthermore, some machines' stacks grow up while others grow down, and many implementations may adjust the stack pointer either before or after writing (or reading) data. In other words, one system may require that you pass the base of the allocation, another base - sizeof(int), another base + size, and yet another base + size + sizeof(long). Also, the system cannot know the size of the stack, which may restrict the ability of debuggers and other tools to help you. Because you are using an inherently nonportable interface in any case, consider using pthread_attr_setstackaddr_np() instead.

You cannot create two concurrent threads that use the same stack address. The amount of storage you provide must be at least PTHREAD_STACK_MIN bytes.

The system uses an unspecified (and varying) amount of the stack to "bootstrap" a newly created thread.
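
Given these caveats, the following is only a rough, nonportable sketch: the function name is a placeholder, and the choice of origin must be adapted to your platform's conventions (or the call replaced with pthread_attr_setstackaddr_np(), as suggested above).

  #include <pthread.h>
  #include <limits.h>    /* PTHREAD_STACK_MIN */
  #include <errno.h>
  #include <stdlib.h>

  /* Nonportable sketch: supply caller-allocated storage for a thread's
     stack.  Whether the attribute must be the low end, the high end, or
     some offset within the allocation is platform dependent, as
     discussed above. */
  static int use_own_stack(pthread_attr_t *attr, size_t size)
  {
      void *base;
      void *origin;

      if (size < PTHREAD_STACK_MIN)
          size = PTHREAD_STACK_MIN;

      base = malloc(size);
      if (base == NULL)
          return ENOMEM;

      origin = base;    /* Placeholder: the correct origin varies by platform. */

      return pthread_attr_setstackaddr(attr, origin);
  }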

2.3.2.6 Setting the Guardsize Attribute

The guardsize attribute represents the minimum size (in bytes) of the guard area for the stack of a thread. A guard area can help a multithreaded program detect overflow of a thread's stack. A guard area is a region of no-access memory allocated at the overflow end of the thread's writable stack. When the thread attempts to access a memory location within the guard area, a memory addressing violation occurs.

A new thread can be created using a thread attributes object with a default guardsize attribute value. This value is platform dependent, but will always be at least one "hardware protection unit" (that is, at least one page; non-zero values are rounded up to the next integral page size). For more information, see this guide's platform-specific appendixes.

The Threads Library allows your program to specify the size of a thread stack guard area for two reasons:

  • For a thread that allocates large data structures on the stack, a large guard area might be required to detect stack overflow.
  • For threads that will never overflow their stacks, overflow protection wastes system resources. An application that creates a large number of such threads can conserve system resources by "turning off" guard areas---that is, by specifying a guardsize attribute of zero for each such thread. In this case, no guard area or overflow warning area is allocated.

To set the guardsize attribute of a thread attributes object, call the pthread_attr_setguardsize() routine. To obtain the value of the guardsize attribute in a thread attributes object, call the pthread_attr_getguardsize() routine.
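
A brief sketch of both approaches follows (the 16-page figure is arbitrary; sysconf(_SC_PAGESIZE) is the usual way to obtain the page size on POSIX systems):

  #include <pthread.h>
  #include <unistd.h>    /* sysconf() */

  /* Enlarge the guard area for threads that place large data structures
     on the stack; 16 pages is an arbitrary illustrative choice. */
  static int set_large_guard(pthread_attr_t *attr)
  {
      size_t pagesize = (size_t) sysconf(_SC_PAGESIZE);
      return pthread_attr_setguardsize(attr, 16 * pagesize);
  }

  /* Turn off overflow protection for threads known never to overflow. */
  static int disable_guard(pthread_attr_t *attr)
  {
      return pthread_attr_setguardsize(attr, 0);
  }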

2.3.2.7 Setting the Contention Scope Attribute

When creating a thread, you can specify the set of threads with which this thread competes for processing resources. This set of threads is called the thread's contention scope.

A thread attributes object includes a contention scope attribute. The contention scope attribute specifies whether the new thread competes for processing resources only with other threads in its own process, called process contention scope, or with all threads on the system, called system contention scope.

Use the pthread_attr_setscope() routine to set an initialized thread attributes object's contention scope attribute. Use the pthread_attr_getscope() routine to obtain the value of the contention scope attribute of an initialized thread attributes object. You must also set the inheritsched attribute to PTHREAD_EXPLICIT_SCHED to prevent a new thread from inheriting its contention scope from the creator.

In the thread attributes object, set the contention scope attribute's value to PTHREAD_SCOPE_PROCESS to specify process contention scope, or set the value to PTHREAD_SCOPE_SYSTEM to specify system contention scope.
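
For example, a sketch using only standard POSIX calls (the fallback to process scope is an application design choice, not a library requirement):

  #include <pthread.h>
  #include <errno.h>

  /* Request system contention scope, falling back to process scope if
     the host operating system does not support it (see Table 2-1). */
  static int request_system_scope(pthread_attr_t *attr)
  {
      int status;

      /* Without this, the new thread would inherit its creator's scope. */
      pthread_attr_setinheritsched(attr, PTHREAD_EXPLICIT_SCHED);

      status = pthread_attr_setscope(attr, PTHREAD_SCOPE_SYSTEM);
      if (status == ENOTSUP)
          status = pthread_attr_setscope(attr, PTHREAD_SCOPE_PROCESS);

      return status;
  }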

The Threads Library selects at most one thread to execute on each processor at any point in time. The Threads Library resolves the contention based on each thread's scheduling attributes (for example, priority) and scheduling policy (for example, round-robin).

A thread created using a thread attributes object whose contention scope attribute is set to PTHREAD_SCOPE_PROCESS contends for processing resources with other threads within its own process that also were created with PTHREAD_SCOPE_PROCESS . It is unspecified how such threads are scheduled relative to threads in other processes or threads in the same process that were created with PTHREAD_SCOPE_SYSTEM contention scope.

A thread created using a thread attributes object whose contention scope attribute is set to PTHREAD_SCOPE_SYSTEM contends for processing resources with other threads in any process that also were created with PTHREAD_SCOPE_SYSTEM .

Whether process contention scope and system contention scope are available for your program's threads depends on the host operating system. Attempting to set the contention scope attribute to a value not supported on your system will result in a return value of [ENOTSUP]. The following table summarizes support for thread contention scope by operating system:

Table 2-1 Support for Thread Contention Scope

Operating System    Available Thread Contention Scopes    Default Thread Contention Scope
Tru64 UNIX          Process, System                       Process
OpenVMS             Process                               Process

Note

On Tru64 UNIX systems:

When a thread creates a system contention scope thread, the creation can fail with an [EPERM] error condition. This is because a system contention scope thread can be created with a priority above the "default" priority only if the process is running with root privileges.

2.3.3 Terminating a Thread

Terminating a thread means causing a thread to end its execution. This can occur for any of the following reasons:

  • The thread returns from its start routine (this is the usual case). The value returned by the routine indicates the thread's exit status to a thread that joins with this thread.
  • The thread calls the pthread_exit() routine. This routine accepts a status value in its value_ptr argument. The value returned by the routine indicates the thread's exit status to a thread that joins with this thread.
  • The thread is canceled, by being specified in a call to the pthread_cancel() routine. This routine requests the thread's termination if the thread permits cancelation. See Section 2.3.7 for more information on canceling threads and on controlling whether or not cancelation is permitted.

When a thread terminates, the Threads Library performs these actions:

  1. It writes a return value into the terminated thread's thread object:
    • If the thread has been canceled, the value PTHREAD_CANCELED is written into the thread's thread object.
    • If the thread terminated by returning from its start routine, the return value is copied from the start routine into the thread's thread object. Alternatively, if the thread explicitly called pthread_exit() , the value received in the value_ptr argument (from pthread_exit() ) is stored in the thread's thread object.

    Another thread can obtain this return value by joining with the terminated thread (using pthread_join() ). See Section 2.3.5 for a description of joining with a thread.

    Note

    If the thread terminated normally by returning from its start routine but the start routine does not provide a return value, the results obtained by joining with that thread are unpredictable.
  2. If the termination results from either a cancelation or a call to pthread_exit(), the Threads Library calls, in turn, each cleanup handler that this thread declared (using pthread_cleanup_push()) and had not yet removed (using pthread_cleanup_pop()). (It also transfers control to any appropriate CATCH, CATCH_ALL, or FINALLY blocks, as described in Chapter 5. You can also use Compaq C's structured exception handling (SEH) extensions.)
    The Threads Library calls the terminated thread's most recently pushed cleanup handler first. See Section 2.3.3.1 for more information about cleanup handlers; a brief sketch also follows this list.
    For C++ programmers: At normal exit from a thread, your program calls the appropriate destructor functions. You can also catch the exit or cancel exception with a catch(...) clause.
    When a thread exits due to a call to pthread_exit(), the Threads Library raises the pthread_exit_e exception. When a thread exits due to cancelation, the Threads Library raises the pthread_cancel_e exception.
    Your program can use the exception package to operate on the generated exception. (Note that using CATCH handlers in place of pthread_cleanup_push() is not portable.) Chapter 5 describes the exception package. The name of the exception as seen by the native system exception handling facility, or by C++, varies by platform.
  3. For each of the terminated thread's thread-specific data keys that has a non-NULL value and a non-NULL destructor function:
    • The thread's value for the corresponding key is set to NULL.
    • The thread-specific data destructor function is called.

    This step is repeated until all thread-specific data values in the thread are NULL, or for up to a number of iterations equal to PTHREAD_DESTRUCTOR_ITERATIONS (4). This destroys all thread-specific data associated with the terminated thread. See Section 2.6 for more information about thread-specific data. Note that if after 4 iterations through the thread's thread-specific data values, there are still non-NULL values, they will be ignored. This may result in an application memory leak, and should be avoided.
  4. The thread (if there is one) that is currently waiting to join with the terminated thread is awakened. That is, the thread that is waiting in a call to pthread_join() is awakened.
  5. If the thread is already detached, or if a thread was waiting in a call to pthread_join(), the terminated thread's storage is destroyed. Otherwise, the thread continues to exist until it is detached or joined with. Section 2.3.4 describes detaching and destroying a thread.
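
As mentioned in step 2, the following sketch shows a typical cleanup handler; the lock-protected region is hypothetical, and only standard pthread_cleanup_push()/pthread_cleanup_pop() usage is assumed.

  #include <pthread.h>

  static pthread_mutex_t region_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Cleanup handler: release the lock if the thread is canceled or
     calls pthread_exit() while holding it. */
  static void unlock_region(void *arg)
  {
      pthread_mutex_unlock((pthread_mutex_t *) arg);
  }

  static void update_region(void)
  {
      pthread_mutex_lock(&region_lock);
      pthread_cleanup_push(unlock_region, &region_lock);

      /* ... code that may be canceled or may call pthread_exit() ... */

      /* A nonzero argument runs the handler on the normal path as well. */
      pthread_cleanup_pop(1);
  }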

After a thread terminates, it continues to exist as long as it is not detached. This means that storage, including stack, may remain allocated. This allows another thread to join with the terminated thread (see Section 2.3.5).

When a terminated thread is no longer needed, your program should detach that thread (see Section 2.3.4).
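
The following sketch ties these pieces together using standard POSIX calls; the computation and the status values are arbitrary.

  #include <pthread.h>
  #include <stdio.h>

  static void *compute(void *arg)
  {
      long input = (long) arg;

      if (input < 0)
          pthread_exit((void *) -1L);    /* Explicit exit with a status. */

      return (void *) (input * 2);       /* Normal return; same effect. */
  }

  int main(void)
  {
      pthread_t thread;
      void *result;

      if (pthread_create(&thread, NULL, compute, (void *) 21L) != 0)
          return 1;

      /* Join with the terminated thread to collect its exit status,
         which also allows its storage to be reclaimed. */
      pthread_join(thread, &result);
      printf("thread returned %ld\n", (long) result);
      return 0;
  }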

Note

For Tru64 UNIX systems:

When the initial thread in a multithreaded process returns from the main routine, the entire process terminates, just as it does when a thread calls exit() .

For OpenVMS systems:

When the initial thread in a multithreaded image returns from the main routine, the entire image terminates, just as it does when a thread calls SYS$EXIT.

