org.ematgine.utils.concurrent
Class PooledExecutor

java.lang.Object
  org.ematgine.utils.concurrent.ThreadFactoryUser
    org.ematgine.utils.concurrent.PooledExecutor

All Implemented Interfaces:
  Executor

public class PooledExecutor
extends ThreadFactoryUser
implements Executor
A tunable, extensible thread pool class. The main supported public method is execute(Runnable command), which can be called instead of directly creating threads to execute commands.
Thread pools can be useful for several, usually intertwined reasons:
- To bound resource use. A limit can be placed on the maximum number of simultaneously executing threads.
- To manage concurrency levels. A targeted number of threads can be allowed to execute simultaneously.
- To manage a set of threads performing related tasks.
- To minimize overhead, by reusing previously constructed Thread objects rather than creating new ones. (Note however that pools are hardly ever cure-alls for performance problems associated with thread construction, especially on JVMs that themselves internally pool or recycle threads.)
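Before the tuning options below, the simplest possible use is just construction followed by execute calls; a minimal sketch (the maximum pool size of 8 is only an illustration, and execute is declared to throw InterruptedException):

    PooledExecutor pool = new PooledExecutor(8); // illustrative maximum of 8 threads
    try {
        pool.execute(new Runnable() {
            public void run() {
                System.out.println("running in " + Thread.currentThread().getName());
            }
        });
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt status
    }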
- Queueing
- By default, this pool uses queueless synchronous channels to hand off work to threads. This is a safe, conservative policy that avoids lockups when handling sets of requests that might have internal dependencies. (In these cases, queuing one task could lock up another that would be able to continue if the queued task were to run.) If you are sure that this cannot happen, then you can instead supply a queue of some sort (for example, a BoundedBuffer or LinkedQueue) in the constructor. This will cause new commands to be queued in cases where all MaximumPoolSize threads are busy. Queues are sometimes appropriate when each task is completely independent of others, so tasks cannot affect each other's execution; for example, in an HTTP server.
When given a choice, this pool always prefers adding a new thread rather than queueing if there are currently fewer than the current getMinimumPoolSize threads running, but otherwise always prefers queuing a request rather than adding a new thread. Thus, if you use an unbounded buffer, you will never have more than getMinimumPoolSize threads running. (Since the default minimumPoolSize is one, you will probably want to explicitly setMinimumPoolSize.)
While queuing can be useful in smoothing out transient bursts of requests, especially in socket-based services, it is not very well behaved when commands continue to arrive on average faster than they can be processed. Using bounds for both the queue and the pool size, along with the run-when-blocked policy, is often a reasonable response to such possibilities.
Queue sizes and maximum pool sizes can often be traded off for each other. Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. Especially if tasks frequently block (for example if they are I/O bound), a JVM and underlying OS may be able to schedule time for more threads than you otherwise allow. Use of small queues or queueless handoffs generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
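A minimal sketch of that bounded-queue, bounded-pool, run-when-blocked combination (the sizes here are illustrative, not recommendations):

    pool = new PooledExecutor(new BoundedBuffer(20), 10); // queue up to 20 tasks, at most 10 threads
    pool.runWhenBlocked(); // when both are full, the submitting thread runs the task itself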
- Maximum Pool size
- The maximum number of threads to use, when needed. The pool does not by default preallocate threads. Instead, a thread is created, if necessary and if there are fewer than the maximum, only when an execute request arrives. The default value is (for all practical purposes) infinite (Integer.MAX_VALUE), so it should be set in the constructor or the set method unless you are just using the pool to minimize construction overhead. Because task handoffs to idle worker threads require synchronization that in turn relies on JVM scheduling policies to ensure progress, it is possible that a new thread will be created even though an existing worker thread has just become idle but has not progressed to the point at which it can accept a new task. This phenomenon tends to occur on some JVMs when bursts of short tasks are executed.
- Minimum Pool size
- The minimum number of threads to use, when needed (default 1).
When a new request is received, and fewer than the
minimum number of threads are running, a new thread is
always created to handle the request even if other
worker threads are idly waiting for work. Otherwise,
a new thread is created only if there are fewer than the
maximum and the request cannot immediately be queued.
- Preallocation
- You can override lazy thread construction
policies via method createThreads, which establishes
a given number of warm threads. Be aware that these preallocated
threads will time out and die (and later be replaced
with others if needed) if not used within the
keep-alive time window. If you use preallocation, you
probably want to increase the keepalive time.
The difference between setMinimumPoolSize and createThreads
is that createThreads immediately establishes threads,
while setting the minimum pool size waits until requests
arrive.
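For example, a sketch that warms up a pool and lengthens the keep-alive window so the warm threads are not reclaimed right away (the values are illustrative):

    pool.setKeepAliveTime(1000 * 60 * 10); // keep idle workers around for ten minutes
    pool.createThreads(4);                 // pre-start four warm threads immediately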
- Keep-alive time
- If the pool maintained references to a fixed set of
threads in the pool,
then it would impede garbage collection of otherwise
idle threads. This would defeat the resource-management
aspects of pools. One solution would be to use weak references.
However, this would impose costly and difficult
synchronization issues.
Instead, threads are simply allowed to terminate
and thus be GCable if they have been idle for the
given keep-alive time. The value of this parameter
represents a trade-off between GCability and construction
time. In most current Java VMs, thread
construction and cleanup overhead
is on the order of milliseconds. The
default keep-alive value is one minute, which means that
the time needed to construct and then GC a thread is expended
at most once per minute.
To establish worker threads permanently, use a negative argument to setKeepAliveTime.
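For instance (values illustrative):

    pool.setKeepAliveTime(30 * 1000); // reclaim workers idle for more than 30 seconds
    pool.setKeepAliveTime(-1);        // or: negative value, keep workers permanently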
- Blocked execution policy
- If the maximum pool size or queue size is bounded, then it is possible for incoming execute requests to block. There are three supported policies for handling this problem, and mechanics (based on the Strategy Object pattern) to allow others in subclasses:
- Run (the default)
- The thread making the execute request runs the task itself. This policy helps guard against lockup.
- Wait
- Wait until a thread becomes available.
- Discard
- Throw away the current request and return.
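Selecting one of the supplied policies is a single call on the pool, as in this sketch:

    // Pick exactly one; the most recent call determines the policy in effect.
    pool.runWhenBlocked();        // default: the submitting thread runs the task itself
    // pool.waitWhenBlocked();    // or: block the submitter until a worker is free
    // pool.discardWhenBlocked(); // or: silently drop the request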
(Again, these cases can never occur if the maximum pool size is unbounded or the queue is unbounded. In these cases you instead face potential resource exhaustion.) The execute method does not throw any checked exceptions in any of these cases since any errors associated with them must normally be dealt with via handlers or callbacks. (Although in some cases, these might be associated with throwing unchecked exceptions.) You may wish to add special implementations even if you choose one of the listed policies. For example, the supplied Discard policy does not inform the caller of the drop. You could add your own version that does so. Since choice of policies is normally a system-wide decision, selecting a policy affects all calls to execute. If for some reason you would instead like to make per-call decisions, you could add variant versions of the execute method (for example, executeIfWouldNotBlock) in subclasses.
- Thread construction parameters
- A settable ThreadFactory establishes each new thread. By default, it merely generates a new instance of class Thread, but can be changed to use a Thread subclass, to set priorities, ThreadLocals, etc.
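A sketch of installing a custom factory through the inherited setThreadFactory method. It assumes the package's ThreadFactory interface declares a single newThread(Runnable) method, as in the classic util.concurrent library; verify the actual interface before relying on this exact signature.

    pool.setThreadFactory(new ThreadFactory() {
        public Thread newThread(Runnable command) {
            Thread t = new Thread(command);
            t.setDaemon(true);                       // example: daemon workers
            t.setPriority(Thread.NORM_PRIORITY - 1); // example: slightly lower priority
            return t;
        }
    });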
- Interruption policy
- Worker threads check for interruption after processing each command, and terminate upon interruption. Fresh threads will replace them if needed. Thus, new tasks will not start out in an interrupted state due to an uncleared interruption in a previous task. Also, unprocessed commands are never dropped upon interruption. It would conceptually suffice simply to clear interruption between tasks, but implementation characteristics of interruption-based methods are uncertain enough to warrant this conservative strategy. It is a good idea to be equally conservative in your code for the tasks running within pools.
Normally, before shutting down a pool via method interruptAll, you should make sure that all clients of the pool are themselves terminated, in order to prevent hanging or lost commands. Additionally, if you are using some form of queuing, you may wish to call method drain() to remove (and return) unprocessed commands from the queue after shutting down the pool and its clients. If you need to be sure these commands are processed, you can then run() each of the commands in the list returned by drain().
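A conservative shutdown sequence, sketched under the assumption that all clients of the pool have already stopped submitting work:

    pool.interruptAll();                       // ask every worker to terminate
    java.util.List pending = pool.drain();     // collect commands that never ran
    for (java.util.Iterator it = pending.iterator(); it.hasNext(); )
        ((Runnable) it.next()).run();          // optionally run them in the current thread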
Usage examples.
Probably the most common use of pools is in statics or singletons accessible from a number of classes in a package; for example:
    class MyPool {
        // initialize to use a maximum of 8 threads.
        static PooledExecutor pool = new PooledExecutor(8);
    }

Here are some sample variants in initialization:
1. Using a bounded buffer of 10 tasks, at least 4 threads (started only when needed due to incoming requests), but allowing up to 100 threads if the buffer gets full.

    pool = new PooledExecutor(new BoundedBuffer(10), 100);
    pool.setMinimumPoolSize(4);
2. Same as (1), except pre-start 9 threads, allowing them to die if they are not used for five minutes.

    pool = new PooledExecutor(new BoundedBuffer(10), 100);
    pool.setMinimumPoolSize(4);
    pool.setKeepAliveTime(1000 * 60 * 5);
    pool.createThreads(9);
3. Same as (2), except clients block if both the buffer is full and all 100 threads are busy:

    pool = new PooledExecutor(new BoundedBuffer(10), 100);
    pool.setMinimumPoolSize(4);
    pool.setKeepAliveTime(1000 * 60 * 5);
    pool.waitWhenBlocked();
    pool.createThreads(9);
4. An unbounded queue serviced by exactly 5 threads:

    pool = new PooledExecutor(new LinkedQueue());
    pool.setKeepAliveTime(-1); // live forever
    pool.createThreads(5);
Usage notes.
Pools do not mesh well with using thread-specific storage via java.lang.ThreadLocal. ThreadLocal relies on the identity of a thread executing a particular task. Pools use the same thread to perform different tasks.
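A sketch of the hazard: because a single worker thread may run unrelated tasks one after another, a ThreadLocal value set by one task can leak into a later one (the tasks here are purely illustrative).

    final ThreadLocal context = new ThreadLocal();
    try {
        pool.execute(new Runnable() {
            public void run() { context.set("state from task A"); } // never cleared
        });
        pool.execute(new Runnable() {
            public void run() { System.out.println(context.get()); } // may observe task A's state
        });
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }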
If you need a policy not handled by the parameters in this class, consider writing a subclass.
Version note: Previous versions of this class relied on ThreadGroups for aggregate control. This has been removed, and the method interruptAll added, to avoid differences in behavior across JVMs.
[ Introduction to this package. ]
Nested Class Summary

protected class  PooledExecutor.BlockedExecutionHandler
    Class for actions to take when execute() blocks.

protected class  PooledExecutor.DiscardWhenBlocked
    Class defining Discard action

protected class  PooledExecutor.RunWhenBlocked
    Class defining Run action

protected class  PooledExecutor.WaitWhenBlocked
    Class defining Wait action

protected class  PooledExecutor.Worker
    Class defining the basic run loop for pooled threads.

Nested classes inherited from class org.ematgine.utils.concurrent.ThreadFactoryUser:
    ThreadFactoryUser.DefaultThreadFactory
Field Summary

protected PooledExecutor.BlockedExecutionHandler  blockedExecutionHandler_
    The current handler

static long  DEFAULT_KEEPALIVETIME
    The maximum time to keep worker threads alive waiting for new tasks; used if not otherwise specified.

static int  DEFAULT_MAXIMUMPOOLSIZE
    The maximum pool size; used if not otherwise specified.

static int  DEFAULT_MINIMUMPOOLSIZE
    The minimum pool size; used if not otherwise specified.

protected Channel  handOff_
    The channel is used to hand off the command to a thread in the pool

protected long  keepAliveTime_

protected int  maximumPoolSize_

protected int  minimumPoolSize_

protected java.lang.Object  poolLock_
    Lock used for protecting poolSize_ and threads_ map

protected int  poolSize_
    Current pool size.

protected java.util.Map  threads_
    The set of active threads, declared as a map from workers to their threads.

Fields inherited from class org.ematgine.utils.concurrent.ThreadFactoryUser:
    threadFactory_
Constructor Summary

PooledExecutor()
    Create a new pool with all default settings

PooledExecutor(Channel channel)
    Create a new pool that uses the supplied Channel for queuing, and with all default parameter settings.

PooledExecutor(Channel channel, int maxPoolSize)
    Create a new pool that uses the supplied Channel for queuing, and with all default parameter settings except for maximum pool size.

PooledExecutor(int maxPoolSize)
    Create a new pool with all default settings except for maximum pool size.
Method Summary

protected void  addThread(java.lang.Runnable command)
    Create and start a thread to handle a new command.

int  createThreads(int numberOfThreads)
    Create and start up to numberOfThreads threads in the pool.

void  discardWhenBlocked()
    Set the policy for blocked execution to be to return without executing the request.

java.util.List  drain()
    Remove all unprocessed tasks from the pool queue, and return them in a java.util.List.

void  execute(java.lang.Runnable command)
    Arrange for the given command to be executed by a thread in this pool.

protected PooledExecutor.BlockedExecutionHandler  getBlockedExecutionHandler()
    Get the handler for blocked execution.

long  getKeepAliveTime()
    Return the number of milliseconds to keep threads alive waiting for new commands.

int  getMaximumPoolSize()
    Return the maximum number of threads to simultaneously execute. New requests will be handled according to the current blocking policy once this limit is exceeded.

int  getMinimumPoolSize()
    Return the minimum number of threads to simultaneously execute.

int  getPoolSize()
    Return the current number of active threads in the pool.

protected java.lang.Runnable  getTask()
    Get a task from the handoff queue.

void  interruptAll()
    Interrupt all threads in the pool, causing them all to terminate.

void  runWhenBlocked()
    Set the policy for blocked execution to be that the current thread executes the command if there are no available threads in the pool.

void  setKeepAliveTime(long msecs)
    Set the number of milliseconds to keep threads alive waiting for new commands.

void  setMaximumPoolSize(int newMaximum)
    Set the maximum number of threads to use.

void  setMinimumPoolSize(int newMinimum)
    Set the minimum number of threads to use.

void  waitWhenBlocked()
    Set the policy for blocked execution to be to wait until a thread is available.

protected void  workerDone(PooledExecutor.Worker w)
    Called upon termination of a worker thread.

Methods inherited from class org.ematgine.utils.concurrent.ThreadFactoryUser:
    getThreadFactory, setThreadFactory

Methods inherited from class java.lang.Object:
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail
DEFAULT_MAXIMUMPOOLSIZE
public static final int DEFAULT_MAXIMUMPOOLSIZE
- The maximum pool size; used if not otherwise specified.
Default value is essentially infinite (Integer.MAX_VALUE)
- See Also:
- Constant Field Values
DEFAULT_MINIMUMPOOLSIZE
public static final int DEFAULT_MINIMUMPOOLSIZE
- The minimum pool size; used if not otherwise specified.
Default value is 1.
- See Also:
- Constant Field Values
DEFAULT_KEEPALIVETIME
public static final long DEFAULT_KEEPALIVETIME
- The maximum time to keep worker threads alive waiting for new
tasks; used if not otherwise specified. Default
value is one minute (60000 milliseconds).
- See Also:
- Constant Field Values
maximumPoolSize_
protected volatile int maximumPoolSize_
minimumPoolSize_
protected volatile int minimumPoolSize_
keepAliveTime_
protected long keepAliveTime_
handOff_
protected final Channel handOff_
- The channel is used to hand off the command
to a thread in the pool
poolLock_
protected java.lang.Object poolLock_
- Lock used for protecting poolSize_ and threads_ map
poolSize_
protected volatile int poolSize_
- Current pool size. It relies on poolLock_ for all locking, but is also volatile to allow simpler checking inside the worker thread run loop.
threads_
protected final java.util.Map threads_
- The set of active threads,
declared as a map from workers to their threads.
This is needed by the interruptAll method.
It may also be useful in subclasses that need to perform
other thread management chores.
All operations on the Map should be done holding
synchronization on poolLock.
blockedExecutionHandler_
protected PooledExecutor.BlockedExecutionHandler blockedExecutionHandler_
- The current handler
Constructor Detail
PooledExecutor
public PooledExecutor()
- Create a new pool with all default settings
PooledExecutor
public PooledExecutor(int maxPoolSize)
- Create a new pool with all default settings except
for maximum pool size.
PooledExecutor
public PooledExecutor(Channel channel)
- Create a new pool that uses the supplied Channel for queuing,
and with all default parameter settings.
PooledExecutor
public PooledExecutor(Channel channel, int maxPoolSize)
- Create a new pool that uses the supplied Channel for queuing,
and with all default parameter settings except
for maximum pool size.
Method Detail
getMaximumPoolSize
public int getMaximumPoolSize()
- Return the maximum number of threads to simultaneously execute. New requests will be handled according to the current blocking policy once this limit is exceeded.
setMaximumPoolSize
public void setMaximumPoolSize(int newMaximum)
- Set the maximum number of threads to use. Decreasing
the pool size will not immediately kill existing threads,
but they may later die when idle.
getMinimumPoolSize
public int getMinimumPoolSize()
- Return the minimum number of threads to simultaneously execute. (Default value is 1). If fewer than the minimum number are running upon reception of a new request, a new thread is started to handle this request.
setMinimumPoolSize
public void setMinimumPoolSize(int newMinimum)
- Set the minimum number of threads to use.
getPoolSize
public int getPoolSize()
- Return the current number of active threads in the pool. This number is just a snapshot, and may change immediately upon returning.
addThread
protected void addThread(java.lang.Runnable command)
- Create and start a thread to handle a new command.
Call only when holding poolLock.
createThreads
public int createThreads(int numberOfThreads)
- Create and start up to numberOfThreads threads in the pool.
Return the number created. This may be less than the
number requested if creating more would exceed maximum
pool size bound.
interruptAll
public void interruptAll()
- Interrupt all threads in the pool, causing them all
to terminate. Assuming that executed tasks do not
disable (clear) interruptions, each thread will terminate after
processing its current task. Threads will terminate
sooner if the executed tasks themselves respond to
interrupts.
drain
public java.util.List drain()
- Remove all unprocessed tasks from pool queue, and
return them in a java.util.List. It should normally be used only
when there are not any active clients of the pool (otherwise
you face the possibility that the method will loop
pulling out tasks as clients are putting them in.)
This method can be useful after
shutting down a pool (via interruptAll) to determine
whether there are any pending tasks that were not processed.
You can then, for example, execute all unprocessed commands via code along the lines of:

    List tasks = pool.drain();
    for (Iterator it = tasks.iterator(); it.hasNext(); )
        ((Runnable) (it.next())).run();
getKeepAliveTime
public long getKeepAliveTime()
- Return the number of milliseconds to keep threads
alive waiting for new commands. A negative value
means to wait forever. A zero value means not to wait
at all.
setKeepAliveTime
public void setKeepAliveTime(long msecs)
- Set the number of milliseconds to keep threads
alive waiting for new commands. A negative value
means to wait forever. A zero value means not to wait
at all.
workerDone
protected void workerDone(PooledExecutor.Worker w)
- Called upon termination of a worker thread.
getTask
protected java.lang.Runnable getTask() throws java.lang.InterruptedException
- Get a task from the handoff queue.
getBlockedExecutionHandler
protected PooledExecutor.BlockedExecutionHandler getBlockedExecutionHandler()
- Get the handler for blocked execution
runWhenBlocked
public void runWhenBlocked()
- Set the policy for blocked execution to be that
the current thread executes the command if
there are no available threads in the pool.
waitWhenBlocked
public void waitWhenBlocked()
- Set the policy for blocked execution to be to
wait until a thread is available.
discardWhenBlocked
public void discardWhenBlocked()
- Set the policy for blocked execution to be to
return without executing the request
execute
public void execute(java.lang.Runnable command) throws java.lang.InterruptedException
- Arrange for the given command to be executed by a thread in this pool.
The method normally returns when the command has been handed off
for (possibly later) execution.