
What is a Java thread?

For some tasks, it is convenient to organize the parallel execution of several parts of a program. Each such separately executing subtask is called a thread, and the system provides a mechanism that lets the threads share the processor.

The threading model in the Java language is a software mechanism that makes it easy to perform multiple operations simultaneously in the same program. The processor periodically allocates a slice of time to each thread. To each thread it looks as if it has the processor in exclusive mode, but in fact the processor's time is divided among all the threads that exist in the program. Real speedup can be obtained on a multiprocessor computer, yet when using threads you do not need to account for these subtleties: the code does not depend on how many processors it will run on. Threads thus provide a mechanism for scaling performance; if the program runs too slowly, you can achieve acceleration by moving to a multiprocessor system without rewriting the program.
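As an illustration of threads sharing the processor, two tasks can be started with the standard Thread class and the scheduler divides processor time between them. A minimal sketch (the class and thread names are illustrative):

```java
// Two threads run "simultaneously"; the scheduler interleaves their output.
public class TwoThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + ": step " + i);
            }
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        t1.join();  // wait for both threads to finish
        t2.join();
    }
}
```

The interleaving of the two threads' output is decided by the scheduler and may differ from run to run, which is exactly the point: the code does not depend on how the processor time is divided.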

When programming with parallel threads, you need to consider the following points:

  • The program must be divisible into several independent tasks.
  • You need to anticipate in advance the problems that can arise while the tasks are running.
  • Tasks that work with shared resources can interfere with each other; the primary means of preventing such conflicts is locking.
  • Carelessly designed multitasking systems can deadlock.
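The locking point above can be sketched with Java's synchronized keyword: without it, two threads incrementing a shared counter could interleave their read-modify-write steps and lose updates. A minimal sketch (the Counter class is illustrative):

```java
// Locking with synchronized prevents two threads from interfering
// while they update the same shared counter.
public class Counter {
    private int value = 0;

    public synchronized void increment() {  // lock guards the read-modify-write
        value++;
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(c.get()); // always 20000 with locking in place
    }
}
```

Removing the synchronized keyword makes the result nondeterministic, because some increments from one thread can overwrite increments from the other.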

The main reasons for using parallel execution are:

  • managing several subtasks whose simultaneous execution makes better use of the computing system's resources (including the ability to distribute those tasks across multiple processors);
  • improved code organization;
  • improved responsiveness for the user.


The Worker Thread pattern (also known as Background Thread or Thread Pool) is designed to improve throughput and minimize average latency when tasks are executed in parallel.

Implementing multithreading support correctly requires certain skills. One way to maximize the efficiency of a multithreaded application is to take advantage of the fact that not all of its tasks have the same priority. For some tasks, response time comes first; others simply have to be done eventually, and exactly when that happens is not so important.

A developer can separate such tasks from the rest of the application and use the Worker Thread pattern. A worker thread created according to this pattern picks tasks from a queue and executes them on a separate thread. When a task finishes, the worker takes the next task from the queue and repeats the cycle.

The Worker Thread pattern greatly simplifies building a multithreaded application: in cases where it does not matter how soon a task completes, the developer simply queues it, and the worker thread does the rest. The application code is also simpler because all of the objects that work with threads are hidden inside the worker and the queue.

The Worker Thread pattern is recommended when:

  • You want to increase the throughput of your application.
  • You want different pieces of code to run concurrently.

To run a task on its own thread, you can create a new Thread object and start it. The thread represented by this object does the work assigned to it and then terminates automatically. However, instantiating a thread is expensive in terms of performance and time, and the thread serves only a single task. A more efficient approach is to create a "long-lived" object, a worker thread that performs one task after another.
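The one-shot, thread-per-task approach described above might look like this (a minimal sketch; names are illustrative):

```java
// One thread per task: the thread runs its single task and terminates.
// Simple, but the thread-creation cost is paid for every task.
public class OneShot {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> System.out.println("task executed"));
        t.start();   // the thread does its one task ...
        t.join();    // ... and terminates automatically
        System.out.println("alive after run? " + t.isAlive()); // false
    }
}
```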

This is the essence of the Worker Thread pattern. A worker implemented according to this pattern performs many unrelated tasks one after another. You do not need to create a new thread every time you start a new task; you simply hand the task to an existing worker, which takes care of the rest.
A situation may arise where the worker is still busy with the current task while the application has already prepared the next one. In that situation, there are several possible solutions:

  • The application waits until the worker finishes its current task. This solution is obvious, but it negates practically all of the benefits of multithreading.
  • The application creates a new worker instance whenever the current worker is unavailable. This is essentially a return to the thread-per-task approach, since situations can arise in which a separate thread is created for every new task.

The best solution to the problem of a temporarily unavailable worker is to keep tasks in a queue until the worker becomes available. The application places each new task in the queue; the worker, when it finishes a task, checks whether there are new tasks in the queue and, if so, starts the next one. This does not make individual tasks complete any sooner, but it does free the application from having to wait for the worker to become available.

If there are no tasks to perform, the worker periodically checks the queue (or blocks until a task arrives). Queuing a task is a far cheaper operation than creating a new thread.
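The queue-based worker described above can be sketched with a BlockingQueue, whose take() method lets the worker block until a task arrives instead of polling the queue periodically. A minimal sketch (class and field names such as WorkerThread and POISON are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A long-lived worker thread takes tasks from a shared queue one by one.
// The client only enqueues tasks; the worker executes them in order.
public class WorkerThread extends Thread {
    private static final Runnable POISON = () -> {};  // shutdown marker
    private final BlockingQueue<Runnable> queue;

    public WorkerThread(BlockingQueue<Runnable> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (true) {
                Runnable task = queue.take();   // blocks while the queue is empty
                if (task == POISON) return;     // stop request
                task.run();                     // execute, then loop for the next
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public void shutdown() throws InterruptedException {
        queue.put(POISON);  // processed after all previously queued tasks
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        WorkerThread worker = new WorkerThread(queue);
        worker.start();
        for (int i = 1; i <= 3; i++) {
            int n = i;
            queue.put(() -> System.out.println("task " + n));  // client just enqueues
        }
        worker.shutdown();
        worker.join();
    }
}
```

Using take() rather than a periodic check means the worker consumes no processor time while the queue is empty.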
The Worker Thread pattern affects performance in several ways.

  • The client does not need to create multiple thread objects to run different jobs. It only needs to queue the task, which requires significantly less overhead than creating a thread object.
  • Even a thread that exists but is not currently running degrades performance, because the scheduler allocates some machine time to every thread that is in the ready-to-run state. Creating and running a thread for each task means the scheduler must allocate resources to each of those threads individually; in total, the time lost to such scheduling is much greater than the cost of a single constantly running worker. In other words, the more threads, the higher the scheduling overhead. A job that sits in a queue, and accordingly is not running, consumes no processor time at all.
  • When tasks are interdependent and the queue is processed sequentially, the system can deadlock. Several approaches can solve this problem:
    • Create as many workers as the number of tasks you want to run at the same time, effectively building an extensible thread pool in your application.
    • Queue only tasks that are independent of other tasks. For dependent tasks, the client should not use the shared queue, but instead create its own thread or start a separate queue with its own worker.
    • Create an intelligent queue that can determine which tasks work together and decide when to pass a task to the worker. This approach should be used only when no other options remain, because such an intelligent queue must be tightly coupled to the application, and maintaining its code can be very time-consuming.
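For the first option, a pool of several workers sharing one task queue, the standard library's ExecutorService already provides an implementation. A minimal sketch using a fixed-size pool (the class name and task are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A fixed pool of two worker threads drains a shared task queue.
public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2); // two workers
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 5; i++) {
            pool.submit(done::incrementAndGet);  // the client only queues tasks
        }
        pool.shutdown();                          // no new tasks accepted
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed: " + done.get());
    }
}
```

The pool size bounds how many tasks run at the same time, while the shared queue holds the rest, which is exactly the Worker Thread arrangement described in this article.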