Understanding Python Multithreading Structure With Example

Python Multithreading

Multithreading is a programming technique in Python that enables a single program to run several threads, or sequences of instructions, concurrently. Conceptually, threads can be thought of as "mini-processes" that operate concurrently inside the main process. Every thread has a beginning, a course of execution, and an end. Each also has an instruction pointer that tracks its position in the sequence of execution and records its present state.

Multithreading is the capacity of a process to run several threads concurrently. A thread, which has its own separate sequence of instructions, is regarded as the smallest unit of execution in computer science. As components of a process, threads share the program's resources, such as memory, and run in the same environment. Since processes normally operate in distinct memory spaces, this shared memory space is a crucial distinction between threads and processes.

Multithreading can boost program performance, especially on systems with several processors or cores, by running multiple threads at once. Even on a single CPU, Python multithreading can keep a program responsive. This is useful in systems like graphical user interfaces (GUIs): a long-running task can proceed on a background thread without preventing the main GUI thread from updating or responding to user input.

Decoupling tasks that are not sequentially dependent is the main goal of Python multithreading. This is particularly helpful when a program has to wait for something far slower than the CPU, such as data from a file or a network service. By executing these potentially blocking activities in separate threads, the main thread and other threads can continue to operate. This prevents "busy waiting" and may even reduce the program's overall running time, which is also referred to as enhancing throughput. Common uses for Python multithreading include:

  • Running long tasks in the background.
  • Putting non-blocking operations into practice.
  • Doing computations and I/O concurrently.
  • Managing concurrent execution in blocking programs (e.g., waiting for database or input/output results).
  • Programming the server to manage several client requests at once.

Python 3's legacy _thread module (called thread in Python 2) provides a lower-level multithreading interface than the threading module, which is the recommended choice, especially for beginners.

Multithreading in Python is shaped by the Global Interpreter Lock (GIL). The GIL is a mutex that protects access to Python objects, preventing several native threads from executing Python bytecode simultaneously. This means that only one thread can run Python code at a time, even on a system with several CPUs. For CPU-intensive operations where you want real parallelism across multiple cores, ordinary Python threads are typically ineffective because of the GIL.

On the other hand, the GIL is released around potentially blocking operations, such as input/output (network communication, reading or writing files). While one thread waits for an I/O operation to finish, other threads can execute Python code. Python threads are therefore far more appropriate for I/O-bound operations. For computationally demanding work that needs to use many CPUs, the multiprocessing module is usually a better option because it employs separate processes, each of which avoids the GIL of the others.

Creating and Running Threads

To construct a thread with the threading module, create an instance of the Thread class and pass the function or other callable object you wish to run to the target parameter. The args and kwargs parameters can be used to pass positional and keyword arguments to the target function. Calling the thread's start() method begins its execution.
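A minimal sketch of this pattern, using a hypothetical worker function and collecting results in a shared list rather than printing, to avoid interleaved output:

```python
import threading
import time

finished = []  # Shared list recording the order in which threads complete

def worker(name, delay):
    # Simulate work or waiting, then record completion
    time.sleep(delay)
    finished.append(name)

# args passes positional arguments to the target callable
t1 = threading.Thread(target=worker, args=('worker-1', 0.3))
t2 = threading.Thread(target=worker, args=('worker-2', 0.05))
t1.start()
t2.start()
t1.join()  # Wait for each thread to finish before continuing
t2.join()
print(finished)  # The shorter task finishes first
```

join() is used here to make the main thread wait; the examples below use a lower-level approach first and return to this point.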

This is a straightforward example that demonstrates the idea. time.sleep() is used to mimic waiting or work. To illustrate the difference in overall time, this example first demonstrates sequential execution and then multithreaded execution.

Single-Threaded

In the single-threaded example, loop0 must finish before loop1 can start. If loop0 sleeps for 4 seconds and loop1 sleeps for 2 seconds, the entire elapsed time, including a small overhead, will be roughly 6 seconds.

Example:

import time
def loop0():
    print('start loop 0 at:', time.ctime(time.time())) 
    time.sleep(4) # Simulate work/waiting 
    print('loop 0 done at:', time.ctime(time.time())) 
def loop1():
    print('start loop 1 at:', time.ctime(time.time())) 
    time.sleep(2) # Simulate work/waiting 
    print('loop 1 done at:', time.ctime(time.time())) 
def main_single_threaded():
    print('starting single-threaded…') 
    loop0() # Executes fully
    loop1() # Executes only after loop0 finishes
    print('all DONE at:', time.ctime(time.time())) 
if __name__ == '__main__':
    main_single_threaded()

Output:

The start and finish times of loop0 and loop1 would be displayed in a single-threaded execution, with the sum of the two sleeps representing the overall duration.

starting single-threaded…
start loop 0 at: Wed May 28 05:01:13 2025
loop 0 done at: Wed May 28 05:01:17 2025
start loop 1 at: Wed May 28 05:01:17 2025
loop 1 done at: Wed May 28 05:01:19 2025
all DONE at: Wed May 28 05:01:19 2025

Multithreaded Example

The multithreaded example uses the low-level _thread.start_new_thread() function to start loop0 and loop1 so that they run concurrently. (With the recommended threading module, the equivalent would be Thread(target=...).start().) Even though loop0 is started first, loop1 is likely to end first because it only sleeps for two seconds while loop0 sleeps for four. In this case, the total elapsed time is determined by the longest-running thread, roughly 4 seconds, rather than by adding up the times required by each loop in turn. time.sleep(6) is included in the main multithreaded function to prevent the main thread from exiting before the spawned threads finish, which could cause them to terminate prematurely.

Example:

import _thread as thread 
import time
def loop0():
    print('start loop 0 at:', time.ctime(time.time())) 
    time.sleep(4) # Simulate work/waiting 
    print('loop 0 done at:', time.ctime(time.time())) 
def loop1():
    print('start loop 1 at:', time.ctime(time.time()))
    time.sleep(2) # Simulate work/waiting 
    print('loop 1 done at:', time.ctime(time.time())) 
def main_multithreaded():
    print('starting threads…') 
    thread.start_new_thread(loop0, ()) # Start loop0 in a new thread 
    thread.start_new_thread(loop1, ()) # Start loop1 in another new thread 
    # The main thread needs to wait for the other threads to finish
    # before it exits, otherwise it might kill the other threads 
    time.sleep(6) # Sleep long enough for both threads to complete 
    print('all DONE at:', time.ctime(time.time())) 
if __name__ == '__main__':
    main_multithreaded()

Output:

With the _thread module, the loops begin nearly simultaneously and the shorter loop ends first, cutting down the overall time. Keep in mind that without explicit synchronisation, print statements from multiple threads may be interleaved.

starting threads…
start loop 0 at: Wed May 28 05:10:45 2025
start loop 1 at: Wed May 28 05:10:45 2025
loop 1 done at: Wed May 28 05:10:47 2025
loop 0 done at: Wed May 28 05:10:49 2025
all DONE at: Wed May 28 05:10:51 2025

The tasks (loop0 and loop1) take only the maximum of their durations (4 s), not the sum (6 s). The individual loop completion times relative to when they were started demonstrate the benefit of threads, while the main thread's sleep(6) controls when the "all DONE" message prints.
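Guessing a long-enough sleep in the main thread is fragile. With the threading module, join() waits exactly as long as the threads need. The following is a sketch of the same two-loop example rewritten that way, with the sleeps shortened for quick execution:

```python
import threading
import time

def loop(name, seconds):
    print(f'start {name}')
    time.sleep(seconds)  # Simulate work/waiting
    print(f'{name} done')

t0 = threading.Thread(target=loop, args=('loop 0', 0.4))
t1 = threading.Thread(target=loop, args=('loop 1', 0.2))
start = time.time()
t0.start()
t1.start()
t0.join()  # Block until each thread has finished; no guessed sleep needed
t1.join()
elapsed = time.time() - start
print(f'all done in about {elapsed:.1f}s')  # near the 0.4s maximum, not the 0.6s sum
```

Because join() returns as soon as its thread finishes, the total elapsed time tracks the longest thread rather than an arbitrary safety margin.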

Concurrency Issues and Synchronization

Concurrent programming is "fraught with potential peril" because threads in the same process share memory. If shared resources (such as global variables or shared data structures) are accessed or modified by different threads without proper coordination, race conditions and unpredictable behaviour can result. Synchronisation mechanisms are used to handle these problems. Typical synchronisation tools include:

Locks (Mutexes): Critical code sections that access shared resources are protected with locks, also known as mutexes. A thread acquires a lock before using a shared resource and releases it afterwards. If a thread attempts to acquire a lock that is already held, it waits until the lock is released. This prevents the resource from being accessed by several threads at once. One example is synchronising access to standard output (print statements) to avoid overlapping lines.
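As a small sketch, the lock below protects a shared counter; counter and increment are illustrative names, not part of any library:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # Acquire before touching the shared value; released on exit
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # Always 40000 with the lock; without it the result can be lower
```

Using the lock as a context manager (with lock:) guarantees it is released even if the protected code raises an exception.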

Queues: The queue module (named Queue in Python 2) offers thread-safe queues, data structures created especially to allow safe data transfer between threads. Threads can safely add items to a queue with the put() method and retrieve items with the get() method, without requiring manual locking for the queue operations themselves. This is frequently employed in producer-consumer patterns.
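A minimal producer-consumer sketch with queue.Queue follows; the None sentinel used to stop the consumer is a common convention, not a feature of the queue itself:

```python
import queue
import threading

q = queue.Queue()
doubled = []

def producer():
    for i in range(5):
        q.put(i)    # put() is thread-safe; no manual locking needed
    q.put(None)     # Sentinel value tells the consumer to stop

def consumer():
    while True:
        item = q.get()  # Blocks until an item is available
        if item is None:
            break
        doubled.append(item * 2)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start()
c.start()
p.join()
c.join()
print(doubled)  # [0, 2, 4, 6, 8]
```

Because get() blocks until data arrives, the consumer needs no busy-waiting loop, and the queue preserves first-in, first-out order.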

In conclusion, Python multithreading enables the sharing of resources, such as memory, and the concurrent execution of tasks within a single process. Because it lets other threads execute while one waits, it is especially well suited to handling I/O-bound operations and enhancing application responsiveness. Even though the GIL restricts genuine CPU parallelism, Python multithreading is still a useful tool for structuring programs and managing concurrent tasks, especially when paired with synchronisation techniques like locks and queues to safely handle shared resources.

Kowsalya