
What Is Synchronization in Python? With Code Example

Synchronization in Python

When several components of a program operate simultaneously, for example as processes or threads, they may need access to shared resources or data. Synchronization is the process of regulating access to these shared resources in order to avoid problems such as race conditions and data corruption. Without proper synchronization, the outcome of concurrent operations on the same data can be unpredictable.
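As an illustrative sketch (not from the original article), the following deliberately forces a lost-update race by pausing between the read and the write of a shared counter, so every thread reads the same stale value:

```python
import threading
import time

counter = 0  # shared mutable state, deliberately unprotected

def unsafe_increment():
    global counter
    current = counter      # read the shared value
    time.sleep(0.001)      # pause so other threads read the same stale value
    counter = current + 1  # write back, clobbering other threads' updates

threads = [threading.Thread(target=unsafe_increment) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("counter:", counter)  # almost certainly less than 5: updates were lost
```

Five threads each "increment" the counter, yet the final value is almost always 1, because every thread read 0 before any of them wrote back. This is exactly the kind of interleaving that synchronization prevents.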

Python’s threading and multiprocessing modules provide concurrency via threads and processes, respectively. In the standard Python interpreter (CPython), threads are limited by the Global Interpreter Lock (GIL), which prevents them from running Python bytecode in parallel on multiple CPU cores. The GIL functions as a mutex, a form of lock, protecting access to Python objects: it allows only one native thread to execute Python bytecode at a time within a single process. In other words, even though Python threads appear to run concurrently, they take turns executing in the interpreter’s main loop.


The GIL is released during I/O-bound operations, however, such as waiting for data from a file or the network. This makes Python threads useful for jobs that spend much of their time waiting on external resources, since other threads can run while one thread waits. For CPU-bound tasks, threads cannot achieve true parallelism because of the GIL. Even so, synchronization remains essential whenever multiple threads access shared mutable Python objects, because unchecked threads can still interleave their operations in destructive ways.
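To make this concrete, here is a minimal sketch (using time.sleep as a stand-in for real I/O) showing that four waiting threads finish in roughly the time of one, because the GIL is released while each thread sleeps:

```python
import threading
import time

def io_task():
    time.sleep(0.2)  # simulated I/O wait; the GIL is released while sleeping

start = time.perf_counter()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

print(f"4 tasks finished in {elapsed:.2f}s")  # roughly 0.2s, not 0.8s
```

If the four tasks were CPU-bound loops instead of sleeps, the GIL would serialize them and the total time would be close to the sum of the individual times.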

Synchronization Mechanisms

Locks (Mutexes): A lock, also known as a mutex, is the most basic synchronization primitive. It has two states: locked and unlocked. A thread must acquire the lock before accessing a shared resource; if another thread already holds the lock, the requesting thread waits until it is released. When a thread is done using the resource, it releases the lock.

This guarantees that the critical section, the code accessing the shared resource, is executed by only one thread at a time. The threading module provides Lock objects for this purpose (the lower-level _thread module also offers lock allocation). When using locks, the with statement can automatically acquire the lock at the beginning of a block and release it at the end, even if an exception occurs.
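As a sketch of the two styles side by side (the variable names here are illustrative):

```python
import threading

lock = threading.Lock()
shared = []

# Manual style: acquire() must always be paired with release(),
# so a try/finally is needed to survive exceptions
lock.acquire()
try:
    shared.append("manual")
finally:
    lock.release()

# Preferred style: the with statement acquires the lock on entry
# and releases it on exit, even if the block raises an exception
with lock:
    shared.append("with")

print(shared)  # ['manual', 'with']
```

The with form is shorter and cannot accidentally leave the lock held, which is why it is the recommended idiom.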

Queues: The queue module provides thread-safe queues, and the multiprocessing module’s Queue enables safe data flow between processes. Queues are commonly used in the Producer-Consumer pattern, in which one or more producers add items to a queue while consumers remove them; the queue handles the required synchronization internally, so no extra locking is needed for safe access. For queueing across a network, an external broker such as Redis can be used.
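A minimal producer-consumer sketch using the standard library’s thread-safe queue.Queue (the None sentinel used to stop the consumer is one common idiom, not the only one):

```python
import queue
import threading

q = queue.Queue()
results = []

def producer():
    for i in range(5):
        q.put(i)        # put() is thread-safe; no extra locking needed
    q.put(None)         # sentinel telling the consumer to stop

def consumer():
    while True:
        item = q.get()  # blocks until an item is available
        if item is None:
            break
        results.append(item * 2)

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()

print(results)  # [0, 2, 4, 6, 8]
```

With a single producer and a single consumer, the FIFO queue preserves order, so the output is deterministic.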

Example:

import threading
import time

count = 0  # shared resource

def adder(addlock):
    global count
    with addlock:  # acquire the lock; it is released automatically on exit
        count = count + 1
    time.sleep(0.01)
    with addlock:  # acquire the lock again for the second update
        count = count + 1

addlock = threading.Lock()
threads = []
for i in range(5):
    # Pass the lock object to the thread function
    thread = threading.Thread(target=adder, args=(addlock,))
    thread.start()
    threads.append(thread)

# Wait for all threads to complete
for thread in threads:
    thread.join()

print("Final count:", count)

Output:

Final count: 10

In this example:

  • The global variable count is the shared resource.
  • The adder function is intended to run in multiple threads, and it modifies count. If several threads changed count at the same time without coordination, the result could be inaccurate (a race condition).
  • A single lock object is created with threading.Lock() and assigned to addlock.
  • This lock object is passed as an argument to every thread.
  • Inside adder, the with addlock: statement automatically acquires the lock on entering the block and guarantees it is released on exit, even if an exception occurs. This is the recommended way of using locks.
  • The code that reads and modifies the shared count variable (count = count + 1) sits inside the with block, so only one thread can update it at a time.
  • By wrapping the count modifications in these locked sections, we prevent race conditions and guarantee the expected final value: 10, since each of the five threads increments count twice.

By synchronizing thread access to the shared variable with a lock, the operation is made thread-safe and the integrity of the shared data is guaranteed. This straightforward example demonstrates both the need for synchronization and a basic technique for achieving it whenever concurrent activities interact with shared resources in Python.

Kowsalya