
Concurrency in Java – Part 3, State Visibility Guarantees

This post is Part 3 of the Concurrency in Java series, which touches on the core concepts of concurrency in Java and provides a balanced view from the JVM memory model specification as well as from the programmer's perspective. To see the list of posts in this series, please visit here.

----------------********----------------

In the previous post, we discussed object behaviour heuristics at length. In this installment of the series we shall examine the common state visibility guarantees that we get from the JVM specification and JVM implementations.

Locking idioms not only help us control access around a program's ‘critical sections’ but also let us coordinate access and guarantee consistent visibility of state changes. This is an important, if subtle, guarantee that the JVM specification offers. Without it, there would be little point in employing synchronization to control thread access to shared data.

The JVM specification guarantees that while a thread is blocked waiting for a lock held by another thread, it cannot observe the shared state in a half-updated, inconsistent form. Once the blocked thread acquires the lock, it sees all of the changes the previous lock holder made to the shared invariants before releasing it.

Essentially this is just a more technical wording for plain common sense. If you write a value to a variable and read it afterward, then, presuming no other write happened or is in progress, you should see the same value you wrote. Though this seems obvious, it holds only in a single-threaded model. In a multithreaded setup without synchronization, many things can make it untrue. So locking is not just about making a group of actions atomic with respect to thread interleaving (mutual exclusion) but also about ensuring that threads (other waiting threads, and even the writer itself) see the most up-to-date state of shared data (visibility).
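To make the write-then-read guarantee concrete, here is a minimal sketch (the class and field names are illustrative, not from the original post) of a counter guarded by intrinsic locking. Synchronizing both the write path and the read path gives us mutual exclusion and visibility: a thread that acquires the lock sees every write made before the previous lock release.

```java
// A counter guarded by intrinsic locking. Synchronizing both the
// write and the read ensures mutual exclusion AND visibility: a thread
// acquiring the lock sees all writes made before the previous release.
public class SyncCounter {
    private int count = 0; // shared state; all access goes through the lock

    public synchronized void increment() {
        count++; // read-modify-write made atomic by the lock
    }

    public synchronized int get() {
        return count; // guaranteed to see the latest completed increment
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 20000 with synchronization
    }
}
```

Without the `synchronized` keyword on both methods, the final value could be less than 20000 (lost updates) and a reader might see a stale value indefinitely.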

The Java Memory Model defines a ‘happens-before’ relationship which orders memory operations across threads: if one action happens-before another, the effects of the first are guaranteed to be visible to the second. Beyond this, the Java Memory Model stresses two important state guarantees which class designers must understand.

  • Word Tearing Phenomenon
  • Volatile Modifier

Word Tearing Phenomenon

Word tearing is a significant aspect of JVM implementations that every developer needs to be aware of. The Java specification and memory model explicitly address this pitfall, which is directly related to processor architectures and their behavior.

From a JVM implementation perspective (and not a Java program's perspective), the specification states that every field, and every array element, occupies a distinct storage location. This means that any update to one field must not affect any other field. Two threads that separately update adjacent elements of an array must not interfere with each other and do not need synchronization to keep their updates intact. In essence, every field operation is independent of its neighbours.

However, some processor architectures cannot write a single byte with one atomic machine instruction; the smallest unit they can write is a full word. On such platforms an update to one element of a byte array is actually a compound operation involving:

  • Reading the whole word first
  • Updating the appropriate byte offset
  • Writing the whole word back

This compound operation is dangerous: in a multithreaded setup where unsynchronized code updates adjacent bytes of the same word from two threads, one thread's read-modify-write can silently overwrite the other thread's update. This is termed word tearing, and the JVM specification explicitly forbids implementations from exhibiting it. Older 32-bit (and some 64-bit) processor platforms are prone to this anomaly.

All Java types which require at most 32 bits of storage for their values can be safely written and read in one atomic operation. However, the 64-bit primitive types long and double are not: their writes are permitted to be performed as two separate 32-bit writes in quick succession, which the JVM specification calls non-atomic writes. Although modern server processor architectures generally don't suffer from this problem, the specification stays on the safe side and explicitly states that:

  • Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.
  • Writes to and reads of primitives with a size up to 32 bits are atomic.
  • Writes to and reads of the 64-bit primitives long and double are permitted to be non-atomic unless the field is explicitly declared volatile.
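The third rule above has a simple practical consequence: declaring a shared 64-bit field volatile forces every read and write of it to be a single atomic operation. A minimal sketch (the class and field names are illustrative):

```java
// Without volatile, a write to this 64-bit field may legally be split
// into two 32-bit writes on some JVMs, so a concurrent reader could
// observe a value that is half old, half new. Declaring it volatile
// makes every read and write atomic.
public class Ticker {
    private volatile long lastTimestamp; // volatile => atomic 64-bit access

    public void record(long ts) {
        lastTimestamp = ts; // a single atomic write, never two halves
    }

    public long last() {
        return lastTimestamp; // a single atomic read
    }
}
```

Note that volatile only makes the individual read or write atomic; compound actions such as `lastTimestamp++` still need locking or `java.util.concurrent.atomic.AtomicLong`.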

Volatile Modifier

Volatile is an alternate, weaker form of synchronization that Java provides to supplement intrinsic locking. While intrinsic locking works at a macro level in the Java programming model, i.e. around blocks and methods in high-level source code, volatile performs part of the same function at a micro level, i.e. at the level of individual memory accesses. That is why volatile is not used with methods or classes but only with fields.

When a field is declared as volatile, the JVM and runtime are directed that:

  • The said variable is a shared variable and will most likely be involved in a multithreaded setup.
  • The said variable must not be cached in registers or otherwise read from a stale thread-local copy; every read must observe the most recent write to it.
  • The JVM and runtime must not reorder memory operations around accesses to the volatile variable.

The following code example can fail.
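The original listing was not preserved in this copy of the post, so here is a reconstructed sketch of the example the text describes (the names `t3` and `isReady` come from the text; the surrounding class structure is an assumption). One thread polls `t3.isReady` in a loop; without volatile, the runtime may keep re-reading a cached copy of the flag and the waiter can spin forever even after another thread sets it.

```java
// Reconstructed sketch of the flag-polling example described in the text.
// The volatile modifier on isReady guarantees the waiting thread re-reads
// the latest value on every loop iteration; without it, the read could be
// cached and the loop could spin forever.
public class ReadyFlag {
    static class Task {
        volatile boolean isReady = false; // remove volatile and the loop may never exit
    }

    public static void main(String[] args) throws InterruptedException {
        Task t3 = new Task();
        Thread waiter = new Thread(() -> {
            while (!t3.isReady) {           // fresh read each time: isReady is volatile
                try { Thread.sleep(1000); } // retry every second, as in the text
                catch (InterruptedException e) { return; }
            }
            System.out.println("ready");
        });
        waiter.start();
        t3.isReady = true; // volatile write: guaranteed visible to the waiter
        waiter.join();
    }
}
```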

The current thread checks the variable isReady and, if it is not true, sleeps for one second and retries. This would work perfectly on a single-processor platform (even with multiple threads in the code), but on a multi-processor platform the read of t3.isReady inside the loop can be cached in a processor register. So if another thread updates the isReady field, the waiting thread may never notice and could loop forever, since it keeps consulting the cached copy instead of the actual memory location. Making the variable volatile solves this problem. Thus the volatile modifier ensures visibility guarantees for the variable on which it is declared.

The volatile modifier ensures that all competing threads see the most up-to-date value that was last written to the variable. If a thread reads the variable while it is being written, the read observes either the old value or the new one in full, never a partial mix, and a read that occurs after the write completes sees the new value. In this sense, writes to a volatile variable mimic a monitor exit (leaving a synchronized block) and reads mimic a monitor entry (entering a synchronized block); that is why volatile is sometimes described as half synchronization. This nature of volatile also makes it the tool for avoiding non-atomic 64-bit accesses: volatile doubles and longs are safe to use in multithreaded setups even on 32-bit platforms, because the JVM guarantees that each such read or write is performed as a single atomic operation.
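The monitor-exit/monitor-entry analogy is what makes the classic safe-publication pattern work. Here is a minimal sketch (the class names are illustrative): an object is published through a volatile reference, so a reader that observes the new reference is also guaranteed to see the fully constructed object behind it.

```java
// Safe publication through a volatile reference. The volatile write acts
// like a monitor exit and the volatile read like a monitor entry, so a
// reader that sees the new reference also sees the fully constructed
// object it points to.
public class ConfigHolder {
    static final class Config {
        final int port;
        Config(int port) { this.port = port; }
    }

    private volatile Config current; // volatile reference = safe publication

    public void publish(Config c) { current = c; } // monitor-exit-like write

    public Config read() { return current; }       // monitor-entry-like read
}
```

If `current` were a plain field, a reader on another thread could in principle see the reference before the writes made in the `Config` constructor, observing a partially constructed object.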

Conclusion

This post covered two important considerations that can impact the safety of shared state between threads in your Java application. The next post in the series will discuss in depth the major principles and guidelines to keep in mind while designing thread-safe classes and components in Java.

To check the previous post in the series please click here. To see the list of posts in this series, please visit here.

Happy Coding!! 👍

