
An Alternative to Clustering

In my blog post “Is scaling with clustering sufficient for cloud offering” I questioned whether clustering is the right solution for scaling in the cloud. Are there other methods that allow us to scale more easily? In this blog post I want to explore alternatives to clustering and explain the solution we are using with SMART.
One alternative to clustering is a distributed JVM. Distributed JVMs have been a research topic for some time. For example, JESSICA is a distributed JVM that supports parallel execution of multi-threaded Java applications. These JVMs distribute threads across different cluster nodes. The distribution is transparent to the programmer: a running thread can be migrated from one cluster node to another without the developer's knowledge. In these JVMs the heap is also distributed and transparent to the developer, so data and I/O on one cluster node can be accessed from the other cluster nodes.
Application server characteristics
From Wikipedia: “An application server can be either a software framework that provides a generalized approach to creating an application-server implementation, without regard to what the application functions are, or the server portion of a specific implementation instance. In either case, the server’s function is dedicated to the efficient execution of procedures (programs, routines, scripts) for supporting its applied applications.”
An application server is heterogeneous in nature. This means that if a cross-section of the threads running in an application server is examined, in all probability the threads are executing different functions. Looking at these functions, we can see that some are processing external requests, some are processing internal queue-related tasks, while others are just doing housekeeping. If such a heterogeneous Java application is run on a distributed JVM, the JVM distributes work across cluster nodes without any knowledge of the work being performed. In reality, it would be better to distribute only the work done for processing external requests and internal queue messages.
An application server is a container that executes functions and procedures written by application developers. Here, distribution needs to be applied to the execution of the application rather than to the application server's own functions. This environment does not require an automatic distribution of work and heap, but a targeted distribution of work and heap for the applications running in it. The application server knows what can be distributed and what cannot, and this knowledge cannot be a property of the JVM. What is required is a library with which the application server can specify which execution and data can be distributed.
Library that distributes – An Alternative to Clustering
If we were to write a library that distributes work across JVM nodes using the underlying Java constructs, what would it look like? The Diffused JVM Library extends familiar Java concepts to work in a distributed environment.
We start with the thread pool. The thread pool is a concept developers already use to achieve multi-threading, and it gives them the flexibility of controlling which functions are threaded. If this concept can be extended so that the threads in a pool come from multiple JVM nodes rather than just one, then we should be able to achieve clustering at the thread level, controlled by the programmer. We call this the “Network Thread Pool”. Refer to the diagram below:
Network Thread Pool
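As a rough illustration of the idea, the sketch below simulates a network thread pool in plain Java. The `NetworkThreadPoolSketch` class and its node-selection policy are hypothetical, not the actual library: each “node” is a local single-threaded executor standing in for a remote JVM, and work is submitted to the node with the fewest pending tasks.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a thread pool whose threads span several nodes.
// Here each node is simulated by a local executor; in a real library a
// node would be a remote JVM reached over the network.
public class NetworkThreadPoolSketch {
    static class Node {
        final ExecutorService executor = Executors.newSingleThreadExecutor();
        final AtomicInteger pending = new AtomicInteger();
    }

    private final List<Node> nodes;

    NetworkThreadPoolSketch(int nodeCount) {
        nodes = new java.util.ArrayList<>();
        for (int i = 0; i < nodeCount; i++) nodes.add(new Node());
    }

    // Submit work to the least-loaded node -- the decision any peer
    // could make from its view of the cluster state.
    public Future<?> submit(Runnable task) {
        Node target = nodes.get(0);
        for (Node n : nodes)
            if (n.pending.get() < target.pending.get()) target = n;
        target.pending.incrementAndGet();
        final Node chosen = target;
        return chosen.executor.submit(() -> {
            try { task.run(); } finally { chosen.pending.decrementAndGet(); }
        });
    }

    public void shutdown() {
        for (Node n : nodes) n.executor.shutdown();
    }
}
```

The caller's code is identical to using an ordinary `ExecutorService`; only the pool's construction changes.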

Some of the major questions that have to be answered to implement such a thread pool are:

  1. Who will decide how to distribute the work across the threads in the different JVM nodes?
  2. Data present in one JVM may need to be accessed by threads executing in another. How can data be shared across JVMs?
  3. Data shared between threads may need to be synchronized. How can we synchronize across JVMs?

Who will decide how to distribute the work across the threads in the different JVM nodes? 

Work could be distributed across threads by a single controller, with the other threads acting as workers. But we are writing this library as a substitute for clustering, so we cannot limit it to a controller/worker relationship: the controller would become a bottleneck. What we require instead is peer-to-peer operation of the nodes. Any node should be able to take on the responsibility of deciding which node a piece of work is assigned to. This implies that all nodes in the cluster should know about the other nodes, and that the node assigning work to the thread pool should have enough information to decide which cluster node should receive it. Hence the core of the network thread pool, which distributes work, is itself distributed.

Distributed core for Network Thread Pool
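The peer-to-peer decision can be sketched as follows. `PeerSelector` is a hypothetical illustration, not the library's API: every peer holds the same membership view and derives the target node deterministically from a work key, so any peer can make the assignment without a central controller.

```java
import java.util.List;

// Sketch of peer-side node selection: given the same membership view,
// every peer resolves a work key to the same node, so there is no
// single controller to become a bottleneck.
public class PeerSelector {
    private final List<String> members; // cluster view, identical on every peer

    PeerSelector(List<String> members) { this.members = members; }

    // Deterministic mapping from a work key to a cluster node.
    public String nodeFor(Object workKey) {
        int idx = Math.floorMod(workKey.hashCode(), members.size());
        return members.get(idx);
    }
}
```

A real implementation would also need membership changes to propagate to all peers; this sketch assumes a static view.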
 
Data present in one JVM may need to be accessed by threads executing in another. How can data be shared across JVMs?
 
This implies that data has to be distributed and available to the JVM where the thread is executing. Data can be provided to a thread as a local variable in the Runnable created, or from a global static variable. In our implementation we have limited the data to being a local variable in the Runnable. The programmer has to code the data used by threads as distributed data. We call this “Network Data”. Network Data can be either “shared” or “diffused”.
“Shared” data means only one master copy exists, on the node that created it. All the other JVMs hold references to the master, and any operation on the data is translated into a network call executed on the master. For example, if the execution uses an AtomicInteger, it has to be shared data: when two JVMs access the integer, one has to wait until the other releases it, so both JVMs must operate on the same master copy. Shared data has the obvious problem that the master is a weak link. To overcome this, the data is replicated to the other JVMs even though the replicas are not used; when the master goes down, one of the other JVMs takes over as master and continues operating on the data.
“Diffused” data exists in parts across all the JVMs that have used it. For example, if the execution uses a Map, data added to the map from a thread in JVM1 is stored in JVM1, while data added from a thread in JVM2 is stored in JVM2. The “diffused map” is distributed between JVM1 and JVM2. This brings out an interesting point: we can have data that is written rarely and read often, and data that is written often and read rarely, and optimizations can be applied based on which kind of data is in use. Such optimizations and replication ensure the data remains highly available when one JVM or another fails.
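A minimal sketch of the shared-data idea, assuming a hypothetical master/stub split (the class names are illustrative, not the library's): the master copy lives on the creating node, and every other JVM holds a stub that forwards each operation to it. Here the network hop is simulated by a plain method call.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical "shared" Network Data: one master copy, every other JVM
// holds a stub that forwards operations to it. In a real library each
// stub call would be marshalled over the network.
public class SharedCounterSketch {
    interface Counter { int incrementAndGet(); int get(); }

    // The master lives on the node that created the data.
    static class Master implements Counter {
        private final AtomicInteger value = new AtomicInteger();
        public int incrementAndGet() { return value.incrementAndGet(); }
        public int get() { return value.get(); }
    }

    // Stub held by every other node: each call is conceptually a network call.
    static class RemoteStub implements Counter {
        private final Master master;
        RemoteStub(Master master) { this.master = master; }
        public int incrementAndGet() { return master.incrementAndGet(); }
        public int get() { return master.get(); }
    }
}
```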
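The diffused map can be sketched the same way. `DiffusedMapSketch` below is a hypothetical local simulation: each instance stands for the shard held by one JVM, writes stay in the local shard, and a read falls back to the peers (which would be network calls in practice).

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of "diffused" data: each node stores only the entries written
// locally; a read consults the local shard first, then the peers.
public class DiffusedMapSketch<K, V> {
    private final Map<K, V> localShard = new HashMap<>();
    private final List<DiffusedMapSketch<K, V>> peers;

    DiffusedMapSketch(List<DiffusedMapSketch<K, V>> peers) { this.peers = peers; }

    // A write stays on the node that performed it.
    public void put(K key, V value) { localShard.put(key, value); }

    // A read checks locally, then asks each peer (conceptually over the network).
    public V get(K key) {
        V v = localShard.get(key);
        if (v != null) return v;
        for (DiffusedMapSketch<K, V> peer : peers) {
            v = peer.localShard.get(key);
            if (v != null) return v;
        }
        return null;
    }
}
```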
The programmer controls which data is distributed and whether it is shared or diffused. For example, in SMART, we queue requests for the same object into a JITQueue and process it using a Network Thread Pool. Here we have created the queue as a “diffused queue”. Requests that arrive at channels in JVM1 create and add events to the data queue in JVM1, while requests that arrive at channels in JVM2 create and add events to the data queue in JVM2. The JITProcessor picks up the data based on the sequence in which it arrived. Refer to the diagram below for a pictorial representation of diffused data.
Diffused Queue
 
Data shared between threads may need to be synchronized. How can we synchronize across JVMs?
 
Synchronization in Java is achieved using locks or semaphores. Given that we already have the concept of shared data, where a master exists and the other nodes hold replicas of it, we can easily extend the concept to locks. The master lock is created in the JVM that creates the lock; all other JVMs have to wait on this master before executing the synchronized code.
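The same master/stub shape applies to locks. `NetworkLockSketch` is a hypothetical illustration: the master lock lives in the creating JVM, stubs elsewhere forward `lock()`/`unlock()` to it, and here the forwarding is a plain method call rather than a network round trip.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a "network lock": the master lock object lives in the JVM
// that created it; other JVMs hold stubs whose calls are forwarded
// (here, plain method calls) to the master.
public class NetworkLockSketch {
    interface NetLock { void lock(); void unlock(); }

    static class MasterLock implements NetLock {
        private final ReentrantLock lock = new ReentrantLock();
        public void lock() { lock.lock(); }
        public void unlock() { lock.unlock(); }
    }

    static class StubLock implements NetLock {
        private final MasterLock master;
        StubLock(MasterLock master) { this.master = master; }
        public void lock() { master.lock(); }     // conceptually a network call
        public void unlock() { master.unlock(); }
    }
}
```

Because every JVM ultimately contends on the one master, mutual exclusion holds across the cluster, at the cost of a network hop per acquisition.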
 
From the above, we find that a lot of network-related operations can occur, and we do not want threads tied up waiting for responses to those requests. To overcome this we can use the Reentrant Thread Pool concept that I have described here.
 
Using this approach instead of clustering gives a number of advantages:
  • We can easily code into the library the ability to start new JVMs based on the current state of the cluster. If the current JVMs are found to be busy, a new JVM can be created and added to the cluster.
  • We can code distribution in at optimal points and expose it as simple configuration to application developers. For example, in SMART, we automatically distribute the functionality of the “Message Manager” and the “Message Processor”. We expose to the application developer a configuration that allows them to specify the transitions that can run in parallel. This allows SMART to easily distribute a “single atomic transaction” across JVMs.
In the above, we have taken existing Java concepts such as thread pools, maps, and queues and extended them to apply across a network of JVMs. This gives us the flexibility to convert a non-distributed Java program into a distributed one simply by varying the object constructed, from a standard Java object to a network object.
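The conversion can be pictured as below. This is purely illustrative: `DiffusedMap` here is a hypothetical stand-in that delegates to `HashMap` so the example runs locally; in the library it would be the network-backed object, and the rest of the program would be unchanged because both sides share the `Map` interface.

```java
import java.util.HashMap;
import java.util.Map;

public class SwapSketch {
    // Stand-in for the hypothetical network-backed map; it delegates to
    // HashMap so this sketch compiles and runs in a single JVM.
    static class DiffusedMap<K, V> extends HashMap<K, V> {
        DiffusedMap(String name) { /* the name would identify the shared data */ }
    }

    // Only the construction varies; callers keep programming against Map.
    static Map<String, Integer> newCache(boolean distributed) {
        return distributed ? new DiffusedMap<>("orderCache") : new HashMap<>();
    }
}
```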