Spring Dynamic Language Support with Groovy

Groovy is a dynamic, object-oriented programming language that runs on the JVM. Its syntax is similar to Java's, it can be embedded in Java, and it compiles to bytecode. Java code can be called from Groovy, and vice versa. Groovy's features include meta- and functional programming, dynamic typing (with the def keyword), closures, GroovyBeans, Groovlets, integration with the Bean Scripting Framework (BSF), generics, annotations, and rich collection support.

This article explains fundamental Spring Dynamic Language Support for Groovy in four ways:

1) By using Java syntax and a Spring stereotype annotation (a minimal sketch follows this list),
2) By using Groovy syntax and a Spring stereotype annotation,
3) By using the inline-script feature,
4) By using Spring Groovy language support (lang:groovy).
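To give a flavor of the first approach, here is a minimal sketch of a Groovy class written in plain Java syntax and registered via a Spring stereotype annotation; the class, bean, and method names are hypothetical. Saved as GroovyGreetingService.groovy, it is compiled by the Groovy compiler (for example through the GMaven plugin) and picked up by component scanning like any other bean:

    // GroovyGreetingService.groovy
    // Valid Java syntax is also valid Groovy, so this class compiles with
    // groovyc and behaves as a regular Spring bean. Names are hypothetical.
    import org.springframework.stereotype.Component;

    @Component("groovyGreetingService")
    public class GroovyGreetingService {

        public String sayHello(String name) {
            return "Hello, " + name + "!";
        }
    }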

Used Technologies:

JDK 1.7.0_09
Spring 3.2.0
Groovy 2.0.4
Maven 3.0.4


Chunk Oriented Processing in Spring Batch

Processing big data sets is one of the most important problems in the software world. Spring Batch is a lightweight and robust batch framework for processing these data sets.

The Spring Batch Framework offers two processing styles, ‘TaskletStep Oriented’ and ‘Chunk Oriented’. In this article, the Chunk Oriented Processing Model is explained. The companion article, TaskletStep Oriented Processing in Spring Batch, is also recommended for a look at how to develop TaskletStep Oriented Processing in Spring Batch.

The Chunk Oriented Processing feature was introduced in Spring Batch v2.0. It refers to reading the data one item at a time and creating ‘chunks’ that are written out within a transaction boundary. One item is read from an ItemReader, handed to an ItemProcessor, and aggregated into the current chunk. Once the number of items read equals the commit interval, the entire chunk is written out via the ItemWriter, and the transaction is committed.

Basically, Chunk Oriented Processing should be used when both reading and writing of data items are required. If only reading or only writing is required, TaskletStep Oriented processing can be used instead.

The Chunk Oriented Processing model exposes three important interfaces, ItemReader, ItemProcessor and ItemWriter, via the org.springframework.batch.item package.

ItemReader: This interface is used for providing the data. It reads the data items that will be processed.

ItemProcessor: This interface is used for item transformation. It processes an input object and transforms it into an output object.

ItemWriter: This interface is used for generic output operations. It writes the data items transformed by the ItemProcessor. For example, the items can be written to a database, to memory, or to an output stream. In this sample application, we will write to a database.
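The following is a minimal sketch of the three interfaces in action; the class names and item types are hypothetical and not taken from the sample application, and each class would live in its own source file:

    import java.util.List;

    import org.springframework.batch.item.ItemProcessor;
    import org.springframework.batch.item.ItemReader;
    import org.springframework.batch.item.ItemWriter;

    // Reads one item per call; returning null signals that the input is exhausted.
    public class NumberReader implements ItemReader<Integer> {

        private int next = 0;

        public Integer read() throws Exception {
            return next < 5 ? next++ : null;
        }
    }

    // Transforms each input item into an output item.
    public class NumberProcessor implements ItemProcessor<Integer, String> {

        public String process(Integer item) throws Exception {
            return "item-" + item;
        }
    }

    // Receives one whole chunk (commit-interval items) per call.
    public class ConsoleItemWriter implements ItemWriter<String> {

        public void write(List<? extends String> items) throws Exception {
            System.out.println("Writing chunk: " + items);
        }
    }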

Let us take a look at how to develop the Chunk Oriented Processing Model.

Used Technologies:

JDK 1.7.0_09
Spring 3.1.3
Spring Batch 2.1.9
Hibernate 4.1.8
Tomcat JDBC 7.0.27
MySQL 5.5.8
MySQL Connector 5.1.17
Maven 3.0.4


Hazelcast Distributed Execution with Spring

The ExecutorService feature arrived with Java 5 and lives under the java.util.concurrent package. It extends the Executor interface and provides thread pool functionality for executing asynchronous short tasks. The article Java Executor Service Types is suggested for a look over basic ExecutorService implementations.

ThreadPoolExecutor is also a very useful implementation of the ExecutorService interface. It extends AbstractExecutorService, which provides default implementations of the ExecutorService execution methods. It offers improved performance when executing large numbers of asynchronous tasks and maintains basic statistics, such as the number of completed tasks. The article How to develop and monitor Thread Pool Services by using Spring is also suggested for developing and monitoring thread pool services.
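As a refresher, here is a minimal sketch of plain, single-JVM ExecutorService usage from the JDK; the pool size and task are arbitrary:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class LocalExecutorDemo {

        public static void main(String[] args) throws Exception {
            // A fixed pool of 4 worker threads backed by an unbounded queue.
            ExecutorService executor = Executors.newFixedThreadPool(4);

            // Submit a short asynchronous task and obtain a Future for its result.
            Future<String> future = executor.submit(new Callable<String>() {
                public String call() {
                    return "executed by " + Thread.currentThread().getName();
                }
            });

            System.out.println(future.get()); // blocks until the task completes
            executor.shutdown();
        }
    }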

So far, we have only talked about undistributed Executor Service implementations. Let us also investigate the Distributed Executor Service.

Hazelcast's Distributed Executor Service feature is a distributed implementation of java.util.concurrent.ExecutorService. It allows business logic to be executed in the cluster. There are four alternative ways to use it:

1) The logic can be executed on a specific, chosen cluster member.
2) The logic can be executed on the member owning a chosen key.
3) The logic can be executed on a member Hazelcast will pick.
4) The logic can be executed on all or a subset of the cluster members.

This article shows how to develop a Distributed Executor Service via Hazelcast and Spring.
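As a taste of the API, the following minimal sketch illustrates option 2 under the Hazelcast 2.x API, where DistributedTask routes execution to the member owning a given key; the task and key are hypothetical, and the Callable must be Serializable so it can travel across the cluster:

    import java.io.Serializable;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;

    import com.hazelcast.core.DistributedTask;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class DistributedEchoDemo {

        // The task is serialized and sent to the member owning the given key.
        public static class EchoTask implements Callable<String>, Serializable {
            public String call() {
                return "executed on the key owner";
            }
        }

        public static void main(String[] args) throws Exception {
            HazelcastInstance instance = Hazelcast.newHazelcastInstance(null);
            ExecutorService executor = instance.getExecutorService();

            // Route execution to the cluster member that owns the key "user-1".
            DistributedTask<String> task =
                    new DistributedTask<String>(new EchoTask(), "user-1");
            executor.execute(task);

            System.out.println(task.get()); // blocks until the remote member responds
            Hazelcast.shutdownAll();
        }
    }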

Used Technologies:

JDK 1.7.0_09
Spring 3.1.3
Hazelcast 2.4
Maven 3.0.4


TaskletStep Oriented Processing in Spring Batch

Many enterprise applications rely on batch processing to handle billions of transactions every day. These big transaction sets have to be processed without performance problems. Spring Batch is a lightweight and robust batch framework for processing such big data sets.

Spring Batch offers two processing styles, ‘TaskletStep Oriented’ and ‘Chunk Oriented’. In this article, the TaskletStep Oriented Processing Model is explained.

Let us investigate the fundamental Spring Batch components:

Job: An entity that encapsulates an entire batch process. Steps and Tasklets are defined under a Job.

Step: A domain object that encapsulates an independent, sequential phase of a batch job.

JobInstance: Batch domain object representing a uniquely identifiable job run – its identity is given by the pair of Job and JobParameters.

JobParameters: Value object representing runtime parameters to a batch job.

JobExecution: A JobExecution refers to the technical concept of a single attempt to run a Job. An execution may end in failure or success, but the JobInstance corresponding to a given execution is not considered complete unless the execution completes successfully.

JobRepository: An interface responsible for persistence of batch meta-data entities. In the following sample, an in-memory repository is used via MapJobRepositoryFactoryBean.

JobLauncher: An interface exposing a run method, which launches and controls the defined jobs.

Tasklet: An interface exposing an execute method, which is called repeatedly until it either returns RepeatStatus.FINISHED or throws an exception to signal a failure. It is used when neither a reader nor a writer is required, as in the following sample.
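To make the Tasklet contract concrete, here is a minimal sketch of an implementation; the class name and the printed message are hypothetical:

    import org.springframework.batch.core.StepContribution;
    import org.springframework.batch.core.scope.context.ChunkContext;
    import org.springframework.batch.core.step.tasklet.Tasklet;
    import org.springframework.batch.repeat.RepeatStatus;

    public class LoggingTasklet implements Tasklet {

        // Called repeatedly by the step; returning FINISHED ends the step,
        // while RepeatStatus.CONTINUABLE would cause another invocation.
        public RepeatStatus execute(StepContribution contribution,
                                    ChunkContext chunkContext) throws Exception {
            System.out.println("TaskletStep is being executed...");
            return RepeatStatus.FINISHED;
        }
    }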

Let us take a look at how to develop the TaskletStep Oriented Processing Model.

Used Technologies:

JDK 1.7.0_09
Spring 3.1.3
Spring Batch 2.1.9
Maven 3.0.4


Coherence Event Processing by using Map Trigger Feature

This article shows how to process Coherence events by using Map Triggers. The article Distributed Data Management in Oracle Coherence is suggested for a look over the basic configuration and implementation of the Oracle Coherence API.

Map Triggers are one of the most important features of Oracle Coherence for building a highly customized cache management system. A MapTrigger represents a functional agent that makes it possible to validate, reject, or modify mutating operations against an underlying map. Map Triggers can also prevent invalid transactions, enforce security, provide event logging and auditing, and gather statistics on data modifications.

For example, suppose we have code working with a NamedCache, and we want to change an entry's behavior or contents before the entry is inserted into the map. This change can be made without modifying the existing code by enabling a map trigger.

There are two ways to add the Map Trigger feature to an application:

1) A MapTriggerListener can be used to register a MapTrigger with a NamedCache.
2) The class-factory mechanism can be used in the coherence-cache-config.xml configuration file.

In the following sample application, MapTrigger functionality is implemented in the first way, as sketched below. A new cluster called OTV is created, and a User bean is distributed via a user-map NamedCache object shared between two members of the cluster.
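A minimal sketch of that first registration style follows, assuming the article's User bean exposes a getName() accessor; the trigger class and its validation rule are hypothetical:

    import java.io.Serializable;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapTrigger;
    import com.tangosol.util.MapTriggerListener;

    // Validates mutating operations before they are committed; throwing a
    // RuntimeException rejects the operation. The trigger must be serializable
    // because it runs on the storage-enabled members.
    public class UserNameTrigger implements MapTrigger, Serializable {

        public void process(MapTrigger.Entry entry) {
            User user = (User) entry.getValue(); // User is the article's bean (assumed accessor)
            if (user.getName() == null || user.getName().trim().isEmpty()) {
                throw new IllegalArgumentException("User name must not be empty");
            }
        }

        // Registration (the first way): attach the trigger to the NamedCache
        // through a MapTriggerListener.
        public static void register() {
            NamedCache cache = CacheFactory.getCache("user-map");
            cache.addMapListener(new MapTriggerListener(new UserNameTrigger()));
        }
    }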

Used Technologies:

JDK 1.6.0_35
Spring 3.1.2
Coherence 3.7.1
Maven 3.0.2
