Tag Archives: java

Java Tips: Process Object Based On Its Type Without if-then-else Solution

I want to share my answer to this question on StackOverflow.

Say you want to process several objects of different types. Each type must be processed differently, but there are some concerns:

  1. You don’t want an if-then-else solution, which is obviously not great in the long term
  2. Configuration is also bad, for the same reason

So what is the solution? Here is one that uses the Reflections library.

public class A {
}

public class B {
}

import java.lang.reflect.ParameterizedType;

public abstract class Processor<T> {

	private final Class<T> processedClass;

	@SuppressWarnings("unchecked")
	public Processor() {
		ParameterizedType parameterizedType =
			(ParameterizedType) getClass().getGenericSuperclass();
		processedClass =
			(Class<T>) parameterizedType.getActualTypeArguments()[0];
	}

	public Class<T> getProcessedClass() {
		return processedClass;
	}

	protected abstract void process(T message);
}

public class ProcessorA extends Processor<A> {

	@Override
	protected void process(A message) {
		System.out.println("Processing object A");
	}
}

public class ProcessorB extends Processor<B> {

	@Override
	protected void process(B message) {
		System.out.println("Processing object B");
	}
}

import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import org.reflections.Reflections;

public class Adapter {

	private Map<Class<?>, Processor<?>> mapping
		= new HashMap<Class<?>, Processor<?>>();

	public Adapter() throws Exception {
		Reflections r = new Reflections("");
		Set<Class<? extends Processor>> subTypesOf =
			r.getSubTypesOf(Processor.class);

		for (Iterator<Class<? extends Processor>> iterator = subTypesOf.iterator();
                     iterator.hasNext();) {
			Class<? extends Processor> c = iterator.next();
			Constructor<? extends Processor> constructor =
				c.getConstructor();
			Processor<?> p = constructor.newInstance();
			mapping.put(p.getProcessedClass(), p);
		}
	}

	@SuppressWarnings("unchecked")
	public <T> Processor<T> getProcessor(
			Class<? extends T> c) {
		return (Processor<T>) mapping.get(c);
	}
}
public class Main {

	public static void main(String[] args)
			throws Exception {
		Adapter adapter = new Adapter();

		A a = new A();
		adapter.getProcessor(a.getClass()).process(a);

		B b = new B();
		adapter.getProcessor(b.getClass()).process(b);
	}
}

The console after running main method:

14:01:37.640 [main] INFO  org.reflections.Reflections - Reflections took 375 ms to scan 4 urls, producing 222 keys and 919 values
Processing object A
Processing object B

It’s kind of magic, isn’t it?

Limit Your Access to Java API for More Productivity

Simplify your work environment to get more work done. One example is to limit your access to the Java API so you won’t be distracted by proposals that you don’t need.

RCP/SWT developers face this all the time. Point is a class in org.eclipse.swt.graphics, but there is also one in java.awt. When you autocomplete, you will get at least two proposals for Point and, alas, the Point from java.awt is usually listed first. The same goes for MouseListener: you will get at least one from java.awt.event and one from org.eclipse.swt.events.

You can solve the problem by limiting your access to Java API. To do this, you need to open the Build Path properties of your project.

Here you can double-click the Access rules entry to open the Type Access Rules dialog.

Click Add… to add a rule. For the example above, you may want to mark the type pattern java/awt/** as Forbidden. You can set as many rules as you like; when you are done, close the dialog.

Now if you ask for proposals for Point, you’ll get only the class from SWT and nothing from AWT. This will surely help you choose the class you intended.

Guava: Using ListenableFuture

Google Guava has many interesting classes which we can use in our applications. The ones from the collection package are already used by many developers, and this blog has a tutorial on how to use the computing map.

I want to move to another package. This one is com.google.common.util.concurrent; specifically, I want to introduce ListenableFuture. The documentation of the class is as follows:

This interface defines a future that has listeners attached to it, which is useful for asynchronous workflows. Each listener has an associated executor, and is invoked using this executor once the Future’s computation is complete (Future#isDone()). The listener will be executed even if it is added after the computation is complete.

Consider the following example: we have tasks T1, T2, and T3. T2 can only start when T1 is finished, and T3 can only start once T2 has ended. The diagram below shows the dependency.

The easiest solution, without any concurrency, is of course to just run each task one after another. But suppose we have five sets of these operations. Without threads we end up with the serial solution depicted in the following picture.

ListenableFuture makes it easy to create the concurrent version of the solution.

This is a code example of this solution, which will print “1”, pause for a second before printing “2”, and pause for another second before finally printing “3”. Note that ListenableFutureTask implements ListenableFuture.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.google.common.util.concurrent.ListenableFutureTask;

public class TestListenableFuture {

	static class SimpleTask extends
			ListenableFutureTask<Void> {
		SimpleTask(final String message) {
			super(new Callable<Void>() {

				public Void call() throws Exception {
					System.out.println(message);
					Thread.sleep(1000);
					return null;
				}
			});
		}
	}

	public static void main(String[] args) {
		ListenableFutureTask<Void> task1 =
			new SimpleTask("1");
		ListenableFutureTask<Void> task2 =
			new SimpleTask("2");
		ListenableFutureTask<Void> task3 =
			new SimpleTask("3");

		ExecutorService exec =
			Executors.newFixedThreadPool(3);
		task1.addListener(task2, exec);
		task2.addListener(task3, exec);
		exec.execute(task1);

		try {
			task3.get();
		} catch (InterruptedException e) {
			e.printStackTrace();
		} catch (ExecutionException e) {
			e.printStackTrace();
		}
		exec.shutdown();
	}
}

Pretty easy, isn’t it? And we could probably extend this solution to create a full workflow-framework solution.

UPDATE: Remember that the API is still in beta. I’ll try to update this post once the final version is released.

Java Tips: Initializing Collection

Especially in unit tests, it is common that we have to initialize an array or a collection.

Well, for arrays it’s OK… a simple piece of code that we all know solves the problem:

String[] s = new String [] {"1", "2"};

But how about a Collection? The normal way to initialize a collection is something like this (which is pretty ugly):

List<String> s = new ArrayList<String>();
s.add("1");
s.add("2");

I could hardly find an elegant solution until I saw this post. There are at least three better solutions for this case.

First solution:

List<String> s = new ArrayList<String>() {{ add("1"); add("2"); }};

Which, unfortunately, doesn’t pass the Java Code Conventions (that is, if you format the code, it becomes uglier than the original):

List<String> s = new ArrayList<String>() {
	{
		add("1");
		add("2");
	}
};

Second solution:

List<String> s = Arrays.asList(new String[]{"1", "2"});

This solution is the best if you use Java 1.4 or earlier. But if you use Java 5, the third is more elegant:

List<String> s = Arrays.asList("1", "2");


EDIT: this solution creates a fixed-size BUT modifiable list (you can set elements, but not add or remove them), so you may want to copy it into an ArrayList (or another Collection class) if you need a resizable collection.
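For example, a quick sketch of the difference (the class name is mine, just for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InitListDemo {
	public static void main(String[] args) {
		// Arrays.asList returns a fixed-size list backed by the array:
		// set() works, but add()/remove() throw UnsupportedOperationException.
		List<String> fixed = Arrays.asList("1", "2");

		// Copying into an ArrayList gives a fully resizable list.
		List<String> growable = new ArrayList<String>(fixed);
		growable.add("3");
		System.out.println(growable); // prints [1, 2, 3]
	}
}
```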

Maven Plugin: Java Code Formatter

Enforcing a code format in an organization needs discipline, and the best way to achieve that is through automation. A Maven plugin is an easy choice if your organization already uses Maven to manage its builds.

This plugin does exactly that. The backbone of the plugin is the Eclipse Java formatter, which I still endorse as the best in the industry.

Compared to Jalopy, which is no longer free, I find that the Eclipse formatter has the flexibility we often need. Although it is not bug-free, for most cases it is more than enough.

However, I am concerned by the fact that the official Maven repository doesn’t have the latest Eclipse JDT jar; it still has only version 3.3. Hopefully they will add it soon, especially now that Helios is released, which brings a lot of improvements to the Java formatter.

op4j: Bending the Java spoon

The tagline of op4j is very interesting: ‘Bending the Java spoon’, which implies that the library offers magic for Java programming. And indeed it does.

The basic idea of the library is to push the Fluent Interface pattern to much greater use. To achieve this, the developers try to provide as many general-purpose functions as possible; the current version of op4j is said to already have more than 1,000 functions.

If you read some examples from the website and the blog, you will find several genuinely original ideas about how programming in Java can be enjoyable. One example:

Calendar date = Op.onListFor(1492, 10, 12).exec(FnCalendar.fieldIntegerListToCalendar()).get();

which, if done without op4j, would be something like:

Calendar date = Calendar.getInstance();
date.set(Calendar.DAY_OF_MONTH, 12);
date.set(Calendar.MONTH, Calendar.OCTOBER);
date.set(Calendar.YEAR, 1492);

Although in this particular case some people will say that the first version is unclear, because the order of the integers can confuse the reader, the fact that it saves a lot of code is absolutely beautiful.

I love the fact that lately there are many Java libraries whose goal is to make programming much more enjoyable.

Computing Map on Google Collections


Google always makes interesting projects. My toy nowadays is Google Collections. I don’t think I need to reintroduce it, as it has been nicely covered in several blog posts:

Of course, two videos from GTUG are also nice.

Now I want to discuss one piece of functionality from Google Collections which is not really covered by those previous articles: the computing map. No… no… you won’t find a class with that name in the Javadoc.

It is basically a map where the keys are parameters for a calculation and the values are results of the calculation. You have probably faced a scenario where you need to do a lot of computations using a complex algorithm. Suppose that many of those calculations are done with the same parameters. Instead of doing the same operation over and over again, wouldn’t it be better to just cache the result and reuse it later?

That is basically the idea, and you can even implement it without Google Collections. You could code something like this:

private final Map<Parameter, Result> cache = new HashMap<Parameter, Result>(100000);

public Result getResult(Parameter p) {
        if (!cache.containsKey(p)) {
            prepareCache(p); // the complex calculation
        }
        return cache.get(p);
}

Easy, yes?

But wait… there is a problem with such code. First, what if two calculations are triggered at almost the same time? You won’t get a wrong result, but the code will still do the calculation twice because of the race between threads. There is no easy solution for this; double-checked locking is simply broken.
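To make the race visible, here is a contrived sketch (all names are mine, and the “complex calculation” is just squaring) that uses a latch to force both threads past the containsKey check before either one computes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
	static final Map<Integer, Integer> cache = new HashMap<Integer, Integer>();
	static final AtomicInteger computations = new AtomicInteger();
	static final CountDownLatch bothChecked = new CountDownLatch(2);

	static int getResult(int p) throws InterruptedException {
		if (!cache.containsKey(p)) {
			bothChecked.countDown();
			bothChecked.await();            // both threads pass the check...
			computations.incrementAndGet(); // ...so both "compute"
			synchronized (cache) {
				cache.put(p, p * p);
			}
		}
		synchronized (cache) {
			return cache.get(p);
		}
	}

	public static void main(String[] args) throws Exception {
		Runnable r = new Runnable() {
			public void run() {
				try {
					getResult(7);
				} catch (InterruptedException ignored) {
				}
			}
		};
		Thread t1 = new Thread(r);
		Thread t2 = new Thread(r);
		t1.start();
		t2.start();
		t1.join();
		t2.join();
		System.out.println("computations = " + computations.get()); // 2, not 1
	}
}
```

The result is still correct, but the expensive work was done twice.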

And more problems may arise. Over time, so many parameters may be used that your map grows without limit. This is a standard problem for any cache implementation; using soft references or a third-party cache implementation may solve it.

In the end, our solution is not so simple anymore.

Here Google Collections can help us. MapMaker is a very powerful factory class that allows you to combine almost any map features you can think of. Need a map with soft-reference keys and weak-reference values? Need a map with strong keys and soft-reference values? MapMaker lets you do that… the easy way.

And it provides us with a computing map. A computing map is created with MapMaker by calling the method ‘makeComputingMap’ and supplying a function that transforms a Parameter into a Result.

Our earlier example becomes something like this:

private final Map<Parameter, Result> cache;

public Cache() {
    cache = new MapMaker().makeComputingMap(new Function<Parameter, Result>() {

            public Result apply(Parameter from) {
                return prepareCache(from);
            }
        });
}

public Result getResult(Parameter p) {
        return cache.get(p);
}

That is basically all. The documentation of the method reads:

Builds a map that supports atomic, on-demand computation of values.
Map#get either returns an already-computed value for the given key,
atomically computes it using the supplied function, or, if another thread
is currently computing the value for this key, simply waits for that thread
to finish and returns its computed value. Note that the function may be
executed concurrently by multiple threads, but only for distinct keys.

If an entry’s value has not finished computing yet, query methods
besides get return immediately as if an entry doesn’t exist. In
other words, an entry isn’t externally visible until the value’s
computation completes.

Map#get on the returned map will never return null. It
may throw:

  • NullPointerException if the key is null or the computing
    function returns null

  • ComputationException if an exception was thrown by the
    computing function. If that exception is already of type
    ComputationException, it is propagated directly; otherwise it is
    wrapped in a ComputationException.
Note: Callers of get must ensure that the key
argument is of type K. The get method accepts
Object, so the key type is not checked at compile time. Passing an object
of a type other than K can result in that object being unsafely
passed to the computing function as type K, and unsafely stored in
the map.

If put is called before a computation completes, other
threads waiting on the computation will wake up and return the stored
value. When the computation completes, its new result will overwrite the
value that was put in the map manually.

This method does not alter the state of this MapMaker instance,
so it can be invoked again to create multiple independent maps.

So you get synchronization for free. And best of all, the synchronization doesn’t lock the whole map; only threads that access the same key wait for each other.

But there is still a problem with that code… it still uses strong references for both keys and values. That’s the default if you don’t specify anything on the MapMaker. Your map will still grow without limit and you will eventually get an OutOfMemoryError.

Well, it’s easy… just add a call to softValues() to the creation.

private final Map<Parameter, Result> cache;

public Cache() {
    cache = new MapMaker().softValues().makeComputingMap(new Function<Parameter, Result>() {

            public Result apply(Parameter from) {
                return prepareCache(from);
            }
        });
}

public Result getResult(Parameter p) {
        return cache.get(p);
}

Now you have a proper implementation of a computing map. The values and keys will be held as long as you have enough memory, but once the JVM needs more memory, the GC will remove entries from the map. Your application will then have to do the complex calculation again, but I think that is the best trade-off we can get. Of course, you can always increase the JVM memory.

Note that you don’t want to use softKeys. Look at the Javadoc of softKeys:

Note: the map will use identity ({@code ==}) comparison
to determine equality of soft keys, which may not behave as you expect.
For example, storing a key in the map and then attempting a lookup
using a different but {@link Object#equals(Object) equals}-equivalent
key will always fail.

Hmm… that means your keys are considered equal only if they are the same object. If you recreate a Parameter with the same values, even if you override equals and hashCode correctly, you will not reuse the pre-computed value. On the other hand, using just softValues is enough, because once a value is GC-ed, its key is removed as well. See this bug entry for more information: http://code.google.com/p/google-collections/issues/detail?id=250 or this discussion in the group: http://groups.google.com/group/google-collections-users/browse_frm/thread/8e4bd19f5cfa9adb/24e9d9de34fadb6f?lnk=gst&q=soft+reference+identity#24e9d9de34fadb6f.

And if you still think you have a use case for soft references with equality semantics, I have a patch for MapMaker you can use. It’s not nice and pretty hacky, but it works as far as I can tell. I personally don’t use it anymore, but maybe I will in the future (if I find a strong use case, which I doubt).


Eclipse Tips: (Debugging) Ignoring certain classes from being stepped into

Some of us may have encountered the not-so-nice experience where you get a strange result from a method and decide to step into that method as deeply as possible. The problem is that somehow you lose track and can’t continue debugging, except by exiting debug mode or resuming. Neither is ideal, because you then have to inspect the application one more time.

The symptom of the problem is usually a “Source not found” message in your editor. If you try to ‘Step Return’, you are basically lost, because you cannot return to a class with source code.

For testing, you may try to write a log, for example:
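A minimal snippet along these lines (a hypothetical stand-in using java.util.logging, with a class name of my choosing) is enough to trigger the effect:

```java
import java.util.logging.Logger;

public class LogDemo {

	private static final Logger LOG =
		Logger.getLogger(LogDemo.class.getName());

	public static void main(String[] args) {
		// Stepping into this call quickly descends into JDK internals
		// that carry line information but have no attached source.
		LOG.info("test");
	}
}
```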


If you try to step into this call, you may get something like this (since this depends on your Eclipse configuration, you may not see exactly this screenshot):


Now try to ‘Step Return’ several times… do you still get the ‘Source not found’ message?


It’s actually easy to understand if you look at the debug view.

You have basically stepped into a long chain of method calls, most of which don’t have line information. Eclipse automatically skips these methods when you step into. But then, suddenly, it reaches the method equals from the class ‘Class’, which does have line information but no source attached.

So, if you are patient enough, you can just ‘Step Return’ several times (in this case, about 16 times) and you’ll return to your class (with nice source code 🙂). This is obviously too much work.

One alternative is to prevent certain classes from being stepped into. In Eclipse, this functionality is called ‘Step Filters’. To use it to ignore JDK classes, you have to configure Eclipse.

Go to the Preferences dialog, and open Java → Debug → Step Filtering.


Just ‘Select All’ and click ‘OK’. Now, if you are still trapped in the equals method, you can simply ‘Step Return’ and you’ll be back in your source code.

Once you’ve set the configuration in the Preferences, you can enable and disable Step Filters at any time from the Debug perspective by clicking this button in the Debug view.


Now… there is no reason to redo your entire debug session when you’re trapped in such a situation.

Happy debugging!