Using JNDI...
...to build flexible, technology-independent enterprise systems

A challenge of software architecture is to create software that can grow with the business and withstand changes to the technology with minimal redevelopment costs.

Business growth usually means increased loads on enterprise computer systems. As more customers and staff come on board, they put more transactions through and may also bring more complex security and access control requirements. To meet these demands, a well-architected enterprise system should be able to morph and scale as much as possible without a major redevelopment effort. It therefore seems a good idea to design flexible, scalable multitier enterprise systems from the start. At the outset, however, projects are often neither able nor willing to invest in the extra complexity such systems require. All too often enterprise systems start life as a simple two-tier JSP + database solution and are later reengineered or completely rewritten as a three-tier or even four-tier architecture at great additional cost.

This article describes a simple approach to enterprise systems design. It shows how a simple customization of the Java Naming and Directory Interface (JNDI) mechanism can be used to build highly adaptable and flexible applications in which system features such as logging, caching, distribution, transaction management, asynchronous or synchronous invocation, and access control are bound into the application after it has been built, without the need to reengineer or even rebuild the application code.

The approach described in this article is an integral part of the MetaBoss Target Software Model and has been used very successfully by MetaBoss users. MetaBoss is an integrated suite of tools for the design, development, and management of software systems through modeling. It utilizes Model Driven Architecture concepts and is primarily oriented toward enterprises using Java-based tools and technologies. MetaBoss is part of the growing family of Professional Open Source products. It is dual licensed and can be used under the Open Source GPL license or the MetaBoss Commercial license. More details are available from

Danger of Mixing Technology and Business Code Together
There are many fine enterprise technologies out there, and this article does not discuss the merits of using one over another. Almost all of them, however, share one dangerous trait - if they're used in the business code layer without extra precautions, they tend to impact the application code to the point where the application becomes inflexible when technological refactoring is needed. A business application developer coding to the standard recommended (or even enforced) by any one of these platforms will likely mix code dealing with technology-specific issues into pure business logic code. The resulting software is inflexible and too strongly coupled to the chosen technology.

This impact can be observed in practically all areas where technology touches the application code. Here are some examples offered by remote invocation technologies.

The typical requirement is to use prescribed super interfaces to represent remote objects. Most of the technologies require that Java classes and interfaces representing remote objects implement certain interfaces or extend particular abstract classes. In CORBA we must use org.omg.CORBA.Object as the super interface of a remote service object. In J2EE we must use various super interfaces to build enterprise beans. Web service interfaces, too (at least when using JAX-RPC), must extend java.rmi.Remote. I do acknowledge that most of these prescribed interfaces are very simple to implement; typically there is nothing to do except declare that the class implements the interface. A side effect, however, is that a number of technology-specific classes, interfaces, and their methods are visible to the business programmer.
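To make this leakage concrete, here is a small hypothetical illustration (the QuoteService names are invented, not from the article): the same business operation declared once as a plain Java interface and once as a JAX-RPC-style endpoint interface that must extend java.rmi.Remote and declare RemoteException.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class SuperInterfaceDemo {

    // Pure business contract: nothing but business types.
    interface QuoteService {
        double getQuote(String symbol);
    }

    // The same contract shaped for JAX-RPC: the super interface and the
    // checked exception are purely technology-motivated additions.
    interface QuoteServiceEndpoint extends Remote {
        double getQuote(String symbol) throws RemoteException;
    }

    public static void main(String[] args) {
        // The technology dependency is now visible in the type system.
        if (!Remote.class.isAssignableFrom(QuoteServiceEndpoint.class)) {
            throw new AssertionError("endpoint should extend Remote");
        }
        if (Remote.class.isAssignableFrom(QuoteService.class)) {
            throw new AssertionError("plain interface should stay technology-free");
        }
        System.out.println("ok");
    }
}
```

Every client of QuoteServiceEndpoint must now handle RemoteException, whether or not the call ever leaves the JVM.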

The typical requirement is to use prescribed value types to represent remote call parameters. Most of the technologies document a list of supported Java value types that can be used as parameters passed in and out of the remote services. These lists are typically very rich, and most commonly used types are supported. Each technology, however, still limits the set of Java types that can be used "as is" and leaves it up to the application code to deal with the rest. This leads to technology limitations creeping into the design of remote service interfaces.

The typical requirement is to catch and process special exception types. Most of the technologies communicate network failures to the application layer via special types of exceptions specific to the particular technology. This means the application code may need to catch these technology-specific exceptions.

The typical requirement is to use prescribed coding patterns. Most of the technologies require the use of very specific coding patterns, especially when it comes to connecting to or disconnecting from the remote service, using pervasive services, etc. For example, to make a remote call to the J2EE enterprise bean, the client needs to obtain an instance of the bean home interface via JNDI, then use it to obtain the instance of the bean remote interface and finally make the remote call. For a CORBA client to do the same remote call, it has to use the ORB singleton to connect to the naming service, then use the naming service to obtain a remote object reference, then use the Helper class to narrow the reference, and finally make a call.

To illustrate what may happen when these technologies are not insulated from the application code, consider the following examples.

An application programmer of a CORBA-based system has decided to use an array of org.omg.CORBA.Object elements to keep or pass around the list of previously called business services. This somewhat short-sighted decision was made because org.omg.CORBA.Object was a very convenient common super interface for all of the services. This decision entrenches CORBA technology into the business code and makes it more difficult to move this implementation to any other technology.

An application programmer of a J2EE-based system needs to create an entity bean dealing with value types from the java.awt.geom package. The types in this package are value objects that help represent geometric shapes such as Arc, Rectangle, etc. However, for a reason unknown to us, they don't implement the Serializable interface (at least as of JDK 1.4.2). This means the application programmer will probably have to make the entity bean accept the individual attributes comprising these values (i.e., separate X, Y, Height, and Width attributes instead of a Rectangle2D instance). The net effect is that the business application component design is impacted by the deficiencies of the technology.

An application programmer of a simple single-tier system has not given any attention to the future scalability and distribution requirements and has created a simple application using plain Java classes, without any logical layering or split of responsibilities between them. This decision makes it harder to split the system into the separate distributed components at a later stage.

Looking for a Better Approach
When we started to devise the MetaBoss Target Programming Model and adopt coding patterns for our code generators to conform to, we decided to aim for an ideal, business-"friendly" programming model. Such a model must be as technology-independent as possible, to the point where application code looks exactly the same irrespective of how the system will be deployed, and the actual deployment topology and technological mechanisms can be chosen after the application code is built. At the same time we wanted to stay within the standard facilities offered by the Java language as it exists today.

To achieve these goals we designed the programming model based on technology-independent business components and the use of the JNDI mechanism for application assembly (i.e., connecting components with their clients).

JNDI is an abstract framework that provides naming and directory functionality for Java programs. We chose JNDI because it has a number of advantages:

  • The mechanism is native to Java because the JNDI framework is contained inside J2SE and therefore available in any JVM. The Java community is familiar with JNDI usage patterns, and the component lookup pattern, which is used the most, is relatively easy to learn.
  • It offers complete separation of interface and implementation. The client has to only be aware of the component interface and some kind of location string for use in the lookup operation. The job of finding or creating the instance of the interface is delegated to the JNDI framework and the underlying naming and directory mechanisms.
The Basics of the MetaBoss Target Programming Model
From the component user point of view, the programming model has the following features.

Application code is split into components. Each component has a certain well-defined set of responsibilities. This division is driven entirely by the business logic, business needs, and logical layering of the application. Most important, it's not limited or dictated by technology issues. As such, the decision to have an entity-like component or service-like component can be made in any application, not necessarily a J2EE one. This means the component boundaries are the only places where platform mechanisms, such as remote invocation or caching, can be plugged in.

Each component is exposed to its clients via the component interface - the plain Java interface that's not required to have any properties beyond those required by business logic. Methods in these interfaces are able to use any Java class as a parameter and are able to declare and throw any number of any kind of exceptions. As before, there are no technology-motivated requirements to include anything in the input or output parameter list or to throw any particular kind of exception.

Each component interface is identified by a unique identifier - we call it the component URL. The component URL is a string formed as follows:

component:/<fully qualified name of component interface>

Each component interface exposes a string constant named COMPONENT_URL that contains the unique identifier of the interface. Listing 1 shows a sample of a typical component interface with the COMPONENT_URL string constant. Apart from the prefix, COMPONENT_URL is simply the fully qualified name of the Java interface exposed by the component.
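A minimal sketch of what such a component interface might look like (the OrderService names are illustrative, not taken from MetaBoss):

```java
public class ComponentUrlDemo {

    // A pure business interface; the only extra element is the
    // COMPONENT_URL constant used later for the JNDI lookup.
    public interface OrderService {
        String COMPONENT_URL = "component:/" + OrderService.class.getName();

        double calculateTotal(int quantity, double unitPrice);
    }

    public static void main(String[] args) {
        // Apart from the "component:/" prefix, the URL is simply the
        // fully qualified name of the interface.
        String expected = "component:/" + OrderService.class.getName();
        if (!OrderService.COMPONENT_URL.equals(expected)) {
            throw new AssertionError(OrderService.COMPONENT_URL);
        }
        System.out.println(OrderService.COMPONENT_URL);
    }
}
```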

To obtain an instance of the component, client code has to do a simple JNDI lookup (see Listing 2, lines 10 and 11). Note that the client code only needs to import and be aware of the component interface and nothing else. The actual instantiation or lookup of the particular component implementation occurs at runtime.
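The sketch below makes the lookup pattern runnable by wiring in a toy in-memory JNDI provider via NamingManager.setInitialContextFactoryBuilder. Only the last few lines mirror the client-side pattern; everything else is illustrative scaffolding under invented names, not MetaBoss code, which uses a full JNDI service provider instead.

```java
import java.lang.reflect.Proxy;
import java.util.Map;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.spi.NamingManager;

public class LookupDemo {

    public interface OrderService {
        String COMPONENT_URL = "component:/" + OrderService.class.getName();
        double calculateTotal(int quantity, double unitPrice);
    }

    // A "real" implementation, unaware of any lookup machinery.
    static class OrderServiceImpl implements OrderService {
        public double calculateTotal(int quantity, double unitPrice) {
            return quantity * unitPrice;
        }
    }

    public static void main(String[] args) throws Exception {
        // Toy provider: map component URLs straight to implementation
        // objects. A real provider would consult mapping rules instead.
        Map<String, Object> registry =
                Map.of(OrderService.COMPONENT_URL, new OrderServiceImpl());

        // Implement javax.naming.Context with a dynamic proxy that only
        // answers lookup(); everything else is out of scope here.
        Context ctx = (Context) Proxy.newProxyInstance(
                Context.class.getClassLoader(), new Class<?>[] {Context.class},
                (proxy, method, methodArgs) -> {
                    if ("lookup".equals(method.getName())) {
                        return registry.get(String.valueOf(methodArgs[0]));
                    }
                    if ("close".equals(method.getName())) {
                        return null;
                    }
                    throw new UnsupportedOperationException(method.getName());
                });

        // Install the toy provider for this JVM (allowed only once).
        NamingManager.setInitialContextFactoryBuilder(env -> {
            return environment -> ctx;
        });

        // The client-side pattern: import the interface, look it up by
        // its COMPONENT_URL, and call it. Nothing else is needed.
        OrderService service = (OrderService)
                new InitialContext().lookup(OrderService.COMPONENT_URL);
        if (service.calculateTotal(3, 2.0) != 6.0) {
            throw new AssertionError("unexpected total");
        }
        System.out.println("lookup succeeded");
    }
}
```

Swapping the registry contents changes which implementation the client receives, without touching the client code.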

This pattern may look similar to J2EE, and it is; however, there are several key differences. MetaBoss components expose pure business logic interfaces that are not polluted by technology-related "small print" as enterprise bean interfaces are. Moreover, the designer of these components is not concerned with which remote invocation mechanism, if any, will be used. This means application components promise to perform logical operations without disclosing where the operation will be executed and how request and response signals will be transmitted (if such a need to transmit exists at all).

From the component implementer point of view, the programming model has the following features:

  • Each component can have an unlimited number of different implementations. Implementations may be of the "proxy" kind or the "real implementation" kind. The proxy implements some secondary feature and calls another implementation of the same component to do the actual work. The real implementation fulfills the main business task of this component.
  • Each component implementation consists of two classes: the component implementation class and the implementation factory class.
  • The component implementation class must implement either the component or the java.lang.reflect.InvocationHandler interface:
    - Component interface: This approach produces the "strongly typed" dedicated implementation, which can only be used as the implementation of this particular component type. It's more typically used for the real implementations. Listing 3 shows an example of this kind of implementation.
    - Generic java.lang.reflect InvocationHandler interface: This approach produces the "loosely typed" component implementation, which may be used as the implementation for many different component types. It's more often used to produce reusable proxy implementations. Listing 4 shows an example of the simple logging proxy implementation.
  • The implementation factory class is a simple class that must implement the JNDI standard javax.naming.spi.ObjectFactory interface. The factory is only required to implement a single getObjectInstance() method that should return a new or cached component implementation. Listing 5 shows a simple factory that returns new implementation instances every time. More complex factories may cache implementation objects and return them multiple times.
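The sketch below illustrates all three pieces under invented names (it is not MetaBoss code, and its factory is called directly rather than by a JNDI provider): a strongly typed "real" implementation, a loosely typed logging proxy built on java.lang.reflect.InvocationHandler, and a javax.naming.spi.ObjectFactory that serves the proxy wrapped around the real implementation.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.spi.ObjectFactory;

public class ImplementationDemo {

    public interface OrderService {
        double calculateTotal(int quantity, double unitPrice);
    }

    // "Real implementation" kind: strongly typed, fulfills the business task.
    public static class OrderServiceImpl implements OrderService {
        public double calculateTotal(int quantity, double unitPrice) {
            return quantity * unitPrice;
        }
    }

    // "Proxy" kind: loosely typed via InvocationHandler, reusable for any
    // component type. Here it just records calls before delegating.
    public static class LoggingHandler implements InvocationHandler {
        public static final List<String> LOG = new ArrayList<>();
        private final Object target;

        LoggingHandler(Object target) { this.target = target; }

        public Object invoke(Object proxy, Method method, Object[] args)
                throws Throwable {
            LOG.add("calling " + method.getName());
            return method.invoke(target, args);
        }
    }

    // Implementation factory: the JNDI provider invokes this to obtain an
    // implementation instance. This one returns a fresh logging proxy
    // wrapped around a fresh real implementation each time.
    public static class OrderServiceFactory implements ObjectFactory {
        public Object getObjectInstance(Object obj, Name name, Context nameCtx,
                Hashtable<?, ?> environment) {
            return Proxy.newProxyInstance(
                    OrderService.class.getClassLoader(),
                    new Class<?>[] {OrderService.class},
                    new LoggingHandler(new OrderServiceImpl()));
        }
    }

    public static void main(String[] args) throws Exception {
        // Call the factory directly, standing in for the JNDI provider.
        OrderService service = (OrderService)
                new OrderServiceFactory().getObjectInstance(null, null, null, null);
        double total = service.calculateTotal(4, 2.5);
        if (total != 10.0 || !LoggingHandler.LOG.contains("calling calculateTotal")) {
            throw new AssertionError("proxy did not log or delegate correctly");
        }
        System.out.println(LoggingHandler.LOG);
    }
}
```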
A Few Practical Examples
Having created an application that follows this programming model, what can now be done with it, and how can the promised flexibility be used in practice?

Figure 1 shows a simple way to deploy our application where the component client and the component implementation code run in the same JVM without any additional mechanisms plugged in. It works well for the initial deployment of a simple application, perhaps a basic JSP application deployed entirely in a Java servlet engine such as Tomcat. Our experience has shown that this configuration is favored by business application developers because it offers them a way to test the business logic without the complexities of a full distributed deployment. In this scenario, JNDI is configured to return the actual component implementation as a result of the component lookup operation.

Figure 2 shows the introduction of an "invisible" architectural feature through the use of a special proxy. It works well for pluggable security, logging, or caching mechanisms. JNDI is configured to return an instance of the special proxy instead of the actual component implementation. Once the special proxy is invoked, it can do anything it likes before and/or after invoking the actual underlying implementation. Invoking the underlying implementation from the special proxy is also based on a JNDI lookup. Thanks to this use of the JNDI lookup pattern inside proxies, chains of proxies can be built, which keeps the proxies themselves simple, single-purpose classes.
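A proxy chain of this kind can be sketched with plain dynamic proxies (invented names; for simplicity each proxy here receives the next link directly, whereas in MetaBoss it would obtain it via a JNDI lookup): a logging proxy delegates to a caching proxy, which delegates to the real implementation.

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class ProxyChainDemo {

    public interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        final int[] realCalls = {0};

        // The "real" implementation counts how often it is actually invoked.
        Greeter real = name -> {
            realCalls[0]++;
            return "Hello, " + name;
        };

        // Single-purpose caching proxy: answers from the cache, otherwise
        // delegates to the next implementation in the chain.
        Map<String, String> cache = new HashMap<>();
        Greeter caching = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[] {Greeter.class},
                (p, m, a) -> cache.computeIfAbsent((String) a[0], real::greet));

        // Single-purpose logging proxy: logs, then delegates to the
        // caching proxy, the next link in the chain.
        StringBuilder log = new StringBuilder();
        Greeter logging = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[] {Greeter.class},
                (p, m, a) -> {
                    log.append(m.getName()).append(';');
                    return m.invoke(caching, a);
                });

        logging.greet("Ann");
        logging.greet("Ann"); // second call is served from the cache
        if (realCalls[0] != 1 || !log.toString().equals("greet;greet;")) {
            throw new AssertionError("chain did not behave as expected");
        }
        System.out.println("real implementation called once, logged twice");
    }
}
```

Because each proxy only knows "the next implementation," features can be stacked, reordered, or removed purely through configuration.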

Figure 3 shows the introduction of the "invisible" remote invocation mechanism, again through use of a special proxy. In this case the special proxy consists of two parts: client and server. At the client side JNDI is configured to return the instance of a remote proxy client instead of the actual component implementation. This proxy client makes remote calls to the proxy server, which in turn obtains the underlying actual implementation, again via JNDI. Similar to the previous example, this offers an opportunity to build chains of proxies. As an example, the logging proxy could be configured to run on the client side, server side, or even both sides of the remote invocation proxy.

A key point to notice is that in all the above examples the original component implementation code and the component client code were not modified or rebuilt in any way.

Under the Hood
The smarts of this approach are hidden inside the JNDI provider mechanism. JNDI is a well-supported standard, and many application servers provide a Java object repository facility with a JNDI interface, allowing them to store and retrieve many different types of Java objects. These repositories, however, are not quite what is needed for this mechanism to work. The unique feature required is the ability to serve different implementations of the same component to different callers. This facility is fundamental for proxy chaining, where a lookup from the client must return a proxy implementation, but a lookup from the proxy must return the "real" one or the next proxy in the chain.

This is why MetaBoss includes a special JNDI service provider implementation. This implementation is packaged in a single MetaBossComponentNamingProvider Java archive and can be used as a standalone library, totally separate from the rest of the MetaBoss suite. It has the following features:

  • A URL context factory implementation that looks after the component scheme (i.e., all URLs starting with component: prefix).
  • It only supports lookup operations. Most important, it doesn't support the bind operation. During lookup, after the decision to return a certain implementation is made, the JNDI Object Factory corresponding to the chosen implementation type is loaded and invoked "on the fly" in order to obtain the instance of this implementation.
  • It uses the set of client/interface/implementation mapping properties to understand which implementation needs to be returned from the particular lookup operation.
  • It can search the set of directories for JAR files with required implementation classes and dynamically load them into the isolated class loaders.
  • As with every JNDI service provider, it is itself a plug-in that can easily be replaced without any impact on the application code. For example, one of our clients has replaced this implementation with one that gets mapping instructions from a database instead of from system properties.
  • As with most other JNDI service providers, to plug it in you only need to put it on the main application class path.
For interested readers who wish to learn more, I suggest downloading the source and taking a look at it. I also recommend studying the basics of how to build a JNDI service provider before delving into the source.

The Last Piece of the Puzzle - The Implementation Mapping
This article has shown how to develop dedicated implementations of our components as well as generic ones. Thus far we have not really done much over and above what is required when building any well-architected solution. All that is left (apart from downloading the MetaBoss Component Naming JNDI Service Provider plug-in) is to configure the mapping rules defining "who gets what" - in other words, what kind of implementation or chain of implementations has to be served when a particular client or family of clients looks up a particular interface. The "out of the box" implementation supports reading these mapping rules as a set of provider-specific JNDI environment properties. JNDI allows these properties to come from a number of locations; in our experience, we tend to favor "out of code" locations such as application resource files.

The mapping entry is a key=value pair where the key describes the lookup operation (in terms of who is looking up what) in the form:

com.metaboss.naming.component.<interface match expression>[/<client match expression>]

and the value describes what has to be returned from the lookup in the form:

<implementation match expression>[(<implementation match expression>[?])]

In more detail:

  • com.metaboss.naming.component: A constant prefix used to distinguish particular provider-specific properties.
  • Interface match expression: A mandatory expression used to identify single or multiple component interfaces. It may contain wildcard characters.
  • Client match expression: An optional expression identifying the place in the code from which the interface is being looked up. This place in the code can be identified with any precision up to and including the name of the method from which the lookup has come. It can contain wildcard characters. The mapping entry where the client match expression is not specified is used for all clients not individually configured.
  • Implementation match expression: A mandatory expression identifying the object factory to be invoked in order to obtain an implementation. As shown above, these expressions can be chained in order to define the chain of implementations. Wildcards are not allowed since one and only one implementation must match. However, a small number of predefined keywords makes it possible to reference parts of the interface and/or client names in this expression.
Listing 6 shows a few sample mapping entries. As you can see, the mapping syntax offers considerable flexibility. The matching of the particular lookup operation tries the more explicit mapping entries (i.e., entries where the key is less vague) before attempting the less explicit ones.
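For illustration, hypothetical mapping entries following the forms above might look like this (all package, class, and factory names are invented; the exact syntax is defined by the provider):

```properties
# Every client looking up OrderService gets the real implementation factory:
com.metaboss.naming.component.com.acme.order.OrderService=com.acme.order.impl.OrderServiceFactory

# Web-tier clients get a logging proxy chained in front of the real one:
com.metaboss.naming.component.com.acme.order.OrderService/com.acme.web.*=com.acme.proxies.LoggingProxyFactory(com.acme.order.impl.OrderServiceFactory)
```

The second entry is more explicit (its key names a client family), so it would be matched before the first for lookups coming from com.acme.web classes.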

The mechanism described here offers an architecturally neutral, component-oriented approach to writing business applications. It protects the software development investment by separating business and architectural concerns. While providing many of the features found in Aspect-Oriented Programming frameworks, it stays within the boundaries of the core JDK. I have found this mechanism to be very useful and successful on a number of complex enterprise projects. It can be used with or without the rest of the MetaBoss suite.


Resources
  • Raw JNDI knowledge:
  • MetaBoss Component framework:
  • Burke, B. "It's the Aspects: A new paradigm." JDJ, Vol. 8, issue 12.
  • AspectJ, the Aspect Oriented Programming framework, part of Eclipse project:


    A Few Words About Aspect-Oriented Programming
    The last few years have seen the emergence of a new paradigm called Aspect-Oriented Programming (AOP). AOP allows new behavioral features to be "injected" at certain points in existing application classes. This approach allows the separation of secondary functionality that the main implementation code shouldn't be concerned about.

    AOP is a great approach and it can be used to separate the business code from technology. However, my experience is that it has a few weak points.

    First, the AOP paradigm is not native to the Java language. Most AOP frameworks require an XML document or Javadoc tags to define the join points in the main code, and require post-processing of the Java byte code in order to "implant" the callbacks from the main code to the secondary code. Some other frameworks take a different approach and extend the Java language, which adds quite a bit of complexity and requires a special compiler. The bottom line is that AOP is not native to Java.

    Another important weakness is that the main body of code has no knowledge of, or control over, where join points will be located. Most AOP frameworks allow you to place join points almost anywhere without limitation, such as on entry to or exit from any method, or on access to any variable. This can present a problem if unsuspecting main code is impacted by something occurring in the advice code or vice versa. Examples are multithreading or locking issues, long execution times, and unexpected exceptions.

    To illustrate why this lack of control may be dangerous, imagine my car's owner's manual. When talking about changing flat tires, it says "Be sure to use designated jacking positions provided on the car." When talking about towing it says "Vehicles fitted with IRS (Independent Rear Suspension) should always be tray towed." The manufacturer of my car has provided certain, well-defined join points for the lifting device and has not provided join points for tow cables simply because this particular car cannot be pulled. No doubt, attempts to ignore these original design limitations will be very damaging to my car.

  • About Rost Vashevnik
    Rost Vashevnik is a principal architect of MetaBoss - an Open Source MDA Tool Suite. He works as a consultant software architect in Australia.
