
Best Practices for JDBC Programming
Improving maintainability and code quality

While many new database persistence methods for Java programmers have been developed in recent years (e.g., entity beans, JDO, Hibernate, and many others), most database access code is still native JDBC. This statement doesn't express a preference, just an observation. Reasons for JDBC's popularity include: (1) it was first, (2) it works, and (3) most developers already know it.

I first documented "best practices" for using the JDBC libraries for JDJ in April 2000 (Vol. 5, issue 4). For the purposes of this article, the "best practices" goals for JDBC programming are maintainability, portability, and performance. Maintainability refers to the ease with which developers can understand, debug, and modify JDBC code that they didn't write. Portability refers to the ease with which JDBC code can be used with alternate databases. It turns out that JDBC does not make database programming as platform independent as I would like. In addition, I consider portability a noble goal even if you have no current plans to switch database vendors. Who knows how long your code will be around and what kinds of changes will have to be made to it? Performance refers to optimizing the time and/or memory needed to run JDBC code.

Newer recommendations since my first article on the subject are the following.

Utilize Connection Pooling Techniques
Establishing database connections, depending upon platform, can take from 30 to 1,000 ms. This can be a meaningful amount of time for many applications if done frequently. Fortunately, all EJB containers and most servlet engines provide connection pooling features. Connection pooling provides a way for database connections to be established separately before your application needs them. They are then used and reused throughout the lifetime of the application. Furthermore, they're usually not difficult to configure and use. Listing 1 contains code that illustrates the use of connection pooling.

Listing 1 assumes the presence of a J2EE implementation. Applications without access to J2EE constructs can implement connection pooling features using one of many open source products; I recommend Commons-DBCP from Apache-Jakarta (see the resources at the end of this article). There is no need for developers to write their own connection pooling packages these days.
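In a J2EE container, the pool is typically exposed as a DataSource bound in JNDI. A minimal sketch follows; the JNDI name is an assumption, so substitute whatever name your container binds the pool under:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class ConnectionPoolExample {
    // Illustrative JNDI name; your container's binding will differ.
    private static final String POOL_JNDI_NAME = "java:comp/env/jdbc/MyDataSource";

    /** Borrows a connection from the container-managed pool. */
    public static Connection getPooledConnection() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup(POOL_JNDI_NAME);
        return ds.getConnection(); // borrowed from the pool, not a new physical connect
    }
}
```

Note that calling close() on the returned connection gives it back to the pool rather than physically disconnecting.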

Connection pooling enhances performance by reducing the number of physical database connects and disconnects. Furthermore, it's common for connection pooling to have testing features that asynchronously test connections before your application needs them. In this way, pools provide applications with resilience to database outages.

It's very important that all connections created are closed. Note that with connection pooling, issuing a close() merely returns the connection to the pool. Usually, it doesn't result in a database disconnect.

Be Diligent About Closing All JDBC Objects
This practice is a reiteration of my previous article, but not closing JDBC objects after use is the most common mistake I see by far. Many developers don't understand that they are supposed to close ResultSet, Statement, PreparedStatement, and CallableStatement objects as well as Connection objects. Many are under the assumption that closing the connection will cascade into a close for these other types of objects. Some JDBC drivers do, but many don't.

Resource leaks caused by not closing JDBC objects are particularly aggravating because they may not surface until the code is run under load. In development, you may not generate enough resource leakage to cause a problem. Furthermore, some JDBC vendors override finalize() to release database resources after objects are garbage collected. If leaked objects are properly closed via finalize(), it's much harder to see the leak in development as the garbage collector corrects the leak. Under high load, JDBC objects may not be garbage collected soon enough to avoid exceeding database resources.

I guard against JDBC resource leaks by creating and closing JDBC objects within the same method. For example, the method that creates a connection will also be the method to close it. Furthermore, I close these objects in a finally block to ensure that they get closed in error conditions as well.

The reason many developers don't close JDBC objects in a finally block is that it's programmatically inconvenient. The fact that JDBC objects throw a checked exception on close() will cause you to encapsulate the closes in try/catch logic nested within the finally block. An example of how to effectively close JDBC objects is presented in Listing 2.
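Listing 2 isn't reproduced here, but the pattern looks like the following sketch. The DataSource parameter and the customer table and column names are illustrative assumptions:

```java
import java.sql.*;
import javax.sql.DataSource;

public class CustomerReader {
    /** Looks up a customer name; all JDBC objects are created and closed in this method. */
    public static String findCustomerName(DataSource ds, int customerId) throws SQLException {
        Connection conn = null;
        PreparedStatement pStmt = null;
        ResultSet results = null;
        try {
            conn = ds.getConnection();
            pStmt = conn.prepareStatement("select name from customer where customer_id = ?");
            pStmt.setInt(1, customerId);
            results = pStmt.executeQuery();
            return results.next() ? results.getString(1) : null;
        } finally {
            // Each close gets its own try/catch so one failure doesn't prevent the others.
            if (results != null) { try { results.close(); } catch (SQLException e) { /* log */ } }
            if (pStmt != null)   { try { pStmt.close(); }   catch (SQLException e) { /* log */ } }
            if (conn != null)    { try { conn.close(); }    catch (SQLException e) { /* log */ } }
        }
    }
}
```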

To make closing a JDBC object more palatable, I usually create generic "close" utilities, like those I've included in the open source project CementJ (see the resources at the end of this article), which make this close logic a one-liner. For example, using a generic close utility, the close of the PreparedStatement and ResultSet objects in Listing 1 becomes one line:

DatabaseUtility.close(results, pStmt);

CementJ will check for nulls before attempting to close. It will also log exceptions received on the close, but not throw an exception (as there is typically nothing to be done anyway). Using a generic close utility, Listing 2 can be rewritten so that it's considerably shorter and less complex; Listing 2a illustrates this.

For those who wish to apply this concept but don't want to add an additional product dependency, Listing 3 illustrates an implementation for a generic close utility.
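Listing 3 itself isn't reproduced here; a minimal stand-in (a sketch in the same spirit, not the actual CementJ or Listing 3 source) might look like:

```java
import java.sql.*;

/** Generic JDBC close utility: ignores nulls, logs (rather than throws) close failures. */
public final class JdbcCloser {
    private JdbcCloser() {}

    /** Closes any mix of ResultSets, Statements, and Connections, in the order given. */
    public static void close(Object... jdbcObjects) {
        for (Object obj : jdbcObjects) {
            try {
                if (obj instanceof ResultSet)       ((ResultSet) obj).close();
                else if (obj instanceof Statement)  ((Statement) obj).close();
                else if (obj instanceof Connection) ((Connection) obj).close();
            } catch (SQLException e) {
                System.err.println("Error closing " + obj.getClass().getName() + ": " + e.getMessage());
            }
        }
    }
}
```

Note that the single instanceof Statement branch also covers PreparedStatement and CallableStatement, since both extend Statement; separate branches for them would be dead code.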

Check Stored Procedure Use
One of the most common questions I'm asked is if SQL should be embedded in stored procedures instead of the Java source. There's also a common perception that stored procedures always perform better than SQL statements executed directly within Java. Unfortunately, the answers to these questions aren't as simple as some would like.

There are really two things to consider with stored procedures: portability and performance. From a portability standpoint, stored procedures written in a proprietary language (such as Oracle's PL/SQL or DB2's SQL Procedure Language) make it much more difficult to migrate the application to another database should that become a business priority. These proprietary languages have unique features that might be difficult to replicate in other proprietary languages.

If you do write stored procedures, put them in a nonproprietary language. Some database vendors are supporting stored procedures written in Java and other third-generation languages. This makes them easier to move.

As to performance ramifications, the specifics can differ between databases. Stored procedures don't always perform better than embedded SQL. The soundest advice on this point is to comparatively measure stored procedure and embedded SQL performance for your database. As a general rule, CPU-intensive operations are bad candidates for stored procedures. For most databases, most of the performance gain from a stored procedure comes from reducing the number of network transmissions, not from its being parsed and physically stored in the database.

When considering whether or not to make something a stored procedure, I ask the question: How many network transmissions will be saved by making this process a stored procedure? If the answer is zero, performance will most likely not be improved.
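When a stored procedure does pay off, it's invoked from JDBC through a CallableStatement. A sketch, in which the procedure name and parameter layout are illustrative assumptions:

```java
import java.sql.*;

public class StoredProcCaller {
    /** Calls a stored procedure with one IN and one OUT parameter.
        The procedure name and parameter types are illustrative assumptions. */
    public static int callOrderTotal(Connection conn, int orderId) throws SQLException {
        CallableStatement cStmt = null;
        try {
            cStmt = conn.prepareCall("{call compute_order_total(?, ?)}");
            cStmt.setInt(1, orderId);                       // IN parameter
            cStmt.registerOutParameter(2, Types.INTEGER);   // OUT parameter
            cStmt.execute();
            return cStmt.getInt(2);
        } finally {
            if (cStmt != null) { try { cStmt.close(); } catch (SQLException ignore) {} }
        }
    }
}
```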

Utilize Generated Key Value Retention Feature
One of the classic problems in database programming is how to handle generated unique key values. Most database vendors supply ways to automatically generate the key value and make it available when new rows are inserted. However, if an application needs that key value for other processing, it would have to read the row just inserted to get the generated value. For example, say purchase order numbers are dynamically generated by the database. A purchasing application would need the generated order number to put on the line items associated with that order.

The JDBC 3.0 specification provides a way for the JDBC driver to return generated values with the insert operation. This means that you don't have to code and execute an additional select statement to retrieve the key value. Use of this feature can streamline code, but may or may not reduce the overhead of issuing an additional select, depending upon the specific JDBC driver implementation. A word of caution: some database vendors do not yet support this feature of the JDBC specification.

To utilize the generated key feature using JDBC, execute a SQL statement that performs an insert. After execution, issue a getGeneratedKeys() from the Statement or PreparedStatement. The return will be a ResultSet with all generated keys (see Listing 4).
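Listing 4 isn't reproduced here; the pattern looks roughly like the following sketch, where the purchase_order table and its columns are assumptions:

```java
import java.sql.*;

public class OrderInserter {
    /** Inserts an order and returns the database-generated order number. */
    public static long insertOrder(Connection conn, int customerId) throws SQLException {
        PreparedStatement pStmt = null;
        ResultSet keys = null;
        try {
            pStmt = conn.prepareStatement(
                "insert into purchase_order (customer_id) values (?)",
                Statement.RETURN_GENERATED_KEYS);   // ask the driver to retain generated keys
            pStmt.setInt(1, customerId);
            pStmt.executeUpdate();
            keys = pStmt.getGeneratedKeys();        // ResultSet with one row per generated key
            if (keys.next()) {
                return keys.getLong(1);
            }
            throw new SQLException("No generated key returned");
        } finally {
            if (keys != null)  { try { keys.close(); }  catch (SQLException ignore) {} }
            if (pStmt != null) { try { pStmt.close(); } catch (SQLException ignore) {} }
        }
    }
}
```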

Separate JDBC Code from Business Logic
This is a design-level practice as opposed to a coding recommendation. I usually separate JDBC code into separate classes I call data access objects (DAOs). Data access objects manage access to relational databases as well as other types of persistent storage.

For convenience, I separate DAO classes in the package hierarchy (e.g., com.acme.appname.dao). Some developers also add a DAO suffix to data access object names; for example, a customer DAO might be named CustomerDAO.

The primary reasons to separate data access from the rest of the application are that it's easier to switch data sources and it's easier to share DAOs between functions or applications. Medium- to large-sized businesses in particular are likely to have multiple applications using the same data access logic. For example, it's common for an application to need customer information in several different functions. The act of logging into a Web site is different from e-mailing a purchase receipt; however, both processes need customer information. Separating the access makes it easier to reuse.

Since JDBC code is usually in separate packages, it's easier to locate should you wish to change database vendors or support multiple database vendors. As data access code is easier to find, the separation allows you to more easily determine the impact of database structure changes as well.
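The DAO boundary can be expressed as a plain interface that business logic codes against; all names in this sketch are illustrative assumptions:

```java
import java.sql.SQLException;
import java.util.List;

/** Data access contract for customer information. Business logic depends only on
    this interface; the JDBC implementation lives in the dao package. */
public interface CustomerDAO {
    Customer findById(long customerId) throws SQLException;
    List<Customer> findByLastName(String lastName) throws SQLException;

    /** Minimal value object returned by the DAO. */
    class Customer {
        public final long id;
        public final String lastName;
        public Customer(long id, String lastName) { this.id = id; this.lastName = lastName; }
    }
}
```

Switching database vendors then means supplying a new implementation class; callers are unaffected.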

Consider Query Fetch Size for Large Result Sets
This practice is aimed at improving performance. The fetch size is the number of rows physically retrieved from the database at one time by the JDBC driver as you scroll through a query ResultSet with next(). For example, suppose you set the query fetch size to 100. When you retrieve the first row, the JDBC driver retrieves the first 100 rows (or all of them if fewer than 100 rows satisfy the query). When you retrieve the second row, the JDBC driver merely returns the row from local memory - it doesn't have to retrieve that row from the database. This feature improves performance by reducing the number of calls (which are frequently network transmissions) to the database.

To set the query fetch size, call setFetchSize() on the Statement (or PreparedStatement or CallableStatement) before execution. Listing 5 provides an example of setting the query fetch size. (Listings 5-6 are available for download.)
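The call itself is a one-liner on the statement before execution; the following sketch streams a query with an assumed SQL string supplied by the caller:

```java
import java.sql.*;

public class FetchSizeExample {
    /** Counts rows of a large query, hinting the driver to fetch 100 rows per round trip. */
    public static int countRows(Connection conn, String sql) throws SQLException {
        PreparedStatement pStmt = null;
        ResultSet results = null;
        int rowCount = 0;
        try {
            pStmt = conn.prepareStatement(sql);
            pStmt.setFetchSize(100); // rows retrieved per database call; a hint the driver may ignore
            results = pStmt.executeQuery();
            while (results.next()) {
                rowCount++;
            }
            return rowCount;
        } finally {
            if (results != null) { try { results.close(); } catch (SQLException ignore) {} }
            if (pStmt != null)   { try { pStmt.close(); }   catch (SQLException ignore) {} }
        }
    }
}
```

Per the JDBC specification, the fetch size is only a hint; drivers are free to ignore it.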

As a general rule, setting the query fetch size is only effective for large result sets. If you set the fetch size much larger than the number of rows retrieved, it's possible that you'll get a performance decrease, not increase. Furthermore, the benefit you get from increasing the fetch size diminishes the higher it's set. I typically set this value to 100 for large result sets.

The performance improvement gained from setting the query fetch size varies widely depending upon the database platform and JDBC driver being used. I've seen performance improvements as large as 50%. Performance increases vary depending upon the speed of the network. Generally, the slower the network, the more performance increases can be gained by manipulating the fetch size.

Consider Update Batching
This practice is aimed at improving performance. In situations where you want to issue several inserts, updates, or deletes in the same unit of work, update batching allows you to group those statements together and transmit them to the database as one set of instructions. Like setting the query fetch size, update batching works by reducing the number of network transmissions between the application and the database.

For example, consider a Web site for online sales. When customers create orders, they often order multiple items. When the order is recorded, usually the items on the order are recorded at the same time. Update batching allows the multiple inserts for the order to be transmitted to the database at once.

Update batching is supported for SQL issued via PreparedStatement, CallableStatement, and Statement objects. An example of update batching is presented in Listing 6.
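Listing 6 isn't reproduced here; the pattern, with illustrative table and column names, looks like:

```java
import java.sql.*;
import java.util.List;

public class OrderItemWriter {
    /** Inserts all items for an order in one batched transmission.
        Table and column names are illustrative assumptions. */
    public static int[] insertItems(Connection conn, int orderId, List<String> skus)
            throws SQLException {
        PreparedStatement pStmt = null;
        try {
            pStmt = conn.prepareStatement("insert into order_item (order_id, sku) values (?, ?)");
            for (String sku : skus) {
                pStmt.setInt(1, orderId);
                pStmt.setString(2, sku);
                pStmt.addBatch();          // queued locally; nothing sent to the database yet
            }
            return pStmt.executeBatch();   // one transmission for the whole set of inserts
        } finally {
            if (pStmt != null) { try { pStmt.close(); } catch (SQLException ignore) {} }
        }
    }
}
```

executeBatch() returns one update count per batched statement; if the inserts form a unit of work, run the batch inside a single transaction.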

As with manipulating the query fetch size, the amount of performance improvement with batching statements varies between database vendors. I've seen performance improvements as large as 92% from batching statements. Also, the slower the network, the greater the opportunity for performance improvement.

Changes on the Horizon
The largest advance in the JDBC 3.0 specification is the addition of RowSet support. RowSets are ResultSets that eliminate the need to explicitly declare and use Statements and PreparedStatements for SQL queries. They were explicitly added to support the JavaBean specification. The supporting interfaces for RowSets are being included in Tiger (J2SE 5.0). Optionally, you can download the JDBC RowSet Implementations 1.0 (co-bundled with JWSDP 1.4) for use with v1.4 of the JDK.

RowSets can be connected or disconnected. Connected RowSets maintain an underlying connection to the database. Disconnected RowSets allow query results to be serialized and transmitted between JVMs, possibly on different servers. An interesting feature of disconnected RowSets is that you can update the RowSet and post those changes to the originating database at a later time. Another interesting feature is that RowSets can be serialized as XML documents.

As this is a new feature, providing "best practices" regarding the use of RowSets is premature. As with support for the generated keys feature, I would expect support for the RowSet feature to differ among the database vendors.

We've discussed several individual ways to make JDBC code more performant, maintainable, and portable. I always recommend team code reviews and documented coding standards as ways to develop more best practices and consistently apply existing coding techniques. Team code reviews also improve the maintainability and general quality of code within an application.


  • JDBC Technology Page:
  • Commons DBCP - Connection Pooling package:
  • JDBC RowSet Implementation Download:
  • JDBC Performance Tips:
  • The J2EE Architect's Handbook:


    Core Best Practices
    The practices recommended in my April 2000 article were the following:

  • Use host variables for literals - avoid hardcoding them: This practice involves using the java.sql.PreparedStatement instead of java.sql.Statement in cases where you need to supply values for "where" clauses in SQL statements. This eliminates database overhead in figuring out how to access your data. One noted exception is Oracle. Oracle, in recent releases, has tuned Statement processing to the point that it slightly outperforms PreparedStatement processing for small numbers of statement executions.
  • Always close Statements, PreparedStatements, and connections: This practice involves always closing JDBC objects in a finally block to avoid resource limitations found in many databases.
  • Consolidate formation of SQL statement strings: This practice involves placing the SQL statement text in a field that is declared static final to reduce string processing as well as make SQL statements easy to identify and read.
  • Use delegate model for a database connection: This practice involves consolidating any database-specific tuning in a custom implementation of connection so that it's possible to take advantage of database-specific tuning features without sacrificing too much in the way of portability.
  • Use Date, Time, and Timestamp objects as host variables for temporal fields (avoid using strings): This practice eliminates conversion overhead in the database and often the application.
  • Limit use of column functions: This practice makes it easier to switch database vendors.
  • Always specify a column list with a select statement (avoid "select *"): This practice insulates your code against tuning activities of database administrators.
  • Always specify a column list with an insert statement: This practice insulates your code against tuning activities of database administrators.
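The first and third practices above can be sketched together; the table and column names here are assumptions:

```java
import java.sql.*;

public class CustomerFinder {
    // SQL text consolidated in a static final field; the '?' host variable replaces
    // a hardcoded literal so the database can reuse the parsed statement.
    private static final String SELECT_CUSTOMER =
        "select customer_id, last_name, create_date from customer where customer_id = ?";

    public static boolean customerExists(Connection conn, int customerId) throws SQLException {
        PreparedStatement pStmt = null;
        ResultSet results = null;
        try {
            pStmt = conn.prepareStatement(SELECT_CUSTOMER);
            pStmt.setInt(1, customerId);
            results = pStmt.executeQuery();
            return results.next();
        } finally {
            if (results != null) { try { results.close(); } catch (SQLException ignore) {} }
            if (pStmt != null)   { try { pStmt.close(); }   catch (SQLException ignore) {} }
        }
    }
}
```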
    About Derek Ashmore
    Derek Ashmore is a consultant and the author of The J2EE Architect's Handbook (see the resources above).


