Digital Edition

Connecting the Java World to Grid-Enabled Databases
Consolidate IT resources and optimize usage

Grid computing is not a new concept; its adoption within the enterprise, however, has given birth to a variant called enterprise grid computing, which is being embraced by the entire IT industry. Enterprise grid computing aims to consolidate IT resources, including both infrastructure software and applications, and optimize their usage, cutting costs substantially along the way. Since Java and J2EE are widely used as enterprise software platforms, how do they align with this vision?

This article outlines a set of challenges that JDBC faces as the database connectivity layer within enterprise grid environments and illustrates how the Oracle Database 10g JDBC driver addresses these challenges. First, I'll introduce the concept of enterprise grid computing; then I'll examine how Java and J2EE operate in grid environments and identify their database connectivity requirements; and finally, I'll discuss the features of the Oracle Database 10g JDBC driver that address those requirements.

Enterprise Grid Computing
Commercial software vendors use an assortment of terms, such as utility computing, on-demand computing, and autonomic computing, to describe enterprise grid computing. Regardless of which term is used, products that support enterprise grid computing all have the same set of common functional denominators:

  • Resource consolidation
  • Resource virtualization
  • Policy-based resource management
  • Provisioning or allocation
The chief aim of enterprise grid computing, as noted above, is to help enterprises consolidate IT resources and optimize usage, reducing infrastructure and labor costs. This is a slight but important divergence from the original concept of grid computing, now classified as academic grid computing, in which massive computational capacity is assembled from a network of many small, widespread computers and used to perform calculations, broken into autonomous chunks, that couldn't be completed even on today's largest supercomputers. A good example of this is the SETI@home project.

The convergence of recent hardware and software advances has made resource virtualization possible and allowed the enterprise grid to be constructed. On the hardware side, these advances include networked storage devices (like storage grids) and low-cost, modular hardware components (such as blades); on the software side, they include improvements in networking, Web services, databases, application servers, and management frameworks.

Although no specific enterprise grid computing standards have been established, there is a general move toward the concept of service-oriented architecture (SOA), which is based on and makes extensive use of existing Web services standards and specifications. SOA makes it possible to construct architectures in which client applications can simply register, discover, and use the services deployed over the grid. This move is spearheaded by academic and research proposals, such as the Open Grid Services Architecture (from the Global Grid Forum), and by middleware vendors through their participation in Web services standards bodies. A new consortium, the Enterprise Grid Alliance, has been formed with the goals of developing enterprise grid solutions and accelerating the deployment of grid computing in enterprises.

Figure 1 illustrates a typical enterprise grid platform. While a complete enterprise grid computing discussion is beyond the scope of this article, I hope I've given enough information to encourage you to learn more about it.

Java and J2EE Containers in the Enterprise Grid
Very few enterprises run their business with just stand-alone Java or J2EE servers. Typical business applications employ more complex architectures consisting of load balancers, HTTP listeners, Web caching servers, J2EE containers, directory servers, resource managers and monitors, and management services.

In recognition of this, vendors such as Oracle, with its Oracle Application Server 10g, are actively supplementing their infrastructure with provisioning, monitoring, registration, discovery, and manageability mechanisms. This allows their products to be used in the more complex system architectures required by modern SOA-based business applications.

Similarly, the J2EE world, realizing that it needs to broaden its support to interoperate with other applications and environments in a loosely coupled manner, is aligning with enterprise grid computing and SOA concepts. It's doing this by including core Web services standards (such as SOAP, WSDL, UDDI, and JAX-RPC) and numerous other Web services specifications including WS-I, WSIF, security, transaction, reliable messaging, events, peer-discovery, policy, orchestration, choreography, provisioning, and service-level agreement.

In the scientific and research worlds, the Globus Toolkit, from the Globus Alliance, is a reference implementation of the Open Grid Services Infrastructure (OGSI); the toolkit lets Java components be exposed as OGSI services. OGSA-DAI (Data Access and Integration) is another scientific and research initiative, aimed at addressing data services requirements in grid environments; leading database vendors are participating in it.

Enterprise grid computing creates an extremely dynamic environment where resources are automatically augmented or diminished based on business priorities captured in allocation policies. The current version of the JDBC specification (and implementations based on it) makes working in such environments impractical and reduces the ability of Java and J2EE applications to achieve all the benefits of enterprise grid computing. To be more explicit, let's look at some of the issues faced when working in an enterprise grid computing environment:

  • Most JDBC applications today use connect strings that explicitly specify the hostname or IP address of the target database. This won't work when processing nodes are dynamically added to or retrenched from the cluster of database servers.
  • Enterprise grid environments must scale well and serve thousands of concurrent clients. This makes efficient load-balancing and database connection pooling mandatory; traditional connection pooling mechanisms that cache only connections with the same identity are useless.
  • With databases and other resources being dynamically managed and allocated for service, connection retention and management is critical. Mechanisms to resize or refresh stale connections or to search for and reclaim abandoned ones are required.
  • Database clients, whether they're Java programs or J2EE containers, must be notified about changes in the database server configuration as quickly as possible. Application servers (and J2EE containers) that rely on unpredictable timeout mechanisms will not be able to provide a reliable, guaranteed service.
Java Database Connectivity Within the Enterprise Grid
Virtualizing the Database as a Service
As shown in Figure 1, clusters are pervasive at every level within the enterprise grid: disk and network storage servers, database servers, and application servers. Most database vendors now allow a single database to be concurrently managed by multiple servers or instances.

Although database clusters are easily understood, there are fundamental differences between the architectures supported by the various current implementations; for example, a shared-nothing approach and a shared-disk approach provide different levels of capability. This article focuses on the Oracle Database Real Application Clusters (RAC) model, in which any database server instance can address the entire database space, hosted on shared networked storage.

The typical JDBC approach to specifying how to connect to a database is to provide the details for the target database in the form of a tuple containing <host>:<port>:<sid>. This is set as the URL attribute of a data source definition or directly in the connect string when establishing the JDBC connection.

String connString="jdbc:oracle:thin:@prodHost:1521:ORCL";

In the Oracle Database RAC model, the physical server hosts and database instances that make up the cluster are represented as a single logical entity known as a service. A client application connects to the logical database service without any real knowledge of the actual host or database server instance that's used in the cluster. By connecting to the service, as illustrated by the Enterprise Grid Model in Figure 2, and not to an actual physical server, clients can be insulated from changes made to the physical elements of the cluster as well as from node failures.

To use a logical database entity in Java applications, JDBC connection strings must be able to accept service-based connect strings instead of physical host details. In the Oracle Database RAC model, services are represented as "/<service-name>".

// service-based connect string (the host and service names are placeholders)
String connString="jdbc:oracle:thin:@//myhost:1521/myservice";

The Connection Cache Manager
In the enterprise grid world, where applications are dynamically provisioned to run on different servers and even on instances of Java Virtual Machines (JVMs), it's imperative that the application's connection caches can be managed to permit the most efficient use of available resources. One approach to this is a new component called the Connection Cache Manager, which offers a centralized way to manage one or more connection caches. A single instance of Connection Cache Manager per JVM manages all of the connection caches used by applications running on that JVM.

The Connection Cache Manager plays two major roles: it manages and maintains the caches, and it binds each connection cache to its data source.

Managing and Maintaining Cache
The Connection Cache Manager is responsible for creating each cache, maintaining its state, and terminating it. It is aware of every connection cache and manages each one independently. A rich set of APIs is provided to perform these tasks.
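The lifecycle just described can be illustrated with a minimal plain-Java sketch of a per-JVM manager of named caches. This is not the Oracle API; the class and method names are invented for illustration, and strings stand in for pooled connections.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class CacheManagerSketch {
    // One manager instance per JVM, mirroring the single Connection Cache
    // Manager described in the article.
    private static final CacheManagerSketch INSTANCE = new CacheManagerSketch();
    private final Map<String, Deque<String>> caches = new HashMap<>();
    private final Map<String, Integer> checkedOut = new HashMap<>();

    private CacheManagerSketch() {}

    public static CacheManagerSketch getInstance() { return INSTANCE; }

    // Explicit cache creation; the real manager can also create a cache
    // transparently on the first getConnection() call.
    public void createCache(String name) {
        caches.putIfAbsent(name, new ArrayDeque<>());
        checkedOut.putIfAbsent(name, 0);
    }

    // Terminate a cache, e.g. when its data source is closed.
    public void removeCache(String name) {
        caches.remove(name);
        checkedOut.remove(name);
    }

    // Hand out a cached connection, or null if none is available.
    public String checkOut(String name) {
        Deque<String> cache = caches.get(name);
        if (cache == null || cache.isEmpty()) return null;
        checkedOut.merge(name, 1, Integer::sum);
        return cache.poll();
    }

    // Release a connection back into the named cache.
    public void checkIn(String name, String conn) {
        caches.get(name).push(conn);
        checkedOut.computeIfPresent(name, (k, v) -> Math.max(0, v - 1));
    }

    // Monitoring hooks: connections available vs. checked out.
    public int available(String name) {
        Deque<String> cache = caches.get(name);
        return cache == null ? 0 : cache.size();
    }

    public int checkedOutCount(String name) {
        return checkedOut.getOrDefault(name, 0);
    }
}
```

The two monitoring methods correspond to the kind of information the article mentions: the number of connections available in a cache and the number checked out.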

The Connection Cache Manager supports the coexistence of more than one cache. Each cache is identified by a unique name and is tightly bound to a data source. A cache is created either transparently, when the first getConnection() request is made on a cache-enabled data source, or explicitly in the middle tier via the Connection Cache Manager API. Once created, a cache may be removed explicitly via the Connection Cache Manager API or implicitly when its data source is closed.

Caches can be monitored using the Cache Manager APIs, enabling access to information such as the number of connections checked out and the number of connections available in the cache.

Binding a Connection Cache to the Data Source
As illustrated in Figure 3, the Connection Cache Manager makes sure a connection cache is associated with its data source object every time a cache is created, removed, or reinitialized. This ensures an efficient way to access the connection cache and retrieve connections from it every time the getConnection() method is invoked on a data source.

Fast Application Notification (FaN)
As I alluded to earlier, the dynamic nature of an enterprise grid environment results in nondeterministic system changes that may have flow-on effects on applications that are executing.

To allow application clients in an enterprise grid environment to function in the most efficient manner, a mechanism must be provided that performs the following tasks:

  • Monitor changes in database configuration and notify clients as fast as possible.
  • Balance connection requests across all active instances.
  • Fail over established connections to surviving instances.
Oracle Database 10g JDBC offers these capabilities by combining the Connection Cache Manager, Oracle Database RAC, and notification events through a notification service to quickly inform affected components of changes occurring in the grid.

Let's look at how these pieces work together. When a new instance is added to a database cluster, or when an instance is retrenched from a cluster (instance stopped or node dies), Oracle Database RAC generates an event that indicates what happened. The Oracle Notification Service (ONS) detects and distributes this event to components that have registered as interested in such events.

Fast Application Notification plays an important role in Oracle Database 10g JDBC: the connection caches are alerted when database instances are affected and can take corrective action.

The mechanism is enabled on a cache-enabled data source by setting the data source property FastConnectionFailoverEnabled to true (see Listing 1).

With Fast Application Notification, event detection and notification are nearly instantaneous, occurring in seconds rather than the minutes required by traditional timeout mechanisms.

Adding Database Instances: Load Balancing
Adding new database instances to an Oracle Database RAC cluster generates UP events. These automatically trigger the balancing of all allocated connections over all active RAC instances without waiting for application connection retries/requests.

Let's consider a basic example in which we have:

  • A cluster with two nodes in a database service, one instance per node
  • A cache size of 150 total connections, resulting in 75 connections per instance
Adding a new node to the service will trigger an UP event, which is detected and propagated via the FaN service. This will cause the connection cache to automatically rebalance the existing 150 connections in use in the cache over the three instances, resulting in 50 connections being established per database instance. This process involves removing some connections (to existing instances) and creating new ones (to the new instance).
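The rebalancing arithmetic in this example can be sketched as a simple even split across the active instances. This illustrates the distribution only; the real cache also tears down and re-creates physical connections, and the class and method names here are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RebalanceSketch {
    // Evenly spread a fixed pool of connections across the active instances,
    // giving any remainder to the first instances in the list.
    public static Map<String, Integer> rebalance(int totalConnections, List<String> instances) {
        Map<String, Integer> plan = new LinkedHashMap<>();
        int base = totalConnections / instances.size();
        int remainder = totalConnections % instances.size();
        for (String inst : instances) {
            plan.put(inst, base + (remainder-- > 0 ? 1 : 0));
        }
        return plan;
    }
}
```

With the article's numbers, 150 connections over three instances yields 50 per instance; when the total does not divide evenly, the first instances absorb the remainder.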

Retrenching Database Instances (or Node Failure): High-Availability
Retrenching database instances from an Oracle Database RAC cluster generates DOWN events, which automatically trigger a detect-and-fix mechanism in the connection caches. This mechanism quickly removes connections belonging to the failed instance, preventing invalid connections from being handed out on connection request.

Continuing with the previous example:

  • We now have a three-node database service with 50 connections per instance in the caches, for a total of 150 connections.
If a node in the database service fails or is removed by the resource provisioning system, a DOWN event is created and propagated via the FaN service. The 50 connections belonging to the failed instance/node are quickly removed from the cache, ensuring that the cache is clean and consistent. A clean cache guarantees that connection requests will be routed only to surviving instances. Clients holding a stale connection must retry their connection requests, unless a container intercepts such calls and is able to take corrective action.
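The cache-cleaning step can be sketched as a simple filter over the cached connections. This is illustrative plain Java, not the Oracle implementation; the PooledConn type is invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class DownEventSketch {
    // A pooled connection tagged with the instance it is physically attached to.
    public static class PooledConn {
        public final String instance;
        public PooledConn(String instance) { this.instance = instance; }
    }

    // On a DOWN event, drop every cached connection belonging to the failed
    // instance so that only connections to surviving instances can be handed out.
    public static List<PooledConn> purge(List<PooledConn> cache, String failedInstance) {
        List<PooledConn> clean = new ArrayList<>();
        for (PooledConn c : cache) {
            if (!c.instance.equals(failedInstance)) {
                clean.add(c);
            }
        }
        return clean;
    }
}
```

In the running example, purging the failed instance removes its 50 connections and leaves a clean cache of 100 connections to the two survivors.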

The Fast Connection Fail-over mechanism transparently ensures reliable, highly available Java database connections in RAC and grid environments.

In-Flight Transactions
If an application is mid-transaction when an instance fails, an appropriate SQLException is thrown to it and the transaction is rolled back. It is the responsibility of the application or the container to retry the connection request and re-establish session state.
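A container-side retry wrapper for this scenario might look like the following sketch. The interface name, retry policy, and attempt count are assumptions for illustration; they are not part of the Oracle driver, and the caller is still responsible for re-establishing session state inside the unit of work.

```java
import java.sql.SQLException;

public class RetrySketch {
    // A unit of work that obtains a fresh connection, does its database work,
    // and may fail with a SQLException if an instance goes down mid-transaction.
    @FunctionalInterface
    public interface ConnectionWork<T> {
        T run() throws SQLException;
    }

    // Retry the unit of work a bounded number of times; rethrow the last
    // failure if no attempt succeeds.
    public static <T> T withRetry(ConnectionWork<T> work, int maxAttempts) throws SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.run();
            } catch (SQLException e) {
                last = e; // the connection was lost; the next attempt gets a fresh one
            }
        }
        throw last;
    }
}
```

Each retry goes back through getConnection(), so with Fast Connection Fail-over the fresh connection is routed to a surviving instance.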

Simplifying JDBC Connection Caching
As summarized in Table 1, the new Oracle Database 10g JDBC Implicit Connection Cache has been designed to overcome the limitations of existing JDBC connection caching by providing:

  • Transparent access to the cache
  • Support for multiple identities
  • The ability to retrieve connections based on user-defined attributes and weights
  • The ability to refresh or recycle stale connections from the cache
To take advantage of these capabilities, simply customize your environment by explicitly setting properties on the connection cache, the data source, or the connection.

Transparent Access to the Cache
By default, the getConnection() method in the standard OracleDataSource API creates a new database session and a physical database connection, thus incurring performance and scalability penalties. With Implicit Connection Caching, once the DataSource property ConnectionCachingEnabled has been set to true, the getConnection() method will service all connection requests from the connection cache.

ods.setConnectionCachingEnabled(true); // enable the Implicit Connection Cache
ods.setConnectionCacheName("MyCache"); // optional
ods.setConnectionCacheProperties(cp);  // optional
ctx.bind("MyDS", ods);                 // bind the data source in JNDI

ods = (OracleDataSource) ctx.lookup("MyDS"); // lookup DataSource

An Oracle Database 10g JDBC connection cache can be created either implicitly, by the first invocation of the getConnection() method, or explicitly, by using the Connection Cache Manager API. The Connection Cache Manager is especially useful for developers working with J2EE containers and ERP frameworks, since it shields the infrastructure from the complexity of managing connection caches.

// This call creates the "MyCache" cache if needed; a connection
// from the "MyDS" data source is created and returned
conn = ods.getConnection();

// This call creates the "MyCache" cache if needed; a connection
// to "MyDS", authenticated as "SCOTT", is created and returned
conn = ods.getConnection("SCOTT","TIGER");

Subsequent getConnection() invocations will either create a new connection (if the cache was not previously initialized) or retrieve an existing connection from the cache. Once the connection is retrieved, you can proceed with statement creation.

// Create a Statement
Statement stmt = conn.createStatement();

// ... use the Statement ...

// Close the Statement
stmt.close();

The cache can be populated in one of two ways: by preinitializing it using the Cache Manager APIs or, incrementally, upon the release of connection(s) back to the cache. When returning a connection to the cache, applications can save current connection attributes settings for future use (see attribute-based retrieval, below).

// Closing the logical connection returns it to the cache
conn.close();

Caching Multiple Identities
While a database does not impose any specific restrictions on connection authentication, a traditional cache might limit the connections it can manage, for example by requiring that they all use the same username/password combination.

The Implicit Connection Cache can handle any combination of user-authenticated connections. For example, a joe.johnson connection can coexist very well with a sue.miller connection in the same connection cache.

Connection Retrieval Based on User-Defined Attributes
One of the valuable new features in the Implicit Connection Cache is connection striping.

Connection striping, or labeling, consists of applying user-defined attributes to a connection and making their state persist when the connection is returned to the cache. This speeds up future connection retrieval since cached connections don't have to reinitialize a block of state every time. These attributes can later be used to retrieve the same connection from the cache, as follows.
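Before looking at the Oracle API examples that follow, the idea behind connection striping can be sketched in plain Java: a connection carries a map of labels, and retrieval matches the requested attributes against those labels. All names in this sketch are invented for illustration.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StripingSketch {
    // A cached connection that remembers user-defined attributes ("labels")
    // applied before it was returned to the cache.
    public static class LabeledConn {
        private final Map<String, String> attrs = new HashMap<>();

        public void applyAttributes(Map<String, String> newAttrs) {
            attrs.putAll(newAttrs);
        }

        // True if every requested key/value pair is present on this connection.
        public boolean matches(Map<String, String> wanted) {
            for (Map.Entry<String, String> e : wanted.entrySet()) {
                if (!e.getValue().equals(attrs.get(e.getKey()))) return false;
            }
            return true;
        }
    }

    // Return the first cached connection whose labels satisfy the request, so
    // its session state (e.g. NLS_LANG) need not be re-initialized; null if none.
    public static LabeledConn retrieve(List<LabeledConn> cache, Map<String, String> wanted) {
        for (LabeledConn c : cache) {
            if (c.matches(wanted)) return c;
        }
        return null;
    }
}
```

The payoff is exactly what the article describes: a connection already initialized with, say, NLS_LANG set can be handed straight back to a caller that asks for that attribute.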

Example 1: Retrieving a connection based on the NLS_LANG attribute:

// get a connection from the cache with the NLS_LANG attribute
java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("NLS_LANG", "ISO-LATIN-1");
conn = ds.getConnection(connAttr);

Example 2: Retrieving a connection based on the isolation-level attribute:

// retrieve a connection that matches the transaction isolation attribute
// (the attribute name and value below are examples, not fixed keywords)
java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("TXN_ISOLATION", "SERIALIZABLE");

conn = ds.getConnection(connAttr);
// this call preserves the attribute settings for this connection

Example 3: Retrieving a connection based on the connection tag and connection attribute:

java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("CONNECTION_TAG", "JOE'S_CONNECTION");
// retrieve a connection matching Joe's connection (if one exists)
conn = ds.getConnection(connAttr);

// apply the attributes to the connection before returning it to the cache
conn.applyConnectionAttributes(connAttr);
conn.close();

// This will retrieve Joe's connection
conn = ds.getConnection(connAttr);

Applying Connection Attributes to a Cached Connection
A connection attribute can be applied to a connection in the cache in two ways.

One approach is to call the applyConnectionAttributes(connAttr) API on the connection object. This simply sets the supplied attributes on the connection object. It's possible to apply attributes incrementally using this API, letting users apply connection attributes over multiple calls. For example, NLS_LANG may be applied by calling this API from module A. The next call, from module B, can then apply the TXN_ISOLATION attribute, and so on.

A second approach is to call the close(connAttr) API on the connection object. This API closes the logical connection and then applies the supplied connection attributes to the underlying PooledConnection (physical connection). The attributes set via this close() API override any attributes set using the applyConnectionAttributes() API.

The following example shows a call to the close(connectionAttributes) API on the connection object that lets the cache apply the matched connectionAttributes back on the pooled connection before returning it to the cache. This ensures that when a subsequent connection request with the same connection attributes is made, the cache will find a match.

// Sample connection request and close
java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("NLS_LANG", "ISO-LATIN-1");
// request a connection based on attributes
conn = ds.getConnection(connAttr);

// apply the matched attributes back to the pooled connection on close
conn.close(connAttr);

Connection Retrieval Based on Attributes and Weights
Connections may be selectively retrieved from the connection cache based on a combination of ConnectionAttributes and attribute weights.

Weights are assigned to each ConnectionAttribute key in a one-time operation that also sets cache properties. The cache property CacheAttributeWeights, a java.util.Properties object, allows the setting of attribute weights; each weight is an integer value that defines how expensive the key is in terms of resources.

Once the weights are specified on the cache, connection requests are made on the data source by calling getConnection(connectionAttributes). The connectionAttributes argument refers to keys and their associated values.

The connection retrieval from the cache involves searching for a connection that satisfies a combination of the following:

  • A key/value match on a connection from the cache
  • The maximum total weight of all the keys of the connectionAttributes that were matched on the connection
Consider the following example, in which a cache is configured with CacheAttributeWeights (see Listing 2).

Once the weights are set, a connection request could be made as in Listing 3.

The getConnection() request tries to retrieve a connection from the MyCache cache. In connection matching and cache retrieval, one of two things can happen:

  • An exact match is found. In the above example, an exact match is a connection whose attribute values match all the defined keys (NLS_LANG, SecurityGroup, and Application).
  • An exact match is not found. In this case, the closest match based on the attribute key/value and its associated weights is used (but only if the ClosestConnectionMatch property is set).
Once the connection is returned, the user can invoke the getUnMatchedConnectionAttributes() API on the connection object to return a set of attributes (java.util.Properties) that did not match the criteria. The unmatched attribute list can then be used by the caller (or application) to reinitialize these values before using the connection.
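The weighted matching described above can be sketched as a scoring function: each matched key contributes its configured weight, the highest-scoring connection wins, and the caller can recover the attributes that did not match. This plain-Java sketch uses invented names and simple maps in place of the Oracle API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WeightedMatchSketch {
    // Score a cached connection's labels against the requested attributes:
    // each matching key/value pair contributes its configured weight.
    public static int score(Map<String, String> connAttrs,
                            Map<String, String> requested,
                            Map<String, Integer> weights) {
        int total = 0;
        for (Map.Entry<String, String> e : requested.entrySet()) {
            if (e.getValue().equals(connAttrs.get(e.getKey()))) {
                total += weights.getOrDefault(e.getKey(), 0);
            }
        }
        return total;
    }

    // Pick the connection with the maximum total matched weight (the
    // "closest match"); an exact match naturally scores highest.
    public static Map<String, String> closestMatch(List<Map<String, String>> cache,
                                                   Map<String, String> requested,
                                                   Map<String, Integer> weights) {
        Map<String, String> best = null;
        int bestScore = -1;
        for (Map<String, String> connAttrs : cache) {
            int s = score(connAttrs, requested, weights);
            if (s > bestScore) {
                bestScore = s;
                best = connAttrs;
            }
        }
        return best;
    }

    // Requested attributes the chosen connection did not match, analogous to
    // the getUnMatchedConnectionAttributes() result the caller re-initializes.
    public static Map<String, String> unmatched(Map<String, String> connAttrs,
                                                Map<String, String> requested) {
        Map<String, String> miss = new HashMap<>();
        for (Map.Entry<String, String> e : requested.entrySet()) {
            if (!e.getValue().equals(connAttrs.get(e.getKey()))) {
                miss.put(e.getKey(), e.getValue());
            }
        }
        return miss;
    }
}
```

Giving an expensive attribute such as NLS_LANG a higher weight than a cheap one steers the closest-match search toward connections whose costly state is already initialized.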

This article highlighted a new set of Java data access requirements that emerge when executing in an enterprise grid environment and outlined how Oracle Database 10g JDBC tackles the challenges these requirements present.

Whether you're using JDBC directly in applications, via a runtime container such as an EJB container, or through an O/R mapping framework such as TopLink, the latest Oracle JDBC drivers offer reliable and highly available data sources in RAC and grid environments and significantly simplify a range of data source connectivity issues for Java developers.

Oracle has proposed these new features to the JSR 221 (JDBC 4.0 specification) expert group. It hopes to see JDBC Connection Pool management and high-availability support for data sources within enterprise grid environments become part of the JDBC 4.0 standard.


