Enhanced EJB Configuration and Deployment

This page lists the improvements made to EJB configuration and deployment in Payara Server.

Controlling Stateless EJB Concurrent Instances

Since Payara Server

In prior releases, it was not possible to control the maximum number of concurrent Stateless EJB instances in Payara Server. It was, however, possible to control the number of pooled Stateless EJB instances, as well as concurrent MDB instances.

These features were available in Oracle GlassFish Server 3.1 and earlier but not in the GlassFish Open Source editions (3.1.2.x and 4.x).

It is now possible to limit the number of concurrent Stateless EJB instances that are dispatched, allowing fine-grained control of resources, limiting the surface area for DDoS attacks, and making applications run more smoothly and efficiently.

The key difference is that previously it was not possible to limit the number of concurrent threads dispatched for the same Stateless EJB; the only limit was the number of pooled beans. When the pool ran out, new EJB instances were created with no upper bound. In the current release, an upper bound can be established so that the number of concurrent threads never exceeds this value.

These limits are controlled on a per-EJB basis, using bean-pool elements in the glassfish-ejb-jar.xml deployment descriptor file:

Table 1. bean-pool elements in glassfish-ejb-jar.xml

max-pool-size

This element controls the maximum number of concurrent instances that are dispatched for this EJB (the number of threads). The default is configured in the domain; this has not changed from previous versions.

max-wait-time-in-millis

This element controls what to do when the number of requests exceeds the number of beans available in the pool. Possible values of this element include:

  • -1 (default)

    Current behaviour is unchanged: when the pooled number is exceeded, a new EJB instance is created; there is no upper bound.

  • 0

    This is new behaviour: when the pooled number is exceeded, the request will wait for a bean to free up in the pool. This effectively caps the number of threads that are concurrently created for this particular EJB.

  • Between 1 and MAX_INTEGER

    This is also new behaviour: when the pooled number is exceeded, the request will wait for a bean to free up in the pool, up to the number of milliseconds specified here. After this time has expired, a new EJB instance is created and dispatched; the configured pool size thus acts as a soft upper bound, one that can be exceeded once the wait time has elapsed.


steady-pool-size

This element controls the minimum number of beans kept in the bean pool. Increasing this value improves performance at the expense of memory footprint.

Additionally, a new system property fish.payara.ejb-container.max-wait-time-in-millis can be set to change the default global value of <max-wait-time-in-millis> for all Stateless EJB bean pools.

Unless overridden in the deployment descriptor file, this value becomes the new default and can be used to cap the number of concurrent invocations for all Stateless EJB pools.
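For example, one way to set this system property for the whole domain is through a JVM option; the value below (5000 milliseconds) is purely illustrative:

asadmin> create-jvm-options -Dfish.payara.ejb-container.max-wait-time-in-millis=5000

As with any other JVM option, the server needs to be restarted for the new value to take effect.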


The following is a sample glassfish-ejb-jar.xml deployment descriptor that configures two EJBs with the settings described above.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE glassfish-ejb-jar PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 EJB 3.1//EN" "http://glassfish.org/dtds/glassfish-ejb-jar_3_1-1.dtd">
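<!-- A minimal sketch of how the rest of such a descriptor might look; the bean names
     (StatelessBean1, StatelessBean2) and the numeric values are illustrative only. -->
<glassfish-ejb-jar>
    <enterprise-beans>
        <ejb>
            <ejb-name>StatelessBean1</ejb-name>
            <bean-pool>
                <steady-pool-size>10</steady-pool-size>
                <max-pool-size>32</max-pool-size>
                <!-- 0: requests wait indefinitely for a pooled bean,
                     capping concurrency at max-pool-size -->
                <max-wait-time-in-millis>0</max-wait-time-in-millis>
            </bean-pool>
        </ejb>
        <ejb>
            <ejb-name>StatelessBean2</ejb-name>
            <bean-pool>
                <max-pool-size>16</max-pool-size>
                <!-- wait up to 5 seconds for a pooled bean, then create a
                     new instance beyond the configured pool size -->
                <max-wait-time-in-millis>5000</max-wait-time-in-millis>
            </bean-pool>
        </ejb>
    </enterprise-beans>
</glassfish-ejb-jar>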

Overwriting the Module Name

Since Payara Server

When deploying an EJB-JAR module on Payara Server, the portable JNDI names for all scanned EJBs are generated using the name of the module as specified in the ejb-jar.xml deployment descriptor:

<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar xmlns = "http://java.sun.com/xml/ns/javaee"
         version = "3.1"
         xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation = "http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd">
    <!-- illustrative module name -->
    <module-name>my-ejb-module</module-name>
</ejb-jar>
If the name is not specified in the deployment descriptor, the specification states that the module name defaults to the name of the JAR artifact used to deploy it.
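For illustration, if the descriptor above is packaged in a standalone JAR with a bean class named AccountBean (a hypothetical name) exposing a single client view, the portable global JNDI name generated for it would follow the standard pattern defined by the EJB specification:

java:global/my-ejb-module/AccountBean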

However, when deploying a JAR from an IDE (such as NetBeans or IntelliJ IDEA), the IDE deploys to Payara Server using the asadmin deploy command with the --name option specified. This forces the module to take the specified name instead of the name defined in ejb-jar.xml. This is undesirable because the IDE usually infers the module name from the name of the project or the JAR file and does not take the correct module name into account.

In the current release, an improvement has been implemented to solve this scenario: the module name defined in the deployment descriptor is always used, even if an attempt is made to override it with the --name option.

If you need to override the name of the module when deploying it, use the --forceName command option.
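A hypothetical invocation from an asadmin session might look like the following, where the archive path and the custom name are illustrative:

asadmin> deploy --forceName=true --name my-custom-name /path/to/my-ejb-module.jar

This deploys the module under the name my-custom-name, regardless of the module name defined in ejb-jar.xml.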

Persistent EJB Timers with Hazelcast

Since Payara Server

It is possible to persist an EJB Timer to the Hazelcast cluster rather than to a database. The same feature is available in Payara Micro.

Persisting an EJB Timer to the Hazelcast cluster means that the cluster itself will store the timer details, preserving it even if the original instance leaves the cluster.

This feature is tech-preview and is subject to change. Use with caution.

To persist an EJB Timer to the Hazelcast cluster, you will need to change the ejb-timer-service setting using asadmin. To get the current state, run the following command:

asadmin> get configs.config.${your-config}.ejb-container.ejb-timer-service

This will return the current state from the domain.xml, which by default should be something similar to the following:

asadmin> get configs.config.server-config.ejb-container.ejb-timer-service
Command get executed successfully.

To persist to the Hazelcast cluster you need only change the value for configs.config.server-config.ejb-container.ejb-timer-service.ejb-timer-service to Hazelcast. To do this, run the following set command:

asadmin> set configs.config.server-config.ejb-container.ejb-timer-service.ejb-timer-service=Hazelcast

set commands are not dynamic. You will need to restart your domain to apply the changes.
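For example, from the same asadmin session (add the domain name as an operand if you are not using the default domain):

asadmin> restart-domain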
