Tuesday, July 22, 2014

JSF 2.3 wish list part I - Components

Over the last few days several Java EE specs have published JSR proposals for their next versions. Today JSF published its proposal on the JSF mailing list, titled "Let's get started on JSF 2.3".

The JSR groups improvements into 4 categories:

  • Small scale new features
  • Community driven improvements
  • Platform integration
  • Action oriented MVC support

An interesting aspect is the "Community driven improvements", which means it's basically up to the community what will be done exactly. In practice this mostly boils down to issues that have been entered into the JSF issue tracker. It's remarkable how many community-filed issues JSF has compared to several other Java EE specs; JSF has clearly always been a spec that's very much community driven. At ZEEF.com we're more than happy to take advantage of this opportunity and contribute whatever we can to JSF 2.3.

Taking a look at the existing issue tracker we see there are indeed quite a lot of ideas. So what should the community driven improvements focus on? Improving JSF's core strengths further, adding more features, incorporating ideas from other frameworks, clarifying/fixing edge cases, performance? All in all there's quite a lot that can be done, but as always there's only a limited amount of resources available, so choices have to be made.

One thing that JSF has been working towards is pushing away functionality that became available in the larger Java EE platform, therefore positioning itself more as the default MVC framework in Java EE and less as an out of the box standalone MVC framework for Tomcat et al. Examples are ditching its own managed bean model, its own DI system, and its own expression language. Pushing away these concerns means more of the available resources can be spent on things that are truly unique to JSF.

Important key areas of JSF for which there are currently more than a few issues in the tracker are the following:

  • Components
  • Component iteration
  • State
  • AJAX

In this article I'll look at the issues related to components. The other key areas will be investigated in follow-up articles.

Components

While JSF is about more than just components, and it's certainly not idiomatic JSF to have a page consisting solely of components, arguably the component model is still one of JSF's most defining features. Historically, components were curiously tedious to create in JSF, but in current versions creating a basic component is pretty straightforward.

The simplification efforts should not stop here, however, as there's still more to be done. As shown in the above reference, there's e.g. still the required family override, which for most simple use cases doesn't make much sense to provide. This is captured by the following existing issues:

A more profound task is to increase the usage of annotations for components in order to make a more "component centric" programming model possible. This means that the programmer works more from the point of view of a component, and treats the component as a more "active thing" instead of something that's passively defined in XML files and assembled by the framework.

For this at least the component's attributes should be declared via annotations, making it no longer "required" to do a tedious registration of those in a taglib.xml file. Note that this registration is currently not technically required, but without it tools like a Facelet editor won't be able to do any autocompletion, so in practice people mostly define them anyway.
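For illustration, this is the kind of taglib.xml registration that a component author writes today to get autocompletion for an attribute (the namespace, tag name and attribute here are made up for the example):

```xml
<facelet-taglib version="2.2"
    xmlns="http://xmlns.jcp.org/xml/ns/javaee">

    <namespace>http://example.com/custom</namespace>

    <tag>
        <tag-name>customComponent</tag-name>
        <component>
            <component-type>com.example.CustomComponent</component-type>
        </component>
        <!-- Repeated for every attribute, duplicating what the
             component class already knows -->
        <attribute>
            <name>value</name>
            <required>true</required>
            <type>java.lang.String</type>
        </attribute>
    </tag>

</facelet-taglib>
```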

Besides simply mimicking the limited expressiveness for declaring attributes that's now available in taglib.xml files, some additional features would be really welcome. E.g. the ability to declare whether an attribute is required, its range of valid values and more advanced things like declaring that an attribute is an "output variable" (like the "var" attribute of a data table).

A nonsensical component to illustrate some of the ideas:

@FacesComponent
public class CustomComponent extends UIComponentBase {
 
    @Attribute
    @NotNull
    private ComponentAttribute<String> value;

    @Attribute
    @Min(3)
    private int dots;

    @Attribute(type=out)
    @RequestScoped
    private String var;
    
    private String theDots;

    @PostConstruct
    public void init() {
        theDots = createDots(dots);
    }
    
 
    @Override
    public void encodeBegin(FacesContext context) throws IOException { 
        ResponseWriter writer = context.getResponseWriter();
        writer.write(value.getValue().toUpperCase());
        writer.write(theDots);

        if (var != null) {
            // Expose the output variable in the request scope (note: the
            // request *parameter* map is read-only, so it can't be used here)
            context.getExternalContext().getRequestMap().put(var, theDots);
        }
    }
}
In the above example there are 4 instance variables, of which 3 are component attributes and marked with @Attribute. These last 3 could be recognized by tooling to perform auto-completion in e.g. tags associated with this component. Constraints on the attributes could be expressed via Bean Validation, which can then partially be processed by tooling as well.

Attribute value in the example above has ComponentAttribute as its type, which could be a relatively simple wrapper around a component's existing attributes collection (obtainable via getAttributes()). The reason this should not directly be a String is that it can then be transparently backed by a deferred value expression (a binding that is lazily resolved when its value is obtained). Types like ComponentAttribute shouldn't be required when the component designer only wants to accept literals or immediate expressions; we see this happening for the dots and var attributes.
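A minimal sketch of what such a wrapper could look like. Everything here is hypothetical (no such class exists in the current API); a plain Map stands in for the component's attributes collection, and a Supplier stands in for a deferred ValueExpression:

```java
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical wrapper around a component's attributes collection.
// A Supplier is used here as a stand-in for a deferred value
// expression that is lazily resolved when the value is obtained.
public class ComponentAttribute<T> {

    private final Map<String, Object> attributes; // e.g. component.getAttributes()
    private final String name;

    public ComponentAttribute(Map<String, Object> attributes, String name) {
        this.attributes = attributes;
        this.name = name;
    }

    @SuppressWarnings("unchecked")
    public T getValue() {
        Object value = attributes.get(name);
        if (value instanceof Supplier) {
            // Deferred binding: resolve at the moment of access
            value = ((Supplier<?>) value).get();
        }
        return (T) value;
    }

    public void setValue(T value) {
        attributes.put(name, value);
    }
}
```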

Finally, the example does away with declaring an explicit name for the component. In a fully annotation centric workflow a component name (which is typically used to refer to it in XML files) doesn't have as much use. A default name (e.g. the fully qualified class name, which is what we always use in OmniFaces for components anyway) would probably be best.

This is captured by the following issues:

Creating components is one thing, but the ease with which existing components can be customized is just as important, or perhaps even more so. With all the moving parts that components had in the past this was never really simple. With components themselves being simplified, customizing existing ones could be simplified as well, but here too there's more to be done. For instance, oftentimes a user only knows a component by its tag and sees this as the entry point to override something. Internally however there's still the component name, the component class, the renderer name and the renderer class. Any of these can be problematic, but particularly the renderer class can be difficult to obtain.

E.g. suppose the user did get as far as finding out that <h:outputText> is the tag for component javax.faces.component.html.HtmlOutputText. This however uses a renderer named javax.faces.Text as shown by the following code fragment:

public class HtmlOutputText extends javax.faces.component.UIOutput {

    public HtmlOutputText() {
        setRendererType("javax.faces.Text");
    }

    public static final String COMPONENT_TYPE = "javax.faces.HtmlOutputText";
How does the user find out which renderer is associated with javax.faces.Text? And why is the component name javax.faces.HtmlOutputText as opposed to its fully qualified class name javax.faces.component.html.HtmlOutputText? To make matters somewhat worse, when we want to override the renderer of a component but keep its existing tag, we also have to find out the render-kit-id. (It's a question whether the advantages that all these indirection names offer really outweigh the extra complexity users have to deal with.)
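To make the indirection concrete: overriding the renderer of <h:outputText> today takes a faces-config.xml entry along these lines, where the component family, renderer type and (implicit) render-kit-id all have to be known (the renderer class name is made up for the example):

```xml
<faces-config version="2.2"
    xmlns="http://xmlns.jcp.org/xml/ns/javaee">

    <render-kit>
        <!-- No render-kit-id given, so this targets the default
             HTML_BASIC render kit -->
        <renderer>
            <component-family>javax.faces.Output</component-family>
            <renderer-type>javax.faces.Text</renderer-type>
            <renderer-class>com.example.CustomTextRenderer</renderer-class>
        </renderer>
    </render-kit>

</faces-config>
```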

For creating components we can, if we want, ignore these things, but when we customize an existing component we often can't. Tooling may help us discover those names, but in the absence of such tools, and/or to reduce our dependence on them, JSF could optionally just let us specify more visible things like the tag name instead.

This is captured by the following issues:

Although strictly speaking not part of the component model itself, one other issue is the ease with which a set of components can be grouped together. There's the composite component for that, but this has as a side-effect that a new component is created which has the set of components as its children. This doesn't work for those situations where the group of components is to be put inside a parent component that does something directly based on its children (like h:panelGrid). There's the Facelet tag for this, but it still has the somewhat old-fashioned requirement of an XML registration. JSF could simplify this by giving a Facelet tag the same conveniences as were introduced for composite components. Another option might be the introduction of some kind of visit hint, via which things like a panel grid could be requested to look at the children of some component instead of at that component itself. This could be handy to give composite components some of the power for which a Facelet tag is needed now.
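To illustrate the side-effect: when the composite below is placed inside an h:panelGrid, the grid sees one child (the composite itself) rather than the two components inside it, so both end up in a single cell (file name and attribute are made up for the example):

```xml
<!-- group.xhtml - a minimal composite component -->
<ui:component xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
              xmlns:cc="http://xmlns.jcp.org/jsf/composite"
              xmlns:h="http://xmlns.jcp.org/jsf/html">
    <cc:interface>
        <cc:attribute name="bean"/>
    </cc:interface>
    <cc:implementation>
        <!-- These two become children of the composite component,
             not of the parent the composite is placed in -->
        <h:outputText value="Name:"/>
        <h:inputText value="#{cc.attrs.bean.name}"/>
    </cc:implementation>
</ui:component>
```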

This is partially captured by the following issues:

Finally there's an assortment of other issues on the tracker that aim to simplify working with components or make the model more powerful. For instance there's still some confusion about the encodeBegin(), encodeChildren() and encodeEnd() methods vs the newer encodeAll() in UIComponent. Also, dynamic manipulation of the component tree (fairly typical in various other frameworks that have a component or element tree) is still not entirely clear. As it appears, such modification is safe to do during the preRenderView event, but this fact is not immediately obvious and the current 2.2 spec doesn't mention it. Furthermore, even if it's clear to someone that manipulation has to happen during this event, the code to register for this event and handle it is still a bit tedious (see previous link).

Something annotation based may again be used to simplify matters, e.g.:

@FacesComponent
public class CustomComponent extends UIComponentBase {
 
    @PreRenderView // or @ModifyComponentTree as alias event
    public void init() {
        this.getParent().getChildren().add(...);
    }
}

This and some other things are captured by the following issues:

That's it for now. Stay tuned for the next part that will take a look at component iteration (UIData, UIRepeat, etc).

Arjan Tijms

Friday, June 27, 2014

JASPIC improvements in WebLogic 12.1.3

Yesterday WebLogic 12.1.3 was finally released. First of all congratulations to the team at Oracle for getting this release out of the door! :)

Among the big news is that WebLogic 12.1.3 is now a mixed Java EE 6/EE 7 server by (optionally) supporting several Java EE 7 technologies like JAX-RS 2.0.

Next to this there are a ton of smaller changes and fixes as well. One of those fixes concerns the standard authentication system of Java EE (JASPIC). As we saw some time back, the JASPIC implementation in WebLogic 12.1.1 and 12.1.2 wasn't entirely optimal (to WebLogic's defense, very few JASPIC implementations were at the time).

One particular problem with JASPIC is that its TCK is, almost unavoidably one has to conclude, rather incomplete; implementations that don't actually authenticate or that are missing the most basic functionality got certified in the past. For this purpose I have created a small set of tests that checks for the most basic capabilities. Note that these tests have since been contributed to Arun Gupta's Java EE 7 samples project, and have additionally been extended. Since those tests have Java EE 7 as a baseline requirement we unfortunately can't use them directly to test WebLogic 12.1.3.

For WebLogic 12.1.2 we saw the following results for the original Java EE 6 tests:


[INFO] jaspic-capabilities-test .......................... SUCCESS [1.140s]
[INFO] jaspic-capabilities-test-common ................... SUCCESS [1.545s]
[INFO] jaspic-capabilities-test-basic-authentication ..... FAILURE [7.533s]
[INFO] jaspic-capabilities-test-lifecycle ................ FAILURE [3.825s]
[INFO] jaspic-capabilities-test-wrapping ................. FAILURE [3.803s]
[INFO] jaspic-capabilities-test-ejb-propagation .......... SUCCESS [4.624s]
FAILURES:

testUserIdentityIsStateless(org.omnifaces.jaspictest.BasicAuthenticationStatelessIT)
java.lang.AssertionError: User principal was 'test', but it should be null here. The container seemed to have remembered it from the previous request.
 at org.omnifaces.jaspictest.BasicAuthenticationStatelessIT.testUserIdentityIsStateless(BasicAuthenticationStatelessIT.java:137)

testPublicPageNotRememberLogin(org.omnifaces.jaspictest.BasicAuthenticationPublicIT)
java.lang.AssertionError: null
 at org.omnifaces.jaspictest.BasicAuthenticationPublicIT.testPublicPageNotLoggedin(BasicAuthenticationPublicIT.java:44)
 at org.omnifaces.jaspictest.BasicAuthenticationPublicIT.testPublicPageNotRememberLogin(BasicAuthenticationPublicIT.java:64)

testBasicSAMMethodsCalled(org.omnifaces.jaspictest.AuthModuleMethodInvocationIT)
java.lang.AssertionError: SAM methods called in wrong order
 at org.omnifaces.jaspictest.AuthModuleMethodInvocationIT.testBasicSAMMethodsCalled(AuthModuleMethodInvocationIT.java:54)

testResponseWrapping(org.omnifaces.jaspictest.WrappingIT)
java.lang.AssertionError: Response wrapped by SAM did not arrive in Servlet.
 at org.omnifaces.jaspictest.WrappingIT.testResponseWrapping(WrappingIT.java:53)

testRequestWrapping(org.omnifaces.jaspictest.WrappingIT)
java.lang.AssertionError: Request wrapped by SAM did not arrive in Servlet.
 at org.omnifaces.jaspictest.WrappingIT.testRequestWrapping(WrappingIT.java:45)


WebLogic 12.1.3 does quite a bit better as we now see the following:


[INFO] jaspic-capabilities-test .......................... SUCCESS [1.172s]
[INFO] jaspic-capabilities-test-common ................... SUCCESS [1.802s]
[INFO] jaspic-capabilities-test-basic-authentication ..... FAILURE [6.811s]
[INFO] jaspic-capabilities-test-lifecycle ................ SUCCESS [3.847s]
[INFO] jaspic-capabilities-test-wrapping ................. SUCCESS [3.777s]
[INFO] jaspic-capabilities-test-ejb-propagation .......... SUCCESS [4.800s]
FAILURES:

testUserIdentityIsStateless(org.omnifaces.jaspictest.BasicAuthenticationStatelessIT)
java.lang.AssertionError: User principal was 'test', but it should be null here. The container seemed to have remembered it from the previous request.
 at org.omnifaces.jaspictest.BasicAuthenticationStatelessIT.testUserIdentityIsStateless(BasicAuthenticationStatelessIT.java:137)


In particular WebLogic 12.1.1 and 12.1.2 didn't support request/response wrapping (a feature that, curiously, not a single server supported), called a lifecycle method at the wrong time (secureResponse was called before a Servlet was invoked instead of after) and remembered the username of a previously logged-in user (within the same session, while JASPIC is supposed to be stateless).

As of WebLogic 12.1.3 the lifecycle method is called at the correct moment and request/response wrapping is actually possible. This now brings the total number of servers where the request/response can be wrapped to 3 (GlassFish since 4.0 and JBoss since WildFly 8 can also do this).

It remains a curious thing that the JASPIC TCK seemingly catches so few issues, but slowly the implementations of JASPIC are getting better. The JASPIC improvements in WebLogic 12.1.3 may not have made the headlines, but it's another important step for Java EE authentication.

Arjan Tijms

Tuesday, May 6, 2014

Implementation components used by various Java EE servers

There are quite a lot of Java EE server implementations out there. There are a bunch of well known ones like JBoss, GlassFish and TomEE, and some less known ones like Resin and Liberty, and a couple of obscure ones like JEUS and WebOTX.

One thing to keep in mind is that all those implementations are not all completely unique. There are a dozen or so Java EE implementations, but there are most definitely not a dozen JSF implementations (in fact there are only two: Mojarra and MyFaces).

Java EE implementations are in some ways not entirely unlike Linux distributions; they package together a large amount of existing software, glued together via software developed by the distro vendor, where some software is directly developed by that vendor but then also used by other vendors.

In Java EE for example JBoss develops the CDI implementation Weld and uses that in its Java EE servers, but other vendors like Oracle also use it. The other way around, Oracle develops Mojarra, the aforementioned JSF implementation, and uses this in its servers. JBoss in turn uses Mojarra instead of developing its own JSF implementation.

In this post we'll take a deeper look at which of these "components" the various Java EE servers are using.

One source worth looking at to dig up this information is the Oracle Java EE certification page. While this does list some implementations for each server, it's unfortunately highly irregular and incoherent. Some servers will list their JSF implementation, while others don't do this but do list their JPA implementation. It gives one a start, but it's a very incomplete list, and one that's thus different for each server.

Another way is to download each server and just look at the /lib or /modules directory and look at the jar files being present. This works to some degree, but some servers rename jars of well known projects. E.g. Mojarra becomes "glassfish-jsf" in WebLogic. WebSphere does something similar.

Wikipedia, vendor product pages and technical presentations sometimes do mention some of the implementation libraries, but again only a few implementations are mentioned, if they are mentioned at all. A big exception to this is a post from Arun Gupta about WildFly 8 (the likely base of a future JBoss EAP 7), which I somehow missed when doing my initial research, and which very clearly lists and references nearly all component implementations used by that server.

A last resort is to hunt for several well known interfaces and/or abstract classes in each spec and then check which class implements these in each server. This is fairly easy for specs like JSF; e.g. FacesContext is clearly implemented by the implementation. For JTA and JCA this is somewhat more difficult, as these specs contain mostly interfaces that are to be implemented by user code.

For reference, I used the following types for this last resort method:

  • Servlet - HttpServletRequest
  • JSF - FacesContext
  • CDI - BeanManager
  • JPA - EntityManager
  • BV - javax.validation.Configuration, ParameterNameProvider
  • EJB - SessionContext
  • JAX-RS - ContextResolver, javax.ws.rs.core.Application
  • JCA - WorkManager, ConnectionManager, ManagedConnection
  • JMS - Destination
  • EL - ELContext, ValueExpression
  • JTA - TransactionManager
  • JASPIC - ServerAuthConfig
  • Mail - MimeMultipart
  • WebSocket - ServerWebSocketContainer, Encoder
  • Concurrency - ManagedScheduledExecutorService
  • Batch - JobContext

Without further ado, here's the matrix of Java EE implementation components used by 10 Java EE servers:

Servers per column (vendor): JBoss/WildFly (Red Hat), GlassFish (Oracle), WebLogic (Oracle), Geronimo (Apache), TomEE+ (Apache), WebSphere (IBM), Liberty (IBM), JEUS (TMax), Resin (Caucho), JOnAS (OW2)

Servlet: Undertow | Tomcat derivative & Grizzly | Nameless internal | Tomcat/Jetty | Tomcat | Nameless internal | Nameless internal | Nameless internal (jeus servlet) | Nameless internal | Tomcat + Jetty*
JSF: Mojarra* | Mojarra | Mojarra | MyFaces | MyFaces | MyFaces* | MyFaces* | Mojarra* | Mojarra* | Mojarra (def) + MyFaces*
CDI: Weld | Weld* | Weld* | OWB | OWB | OWB* | OWB* | Weld | CanDI (semi internal) | Weld*
JPA: Hibernate | EclipseLink+ | EclipseLink+ | OpenJPA | OpenJPA | OpenJPA* | OpenJPA* (future version will be EclipseLink*) | EclipseLink* | EclipseLink* | Hibernate (def) + EclipseLink (certified with)*
BV: Hibernate Validator | Hibernate Validator* | Hibernate Validator* | BVal | BVal | BVal* | BVal* | Hibernate Validator* | Hibernate Validator* | Hibernate Validator*
EJB: Nameless internal | Nameless internal (EJB-container) | Nameless internal | OpenEJB | OpenEJB | Nameless internal | Nameless internal | Nameless internal | Nameless internal | EasyBeans
JAX-RS: RESTEasy | Jersey | Jersey | Wink | CXF | Wink* | Wink* | Jersey* | - | Jersey*
JCA: IronJacamar | Nameless internal (Connectors-runtime) | Nameless internal | Nameless internal (Geronimo Connector) | Nameless internal (Geronimo Connector) | Nameless internal | - | Nameless internal | Nameless internal | Nameless internal
JMS: HornetQ | OpenMQ | WebLogic JMS (closed source) | ActiveMQ | ActiveMQ | SiBus (closed source) | Liberty messaging (closed source) | Nameless internal | Nameless internal | JORAM
EL: EL RI* | EL RI | EL RI | Apache / Jasper / Tomcat(?) EL | Apache / Jasper / Tomcat(?) EL | Apache / Jasper / Tomcat(?) EL* | Apache / Jasper / Tomcat(?) EL* | EL RI* | Nameless internal | EL RI + Apache / Jasper / Tomcat(?) EL*
JTA: Narayana | Nameless internal | Nameless internal | Nameless internal (Geronimo Transaction) | Nameless internal (Geronimo Transaction) | Nameless internal | Nameless internal | Nameless internal (jeus tm) | Nameless internal | JOTM
JASPIC: Part of PicketBox | Nameless internal | Nameless internal | Nameless internal (Geronimo Jaspi) | - | Nameless internal | - | Nameless internal | - | -
Mail: JavaMail RI* | JavaMail RI | JavaMail RI | JavaMail (Geronimo?) | JavaMail (Geronimo?) | JavaMail RI* | - | JavaMail RI* | JavaMail RI* | JavaMail RI*
WebSocket: Undertow | Tyrus | - | - | - | - | - | Nameless internal (jeus websocket) | - | -
Concurrency: Concurrency RI* | Concurrency RI | - | - | - | - | - | Nameless internal (jeus concurrent) | - | -
JBatch: JBeret | JBatch RI (IBM)* | - | - | - | - | - | JBatch RI (IBM)* | - | -
(An asterisk behind a component name means the vendor in the given column uses an implementation from another vendor; a plus behind a name means the implementation used to be from the vendor in that column, but was donated to an external organization.)

Looking at the matrix we can see there are mainly 3 big parties creating separate and re-usable Java EE components; Red Hat, Oracle and Apache. Apache is maybe a special case though, as it's an organization hosting tons of projects and not a vendor with a single strategic goal.

Next to these big parties there are two smaller ones producing a few components. Of those, OW2 has separate and re-usable implementations of EJB, JMS and JTA, while Resin has its own implementation of CDI. In the case of Resin it looks like it's only semi re-usable though. The implementation has its own name (CanDI), but there isn't really a separate artifact or project page available for it, nor are there any real instructions on how to use CanDI on e.g. Tomcat or Jetty (like Weld has).

Apart from using (well known) open source implementations of components, all servers (both open and closed source) had a couple of unnamed and/or internal implementations. Of these, JASPIC was most frequently implemented by nameless internal code, namely 4 out of 5 times, although the one implementation that was named (PicketBox) isn't really a direct JASPIC implementation but more a security related project that includes the JASPIC implementation classes. JTA and EJB followed closely, with 8 and 7 out of 10 implementations respectively being nameless and internal. Remarkable is that all closed source servers tested had a nameless internal implementation of Servlet.

At the other end of the spectrum in the servers that I looked at there were no nameless internal and no closed source implementations of JSF, JPA, Bean Validation, JAX-RS, JavaMail and JBatch.

It's hard to say what exactly drives the creation of nameless internal components. One explanation may be that J2EE started out having Servlet and EJB as the internal foundation of everything, meaning a server didn't just include EJB, but more or less WAS EJB. In that world it wouldn't make much sense to include a re-usable EJB implementation. With the rise of open source Java EE components it made more sense to just reuse these, so implementations of all newer specs (JSF, JPA, etc) are preferably re-used from open source. One exception to this is however JEUS, which despite being in a hurry to be the first certified Java EE 7 implementation still felt the need to create its own implementations of the brand new WebSocket and Concurrency specs. It will be interesting to see what the next crop of Java EE 7 implementations will do with respect to these two specs.

An interesting observation is that WebSphere, which by some people may be seen as the poster child of the closed source and commercial AS, actually uses relatively many open source components, and of those nearly all of them are from Apache (which may also better explain why IBM sponsored the development of Geronimo for some time). JavaMail for some reason is the exception here. Geronimo has its own implementation of it, but WebSphere uses the Sun/Oracle RI version.

Another interesting observation is that servers don't seem to randomly mix components, but either use the RI components for everything, or use the Apache ones for everything. There's no server that uses say JMS from JBoss, JSF from Oracle and JPA from Apache. An exception to the rule is when servers allow alternative components to be configured, or even ship with multiple implementations of the same spec like JOnAS does.

We do have to realize that a Java EE application server is quite a bit more than just the set of spec components. For one there's always the integration code that's server specific, but there are also things like the implementation of pools for various things, the (im)possibility to do fail-over for datasources, (perhaps unfortunately) a number of security modules for LDAP, Database, Kerberos etc, and lower level server functionality like modular kernels (osgi or otherwise) that dynamically (e.g. JBoss) or statically (e.g. Liberty) load implementation components.

JEUS for instance may look like GlassFish as it uses a fair amount of the same components, but in actuality it's a completely different server at many levels.

Finally, note that not all servers were investigated and not all components. Notably the 3 Japanese servers NEC WebOTX, Fujitsu Interstage and Hitachi Cosminexus were not investigated, the reason being they are not exactly trivial to obtain. At the component level things like JAX-RPC, JAX-WS, SAAJ, JNDI etc are not in the matrix. They were mainly omitted to somewhat reduce the research time. I do hope to find some more time at a later stage and add the remaining Java EE servers and some more components.

Arjan Tijms

Tuesday, April 29, 2014

What should be the minimum dependency requirements for the next major OmniFaces version?

For the next major version of OmniFaces (2.x, planned in a few months' time) we're currently looking at determining the minimum version of its dependencies. Specifically the minimum Java SE version is tricky to get right.

OmniFaces 1.x currently targets JSF 2.0/2.1 (2009/2010), which depends on Servlet 2.5 (2006) and Java SE 5 (2004).

In practice relatively few people appear to be using Servlet 2.5, so for OmniFaces 1.x we primarily test on Servlet 3.0. Nevertheless, Servlet 2.5 is the official minimum requirement and we sometimes had to go to great lengths to keep OmniFaces 1.x compatible with it. As for the Java SE 5 requirement, we noted that so few people were still using it (especially among the people who were also using JSF 2.1) that we set Java SE 6 as the minimum. So far we have never had a complaint about Java SE 6 being the minimum.

For OmniFaces 2.x we'll be targeting JSF 2.2 (2013), which itself has Servlet 3.0 (2009) as the minimum Servlet version and Java SE 6 (2006) as the minimum JDK version.

So this raises the question of which Servlet/Java EE and Java SE versions we're going to require as the minimum this time. Lower versions mean more people can use the technology, but also that newer features can't be supported or can't be used by us. The latter isn't directly visible to the end user, but it often means OmniFaces development is simply slower (we need more code to do something, or need to reinvent a wheel ourselves that's already present in a newer JDK or Java EE release).

One possibility is to strictly adhere to what JSF 2.2 depends on, thus Servlet 3.0 and Java SE 6. Java SE 6 specifically is nasty though. It's already 8 years old, currently even EOL, and we'd probably have to keep using it during the entire JSF 2.2 time-frame. When that ends it will be over 11 years old. At OmniFaces and ZEEF (the company we work at) we're big fans of being on top of recent developments, and being forced to work with an 11 year old JDK doesn't exactly fit in with that.

The other possibility is to do things just like we did with OmniFaces 1.x; support the same Servlet version as the JSF version we target, but move up one JDK version. This would thus mean OmniFaces 2.x will require JDK 7 as a minimum.

Yet another possibility would be to be a little more progressive with OmniFaces 2.x; have the technology from Java EE 7 as the minimum requirement (JSF 2.2 is part of Java EE 7 after all). This would mostly boil down to having Servlet 3.1 and CDI 1.1 as dependencies. Taking being progressive even one step further would be to start using JDK 8 as well, but this may be too much for now.

Besides the version numbers, another question we've been struggling with is whether we should require CDI to be present or not. In OmniFaces 1.x we do use CDI, but mainly because Tomcat doesn't have CDI by default, we jump through all kinds of hoops to prevent class not found exceptions; e.g. using reflection, or using otherwise needless indirections via interfaces that don't import any CDI types, which are then implemented by classes that do, etc.
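A sketch of the kind of guard this comes down to (the class and method names here are made up, not the actual OmniFaces code):

```java
// Guard that checks whether the CDI API is on the classpath at all,
// so that code paths which import CDI types are only entered when safe.
public class CdiGuard {

    public static final boolean CDI_AVAILABLE =
        isClassPresent("javax.enterprise.inject.spi.BeanManager");

    public static boolean isClassPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```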

As CDI is becoming increasingly more foundational to Java EE and JSF is on the verge of officially deprecating its own managed beans (it's pretty much effectively deprecated in JSF 2.2 already), plus the fact that CDI is pretty easy to add to Tomcat, such a requirement may not be that bad. Nevertheless we're still pretty much undecided about this.

Summing up, should OmniFaces 2.x require as a minimum:

  1. Exactly what JSF 2.2 does: Servlet 3.0 and JDK 6.0
  2. Same pattern as OmniFaces 1.x did: Servlet 3.0 and JDK 7.0
  3. The versions of Java EE 7 of which JSF 2.2 is a part: Servlet 3.1, CDI 1.1 and JDK 7.0
  4. An extra progressive set: Java EE 7 and JDK 8.0

What do you think? If you want you can vote for your favorite option using the poll at the top right corner of this blog.

Arjan Tijms

Sunday, March 23, 2014

Implementing container authorization in Java EE with JACC

A while back we looked at how container authentication is done in Java EE by using the JASPIC API. In this article we'll take a look at its authorization counterpart: JACC (JSR 115).

JACC, which stands for Java Authorization Contract for Containers (and for some reason also for Java Authorization Service Provider Contract for Containers), is a specification that according to the official Java EE documentation "defines a contract between a Java EE application server and an authorization policy provider" and "defines java.security.Permission classes that satisfy the Java EE authorization model".

 

Public opinion

While JASPIC was only added to Java EE as late as Java EE 6, JACC has been part of Java EE since the dark old days of J2EE 1.4. Developers should thus have had plenty of time to get accustomed to JACC, but unfortunately this doesn't quite seem to be the case. While preparing for this article I talked to a few rather advanced Java EE developers (as in, those who literally wrote the book, work or have worked on implementing various aspects of Java EE, etc). A few of their responses:

  • "I really have no idea whatsoever what JACC is supposed to do"
  • (After hearing that I'm working on a JACC article) "Wow! You really have guts!"
  • "[the situation surrounding JACC] really made me cry"
Online there are relatively few resources to be found about JACC. One that I did find said the following:
  • " [...] the PolicyConfiguration's (atrocious) contract [...] "
  • "More ammunition for my case that this particular Java EE contract is not worth the paper it's printed on."

 

What does JACC do?

The negativity aside, the prevailing sentiment seems to be that it's just not clear what JACC brings to the table. While JASPIC is not widely used either, people do tend to easily get what JASPIC primarily does: provide a standardized API for authentication modules, where those authentication modules are things that check credentials against LDAP servers, a database, a local file, etc., and where those credentials are asked for via mechanisms like an HTML form or HTTP BASIC authentication.

But what does a "contract between an AS and an authorization policy provider" actually mean? In other words, what does JACC do?

In a very practical sense JACC offers the following things:

  • Contextual objects: A number of convenient objects bound to thread-local storage that can be obtained from "anywhere", like the current HTTP request and the current Subject.
  • Authorization queries: A way for application code to ask the container authorization related things like: "What roles does the current user have?" and "Will this user have access to this URL?"
  • A hook into the authorization process: A way to register your own class that will:
    • receive all authorization constraints that have been put in web.xml, ejb-jar.xml and corresponding annotations
    • be consulted when the container makes an authorization decision, e.g. to determine if a user/caller has access to a URL
We'll take a more in-depth look at each of those things below.

 

Contextual objects

Just as the Servlet spec makes a number of (contextual) objects available via request attributes, and JSF does something similar via EL implicit objects, so does JACC make a number of objects available.

While those objects seem to be primarily intended to be consumed by the special JACC policy providers (see below), they are in fact specified to work in the context of a "dispatched call" (the invocation of an actual Servlet + Filters or an EJB + Interceptors). As such they typically do work when called from user code inside e.g. a Servlet, but the spec could be a bit stricter here and specifically mandate this.

Contrary to Servlet's request/session/application attributes, you don't store object instances directly in the JACC policy context. Instead, just like can optionally be done with JNDI, you store for each key a factory that knows how to produce an object corresponding to that key. E.g. in order to put a value "bar" in the context under the key "foo" you'd use the following code:

final String key = "foo";
            
PolicyContext.registerHandler(key, 
    new PolicyContextHandler() {
        @Override
        public Object getContext(String handlerKey, Object data) throws PolicyContextException {
            return "bar";
        }
        
        @Override
        public boolean supports(String handlerKey) throws PolicyContextException {
            // Note: naming the parameter "key" would shadow the outer variable,
            // making the comparison always true
            return key.equals(handlerKey);
        }
        
        @Override
        public String[] getKeys() throws PolicyContextException {
            return new String[] { key };
        }
    }, 
    true
);

// result will be "bar"
String result = PolicyContext.getContext("foo");

For general usage it's a bit unwieldy that the factory (called a handler) must know its own key(s). Most containers first get the factory by looking it up in a table using the key it was registered with, and then ask the factory if it indeed supports that key. Since a factory has to be registered individually for each key it supports anyway, it's debatable whether the responsibility for ensuring the correct keys are used shouldn't lie with the one doing the registration. That way the factory interface could have been made a bit simpler.
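To illustrate that alternative design: if the registration owned the keys, a handler could be a plain factory that doesn't need to know or verify its own key. A minimal sketch using only JDK classes (SimpleContextRegistry is a hypothetical name, not a JACC API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical simpler registry: the key lives only in the registration,
// so the factory (handler) has no supports()/getKeys() obligations
public class SimpleContextRegistry {

    private static final Map<String, Supplier<Object>> HANDLERS = new ConcurrentHashMap<>();

    public static void registerHandler(String key, Supplier<Object> factory) {
        HANDLERS.put(key, factory);
    }

    public static Object getContext(String key) {
        Supplier<Object> factory = HANDLERS.get(key);
        return factory == null ? null : factory.get();
    }

    public static void main(String[] args) {
        registerHandler("foo", () -> "bar");
        System.out.println(getContext("foo")); // bar
    }
}
```

The registration call shrinks to a single lambda, compared to the three-method anonymous class the actual JACC API requires.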

By default the following keys are defined:

  • javax.security.auth.Subject.container - Returns the current Subject, or null if the user is not authenticated.
  • javax.servlet.http.HttpServletRequest - Returns the current request when requested from a Servlet.
  • javax.xml.soap.SOAPMessage - Returns the SOAP message when requested from an EJB when JAX-WS is used.
  • javax.ejb.EnterpriseBean - Returns the EJB 1.0 "EnterpriseBean" interface. It's best forgotten that this exists.
  • javax.ejb.arguments - Returns the method arguments of the last call to an EJB that's on the stack.
  • java.security.Policy.supportsReuse - Indicates that a container (server) supports caching of an authorization outcome.

Getting the Subject is especially crucial here, since this is pretty much the only way in Java EE to get hold of it, and we need it for our authorization queries (see below).

For general usage the other objects are mostly of value to actual policy providers, as in e.g. JSF there are already ways to get the current request from TLS. In practice, however, semi-proprietary JAAS login modules regularly use the PolicyContext to get access to the HTTP request as well. It should be noted that it's not entirely clear what the "current" request is, especially when servlet filters are used that do request wrapping. Doing an authorization query first (see below) may force the container to set the current request to the one that's really current.

 

Authorization queries

JACC provides an API and a means to ask the container several authorization-related things. Just like JASPIC, JACC provides an API that's rather abstract and seems infinitely flexible. Unfortunately this also means it's infinitely difficult for users to find out how to perform certain common tasks. So, while JACC enables us to ask the aforementioned things like "What roles does the current user have?" and "Will this user have access to this URL?", there aren't any convenience methods such as "List<String> getAllUserRoles();" or "boolean hasAccess(String url);". Below we'll show how these things are done in JACC.

Get all users roles

We'll start with obtaining the Subject instance corresponding to the current user. For simplicity we assume the user is indeed logged-in here.
Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");
After that we'll get the so-called permission collection from the container that corresponds to this Subject:
PermissionCollection permissionCollection = Policy.getPolicy().getPermissions(
    new ProtectionDomain(
        new CodeSource(null, (Certificate[]) null), 
        null, null, 
        subject.getPrincipals().toArray(new Principal[subject.getPrincipals().size()])
    )
);
Most types shown here originate from the original Java SE security system (pre-JAAS even). Their usage is relatively rare in Java EE so we'll give a quick primer here. A more thorough explanation can be found in books like e.g. this one.


CodeSource

A code source indicates from which URL a class was loaded and with which certificates (if any) it was signed. This class was used in the original security model to protect your system from classes that were loaded from websites on the Internet; i.e. when Applets were the main focus area of Java. The roles associated with a user in Java EE do not depend in any way on the location from which the class asking this was loaded (they are so-called Principal-based permissions), so we provide a null for both the URL and the certificates here.

ProtectionDomain

A protection domain is primarily a grouping of a code source, and a set of static permissions that apply for the given Principals. In the original Java SE security model this type was introduced to be associated with a collection of classes, where each class part of the domain holds a reference to this type. In this case we're not using the protection domain in that way, but merely use it as input to ask which permissions the current Subject has. As such, the code source, static permissions and class loader (third parameter) are totally irrelevant. The only thing that matters is the Subject, and specifically its principals.

The reason why we pass in a code source with all its fields set to null instead of just a null directly is that the getPermissions() method of well known Policy implementations like the PolicyFile from the Oracle JDK will call methods on it without checking if the entire thing isn't null. The bulky code to transform the Principals into an array is an unfortunate mismatch between the design of the Subject class (which uses a Set) and the ProtectionDomain (which uses a native array). It's extra unfortunate since the ProtectionDomain's constructor specifies it will copy the array again, so two copies of our original Set will be made here.
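The conversion can be tried outside a container with JDK-only classes (javax.security.auth.Subject ships with the JDK itself). A small sketch in which we fill the Subject ourselves, something the container's authentication machinery would normally do:

```java
import java.security.CodeSource;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

import javax.security.auth.Subject;

public class SubjectToDomainDemo {

    public static void main(String[] args) {
        // Normally populated by the authentication machinery
        Subject subject = new Subject();
        subject.getPrincipals().add(() -> "admin");

        // The unfortunate Set -> array conversion; the ProtectionDomain
        // constructor will copy this array once more
        ProtectionDomain domain = new ProtectionDomain(
            new CodeSource(null, (Certificate[]) null),
            null, null,
            subject.getPrincipals().toArray(new Principal[subject.getPrincipals().size()]));

        System.out.println(domain.getPrincipals().length); // 1
    }
}
```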

Policy

A policy is the main mechanism in Java SE that originally encapsulated the mapping between code sources and a global set of permissions that were given to that code source. Only much later in Java 1.4 was the ability added to get the permissions for a protection domain. Its JavaDoc has the somewhat disheartening warning that applications should not actually call the method. In case of JACC we can probably ignore this warning (application server implementations call this method too).
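To make "Principal-based permissions" concrete, here's a toy Policy subclass that grants a permission purely based on the principals in the ProtectionDomain and ignores the code source entirely. ToyPolicy and the "admin" principal name are made up for illustration; a real JACC provider would of course consult its collected permissions instead:

```java
import java.security.CodeSource;
import java.security.Permission;
import java.security.Policy;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

// Toy Policy: grants a single permission only when a principal named
// "admin" is present; the code source plays no role at all
public class ToyPolicy extends Policy {

    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {
        for (Principal principal : domain.getPrincipals()) {
            if ("admin".equals(principal.getName())) {
                return new RuntimePermission("toy").implies(permission);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        ProtectionDomain adminDomain = new ProtectionDomain(
            new CodeSource(null, (Certificate[]) null), null, null,
            new Principal[] { () -> "admin" });

        ProtectionDomain anonymousDomain = new ProtectionDomain(
            new CodeSource(null, (Certificate[]) null), null, null,
            new Principal[0]);

        Policy policy = new ToyPolicy();
        System.out.println(policy.implies(adminDomain, new RuntimePermission("toy")));     // true
        System.out.println(policy.implies(anonymousDomain, new RuntimePermission("toy"))); // false
    }
}
```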

PermissionCollection

At first glance a PermissionCollection is exactly what its name implies (no pun intended): a collection of permissions. So why does the JDK have a special type for this instead of just using a Collection<Permission> or a List<Permission>? Part of the answer may be that PermissionCollection was created before the Collection Framework was introduced in Java.

But this may be only part of the answer. A PermissionCollection and its most important subclass, Permissions, make a distinction between homogeneous and heterogeneous permissions. Adding an arbitrary Permission to a Permissions instance is not supposed to just append it in sequence, but to add it internally to a special "bucket": another PermissionCollection that stores permissions of the same type, typically implemented as a Class-to-PermissionCollection Map. This somewhat complex mechanism is used to optimize checking for a permission; iterating over every individual permission would not be ideal, and we should at least be able to go straight to the right type of permission. E.g. when checking for permission to access a file, it's useless to ask every socket permission whether we have access to that file.
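This bucketing can be observed with plain JDK classes. A small sketch:

```java
import java.io.FilePermission;
import java.net.SocketPermission;
import java.security.Permissions;

public class BucketDemo {

    public static void main(String[] args) {
        // The heterogeneous Permissions class internally keeps one
        // homogeneous PermissionCollection ("bucket") per permission type
        Permissions permissions = new Permissions();
        permissions.add(new FilePermission("/tmp/foo", "read"));
        permissions.add(new SocketPermission("example.com:80", "connect"));

        // implies() only consults the FilePermission bucket here;
        // the socket permission is never even looked at
        System.out.println(permissions.implies(new FilePermission("/tmp/foo", "read")));  // true
        System.out.println(permissions.implies(new FilePermission("/tmp/foo", "write"))); // false
    }
}
```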


After this we call the implies() method on the collection:

permissionCollection.implies(new WebRoleRefPermission("", "nothing"));
This is a small trick, a hack if you will, to get rid of a special type of permission that might be in the collection: the UnresolvedPermission. This is a special type of permission that may be used when permissions are read from a file. Such a file typically contains the fully qualified name of a class that represents a specific permission. If this class hasn't been loaded yet, or has been loaded by another class loader than the one from which the file is read, an UnresolvedPermission will be created that just contains this fully qualified class name as a String. The implies() method checks whether the given permission is implied by the collection, and thereby forces the actual loading of at least the WebRoleRefPermission class. This class is the standard permission type that corresponds to the non-standard representation of the groups/roles inside the collection of principals that we're after.
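The resolution that implies() forces can be demonstrated with JDK-only classes. A sketch:

```java
import java.io.FilePermission;
import java.security.Permissions;
import java.security.UnresolvedPermission;

public class UnresolvedDemo {

    public static void main(String[] args) {
        Permissions permissions = new Permissions();

        // Only the class name as a String is stored here; the actual
        // FilePermission class is not instantiated at this point
        permissions.add(new UnresolvedPermission(
            "java.io.FilePermission", "/tmp/foo", "read", null));

        // implies() forces loading of FilePermission and resolves the
        // unresolved entry against it
        System.out.println(permissions.implies(new FilePermission("/tmp/foo", "read"))); // true
    }
}
```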

Finally we iterate over the permission collection and collect all role names from the WebRoleRefPermission:

Set<String> roles = new HashSet<>();
for (Permission permission : list(permissionCollection.elements())) {
   if (permission instanceof WebRoleRefPermission) {
       String role = permission.getActions();
      
       if (!roles.contains(role) && request.isUserInRole(role)) {
           roles.add(role);
       }
   }
}
A thing to note here is that there's no such thing as a WebRolePermission, but only a WebRoleRefPermission. In the Servlet spec a role ref is the thing that you use when inside a specific Servlet a role name is used that's different from the role names in the rest of your application. In theory this could be handy for secured Servlets from a library that you include in your application. Role refs are fully optional and when you don't use them you can simply use the application wide role names directly.

In JACC however there are only role refs. When a role ref is not explicitly defined, it is simply defaulted to the application role name. Since a role ref is per Servlet, the number of WebRoleRefPermission instances that will be created is *at least* the number of roles in the application plus one (for the '**' role), times the number of Servlets in the application (typically plus two for the default and JSP Servlets, plus one for the so-called unmapped context). So given an application with two roles "foo" and "bar" and two Servlets named "servlet1" and "servlet2", the WebRoleRefPermission instances that will be created are as follows:

  1. servlet1 - foo
  2. servlet1 - bar
  3. servlet1 - **
  4. servlet2 - foo
  5. servlet2 - bar
  6. servlet2 - **
  7. default - foo
  8. default - bar
  9. default - **
  10. jsp - foo
  11. jsp - bar
  12. jsp - **
  13. "" - foo
  14. "" - bar
  15. "" - **
In order to filter out the duplicates the above code uses a Set and not e.g. a List. Additionally, to filter out any role refs other than those for the current Servlet from which we are calling the code, we do the request.isUserInRole(role) check. Alternatively we could have checked the name attribute of each WebRoleRefPermission, as that one corresponds to the name of the current Servlet, which can be obtained via GenericServlet#getServletName. If we're sure that there aren't any role references being used in the application, or if we explicitly want all global roles, the following code can be used instead of the last fragment given above:
Set<String> roles = new HashSet<>();
for (Permission permission : list(permissionCollection.elements())) {
   if (permission instanceof WebRoleRefPermission && permission.getName().isEmpty()) {
       roles.add(permission.getActions());
   }
}

Typically JACC providers will create the total list of WebRoleRefPermission instances when an application is deployed and then return a sub-selection based on the Principals that we (indirectly) passed in our call to Policy#getPermissions. This however requires that all roles are declared statically and upfront. But a JASPIC auth module can dynamically return any number of roles to the container, and via HttpServletRequest#isUserInRole() an application can dynamically query for any such role without anything needing to be declared. Unfortunately such dynamic role usage typically doesn't work when JACC is used (the Java EE specification forbids it, but on servers like JBoss it works anyway).

All in all the query shown above needs a lot of code and a lot of useless types and parameters for which nulls have to be passed. This could have been a lot simpler by any of the following means:

  • Availability of a "List<String> getAllUserRoles();" method
  • Standardization of the group/role Principals inside a subject (see JAAS in Java EE is not the universal standard you may think it is)
  • A convenience method to get permissions based on just a Subject or Principal collection, e.g. PermissionCollection getPermissions(Subject subject); or PermissionCollection getPermissions(Collection<Principal> principals);

This same technique with slightly different code is also explained here: Using JACC to determine a caller's roles

Has access

Asking whether a user has permission to access a given resource (e.g. a Servlet) luckily requires a bit less code:
Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");

boolean hasAccess = Policy.getPolicy().implies(
    new ProtectionDomain(
        new CodeSource(null, (Certificate[]) null),
        null, null, 
        subject.getPrincipals().toArray(new Principal[subject.getPrincipals().size()])
    ),
    new WebResourcePermission("/protected/Servlet", "GET")
);
We first get the Subject and create a ProtectionDomain in the same way as we did before. This time around we don't need to get the permission collection, but can make use of a small shortcut in the API. Calling implies on the Policy instance effectively invokes it on the permission collection that this instance maintains. Besides being ever so slightly shorter in code, it's presumably more efficient as well.

The second parameter that we pass in is the actual query; via a WebResourcePermission instance we can ask whether the resource "/protected/Servlet" can be accessed via a GET request. Both parameters support patterns and wildcards (see the JavaDoc). It's important to note that a WebResourcePermission only checks permission for the resource name and the HTTP method. There's a third aspect to checking access to a resource, which boils down to the URI scheme that's used (http vs https) and corresponds to the <transport-guarantee> in a <user-data-constraint> element in web.xml. In JACC this information is NOT included in the WebResourcePermission; a separate permission has been invented for that: the WebUserDataPermission, which can be queried individually.

Although this query requires less code than the previous one, it's probably still more verbose than many users would want to cope with. Creating the ProtectionDomain remains a cumbersome affair, and getting the Subject via a String that needs to be remembered, and which requires a cast, unfortunately all add to the feeling of JACC being an arcane API. Here too, things could be made a lot simpler by:

  • Availability of "boolean hasAccess(String resource);" and "boolean hasAccess(String resource, String method);" methods
  • A convenience method to do a permission check based on the current user, e.g. boolean hasUserPermission(Permission permission);
  • A convenience method to do a permission check based on just a Subject or Principal collection, e.g. boolean hasPermission(Subject subject, Permission permission); or boolean hasPermission(Collection<Principal> principals, Permission permission);
As it appears, several JACC implementations in fact internally have methods not quite unlike these.

 

A hook into the authorization process

The most central feature of JACC is that it allows a class to be registered that hooks into the authorization process. "A class" is actually not entirely correct, as typically 3 distinct classes are needed: a factory (there's of course always a factory), a "configuration" and the actual class (the "policy") that's called by the container (together: the provider classes).

The factory and the policy have to be set individually via system properties (e.g. -D on the command line). The configuration doesn't have to be set this way, since it's created by the factory.

The following gives an overview of these:

  • javax.security.jacc.PolicyConfigurationFactory — registered via the system property javax.security.jacc.PolicyConfigurationFactory.provider — creates and stores instances of PolicyConfiguration (defined by JACC)
  • javax.security.jacc.PolicyConfiguration — not registered directly, since it's created by the factory — receives Permission instances corresponding to authorization constraints and can configure a Policy instance (defined by JACC)
  • java.security.Policy — registered via the system property javax.security.jacc.policy.provider — called by the container for authorization decisions (defined by Java SE)

 

Complexity

While JACC is not the only specification that requires the registration of a factory (e.g. JSF has a similar requirement for setting a custom external context), it's perhaps debatable whether the requirement to implement 3 classes and set 2 system properties isn't yet another reason why JACC is perceived as arcane. Indeed, looking at a typical implementation of the PolicyConfigurationFactory, it doesn't seem to do anything that the container wouldn't be able to do itself.

Likewise, the need to have a separate "configuration" and policy object isn't entirely clear either. Although it's just an extra class, such a seemingly simple requirement does greatly add to the (perceived) complexity. This is extra problematic in the case of JACC, since the "configuration" has an extremely heavyweight interface with an associated requirement to implement a state machine that controls the life cycle in which its methods can be called. Reading it, one can only wonder why the container doesn't implement this required state machine itself and just lets the user implement the bare essentials. And if all this isn't enough, the spec also hints at users having to make sure their "configuration" implementation is thread-safe, again a task one would think a container would be able to handle.

Of course we have to realize that JACC originates from J2EE 1.4, where a simple "hello, world" EJB 2 required implementing many (for most users) useless container interfaces, the use of custom tools and verbose registrations in XML deployment descriptors, and where a custom component in JSF required the implementation and registration of many such moving parts as well. Eventually the EJB bean was simplified to just a single POJO with a single annotation, and while the JSF custom component didn't become a POJO, it did become a single class with a single annotation as well.

JACC has had some maintenance revisions, but it never got a really major revision focused on ease of use. It thus still strongly reflects the J2EE 1.4 era thinking where everything had to be ultimately flexible, abstract and under the control of the user. Seemingly admirable goals, but in practice unfortunately leading to a level of complexity very few users are willing to cope with.

Registration

The fact that the mentioned provider classes can only be registered via system properties is problematic as well. In a way it's maybe better than having no standardized registration method at all and leaving it completely up to the discretion of an application server vendor, but having it as the only option means the JACC provider classes can only be registered globally for the entire server. In particular it thus means that all applications on a server have to share the same JACC provider, even though authorization is often quite specific to an individual application.

At any rate, not having a way to register the required classes from within an application archive seriously impedes testing, makes tutorials and example applications harder, and doesn't really work well with cloud deployments either. Interestingly, JASPIC, which was released much later, ditched the system properties method again and left a server-wide registration method unspecified, but did specify a way to register its artifacts from within an application archive.

Default implementation

Arguably one of the biggest issues with JACC is that the spec doesn't mandate that a default implementation of a JACC provider be present. This has two very serious consequences. First of all, users wanting to use JACC for authorization queries can't just do so, since there might not be a default JACC provider available for their server, and if there is, it might not be activated.

Maybe even worse is that users wanting to provide extended authorization behavior have to implement the entire default behavior from scratch. This is bad. Really bad. Even JSF 1.0, which definitely had its own share of issues, allowed users to provide extensions by overriding only what they wanted to be different from what a default implementation provided. JACC is complex enough as it is. Not providing a default implementation makes the complexity barrier go right through the roof.

Role mapping

While we have seen some pretty big issues with JACC already, by far the biggest issue, and a complete hindrance to any form of portability of JACC policy providers, is the fact that there's a huge gaping hole in the specification: the crucial concept of "role mapping" is not specified. The spec mentions it, but then says it's the provider's responsibility.

The problem is twofold: many containers (but not all) have a mechanism in place where the role names that are returned by a (JASPIC) authentication module (typically called "groups" at this point) are mapped to the role names that are used in the application. This can be a one-to-one mapping, e.g. "admin" is mapped to "administrator", but it can also be a many-to-many mapping, for example when "admin" and "super" both map to "super-users", but "admin" is also mapped to "administrator". JACC has no standard API available to access this mapping, yet it can't work without this knowledge if the application server indeed uses it.

In practice JACC providers thus have to be paired with a factory to obtain a custom role mapper. This role mapper has to be implemented again for every application server that uses role mapping and for every JACC provider. Especially for the somewhat lesser known and/or closed source application servers it can be rather obscure how to implement a role mapper for them. And the task differs for every other JACC policy provider as well, while some JACC policy providers may not have a factory or interface for it at all.

Even if we ignore role mapping altogether (e.g. assume role "admin" returned by an auth module is the same "admin" used by the application, which is not at all uncommon), there is another crucial task that's typically attributed to the role mapper which we're missing: the ability to identify which Principals are in fact roles. As we've seen above in the section that explained the authorization queries, we're passing in the Principals of a Subject. But as we've explained in a previous article, this collection contains all sorts of principals and there's no standard way to identify which of those represent roles.

So the somewhat shocking conclusion is that given the current Java EE specification it's not possible to write a portable and actually working JACC policy provider. At some point we just need the roles, but there's no way to get to them via any Java EE API, SPI, convention or otherwise. In the sample code below we've implemented a crude hack where we hardcoded the way to get to the roles for a small group of known servers. This is of course far from ideal.

Sample code

The code below shows how to implement a JACC module that's as simple as can be and implements the default behavior for Servlet and EJB containers. It thus doesn't show how to provide specialized behavior, but it should give some idea of what's needed. The required state machine was left out as well, as the article was already getting way too long without it.

The factory
We start with creating the factory. This is the class that we primarily register with the container. It stores instances of the configuration class TestPolicyConfiguration in a static concurrent map. The code is more or less thread-safe with respect to creating a configuration; in case of a race we'll create an instance for nothing, but since it's not a heavy instance there won't be much harm done. For simplicity's sake we won't be paying much, if any, attention to thread-safety after this. The getPolicyConfiguration() method will be called by the container whenever it needs the instance corresponding to the "contextID", which in its simplest form identifies the (web) application for which we are doing authorization.
import static javax.security.jacc.PolicyContext.getContextID;

// other imports omitted for brevity

public class TestPolicyConfigurationFactory extends PolicyConfigurationFactory {
    
    private static final ConcurrentMap<String, TestPolicyConfiguration> configurations = new ConcurrentHashMap<>();

    @Override
    public PolicyConfiguration getPolicyConfiguration(String contextID, boolean remove) throws PolicyContextException {
        
        if (remove) {
            // Only discard the configuration for this context, not all of them;
            // clearing the whole map would make the get() below return null
            configurations.remove(contextID);
        }
        
        configurations.putIfAbsent(contextID, new TestPolicyConfiguration(contextID));
        
        return configurations.get(contextID);
    }
    
    @Override
    public boolean inService(String contextID) throws PolicyContextException {
        return getPolicyConfiguration(contextID, false).inService();
    }
    
    public static TestPolicyConfiguration getCurrentPolicyConfiguration() {
        return configurations.get(getContextID());
    }
    
}
The configuration
The configuration class has a ton of methods to implement and additionally according to its JavaDocs a state machine has to be implemented as well.

The methods that need to be implemented can be placed into a few groups. We'll first show the implementation of each group of methods and then after that show the entire class.

The first group is simply about the identity of the configuration. It contains the method getContextID that returns the ID of the module for which the configuration was created. Although there's no guidance on where this ID has to come from, a logical way seems to be to pass it in from the factory via the constructor:

    private final String contextID;
    
    public TestPolicyConfiguration(String contextID) {
        this.contextID = contextID;
    }
    
    @Override
    public String getContextID() throws PolicyContextException {
        return contextID;
    }
The second group concerns the methods that receive from the container all permissions applicable to the application module, namely the excluded, unchecked and per-role permissions. No fewer than 6 methods have to be implemented: 3 of them each take a single permission of the aforementioned types, while the other 3 take a collection (PermissionCollection) of such permissions. Having these two sub-groups of methods seems rather unnecessary. Maybe there was a deeper reason once, but of the several existing implementations I studied, all just iterated over the collection and added each individual permission separately to an internal collection.

At any rate, we just collect each permission that we receive in the most straightforward way possible.


    private Permissions excludedPermissions = new Permissions();
    private Permissions uncheckedPermissions = new Permissions();
    private Map<String, Permissions> perRolePermissions = new HashMap<String, Permissions>();

    // Group 2a: collect single permission

    @Override
    public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
        excludedPermissions.add(permission);
    }

    @Override
    public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
        uncheckedPermissions.add(permission);
    }

    @Override
    public void addToRole(String roleName, Permission permission) throws PolicyContextException {
        Permissions permissions = perRolePermissions.get(roleName);
        if (permissions == null) {
            permissions = new Permissions();
            perRolePermissions.put(roleName, permissions);
        }
        
        permissions.add(permission);
    }

    // Group 2b: collect multiple permissions
    
    @Override
    public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToExcludedPolicy(permission);
        }
    }
    
    @Override
    public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToUncheckedPolicy(permission);
        }
    }
    
    @Override
    public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToRole(roleName, permission);
        }
    }

The third group consists of life-cycle methods, partly belonging to the above-mentioned state machine. Much of the complexity here is caused by the fact that multi-module applications have to share some authorization data but still need their own as well, and that some modules have to be available before others. The commit method signals that the last permission has been handed to the configuration class. Depending on the exact strategy used, the configuration class could at this point e.g. start building up a specialized data structure, write the collected permissions to a standard policy file on disk, etc.

In our case we have nothing special to do. For this simple example we only implement the most basic requirement of the aforementioned state machine, which is making sure the inService method returns true when we're done. Containers may check this via the previously shown factory, and will otherwise either not hand over the permissions to our configuration class or keep handing them over indefinitely.

    @Override
    public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
    }
    
    boolean inservice;
    @Override
    public void commit() throws PolicyContextException {
        inservice = true;
    }

    @Override
    public boolean inService() throws PolicyContextException {
        return inservice;
    }

The fourth group concerns methods that ask for the deletion of previously given permissions. Here too we see some redundant overlap: there's one method that deletes all permissions and one method per type. These methods are supposedly needed because programmatic registration of Servlets can change the authorization data during container startup, and every time(?) this happens all previously collected permissions would have to be deleted. Apparently the container can't delay the moment of handing permissions to the configuration class, since during the time when it's legal to register Servlets a call can be made to another module (e.g. a ServletContextListener could call an EJB in a separate EJB module). Still, it's questionable whether such a back-tracking permission collector is really needed, and whether part of this burden should really be placed on the user implementing an authorization extension.

In our case the implementations are pretty straightforward. The Permissions type doesn't have a clear() method, so we replace the instance with a new one. For the Map we can call clear(), but there's a special protocol to be executed concerning the "*" role: if the collection explicitly contains the given role name, remove just that role; otherwise, if the given role is "*", clear the entire collection.

    @Override
    public void delete() throws PolicyContextException {
        removeExcludedPolicy();
        removeUncheckedPolicy();
        perRolePermissions.clear();
    }

    @Override
    public void removeExcludedPolicy() throws PolicyContextException {
        excludedPermissions = new Permissions();
    }

    @Override
    public void removeRole(String roleName) throws PolicyContextException {
        if (perRolePermissions.containsKey(roleName)) {
            perRolePermissions.remove(roleName);
        } else if ("*".equals(roleName)) {
            perRolePermissions.clear();
        }
    }

Finally, we added three getters of our own to give access to the three collections of permissions that we have been building up. We'll use these below to give the Policy instance access to the permissions.

    public Permissions getExcludedPermissions() {
        return excludedPermissions;
    }

    public Permissions getUncheckedPermissions() {
        return uncheckedPermissions;
    }

    public Map<String, Permissions> getPerRolePermissions() {
        return perRolePermissions;
    }

Although initially unwieldy looking, the PolicyConfiguration in our case becomes just a data structure to add, get and remove three different but related groups of objects. All together it becomes this:

import static java.util.Collections.list;

// other imports omitted for brevity

public class TestPolicyConfiguration implements PolicyConfiguration {
    
    private final String contextID;
    
    private Permissions excludedPermissions = new Permissions();
    private Permissions uncheckedPermissions = new Permissions();
    private Map<String, Permissions> perRolePermissions = new HashMap<String, Permissions>();

    // Group 1: identity    

    public TestPolicyConfiguration(String contextID) {
        this.contextID = contextID;
    }
    
    @Override
    public String getContextID() throws PolicyContextException {
        return contextID;
    }


    // Group 2: collect permissions from container

    // Group 2a: collect single permission

    @Override
    public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
        excludedPermissions.add(permission);
    }

    @Override
    public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
        uncheckedPermissions.add(permission);
    }

    @Override
    public void addToRole(String roleName, Permission permission) throws PolicyContextException {
        Permissions permissions = perRolePermissions.get(roleName);
        if (permissions == null) {
            permissions = new Permissions();
            perRolePermissions.put(roleName, permissions);
        }
        
        permissions.add(permission);
    }

    // Group 2b: collect multiple permissions
    
    @Override
    public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToExcludedPolicy(permission);
        }
    }
    
    @Override
    public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToUncheckedPolicy(permission);
        }
    }
    
    @Override
    public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
        for (Permission permission : list(permissions.elements())) {
            addToRole(roleName, permission);
        }
    }

    // Group 3: life-cycle methods

    @Override
    public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
    }
    
    boolean inservice;
    @Override
    public void commit() throws PolicyContextException {
        inservice = true;
    }

    @Override
    public boolean inService() throws PolicyContextException {
        return inservice;
    }

    // Group 4: removing all or specific collection types again

    @Override
    public void delete() throws PolicyContextException {
        removeExcludedPolicy();
        removeUncheckedPolicy();
        perRolePermissions.clear();
    }

    @Override
    public void removeExcludedPolicy() throws PolicyContextException {
        excludedPermissions = new Permissions();
    }

    @Override
    public void removeRole(String roleName) throws PolicyContextException {
        if (perRolePermissions.containsKey(roleName)) {
            perRolePermissions.remove(roleName);
        } else if ("*".equals(roleName)) {
            perRolePermissions.clear();
        }
    }

    @Override
    public void removeUncheckedPolicy() throws PolicyContextException {
        uncheckedPermissions = new Permissions();
    }

    // Group 5: extra methods
    
    public Permissions getExcludedPermissions() {
        return excludedPermissions;
    }

    public Permissions getUncheckedPermissions() {
        return uncheckedPermissions;
    }

    public Map<String, Permissions> getPerRolePermissions() {
        return perRolePermissions;
    }
}
The policy
To implement the policy we inherit from the Java SE policy class and override the implies and getPermissions methods. Where the PolicyConfiguration is the "dumb" data structure that just contains the collections of different permissions, the Policy implements the rules that operate on this data.

Per the JDK rules, the policy instance is a single instance for the entire JVM, so central to its implementation in Java EE is that it first has to obtain the correct data corresponding to the "current" Java EE application, or module within such an application. The key to obtaining this data is the Context ID that is set in thread local storage by the container prior to calling the policy. The factory that we showed earlier stored the configuration instance under this key in a concurrent map, and here we use the extra method that we added to the factory to conveniently retrieve it again.

The policy implementation also contains the hacky method that we referred to earlier for extracting the roles from a collection of principals. It just iterates over the collection and matches the principals against the known group class names for each server. Of course this is a brittle and incomplete technique: newer versions of servers can change the class names, and we have not covered all servers. E.g. WebSphere is missing, since via the custom method to obtain a Subject we saw earlier that its roles were stored in the credentials (we may need to study this further).

package test;

import static java.util.Collections.list;
import static test.TestPolicyConfigurationFactory.getCurrentPolicyConfiguration;

// other imports omitted for brevity

public class TestPolicy extends Policy {
    
    private Policy previousPolicy = Policy.getPolicy();
    
    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {
            
        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
    
        if (isExcluded(policyConfiguration.getExcludedPermissions(), permission)) {
            // Excluded permissions cannot be accessed by anyone
            return false;
        }
        
        if (isUnchecked(policyConfiguration.getUncheckedPermissions(), permission)) {
            // Unchecked permissions are free to be accessed by everyone
            return true;
        }
        
        if (hasAccessViaRole(policyConfiguration.getPerRolePermissions(), getRoles(domain.getPrincipals()), permission)) {
            // Access is granted via role. Note that if this returns false it doesn't mean the permission is not
            // granted. A role can only grant, not take away permissions.
            return true;
        }
        
        if (previousPolicy != null) {
            return previousPolicy.implies(domain, permission);
        }
        
        return false;
    }

    @Override
    public PermissionCollection getPermissions(ProtectionDomain domain) {

        Permissions permissions = new Permissions();
        
        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(domain), permissions, excludedPermissions);
        }

        // If there are any static permissions, add those next
        if (domain.getPermissions() != null) {
            collectPermissions(domain.getPermissions(), permissions, excludedPermissions);
        }

        // Thirdly, get all unchecked permissions
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        // Finally get the permissions for each role *that the current user has*
        Map<String, Permissions> perRolePermissions = policyConfiguration.getPerRolePermissions();
        for (String role : getRoles(domain.getPrincipals())) {
            if (perRolePermissions.containsKey(role)) {
                collectPermissions(perRolePermissions.get(role), permissions, excludedPermissions);
            }
        }

        return permissions;
    }
    
    @Override
    public PermissionCollection getPermissions(CodeSource codesource) {

        Permissions permissions = new Permissions();
        
        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(codesource), permissions, excludedPermissions);
        }

        // Secondly, get all unchecked permissions. Note that these are the only two sources
        // possible here; without knowing the roles of the current user we can't add the per-role permissions.
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        return permissions;
    }
    
    private boolean isExcluded(Permissions excludedPermissions, Permission permission) {
        if (excludedPermissions.implies(permission)) {
            return true;
        }
        
        for (Permission excludedPermission : list(excludedPermissions.elements())) {
            if (permission.implies(excludedPermission)) {
                return true;
            }
        }
        
        return false;
    }
    
    private boolean isUnchecked(Permissions uncheckedPermissions, Permission permission) {
        return uncheckedPermissions.implies(permission);
    }
    
    private boolean hasAccessViaRole(Map<String, Permissions> perRolePermissions, List<String> roles, Permission permission) {
        for (String role : roles) {
            if (perRolePermissions.containsKey(role) && perRolePermissions.get(role).implies(permission)) {
                return true;
            }
        }
        
        return false;
    }
    
    /**
     * Copies permissions from a source into a target skipping any permission that's excluded.
     * 
     * @param sourcePermissions
     * @param targetPermissions
     * @param excludedPermissions
     */
    private void collectPermissions(PermissionCollection sourcePermissions, PermissionCollection targetPermissions, Permissions excludedPermissions) {
        
        boolean hasExcludedPermissions = excludedPermissions.elements().hasMoreElements();
        
        for (Permission permission : list(sourcePermissions.elements())) {
            if (!hasExcludedPermissions || !isExcluded(excludedPermissions, permission)) {
                targetPermissions.add(permission);
            }
        }
    }
    
    /**
     * Extracts the roles from the vendor specific principals. SAD that this is needed :(
     * @param principals
     * @return
     */
    private List<String> getRoles(Principal[] principals) {
        List<String> roles = new ArrayList<>();
        
        for (Principal principal : principals) {
            switch (principal.getClass().getName()) {
            
                case "org.glassfish.security.common.Group": // GlassFish
                case "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal": // Geronimo
                case "weblogic.security.principal.WLSGroupImpl": // WebLogic
                case "jeus.security.resource.GroupPrincipalImpl": // JEUS
                    roles.add(principal.getName());
                    break;
                
                case "org.jboss.security.SimpleGroup": // JBoss
                    if (principal.getName().equals("Roles") && principal instanceof Group) {
                        Group rolesGroup = (Group) principal;
                        for (Principal groupPrincipal : list(rolesGroup.members())) {
                            roles.add(groupPrincipal.getName());
                        }
                        
                        // Should only be one group holding the roles, so can exit the loop
                        // early
                        return roles;
                    }
            }
        }
        
        return roles;
    }
    
}
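A subtle detail in the isExcluded method above is the second, reversed check: a requested permission that is broader than an excluded one must also be refused, otherwise asking for a wide permission could sidestep the exclusion. The following JDK-only sketch isolates that logic, using RuntimePermission (a simple BasicPermission with wildcard names) as a stand-in for the Java EE permission types:

```java
import java.security.Permission;
import java.security.Permissions;
import java.util.Collections;

public class ExclusionDemo {

    public static Permissions buildExcluded() {
        Permissions excluded = new Permissions();
        // Everything under "admin." is excluded
        excluded.add(new RuntimePermission("admin.*"));
        return excluded;
    }

    // Same structure as the article's isExcluded: refuse when the excluded set
    // implies the requested permission, but also when the requested permission
    // implies any excluded one (a broader grant would smuggle the excluded one in).
    public static boolean isExcluded(Permissions excludedPermissions, Permission permission) {
        if (excludedPermissions.implies(permission)) {
            return true;
        }
        for (Permission excludedPermission : Collections.list(excludedPermissions.elements())) {
            if (permission.implies(excludedPermission)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Permissions excluded = buildExcluded();
        System.out.println(isExcluded(excluded, new RuntimePermission("admin.shutdown"))); // true: directly implied
        System.out.println(isExcluded(excluded, new RuntimePermission("*")));              // true: broader than an excluded permission
        System.out.println(isExcluded(excluded, new RuntimePermission("user.view")));      // false
    }
}
```

Without the reverse check, the second case would slip through: the excluded set does not imply the all-encompassing "*" permission, yet granting it would effectively grant the excluded one.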

 

Summary of JACC's problems

Unfortunately we've seen that JACC has a few problems. It has a somewhat arcane and verbose API which easily puts (new) users off. We've also seen that it puts too many responsibilities on the user, especially when it comes to customizing the authorization system.

A fatal flaw in JACC is that it didn't specify how to access roles from a collection of Principals. Since authorization is primarily role based in Java EE this makes it downright impossible to create portable JACC policy providers and much harder than necessary to create vendor specific ones.

Another fatal flaw of JACC is that there's not necessarily a default implementation of JACC active at runtime. This means that general Java EE applications cannot just use JACC; they would have to instruct their users to make sure JACC is activated for their server. Since not all servers ship with a simple default implementation this would not even work for all servers.

On the one hand it's impressive that JACC has managed to integrate so well with Java SE security. On the other hand it's debatable whether this has been really useful in practice. Java EE containers can now consult the Java SE Policy for authorization, but when a security manager is installed Java SE code will consult the very same one.

Java SE permissions are mostly about what code coming from some code location (e.g. a specific library directory on the server) is allowed to do, while Java EE permissions are about the things an external user logged-in to the server is allowed to do. With the current setup the JACC policy replaces the low level Java SE one, and thus all Java SE security checks will go through the JACC policy as well. This may be tens of checks per second or even more. All that a JACC policy can really do is delegate these to the default Java SE policy.

 

Conclusion

The concept of having a central repository holding all authorization rules is by itself a powerful addition to the Java EE security system. It allows one to query the repository and thus programmatically check if a resource will be accessible. This is particularly useful for URL based resources; if a user would not have access we can omit rendering a link to that resource, or perhaps render it with a different color, with a lock icon, etc.
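As a sketch of such a query, the snippet below mimics the per-role permission structure from the PolicyConfiguration shown earlier and asks whether a user with a given set of roles may access a URL. Since javax.security.jacc (and its WebResourcePermission) isn't available on a plain JDK classpath, a simplified stand-in permission with exact-name matching is used here; the query pattern is the point, not the permission type:

```java
import java.security.BasicPermission;
import java.security.Permission;
import java.security.Permissions;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AccessQueryDemo {

    // Hypothetical stand-in for JACC's WebResourcePermission; matches URLs by exact name only
    public static final class UrlPermission extends BasicPermission {
        public UrlPermission(String url) {
            super(url);
        }
    }

    // Per-role permissions, as a PolicyConfiguration-like repository would hold them
    public static Map<String, Permissions> perRolePermissions() {
        Map<String, Permissions> perRole = new HashMap<>();
        Permissions architect = new Permissions();
        architect.add(new UrlPermission("/admin/users"));
        perRole.put("architect", architect);
        return perRole;
    }

    // The query: should we render a link to this URL for a user with these roles?
    public static boolean canAccess(Map<String, Permissions> perRole, List<String> roles, String url) {
        Permission requested = new UrlPermission(url);
        for (String role : roles) {
            Permissions permissions = perRole.get(role);
            if (permissions != null && permissions.implies(requested)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, Permissions> perRole = perRolePermissions();
        System.out.println(canAccess(perRole, List.of("architect"), "/admin/users")); // true
        System.out.println(canAccess(perRole, List.of("user"), "/admin/users"));      // false
    }
}
```

A view layer could call such a query before rendering each protected link, which is exactly the kind of lookup a central authorization repository makes possible.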

Although not explicitly demonstrated in this article it shouldn't require too much imagination to see that the default implementation that we showed can easily do something else instead and thus tailor the authorization system to the specific needs of an application.

As shown, it might not be that difficult to reduce the verbosity of the JACC API by introducing a couple of convenience methods for common functionality. Specifying which principals correspond to roles would be the most straightforward solution for the roles problem, but a container-provided role mapper instance, which returns a list of group/role principals given a collection of strings representing application roles and vice versa, might be a workable solution as well.

While tedious for users to implement, container vendors should not have much difficulty implementing a default JACC policy provider and activating it by default. Actually mandating vendors to use JACC themselves for their Servlet and EJB authorization decisions may be another story though. A couple of vendors (specifically Oracle itself for WebLogic) claim performance and flexibility issues with JACC and more or less encourage users not to use it. In the case of WebLogic this advice probably stems from the BEA days, but it's still in the current WebLogic documentation. With Oracle being the steward of JACC it's remarkable that they effectively suggest their own users not to use it, although GlassFish, which is also from Oracle, is one of the few servers, or perhaps the only one, that in fact uses JACC internally itself.

As it stands JACC is not used a lot and not universally loved, but a few relatively small changes may be all that's needed to make it much more accessible and easy to use.

Arjan Tijms

Thursday, February 20, 2014

JAAS in Java EE is not the universal standard you may think it is

The Java EE security landscape is complex and fragmented. There is the original Java SE security model, there is JAAS, there is JASPIC, there is JACC and there are the various specs like Servlet, EJB and JAX-RS that each have their own sections on security.

For some reason or another it's too often thought that there's only a single all-encompassing security framework in Java EE, and that the name of that framework is JAAS. This is however not the case, far from it. Raymond K. NG explained this a long time ago in the following articles:

In short, Java EE has its own security model and over the years has tried to bridge this to the existing Java SE model, but this has never worked flawlessly. One particular issue that we'll be taking a more in-depth look at in this article concerns the JAAS Subject.

The JAAS Subject represents an entity like a person and is implemented as a so-called "bag of Principals". This means it holds a collection of arbitrary Principals, where a Principal can be anything that identifies the entity, like an email address, telephone number or identification number. This gives it great flexibility to work with, but since none of those Principals are standardized in any way, it's also difficult to work with and open to interpretation. In a way one might almost just as well pass a plain HashMap along, which after all is very flexible too.
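To make the "bag of Principals" concrete, here's a minimal JDK-only sketch. The two principal types below are hypothetical stand-ins for the vendor-specific classes listed later in this article; the point is that nothing in the Subject itself says which principal is the caller and which is a group, so code has to know the concrete classes:

```java
import java.security.Principal;
import java.util.ArrayList;
import java.util.List;
import javax.security.auth.Subject;

public class SubjectDemo {

    // Hypothetical vendor-style principal types; real servers each use their own classes
    public static final class CallerPrincipal implements Principal {
        private final String name;
        public CallerPrincipal(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static final class GroupPrincipal implements Principal {
        private final String name;
        public GroupPrincipal(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static Subject newSubject() {
        // The "bag": a Subject holds an unstructured set of arbitrary Principals
        Subject subject = new Subject();
        subject.getPrincipals().add(new CallerPrincipal("test"));
        subject.getPrincipals().add(new GroupPrincipal("architect"));
        return subject;
    }

    public static List<String> roles(Subject subject) {
        // Without a standard, code must check the concrete class to tell roles apart
        List<String> roles = new ArrayList<>();
        for (Principal principal : subject.getPrincipals()) {
            if (principal instanceof GroupPrincipal) {
                roles.add(principal.getName());
            }
        }
        return roles;
    }

    public static void main(String[] args) {
        System.out.println(roles(newSubject())); // prints [architect]
    }
}
```

Swap GroupPrincipal for another server's group class and the roles method silently returns nothing, which is precisely the portability problem discussed below.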

For Java EE, however, most of the flexibility that the Subject offers is lost. The predominant APIs only deal with two specific principals: the caller/user principal and the group/role principal.

An enormous amount of complexity and rather arcane API workarounds have grown around a simple but very unfortunate fact: those two core principals have never been standardized in any way in Java EE. This means that a JAAS login module, which populates a Subject with principals when it commits, can never be used directly with an arbitrary Java EE server, as it has no idea how a particular Java EE server represents the caller/user and group/role principals.

As explained in a previous article JASPIC has introduced a workaround for this, where a so-called CallerPrincipalCallback and GroupPrincipalCallback do represent the data for those two principals in a standardized way. The idea is that a container can read this data from those callbacks in this standardized way and then construct container specific principals which are subsequently stored in a container specific way into the Subject. This works to some degree, but it remains a crude workaround.

The problem is that the JASPIC CallbackHandler to which these Callbacks are passed is under no obligation to just read the data and directly store it into the Subject without any further side effects, and in practice this is indeed not what happens. Handlers will set global security contexts, call into services to execute all kinds of login logic, and what have you. Furthermore, there is nothing available to directly read those Principals back from a given Subject. Indirectly you can get the user/caller principal back via e.g. the Servlet API's HttpServletRequest#getUserPrincipal, but this only works for the current authenticated user and not for an arbitrary Subject that you may have lying around. Likewise, the role/group principals can be obtained via the JACC API. Unfortunately JACC itself is a rather obscure API that, while being part of Java EE, does not necessarily have to be active at runtime by default, and in fact doesn't even have to work out of the box, since Java EE implementations are not mandated to ship with an actual default JACC provider (they only have to provide the SPI to plug such a provider into the container).

The big question is then of course why these two core principals have never been standardized. Do containers use such different representations that standardization is not feasible, or did it just never occur to anyone in all those years that it may be handy to standardize these?

In order to see if vastly different representations might be the issue I took a look at how actual containers exactly store the Principals inside a Subject. This was primarily done by examining an instance of a Subject via a debugger, but I also took a quick look at some example JAAS modules for a couple of servers. After all, at some point the JAAS module has to add the Principals to the given Subject and the example may indicate how this is done for a specific server.

Examining an instance of a Subject via a debugger


For authentication I used the JASPIC auth module that was introduced in step 5 of the aforementioned previous article and the same test application. In order to retrieve the Subject I used the following JACC code in the Servlet:
Subject subject = (Subject) PolicyContext.getContext("javax.security.auth.Subject.container");

For JBoss EAP 6.2, GlassFish 4 and Geronimo 3.0.1 this worked perfectly. There were however issues with WebLogic 12.1.2, WebSphere 8.5 and JEUS 8.

In WebLogic 12.1.2 JACC is not enabled by default. You have to explicitly enable it by starting the container with a huge string of command line options, including one that references the directory where WebLogic is installed:

./startWebLogic.sh -Djava.security.manager -Djavax.security.jacc.policy.provider=weblogic.security.jacc.simpleprovider.SimpleJACCPolicy -Djavax.security.jacc.PolicyConfigurationFactory.provider=weblogic.security.jacc.simpleprovider.PolicyConfigurationFactoryImpl -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl -Djava.security.policy==/opt/weblogic12.1.2/wls12120/wlserver/server/lib/weblogic.policy

A particularly nasty requirement here is that WebLogic requires the Java SE security manager to be activated. The Java SE security manager is used for code level protection, a level of protection that is rarely if ever needed in Java EE, as it's extremely rare that a server will run untrusted code (unlike e.g. a browser, which runs untrusted Applets from the Internet). Activating the security manager can have a huge performance impact and is typically not recommended for application servers. Yet WebLogic, as the only one of the servers that I tested, for some reason requires this.

After starting WebLogic this way it crashed immediately with an exception about not having access to read some internal file. To quickly remedy this I took the extreme measure of just granting all permissions to all code, which kind of defeats the purpose of using the security manager in the first place and more or less proves that WebLogic's security manager requirement is unnecessary. To do this I put the following in the referenced policy file:

grant { 
    permission java.security.AllPermission;
};

Needless to say this is for testing only and should not be used for a production environment where code level security really is required. After this workaround WebLogic booted, but then started to throw out of memory exceptions. These appeared to be caused by a sudden lack of memory in the permanent generation. After I boosted this from 256MB to 512MB WebLogic was stable again.

WebSphere 8.5 proved to be even worse. Here too JACC is not enabled by default, but it can be enabled via the admin console in the security section. However, WebSphere does not seem to ship with any default JACC provider. The only provider available is a client for something called the Tivoli Access Manager, which after some research appeared to be a kind of authorization server that needs to be installed separately. For a moment I pondered whether I should try to install this, but the WebSphere installation had already taken a horrendous amount of time and it would be immense overkill to run an external authorization server just to get the thread local Subject inside a Servlet.

"Luckily" there's also an IBM proprietary way to get the Subject, so I used that instead:

Subject subject = com.ibm.websphere.security.auth.WSSubject.getCallerSubject();

Finally in JEUS 8 too JACC is not enabled by default. It can be enabled by commenting out or removing the existing repository-service and custom-authorization-service elements in the authorization section of [jeus install dir]/domains/jeus_domain/config/domain.xml and adding the empty jacc-service element. It then becomes just this:

<authorization>
    <jacc-service/>
</authorization>

Although there's no security manager used here, and thus no overhead from that, TMaxSoft warns in its JEUS 7 security manual that the default JACC provider activated by the above shown configuration is mainly for testing, and advises against using it for real production usage. Curiously though, I found after some experimentation that the non-JACC default authorization provider in JEUS is pretty much just like JACC, but with some small differences. This may be an interesting thing to take a deeper look at in a future article.

Additionally JEUS 8 also offered a proprietary way to get hold of the Subject when JACC is not enabled:

Subject subject = jeus.security.impl.login.CommonLoginService.doGetCurrentSubject().toJAASSubject();

Sadly, JEUS 8 is the only application server that despite being Java EE 7 certified doesn't implement JASPIC in a way that actually works; an authentication module can be installed and is correctly called for each request, but the container then simply does nothing with the user/caller and group/roles that are passed to the handler. Because of this, a native login module had to be used for JEUS 8 (the default file based username/password module).

The following shows the result of inspecting the Subject on every server:

JBoss EAP 6.2

Principals
    org.jboss.security.SimpleGroup (name=CallerPrincipal)
        members
            org.jboss.security.SimplePrincipal (name=test)

    org.jboss.security.SimpleGroup (name=Roles)
        members
            org.jboss.security.SimplePrincipal (name=architect)

GlassFish 4.0

Principals
    org.glassfish.security.common.PrincipalImpl (name = test)
    org.glassfish.security.common.Group (name = architect)

Geronimo 3.0.1

Principals
    org.apache.geronimo.security.realm.providers.GeronimoUserPrincipal (name="test")
    org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal (name="architect")
    org.apache.geronimo.security.IdentificationPrincipal
        (name = org.apache.geronimo.security.IdentificationPrincipal[[1392054106031:0x0f9d4c68befbfbee189e0ca0dadd3757d577da8b]]
        (id = SubjectId (name = [1392054106031:0x0f9d4c68befbfbee189e0ca0dadd3757d577da8b], subjectId = 1392054106031))

WebLogic 12.1.2

Principals
    weblogic.security.principal.WLSUserImpl (name = test)
    weblogic.security.principal.WLSGroupImpl (name = architect)

WebSphere 8.5

Principals
    com.ibm.ws.security.common.auth.WSPrincipalImpl (username = test, fullname = defaultWIMFileBasedRealm/test)

PublicCredentials
    com.ibm.ws.security.auth.WSCredentialImpl
        (accessId = user:defaultWIMFileBasedRealm/uid=test,o=defaultWIMFileBasedRealm
         groupIds
            group:defaultWIMFileBasedRealm/cn=architect,o=defaultWIMFileBasedRealm
         realmuniqueusername = defaultWIMFileBasedRealm/uid=test,o=defaultWIMFileBasedRealm
         uniqueusername = uid=test,o=defaultWIMFileBasedRealm
         username = test)

JEUS 8

Principals
    jeus.security.resource.PrincipalImpl (name = test)
    jeus.security.resource.GroupPrincipalImpl 
        (name = architect, 
         description =, 
         individuals = {Principal test=Principal test} (Hashtable of jeus.security.resource.PrincipalImpl to jeus.security.resource.PrincipalImpl), 
         subgroups = [] (empty Vector))
    jeus.security.base.Subject$SerializedSubjectPrincipal (bytes = …)

PrivateCredentials
    jeus.security.resource.Password (algorithm = "base64", password = "YWRtaW4wMDc=", plainPassword = "admin007")
    jeus.security.resource.SystemPassword (algorithm = null, password = "globalpass", plainPassword = "globalpass")

So there you have it; 6 different servers implementing pretty much the same thing in 6 different ways.
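Dumps like the ones above can be produced by simply walking the three collections a JAAS Subject holds. A minimal sketch of such an inspection utility (the class and method names are mine, not from any server):

```java
import java.security.Principal;
import javax.security.auth.Subject;

public class SubjectDumper {

    // Walks the three collections a JAAS Subject holds and renders
    // the implementation class and value of every entry.
    public static String dump(Subject subject) {
        StringBuilder out = new StringBuilder("Principals\n");
        for (Principal principal : subject.getPrincipals()) {
            out.append("    ").append(principal.getClass().getName())
               .append(" (name = ").append(principal.getName()).append(")\n");
        }
        out.append("PublicCredentials\n");
        for (Object credential : subject.getPublicCredentials()) {
            out.append("    ").append(credential.getClass().getName())
               .append(" (").append(credential).append(")\n");
        }
        out.append("PrivateCredentials\n");
        for (Object credential : subject.getPrivateCredentials()) {
            out.append("    ").append(credential.getClass().getName())
               .append(" (").append(credential).append(")\n");
        }
        return out.toString();
    }
}
```

Note that reading the private credentials may additionally require a PrivateCredentialPermission when a SecurityManager is active.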

What we see is that there are several methods to distinguish between a Principal that represents a user/caller name and one that represents a group/role name. JBoss EAP 6.2 is the only tested server that uses a named Group for this; the name of the Group indicates the type of the Principals it contains. Somewhat inconsistently this name is "CallerPrincipal" for the user/caller Principal but "Roles" for the group/roles Principals. Pretty much every other server uses the class type for this distinction, although of course every server uses its own type. Types like org.glassfish.security.common.PrincipalImpl, org.apache.geronimo.security.realm.providers.GeronimoUserPrincipal, weblogic.security.principal.WLSUserImpl and jeus.security.resource.PrincipalImpl are functionally identical: a simple Principal implementation with a single String attribute called "name".
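Portably telling the two kinds of Principal apart today therefore means checking against each server's own classes. A sketch based purely on the class names seen in the dumps above (by no means an exhaustive list):

```java
import java.util.Set;

public class PrincipalClassifier {

    // Group/role principal classes as seen in the server dumps above;
    // other servers and versions would need additional entries.
    private static final Set<String> GROUP_CLASSES = Set.of(
        "org.glassfish.security.common.Group",
        "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal",
        "weblogic.security.principal.WLSGroupImpl",
        "jeus.security.resource.GroupPrincipalImpl");

    public static boolean isGroupPrincipalClass(String className) {
        return GROUP_CLASSES.contains(className);
    }
}
```

JBoss is absent from this list since it distinguishes via the name of a containing Group ("Roles") rather than via the class type, and WebSphere doesn't store groups as Principals at all; both would need entirely separate code paths.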

WebSphere 8.5 does things a little differently. It only models the user/caller name as a Principal, and for some reason adds an extra "fullname" attribute to it, but otherwise this is still pretty much the same thing as what the other servers use. Things differ more when it comes to the groups/roles. WebSphere is the only server tested that doesn't put these in the Principals collection, but in the public credentials collection instead. This collection is of the general type Object, and can thus hold other things than just Principals. It can be argued whether this is more correct or not: Principals are supposed to identify an entity, but does a group/role identify an entity, or is it more a security related attribute? At any rate, WebSphere is the only one that takes the latter view.
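Extracting the group names on WebSphere consequently means going through the credential object rather than the principals. The sketch below assumes the credential exposes a getGroupIds() accessor (as the dump above suggests WSCredentialImpl does) and invokes it reflectively, so that no WebSphere classes are needed at compile time:

```java
import java.util.Collections;
import java.util.List;
import javax.security.auth.Subject;

public class WebSphereGroups {

    // Looks for a public credential exposing a getGroupIds() accessor
    // (as WebSphere's WSCredential is assumed to do) and invokes it
    // reflectively, avoiding a compile-time WebSphere dependency.
    @SuppressWarnings("unchecked")
    public static List<String> groupIdsOf(Subject subject) {
        for (Object credential : subject.getPublicCredentials()) {
            try {
                return (List<String>) credential.getClass()
                    .getMethod("getGroupIds").invoke(credential);
            } catch (ReflectiveOperationException ignored) {
                // Not the credential type we're looking for; try the next one.
            }
        }
        return Collections.emptyList();
    }
}
```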

We also see that two servers store extra internal data in the Subject. Geronimo for some reason needs an org.apache.geronimo.security.IdentificationPrincipal, while JEUS needs an extra jeus.security.base.Subject$SerializedSubjectPrincipal. In the case of JEUS this extra Principal seems to be used to convert from its own proprietary Subject type to the standard JAAS Subject type and back again; JEUS appears to be the only server that thinks it absolutely needs its very own Subject type. From a quick glance at the Geronimo code it wasn't clear why Geronimo needs the IdentificationPrincipal, but I'm sure there is a solid reason for it.

Finally, the group/role Principal in JEUS 8 is special in that it holds a reference to the collection of all individuals who are in that group/role. It may be that this specific linkage is only done when using a small built-in user repository like the local file based one; in the case of an external provider like Facebook it would obviously not be feasible. Also note that the credentials are only there in JEUS 8 because we actually presented a password when we used the server provided login module; in all other cases we used JASPIC and didn't provide a password.
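Note also that the "base64" algorithm of the stored Password credential is merely an encoding, not a hash: anyone who can read the Subject can trivially recover the plain text, which is consistent with the credential also carrying a plainPassword attribute with the decoded value:

```java
import java.util.Base64;

public class Base64PasswordDemo {
    public static void main(String[] args) {
        // The password exactly as stored in the JEUS Subject's private credentials
        String decoded = new String(Base64.getDecoder().decode("YWRtaW4wMDc="));
        System.out.println(decoded); // prints "admin007", the plainPassword from the dump
    }
}
```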

Looking at example JAAS login modules

As mentioned above, JAAS example login modules are another source that's worth looking at. Specifically interesting is the commit method, where the module is supposed to do the transfer from the user/caller name and group/role names into the Subject.

Geronimo 3.0.1 comes with several example modules, whose commit methods are all conceptually identical. Given below is the method from the PropertiesFileLoginModule:

    public boolean commit() throws LoginException {
        if(loginSucceeded) {
            if(username != null) {
                allPrincipals.add(new GeronimoUserPrincipal(username));
            }
            for (Map.Entry<String, Set<String>> entry : groups.entrySet()) {
                String groupName = entry.getKey();
                Set<String> users = entry.getValue();
                for (String user : users) {
                    if (username.equals(user)) {
                        allPrincipals.add(new GeronimoGroupPrincipal(groupName));
                        break;
                    }
                }
            }
            subject.getPrincipals().addAll(allPrincipals);
        }
        // Clear out the private state
        username = null;
        password = null;

        return loginSucceeded;
    }

In the security manual of JEUS 7 a couple of JAAS login modules are given as well. Given below is the commit method of the DBRealmLoginModule (an English version of the manual is available as PDF, but it's frequently moved around and requires a login, and is thus unfortunately very hard to link to):

    public boolean commit() throws LoginException {
        if (succeeded == false) {
            return false;
        } else {
            userPrincipal = new PrincipalImpl(username);
            if (!subject.getPrincipals().contains(userPrincipal))
                subject.getPrincipals().add(userPrincipal);

            ArrayList roles = getRoleSets();
            for (Iterator i = roles.iterator(); i.hasNext();) {
                String roleName = (String) i.next();
                logger.debug("Adding role to subject : username = " + username +
                        ", roleName = " + roleName);
                subject.getPrincipals().add(new RolePrincipalImpl(roleName));
            }

            userCredential = new Password(password);
            subject.getPrivateCredentials().add(userCredential);

            username = null;
            password = null;
            domain = null;
            commitSucceeded = true;
            return true;
        }
    }

JBoss ships with a number of login modules, which are essentially JAAS login modules as well. They have abstracted the common commit method to the base class AbstractServerLoginModule, which is shown below:

    public boolean commit() throws LoginException {
        PicketBoxLogger.LOGGER.traceBeginCommit(loginOk);
        if (loginOk == false)
            return false;

        Set<Principal> principals = subject.getPrincipals();
        Principal identity = getIdentity();
        principals.add(identity);
        
        // add role groups returned by getRoleSets.
        Group[] roleSets = getRoleSets();
        for (int g = 0; g < roleSets.length; g++) {
            Group group = roleSets[g];
            String name = group.getName();
            Group subjectGroup = createGroup(name, principals);
            if (subjectGroup instanceof NestableGroup) {
                /*
                 * A NestableGroup only allows Groups to be added to it so we need to add a SimpleGroup to subjectRoles to contain the roles
                 */
                SimpleGroup tmp = new SimpleGroup("Roles");
                subjectGroup.addMember(tmp);
                subjectGroup = tmp;
            }
            // Copy the group members to the Subject group
            Enumeration<? extends Principal> members = group.members();
            while (members.hasMoreElements()) {
                Principal role = (Principal) members.nextElement();
                subjectGroup.addMember(role);
            }
        }
        
        // add the CallerPrincipal group if none has been added in getRoleSets
        Group callerGroup = getCallerPrincipalGroup(principals);
        if (callerGroup == null) {
            callerGroup = new SimpleGroup(SecurityConstants.CALLER_PRINCIPAL_GROUP);
            callerGroup.addMember(identity);
            principals.add(callerGroup);
        }

        return true;
    }

As can be seen, the JAAS modules all use the server-specific way to store the Principals inside the Subject, and they do so in pretty much the same way as the JASPIC container code did. A remarkable difference is that the JBoss JAAS module inserts the caller/user Principal twice: once directly in the root of the principals set and once in a named group, while the JASPIC auth code only used the named group. In the example DBRealmLoginModule of JEUS there's no trace of a reference to all users in a group/role, so this is either just an extra thing, or perhaps JEUS adds this information to the Subject after the JAAS login module has committed. For this article I did not investigate that further.

Conclusion

We looked at a relatively large number of servers here, nearly all the ones that implement the full Java EE profile (and thus support both JASPIC and JACC where the Subject type comes into play). There are more servers out there like Tomcat, TomEE, Jetty and Resin, but I did not look into them. From previous experience I know that Tomcat doesn't use the Subject type internally and only has support to read from a Subject via its JAAS realm. There are a small number of rather obscure other full Java EE implementations like the Hitachi and NEC offerings which I might investigate in a future article.

For the servers tested it seems that most could simply use two standardized Principals for the caller/user and group/role names. E.g.

  • javax.security.CallerPrincipal
  • javax.security.GroupPrincipal

Each principal would only need a single attribute, name, of type String. GlassFish, Geronimo and WebLogic could use such types as a direct replacement. JBoss would have to switch from looking at a group name to the class type of the Principal. JEUS could directly use the standard type for the caller/user principal, but may have to put the extra information it now stores in the group/role principal somewhere else (assuming that it really needs this info and we did not just observe some coincidental side-effect of the particular login module that we used). The only server that really does things differently is WebSphere: its caller/user principal is mostly equivalent to those of the other servers, but the group/role one is radically different.

To accommodate the extra info a few servers may need to store in the Principal, it might be feasible to define an extra Map<String, Object> attribute on the standardized Principals where this extra server specific info could be stored.
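A sketch of what the two proposed types, including such an extra attributes map, could look like (the javax.security package and all names here are of course hypothetical; nothing like this exists yet):

```java
import java.security.Principal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the two proposed standardized types;
// neither class exists in any actual javax.security package.
public class StandardPrincipals {

    public static class CallerPrincipal implements Principal {
        private final String name;
        private final Map<String, Object> attributes = new ConcurrentHashMap<>();

        public CallerPrincipal(String name) { this.name = name; }

        @Override
        public String getName() { return name; }

        // Room for the extra server-specific data some servers need
        public Map<String, Object> getAttributes() { return attributes; }
    }

    public static class GroupPrincipal implements Principal {
        private final String name;
        private final Map<String, Object> attributes = new ConcurrentHashMap<>();

        public GroupPrincipal(String name) { this.name = name; }

        @Override
        public String getName() { return name; }

        public Map<String, Object> getAttributes() { return attributes; }
    }
}
```

With such types in place, every login module's commit could collapse to the same few lines: add one CallerPrincipal for the user name and one GroupPrincipal per role to subject.getPrincipals(), regardless of the server.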

All in all it thus seems this standardization should be possible without too many problems. The introduction of these two small types would likely simplify the far too abstract and complex Java EE security landscape a lot. The question remains though why this wasn't done long ago.

Arjan Tijms