Friday, May 26, 2006

 

Looking at Java™ EE 5 for useful features.

Coming from a background where we used an object-oriented database (ObjectStore in fact!), I'm always keen to look at technologies that provide a layer of abstraction over any underlying relational database. I had been looking at the Java Data Objects specification with a view to this end. This is a specification that has run in parallel to Enterprise JavaBeans. They come from different backgrounds and viewpoints, but go about achieving roughly the same thing in similar ways(!). The JSR for Java™ EE 5 recently went final, so I've been looking at that too, what with its Persistence API that can be run outside of the application server container. Unfortunately I think both are a bad fit for our purposes. Our event model is very simple and really should just map to a single class (or table in SQL speak!). The tutorials for these technologies always talk about mapping complex relationships, using sporting examples where you have teams, players and leagues. You can express the cardinalities of the relationships as things like OneToOne, OneToMany, ManyToOne or ManyToMany via annotations on the appropriate attributes / properties. They also support the notion of cascading delete, so that if a persisted object relies entirely on the existence of another and the object on which it depends gets deleted, it will be deleted too. Again, you can express this easily through annotations, as the sketch below shows. Alas, our data model is so simple that we just don't need this kind of thing!
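To make that concrete, here's a minimal sketch of how those cardinalities and the cascading delete are expressed with the persistence annotations (the Team and Player classes are just the tutorials' sporting example, not our event model):

import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Team {
    @Id @GeneratedValue
    private Long id;

    // One team has many players; deleting the team deletes its players too.
    @OneToMany(mappedBy = "team", cascade = CascadeType.REMOVE)
    private List<Player> players;
}

@Entity
class Player {
    @Id @GeneratedValue
    private Long id;

    // Many players belong to one team.
    @ManyToOne
    private Team team;
}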

However, one thing that we could use is resource injection. This is where you map something like a datasource to a JNDI name in the container and then refer to it from within your application. Instead of explicitly writing JNDI code to get a handle on this datasource, you can just mark up an associated variable with the @Resource annotation and the container will do the binding for you.
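Something like this, for example (the JNDI name here is just an illustration, and this only works in a class the container manages):

import javax.annotation.Resource;
import javax.sql.DataSource;

public class EventStore {
    // The container looks up "jdbc/eventsDS" in JNDI and injects the
    // datasource for us; no explicit InitialContext lookup required.
    @Resource(name = "jdbc/eventsDS")
    private DataSource dataSource;
}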

A library that we think will fit our purposes is Hyperjaxb2. In principle you can take the JAXB classes that you've generated from your XML schema (which we've already done) and, in conjunction with Hibernate, create an object-relational mapping that binds your classes to a particular database. You can also use Hibernate to generate the mapping for other database vendors. You can use Hibernate's own HQL to perform queries on the data and retrieve the results as XML, going via the JAXB classes again. It'll be interesting to see if it works as desired (the only foreseeable drawback is that it's only at version 0.4, but maybe the developers are ultra-conservative with their versioning!).
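As a rough sketch of the sort of thing we have in mind, querying with plain Hibernate might look like this (the Event class and its severity property are hypothetical stand-ins for our JAXB-generated classes):

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class EventQueries {
    public static void main(String[] args) {
        // hibernate.cfg.xml would reference the Hyperjaxb2-generated mappings.
        SessionFactory factory = new Configuration().configure().buildSessionFactory();
        Session session = factory.openSession();
        try {
            // HQL queries against the JAXB-generated class; the results come
            // back as instances of it, ready to be marshalled back to XML.
            List events = session
                .createQuery("from Event e where e.severity = :sev")
                .setParameter("sev", "ERROR")
                .list();
            System.out.println("Matched " + events.size() + " events");
        } finally {
            session.close();
            factory.close();
        }
    }
}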


 

Subversive winning out!

I've blogged in the past about the various Eclipse plugins available for use with Subversion; these are Subclipse (which is now at version 1.0.1) and Subversive (which is now at version 1.0.0 RC1a). I had been favouring Subclipse, as it enabled me to do all the typical operations I wanted (such as server-side copying), which Subversive didn't. However, what's finally won me over with Subversive is its excellent projection of refactorings into version control. What I mean is that you can rename a Java class and / or move it to another package, and Subversive keeps track of all this, so that when you come to commit your files all the version history is kept on what is otherwise (effectively) a "new" file. Subclipse cannot do this at all; you have to carry out all the intermediate steps manually. On the other hand, I'm starting to suspect that Subversive not being able to perform server-side copying may have something to do with the Subversion configuration at SourceForge. Even if I still can't do these, I can perform the equivalent operations client-side instead. I also think that the views associated with Subversion repository exploring are better in Subversive. For example, if you have moved a file, you can clearly see where it was moved from.



Tuesday, May 16, 2006

 

Abstracting the persistence layer.

In this day and age, I'd like to be able to avoid writing raw SQL (certainly at the development stage!), so I'm keen to use whatever O/R mapping technologies are available. The other benefit of this is that you don't tie yourself to any one particular database (the database becomes a "pluggable" component itself). I also personally find it a lot more natural to think in terms of objects, rather than table columns.

One of the things that piques my interest is the Java Persistence API (which is part of EJB 3.0). The specification states that you can leverage EJB 3.0 style persistence outside of an application server. I first heard about this in a tech talk with Gavin King from Hibernate. Persisting entities in this way makes a great deal of use of annotations, which really cuts down on the verbosity and enables you to do away with tricky XML descriptor files almost altogether (there's an interesting article about this here).
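For instance, bootstrapping outside the container looks something like this (the persistence unit name "events" is just a placeholder for whatever is declared in META-INF/persistence.xml):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Standalone {
    public static void main(String[] args) {
        // No application server needed: the factory is built directly from
        // the persistence unit defined in META-INF/persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("events");
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // ... persist / query entities here ...
            em.getTransaction().commit();
        } finally {
            em.close();
            emf.close();
        }
    }
}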

An entity becomes a simple POJO, so it should be clear what the class actually does; all the semantics concerning its connections with an EJB container / application server are hidden away by marking up the relevant features with annotations. An alternative to using EJB 3.0 is Java Data Objects. This is another way of handling the O/R mapping and was always intended to be usable outside an application server. This (JDO 2.0) and EJB 3.0 have recently become more closely aligned, but some think that they will both exist as separate standards for some time to come (as can be seen in this tech talk with Patrick Linskey from SolarMetric).

Some relevant standards that went final this month.


 

Using JAXB to generate XML schema from Java.

Recently I had attempted to start creating the XML schema for our tracking events using the excellent oXygen, which you can use standalone or as a plugin for Eclipse. However, to be honest, I'm a beginner with XML schema, so even with code tips and code completion I was finding it a bit of a struggle! It occurred to me that I'm comfortable thinking in terms of objects, so I decided to give JAXB a spin; specifically, to generate the schema from the Java (rather than the other way round!).

There are two features of Java 5 that prove useful straight away when working with JAXB: enums and annotations. In fact they really come into their own here! By default, JAXB maps an enum as a restricted set of strings (not as a restricted set of ints), so the mapping is very natural and of course you can do this very succinctly. If you only want the value of an element or attribute to be one of the values red, green or blue, you can declare an enum thus: enum Colour { RED, GREEN, BLUE };. Within JAXB you can use annotations to control how your Java gets mapped into schema. For example, if you want a property mapped as an attribute rather than an element, you can tag it with the annotation @XmlAttribute. You can put this on the field or on one of the appropriate accessors.
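Putting the two together, a sketch of an annotated class might look like this (TrackingEvent and its properties are illustrative, not our real event model):

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class TrackingEvent {

    // Maps to an xs:string restricted to these three values.
    public enum Colour { RED, GREEN, BLUE }

    // Mapped as an XML attribute rather than a child element.
    @XmlAttribute
    private Colour colour;

    // Unannotated fields become elements by default.
    private String description;
}

The JAXB reference implementation's schemagen tool can then be pointed at classes like this to produce the corresponding schema.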



Saturday, May 13, 2006

 

An Apache project called DdlUtils may be useful.

A few days ago I came across an Apache project called DdlUtils. It was only after I dug around a bit that I discovered that they haven't released any binaries yet! However, I was able to check it out from their Subversion repository and build it using Ant. It currently describes itself as version 1.0-dev.

What it does is enable you to migrate schema and content between databases that have an associated JDBC driver (which is most of them these days!). It does this through an associated DDL (data definition language) file, which is in XML format. So, for example, you could develop an SQL schema on a PostgreSQL database, use DdlUtils to generate a DDL file from it, and then use that to generate the schema in a MySQL database. You can then use it to evolve the schemas in parallel: change the schema in the PostgreSQL database, generate another DDL file, and then ask DdlUtils nicely to evolve the schema in the MySQL database to match it. It should work for content too, so in theory you could use it to take a dump of the database content of one vendor and import the dump into the database of another. A sketch of what this might look like is below.
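Hedged heavily: since the project is only at 1.0-dev, the API below is based on the current snapshot and may well change, but migrating a schema might go something like this:

import javax.sql.DataSource;
import org.apache.ddlutils.Platform;
import org.apache.ddlutils.PlatformFactory;
import org.apache.ddlutils.io.DatabaseIO;
import org.apache.ddlutils.model.Database;

public class SchemaMigration {
    public static void migrate(DataSource postgres, DataSource mysql) throws Exception {
        // Read the live schema out of PostgreSQL into DdlUtils' model...
        Platform source = PlatformFactory.createNewPlatformInstance(postgres);
        Database model = source.readModelFromDatabase("tracking");

        // ...write it out as the XML DDL file (which could be read back later)...
        new DatabaseIO().write(model, "ddl.xml");

        // ...then create the matching schema in MySQL.
        Platform target = PlatformFactory.createNewPlatformInstance(mysql);
        target.createTables(model, false, true);
    }
}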

This could come in handy with TReCX, because we want to keep the schema generic and not tied too closely to any one specific database.

