Making of JBoss Enterprise Web Server

February 14, 2011


I currently work on the second version of JBoss Enterprise Web Server, which should be released pretty soon. You can find more info about the product itself on the official site. Here I'll describe some behind-the-scenes technical aspects of how it is built and maintained.

One of the problems with the majority of Open Source projects is the lack of support for previous versions. By the time you or your IT department deploys the software stack needed to run your fancy new application, there is a good chance that the components making up that stack have already released a multitude of new versions, patches or security updates. If a component breaks backward compatibility (deliberately or unintentionally) you are in big trouble. You will either need to fix your application or stay with the previous version of the component. The first option means doing the quality testing all over again, and if by the time you have finished testing a new version emerges, you are stuck in an endless loop.

As an example, take the latest Apache Tomcat 5.5 release. Version 5.5.32 was released on February 1st. Around that time a JDK Double.parseDouble denial-of-service security issue was discovered, for which we created a workaround, and on February 10th 5.5.33 was released fixing that issue. You may think: wow, that's cool, open source at its best, since a traditional software vendor would need months for that. Not so fast!

Together with the fix, a whole bunch of other code changes came along in version 5.5.33, which you can see listed in the changelog. Those changes probably won't break anything that worked with 5.5.32, but you or your QE team will have to verify that for sure.

All those problems are the foundation upon which companies like mine base their existence. We make sure that the version we give you doesn't break compatibility and that it has all the relevant security issues fixed. We have a QE team that verifies all of that before the updates are shipped to our customers. In the end this makes your life and your IT department's life a lot easier. At least you can blame someone if it's not working.

So, how does that work with JBoss Enterprise Web Server? Basically the same way as with any Red Hat Enterprise Linux component. If you are familiar with RHEL updates and .rpm files (or even Fedora, for a more limited time) you know that component versions don't change, which is the first premise in ensuring backward compatibility. Only security fixes and a limited number of enhancements (mainly performance related) are added to the base code, tested and made into an update. Inside Red Hat we have a huge engineering team that implements those code updates and patches and hands the updated code over to an equally huge Quality Engineering team that verifies the resulting package doesn't break anything.
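
To make that more concrete, here is a minimal sketch of the backporting idea (the file and directory names are made up for illustration): the upstream version stays fixed within a release, and only the reviewed patches are applied on top of it.

    # Minimal backporting sketch; file and directory names are hypothetical.
    VERSION=5.5.32                        # the base version never changes within a release
    tar xzf apache-tomcat-${VERSION}-src.tar.gz
    cd apache-tomcat-${VERSION}-src
    for p in ../patches/*.patch; do       # security fixes and selected enhancements only
        patch -p1 < "$p"
    done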

However, there is one small problem with all that: it's targeted at the Linux operating system. The rpm packaging system is deeply embedded in the Linux distribution, so although the source package is multiplatform, the end result isn't. Since JBoss Enterprise Web Server was meant to be used on multiple platforms like Microsoft Windows and Oracle Solaris, we needed something that would allow us to leverage the existing update code base while at the same time making it platform neutral. That's how the new build system was born.

It uses Hudson for managing the produced artifacts and for kicking off the builds on each native platform. Hudson was chosen because our existing JBoss lab, which was the only one inside Red Hat with platforms other than Linux, was already using it. Although Hudson is designed primarily for building Java applications, it has an option to execute external programs as part of a build process, so we used that feature to simply fire a shell script that actually does the build job.
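
Such an "execute shell" build step is nothing fancy; it boils down to a short script that Hudson runs on the build machine. The script name and options below are placeholders, not the actual EWS build scripts.

    #!/bin/sh
    # Hypothetical Hudson "execute shell" step; build.sh and its options are made up.
    set -e
    cd "$WORKSPACE"                       # WORKSPACE is set by Hudson for each job
    ./build.sh --component httpd --platform solaris-sparc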

In the following picture you can see how that looks in practice:

As you can see it's quite complex, but in the end it's professional stuff. The Hudson build consists of two parts. The Master build prepares the source packages from the RHEL CVS server using the same logic rpm packaging uses internally: it takes each package's original source code and applies a set of patches. The build system does that in a loop for every component that makes up EWS, starting from the major components down to each component's dependencies. It also applies a set of local patches that are not part of the original rpm .spec file. Those patches are used mainly for enabling the multiplatform builds and they don't change the package's functionality.
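
In shell terms the master loop does something along these lines; the component names, paths and patch handling are simplified for illustration only.

    # Simplified sketch of the master job's loop; names and paths are illustrative.
    for comp in httpd tomcat5 mod_jk; do
        cvs -d "$RHEL_CVS_ROOT" checkout "$comp"          # original sources plus rpm patches
        cd "$comp"
        for p in $(awk '/^Patch[0-9]*:/ {print $2}' "$comp.spec"); do
            patch -p1 < "$p"                              # patches listed in the .spec file
        done
        for p in ../local-patches/"$comp"/*.patch; do
            patch -p1 < "$p"                              # local multiplatform-enablement patches
        done
        cd ..
    done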

When the master job finishes it fires a number of subtasks, or Slave builds, which are executed on the particular native platforms. The system uses a group of Solaris boxes, for example, and assigns a slave job to the first free box. After the builds are done, the produced artifacts are gathered by Hudson as a final build. Our QE team uses those builds to verify their integrity and functionality, and if everything goes well they are delivered to the customers.

Updates are done in a similar way to the full builds. Actually, we always do a full build, which is suboptimal and probably an area for future improvement. A special, manually maintained file lists the actual changes. According to that list the final update package is created, which is basically a subset of a full build. The update might contain just a single entry (e.g. Apache Tomcat's catalina.jar) or any number of them.
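
To give a rough idea, cutting such an update out of a full build could look something like the sketch below; the change list file, directory layout and archive name are all made up for illustration.

    # Hypothetical sketch of assembling an update from the manually maintained change list.
    while read -r artifact; do
        case "$artifact" in ''|\#*) continue ;; esac      # skip blank lines and comments
        mkdir -p "update/$(dirname "$artifact")"
        cp "full-build/$artifact" "update/$artifact"      # copy only the changed entries
    done < changed-files.txt
    zip -r ews-update.zip update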

This build system is also used for producing JBoss Enterprise Application Platform native components and connectors, and is currently in the testing stage for producing PostgreSQL ODBC drivers as part of JBoss SOA.
