Thursday, April 4, 2013

Code+tests == specification?

Have you worked with Agile enthusiasts (or fanatics) in the past? If the answer is yes, you probably know what I want to talk about.
One of the core values of the agile movement is "working software over comprehensive documentation". I strongly agree with it. Many of us know of projects that failed or missed deadlines because of big up-front specification and design, which usually does not work when requirements can change during the design or implementation phase. Unfortunately, many agile enthusiasts simply state that time spent preparing a specification is wasted, as the code and the test suite themselves replace the specification. I think they put the cart before the horse.
First, without enough time spent on up-front specification and design, a project usually ends up with a sub-optimal architecture, ad hoc solutions and endless refactoring. The larger the project, the greater the chance of failure, or at least of missed deadlines.
Second, you need a specification to make sure you build the right product. Yes, you can show the result of each iteration to the customer, but his or her opinion is not a proof: the customer may not be aware of anything beyond the UI and a subset of the functional requirements. Without a written specification you cannot prove anything. Sure, you can test the implementation, but tests can only show the presence of bugs, never their absence. Even Test-Driven Development won't help you, since tests may be incomplete or may contain bugs, and the code reflects this. The only way to demonstrate correctness is to check each and every feature against the written specification. This can be automated with an acceptance test suite, but you need a specification first to write acceptance tests.
Years ago, I attended a conference organized by a local computer science society. One of the presenters gave a case study of a military-grade project carried out using Extreme Programming. The presenter was proud of the fact that they only used post-it notes created at iteration planning meetings. I was curious whether they had to verify the resulting system, and how they could do so without a specification, based only on those post-it notes. Unfortunately, the session ran out of time and the presenter left the room, leaving me no chance to talk to him.
To recap, in most cases skipping the specification step cannot be done without negative consequences. We are agile enough if we spend some time creating a specification and a conceptual architecture up front. This is overhead, of course, but it pays off in the long run.

Monday, March 18, 2013

Google Summer of Code

New year, new Google Summer of Code. If I were a student again, I'd seriously consider applying to one of the projects proposed by the Java Pathfinder team. I hope their proposals will be accepted by Google.

Wednesday, February 27, 2013

Nulls Are Mostly Harmful


This is an old topic, but it bears repeating. Nulls are mostly harmful, because
  • they force programmers to pollute the code with null checks
  • uninitialized references may easily cause runtime errors
  • the semantics of null is ambiguous; null may represent
    • an unintentionally uninitialized reference
    • a legal value (an intentionally uninitialized reference)
    • the lack of a result from an operation
    • a result of a failed operation
    • a result of a human (usually programming) error
    • etc.
So avoid them by using
  • defensive programming practices
    • never return null, except in the rare case when the operation has clear null semantics, and
    • explicitly document this decision in a formal way (e.g. using the javax.annotation.Nullable annotation)
    • use the Null Object design pattern if you need to represent a legal null value (see the sketch after this list)
    • annotate the code with javax.validation.constraints.NotNull to be able to verify reference nullness at runtime
  • validation using static and runtime analysis
    • static analysis is fast and well integrated with most popular IDEs (see the Checker Framework and FindBugs), but it only finds errors that can be detected at compile time
    • runtime analysis is more accurate, but it is slower and requires a testing phase in the development process (see Java Pathfinder and Hibernate Validator).
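To make the defensive practices concrete, here is a minimal Java sketch. The Customer and CustomerRepository names are made up for illustration, and the annotations require the JSR-305 and Bean Validation APIs on the classpath:

import java.util.HashMap;
import java.util.Map;

import javax.annotation.Nullable;
import javax.validation.constraints.NotNull;

class Customer {

    // Null Object: a legal "no customer" value that behaves like a
    // regular Customer, so callers do not need a null check
    static final Customer NO_CUSTOMER = new Customer("<unknown>") {
        @Override
        boolean isUnknown() { return true; }
    };

    private final String name;

    Customer(String name) { this.name = name; }

    String getName() { return name; }

    boolean isUnknown() { return false; }
}

class CustomerRepository {

    private final Map<Long, Customer> customers = new HashMap<Long, Customer>();

    // the contract is explicit and verifiable: this method never returns null
    @NotNull
    Customer findById(long id) {
        Customer customer = customers.get(id);
        return customer != null ? customer : Customer.NO_CUSTOMER;
    }

    // the rare, documented exception: here null has a clear meaning
    // ("not stored yet"), made explicit by the annotation
    @Nullable
    Customer findIfPresent(long id) {
        return customers.get(id);
    }
}

A static analysis tool can then warn when the result of findIfPresent is dereferenced without a null check, while callers of findById need no check at all.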

Monday, January 14, 2013

Java Pathfinder Maven repository how-to

I set up a Maven repository for Java Pathfinder (JPF) projects in the past few days. I spent long hours figuring out solutions to some problems, so I decided to share my experience.

Install a Maven repository manager

The first step is to install a Maven repository manager. There are many good open-source candidates, such as Artifactory or Apache Archiva, but my favourite is Sonatype Nexus. There is a standalone version that runs in a Jetty container, but it is also distributed as a war file that can be deployed in almost any Java EE container. The installation process is very straightforward, consisting of a few easy steps.
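As a rough sketch, installing the standalone bundle on Linux amounts to the following (the version number and file names are placeholders for whatever you download; by default the web UI is then available at http://localhost:8081/nexus):

tar xzf nexus-{version}-bundle.tar.gz
cd nexus-{version}
bin/nexus start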
After you have a working repository manager, you should create a Maven repository for JPF. This can be done using the web-based admin interface of Nexus. (Do not forget to change the default passwords of the admin and deployment users.)
In the left panel you will find the "Repositories" link, which opens the "Repositories" tab. Click the "Add" button on the toolbar, choose "Hosted Repository" from the list, then fill in the "Repository ID" and "Repository Name" fields. After you save the configuration, your repository can be accessed at {URL of your Nexus deployment}/content/repositories/{repository ID}.

Generate and upload the artifacts

First, clone the Mercurial repositories of the JPF projects that you want to host in your Maven repository. Then run the Ant build for each project to create the jar files to be uploaded to the Maven repository.
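For example, for jpf-core this boils down to something like the following sketch (the repository URL is where the JPF Mercurial repositories were hosted at the time of writing; adjust it if the project has moved):

hg clone http://babelfish.arc.nasa.gov/hg/jpf/jpf-core
cd jpf-core
ant
# the jar files (e.g. jpf.jar, jpf-annotations.jar) are created in the build directory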
After this step, you are ready to upload the artifacts. This can be done using the admin UI of Nexus, or from the command line by executing mvn deploy:deploy-file for each jar file:
mvn deploy:deploy-file -Durl={URL of your Nexus deployment}/content/repositories/{repository ID} -DrepositoryId={repository ID} -Dpackaging=jar -Dfile={jar file} -DgroupId=gov.nasa.jpf -DartifactId={name of the jar without the extension} -Dversion={JPF module version number}
Before issuing this command, make sure that your Maven settings contain a server entry for the repository ID, with the corresponding username and password set.
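For reference, here is a minimal sketch of the relevant fragment of ~/.m2/settings.xml (the repository ID and the password are placeholders; deployment is the built-in Nexus deployment user mentioned above):

<settings>
  <servers>
    <server>
      <id>{repository ID}</id>
      <username>deployment</username>
      <password>{deployment password}</password>
    </server>
  </servers>
</settings>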


Automate

The JPF projects change constantly, so you might want to update your Maven repository with artifacts containing these changes. The next step is therefore to create a script that polls the Mercurial repositories, compiles the sources and uploads the jar files to the Maven repository. I created a small Bash script for this task:
#!/bin/bash

nexus_repo='{URL of your Nexus deployment}/content/repositories/{repository ID}'
jpf_home='{path to your local Mercurial repositories}'
repo_id='{repository ID}'
group_id='gov.nasa.jpf'
module_version='{JPF module version number}'

for repo in "$jpf_home"/jpf-*; do
    cd "$repo" || continue
    # hg incoming lists the changesets that have not been pulled yet;
    # it exits with status 0 only when there are incoming changes
    changesets=$(hg incoming)
    if [ $? -eq 0 ] || [ ! -d build ]; then
        # update the working copy and rebuild; skip the upload if either step fails
        hg pull -u && ant || continue
        for jar in build/jpf*.jar; do
            if [ -f "$jar" ]; then
                # derive the artifact ID from the jar file name
                # (e.g. build/jpf-core.jar -> jpf-core)
                artifact_id=$(basename "$jar" .jar)
                mvn deploy:deploy-file -Durl="$nexus_repo" -DrepositoryId="$repo_id" -Dpackaging=jar -Dfile="$jar" -DgroupId="$group_id" -DartifactId="$artifact_id" -Dversion="$module_version" -Ddescription="$changesets"
            fi
        done
    fi
done
This script updates the local Mercurial repositories, executes the Ant build for each JPF project and uploads the resulting jars to the Maven repository. If you want to run the script periodically, you can create a cron job that runs it a couple of times a day or week.
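For example, a crontab entry like the following (the script name and path are placeholders) runs the update every day at 3 a.m.:

0 3 * * * {path to the script}/update-jpf-maven-repo.sh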

...and done.