Monday, February 18, 2019

Java date-time timezone formats

Java has excellent date-time formatting with the arrival of java.time (JSR310) in Java 8. I point out that release as it came with a usable, safe API. (Let us never speak of Calendar again).

However, I never recall how to format the timezone. There are so many options, and it is easy to get it "almost right", but not exactly right.

Problem

I'd like to append a "Z" character on the end of a UTC timestamp. OK, let's look at the options, showing only those for timezone/offset:

Symbol  Meaning                   Presentation  Examples
V       time-zone ID              zone-id       America/Los_Angeles; Z; -08:30
v       generic time-zone name    zone-name     Pacific Time; PT
z       time-zone name            zone-name     Pacific Standard Time; PST
O       localized zone-offset     offset-O      GMT+8; GMT+08:00; UTC-08:00
X       zone-offset, 'Z' for zero offset-X      Z; -08; -0830; -08:30; -083015; -08:30:15
x       zone-offset               offset-x      +0000; -08; -0830; -08:30; -083015; -08:30:15
Z       zone-offset               offset-Z      +0000; -0800; -08:00

One thing to be wary of: formatting characters can be doubled, tripled, or quadrupled, and it changes the result. Further, some characters have special rules on repeating (eg, "VV", and "O" vs "OOOO").

The best way to understand what to use is to try them all:

final var when = ZonedDateTime.of(
        LocalDate.of(2011, 2, 3),
        LocalTime.of(14, 5, 6, 7_000_000),
        ZoneId.of("UTC"))
        .toInstant();
for (final String tzFormat
        : List.of("VV", "v", "z", "zz", "zzz", "zzzz", "O", "OOOO", "X", "XX", "XXX",
        "XXXX", "x", "xx", "xxx", "xxxx", "Z", "ZZ", "ZZZ", "ZZZZ")) {
    System.out.println(
            tzFormat + " - " + DateTimeFormatter
                    .ofPattern("yyyy-MM-dd'T'HH:mm:ss" + tzFormat)
                    .withZone(ZoneId.of("UTC"))
                    .format(when));
}

Producing:

VV - 2011-02-03T14:05:06UTC
v - 2011-02-03T14:05:06UTC
z - 2011-02-03T14:05:06UTC
zz - 2011-02-03T14:05:06UTC
zzz - 2011-02-03T14:05:06UTC
zzzz - 2011-02-03T14:05:06Coordinated Universal Time
O - 2011-02-03T14:05:06GMT
OOOO - 2011-02-03T14:05:06GMT
X - 2011-02-03T14:05:06Z
XX - 2011-02-03T14:05:06Z
XXX - 2011-02-03T14:05:06Z
XXXX - 2011-02-03T14:05:06Z
x - 2011-02-03T14:05:06+00
xx - 2011-02-03T14:05:06+0000
xxx - 2011-02-03T14:05:06+00:00
xxxx - 2011-02-03T14:05:06+0000
Z - 2011-02-03T14:05:06+0000
ZZ - 2011-02-03T14:05:06+0000
ZZZ - 2011-02-03T14:05:06+0000
ZZZZ - 2011-02-03T14:05:06GMT

What an exciting list! "zzzz" is rather wordy, and it's unclear what "ZZZZ" is doing. Actually, the whole list is even more interesting for timezones other than UTC.

Solution

Since the goal is to append a "Z", the simplest choice is: yyyy-MM-dd'T'HH:mm:ssX.
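As a quick check, here is a minimal sketch reusing the when instant from the listing above:

System.out.println(DateTimeFormatter
        .ofPattern("yyyy-MM-dd'T'HH:mm:ssX")
        .withZone(ZoneId.of("UTC"))
        .format(when));
// Prints: 2011-02-03T14:05:06Z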

Addendum

Why didn't I just use DateTimeFormatter.ISO_INSTANT, which is documented to produce the "Z"? I want a timestamp that is to only seconds-precision, and the format for "ISO_INSTANT" includes milliseconds.
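To illustrate, a small sketch again reusing when, which carries 7 milliseconds:

System.out.println(DateTimeFormatter.ISO_INSTANT.format(when));
// Prints: 2011-02-03T14:05:06.007Z, with the fractional seconds I do not want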

Friday, January 18, 2019

Spring REST testing

After too much Internet searching, I was unable to find an easy solution to repeated duplication in my Spring MockMVC tests of REST controller endpoints. For years now, the endpoints we write have typically sent or received JSON. This is what I mean:

mockMvc.perform(post("/some/endpoint")
        .contentType(APPLICATION_JSON_UTF8)
        .accept(APPLICATION_JSON_UTF8)
        .content(someRequestJson))
        .andExpect(status().isCreated())
        .andExpect(header().string(CONTENT_TYPE, APPLICATION_JSON_UTF8_VALUE))
        .andExpect(header().string(LOCATION, "/some/endpoint/name-or-id"))
        .andExpect(content().json(someResponseJson));

All the repeated "APPLICATION_JSON_UTF8"s, in every controller test!

If there is an existing Spring testing solution, I'd love to hear about it. Rather than wait, I wrote up a small extension of @WebMvcTest to default these values.

First, an annotation for Spring to use in setting up a MockMvc (javadoc elided):

@Documented
@Import(JsonMockMvcConfiguration.class)
@Retention(RUNTIME)
@Target(TYPE)
@WebMvcTest
public @interface JsonWebMvcTest {
    @AliasFor(annotation = WebMvcTest.class)
    String[] properties() default {};

    @AliasFor(annotation = WebMvcTest.class)
    Class<?>[] value() default {};

    @AliasFor(annotation = WebMvcTest.class)
    Class<?>[] controllers() default {};

    @AliasFor(annotation = WebMvcTest.class)
    boolean useDefaultFilters() default true;

    @AliasFor(annotation = WebMvcTest.class)
    ComponentScan.Filter[] includeFilters() default {};

    @AliasFor(annotation = WebMvcTest.class)
    ComponentScan.Filter[] excludeFilters() default {};

    @AliasFor(annotation = WebMvcTest.class)
    Class<?>[] excludeAutoConfiguration() default {};
}

Note it is a near exact lookalike of @WebMvcTest (minus the deprecated parameter). The important bits are:

  1. Marking this annotation with @WebMvcTest, a kind of extension through composition.
  2. Adding @Import to bind custom configuration to this annotation.
  3. Tying the same-named annotation parameters to @WebMvcTest, so this annotation is a drop-in replacement of that one.

Next a configuration class, imported by the annotation, to customize MockMvc:

@Configuration
public class JsonMockMvcConfiguration {
    @Bean
    @Primary
    public MockMvc jsonMockMvc(final WebApplicationContext ctx) {
        return webAppContextSetup(ctx)
                .defaultRequest(post("/")
                        .contentType(APPLICATION_JSON_UTF8)
                        .accept(APPLICATION_JSON_UTF8_VALUE))
                .alwaysExpect(header().string(
                        CONTENT_TYPE, APPLICATION_JSON_UTF8_VALUE))
                .build();
    }
}

Some points about this class:

  • @Primary is not necessary for Spring, but helped IntelliJ — perhaps I got lucky with Spring without @Primary, and IntelliJ highlighted a real problem.
  • It took quite a while to get defaultRequest(...) working. I was unable to (re)implement the relevant interfaces, and eventually found that passing any MockHttpServletRequestBuilder sufficed. Spring "merges" (overlays) the actual request builder from the test over this default, replacing POST and "/" with whichever HTTP method and path the test uses (eg, GET "/bob"). Only the header customization is used.

Example usage:

@JsonWebMvcTest(SomeController.class)
class SomeControllerTest {
    @Autowired
    private MockMvc jsonMockMvc;

    @Test
    void shouldCheckSomething()
            throws Exception {
        jsonMockMvc.perform(post("/some/endpoint")
                .content(someRequestJson))
                .andExpect(status().isCreated())
                .andExpect(header()
                        .string(LOCATION, "/some/endpoint/new-name"))
                .andExpect(content().json(someResponseJson));
    }
}

See the Basilisk project for source code and sample usage. (Basilisk is a demonstration project for my team illustrating Spring usage and conventions.)

Wednesday, January 09, 2019

Magic Bus returns

During my first stint at ThoughtWorks, I paired with Gregor Hohpe on implementing messaging patterns while he worked with Bobby Woolf on Enterprise Integration Patterns (EIP). To this day, this remains one of my favorite technical books. In conversation I was always struck by Gregor's meticulous "napkin diagrams" as he illustrated the point he was making.

One output from that pairing was to experiment with using messaging patterns within a single program, not just between programs. So I wrote the "Magic Bus" library in Java, using reflection, to connect publishing and subscribing components within a web services backend.

While working a new project, I find myself diagramming one of our backend services using EIP's notations for messaging patterns. And I recalled "Magic Bus".

I thought I had long ago lost the source code, but found some JVM .class files in a forgotten directory. IntelliJ to the rescue! Using JetBrains' excellent Fernflower decompiler, I recovered a later stage of "Magic Bus", from after I had converted it to type-safe generics and dropped reflection.

That code is now in public GitHub, brought up to Java 11, and cleaned up.

If I recall correctly, I originally dropped "Magic Bus" after Guava's Event Bus came along. What makes "Magic Bus" different from Event Bus? Not too much, actually. The main feature in "Magic Bus" lacking in Event Bus is subscribing to message handler exceptions: in Guava one instead registers a global callback to handle exceptions.
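For comparison, here is a minimal sketch of Guava's global callback; the event and subscriber classes are made up for illustration:

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

public final class EventBusExceptionDemo {
    // Made-up event and subscriber, only to show where handler exceptions surface
    static final class Greeting {}

    static final class BrokenSubscriber {
        @Subscribe
        public void on(final Greeting greeting) {
            throw new IllegalStateException("handler failed");
        }
    }

    public static void main(final String... args) {
        // The global callback: Guava routes subscriber exceptions here
        final EventBus bus = new EventBus((exception, context) ->
                System.err.println("Handler failed in "
                        + context.getSubscriberMethod() + ": " + exception));
        bus.register(new BrokenSubscriber());
        bus.post(new Greeting());
    }
}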

Monday, January 07, 2019

Hard-won JDK offset knowledge

It took far more research time than I expected. The goal: output an OffsetDateTime with the offset for the Zulu (UTC) timezone as +00:00.

I have a project where a 3rd-party JSON exchange expected timestamps in the format 01-02-03T04:05:06+00:00. We're using Jackson in a Java project. All the default configuration I could find, and trying all the "knobs" on Jackson I could find, led to: 01-02-03T04:05:06Z. Interesting, as any non-zero offset produced 01-02-03T04:05:06+07:00 rather than a timezone abbreviation: zero offset is special.

Finally, circling back to the JDK javadocs yet again, I spotted what I had overlooked many times before:

Offset X and x: This formats the offset based on the number of pattern letters. One letter outputs just the hour, such as '+01', unless the minute is non-zero in which case the minute is also output, such as '+0130'. Two letters outputs the hour and minute, without a colon, such as '+0130'. Three letters outputs the hour and minute, with a colon, such as '+01:30'. Four letters outputs the hour and minute and optional second, without a colon, such as '+013015'. Five letters outputs the hour and minute and optional second, with a colon, such as '+01:30:15'. Six or more letters throws IllegalArgumentException. Pattern letter 'X' (upper case) will output 'Z' when the offset to be output would be zero, whereas pattern letter 'x' (lower case) will output '+00', '+0000', or '+00:00'.

The key is to use lowercase 'x' in the format specification. So my problem with Jackson became:

    @JsonFormat(pattern = "yyyy-MM-dd'T'HH:mm:ssxxx")
    private final OffsetDateTime someOffsetDateTime;

And the result is the desired 01-02-03T04:05:06+00:00.
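Outside of Jackson, the difference is easy to see with the formatter alone; a small self-contained sketch, with an arbitrary sample date:

import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public final class OffsetFormatDemo {
    public static void main(final String... args) {
        final var atUtc = OffsetDateTime.of(2003, 1, 2, 4, 5, 6, 0, ZoneOffset.UTC);
        // Uppercase 'X' collapses a zero offset to 'Z'
        System.out.println(DateTimeFormatter
                .ofPattern("yyyy-MM-dd'T'HH:mm:ssXXX").format(atUtc)); // 2003-01-02T04:05:06Z
        // Lowercase 'x' always prints the numeric offset
        System.out.println(DateTimeFormatter
                .ofPattern("yyyy-MM-dd'T'HH:mm:ssxxx").format(atUtc)); // 2003-01-02T04:05:06+00:00
    }
}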

Now I can return to more interesting problems.

Friday, November 09, 2018

What are exceptions?

Essentially, exceptions are a form of structured, non-local goto with stack unwinding. "Structured" here means "higher level than the machine" (a matter of taste/opinion), and "non-local" means "beyond a single function/method".

What this means is that you can replace code like (in "C"):

int bottom()
{
    int things_go_wrong = -1; // For illustration

    if (things_go_wrong) goto error;
    return 0;

error:
    return -1;
}

int
middle()
{
    if (-1 == bottom()) goto error;
    return 0;

error:
    return -1;
}

void
top()
{
    if (-1 == middle()) {
        handle_failure();
    }
}

With code like (in Java):

public class A {
    void bottom() {
        boolean thingsGoWrong = true; // For illustration

        if (thingsGoWrong) throw new ThingsWentWrong("So wrong!");
    }

    void middle() {
        bottom();
    }

    void top() {
        try {
            middle();
        } catch (ThingsWentWrong e) {
            handleFailure();
        }
    }
}

(An example in Scheme.)

"Unwinding" here means the compiler or runtime treats intermediate calls (the stack) the same as if returning normally (for example, the stack pointer is moved back; and in a language like C++, destructors are executed), and program execution resumes in the catch block.

It is "structured" in the sense that it is not the same as a direct goto to the resume point. This is not possible in standard "C" or C++ (or in Java), which only suppport local labels within functions. The equivalent in "C" and C++ is to use setjmp and longjmp for a non-local goto, and forget all about deallocating memory or calling destructors. (As odd as this sounds, it is needed for call-with-continuation, an important feature in LISP-like languages).

Takeaway

All human endeavors build on the work of those who came before. Exceptions are no exception. They are the result of 1960s and 1970s programmers wanting an easier way to deal with exceptional conditions. These are not quite the same as "errors", which may sometimes be better represented with a simple boolean true/false return. Like any sharp instrument, do not abuse it.

Friday, October 19, 2018

Avoid JIRA

I filed this issue with Atlassian about JIRA:

You make it incredibly difficult to report issues about JIRA itself:

 * I cannot use Markdown in the editor.  You are the only tool I use which does
   not support Markdown.  This is one of the top reasons I recommend against
   using JIRA to clients.  Ex: quoting code with single backticks, or code blocks
   with triple backticks
 * Finding the issue tracker for JIRA is a PITA.  Even after finding it, when
   creating a new issue, it offers a dialog/link that takes me back to the
   beginning
 * The web pages for a team project does not have any easy way to report JIRA
   issues to Atlassian
 * How do I find out the version of JIRA software in use when I report problems?
 * Reporting to you, you _require_ a component, and severity.  Which component
   should I pick?  I don't know how your product is architected, so I guessed
   at one.  And "affected version"?  Heck if I know.  Really, you can't
   provide a link on a cloud JIRA board which fills this in automatically?
 * In a dropdown for picking what Atlassian product to report against, the text
   describing them is cut off.  So when it says "XXX (includi)" I have no idea
   what it is including

Only ... I didn't. Their publicly accessible issue tracker does not let me file issues.

Sunday, September 23, 2018

Removing Joda from Spring Boot

The problem

We recently migrated a medium-sized Java project to Spring Boot 2 from version 1. One of the challenges was migrating to the JDK date-time library from Joda. It turns out that Spring Boot 2 has excellent native support for JDK date-times, as do Jackson (JSON) and Hibernate (database), the default technologies offered by Spring Boot 2 for these features.

The migration itself went smoothly, which is unsurprising given the fantastic work of Stephen Colebourne in designing JDK date-time support based on his authorship of Joda.

So we looked at disabling Joda completely in our Gradle build. The most concise approach we found was:

configurations {
    compile.exclude group: 'joda-time'
}

This removed Joda completely from configurations (classpaths) related to Java. However, this had unintended side effects:

  • During tests, we needed Joda in the runtime classpath for a 3rd-party library, OpenSAML
  • During boot run (running the app), we needed Joda in the classpath for another 3rd-party library, SpringFox

We easily found a workaround for SpringFox, but not for OpenSAML.

(If you're curious, yes, we do intend to migrate from OpenSAML 2 (desupported in 2016) to OpenSAML 3; however, we would like spring-security-saml2-core support first.)

A solution in progress

The exclusion in the compile configuration does exactly what we need: Joda disappears! But what to do about SpringFox and OpenSAML?

For the Spring Boot runtime classpath, there is another concise solution, though finding it was rather troublesome, and it is not well-documented by Pivotal or in Stack Overflow.

First, we set up another classpath of our own making named bootRuntime:

configurations {
    // Other parts of "configurations", including the Joda exclusion from above

    bootRuntime // Synthetic configuration for deps needed *only* to launch app
}

Then we added Joda to that synthetic classpath, relying on the Spring Boot plugin's definition for the version of Joda to use:

dependencies {
    // Other parts of "dependencies"

    bootRuntime 'joda-time:joda-time'
}

Lastly, we taught Spring Boot to include this synthetic classpath when launching our app (this was the trickiest part):

bootJar {
    bootInf {
        from configurations.bootRuntime
        into 'lib'
    }
}

bootRun {
    classpath += configurations.bootRuntime
}

This adds Joda to the runtime classpath for both the single "fat jar" built by Spring (bootJar), and when launching the app on the command line with gradle (bootRun).

Unless you are a heavy Gradle user, from ... into ... syntax may be unfamiliar: this copies the jars in the synthetic configuration into the fat jar at the location Boot expects to find them. The "'lib'" is literally a directory location within the jar. Useful magic, but a bit obtuse. The outcome:

$ jar tf build/libs/the-project.jar | grep joda-time
BOOT-INF/lib/joda-time-2.9.9.jar

As a matter of fact, Joda is the very last file in the boot jar, a suggestion that it was added by our bootInf section after the Boot plugin built the jar.

(Our workaround is intentionally small. If we're unable to make it work, we'll switch to brute-force library exclusions in our dependencies lists. The goal is to prevent accidental import from Joda, for example, of LocalDate.)

Remaining work

For running the boot app, this solution is great: it is small, readable, easy to maintain, and it works. However, for tests which exercise our user authentication with OpenSAML, it fails. Joda is not in the test classpath, and we cannot use or mock OpenSAML methods which use Joda types.

Barring another magical solution like bootRuntime, we'll fall back on manually excluding Joda from each dependency, and adding it back in to the test classpath. A pity given how pithy the solution is with exclusion from the compile configuration.

Sunday, July 29, 2018

Kotlin JPA patterns

The problem

Kotlin does many wonderful things for you. For example, the data classes create helpful constructors, and automatically implement equals and hashCode in a reasonable way.

Similarly, JPA works magic—especially in the context of Spring Data.

So how do I test that my Kotlin entity is correctly annotated for JPA? The simplest thing would be a "round trip" test: create an entity, save it to a database, read it back, and confirm the object has the same values. Let's start with a simple entity, and the simplest possible test:

@Entity
data class Greeting(
        val content: String,
        @Id @GeneratedValue
        val id: Int = 0)

@DataJpaTest
@ExtendWith(SpringExtension::class)
internal class GreetingRepositoryIT(
        @Autowired val repository: GreetingRepository,
        @Autowired val entityManager: EntityManager) {
    @DisplayName("WHEN saving a greeting properly annotated")
    @Nested
    inner class Roundtrip {
        @Test
        fun `THEN it can be read back`() {
            val greeting = Greeting("Hello, world!")

            repository.saveAndFlush(greeting.copy())
            entityManager.clear()

            assertThat(repository.findOne(Example.of(greeting)).get())
                    .isEqualTo(greeting)
        }
    }
}

Some things to note:

  1. To ensure we truly read from the database, and not the entity manager's in-memory cache, flush the object and clear the cache.
  2. As saving also updates the entity's id field, save a copy, so our original is untouched.
  3. Be careful to use saveAndFlush on the Spring repository, rather than entityManager.flush(), which requires a transaction, and would add unneeded complexity to the test.

But this test fails! Why?

The unsaved entity (remember, we made a copy to keep the original pristine) does not have a meaningful value for id, and the entity read back does. Hence, the automatically generated equals method says the two objects differ because of id (the default 0 in the original vs the generated value from the database).

Further, the Spring Data query-by-example (QBE) search for our entity includes id in the search criteria. Even changing equals would not address this.

What to do?

The solution

It turns out we need to address two issues:

  1. The generated equals takes id into account, but we are only interested in the data values, not the database administrivia.
  2. The test lookup in the database includes the SQL id column. Although we could try repository.getOne(saved.id), I'd prefer to keep using QBE, if the code is reasonable.

To address equals, we can rely on an interesting fact about Kotlin data classes: only primary constructor parameters are used, not properties in the class body, when generating equals and hashCode. Hence, I write the entity like this, and equals does not include id, while JPA is still happy as it relies on getter reflection:

@Entity
data class Greeting(
        val content: String) {
    @Id
    @GeneratedValue
    val id = 0
}

To address the test, we can ask QBE to ignore id when fetching our saved entity back from the database:

@DataJpaTest
@ExtendWith(SpringExtension::class)
internal class GreetingRepositoryIT(
        @Autowired val repository: GreetingRepository,
        @Autowired val entityManager: EntityManager) {
    @DisplayName("WHEN saving a greeting properly annotated")
    @Nested
    inner class Roundtrip {
        @Test
        fun `THEN it can be read back`() {
            val greeting = Greeting("Hello, world!")

            repository.saveAndFlush(greeting.copy())
            entityManager.clear()

            val matcher = ExampleMatcher.matching()
                    .withIgnoreNullValues()
                    .withIgnorePaths("id")
            val example = Example.of(greeting, matcher)

            assertThat(repository.findOne(example).get()).isEqualTo(greeting)
        }
    }
}

In a larger database, I'd look into providing an entity.asExample() to avoid duplicating ExampleMatcher in each test.

Java approach

The closest to Kotlin's data classes for JPA entities is Lombok's @Data annotation, together with @EqualsAndHashCode(exclude = "id") and @Builder(toBuilder = true), however the expressiveness is lower, and clutter higher.
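A sketch of what that might look like, assuming Lombok and JPA on the classpath; the extra constructor annotations are there because JPA wants a no-arg constructor:

@Entity
@Data
@Builder(toBuilder = true)
@EqualsAndHashCode(exclude = "id")
@NoArgsConstructor
@AllArgsConstructor
public class Greeting {
    private String content;

    @Id
    @GeneratedValue
    private Integer id;
}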

The test would be largely the same modulo language, replacing greeting.copy() with greeting.toBuilder().build(). Alternatively, rather than a QBE matcher, one could write greeting.toBuilder().id(null).build().

This last fact leads to an alternative with Kotlin: include id in the data class' primary constructor, and in the test compare the QBE result as findOne(example).get().copy(id = null) without a matcher.

Conclusion

What Kotlin JPA patterns have you discovered?

Tuesday, May 01, 2018

JaCoCo, Gradle, and exclusions

The setup

My team is working on a Java server, as part of a larger project, using Gradle to build and JaCoCo to measure testing code coverage. The build fails if coverage drops below fixed limits (branch, instruction, and line)—"verification" in JaCoCo-speak.

We follow the strategy of The Ratchet: as dev pairs push commits into the project, code coverage may not drop without group agreement, and if coverage rises, the verification limits rise to match. This ensures we have ever-rising coverage, and avoids new code which lacks adequate testing.

The problem

At a work project, we're struggling to get JaCoCo to ignore some new, configuration-only Java classes. These classes have no "real" implementation code to test, are used to set up communication with an external resource, yet have a high line count (static configuration via code). So they drag down our code coverage limits, and there is no effective way to unit test them sensibly. (They are best tested as system tests within our CI pipeline using live programs and remote resources.)

JaCoCo has what seems at first blush a sensible way to exclude these configuration classes from testing:

jacocoTestVerificationCoverage {
    violationRules {
        rule {
            excludes ['hm.binkley.labs.saml.SomeConfig']
            limit {
                counter = 'LINE'
                minimum = 0.90
            }
        }
    }
}

Unfortunately, this does nothing. There is no warning or error, and coverage continues to include the whole code base.

A solution

After a lot of experimenting and StackOverflow research, this answer from Juan Vimberg worked exactly as we needed. Following his approach:

final def excludedClasses = ['hm.binkley.labs.saml.SomeConfig']

jacocoTestVerificationCoverage {
    violationRules {
        rule {
            limit {
                counter = 'LINE'
                minimum = 0.90
            }
        }
    }

    afterEvaluate {
        classDirectories = files(classDirectories.files.collect {
            fileTree(dir: it, excludes: excludedClasses.collect {
                it.replace('.', '/') + '.class'
            })
        })
    }
}

The list of excluded classes is extracted so the same trick can be used in the generated reports:

jacocoTestReport {
    executionData test, databaseTest
    reports {
        html.enabled = true
        xml.enabled = true
        csv.enabled = false
    }
    afterEvaluate {
        classDirectories = files(classDirectories.files.collect {
            fileTree(dir: it, excludes: excludedClasses.collect {
                it.replace('.', '/') + '.class'
            })
        })
    }
}

Something to consider: using wildcards (hm.binkley.labs.saml.*) may take additional work.

Why?

Why does this work, and the "obvious" way does not?

JaCoCo has more than one notion of scoping. The clearest one is the counters: branches, classes, instructions, lines, and methods.

Not as well documented is the scope of checks: bundles, classes, methods, packages, and source files. These are not mix-and-match. For example, exclusions apply to classes. Lyudmil Latinov has the best hints I've found on how this works.

Saturday, March 31, 2018

Workaround for jenv on Cygwin

I'd like to use jenv on my Cygwin setup at home. Oracle has moved to a 6-month release pace, and so I find myself dealing with multiple Java major versions. However, my tool of choice, jenv, does not play well with Cygwin.

(Note: There are two jenvs out there. I am talking about jenv.be, not jenv.io. Apologies that neither does HTTPS well.)

As a workaround, I wrote a straightforward shell function to provide the minimum I need: switching between versions in the current shell:

# Until jenv.be supports Cygwin
function set-java {
    local -a java_v
    local jdk v OPTIND
    for jdk in /cygdrive/c/Program\ Files/Java/jdk*
    do
        jdk="${jdk/\/cygdrive\/c\/Program\ Files\/Java\/jdk/}"
        v=${jdk#-}
        v=${v#1.}
        v=${v%%.*}
        java_v[$v]=$jdk
    done

    local verbose=false
    while getopts :hv opt
    do
        case $opt in
        h ) cat <<EOH
Usage: $FUNCNAME [-hv] VERSION

Options:
  -h Print help and exit
  -v Verbose output

Arguments:
  VERSION One of ${!java_v[@]}
EOH
            return 0 ;;
        v ) verbose=true ;;
        * ) echo "Usage: $FUNCNAME [-hv] VERSION" >&2 ; return 2 ;;
        esac
    done
    shift $((OPTIND - 1))

    case $# in
    1 ) ;;
    * ) echo "Usage: $FUNCNAME [-hv] VERSION" >&2 ; return 2 ;;
    esac

    if ! [[ ${java_v[$1]+foo} ]]
    then
        echo "$FUNCNAME: No such Java version: $1.  Try $FUNCNAME -h" >&2
        return 2
    fi

    export JAVA_HOME='C:\Program Files\Java\jdk'${java_v[$1]}
    for v in ${!java_v[@]}
    do
        case $v in
        $1 ) ;;
        * ) export PATH="${PATH//${java_v[$v]}/${java_v[$1]}}" ;;
        esac
    done

    if $verbose
    then
        echo "$FUNCNAME: Updated JAVA_HOME and PATH for JDK to $v at $JAVA_HOME"
    fi
}

Try the -h flag (help).

Thursday, January 11, 2018

Automated acceptance criteria

I have a dream (Story Card)

What is my dream story card? I don't mean: What's the story I'd most like to work on! I mean: What should a virtual story card look like (as opposed to card stock on a wall)? This may be a trivial question. But for me the user experience working with stories is very important: I work with them daily, write them, discuss them, work on them, accept them, etc. I want the feel of the card to be thoughtful, like that fellow to the right.

And more than that. I am lazy, impatient, hubristic. The acceptance criteria, I want them testable, literally testable in that each has a matching test I can execute. Given my laziness, I don't want to switch systems and run tests; I'd like to execute acceptance criteria directly from the story card.

So is there a system like that today? No. There are bits and pieces though.

Not all story card systems are equal

Some story card systems are particularly awkward to read, understand or use. Special demerits for:

  • A hard-coded workflow that the team cannot change to fit how they work: the team is expected to fit the tool
  • Workflow state transition buttons are nice, but not so nice are unconfigurable labels, especially when the button labels are misleading
  • A hard-coded or limited hierarchy of stories, so if a team uses epics or features or themes or whatever to organize stories, and there is more than one level to this, the team is out of luck
  • Lack of quality RESTful support, in particular, no simple identifier for story cards, so linking directly to cards is opaque, useless or completely absent

A scenario

Post-development testing on this team is a fairly ordinary role. The developers say: a new web page is ready. Testers then validate the same page features each time for each new page, simple things:

  • Can an account with security role X log in?
  • Can X submit the form? (Or not submit if forbidden?)
  • What form defaults appear for role X? Do they reflect role X?

(Yes, I know — what about developer testing? Bear with me.)

On it goes, the same work each time. Redundant, repetitive, error-prone, fiddly. And worst of all—boring, BORING! This is traditional, manual "user testing" at its worst.

What's to be done? Can we fix this?

After all, the testing is valuable: nobody wants broken web pages; everybody wants to log in. But the tester is valuable, too, more valuable even: is this the most valuable testing a human could do? Surely people are more clever, more insightful than this. And what about all the other page features not tested because of time spent on the basics?

Well, people are more clever than this.

Clearing the path

What guides testing? If you're using nearly any form of modern user story writing, this includes something like "Acceptance Criteria". These are the gates that permit story development to be called successful: the testers are gatekeepers in the manual testing world. In the manual world these criteria might be congregated into a single "Requirements Document" or similar (think: big, upfront design).

We can do better! Gatekeeper-style testing assumes a linear path from requirements to implementation to testing, just as waterfall considers these activities as distinct phases. But we know agile approaches do better than waterfall in most cases. Why should we build our teams to mirror waterfall? Of course the answer is to structure teams to look agile, just as the team itself practices agile values.

So how do we make Acceptance Criteria more agile?

Enter the Three Amigos

In current agile practice, a story card is not ready to play until approved by the Three Amigos: BA, Dev, QA. Each plays their part, brings their perspective, contributes to meeting the team-agreed "Definition of Ready".

A key component of playable cards is the Acceptance Criteria — answering the question, "What does success look like?" when a story is finished.

The perspectives include:

  • BA: Is the story told right? — What is the way to describe the work?
  • Dev: Is the story the right size? — What is the complexity of the work?
  • QA: Is it the right story to tell? — What is the value of the work?

What are Acceptance Criteria?

But where does this simple testing come from? Any software delivery process beyond "winging it" has some requirements.

Well-written agile stories have Acceptance Criteria. What are these? An Acceptance Criteria (AC) is a statement in a story that a tester (QA) can use to validate that all or a portion of the story is complete. The formulaic phrasing for ACs I like best is:

GIVEN some state of the world
WHEN some change happens
THEN some observable outcome results

Generally for web applications this means when I do something with a web page in the application, then the page changes in some particular way, or submits a request and the response page has some particular quality or property (or, for the negative case, that it does not have that quality or property).

In some cases it is even simpler: just check that a particular web address fully loads, for example, when testing login access to pages.
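In Spring MockMvc terms, such a criterion can be as small as the following sketch; the controller, endpoint, and test names are hypothetical:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;

// DashboardController is a hypothetical controller under test
@WebMvcTest(DashboardController.class)
class DashboardAccessTest {
    @Autowired
    private MockMvc mockMvc;

    @Test
    void shouldLoadDashboardForRoleX() throws Exception {
        // GIVEN an authenticated role-X session (setup elided for the sketch)
        // WHEN requesting the page
        // THEN it fully loads
        mockMvc.perform(get("/dashboard"))
                .andExpect(status().isOk());
    }
}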

Martin Fowler's Test Pyramid

So what's the question?

Wherever possible we want to automate. If something can be done for us by a computing machine, we don't want to spend human time on it. Humans can move on to more interesting, valuable work when existing tasks can be automated.

Consider the Test Pyramid (image right): automating lower-value tests focuses people on higher-value ones. You get more human attention and insight on the kinds of tests which best improve the value of software. You win more.

The Story

This is a sample story card with a simplistic implementation of AACs. Live buttons call back into the story system, to be mapped to calls in the testing system. (Another implementation might have the buttons directly call to the testing system, avoiding an extra call into the story system but showing details in the page source about the testing system.)

Title

Narrative

AS AN AAC author
I WANT a mock executable story
SO THAT others can see the value

Details

No actual criteria were validated in the execution of these tests. This is only a mock.

Acceptance criteria

Summary: 1 missing, 1 untested, 1 running, 1 passed, 1 failed, 1 errored, 1 disabled
GIVEN magical thinking
WHEN in Missingland
THEN there's no test
  No test (yet) - create one!

GIVEN magical thinking
WHEN in Newland
THEN nothing has happened yet
  Test never run - be the first!

GIVEN magical thinking
WHEN in Fastland
THEN tests run quickly
  15% done (3s)

GIVEN magical thinking
WHEN in Happyland
THEN Unicorns

GIVEN magical thinking
WHEN in Sadland
THEN there be Dragons
  Expected: Dragons, got: Puppies

GIVEN magical thinking
WHEN in Crazyland
THEN nothing works right
  Test timed out after 90 seconds

GIVEN magical thinking
WHEN in Slowland
THEN tests are disabled
  @dev1 @qa2: BLOCKED on widget spanner

Acceptance Criteria states

Every AC potentially has a message from the testing system giving more detail on state. These are noted below.

Missing

This AC has no matching test in the testing system. Use the Create button to create a new test. This does not run the test.

The message is boilerplate to remind users to create tests.

Untested

The AC has a matching test in the testing system, but the test has never been run. Use the Test button to run the test.

Typically there is no message for this state.

Running

The matching test for the AC is running in the testing system. Use the Cancel button to stop the test, or wait for it to complete.

The message, if supported by the testing system, should give a notion of progress. See REST and long-running jobs for how to do this.

Passed

The matching test for the AC passed last time it ran. Use the Test button to run the test again.

Typically there is no message for this state.

Failed

The matching test for the AC failed last time it ran. Use the Test button to run the test again.

The message must give a notion of why the test failed.

Errored

The matching test for the AC errored last time it ran. Use the Test button to run the test again.

The message must give a notion of why the test errored.

Disabled

The matching test for the AC is disabled in the testing system. Update the testing system to reenable.

The message, if supported, should give a reason the test is disabled when available.

Potential problems

Nothing is free. Potential problems with AACs include:

Integrations

There are no existing integrations along these lines. You need to build your own against JIRA, Mingle, FitNesse, Cucumber, etc. Whether the story system drives the interaction, or the test system does, may depend on the exact combination. Best would be if both systems can call the other.

Scaling

As more AACs run, the complete suite takes longer. For example, at 1 minute of AAC time per story and 5 stories per iteration, a 12-iteration project adds 60 minutes of test runtime. This is not specific to AACs but a general problem with acceptance tests. It's still much cheaper than the manual steps for each test, but prohibitive for a developer to run the whole suite locally.

Best practice is for the 3 amigos to run only the tests specific to a story before calling that story Ready to Accept.

Update

Until I published this post on Blogger, I really wasn't certain how it would look on that platform. I manually tested the visuals from Chrome with the local post, and it looked good. After seeing it in Blogger, however, the "aside" sections are laid out poorly, overlapping the text. I won't re-lay out the page: mistakes are the best way to improve, and a subtext of this post is Experiment & Learn Rapidly. Public experiments are the most faithful kind: no opportunity to fudge and pretend all was well on the first try.

Tuesday, January 09, 2018

Sproingk lives!

After months of instability, I again have a fully working pipeline and a live web page for Sproingk, my demo project combining the latest public betas in:

If you're interested in any combination of these, please take a gander.

About some of the items

Kotlin (It's not just for Android!) is really where JVM programming is headed. And it's fun.

Springfox gives you a lovely UI for your REST API. (In the demo, try the "greeting controller".)

Boxfuse makes minimal shrink-wrapped Linux images of your software, and handles blue/green AWS deployments.

Wednesday, January 03, 2018

A language stack for 2018

(I talk about myself in the post more than usual. I'm expressing opinions more than usual, rather than observations and advice. Caveat lector. Also, this post is link-rich.)

The stack

After reading Eric S. Raymond (ESR)'s posts on the post-"C" world, I realized that I, too, live in that world. What would my ideal language stack look like for 2018?

For context, here are ESR's posts I have in mind:

  1. The long goodbye to C
  2. The big break in computer languages
  3. Language engineering for great justice
  4. C, Python, Go, and the Generalized Greenspun Law

Read them? Good. So this is the language stack I have in mind:

  • Python — By default
  • Kotlin (JVM) — When you need it
  • Go — When you must

Each of these languages hits a sweet spot, and displaces an earlier language which was itself a sweet spot of its time:

  • Bash and Perl → Python
  • Java → Python and Kotlin
  • "C" and C++ → Kotlin and Go

An interesting general trend here: not just replacing a language with a more modern equivalent, but also moving programming further away from the hardware. As ESR points out, Moore's law and improving language engineering have raised the bar.

(ESR's thinking has evolved over time, a sign of someone who has given deep and sustained thought to the subject.)

About my experience with these languages

Python

I have moderate experience in Python spread out since the mid-90s. At that time, I was undecided between Python, Ruby and Perl. Over my career I worked heavily in Perl (it paid well, then), some in Ruby (mostly at ThoughtWorks), and gravitated strongly to Python, especially after using it at Macquarie commodities trading where it was central to their business.

Kotlin

I've been a Kotlin fan since it was announced. It scratches itches that Java persistently gives, and JetBrains is far more pleasant a "benevolent dictator" than Oracle: JetBrains continually sought input and feedback from the community in designing the language, for example. Java has been my primary language since the late 90s, bread and butter in most projects. If JetBrains keeps Kotlin in its current directions, I expect it to displace Java, and deservedly so.

Go

This is where I am relying on the advice of others more than personal experience. I started programming with "C" and LISP (Emacs), and quickly became an expert (things being relative) in C++. Among other C++ projects, I implemented for INSO (Microsoft Word multilingual spell checker) a then new specification for UNICODE/CJKV support in C++ (wstring and friends). I still love "C" but could do without C++. Go seems to be the right way to head, especially with garbage collection.

With Ken Thompson and Rob Pike behind it, intelligent luminaries like ESR pitching for it, and colleagues at ThoughtWorks excited to start new Go projects, it's high time I make up this gap.

What makes a "modern" language?

I'm looking for specific things in a "modern" language, chiefly:

Good community

Varying and strong:

You can explore the figures at TIOBE and StackOverflow.

Kotlin is the interesting case. Though low in the rankings, because of 100% interoperability running on the JVM, it's easy to call Java from Kotlin and call Kotlin from Java, so the whole Java ecosystem is available to Kotlin natively.

Further, Google fully supports Kotlin on Android as a first-class language, so there is a wealth of interoperability there. (The story for iOS is more nuanced, with Kotlin/Native including that platform as a target but in progress.)

Lastly, Kotlin/Native is bringing similar interoperability between Kotlin and Go.

(This is related to having a rich ecosystem.)

Garbage collection

(A quick primer (2011) on garbage collection.)

Among Go's advantages over "C" and C++ is solid garbage collection out of the box, though the Boehm collector makes a valiant effort. (Garbage collection has been around since 1959, after all.) No modern programmer—short of special cases—should manually manage memory.

Kotlin gets a head start here. It's built on the JVM (when targeting that environment), which has arguably the world's greatest GC (or at least most tested). It definitely gives you a choice of garbage collectors. There's too much to say about GC on the JVM for this post except that it is first-rate.

Python has garbage collection. Though not as strong as the JVM, it continues to improve. It is unusual for GC to become a limiting factor in a Python program; you will know if it does.

(If you are in "C" or C++, and want some of the benefits of GC, do consider the Boehm-Demers -Weiser garbage collector. No, you won't get the fullest benefits of a language built for GC, but you'll get enough to help a lot, even if just for leak detection.)

Static type inference

Kotlin really demonstrates where Java could improve by leaps rather than baby steps: automatic type inference. At least Java is heading in the right direction. Don't tell the computer how to run your program when it can figure this out for itself! Keep your focus on what the program is for.

Interestingly, Kotlin and Go have strong typing out of the box, but not Python. Python has built-in support for type annotations, and an excellent optional checker, mypy; however, this is type declaration, not type inference, so it loses a bit there. And none of the common alternatives (Ruby, Perl, PHP, etc.) have type inference either. I'll need to check again in a few years, and possibly update this choice.

(Paul Chiusana writes on the value of static type checking.)

Rich ecosystem

Of the languages, Kotlin is a standout here. Because it is a JVM language, it is 100% compatible with Java, and can fully use the rich world of Java libraries and frameworks. The main weakness for Kotlin is lack of tooling: because more advanced tools may make assumptions about bytecode, Kotlin's particular choices of emitted bytecode sometimes confuse them.

(JetBrains has surveyed the state of ecosystems for programming languages, related to having a good community.)

Close behind is Python, "batteries included" and all, and it has a better organized and documented standard library than Java. For some problem domains, Python has a richer ecosystem: for example, SciPy and NumPy form the best math environment available in any language. (Some specialty languages like MATLAB deserve mention—an early employer of mine.) I may need to reconsider my ranking Kotlin over Python here.

Go is, frankly, too new to have developed an equivalent ecosystem, and full-blown package management is still a work in progress.

Concision and convenience

A common thread in recent language development is lower ceremony: fewer punctuation marks and less boilerplate; make the machine do more work so you do less. Kotlin provides the most obvious example compared to Java. Go is known for cleanness and brevity. And Python ranks high here as well.

(Donnie Berkholz writes an interesting post on ranking language expressiveness.)

Code samples

The classic "Hello, World!" showing three things:

  • Writing a "main" callable from the command line
  • Using standard output to print to console
  • String formatting to build the output message

This doesn't, of course, give a sense of these languages in their full spectrum, but it does give a first taste.

Java

Java in a file named MyStuff.java:

package my.stuff;

public final class MyStuff {
    public static final String LANGUAGE = "Java";

    public static void main(final String... args) {
        System.out.println(String.format("Hello, World, from %s", LANGUAGE));
    }
}

Kotlin

Kotlin in a file named my-program.kt:

package my.stuff

const val LANGUAGE = "Kotlin"

fun main(args: Array<String>) = println("Hello, World, from $LANGUAGE")

Go

But also compare Go to C++:

package main

import "fmt"

const Language = "Go"

func main() {
    fmt.Println("Hello, World, from", Language)
}

C++

And C++:

#include <iostream>

int
main()
{
  std::cout << "Hello, World!" << std::endl;

  return 0;
}

Python

And for completeness, Python compared to Perl and BASH:

#!/usr/bin/python

language = 'Python'

print('Hello, World, from {}'.format(language))

Perl

Any Perl:

#!/usr/bin/perl

use strict;
use warnings;

my $language = "Perl";

printf("Hello, World, from %s\n", $language);

BASH

Unfairly simple:

#!/bin/bash

language=BASH

echo "Hello World, from $language"

See also

Update

As usual, I never catch as many problems in my writing as I do after reading it posted publicly. Many small edits, and an added, explicit mention of wstring.

Footnotes

  1. A surprising take on Python versus JavaScript from Michael Bolin. And I do not feel Kotlin (JS) is ready yet for front-end work.
  2. In contrast to ESR's thoughtful posts, a nice Steve Yegge rant in favor of Kotlin. (I enjoy both their writing styles.)
  3. YMMV — I use Perl as an example, but it could be Ruby or PHP or similar. And some might prefer Node.js to Python (but don't: see footnote 1). The exact choice is a matter of preference: I prefer Python, and some would keep Ruby (for example).
  4. Mike Vanier wrote a similar list for "scalable computer programming languages" in 2001, at least for the technical elements.
  5. Mike Hearn points out potential pitfalls with Go GC.

Thursday, December 28, 2017

Push early, push often, push on green

(This post follows up on Frequent commits, a post about git command line help for TDD. I assume you are already following good TDD practices. Also, please recall git normally requires pulling before pushing if your repo is behind on commits from the remote.)

Prologue

I'm chatting with a colleague who is new in her role as an Agile Coach (she comes from a PM background). We were talking about ways to organize a team's story board (card wall), and turned to Desk Checks and when they fit into a story's life.

An interesting remark by her: a team had agreed to share work (push) only after the desk check was successful; that is, they did not push code until a story was almost done: work lay fallow on individuals' machines for potentially days at a stretch.

I was surprised. Why would they wait days to push—what did they do about merge conflicts, complex refactorings, integration failures in the pipeline, et al?

Lecture

Entropy

To me this was clearly a smell. Martin Fowler specifically addresses this in Everyone Commits To the Mainline Every Day, and I would go further: Push commits at the earliest responsible moment. This is opposite the advice for refactoring, or especially emergent design, where the "Rule of 3" and last responsible moment cautions waiting for more information before committing to a course of action.

And you can see why early pushes differ from the other two: waiting will not get you more information. On the contrary, waiting will only increase the entropy of the code base! Commits lie fallow in the local repo, increasing the size of potential merge conflicts for others.

                       Benefit from more information?  Principle                    Entropy from waiting
Early push             None available                  Earliest responsible moment  Rises from fallow commits
Refactoring            Get more code examples          Rule of 3                    Falls after refactoring
Architecture decision  Learn more about system         Last responsible moment      Falls if responsible

(The "information" in the case of pushes are the pulled commits themselves.)

I've definitely experienced this firsthand, when I'd eventually discard my local commits after waiting too long, and letting them grow too much in a different direction from how the rest of the team progressed. Waste!

Complexity

Consider this work cycle:

  1. Local commit
  2. Fetch commits from remote
  3. Merge, if needed
  4. Local commit again, if needed
  5. Push commits to remote

I've grouped these to emphasize what is local to you (the first four) and what is global to your team (the last one).

Considered only locally, you minimize entropy with frequent pulls for yourself, and likewise for your teammates individually, so you can catch merge conflicts early and resolve them when they are small. But considered globally, you need frequent pushes so those local pulls have only small changes in them. The longer you wait to push, the more work for those who pull.

Early push

You          Rest of team  Work for others
You commit
You push
             They pull     Less complexity of merge (1 commit)
You commit
You push
             They pull     Less complexity of merge (1 commit)

Each single push can be treated on it's own. There are two opportunities for merge conflict, but each is a small amount of work.

Late push[1]

You          Rest of team  Work for others
You commit
             They pull     No changes to merge
You commit
You push
             They pull     Greater complexity of merge (2 commits)

In each scenario, there are two commits for others to contend with. The larger, combined push has a greater opportunity for merge conflict, and a greater chance for a large amount of work, because of the combined interactions of the two commits.

And as teams work in parallel, there are more opportunities for merge conflicts.

Push early, push often, push on green

From the above discussion, the safest course is to push early rather than wait as commits pile up locally. But when to push—what is the "earliest responsible moment"?

If your codebase is well-tested, an answer presents itself: push when tests are green and the changes alter complexity.

The goal is to avoid complex commit interactions that lead to merge conflicts. Tests are the safety net. Further, if all else fails and a commit is bad, it is easy to throw away the last commit until things are right again: only a small amount of work is lost, not days worth.

Understanding what kind of changes alter complexity takes skill: skills improve with experience and coaching. The cost of early pushes is low, and the occasional penalty of late pushes high, so this would be a good topic for a "team norms" ("dev practices") discussion.

For example, the team might agree that changes to comments are not in themselves worth a push. At the other end, your refactorings which impact more than one source file almost certainly should be pushed early: discover their impact on others before you add more refactorings.

A good work cycle:

  1. Pull
  2. Build, run tests
  3. Edit sources
  4. Build, run tests
  5. Commit
  6. Pull and push

After a preliminary sanity check (#1 and #2), get in the cycle of #3 to #6.

Epilogue

I checked with other teams: it is minority practice to wait until a successful desk check to push changes. That's a relief. Hopefully this practice can be made more rare.

One rational reason—itself a smell—is when tests take too long to run frequently. When I design a pipeline, I recommend breaking out "unit" tests from "integration" tests for this exact reason: even when integration tests run long, the initial CI stage with just unit tests should be fast enough to give quick feedback on frequent pushes, and encourage Push early, Push often, Push on (local) green.

Further reading

Footnotes

[1] The simple statement, "a greater chance for a large amount of work", has rather complex reasoning behind it, beyond the scope of this post.

For example, any particular commit can be viewed as applying an exponent to the overall complexity of a program. A neutral change (say, correcting a typo in a comment) has the exponent 1: it does not change the overall complexity; a positive change (say, removing code duplication) has an exponent between 0 and 1: it lowers the overall complexity; a negative change (say, adding a new dependency) has an exponent greater than 1: it raises the overall complexity.

Consider then that these complexity changes are not simple numbers, but distributions ("odds"), and change with time ("bitrot"), and involve more than the code (people or requirements changes).

[2] In Seth's post, do not confuse "publish once" with "wait to push": it means "don't publish the same commit twice" (which does sometimes happen accidentally, even for experts, from merging or rebasing).

Update

Sure enough, right after posting I read an interesting discussion on the value of the statistical mean (average) relevant to the discussion on two commits taken separately or together.

Essentially, even when the merge conflict work averages out over time for pushing two commits separately versus pushing them together, the outliers for pushing them together are significantly worse than for pushing them separately, because of interactions and complexity.