Thursday, September 14, 2017

Docker Java Example Part 5: Kubernetes

Now that I've got my project packaged up in a docker image, the next step in this POC is to look at platforms for running docker. The only PaaS I'm familiar with right now is Pivotal Cloud Foundry, which we used at my last job to deploy Spring Boot executable jars. PCF was working on a docker story; I'm not sure how far that got. It looks like they are pretty bought into Kubernetes these days. In fact, it seems like the whole cloud world is moving in that direction, with the likes of Pivotal, VMware, Amazon, Microsoft, Dell, Alibaba, and Mesosphere joining the Cloud Native Computing Foundation. So, I set out to learn more about Kubernetes.

I started out by following the excellent Hello Minikube tutorial provided in the kubernetes docs. It steps you through installing local kubernetes (a.k.a. minikube), creating a docker image, deploying it to kubernetes, making it accessible outside the cluster, and live updating the running image. I followed the tutorial as written first, then applied it to my demo java project. Of course, I ran into some issues.

Making Minikube Aware of your Docker Image

Minikube runs its own Docker daemon. As outlined here, you have a few options for getting your docker images into minikube. Part of the hello minikube tutorial is to point your local docker client at the minikube docker daemon, and build your image there:
$ eval $(minikube docker-env)
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.64.2:2376
DOCKER_API_VERSION=1.23
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/Users/ryanmckay/.minikube/certs
That works fine in the tutorial, because it uses the docker cli tool, which respects those env variables. Unfortunately, the bmuschko gradle docker plugin does not. But it can be configured relatively easily. java-docker-example v0.5.1 adds:
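Something along these lines (a sketch from memory; the exact code is in the v0.5.1 tag). It points the plugin at whatever daemon the standard docker env variables describe, falling back to the local socket:

def dockerHost = System.getenv('DOCKER_HOST')
def dockerCertPath = System.getenv('DOCKER_CERT_PATH')

docker {
    // the underlying docker-java client wants an https:// url when TLS verify is on
    url = dockerHost ? dockerHost.replaceFirst('^tcp:', 'https:') : 'unix:///var/run/docker.sock'
    certPath = dockerCertPath ? new File(dockerCertPath) : null
}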
So now you can build the docker image into kubernetes:
$ ./gradlew buildImage
And you can stop pointing at kubernetes' docker instance with:
$ eval $(minikube docker-env -u)
I'm not sure this is the long term strategy for local dev, but at least it makes gradle and docker cli work the same way, which seems appropriate.

Kubernetes Concepts

It's worth looking over the kubernetes concepts docs to understand the domain language.  A deployment is a declaration of how you want your container deployed.  It specifies things like which image to deploy, how many instances it should have, ports to expose, etc.  A deployment is mutable: the configuration of a live deployment can be modified to, e.g., target a new docker image or change the number of replicas.

A deployment manages one or more replica sets.  Each replica set corresponds to a distinct configuration of the deployment.  So if the docker image config is changed on the deployment, a new replica set representing the new config is created.  The deployment remembers the mapping from configuration to replica set, so if it sees the same configuration again, it will reuse an existing replica set. Replica sets managed by deployments should not be modified directly, even though the api allows it.

A replica set manages one or more pods, depending on the number of desired replicas.  In most cases, a pod runs a single container, though it can be configured to run multiple containers that need to be colocated on the same cluster node.
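To make the relationships concrete, here is a minimal deployment manifest of the kind this section describes (a sketch against the apps/v1beta1 API current at the time of writing; the next section creates an equivalent deployment imperatively instead):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: java-docker-example
spec:
  replicas: 1                        # the replica set will keep this many pods running
  template:                          # the pod template
    metadata:
      labels:
        run: java-docker-example
    spec:
      containers:
      - name: java-docker-example
        image: ryanmckay/java-docker-example:0.0.1-SNAPSHOT
        ports:
        - containerPort: 8080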

Create a Deployment

A complete deployment spec is a lengthy document, but kubernetes provides a quick and easy way to create one with minimal input:
$ kubectl run java-docker-example --image=ryanmckay/java-docker-example:0.0.1-SNAPSHOT --port=8080
deployment "java-docker-example” created
Then you can look at the deployment on the cli with:
$ kubectl get deployment java-docker-example
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
java-docker-example   1         1         1            1           1d

$ kubectl describe deployment java-docker-example
Name:            java-docker-example
Namespace:        default
CreationTimestamp:    Thu, 07 Sep 2017 00:00:37 -0500
Labels:            run=java-docker-example
Annotations:        deployment.kubernetes.io/revision=1
Selector:        run=java-docker-example
Replicas:        1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:        RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:    1 max unavailable, 1 max surge
Pod Template:
  Labels:    run=java-docker-example
  Containers:
   java-docker-example:
    Image:        ryanmckay/java-docker-example:0.0.1-SNAPSHOT
    Port:        8080/TCP
    Environment:    <none>
    Mounts:        <none>
  Volumes:        <none>
Conditions:
  Type        Status    Reason
  ----        ------    ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets:    <none>
NewReplicaSet:    java-docker-example-3948992014 (1/1 replicas created)
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----            -------------    --------    ------            -------
  1d        1d        1    deployment-controller            Normal        ScalingReplicaSet    Scaled up replica set java-docker-example-3948992014 to 1
Notice the "Pod Template" section that describes the type of pod that will be managed by this deployment (through a replica set). At any given time, a deployment may be managing multiple active replica sets, which may in turn be managing multiple pods. In this example, there is only one replica set, and it is only managing one pod. But if you configured higher replication and rolling update, then during a change to the deployment spec, it will be managing spinning down the old replica set while spinning up the new replica set, at a minimum. If the spec changes faster than kubernetes can apply it, it could be more than that.

The ownership relationship can be traversed at the command line. You can see the new and old replica set in the deployment description above. Replica set details can be obtained in similar fashion:
$ kubectl describe replicaset java-docker-example-3948992014
Name:  java-docker-example-3948992014
Namespace: default
Selector: pod-template-hash=3948992014,run=java-docker-example
Labels:  pod-template-hash=3948992014
  run=java-docker-example
Annotations: deployment.kubernetes.io/desired-replicas=1
  deployment.kubernetes.io/max-replicas=2
  deployment.kubernetes.io/revision=1
Controlled By: Deployment/java-docker-example
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels: pod-template-hash=3948992014
  run=java-docker-example
  Containers:
   java-docker-example:
    Image:  ryanmckay/java-docker-example:0.0.1-SNAPSHOT
    Port:  8080/TCP
    Environment: <none>
    Mounts:  <none>
  Volumes:  <none>
Events:
  FirstSeen LastSeen Count From   SubObjectPath Type  Reason   Message
  --------- -------- ----- ----   ------------- -------- ------   -------
  24m  24m  1 replicaset-controller   Normal  SuccessfulCreate Created pod: java-docker-example-3948992014-h1c0l

You can see the created pods in the replica set's Events log. It is worth noting that the "kubectl describe" command output is intended for human consumption. To get details in a machine readable format, use "kubectl get -o json".
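For example, you can pull a single field out of the deployment with a jsonpath expression:

$ kubectl get deployment java-docker-example -o jsonpath='{.spec.template.spec.containers[0].image}'
ryanmckay/java-docker-example:0.0.1-SNAPSHOT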

Minikube Dashboard

It's good to know the CLI, but there is also the very nice
$ minikube dashboard

That will launch your browser pointed at the minikube dashboard app. The information we saw at the CLI is available there and hyperlinked.


Internal Access to Container

At this point, the deployed container is running, and you can see logs with:
$ kubectl logs deployment/java-docker-example
$ kubectl logs java-docker-example-3948992014-h1c0l
You can access it from within the cluster. Note the pod's IP address (172.17.0.4 in this case); you can get it from kubectl describe pod or the dashboard. The following will start another pod running busybox.
$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
/ # telnet 172.17.0.4 8080
GET /greeting
{"id":4,"content":"Hello, World!"}
There are some issues here. We had to know the IP address of the pod. Also, if we were running more replicas, we wouldn't want to be reaching out to one specific instance.  The way to expose pods in kubernetes is through a service. First, note the busybox pod's environment:

/ # env | sort
HOME=/root
HOSTNAME=busybox
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
TERM=xterm
Now we launch a service for our deployment:
$ kubectl expose deployment java-docker-example
service "java-docker-example" exposed

$ kubectl describe service java-docker-example
Name:   java-docker-example
Namespace:  default
Labels:   run=java-docker-example
Annotations:  <none>
Selector:  run=java-docker-example
Type:   ClusterIP
IP:   10.0.0.32
Port:   <unset> 8080/TCP
Endpoints:  172.17.0.4:8080
Session Affinity: None
Events:   <none>
Now if we restart our busybox pod, we will have some new env variables related to the new service.
/ # env | sort
HOME=/root
HOSTNAME=busybox
JAVA_DOCKER_EXAMPLE_PORT=tcp://10.0.0.32:8080
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP=tcp://10.0.0.32:8080
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP_ADDR=10.0.0.32
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP_PORT=8080
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP_PROTO=tcp
JAVA_DOCKER_EXAMPLE_SERVICE_HOST=10.0.0.32
JAVA_DOCKER_EXAMPLE_SERVICE_PORT=8080
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
TERM=xterm

/ # telnet $JAVA_DOCKER_EXAMPLE_SERVICE_HOST $JAVA_DOCKER_EXAMPLE_SERVICE_PORT
GET /greeting
{"id":5,"content":"Hello, World!"}
There are a couple of points to make here.  First, if you plan for pods within the cluster to use a service, and you want to use the env variables for discovery, the service needs to be created before those consuming pods. Second, there are several different service types.  Since we didn't specify a type, we got the default, ClusterIP. This exposes the service only within the cluster.
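As an aside, kube-dns (enabled by default in minikube) also gives services DNS names, which sidesteps the creation-order constraint; from the busybox pod, a request along these lines should work no matter when the service was created:

/ # wget -qO- http://java-docker-example.default.svc.cluster.local:8080/greeting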

External Access to Container

At some point you're going to want to expose your containers outside the cluster.  The service types build on each other.

NodePort Service Type

NodePort exposes the service externally on each node's IP at a static port.  This supports managing your own load balancer in front of the nodes. Notice that it also set up a ClusterIP at 10.0.0.131.

$ kubectl expose deployment java-docker-example --type=NodePort
service "java-docker-example" exposed

$ kubectl describe service java-docker-example
Name:   java-docker-example
Namespace:  default
Labels:   run=java-docker-example
Annotations:  <none>
Selector:  run=java-docker-example
Type:   NodePort
IP:   10.0.0.131
Port:   <unset> 8080/TCP
NodePort:  <unset> 32478/TCP
Endpoints:  172.17.0.4:8080
Session Affinity: None
Events:   <none>

$ kubectl get node minikube -o jsonpath='{.status.addresses[].address}'
192.168.99.100

$ curl 192.168.99.100:32478/greeting
{"id":6,"content":"Hello, World!"}

LoadBalancer Service Type

This type will configure a cloud-based load balancer for you.  I need to learn more about this, as I did all these exercises on minikube only. Even on minikube though, LoadBalancer type makes your life easier.
$ kubectl expose deployment java-docker-example --type=LoadBalancer
service "java-docker-example" exposed

$ kubectl describe service java-docker-example
Name:   java-docker-example
Namespace:  default
Labels:   run=java-docker-example
Annotations:  <none>
Selector:  run=java-docker-example
Type:   LoadBalancer
IP:   10.0.0.193
Port:   <unset> 8080/TCP
NodePort:  <unset> 32535/TCP
Endpoints:  172.17.0.4:8080
Session Affinity: None
Events:   <none>

$ minikube service java-docker-example
Opening kubernetes service default/java-docker-example in default browser...
This saves you from having to track down and piece together the node ip and port.
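If you want the URL itself, e.g. to curl it, minikube will print it instead of opening a browser:

$ minikube service java-docker-example --url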

Friday, September 1, 2017

Docker Java Example Part 4: Bmuschko and Nebula Gradle Docker Plugins

Converting from the transmode gradle plugin to the bmuschko remote api gradle plugin was pretty straightforward. Other than importing and applying the plugin, the code to get local docker image creation working is as follows:
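(Sketched here from the plugin's documented task types; the exact build.gradle is at the v0.4.1 tag below, and property names may differ slightly by plugin version.)

plugins {
    id 'com.bmuschko.docker-remote-api' version '3.1.0'  // version approximate for the time
}

import com.bmuschko.gradle.docker.tasks.image.Dockerfile
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage

// generate the Dockerfile into the build context directory
task createDockerfile(type: Dockerfile) {
    destFile = project.file('build/docker/Dockerfile')
    from 'openjdk:8-jdk-alpine'
    volume '/tmp'
    addFile jar.archiveName, 'app.jar'
    environmentVariable 'JAVA_OPTS', '""'
    entryPoint 'sh', '-c', 'java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar'
}

// the jar has to be in the build context for ADD to find it
task copyJar(type: Copy, dependsOn: build) {
    from jar.archivePath
    into 'build/docker'
}

task buildImage(type: DockerBuildImage, dependsOn: [createDockerfile, copyJar]) {
    inputDir = project.file('build/docker')
    tags = ["ryanmckay/${jar.baseName}:${project.version}",
            "ryanmckay/${jar.baseName}:latest"]
}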

Note that bmuschko does support multiple image tags, and I took advantage of that to get the versioned tag as well as the "latest" tag.
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
ryanmckay/java-docker-example   0.0.1-SNAPSHOT      7fd01d5b247f        6 seconds ago       115MB
ryanmckay/java-docker-example   latest              7fd01d5b247f        6 seconds ago       115MB
I tagged the code repo at v0.4.1 at this point.

Java Application plugin

In addition to the low-level remote api plugin, bmuschko offers an opinionated docker-java-application plugin based on the application gradle plugin. Using the opinionated plugin cuts down dramatically on the boilerplate in the build.gradle:
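(Again a sketch; the real file is in the v0.4.2 branch mentioned below.)

plugins {
    id 'com.bmuschko.docker-java-application' version '3.1.0'  // version approximate for the time
}

docker {
    javaApplication {
        baseImage = 'openjdk:8-jdk-alpine'
        port = 8080
    }
}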


Unfortunately, this task only supports one tag. By default, you get the versioned one.
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
ryanmckay/java-docker-example   0.0.1-snapshot      415a9e4b201d        3 seconds ago       115MB
The generated Dockerfile looks roughly like this (reconstructed here from the plugin's conventions):
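FROM openjdk:8-jdk-alpine
ADD java-docker-example-0.0.1-SNAPSHOT.tar /
ENTRYPOINT ["/java-docker-example-0.0.1-SNAPSHOT/bin/java-docker-example"]
EXPOSE 8080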

As an interesting side note, the ADD Dockerfile directive has special behavior when the file being added is a tar file. In that case, it unpacks it to the destination.

The application gradle plugin is a more generic method of packaging up a java application than that offered by the spring boot plugin. It creates a tar file containing the application jar and all the dependency jars. It also contains a shell script for launching the application, which has OS detection and some OS-specific config.
$ tar tf build/distributions/java-docker-example-0.0.1-SNAPSHOT.tar 
java-docker-example-0.0.1-SNAPSHOT/
java-docker-example-0.0.1-SNAPSHOT/lib/
java-docker-example-0.0.1-SNAPSHOT/lib/java-docker-example-0.0.1-SNAPSHOT.jar
java-docker-example-0.0.1-SNAPSHOT/lib/spring-boot-starter-1.5.4.RELEASE.jar
java-docker-example-0.0.1-SNAPSHOT/lib/spring-boot-starter-web-1.5.4.RELEASE.jar
...
java-docker-example-0.0.1-SNAPSHOT/bin/
java-docker-example-0.0.1-SNAPSHOT/bin/java-docker-example
java-docker-example-0.0.1-SNAPSHOT/bin/java-docker-example.bat

I started using gradle about the same time I started using spring boot (which has its own gradle plugin with executable jar packaging), so I wasn't familiar with the application plugin. It makes sense that bmuschko would base the opinionated plugin on that, so it can support all types of java applications, not just spring boot.  However, since I plan to use spring boot exclusively for the foreseeable future, and can completely specify the execution environment in Docker (so I don't need the OS-related functionality the application plugin provides), I want to stick with Spring Boot application packaging and running.
I left the modifications in a branch, tagged as v0.4.2.

Nebula docker gradle plugin

Netflix publishes a set of plugins for gradle called Nebula. The nebula-docker-plugin is another opinionated plugin built on top of the bmuschko and application plugins.  It doesn't seem to add a lot beyond the bmuschko application plugin, other than the concept of separate test and production repositories for publishing docker images.  I'm going to look into docker deployment models next, so it might come into play there.

Tuesday, August 29, 2017

Docker Java Example Part 3: Transmode Gradle plugin

At last we get to some Docker in this Docker example.

Gradle Docker Plugin

There are a few prominent docker plugins for gradle: transmode, bmuschko, and Netflix nebula. First, I used transmode, as recommended in the spring boot guide Spring Boot with Docker. After adding the Dockerfile:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD target/java-docker-example-0.0.1-SNAPSHOT.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

, and updating build.gradle as described in the guide, I was able to build a docker image for my application:

$ gw clean build buildDocker --info

Setting up staging directory.
Creating Dockerfile from file /Users/ryanmckay/projects/java-docker-example/java-docker-example/src/main/docker/Dockerfile.
Determining image tag: ryanmckay/java-docker-example:0.0.1-SNAPSHOT
Using the native docker binary.
Sending build context to Docker daemon  14.43MB
Step 1/5 : FROM openjdk:8-jdk-alpine
---> 478bf389b75b
Step 2/5 : VOLUME /tmp
---> Using cache
---> 136f2d4e58dc
Step 3/5 : ADD target/java-docker-example-0.0.1-SNAPSHOT.jar app.jar
---> b3b47b89bbf1
Removing intermediate container 92f637bc67e0
Step 4/5 : ENV JAVA_OPTS ""
---> Running in e90c9a3557eb
---> 1d3f6526e8e5
Removing intermediate container e90c9a3557eb
Step 5/5 : ENTRYPOINT sh -c java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar
---> Running in 2fbfb52f836d
---> f001bdddc80b
Removing intermediate container 2fbfb52f836d
Successfully built f001bdddc80b
Successfully tagged ryanmckay/java-docker-example:0.0.1-SNAPSHOT

$ docker images
REPOSITORY                       TAG               IMAGE ID        CREATED          SIZE
ryanmckay/java-docker-example    0.0.1-SNAPSHOT    f001bdddc80b    3 minutes ago    115MB

$ docker run -p 8080:8080 -t ryanmckay/java-docker-example:0.0.1-SNAPSHOT 

Automatically Tracking Application Version

Note that the Dockerfile at this point has the application version hard-coded in it. This duplication must not stand. The transmode gradle plugin also supports a DSL for specifying the Dockerfile in build.gradle. Then, as part of the build process, it produces the actual Dockerfile.

I set about moving the Dockerfile into the DSL line by line. With one exception, it went smoothly. You can see the result in v0.3 of the app. The relevant portion of build.gradle is listed here. You can see it's pretty much a line-for-line translation of the Dockerfile. And since we have access to the jar filename in the build script, nothing needs to be hard-coded for docker.

// for docker
group = 'ryanmckay'

docker {
 baseImage 'openjdk:8-jdk-alpine'
}

task buildDocker(type: Docker, dependsOn: build) {
 applicationName = jar.baseName
 volume('/tmp')
 addFile(jar.archivePath, 'app.jar')
 setEnvironment('JAVA_OPTS', '""')
 entryPoint([ 'sh', '-c', 'java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar' ])
}

ENV Is Used at Build Time and at Run Time

One little gotcha in the previous section led to an interesting learning. My initial attempt to set the JAVA_OPTS env variable looked like this:
setEnvironment('JAVA_OPTS', '')
but that produced an illegal line in the Dockerfile:
ENV JAVA_OPTS
That led me to read about the ENV directive in the Dockerfile reference docs.  I was confused about whether ENV directives are used at build time, run time, or both.  Turns out, the answer is both, as I was able to prove to myself with the following. The THE_FILE env variable is used at build time to decide which file to add to the image, and at run time as an environment variable, which can be overridden at the command line.

$ cat somefile
somefile contents

$ cat Dockerfile
FROM alpine:latest
ENV THE_FILE="somefile"
ADD $THE_FILE containerizedfile
ENTRYPOINT ["sh", "-c", "cat containerizedfile && echo '-----' && env | sort"]

$ docker build -t envtest .
Sending build context to Docker daemon  3.072kB
Step 1/4 : FROM alpine:latest
 ---> 7328f6f8b418
Step 2/4 : ENV THE_FILE "somefile"
 ---> Using cache
 ---> 148a4236ce19
Step 3/4 : ADD $THE_FILE containerizedfile
 ---> Using cache
 ---> d44f9e242685
Step 4/4 : ENTRYPOINT sh -c cat containerizedfile && echo '-----' && env | sort
 ---> Running in 70de8ceac5ef
 ---> 5d875712904a
Removing intermediate container 70de8ceac5ef
Successfully built 5d875712904a
Successfully tagged envtest:latest

$ docker run envtest
somefile contents
-----
HOME=/root
HOSTNAME=36b3233697df
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
THE_FILE=somefile
no_proxy=*.local, 169.254/16

$ docker run -e THE_FILE=blah envtest
somefile contents
-----
HOME=/root
HOSTNAME=6a0f8c183a18
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
THE_FILE=blah
no_proxy=*.local, 169.254/16

Version Tag and Latest Tag

So that all worked fine for creating and publishing a versioned docker image locally.  But I really want that image tagged with the semantic version and also the latest tag.  The transmode plugin currently does not support multiple tags for the produced docker image.  I'm not the only one who wants this feature.  I took a look at the source code, and it wouldn't be a minor change.  At this point, I'm only publishing locally, so given the choice between version tag and latest tag, I'm going to go for latest for now.  This is a simple matter of adding tagVersion = 'latest' to the buildDocker task.
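In context, the buildDocker task from the previous section just gains the one line:

task buildDocker(type: Docker, dependsOn: build) {
    applicationName = jar.baseName
    tagVersion = 'latest'
    // volume, addFile, entryPoint as before
}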

I tagged the code repo at v0.3.2 at this point.

I'm going to move on to evaluating the bmuschko and Netflix Nebula Docker Gradle plugins next.

Wednesday, August 23, 2017

Docker Java Example Part 2: Spring Web MVC Testing

The next step was to add some tests. The tests that came with the demo controller used a Spring feature I was not familiar with, MockMvc. The Spring Guide "Testing the Web Layer" provides a good discussion of various levels of testing, focusing on how much of the Spring context to load. There are 3 main levels: 1) start the full Tomcat server with full Spring context, 2) full Spring context without server, and 3) narrower MVC-focused context without server. I wanted to compare all three, plus vary the testing and assertion frameworks. Specifically, I wanted to add Spock with groovy power assert.  The aspects I wanted to compare were: test speed, readability of test code, and readability of test output.  I intentionally made one of the tests fail in each approach to compare output.


Spock with Full Tomcat Server

This is the approach I am most familiar with.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerSpec.groovy

Timing

I ran and timed the test in isolation with
$ ./gradlew test --tests '*GreetingControllerSpec' --profile

Total 'test' task time (reported by gradle profile output): 13.734s
Total test run time (reported by junit test output): 12.690s
Time to start GreetingControllerSpec (load full context and start tomcat): 12.157s
So, not fast. Maybe one of the other approaches can do better.

Test Code Readability

def "no Param greeting should return default message"() {

    when:
    ResponseEntity<Greeting> responseGreeting = restTemplate
                .getForEntity("http://localhost:" + port + "/greeting", Greeting.class)

    then:
    responseGreeting.statusCode == HttpStatus.OK
    responseGreeting.body.content == "blah"
}
I really like Spock. I like the plain English test names. I like the separate sections for given, when, then, etc. I think it reads well and makes it obvious what is under test.

Test Output Readability

When a test fails, you want to see why, right?  In this aspect, groovy power assertions are simply unparalleled.
Condition not satisfied:

responseGreeting.body.content == "blah"
|                |    |       |
|                |    |       false
|                |    |       12 differences (7% similarity)
|                |    |       (He)l(lo, World!)
|                |    |       (b-)l(ah--------)
|                |    Hello, World!
|                Greeting(id=1, content=Hello, World!)
<200 OK,Greeting(id=1, content=Hello, World!),{Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 22 Aug 2017 22:06:33 GMT]}>

 at net.ryanmckay.demo.GreetingControllerSpec.no Param greeting should return default message(GreetingControllerSpec.groovy:27)
Note that the nice output for responseGreeting itself comes from ResponseEntity.toString(), and from Greeting.toString(), which is provided by Lombok.

Spock with MockMvc

By adding @AutoConfigureMockMvc to your test class, you can inject a MockMvc instance, which facilitates making calls directly to Spring's HTTP request handling layer.  This allows you to skip starting up a Tomcat server, so it should save some time and/or memory.  On the other hand, you are testing less of the round trip, so the time savings would need to be significant to justify this approach.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerMockMvcSpec.groovy

Timing

This approach was about 500ms faster than with Tomcat.  Not significant enough to justify for me, considering the overall time scale.

Total 'test' task time (reported by gradle profile output): 13.263s
Total test run time (reported by junit test output): 12.281s
Time to start GreetingControllerMockMvcSpec (load full context, no tomcat): 11.804s

Test Code Readability

def "no Param greeting should return default message"() {

    when:
    def resultActions = mockMvc.perform(get("/greeting")).andDo(print())

    then:
    resultActions
            .andExpect(status().isOk())
            .andExpect(jsonPath('$.content').value("blah"))
}
This reads reasonably well.  Capturing the resultActions in the when block to use later in the then block is a little awkward, but not too bad.  Being able to express arbitrary JSON path expectations is convenient.  I didn't see an obvious way to get a ResponseEntity as was done in the full Tomcat example.

Test Output Readability

Condition failed with Exception:

resultActions .andExpect(status().isOk()) .andExpect(jsonPath('$.content').value("blah"))
|              |         |        |        |         |                     |
|              |         |        |        |         |                     org.springframework.test.web.servlet.result.JsonPathResultMatchers$2@1f977413
|              |         |        |        |         org.springframework.test.web.servlet.result.JsonPathResultMatchers@6cd50e89
|              |         |        |        java.lang.AssertionError: JSON path "$.content" expected:<blah> but was:<Hello, World!>
|              |         |        org.springframework.test.web.servlet.result.StatusResultMatchers$10@660dd332
|              |         org.springframework.test.web.servlet.result.StatusResultMatchers@251379e8
|              org.springframework.test.web.servlet.MockMvc$1@68837646
org.springframework.test.web.servlet.MockMvc$1@68837646

 at net.ryanmckay.demo.GreetingControllerMockMvcSpec.no Param greeting should return default message(GreetingControllerMockMvcSpec.groovy:29)
Caused by: java.lang.AssertionError: JSON path "$.content" expected:<blah> but was:<Hello, World!>

This test output does not read well at all. Spock and the Spring MockMvc library are both tripping over each other trying to provide verbose output.  I think you need to choose either Spock or MockMvc, but not both.

JUnit with WebMvcTest and MockMvc

This configuration is at the far other end of the spectrum from full-service Spock.  With @WebMvcTest, not only does it not start a Tomcat server, it doesn't even load a full context.  In the current state of the project this doesn't make much of a difference, because the GreetingController has no injected dependencies.  If it did, I would have to mock those out.  Again, because of the differences from "real" configuration, time savings would need to be significant.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerTests.java

Timing

This approach was also about 500ms faster overall than full context with Tomcat.

Total 'test' task time (reported by gradle profile output): 13.275s
Total test run time (reported by junit test output): 0.269s
Time to start GreetingControllerTests (load narrow context, no tomcat): 11.88s

Test Code Readability

@Test
public void noParamGreetingShouldReturnDefaultMessage() throws Exception {

    this.mockMvc.perform(get("/greeting")).andDo(print()).andExpect(status().isOk())
            .andExpect(jsonPath("$.content").value("blah"));
}

This is the least readable for me.  Again, I like separating the call under test from the assertions.

Test Output Readability

The failure message for MockMvc-based assertion failures isn't as informative as Spock in this case.
java.lang.AssertionError: JSON path "$.content" expected:<blah> but was:<Hello, World!>

Because the test called .andDo(print()), some additional information is available in the standard out of the test, including the full response status code and body.

Conclusion

I'm as convinced as ever that Spock is the premier Java testing framework.  I'm reserving judgment on the Spring annotations that let you avoid starting a Tomcat server or loading the full context.  If the project gets more complicated, those could potentially provide a nice speedup.

I tagged the code repo at v0.2 at this point.

Sunday, July 23, 2017

Docker Java Example Part 1: Initializing a new Spring Boot Project

I've been wanting to learn more about Docker for a while.  I'm almost done with the udemy course Docker for Java Developers.  It's a good course, and the concepts are straightforward.  To help commit it to memory, I wanted to do my own project to apply what I'm learning.
https://github.com/ryanmckaytx/java-docker-example
In this part, I'm just going to initialize a new project.  Part 2 covers Spring Web MVC testing.

Set up your dev machine

Every once in a while, like when you switch jobs and get a new laptop, you need to set up a machine for java development.  

For this, I like to use sdkman.  It helps install the tools you need to do java development.  It also helps switch between multiple versions of those tools.  I've installed java, groovy, gradle, and maven.
$ sdk c

Using:

gradle: 4.0
groovy: 2.4.11
java: 8u131-zulu
maven: 3.5.0
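Installing or switching versions is a one-liner each, for example:

$ sdk list gradle
$ sdk install gradle 4.0
$ sdk default gradle 4.0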

Create a new project

There are a few good ways to do this.  The way I typically do it is to copy another project.  Within an organization, or at least within a team, there is typically some amount of infrastructure and institutional knowledge built into existing projects that you want in a new project.  But for this project, I wanted to practice starting completely from scratch.  There are a couple of good options.

Gradle init

I like gradle as a build tool, and gradle has a built-in project initializer.  It supports a few project archetypes, including pom (converting a maven project to gradle), java library, and java application.  It even explicitly supports the Spock testing framework.

$ gradle init --type java-application --test-framework spock

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed
$ tree
.
|____build.gradle
|____gradle
| |____wrapper
| | |____gradle-wrapper.jar
| | |____gradle-wrapper.properties
|____gradlew
|____gradlew.bat
|____settings.gradle
|____src
| |____main
| | |____java
| | | |____App.java
| |____test
| | |____groovy
| | | |____AppTest.groovy

You can see it also generates a demo App and AppTest.

Spring Boot Initializr

For Spring Boot apps, Spring provides the Spring Boot Initializr.  This lets you choose from a curated  (but extensive) set of options and dependencies, and then generates a project in a zip file for download.  Similarly to gradle init, it includes a default basic app and test.
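If you'd rather skip the web UI, the Initializr also speaks plain HTTP; a curl invocation along these lines produces an equivalent zip:

$ curl https://start.spring.io/starter.zip \
    -d type=gradle-project \
    -d groupId=net.ryanmckay -d artifactId=demo \
    -d dependencies=web \
    -o demo.zip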



$ unzip demo.zip
Archive:  demo.zip
   creating: demo/
  inflating: demo/gradlew
   creating: demo/gradle/
   creating: demo/gradle/wrapper/
   creating: demo/src/
   creating: demo/src/main/
   creating: demo/src/main/java/
   creating: demo/src/main/java/net/
   creating: demo/src/main/java/net/ryanmckay/
   creating: demo/src/main/java/net/ryanmckay/demo/
   creating: demo/src/main/resources/
   creating: demo/src/main/resources/static/
   creating: demo/src/main/resources/templates/
   creating: demo/src/test/
   creating: demo/src/test/java/
   creating: demo/src/test/java/net/
   creating: demo/src/test/java/net/ryanmckay/
   creating: demo/src/test/java/net/ryanmckay/demo/
  inflating: demo/.gitignore
  inflating: demo/build.gradle
  inflating: demo/gradle/wrapper/gradle-wrapper.jar
  inflating: demo/gradle/wrapper/gradle-wrapper.properties
  inflating: demo/gradlew.bat
  inflating: demo/src/main/java/net/ryanmckay/demo/DemoApplication.java
  inflating: demo/src/main/resources/application.properties
  inflating: demo/src/test/java/net/ryanmckay/demo/DemoApplicationTests.java

Pretty much the only thing I don't like here is that the test isn't Spock, and there doesn't seem to be a way to choose it.  Not a big deal; it's easy to change afterward.  I went with the Initializr for this project.

JHipster

JHipster is an opinionated full-stack project generator for Spring Boot + Angular apps.  It has a lot of features that I want to explore later, so for now I stuck with Spring Boot Initializr.

Add a Rest Controller

I could have gone straight to CI at this point, but I wanted to do a little extra work that I wish the Initializr could have done for me.  I copied in a basic hello world spring rest controller (and its tests) from the Spring RESTful Web Service Guide.  The repo at this point is at https://github.com/ryanmckaytx/java-docker-example/tree/v0.1

Saturday, June 24, 2017

Interview Prep/Tips


I recently interviewed for a new job, which led me to review some notes I made for myself after some interviews a couple years ago.  I thought I would share them in case someone else might find something useful here.  The point of preparing is not to pretend to be anything you aren't or to know anything you don't, the point is just to have everything you do know on the tip of your tongue.  You only have probably an hour per panel, so you need to be on point.

Make a list of professional things that are important to you.  Out of those, pick the most important and write a few sentences.  Those are kind of your mission statement.  I put those sentences right at the top of my resume.  Here is my list:
  • Working with smart, motivated people
  • Agile
  • Mentoring
  • Being Mentored
  • DevOps
  • Physical fitness
  • Feeling like I'm contributing to company's success
  • Company mission
  • Competence of other departments
  • Trust in leadership
  • Trust from leadership
  • Advanced software architecture
  • Advanced technology
  • Work/Life balance
Make a list of the major lessons learned during your time at your current job.  These can include accomplishments, things you wish you had done better, or just experience gained.

Make a list of what areas you want to learn more/grow more in.  These should be things that really excite you.  It can be stuff you already do in your current job or not.  It's helpful if you've shown some initiative and done some learning on your own outside of work.

Make a list of recent videos watched, books read, conferences attended, and one or two sentences about each.  Try to tie the things you learned in these back to the items in the other lists. You want to show that you are passionate about your craft and always learning and improving.

Learn about the prospective company, their products, their market, their competition. Make a list of questions you have about the prospective company.  Some of these should tie back to what you are passionate about.  For me it was a lot of questions about how they do agile.  What the dev teams look like.  How they interact with other departments like product and sys ops.  Questions show you are interested and passionate, and they even out the power structure of the interview a bit, which makes everybody feel more comfortable.  Yes, they are interviewing you, but you are also interviewing them.

For design problems, stay calm and focus on fundamentals.  Treat the interviewer as a subject matter expert/product owner.  Don’t be afraid to ask questions!  Your job is to:
  • Capture the ubiquitous language of the domain.  Make sure you understand what the pieces are and what they do. 
  • Capture the functional requirements of the application.  I like to focus on user stories/use cases.  Just solve one at a time, and evolve the design to handle additional ones.
  • Nouns you hear are good candidates for objects/resources.  Verbs are good candidates for methods.
  • Keep it lean.  Start simple and try to deliver a minimal top to bottom slice that does something.  Then move on to more complicated scenarios iteratively.

Wednesday, May 24, 2017

Stop Over-Engineering

I just watched Greg Young's Build Stuff 2016 Keynote - Stop Over-Engineering, and had a lot of good takeaways.

Your software is only part of an overall business process system.  You don't have to solve every problem using software.  Definitely not early on, and typically not ever.  Why wouldn't you try to solve all the problems?  Because the cost benefit tradeoff doesn't make sense.  Many tasks are less expensive for humans to do than to try to automate.  Instead of trying to handle every edge case, regardless of how infrequently it might be encountered, focus on handling the happy path, and detecting when the user has gone off the happy path.  When that happens, hand it off to humans.

How do you know what is the happy path?  He used the analogy, "Stop watering the weeds in your life and start watering the flowers".  How do you know where the flowers are?  Data.

This is where a distinction was made between brown-field projects and green-field projects.  Brown-field projects have the advantage of usage data.  You can see which features are used the most and how they are used.  So it is easier to make data-driven decisions about where to invest effort.

He gave an example of an invoicing app.  If you try to capture all the requirements for an invoicing system, you will be in endless meetings.  This struck a chord with me, because in a past job, I worked on a brownfield project in the consumer financial sector, and I had exactly this same experience.  In Greg's case, after 2 weeks of meetings, they decided to stop capturing exhaustive requirements, and instead look at the past year's worth of actual invoices.  Then the domain experts were simply asked to classify the invoices as to whether they were on the happy path or not, and how to detect getting outside the happy path.  In one day, they were able to implement a solution that solved 60-70% of the previous year's invoices.  Then they looked at the next most common case (15%) and solved that the next day.  And the next (6%).  After 2 weeks they got to 99%, and then they stopped.  They had reached the point of diminishing returns on investment.  That project had been budgeted to take 9 months.  Don't spend 4 days of modeling to automate a task that takes a human 5 minutes of work once a year!

The problem with green-field projects is that there is no data.  He asks the question: how many of us have built features that have never been used?  How many of us have built whole products that have never been used?  So how do we know where the flowers are?  We need to get data.  How do we get data?  Greg described two complementary approaches: throw sh*t at the wall (and see what sticks), and the human concierge service.  The first approach is to build small, unpolished pieces of functionality and see what people actually like.  Taken to the extreme, you get the Feature Fake, where you make it look like you have added a new feature to your application, and see who shows interest in it.

The human concierge service actually has no automation at all.  Humans do all the work for a while, so that you can gather usage data and find the happy path to automate first.  Human concierge is one of several lean experimentation techniques.

Some other bullet points:

  • Most things in software aren't worth building
  • Cost Benefit Analyze everything!  Where is your break-even point?  If you can't figure it out - stop.  And figure it out.
  • Every developer should hire another developer to build something, to see the CBA from the other side.