Sunday, April 15, 2018

Essential Reading/Watching for Software Engineers

I've been really fortunate through the years to be surrounded by people who are dedicated to continuous learning and improvement.  A few years ago, I put together a reading list as part of an engineering department reboot.  I updated it late last year.  I have rough reading/discussion times for some of it.  What am I missing?

Notes

  • Video time slots are roughly 80% watching, 20% discussion
  • Reading time slots are roughly 66% reading, 33% discussion
  • Long chapters are broken up into 2 sessions

Agile

Clean Code

  • Video: Clean Code Episode 1: Clean Code - 1.25 hours
    • Covers Clean Code Book Ch.1: Clean Code
  • Video: Clean Code Episode 2: Names - 1 hour
    • Covers Clean Code Book Ch.2: Meaningful Names
  • Video: Clean Code Episode 3: Functions - 1.25 hours
    • Covers Clean Code Book Ch.3: Functions
  • Video: Clean Code Episode 4: Function Structure - 2 hours
  • Video: Clean Code Episode 5: Form - 1.5 hours
  • Video: Clean Code Episode 6: Testing part 1 - 1.25 hours
  • Video: Clean Code Episode 6: Testing part 2 - 1.5 hours
  • Video: Clean Code Episode 7: Architecture - 1.75 hours
  • Video: Clean Code Episode 8: Foundations of the SOLID principles - 1.25 hours

Refactoring

  • Chapter 1 - Refactoring, a First Example - 45 minutes
  • Chapter 2 - Principles in Refactoring - 1.5 hours
  • Chapter 3 - Bad Smells in Code - 45 minutes

Effective Java (2nd edition)

  • Chapter 4 - Classes and Interfaces - 1.5 hours (est)
  • Chapter 7 - Methods - 45 minutes (est)

Domain Driven Design

Microservices

  • Video: The Practical Implications of Microservices - by Sam Newman, the author of “Building Microservices” - 1.25 hours
  • Video: Deploying and Testing Microservices - also Sam Newman - 1.5 hours
  • Building Microservices book
    • Chapter 1 - Microservices - 45 minutes
    • Chapter 3 - How to Model Services - 45 minutes
    • Chapter 4 - Integration
      • Part 1: Beginning of chapter through "Downsides to REST over HTTP" - 2 hours
      • Part 2: "Implementing Asynchronous Event-Based Collaboration" through end of chapter - 2 hours
    • Chapter 5 - Splitting the Monolith
      • Part 1: Beginning of chapter thru "So What to Do?" - 1.5 hours
      • Part 2: "Reporting" thru end of chapter - 1.5 hours
    • Chapter 7 - Testing  
      • Part 1: Beginning of chapter through "The Metaversion" - 1.5 hours
      • Part 2: "Test Journeys, Not Stories" through end of chapter - 1.5 hours
  • The Twelve-Factor App - 1.5 hours

Testing

REST/API design

Event Sourcing/CQRS

Continuous Delivery

  • Chapter 1 - The Problem of Delivering Software
  • Chapter 2 - Configuration Management
  • Chapter 7 - The Commit Stage

Working Effectively with Legacy Code

  • Chapter 4 - The Seam Model
  • Chapter 9 - I Can't Get This Class into a Test Harness
  • Chapter 13 - I Need to Make a Change, but I Don't Know What Tests to Write

Further Reading

  • Agile Software Development (If you don't have access to this book, see Uncle Bob's blog about SOLID principles)
    • Chapter 7 - What is Agile Design?
    • Chapter 8 - SRP: The Single-Responsibility Principle
  • Chapter 9 - OCP: The Open-Closed Principle
    • Chapter 10 - LSP: The Liskov Substitution Principle
    • Chapter 11 - DIP: The Dependency-Inversion Principle
    • Chapter 12 - ISP: The Interface-Segregation Principle
  • Implementing Domain Driven Design
    • Ch. 4: Architecture
    • Ch. 5: Entities
    • Ch. 6: Value Objects
    • Ch. 7: Services
    • Ch. 8: Domain Events
    • Ch. 9: Modules

Thursday, September 14, 2017

Docker Java Example Part 5: Kubernetes

Now that I've got my project packaged up in a docker image, the next step in this POC is to look at platforms for running docker. The only PaaS I am familiar with right now is Pivotal Cloud Foundry, which we used at my last job to deploy Spring Boot executable jars. PCF was working on a docker story; I'm not sure how far that got. It looks like they are pretty bought into Kubernetes these days. In fact, it seems like the whole cloud world is moving in that direction, with the likes of Pivotal, VMware, Amazon, Microsoft, Dell, Alibaba, and Mesosphere joining the Cloud Native Computing Foundation. So, I set out to learn more about Kubernetes.

I started out by following the excellent Hello Minikube tutorial provided in the kubernetes docs. It steps you through installing local kubernetes (a.k.a. minikube), creating a docker image, deploying it to kubernetes, making it accessible outside the cluster, and live updating the running image. I followed the tutorial as written first, then applied it to my demo java project. Of course, I ran into some issues.

Making Minikube Aware of your Docker Image

Minikube runs its own Docker daemon. As outlined here, you have a few options for getting your docker images into minikube. Part of the hello minikube tutorial is to point your local docker client at the minikube docker daemon, and build your image there:
$ eval $(minikube docker-env)
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.64.2:2376
DOCKER_API_VERSION=1.23
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/Users/ryanmckay/.minikube/certs
That works fine in the tutorial, because they are using the docker cli tool, which respects those env variables. Unfortunately, the bmuschko gradle docker plugin does not. But it can be configured to do so relatively easily, and java-docker-example v0.5.1 adds that configuration.
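The gist of it (shown here as a sketch; the exact change is in the v0.5.1 tag) is to hand the plugin the same settings the docker cli reads from those variables:
// build.gradle (sketch): use whatever docker daemon the cli is currently pointed at,
// e.g. minikube's after `eval $(minikube docker-env)`, falling back to the local socket
def dockerHost = System.getenv('DOCKER_HOST')
def dockerCertPath = System.getenv('DOCKER_CERT_PATH')

docker {
    if (dockerHost) {
        // DOCKER_HOST uses the tcp:// form; minikube's TLS-enabled daemon is reachable over https
        url = dockerHost.replace('tcp://', 'https://')
    }
    if (dockerCertPath) {
        certPath = new File(dockerCertPath)
    }
}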
So now you can build the docker image into minikube's docker daemon:
$ ./gradlew buildImage
And you can stop pointing at minikube's docker instance with:
$ eval $(minikube docker-env -u)
I'm not sure this is the long term strategy for local dev, but at least it makes gradle and docker cli work the same way, which seems appropriate.

Kubernetes Concepts

It's worth looking over the kubernetes concepts docs to understand the domain language.  A deployment is a declaration of how you want your container deployed.  It specifies things like which image to deploy, how many instances it should have, ports to expose, etc.  A deployment is mutable: the configuration of a live deployment can be modified to, for example, target a new docker image or change the number of replicas.

A deployment manages one or more replica sets.  Each replica set corresponds to a distinct configuration of the deployment.  So if the docker image config is changed on the deployment, a new replica set representing the new config is created.  The deployment remembers the mapping from configuration to replica set, so if it sees the same configuration again, it will reuse an existing replica set. Replica sets managed by deployments should not be modified directly, even though the api allows it.

A replica set manages one or more pods, depending on the number of desired replicas.  In most cases, a pod runs a single container, though it can be configured to run multiple containers that need to be colocated on the same cluster node.
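
To make that concrete, a minimal deployment manifest looks roughly like this (illustrative only; the next section creates an equivalent deployment with kubectl run instead of a manifest):
apiVersion: apps/v1beta1        # apps/v1 on newer clusters
kind: Deployment
metadata:
  name: java-docker-example
spec:
  replicas: 1                   # desired number of pods
  template:                     # pod template, managed through a replica set
    metadata:
      labels:
        run: java-docker-example
    spec:
      containers:
      - name: java-docker-example
        image: ryanmckay/java-docker-example:0.0.1-SNAPSHOT
        ports:
        - containerPort: 8080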

Create a Deployment

A complete deployment spec is a lengthy document, but kubernetes provides a quick and easy way to create one with minimal input:
$ kubectl run java-docker-example --image=ryanmckay/java-docker-example:0.0.1-SNAPSHOT --port=8080
deployment "java-docker-example” created
Then you can look at the deployment on the cli with:
$ kubectl get deployment java-docker-example
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
java-docker-example   1         1         1            1           1d

$ kubectl describe deployment java-docker-example
Name:            java-docker-example
Namespace:        default
CreationTimestamp:    Thu, 07 Sep 2017 00:00:37 -0500
Labels:            run=java-docker-example
Annotations:        deployment.kubernetes.io/revision=1
Selector:        run=java-docker-example
Replicas:        1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:        RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:    1 max unavailable, 1 max surge
Pod Template:
  Labels:    run=java-docker-example
  Containers:
   java-docker-example:
    Image:        ryanmckay/java-docker-example:0.0.1-SNAPSHOT
    Port:        8080/TCP
    Environment:    <none>
    Mounts:        <none>
  Volumes:        <none>
Conditions:
  Type        Status    Reason
  ----        ------    ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets:    <none>
NewReplicaSet:    java-docker-example-3948992014 (1/1 replicas created)
Events:
  FirstSeen    LastSeen    Count    From            SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----            -------------    --------    ------            -------
  1d        1d        1    deployment-controller            Normal        ScalingReplicaSet    Scaled up replica set java-docker-example-3948992014 to 1
Notice the "Pod Template" section that describes the type of pod that will be managed by this deployment (through a replica set). At any given time, a deployment may be managing multiple active replica sets, which may in turn be managing multiple pods. In this example, there is only one replica set, and it is only managing one pod. But if you configured higher replication and rolling update, then during a change to the deployment spec, it will be managing spinning down the old replica set while spinning up the new replica set, at a minimum. If the spec changes faster than kubernetes can apply it, it could be more than that.

The ownership relationship can be traversed at the command line. You can see the new and old replica set in the deployment description above. Replica set details can be obtained in similar fashion:
$ kubectl describe replicaset java-docker-example-3948992014
Name:  java-docker-example-3948992014
Namespace: default
Selector: pod-template-hash=3948992014,run=java-docker-example
Labels:  pod-template-hash=3948992014
  run=java-docker-example
Annotations: deployment.kubernetes.io/desired-replicas=1
  deployment.kubernetes.io/max-replicas=2
  deployment.kubernetes.io/revision=1
Controlled By: Deployment/java-docker-example
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels: pod-template-hash=3948992014
  run=java-docker-example
  Containers:
   java-docker-example:
    Image:  ryanmckay/java-docker-example:0.0.1-SNAPSHOT
    Port:  8080/TCP
    Environment: <none>
    Mounts:  <none>
  Volumes:  <none>
Events:
  FirstSeen LastSeen Count From   SubObjectPath Type  Reason   Message
  --------- -------- ----- ----   ------------- -------- ------   -------
  24m  24m  1 replicaset-controller   Normal  SuccessfulCreate Created pod: java-docker-example-3948992014-h1c0l

You can see the created pods in the replica set's Events log. It is worth noting that the "kubectl describe" command output is intended for human consumption. To get details in a machine readable format, use "kubectl get -o json".

Minikube Dashboard

It's good to know the cli, but there is also the very nice
$ minikube dashboard

That will launch your browser pointed at the minikube dashboard app. The information we saw at the cli is available and hyperlinked.


Internal Access to Container

At this point, the deployed container is running, and you can see logs with:
$ kubectl logs deployment/java-docker-example
$ kubectl logs java-docker-example-3948992014-h1c0l
You can access it from within the cluster. Note the pod's IP address (172.17.0.4 in this example; it shows up in the dashboard and in kubectl describe pod). The following will start another pod running busybox.
$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
/ # telnet 172.17.0.4:8080
GET /greeting
{"id":4,"content":"Hello, World!"}
There are some issues here. We had to know the IP address of the pod. Also, if we were running more replicas, we wouldn't want to be reaching out to one specific instance.  The way to expose pods in kubernetes is through a service. First, note the busybox pod's environment:

/ # env | sort
HOME=/root
HOSTNAME=busybox
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
TERM=xterm
Now we launch a service for our deployment:
$ kubectl expose deployment java-docker-example
service "java-docker-example" exposed

$ kubectl describe service java-docker-example
Name:   java-docker-example
Namespace:  default
Labels:   run=java-docker-example
Annotations:  <none>
Selector:  run=java-docker-example
Type:   ClusterIP
IP:   10.0.0.32
Port:   <unset> 8080/TCP
Endpoints:  172.17.0.4:8080
Session Affinity: None
Events:   <none>
Now if we restart our busybox pod, we will have some new env variables related to the new service.
/ # env | sort
HOME=/root
HOSTNAME=busybox
JAVA_DOCKER_EXAMPLE_PORT=tcp://10.0.0.32:8080
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP=tcp://10.0.0.32:8080
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP_ADDR=10.0.0.32
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP_PORT=8080
JAVA_DOCKER_EXAMPLE_PORT_8080_TCP_PROTO=tcp
JAVA_DOCKER_EXAMPLE_SERVICE_HOST=10.0.0.32
JAVA_DOCKER_EXAMPLE_SERVICE_PORT=8080
KUBERNETES_PORT=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
TERM=xterm

/ # telnet $JAVA_DOCKER_EXAMPLE_SERVICE_HOST:$JAVA_DOCKER_EXAMPLE_SERVICE_PORT
GET /greeting
{"id":5,"content":"Hello, World!"}
There are a couple of points to make here.  First, if you plan for pods within the cluster to use a service, and you want to use the env variables for discovery, the service needs to be created before those consuming pods. Second, there are several different service types.  Since we didn't specify a type, we got the default, ClusterIP. This exposes the service only within the cluster.

External Access to Container

At some point you're going to want to expose your containers outside the cluster.  The service types build on each other.

NodePort Service Type

NodePort exposes the service externally on each node's IP at a static port.  This supports managing your own load balancer in front of the nodes. Notice that it also set up a ClusterIP at 10.0.0.131.

$ kubectl expose deployment java-docker-example --type=NodePort
service "java-docker-example" exposed

$ kubectl describe service java-docker-example
Name:   java-docker-example
Namespace:  default
Labels:   run=java-docker-example
Annotations:  <none>
Selector:  run=java-docker-example
Type:   NodePort
IP:   10.0.0.131
Port:   <unset> 8080/TCP
NodePort:  <unset> 32478/TCP
Endpoints:  172.17.0.4:8080
Session Affinity: None
Events:   <none>

$ kubectl get node minikube -o jsonpath='{.status.addresses[].address}'
192.168.99.100

$ curl 192.168.99.100:32478/greeting
{"id":6,"content":"Hello, World!"}

LoadBalancer Service Type

This type will configure a cloud-based load balancer for you.  I need to learn more about this, as I did all of these exercises on minikube.  Even on minikube, though, the LoadBalancer type makes your life easier.
$ kubectl expose deployment java-docker-example --type=LoadBalancer
service "java-docker-example" exposed

$ kubectl describe service java-docker-example
Name:   java-docker-example
Namespace:  default
Labels:   run=java-docker-example
Annotations:  <none>
Selector:  run=java-docker-example
Type:   LoadBalancer
IP:   10.0.0.193
Port:   <unset> 8080/TCP
NodePort:  <unset> 32535/TCP
Endpoints:  172.17.0.4:8080
Session Affinity: None
Events:   <none>

$ minikube service java-docker-example
Opening kubernetes service default/java-docker-example in default browser...
This saves you from having to track down and piece together the node ip and port.
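
If you just want the URL rather than a browser window, minikube can print it instead:
$ minikube service java-docker-example --url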

Friday, September 1, 2017

Docker Java Example Part 4: Bmuschko and Nebula Gradle Docker Plugins

Converting from the transmode gradle plugin to the bmuschko remote api gradle plugin was pretty straightforward. Other than importing and applying the plugin, the code to get local docker image creation working comes down to a Dockerfile-generating task and an image-build task.
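A sketch of that configuration, using the plugin's Dockerfile and DockerBuildImage task types (the exact code is in the v0.4.1 tag; treat the property names here as approximate):

import com.bmuschko.gradle.docker.tasks.image.Dockerfile
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage

// stage the boot jar next to the generated Dockerfile
task syncJar(type: Copy, dependsOn: build) {
    from jar.archivePath
    into 'build/docker'
}

// generate build/docker/Dockerfile, mirroring the hand-written one from the transmode setup
task createDockerfile(type: Dockerfile, dependsOn: syncJar) {
    destFile = project.file('build/docker/Dockerfile')
    from 'openjdk:8-jdk-alpine'
    volume '/tmp'
    copyFile jar.archiveName, 'app.jar'
    environmentVariable 'JAVA_OPTS', '""'
    entryPoint 'sh', '-c', 'java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar'
}

// build the image with both the versioned tag and latest
task buildImage(type: DockerBuildImage, dependsOn: createDockerfile) {
    inputDir = createDockerfile.destFile.parentFile
    tags = ["ryanmckay/${project.name}:${project.version}".toString(),
            "ryanmckay/${project.name}:latest".toString()]
}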

Note that bmuschko does support multiple image tags, and I took advantage of that to get the versioned tag as well as the "latest" tag.
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
ryanmckay/java-docker-example   0.0.1-SNAPSHOT      7fd01d5b247f        6 seconds ago       115MB
ryanmckay/java-docker-example   latest              7fd01d5b247f        6 seconds ago       115MB
I tagged the code repo at v0.4.1 at this point.

Java Application plugin

In addition to the low-level remote api plugin, bmuschko offers an opinionated docker-java-application plugin based on the application gradle plugin. Using the opinionated plugin cuts down dramatically on the boilerplate in build.gradle.
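With that plugin applied, the docker-related portion of build.gradle shrinks to something like this (a sketch; exact property names vary a bit between plugin versions, and the real thing is in the v0.4.2 branch):

apply plugin: 'com.bmuschko.docker-java-application'

docker {
    javaApplication {
        // base image and exposed port for the generated Dockerfile
        baseImage = 'openjdk:8-jdk-alpine'
        port = 8080
        tag = "ryanmckay/${project.name}:${project.version}"
    }
}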


Unfortunately, this task only supports one tag. By default, you get the versioned one.
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
ryanmckay/java-docker-example   0.0.1-snapshot      415a9e4b201d        3 seconds ago       115MB
The plugin generates the Dockerfile for you as well.
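It comes out roughly like this (a sketch of the plugin's typical output for this configuration, not the verbatim file):
FROM openjdk:8-jdk-alpine
ADD java-docker-example-0.0.1-SNAPSHOT.tar /
ENTRYPOINT ["/java-docker-example-0.0.1-SNAPSHOT/bin/java-docker-example"]
EXPOSE 8080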

As an interesting side note, the ADD Dockerfile directive has special behavior when the file being added is a tar file. In that case, it unpacks it to the destination.

The application gradle plugin is a more generic method of packaging up a java application than that offered by the spring boot plugin. It creates a tar file containing the application jar and all the dependency jars. It also contains a shell script for launching the application, which has OS detection and some OS-specific config.
$ tar tf build/distributions/java-docker-example-0.0.1-SNAPSHOT.tar 
java-docker-example-0.0.1-SNAPSHOT/
java-docker-example-0.0.1-SNAPSHOT/lib/
java-docker-example-0.0.1-SNAPSHOT/lib/java-docker-example-0.0.1-SNAPSHOT.jar
java-docker-example-0.0.1-SNAPSHOT/lib/spring-boot-starter-1.5.4.RELEASE.jar
java-docker-example-0.0.1-SNAPSHOT/lib/spring-boot-starter-web-1.5.4.RELEASE.jar
...
java-docker-example-0.0.1-SNAPSHOT/bin/
java-docker-example-0.0.1-SNAPSHOT/bin/java-docker-example
java-docker-example-0.0.1-SNAPSHOT/bin/java-docker-example.bat

I started using gradle about the same time I started using spring boot (which has its own gradle plugin with executable jar packaging), so I wasn't familiar with the application plugin. It makes sense that bmuschko would base the opinionated plugin on it, so it can support all types of java applications, not just spring boot.  However, since I plan to use spring boot exclusively for the foreseeable future, and can completely specify the execution environment in Docker (so I don't need the OS-related functionality provided by the application plugin), I want to stick with Spring Boot application packaging and running.
I left the modifications in a branch tagged as v0.4.2.

Nebula docker gradle plugin

Netflix publishes a set of plugins for gradle called Nebula. The nebula-docker-plugin is another opinionated plugin built on top of the bmuschko and application plugins.  It doesn't seem to add a lot beyond the bmuschko application plugin, other than the concept of separate test and production repositories for publishing docker images.  I'm going to look into docker deployment models next, so it might come into play there.

Tuesday, August 29, 2017

Docker Java Example Part 3: Transmode Gradle plugin

At last we get to some Docker in this Docker example.

Gradle Docker Plugin

There are a few prominent docker plugins for gradle: transmode, bmuschko, and Netflix nebula. First, I used transmode, as recommended in the spring boot guide Spring Boot with Docker. After adding the Dockerfile:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD target/java-docker-example-0.0.1-SNAPSHOT.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

, and updating build.gradle as described in the guide, I was able to build a docker image for my application:

$ gw clean build buildDocker --info

Setting up staging directory.
Creating Dockerfile from file /Users/ryanmckay/projects/java-docker-example/java-docker-example/src/main/docker/Dockerfile.
Determining image tag: ryanmckay/java-docker-example:0.0.1-SNAPSHOT
Using the native docker binary.
Sending build context to Docker daemon  14.43MB
Step 1/5 : FROM openjdk:8-jdk-alpine
---> 478bf389b75b
Step 2/5 : VOLUME /tmp
---> Using cache
---> 136f2d4e58dc
Step 3/5 : ADD target/java-docker-example-0.0.1-SNAPSHOT.jar app.jar
---> b3b47b89bbf1
Removing intermediate container 92f637bc67e0
Step 4/5 : ENV JAVA_OPTS ""
---> Running in e90c9a3557eb
---> 1d3f6526e8e5
Removing intermediate container e90c9a3557eb
Step 5/5 : ENTRYPOINT sh -c java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar
---> Running in 2fbfb52f836d
---> f001bdddc80b
Removing intermediate container 2fbfb52f836d
Successfully built f001bdddc80b
Successfully tagged ryanmckay/java-docker-example:0.0.1-SNAPSHOT

$ docker images
REPOSITORY                       TAG               IMAGE ID        CREATED          SIZE
ryanmckay/java-docker-example    0.0.1-SNAPSHOT    f001bdddc80b    3 minutes ago    115MB

$ docker run -p 8080:8080 -t ryanmckay/java-docker-example:0.0.1-SNAPSHOT 

Automatically Tracking Application Version

Note that the Dockerfile at this point has the application version hard-coded in it. This duplication must not stand. The transmode gradle plugin also supports a dsl for specifying the Dockerfile in build.gradle. Then as part of the build process, it produces the actual Dockerfile.

I set about moving the Dockerfile into the dsl line by line. With one exception, it went smoothly. You can see the result in v0.3 of the app. The relevant portion of build.gradle is listed here. You can see it's pretty much a line-for-line translation of the Dockerfile. And since we have access to the jar filename in the build script, nothing needs to be hard-coded for docker.

// for docker
group = 'ryanmckay'

docker {
 baseImage 'openjdk:8-jdk-alpine'
}

task buildDocker(type: Docker, dependsOn: build) {
 applicationName = jar.baseName
 volume('/tmp')
 addFile(jar.archivePath, 'app.jar')
 setEnvironment('JAVA_OPTS', '""')
 entryPoint([ 'sh', '-c', 'java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar' ])
}

ENV is used at build time and at run time

One little gotcha in the previous section led to an interesting learning. My initial attempt to set the JAVA_OPTS env variable looked like this:
setEnvironment('JAVA_OPTS', '')
but that produced an illegal line in the Dockerfile:
ENV JAVA_OPTS
That led me to read about the ENV directive in the Dockerfile reference docs.  I was confused about whether ENV directives are used at build time, run time, or both.  It turns out the answer is both, as I was able to prove to myself with the following. The THE_FILE env variable is used at build time to decide which file to add to the image, and at run time as a regular environment variable, which can be overridden on the command line.

$ cat somefile
somefile contents

$ cat Dockerfile
FROM alpine:latest
ENV THE_FILE="somefile"
ADD $THE_FILE containerizedfile
ENTRYPOINT ["sh", "-c", "cat containerizedfile && echo '-----' && env | sort"]

$ docker build -t envtest .
Sending build context to Docker daemon  3.072kB
Step 1/4 : FROM alpine:latest
 ---> 7328f6f8b418
Step 2/4 : ENV THE_FILE "somefile"
 ---> Using cache
 ---> 148a4236ce19
Step 3/4 : ADD $THE_FILE containerizedfile
 ---> Using cache
 ---> d44f9e242685
Step 4/4 : ENTRYPOINT sh -c cat containerizedfile && echo '-----' && env | sort
 ---> Running in 70de8ceac5ef
 ---> 5d875712904a
Removing intermediate container 70de8ceac5ef
Successfully built 5d875712904a
Successfully tagged envtest:latest

$ docker run envtest
somefile contents
-----
HOME=/root
HOSTNAME=36b3233697df
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
THE_FILE=somefile
no_proxy=*.local, 169.254/16

$ docker run -e THE_FILE=blah envtest
somefile contents
-----
HOME=/root
HOSTNAME=6a0f8c183a18
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
THE_FILE=blah
no_proxy=*.local, 169.254/16

Version Tag and Latest Tag

So that all worked fine for creating and publishing a versioned docker image locally.  But I really want that image tagged with the semantic version and also the latest tag.  The transmode plugin currently does not support multiple tags for the produced docker image.  I'm not the only one who wants this feature.  I took a look at the source code, and it wouldn't be a minor change.  At this point, I'm only publishing locally, so given the choice between version tag and latest tag, I'm going to go for latest for now.  This is a simple matter of adding tagVersion = 'latest' to the buildDocker task.
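The buildDocker task from earlier then becomes:
task buildDocker(type: Docker, dependsOn: build) {
 applicationName = jar.baseName
 tagVersion = 'latest'   // publish as :latest instead of :0.0.1-SNAPSHOT
 volume('/tmp')
 addFile(jar.archivePath, 'app.jar')
 setEnvironment('JAVA_OPTS', '""')
 entryPoint([ 'sh', '-c', 'java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar' ])
}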

I tagged the code repo at v0.3.2 at this point.

I'm going to move on to evaluating the bmuschko and Netflix Nebula Docker Gradle plugins next.

Wednesday, August 23, 2017

Docker Java Example Part 2: Spring Web MVC Testing

The next step was to add some tests. The tests that came with the demo controller used a Spring feature I was not familiar with, MockMvc. The Spring Guide "Testing the Web Layer" provides a good discussion of various levels of testing, focusing on how much of the Spring context to load. There are 3 main levels: 1) the full Tomcat server with the full Spring context, 2) the full Spring context without a server, and 3) a narrower MVC-focused context without a server. I wanted to compare all three, plus vary the testing and assertion frameworks; specifically, I wanted to add Spock with groovy power asserts.  The aspects I wanted to compare were test speed, readability of test code, and readability of test output.  I intentionally made one of the tests fail in each approach to compare output.


Spock with Full Tomcat Server

This is the approach I am most familiar with.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerSpec.groovy

Timing

I ran and timed the test in isolation with
$ ./gradlew test --tests '*GreetingControllerSpec' --profile

Total 'test' task time (reported by gradle profile output): 13.734s
Total test run time (reported by junit test output): 12.690s
Time to start GreetingControllerSpec (load full context and start tomcat): 12.157s
So, not fast. Maybe one of the other approaches can do better.

Test Code Readability

def "no Param greeting should return default message"() {

    when:
    ResponseEntity<Greeting> responseGreeting = restTemplate
                .getForEntity("http://localhost:" + port + "/greeting", Greeting.class)

    then:
    responseGreeting.statusCode == HttpStatus.OK
    responseGreeting.body.content == "blah"
}
I really like Spock. I like the plain English test names. I like the separate sections for given, when, then, etc. I think it reads well and makes it obvious what is under test.

Test Output Readability

When a test fails, you want to see why, right?  In this aspect, groovy power assertions are simply unparalleled.
Condition not satisfied:

responseGreeting.body.content == "blah"
|                |    |       |
|                |    |       false
|                |    |       12 differences (7% similarity)
|                |    |       (He)l(lo, World!)
|                |    |       (b-)l(ah--------)
|                |    Hello, World!
|                Greeting(id=1, content=Hello, World!)
<200 OK,Greeting(id=1, content=Hello, World!),{Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 22 Aug 2017 22:06:33 GMT]}>

 at net.ryanmckay.demo.GreetingControllerSpec.no Param greeting should return default message(GreetingControllerSpec.groovy:27)
Note that the nice output for responseGreeting itself comes from ResponseEntity.toString(), and from Greeting.toString(), which is provided by Lombok.

Spock with MockMvc

By adding @AutoConfigureMockMvc to your test class, you can inject a MockMvc instance, which facilitates making calls directly to Spring's HTTP request handling layer.  This allows you to skip starting up a Tomcat server, so it should save some time and/or memory.  On the other hand, you are testing less of the round trip, so the time savings would need to be significant to justify this approach.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerMockMvcSpec.groovy

Timing

This approach was about 500ms faster than with tomcat.  Not significant enough for me to justify it, considering the overall time scale.

Total 'test' task time (reported by gradle profile output): 13.263s
Total test run time (reported by junit test output): 12.281s
Time to start GreetingControllerSpec (load full context, no tomcat): 11.804s

Test Code Readability

def "no Param greeting should return default message"() {

    when:
    def resultActions = mockMvc.perform(get("/greeting")).andDo(print())

    then:
    resultActions
            .andExpect(status().isOk())
            .andExpect(jsonPath('$.content').value("blah"))
}
This reads reasonably well.  Capturing the resultActions in the when block to use later in the then block is a little awkward, but not too bad.  Being able to express arbitrary JSON path expectations is convenient.  I didn't see an obvious way to get a ResponseEntity as was done in the full Tomcat example.

Test Output Readability

Condition failed with Exception:

resultActions .andExpect(status().isOk()) .andExpect(jsonPath('$.content').value("blah"))
|              |         |        |        |         |                     |
|              |         |        |        |         |                     org.springframework.test.web.servlet.result.JsonPathResultMatchers$2@1f977413
|              |         |        |        |         org.springframework.test.web.servlet.result.JsonPathResultMatchers@6cd50e89
|              |         |        |        java.lang.AssertionError: JSON path "$.content" expected:<blah> but was:<Hello, World!>
|              |         |        org.springframework.test.web.servlet.result.StatusResultMatchers$10@660dd332
|              |         org.springframework.test.web.servlet.result.StatusResultMatchers@251379e8
|              org.springframework.test.web.servlet.MockMvc$1@68837646
org.springframework.test.web.servlet.MockMvc$1@68837646

 at net.ryanmckay.demo.GreetingControllerMockMvcSpec.no Param greeting should return default message(GreetingControllerMockMvcSpec.groovy:29)
Caused by: java.lang.AssertionError: JSON path "$.content" expected:<blah> but was:<Hello, World!>

This test output does not read well at all. Spock and the Spring MockMvc library are both tripping over each other trying to provide verbose output.  I think you need to choose either Spock or MockMvc, but not both.

JUnit with WebMvcTest and MockMvc

This configuration is on the far other end of the spectrum from full service Spock.  With @WebMvcTest, not only does it not start a Tomcat server, it doesn't even load a full context.  In the current state of the project this doesn't make much of a difference because the GreetingController has no injected dependencies.  If it did, I would have to mock those out.  Again, because of the differences from "real" configuration, time savings would need to be significant.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerTests.java
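The class-level setup that creates the narrow context looks roughly like this (a sketch following the Spring guide's JUnit 4 pattern, not necessarily the exact file):

package net.ryanmckay.demo;

import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;

// Loads only the web layer for GreetingController - no Tomcat, no full application context
@RunWith(SpringRunner.class)
@WebMvcTest(GreetingController.class)
public class GreetingControllerTests {

    @Autowired
    private MockMvc mockMvc;

    // test methods like the one shown below go here
}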

Timing

This approach was also about 500ms faster overall than full context with Tomcat.

Total 'test' task time (reported by gradle profile output): 13.275s
Total test run time (reported by junit test output): 0.269s
Time to start GreetingControllerSpec (load narrow context, no tomcat): 11.88s

Test Code Readability

@Test
public void noParamGreetingShouldReturnDefaultMessage() throws Exception {

    this.mockMvc.perform(get("/greeting")).andDo(print()).andExpect(status().isOk())
            .andExpect(jsonPath("$.content").value("blah"));
}

This is the least readable for me.  Again, I like separating the call under test from the assertions.

Test Output Readability

The failure message for MockMvc-based assertion failures isn't as informative as Spock in this case.
java.lang.AssertionError: JSON path "$.content" expected:<blah> but was:<Hello, World!>

Because the test called .andDo(print()), some additional information is available in the standard out of the test, including the full response status code and body.

Conclusion

I'm as convinced as ever that Spock is the premier Java testing framework.  I'm reserving judgment on the Spring annotations that let you avoid starting a Tomcat server or loading the full context.  If the project gets more complicated, those could potentially provide a nice speedup.

I tagged the code repo at v0.2 at this point.

Sunday, July 23, 2017

Docker Java Example Part 1: Initializing a new Spring Boot Project

I've been wanting to learn more about Docker for a while.  I'm almost done with this udemy course Docker for Java Developers.  It's a good course, and the concepts are straightforward.  To help me commit it to memory, I wanted to do my own project to apply what I'm learning.
https://github.com/ryanmckaytx/java-docker-example
In this part, I'm just going to initialize a new project.  Part 2 covers Spring Web MVC testing.

Set up your dev machine

Every once in a while, like when you switch jobs and get a new laptop, you need to set up a machine for java development.  

For this, I like to use sdkman.  It helps install the tools you need to do java development.  It also helps switch between multiple versions of those tools.  I've installed java, groovy, gradle, and maven.
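Installing a tool (or a specific version of one) is a one-liner:
$ sdk install java
$ sdk install groovy
$ sdk install gradle 4.0
$ sdk install maven

and sdk current (sdk c for short) shows what's active: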
$ sdk c

Using:

gradle: 4.0
groovy: 2.4.11
java: 8u131-zulu
maven: 3.5.0

Create a new project

There are a few good ways to do this.  The typical way I do this is to copy another project.  Within an organization, or at least within a team, there is typically some amount of infrastructure and institutional knowledge built into existing projects that you want in a new project.  But for this project, I wanted to practice starting completely from scratch.  There are a couple of good options.

Gradle init

I like gradle as a build tool, and gradle has a built-in project initializer.  It supports a few project archetypes, including pom (converting a maven project to gradle), java library, and java application.  It even explicitly supports the spock testing framework.

$ gradle init --type java-application --test-framework spock

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed
$ tree
.
|____build.gradle
|____gradle
| |____wrapper
| | |____gradle-wrapper.jar
| | |____gradle-wrapper.properties
|____gradlew
|____gradlew.bat
|____settings.gradle
|____src
| |____main
| | |____java
| | | |____App.java
| |____test
| | |____groovy
| | | |____AppTest.groovy

You can see it also generates a demo App and AppTest.

Spring Boot Initializr

For Spring Boot apps, Spring provides the Spring Boot Initializr.  This lets you choose from a curated  (but extensive) set of options and dependencies, and then generates a project in a zip file for download.  Similarly to gradle init, it includes a default basic app and test.



$ unzip demo.zip
Archive:  demo.zip
   creating: demo/
  inflating: demo/gradlew
   creating: demo/gradle/
   creating: demo/gradle/wrapper/
   creating: demo/src/
   creating: demo/src/main/
   creating: demo/src/main/java/
   creating: demo/src/main/java/net/
   creating: demo/src/main/java/net/ryanmckay/
   creating: demo/src/main/java/net/ryanmckay/demo/
   creating: demo/src/main/resources/
   creating: demo/src/main/resources/static/
   creating: demo/src/main/resources/templates/
   creating: demo/src/test/
   creating: demo/src/test/java/
   creating: demo/src/test/java/net/
   creating: demo/src/test/java/net/ryanmckay/
   creating: demo/src/test/java/net/ryanmckay/demo/
  inflating: demo/.gitignore
  inflating: demo/build.gradle
  inflating: demo/gradle/wrapper/gradle-wrapper.jar
  inflating: demo/gradle/wrapper/gradle-wrapper.properties
  inflating: demo/gradlew.bat
  inflating: demo/src/main/java/net/ryanmckay/demo/DemoApplication.java
  inflating: demo/src/main/resources/application.properties
  inflating: demo/src/test/java/net/ryanmckay/demo/DemoApplicationTests.java

Pretty much the only thing I don't like here is that the test isn't spock, and there doesn't seem to be a way to choose it. Not a big deal; it's easy to change afterward.  I went with Initializr for this project.

JHipster

JHipster is an opinionated full-stack project generator for Spring Boot + Angular apps.  It has a lot of features that I want to explore later, so for now I stuck with Spring Boot Initializr.

Add a Rest Controller

I could have gone straight to CI at this point, but I wanted to do a little extra work that I wish the Initializr could have done for me.  I copied in a basic hello world spring rest controller (and its tests) from the Spring Restful Web Service Guide.  The repo at this point is https://github.com/ryanmckaytx/java-docker-example/tree/v0.1
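
For reference, the controller from that guide is essentially the following (Greeting is a simple id/content value class; in my project its getters and toString come from Lombok):

package net.ryanmckay.demo;

import java.util.concurrent.atomic.AtomicLong;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private static final String template = "Hello, %s!";
    private final AtomicLong counter = new AtomicLong();

    // GET /greeting?name=World  ->  {"id":N,"content":"Hello, World!"}
    @RequestMapping("/greeting")
    public Greeting greeting(@RequestParam(value = "name", defaultValue = "World") String name) {
        return new Greeting(counter.incrementAndGet(), String.format(template, name));
    }
}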

Saturday, June 24, 2017

Interview Prep/Tips


I recently interviewed for a new job, which led me to review some notes I made for myself after some interviews a couple of years ago.  I thought I would share them in case someone else finds something useful here.  The point of preparing is not to pretend to be anything you aren't or to know anything you don't; the point is just to have everything you do know on the tip of your tongue.  You probably only have an hour per panel, so you need to be on point.

Make a list of professional things that are important to you.  Out of those, pick the most important and write a few sentences.  Those are kind of your mission statement.  I put those sentences right at the top of my resume.  Here is my list:
  • Working with smart, motivated people
  • Agile
  • Mentoring
  • Being Mentored
  • DevOps
  • Physical fitness
  • Feeling like I'm contributing to company's success
  • Company mission
  • Competence of other departments
  • Trust in leadership
  • Trust from leadership
  • Advanced software architecture
  • Advanced technology
  • Work/Life balance
Make a list of the major lessons learned during your time at your current job.  These can include accomplishments, things you wish you had done better, or just experience gained.

Make a list of what areas you want to learn more/grow more in.  These should be things that really excite you.  It can be stuff you already do in your current job or not.  It's helpful if you've shown some initiative and done some learning on your own outside of work.

Make a list of recent videos watched, books read, conferences attended, and one or two sentences about each.  Try to tie the things you learned in these back to the items in the other lists. You want to show that you are passionate about your craft and always learning and improving.

Learn about the prospective company, their products, their market, and their competition. Make a list of questions you have about the prospective company.  Some of these should tie back to what you are passionate about.  For me it was a lot of questions about how they do agile, what the dev teams look like, and how they interact with other departments like product and sys ops.  Questions show you are interested and passionate, and they even out the power structure of the interview a bit, which makes everybody feel more comfortable.  Yes, they are interviewing you, but you are also interviewing them.

For design problems, stay calm and focus on fundamentals.  Treat the interviewer as a subject matter expert/product owner.  Don’t be afraid to ask questions!  Your job is to:
  • Capture the ubiquitous language of the domain.  Make sure you understand what the pieces are and what they do. 
  • Capture the functional requirements of the application.  I like to focus on user stories/use cases.  Just solve one at a time, and evolve the design to handle additional ones.
  • Nouns you hear are good candidates for objects/resources.  Verbs are good candidates for methods.
  • Keep it lean.  Start simple and try to deliver a minimal top to bottom slice that does something.  Then move on to more complicated scenarios iteratively.