I've been really fortunate through the years to be surrounded by people who are dedicated to continuous learning and improvement. A few years ago, I put together a reading list as part of an engineering department reboot. I updated it late last year. I have rough reading/discussion times for some of it. What am I missing?
Notes
Video time slots are roughly 80% watching, 20% discussion
Reading time slots are roughly 66% reading, 33% discussion
Now that I've got my project packaged up in a docker image, the next step in this POC is to look at platforms for running docker. The only PaaS I'm familiar with is Pivotal Cloud Foundry, which we used at my last job to deploy Spring Boot executable jars. PCF was working on a docker story; I'm not sure how far that got. It looks like they are pretty bought into Kubernetes these days. In fact, it seems like the whole cloud world is moving in that direction, with the likes of Pivotal, VMware, Amazon, Microsoft, Dell, Alibaba, and Mesosphere joining the Cloud Native Computing Foundation. So, I set out to learn more about Kubernetes.
I started out by following the excellent Hello Minikube tutorial in the kubernetes docs. It steps you through installing a local kubernetes cluster (minikube), creating a docker image, deploying it to kubernetes, making it accessible outside the cluster, and live-updating the running image. I followed the tutorial as written first, then applied it to my demo java project. Of course, I ran into some issues.
Making Minikube Aware of your Docker Image
Minikube runs its own Docker daemon. As outlined here, you have a few options for getting your docker images into minikube. Part of the hello minikube tutorial is to point your local docker client at the minikube docker daemon, and build your image there:
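That command is provided by minikube itself:

$ eval $(minikube docker-env)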
So now you can build the docker image straight into minikube's docker daemon:
$ ./gradlew buildImage
And you can stop pointing at kubernetes' docker instance with:
$ eval $(minikube docker-env -u)
I'm not sure this is the long-term strategy for local dev, but at least it makes gradle and the docker CLI work the same way, which seems appropriate.
Kubernetes Concepts
It's worth looking over the kubernetes concepts docs to understand the domain language. A deployment is a declaration of how you want your container deployed. It specifies things like which image to deploy, how many instances it should have, and which ports to expose. A deployment is mutable: the configuration of a live deployment can be modified to, e.g., target a new docker image or change the number of replicas.
A deployment manages one or more replica sets. Each replica set corresponds to a distinct configuration of the deployment. So if the docker image config is changed on the deployment, a new replica set representing the new config is created. The deployment remembers the mapping from configuration to replica set, so if it sees the same configuration again, it will reuse an existing replica set. Replica sets managed by deployments should not be modified directly, even though the api allows it.
A replica set manages one or more pods, depending on the number of desired replicas. In most cases, a pod runs a single container, though it can be configured to run multiple containers that need to be colocated on the same cluster node.
Create a Deployment
A complete deployment spec is a lengthy document, but kubernetes provides a quick and easy way to create one with minimal input:
$ kubectl run java-docker-example --image=ryanmckay/java-docker-example:0.0.1-SNAPSHOT --port=8080
deployment "java-docker-example” created
Then you can look at the deployment on the CLI with:
$ kubectl get deployment java-docker-example
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
java-docker-example   1         1         1            1           1d
$ kubectl describe deployment java-docker-example
Name: java-docker-example
Namespace: default
CreationTimestamp: Thu, 07 Sep 2017 00:00:37 -0500
Labels: run=java-docker-example
Annotations: deployment.kubernetes.io/revision=1
Selector: run=java-docker-example
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template<:
Labels: run=java-docker-example
Containers:
java-docker-example:
Image: ryanmckay/java-docker-example:0.0.1-SNAPSHOT
Port: 8080/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type        Status  Reason
----        ------  ------
Available   True    MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: java-docker-example-3948992014 (1/1 replicas created)
Events:
FirstSeen  LastSeen  Count  From                   SubObjectPath  Type    Reason             Message
---------  --------  -----  ----                   -------------  ------  ------             -------
1d         1d        1      deployment-controller                 Normal  ScalingReplicaSet  Scaled up replica set java-docker-example-3948992014 to 1
Notice the "Pod Template" section that describes the type of pod that will be managed by this deployment (through a replica set). At any given time, a deployment may be managing multiple active replica sets, which may in turn be managing multiple pods. In this example, there is only one replica set, and it is only managing one pod. But if you configured higher replication and rolling update, then during a change to the deployment spec, it will be managing spinning down the old replica set while spinning up the new replica set, at a minimum. If the spec changes faster than kubernetes can apply it, it could be more than that.
The ownership relationship can be traversed at the command line. You can see the new and old replica set in the deployment description above. Replica set details can be obtained in similar fashion:
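Using the replica set name from the describe output above:

$ kubectl describe rs java-docker-example-3948992014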
You can see the created pods in the replica set's Events log. It is worth noting that the "kubectl describe" command output is intended for human consumption. To get details in a machine-readable format, use "kubectl get -o json".
Minikube Dashboard
It's good to know the CLI, but there is also the very nice
$ minikube dashboard
That will launch your browser pointed at the minikube dashboard app. The information we saw at the CLI is available there and hyperlinked.
Internal Access to Container
At this point, the deployed container is running, and you can see logs with:
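Something like this; the pod name comes from kubectl get pods (the generated suffix will differ):

$ kubectl get pods
NAME                                   READY     STATUS    RESTARTS   AGE
java-docker-example-3948992014-x0dl3   1/1       Running   0          1d
$ kubectl logs java-docker-example-3948992014-x0dl3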
You can access it from within the cluster. Note the pod's IP address from the pod details (172.17.0.4 in my case). The following will start another pod running busybox.
$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
/ # telnet 172.17.0.4 8080
GET /greeting
{"id":4,"content":"Hello, World!"}
There are some issues here. We had to know the IP address of the pod. Also, if we were running more replicas, we wouldn't want to be reaching out to one specific instance. The way to expose pods in kubernetes is through a service. First, note the busybox pod's environment:
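As a sketch, the service here is created with kubectl expose, and a fresh busybox pod is started after it (the ClusterIP is illustrative; the variable names follow kubernetes' {SVCNAME}_SERVICE_* convention):

$ kubectl expose deployment java-docker-example --port=8080
service "java-docker-example" exposed
$ kubectl run -i --tty busybox2 --image=busybox --restart=Never -- sh
/ # env | grep JAVA_DOCKER_EXAMPLE
JAVA_DOCKER_EXAMPLE_SERVICE_HOST=10.0.0.42
JAVA_DOCKER_EXAMPLE_SERVICE_PORT=8080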
There are a couple of points to make here. First, if you plan for pods within the cluster to use a service, and you want to use the env variables for discovery, the service needs to be created before those consuming pods. Second, there are several different service types. Since we didn't specify a type, we got the default, ClusterIP. This exposes the service only within the cluster.
External Access to Container
At some point you're going to want to expose your containers outside the cluster. The service types build on each other.
NodePort Service Type
NodePort exposes the service externally on each node's IP at a static port. This supports managing your own load balancer in front of the nodes. Recreating the service with this type looks roughly like the sketch below; notice that it also sets up a ClusterIP (10.0.0.131 in my case).
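A sketch; the delete clears the earlier ClusterIP service, and the NodePort value is whatever the cluster assigns:

$ kubectl delete service java-docker-example
$ kubectl expose deployment java-docker-example --type=NodePort --port=8080
service "java-docker-example" exposed
$ kubectl describe service java-docker-example
Name:      java-docker-example
Type:      NodePort
IP:        10.0.0.131
Port:      <unset>  8080/TCP
NodePort:  <unset>  31245/TCP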
LoadBalancer Service Type
This type will configure a cloud-based load balancer for you. I need to learn more about this, as I did all these exercises on minikube only. Even on minikube, though, the LoadBalancer type makes your life easier.
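Minikube has no cloud provider to provision a real load balancer, so the service behaves like a NodePort, but the minikube service command will look up the URL and open your browser for you. A sketch (again deleting the previous service first):

$ kubectl delete service java-docker-example
$ kubectl expose deployment java-docker-example --type=LoadBalancer --port=8080
$ minikube service java-docker-example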
Converting from the transmode gradle plugin to the bmuschko remote api gradle plugin was pretty straightforward. Other than importing and applying the plugin, the code to get local docker image creation working is as follows:
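A reconstruction of what that looked like, based on the surrounding text (the buildImage task name matches the kubernetes post above; the plugin version, staging paths, and helper task names are my guesses):

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        // version is a guess; any 3.x release with multi-tag support should do
        classpath 'com.bmuschko:gradle-docker-plugin:3.+'
    }
}

apply plugin: 'com.bmuschko.docker-remote-api'

import com.bmuschko.gradle.docker.tasks.image.Dockerfile
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage

// Stage the boot jar into the docker build context
task syncJar(type: Copy) {
    dependsOn assemble
    from jar
    into 'build/docker'
}

// Generate the same Dockerfile as before, with the jar name injected from the build
task createDockerfile(type: Dockerfile, dependsOn: syncJar) {
    destFile = project.file('build/docker/Dockerfile')
    from 'openjdk:8-jdk-alpine'
    volume '/tmp'
    copyFile jar.archiveName, 'app.jar'
    environmentVariable 'JAVA_OPTS', '""'
    entryPoint 'sh', '-c', 'java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar'
}

// Build the image with both the versioned tag and latest
task buildImage(type: DockerBuildImage, dependsOn: createDockerfile) {
    inputDir = project.file('build/docker')
    tags = ["ryanmckay/java-docker-example:${version}".toString(),
            'ryanmckay/java-docker-example:latest']
}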
Note that bmuschko does support multiple image tags, and I took advantage of that to get the versioned tag as well as the "latest" tag.
REPOSITORY                      TAG              IMAGE ID       CREATED         SIZE
ryanmckay/java-docker-example   0.0.1-SNAPSHOT   7fd01d5b247f   6 seconds ago   115MB
ryanmckay/java-docker-example   latest           7fd01d5b247f   6 seconds ago   115MB
In addition to the low-level remote api plugin, bmuschko offers an opinionated docker-java-application plugin based on the application gradle plugin. Using the opinionated plugin cuts down dramatically on the boilerplate in the build.gradle:
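It reduces to roughly this (property names per the bmuschko docs of that era; treat the details as a sketch):

apply plugin: 'com.bmuschko.docker-java-application'

docker {
    javaApplication {
        baseImage = 'openjdk:8-jdk-alpine'
        port = 8080
        // tag defaults to roughly "<group>/<applicationName>:<version>", lowercased
    }
}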
Unfortunately, this task only supports one tag. By default, you get the versioned one.
REPOSITORY                      TAG              IMAGE ID       CREATED         SIZE
ryanmckay/java-docker-example   0.0.1-snapshot   415a9e4b201d   3 seconds ago   115MB
The generated Dockerfile looks like this:
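Based on the plugin's convention of shipping the application plugin's tar (see the note below), it was shaped roughly like this:

FROM openjdk:8-jdk-alpine
ADD java-docker-example-0.0.1-SNAPSHOT.tar /
ENTRYPOINT ["/java-docker-example-0.0.1-SNAPSHOT/bin/java-docker-example"]
EXPOSE 8080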
As an interesting side note, the ADD Dockerfile directive has special behavior when the file being added is a tar file. In that case, it unpacks it to the destination.
The application gradle plugin is a more generic method of packaging up a java application than that offered by the spring boot plugin. It creates a tar file containing the application jar and all the dependency jars. It also contains a shell script for launching the application, which has OS detection and some OS-specific config.
I started using gradle about the same time I started using spring boot (which has its own gradle plugin with executable jar packaging), so I wasn't familiar with the application plugin. It makes sense that bmuschko would base the opinionated plugin on it, so it can support all types of java applications, not just spring boot. However, since I plan to use spring boot exclusively for the foreseeable future, and can completely specify the execution environment in Docker (so I don't need the OS-related functionality provided by the application plugin), I want to stick with Spring Boot application packaging and running.
I left the modifications in a branch tagged as v0.4.2.
Nebula docker gradle plugin
Netflix publishes a set of plugins for gradle called Nebula. The nebula-docker-plugin is another opinionated plugin built on top of the bmuschko and application plugins. It doesn't seem to add a lot beyond the bmuschko application plugin, other than the concept of separate test and production repositories for publishing docker images. I'm going to look into docker deployment models next, so it might come into play there.
At last we get to some Docker in this Docker example.
Gradle Docker Plugin
There are a few prominent docker plugins for gradle: transmode, bmuschko, and Netflix nebula. First, I used transmode, as recommended in the spring boot guide Spring Boot with Docker. After adding the Dockerfile:
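Its contents can be read back out of the docker build steps below:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD target/java-docker-example-0.0.1-SNAPSHOT.jar app.jar
ENV JAVA_OPTS ""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar"]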
Then, after updating build.gradle as described in the guide, I was able to build a docker image for my application:
$ gw clean build buildDocker --info
Setting up staging directory.
Creating Dockerfile from file /Users/ryanmckay/projects/java-docker-example/java-docker-example/src/main/docker/Dockerfile.
Determining image tag: ryanmckay/java-docker-example:0.0.1-SNAPSHOT
Using the native docker binary.
Sending build context to Docker daemon 14.43MB
Step 1/5 : FROM openjdk:8-jdk-alpine
---> 478bf389b75b
Step 2/5 : VOLUME /tmp
---> Using cache
---> 136f2d4e58dc
Step 3/5 : ADD target/java-docker-example-0.0.1-SNAPSHOT.jar app.jar
---> b3b47b89bbf1
Removing intermediate container 92f637bc67e0
Step 4/5 : ENV JAVA_OPTS ""
---> Running in e90c9a3557eb
---> 1d3f6526e8e5
Removing intermediate container e90c9a3557eb
Step 5/5 : ENTRYPOINT sh -c java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar
---> Running in 2fbfb52f836d
---> f001bdddc80b
Removing intermediate container 2fbfb52f836d
Successfully built f001bdddc80b
Successfully tagged ryanmckay/java-docker-example:0.0.1-SNAPSHOT
$ docker images
REPOSITORY                      TAG              IMAGE ID       CREATED         SIZE
ryanmckay/java-docker-example   0.0.1-SNAPSHOT   f001bdddc80b   3 minutes ago   115MB
$ docker run -p 8080:8080 -t ryanmckay/java-docker-example:0.0.1-SNAPSHOT
Automatically Tracking Application Version
Note that the Dockerfile at this point has the application version hard-coded in it. This duplication must not stand. The transmode gradle plugin also supports a DSL for specifying the Dockerfile in build.gradle; as part of the build process, it then produces the actual Dockerfile.
I set about moving the Dockerfile into the DSL line by line. With one exception, it went smoothly. You can see the result in v0.3 of the app. The relevant portion of build.gradle is listed below. You can see it's pretty much a line-for-line translation of the Dockerfile. And since we have access to the jar filename in the build script, nothing needs to be hard-coded for docker.
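A sketch of that portion (method names per the transmode DSL as best I can reconstruct them; the JAVA_OPTS workaround is explained below):

task buildDocker(type: Docker, dependsOn: build) {
    push = false
    applicationName = jar.baseName
    // tagVersion defaults to project.version; switching it to 'latest' is discussed below
    baseImage = 'openjdk:8-jdk-alpine'
    volume '/tmp'
    addFile jar.archivePath, 'app.jar'
    setEnvironment 'JAVA_OPTS', '""'
    entryPoint(['sh', '-c', 'java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar'])
}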
One little gotcha in the previous section led to an interesting learning. My initial attempt to set the JAVA_OPTS env variable looked like this:
setEnvironment('JAVA_OPTS', '')
but that produced an illegal line in the Dockerfile:
ENV JAVA_OPTS
That led me to read about the ENV directive in the Dockerfile reference docs. I was confused about whether ENV directives are used at build time, run time, or both. It turns out the answer is both, as I was able to prove to myself with the following. THE_FILE is used at build time to decide which file to add to the image, and at run time as an environment variable, which can be overridden at the command line.
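A minimal reproduction, assuming two small text files (file1.txt and file2.txt, names mine) sitting next to the Dockerfile:

FROM alpine
# Build time: the variable chooses which file gets baked into the image
ENV THE_FILE file1.txt
ADD $THE_FILE /the-file.txt
# Run time: the same variable shows up in the container environment
CMD echo "THE_FILE=$THE_FILE" && cat /the-file.txt

Building bakes file1.txt into the image; running with docker run -e THE_FILE=file2.txt overrides the runtime value without changing what was added at build time.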
So that all worked fine for creating and publishing a versioned docker image locally. But I really want that image tagged with the semantic version and also the latest tag. The transmode plugin currently does not support multiple tags for the produced docker image. I'm not the only one who wants this feature. I took a look at the source code, and it wouldn't be a minor change. At this point, I'm only publishing locally, so given the choice between version tag and latest tag, I'm going to go for latest for now. This is a simple matter of adding tagVersion = 'latest' to the buildDocker task.
The next step was to add some tests. The tests that came with the demo controller used a Spring feature I was not familiar with, MockMvc. The Spring Guide "Testing the Web Layer" provides a good discussion of various levels of testing, focusing on how much of the Spring context to load. There are 3 main levels: 1) start the full Tomcat server with full Spring context, 2) full Spring context without server, and 3) narrower MVC-focused context without server. I wanted to compare all three, plus add in variation in testing framework and assertion framework. Specifically I wanted to add Spock with groovy power assert. The aspects I wanted to compare were: test speed, readability of test code, readability of test output. I intentionally made one of the tests fail in each approach to compare output.
Spock with Full Tomcat Server
This is the approach I am most familiar with.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerSpec.groovy
Timing
I ran and timed the test in isolation with
$ ./gradlew test --tests '*GreetingControllerSpec' --profile
Total 'test' task time (reported by gradle profile output): 13.734s
Total test run time (reported by junit test output): 12.690s
Time to start GreetingControllerSpec (load full context and start tomcat): 12.157s
So, not fast. Maybe one of the other approaches can do better.
Test Code Readability
def "no Param greeting should return default message"() {
when:
ResponseEntity<Greeting> responseGreeting = restTemplate
.getForEntity("http://localhost:" + port + "/greeting", Greeting.class)
then:
responseGreeting.statusCode == HttpStatus.OK
responseGreeting.body.content == "blah"
}
I really like Spock. I like the plain English test names. I like the separate sections for given, when, then, etc. I think it reads well and makes it obvious what is under test.
Test Output Readability
When a test fails, you want to see why, right? In this aspect, groovy power assertions are simply unparalleled.
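For example, a failure on the content assertion renders every intermediate value in the expression (values here are illustrative):

Condition not satisfied:

responseGreeting.body.content == "blah"
|                |    |       |
|                |    |       false
|                |    Hello, World!
|                Greeting(id=4, content=Hello, World!)
<200 OK,Greeting(id=4, content=Hello, World!),{Content-Type=[application/json;charset=UTF-8]}>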
Note that the nice output for responseGreeting itself comes from ResponseEntity.toString(), and from Greeting.toString(), which is provided by Lombok.
Spock with MockMvc
By adding @AutoConfigureMockMvc to your test class, you can inject a MockMvc instance, which facilitates making calls directly to Spring's HTTP request handling layer. This allows you to skip starting up a Tomcat server, so it should save some time and/or memory. On the other hand, you are testing less of the round trip, so the time savings would need to be significant to justify this approach.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerMockMvcSpec.groovy
Timing
This approach was about 500ms faster than with tomcat. Not significant enough to justify for me, considering the overall time scale.
Total 'test' task time (reported by gradle profile output): 13.263s
Total test run time (reported by junit test output): 12.281s
Time to start GreetingControllerMockMvcSpec (load full context, no tomcat): 11.804s
Test Code Readability
def "no Param greeting should return default message"() {
when:
def resultActions = mockMvc.perform(get("/greeting")).andDo(print())
then:
resultActions
.andExpect(status().isOk())
.andExpect(jsonPath('$.content').value("blah"))
}
This reads reasonably well. Capturing the resultActions in the when block to use later in the then block is a little awkward, but not too bad. Being able to express arbitrary JSON path expectations is convenient. I didn't see an obvious way to get a ResponseEntity as was done in the full Tomcat example.
Test Output Readability
This test output does not read well at all. Spock and the Spring MockMvc library trip over each other trying to provide verbose output. I think you need to choose either Spock or MockMvc, but not both.
JUnit with WebMvcTest and MockMvc
This configuration is at the opposite end of the spectrum from the full-server Spock approach. With @WebMvcTest, not only does it not start a Tomcat server, it doesn't even load the full context. In the current state of the project this doesn't make much of a difference, because the GreetingController has no injected dependencies. If it did, I would have to mock those out. Again, because of the differences from the "real" configuration, the time savings would need to be significant.
https://github.com/ryanmckaytx/java-docker-example/blob/v0.2/src/test/groovy/net/ryanmckay/demo/GreetingControllerTests.java
Timing
This approach was also about 500ms faster overall than full context with Tomcat.
Total 'test' task time (reported by gradle profile output): 13.275s
Total test run time (reported by junit test output): 0.269s
Time to start GreetingControllerTests (load narrow context, no tomcat): 11.88s
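Test Code Readability
The test looks something like this (a reconstruction in the style of the Spring guide; the failing expectation matches the output below):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringRunner.class)
@WebMvcTest(GreetingController.class)
public class GreetingControllerTests {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void noParamGreetingShouldReturnDefaultMessage() throws Exception {
        // Perform the request against the MVC layer only and assert on the JSON body
        mockMvc.perform(get("/greeting")).andDo(print())
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.content").value("blah"));
    }
}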
This is the least readable for me. Again, I like separating the call under test from the assertions.
Test Output Readability
The failure message for MockMvc-based assertion failures isn't as informative as Spock in this case.
java.lang.AssertionError: JSON path "$.content" expected:<blah> but was:<Hello, World!>
Because the test called .andDo(print()), some additional information is available in the standard out of the test, including the full response status code and body.
Conclusion
I'm as convinced as ever that Spock is the premier Java testing framework. I'm reserving judgment on the Spring annotations that let you avoid starting a Tomcat server or loading the full context. If the project gets more complicated, those could potentially provide a nice speedup.
I've been wanting to learn more about Docker for a while. I'm almost done with the udemy course Docker for Java Developers. It's a good course, and the concepts are straightforward. To help commit them to memory, I wanted to do my own project to apply what I'm learning: https://github.com/ryanmckaytx/java-docker-example
In this part, I'm just going to initialize a new project. Part 2 covers Spring Web MVC testing.
Set up your dev machine
Every once in a while, like when you switch jobs and get a new laptop, you need to set up a machine for java development.
For this, I like to use sdkman. It helps install the tools you need to do java development. It also helps switch between multiple versions of those tools. I've installed java, groovy, gradle, and maven.
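The whole setup is a handful of commands (version numbers illustrative):

$ curl -s "https://get.sdkman.io" | bash
$ sdk install java
$ sdk install groovy
$ sdk install gradle
$ sdk install maven
$ sdk list gradle      # see available versions
$ sdk use gradle 4.1   # switch versions in the current shell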
Create a new project
There are a few good ways to do this. The way I typically do it is to copy another project: within an organization, or at least within a team, there is usually some amount of infrastructure and institutional knowledge built into existing projects that you want in a new one. But for this project, I wanted to practice starting completely from scratch. There are a couple of good options.
Gradle init
I like gradle as a build tool, and gradle has a built-in project initializer. It supports a few project archetypes, including pom (converting a maven project to gradle), java library, and java application. It even explicitly supports the Spock testing framework.
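For example, to generate a java application skeleton with Spock tests (flags per the gradle init options of that era):

$ gradle init --type java-application --test-framework spock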
You can see it also generates a demo App and AppTest.
Spring Boot Initializr
For Spring Boot apps, Spring provides the Spring Boot Initializr. This lets you choose from a curated (but extensive) set of options and dependencies, and then generates a project in a zip file for download. Similarly to gradle init, it includes a default basic app and test.
Pretty much the only thing I don't like here is that the test isn't Spock, and there doesn't seem to be a way to choose it. Not a big deal; it's easy to change afterward. I went with Initializr for this project.
JHipster
JHipster is an opinionated full-stack project generator for Spring Boot + Angular apps. It has a lot of features that I want to explore later, so for now I stuck with Spring Boot Initializr.
I recently interviewed for a new job, which led me to review some notes I made for myself after a round of interviews a couple of years ago. I thought I would share them in case someone else finds something useful here. The point of preparing is not to pretend to be anything you aren't or to know anything you don't; the point is just to have everything you do know on the tip of your tongue. You probably only have an hour per panel, so you need to be on point.
Make a list of professional things that are important to you. Out of those, pick the most important and write a few sentences. Those are kind of your mission statement. I put those sentences right at the top of my resume. Here is my list:
Working with smart, motivated people
Agile
Mentoring
Being Mentored
DevOps
Physical fitness
Feeling like I'm contributing to company's success
Company mission
Competence of other departments
Trust in leadership
Trust from leadership
Advanced software architecture
Advanced technology
Work/Life balance
Make a list of the major lessons learned during your time at your current job. These can include accomplishments, things you wish you had done better, or just experience gained.
Make a list of the areas you want to learn or grow more in. These should be things that really excite you. They can be things you already do in your current job or not. It's helpful if you've shown some initiative and done some learning on your own outside of work.
Make a list of recent videos watched, books read, conferences attended, and one or two sentences about each. Try to tie the things you learned in these back to the items in the other lists. You want to show that you are passionate about your craft and always learning and improving.
Learn about the prospective company, their products, their market, and their competition. Make a list of questions you have about them. Some of these should tie back to what you are passionate about. For me it was a lot of questions about how they do agile, what the dev teams look like, and how they interact with other departments like product and sys ops. Questions show you are interested and passionate, and they even out the power structure of the interview a bit, which makes everybody feel more comfortable. Yes, they are interviewing you, but you are also interviewing them.
For design problems, stay calm and focus on fundamentals. Treat the interviewer as a subject matter expert/product owner. Don’t be afraid to ask questions! Your job is to:
Capture the ubiquitous language of the domain. Make sure you understand what the pieces are and what they do.
Capture the functional requirements of the application. I like to focus on user stories/use cases. Just solve one at a time, and evolve the design to handle additional ones.
Nouns you hear are good candidates for objects/resources. Verbs are good candidates for methods.
Keep it lean. Start simple and try to deliver a minimal top to bottom slice that does something. Then move on to more complicated scenarios iteratively.