Screencast: JBehave for the JavaParser

One open source project that I have made a couple of modest contributions to is a lightweight JavaParser. My involvement started the way most open source contributions do – because it was broken for my particular use ;-).

That said, it is a great little library and, as I mentioned, its benefit is that it is lightweight compared to something like the org.eclipse ASTParser.

One concern with the project, though, is the quality of the tests, which means that as a contributor you have little faith that you haven’t broken something else. To give you an indication, some of the tests were actually written using main() rather than JUnit!

So, while doing my best boy scout routine and trying to leave the campsite tidier, I noticed that the assertions in the tests are largely the same: given this source code, expect that the parser identifies N nodes of type T, node one is attributed with the following comment, and so on.

I felt it was more suited to a specification style of testing: given this class, expect this outcome from the parser. The main thing that changes is the class under test; the expected outcomes are largely variations on a theme. As we’re into DRY, let’s not maintain dozens of JUnit-style tests repeating the same asserts.

Strictly speaking, JBehave is for Behavior Driven Development defined with a given-when-then syntax. The key aspect for me was the ability to define scenarios in plain text, and to reuse the same expectations (thens), once defined in Java, across many scenarios.
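To make that concrete, a scenario for the parser could be expressed in JBehave’s plain-text story format along these lines (the class name and the expectation here are illustrative, not taken from the actual test suite):

```
Scenario: a class with a single comment

Given a class called ClassWithComment
When the class is parsed
Then the total number of comments is 1
```

Each Given/When/Then line maps onto a Java step method via JBehave’s @Given, @When and @Then annotations, with $-prefixed placeholders (such as $className) captured as method parameters. Once those steps are written, covering a new class or a new expectation means adding lines to the story file rather than writing yet another JUnit test repeating the same asserts.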

The other key aspect is that JBehave is POJO-oriented. It crossed my mind to consider Spock, but I felt that was perhaps frivolous for an open source library that has had many casual, non-repeat committers. Using JBehave is not a large leap from regular JUnit.

In any case, JBehave is fairly niche, so I wanted to socialize the development process with the other active contributors to alleviate any concerns they might have about using a different test approach.

This actually tied in nicely with a personal goal I set myself this year: becoming a better technical speaker. Although I wouldn’t consider myself a particularly bad speaker, I often feel I struggle to get all the information out of my head in verbal form.

So, practice makes perfect, and I created the following screencast:


Areas for Improvement

Overall I’m pretty pleased with the result, though it is perhaps not as fluid as I would like. I have identified three areas for improvement in my presentation. There are obviously more, but I think these are the most important:

The omnipresent “um”.

The ultimate time filler while your brain catches up with your mouth. I think this is conditioning, and practice will help reduce it; you hear it in even the most established of speakers. Remembering to slow my speech down significantly when presenting should also help. A pause of even a few seconds in a video is barely noticeable compared to the repeated um.

Relax and go with it.

Sounds obvious, but I noticed the first 30 seconds of the recording are the most difficult. I had several false starts during the introduction and scene setting, where I ended up terminating the recording. I think this was apprehension about saying something out of place; once I accepted that it would happen anyway and just kept going, I managed to reel off 25 minutes or so without pause.

Under the breath commentary.

Tech demos are inevitably plagued by something not going quite as you expect. My tendency seemed to be to mumble an “oops, let’s just change that”. This needs to become a clear explanation of what the problem was and how it was corrected, to turn it into a positive.

Technical Challenge:

I have my developer setup on my Ubuntu partition, which made creating a screencast particularly painful – admittedly because I know little about video, and Linux is generally unforgiving to the novice.

Initially I tried Kazam, which seemed to be the popular choice. Unfortunately, I could only get it working with the built-in microphone on my laptop, which is poor quality compared to an external one.

I then tried RecordMyDesktop, which is basic compared to Kazam. It lacks some nice-to-have features like a countdown timer, but crucially you have to save to the *.ogv video format, which doesn’t seem to be very usable even on Linux.

Kdenlive, the video editing software I was using, seemed to struggle with it: although the first video in the sequence rendered fine, the second segment I spliced in hung on the first frame, with the audio continuing without the video.

I then tried the OpenShot video editor. This had the opposite problem: the video was fine, but the audio was lost. Sigh.

So I then employed ffmpeg to convert the *.ogv files, and finally created a complete render from Kdenlive.
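For reference, the conversion was a one-liner of roughly this shape (input.ogv and output.mp4 are placeholder names, and the codec choices are illustrative rather than the exact flags I used):

```shell
# Re-encode RecordMyDesktop's Theora/Vorbis .ogv into an H.264/AAC MP4,
# which Kdenlive handles far more reliably; lower -crf means higher quality.
ffmpeg -i input.ogv -c:v libx264 -crf 18 -c:a aac output.mp4
```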

The video is a lot grainier than the originals, but just about adequate.

Up Next

Next I am going to make some recordings for a website I have been putting together to teach Java basics to people through the IDE.
