draptik

mostly tech stuff

IntelliJ and Gnome Keyboard Shortcut Conflict: Ctrl Alt S

I am currently experimenting with JetBrains Rider on Linux. Sticking with GNOME, the default desktop environment of my Linux distribution (Arch Linux), I ran into some conflicting keyboard shortcuts, even though JetBrains’ IDEs ship with a keymap named Default for GNOME.

Starting with the most obvious: I was not able to open Rider’s settings using the keyboard shortcut Ctrl Alt s.

Ctrl Alt s

This command opens the settings in most IntelliJ products (File –> Settings).

Pressing Ctrl Alt s “rolled up the window”: the current window was minimized to its title bar (an effect I had never seen before and don’t need). Double-clicking the window title bar expanded the window again. So the keyboard shortcut was obviously already in use. The question was: by which application?

Gnome settings did not reveal any conflicting bindings in the keyboard section!
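What I did not know at the time: gsettings can dump every key and value it knows about, and filtering that dump for the key chord is a quick way to find which schema owns a binding. A minimal sketch (note that gsettings stores modifiers in a normalized order, so the pattern matches both spellings):

```shell
# Dump all schemas/keys/values and filter for the conflicting chord.
# "2>/dev/null" and "|| true" keep the pipeline quiet on systems without GNOME.
gsettings list-recursively 2>/dev/null | grep -E "<(Control><Alt|Alt><Control)>s" || true
```

On an affected system this prints the screenshot-window-sizer bindings shown below.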

After some searching, the GNOME extension screenshot-window-sizer turned out to be the culprit. I don’t know whether I installed this extension on purpose, whether it was pulled in as a dependency by some other package, or whether it is part of GNOME’s defaults. The following solution keeps the extension installed: only the keyboard shortcut is disabled.

Full name of the extension:

org.gnome.shell.extensions.screenshot-window-sizer

Here is a short summary of how to figure out if your environment is affected by this extension:

# Find out if screenshot extension is installed
gsettings list-schemas | grep screenshot-window-sizer

# List keys
gsettings list-keys org.gnome.shell.extensions.screenshot-window-sizer
cycle-screenshot-sizes-backward
cycle-screenshot-sizes

## Show key value
gsettings get org.gnome.shell.extensions.screenshot-window-sizer cycle-screenshot-sizes
['<Alt><Control>s']

## Show key value
gsettings get org.gnome.shell.extensions.screenshot-window-sizer cycle-screenshot-sizes-backward
['<Shift><Alt><Control>s']

The following snippet disables the hijacked Ctrl Alt s binding from screenshot-window-sizer (found here):

# disable:
gsettings set org.gnome.shell.extensions.screenshot-window-sizer cycle-screenshot-sizes []
gsettings reset org.gnome.desktop.wm.keybindings toggle-shaded

# To re-enable: 
gsettings reset org.gnome.shell.extensions.screenshot-window-sizer cycle-screenshot-sizes
gsettings set org.gnome.desktop.wm.keybindings toggle-shaded []

Now the keybinding Ctrl Alt s works as expected in IntelliJ products — even with Gnome.

Rewarding Moments During Lunch & Learn…

Today I witnessed a C++ developer explaining the essence of Test-Driven Development (TDD) to a Haskell developer. And a bunch of other developers pitched in! It was over lunch. We don’t talk about TDD every day while eating. But when we do, we do it purposefully, and we call it “Lunch & Learn”. Lunch & Learn is an in-house event I introduced to our company six months ago.

Motivation

I learn a lot from visiting local meetups, such as Softwerkskammer. Clean Code is “a thing”: I was not the only one who was aware of SOLID, DRY, TDD, etc. But not everybody has the time to visit such events in their spare time, especially since many of these meetups take place in the evening or on weekends. So my idea was simple: the people at work can eat lunch, and learn. During their lunch break. It’s still spare time, but a lot easier to fit into a normal work schedule.

Lunch & Learn

We try to meet for an hour during lunch break once a week. Before each meeting we decide on a topic: Everything from classic Coding Katas with mob-programming, to discussing new technologies, or pain points in our projects. We normally have around 5 to 20 attendees and we organize food for everybody.

Rewarding

The company I work for employs hardware and software developers/architects. Imagine a dozen developers from different fields (enterprise OO, FP, C/C++, experts and novices) starting a TDD session, each with different TDD and mob-programming experience. So why do I and a couple of my colleagues burden ourselves with this extra unpaid work?

Here are just a few of the reasons I find this format so rewarding:

  • Everybody can practice their communication skills in a familiar environment.
  • People who normally do not engage with each other starting a conversation about something like “best practices”, but from very different points of view.
  • Junior developers slowly getting the hang of topics like Clean Code.
  • Seasoned developers pitching in with additional information (f. ex. pros and cons of certain techniques).
  • In the long run, having a well trained team will make everybody’s life easier. Ask yourself: Who do you want to work with in the future?

For me, one of those moments came last week: a C++ developer explaining the essence of TDD to an FP developer. And everybody pitched in! Another discussion was on its way!

Give it a try!

Try it in your company!

Pro tip: Ask your boss to sponsor the food for the session! As soon as your boss sees that people are investing their spare time to become better at their job, he/she will probably notice that spending some money on pizza is a good long-term investment.

Questions: Don’t hesitate to contact me!

(Article rewritten on 2018-03-18 after valuable feedback from Tim).

Visual Studio’s Default Path for New Projects

Today I took the time to fix something very simple: Visual Studio’s default path.

In the past decade there has never been a single project I wanted to save to:

C:\Users\<username>\Documents\Visual Studio <vs version>\Projects

I expected to have to dive into the Windows registry to fix this, but it turned out to be rather simple to change.

In Visual Studio navigate to Tools -> Options -> Projects and Solutions -> Locations

Change Project Location to the desired folder:

Et voila:

TL;DR: In Visual Studio: Tools -> Options -> Projects and Solutions -> Locations

F# Test Setup for FizzBuzz

In my previous post we set up a basic F# project on Linux.

In this post I would like to show how to set up an idiomatic F# testing environment using FsUnit.

Side note for people unfamiliar with .NET

Actually, it’s not a project, but a “solution”. To clear things up for people not familiar with the .NET ecosystem: In .NET, the top level configuration is called a “solution” and resides in a *.sln file. A solution references “projects”. Each project configuration is stored in a *.fsproj file (F#) or a *.csproj file (C#). Projects can reference each other. This information is stored in the *.[f|c]sproj file.

We have 2 projects (FizzBuzz and FizzBuzz.Tests), each with a *.fsproj file. The FizzBuzz.Tests.fsproj references the FizzBuzz.fsproj file (see the previous post for details):

.
├── FizzBuzz
│   ├── FizzBuzz.fsproj
│   ├── ...
├── FizzBuzz.Tests
│   ├── FizzBuzz.Tests.fsproj
│   ├── ...
└── fsharp-kata-fizzbuzz.sln

Current state

This is the current state of our test:

[<Fact>]
let ``Array with Number 1 returns 'one'`` () =
    let result = FizzBuzz.Generate [1]
    Assert.Equal(result, "one")
  • [<Fact>]: this is F#’s attribute syntax, analogous to C#’s [Fact] attribute or a Java annotation such as @Test.
  • Array with Number 1 returns 'one': a method name in double back-ticks improves readability, especially in unit tests. No CamelCasing or snake_casing needed. It’s an F# language feature.
  • Assert.Equal(...): This is probably familiar to everyone who has ever written a unit test. Every assertion library has a different argument order: Is it Equal(expected, actual) or Equal(actual, expected)? I hate this! Thankfully there are alternative assertion libraries. Example: In C# you can write actual.Should().Be(expected) (using FluentAssertions). The same is true for F#.

FsUnit: Idiomatic assertions

What does “idiomatic” mean? For programming languages it means writing code the way most people who are fluent in the language would write it (how a “native” would express an idea, a concept, an algorithm, etc.). Simple example: in Java and JS, the first character of a method name should be lower case. In C#, the first character should be upper case (yes, even if the method is private!). The code will still compile if you don’t comply with these conventions, but it’s not “idiomatic”. The same goes for “for” loops vs. “map” functions: in some languages one concept is preferred over the other.

FsUnit brings pipes to F# unit tests. Pipes are used extensively in F# and should be familiar to most Linux shell users: Bash uses the | symbol as an operator to redirect the output of one expression to the input of another expression. In F# the pipe operator is |>. The concept might seem similar to “method chaining” in C# (it’s not, but close enough in this context).
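For readers who want the Bash analogy spelled out, here is a minimal pipe chain where each stage’s output becomes the next stage’s input:

```shell
# Generate three lines, keep only those containing "fizz", then count them.
printf "fizz\nbuzz\nfizzbuzz\n" | grep -c "fizz"   # prints 2
```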

Example:

// instead of
Assert.Equal(1 + 1, 2)

// idiomatic F# (using pipe) with FsUnit:
1 + 1 |> should equal 2

Installing FsUnit

cd FizzBuzz.Tests
dotnet add package FsUnit.Xunit

File FizzBuzz.Tests/FizzBuzz.Tests.fsproj should now look like this (plus/minus some version numbers):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>

    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <Compile Include="Tests.fs" />
    <Compile Include="Program.fs" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="FsUnit.Xunit" Version="3.0.0" />
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.5.0" />
    <PackageReference Include="xunit" Version="2.3.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.1" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\FizzBuzz\FizzBuzz.fsproj" />
  </ItemGroup>

</Project>

Note the line <PackageReference Include="FsUnit.Xunit" Version="3.0.0" /> (your version number might differ).

Using FsUnit

Modify the test file FizzBuzz.Tests/Tests.fs to look like this:

module Tests

open System
open FsUnit.Xunit // <-- add FsUnit.Xunit
open Xunit
open FizzBuzz

[<Fact>]
let ``Array with Number 1 returns 'one'`` () =
    FizzBuzz.Generate [1]
    |> should equal "one" // using "|>" and "should" syntax

Running the unit tests within the test folder:

dotnet test
Build started, please wait...
Build completed.

Test run for /home/patrick/projects/fsharp-blog-fizzbuzz/fsharp-kata-fizzbuzz/FizzBuzz.Tests/bin/Debug/netcoreapp2.0/FizzBuzz.Tests.dll(.NETCoreApp,Version=v2.0)
Microsoft (R) Test Execution Command Line Tool Version 15.5.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Starting test execution, please wait...
[xUnit.net 00:00:00.7436128]   Discovering: FizzBuzz.Tests
[xUnit.net 00:00:00.8627111]   Discovered:  FizzBuzz.Tests
[xUnit.net 00:00:00.8695487]   Starting:    FizzBuzz.Tests
[xUnit.net 00:00:01.1888259]   Finished:    FizzBuzz.Tests

Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.4787 Seconds

Summary

We can now write unit tests in an F# way (“idiomatic”) by using the library FsUnit.

Have fun with F# and linux!

Get the source code here

F# Setup Linux: FizzBuzz

One of the first things I always struggle with when learning new languages is the environment. Here is a simple setup for playing with F# and Linux.

Prerequisite: .NET Core with Linux

I won’t go into setting up .NET Core for Linux. This should be straightforward, either by following Microsoft’s instructions or, in my case, the Arch Linux homepage. dotnet --info should return something similar to:

.NET Command Line Tools (2.1.3)

Product Information:
 Version:            2.1.3
 Commit SHA-1 hash:  a0ca411ca5

Runtime Environment:
 OS Name:     arch
 OS Version:
 OS Platform: Linux
 RID:         linux-x64
 Base Path:   /opt/dotnet/sdk/2.1.3/

Microsoft .NET Core Shared Framework Host

  Version  : 2.0.5
  Build    : 17373eb129b3b05aa18ece963f8795d65ef8ea54

Creating a Kata

Let’s create a project for the FizzBuzz Kata.

mkdir fsharp-kata-fizzbuzz
cd fsharp-kata-fizzbuzz
dotnet new classlib -lang f# -o FizzBuzz
dotnet new xunit -lang f# -o FizzBuzz.Tests
cd FizzBuzz.Tests
dotnet add reference ../FizzBuzz/FizzBuzz.fsproj
cd ..
dotnet new sln
dotnet sln add FizzBuzz/FizzBuzz.fsproj
dotnet sln add FizzBuzz.Tests/FizzBuzz.Tests.fsproj

(I really love this new “CLI first” approach! It makes life so much easier for DevOps.)

This is our project structure after templating:

tree . -L 4
.
├── FizzBuzz
│   ├── bin
│   │   └── Debug
│   │       └── netstandard2.0
│   ├── FizzBuzz.fsproj
│   ├── Library.fs
│   └── obj
│       ├── Debug
│       │   └── netstandard2.0
│       ├── FizzBuzz.fsproj.nuget.cache
│       ├── FizzBuzz.fsproj.nuget.g.props
│       ├── FizzBuzz.fsproj.nuget.g.targets
│       └── project.assets.json
├── FizzBuzz.Tests
│   ├── bin
│   │   └── Debug
│   │       └── netcoreapp2.0
│   ├── FizzBuzz.Tests.fsproj
│   ├── obj
│   │   ├── Debug
│   │   │   └── netcoreapp2.0
│   │   ├── FizzBuzz.Tests.fsproj.nuget.cache
│   │   ├── FizzBuzz.Tests.fsproj.nuget.g.props
│   │   ├── FizzBuzz.Tests.fsproj.nuget.g.targets
│   │   └── project.assets.json
│   ├── Program.fs
│   └── Tests.fs
└── fsharp-kata-fizzbuzz.sln

The 3 project files (top to bottom)…

fsharp-kata-fizzbuzz.sln (nothing new here)

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.26124.0
MinimumVisualStudioVersion = 15.0.26124.0
Project("{F2A71F9B-5D33-465A-A702-920D77279786}") = "FizzBuzz", "FizzBuzz\FizzBuzz.fsproj", "{C64F3370-DE54-4D58-BDD4-33C4B02F7290}"
EndProject
Project("{F2A71F9B-5D33-465A-A702-920D77279786}") = "FizzBuzz.Tests", "FizzBuzz.Tests\FizzBuzz.Tests.fsproj", "{4AA6DACD-EA0E-4938-BB41-7B055A9A0C8C}"
EndProject
[...]

FizzBuzz/FizzBuzz.fsproj (not relevant here, but keep in mind that F# source files have to be listed in the correct compile order):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <Compile Include="Library.fs" />
  </ItemGroup>

</Project>

FizzBuzz.Tests/FizzBuzz.Tests.fsproj:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>

    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <Compile Include="Tests.fs" />
    <Compile Include="Program.fs" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.5.0" />
    <PackageReference Include="xunit" Version="2.3.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.3.1" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.1" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\FizzBuzz\FizzBuzz.fsproj" />
  </ItemGroup>

</Project>

Running dotnet test returns

$ dotnet test
Build started, please wait...
Build started, please wait...
Build completed.

Test run for /home/patrick/projects/fsharp-blog-fizzbuzz/fsharp-kata-fizzbuzz/FizzBuzz/bin/Debug/netstandard2.0/FizzBuzz.dll(.NETStandard,Version=v2.0)
Microsoft (R) Test Execution Command Line Tool Version 15.5.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Starting test execution, please wait...
No test is available in /home/patrick/projects/fsharp-blog-fizzbuzz/fsharp-kata-fizzbuzz/FizzBuzz/bin/Debug/netstandard2.0/FizzBuzz.dll. Make sure test project has a nuget reference of package "Microsoft.NET.Test.Sdk" and framework version settings are appropriate and try again.

Test Run Aborted.
Build completed.

Test run for /home/patrick/projects/fsharp-blog-fizzbuzz/fsharp-kata-fizzbuzz/FizzBuzz.Tests/bin/Debug/netcoreapp2.0/FizzBuzz.Tests.dll(.NETCoreApp,Version=v2.0)
Microsoft (R) Test Execution Command Line Tool Version 15.5.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Starting test execution, please wait...
[xUnit.net 00:00:00.9263576]   Discovering: FizzBuzz.Tests
[xUnit.net 00:00:01.0646319]   Discovered:  FizzBuzz.Tests
[xUnit.net 00:00:01.0733357]   Starting:    FizzBuzz.Tests
[xUnit.net 00:00:01.2961789]   Finished:    FizzBuzz.Tests

Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.5956 Seconds

Ok, dotnet test does not recognize which project actually contains tests. But it runs all tests!

Let’s add a test.

The file FizzBuzz.Tests/Tests.fs (generated by dotnet new xunit...) looks like this:

module Tests

open System
open Xunit

[<Fact>]
let ``My test`` () =
    Assert.True(true)

TDD approach: We will create a failing test first, then implement something.

Replace the content of FizzBuzz.Tests/Tests.fs with

module Tests

open System
open Xunit
open FizzBuzz

[<Fact>]
let ``Array with Number 1 returns 'one'`` () =
    let result = FizzBuzz.Generate [1]
    Assert.Equal(result, "one")

We verify 2 aspects:

  • we are invoking another library (FizzBuzz) from our test class
  • we are learning to use the test library

This does not compile. Let’s implement the simplest solution:

Replace FizzBuzz/Library.fs with

module FizzBuzz

let Generate i = "one"

Running dotnet test should now confirm 1 passing test.

Have fun with F# on Linux!

Get the source code here

Docker and Octopress

This post describes how I created my first customized docker image(s).

I have been watching the Docker space for a while and finally found a private use-case: this blog uses Octopress, which is a Ruby-based convenience wrapper around Jekyll, the static site generator used by GitHub Pages. Octopress requires some old libs: Ruby 1.9.3, Python 2.7, and nodejs.

So, to use Octopress on any machine, I have to either:

  • configure the machine to use specific versions of Ruby, Python and NodeJs. Works.
    • Drawback: Other projects using different versions of Ruby, Python, NodeJs won’t work out of the box.
  • use version managers for Ruby, Python and NodeJs (f.ex. rvm, virtualenv, nvm). Works.
    • Drawback: Tedious setup which differs between OSes.
  • use a virtual machine. Works.
    • Drawback: Not easily portable due to size of virtual machine image.
  • Or, I could use docker.

I decided to give docker a spin.

My primary goal was to be able to blog from any (linux) machine running docker.

From a birds-eye view my goal is to:

  • install a docker image on any machine
  • and run a docker container with my blog mounted as shared folder (so I can edit the content on the host system, but compilation, preview and publishing is accomplished from within the docker container)

My secondary goal was to get my hands dirty with docker :–)

Obviously Docker also has potential uses in other development setups (e.g. testing application code in a local Docker container before pushing to CI, to reduce roundtrip time).

Overview

I created 3 docker images, which build upon each other:

  • Docker 00: base image including Ruby, Python and NodeJs
  • Docker 01: image with docker ENTRYPOINT
  • Docker 02: image optimized for octopress usage

Here is the folder structure:

├── 00_ruby_base
│   ├── build-image.sh
│   └── Dockerfile
├── 01_user
│   ├── build-image.sh
│   ├── Dockerfile
│   └── entrypoint.sh
├── 02_octopress
│   ├── build-image.sh
│   ├── Dockerfile
│   ├── post-install.sh
│   └── run-container.sh
└── share
    └── octopress

Each image (00*, 01*, 02*) contains a Dockerfile and a build-image.sh file. Only the last image (02*) contains a run-container.sh file.

  • Dockerfiles contain the instructions for building a docker image.
  • build-image.sh files invoke the Dockerfile.

Docker 00: base image

Since I couldn’t find a simple Ruby 1.9.3 image on Docker Hub, I decided to create my own.

Knowing my use-case (Octopress), I also installed Python2.7 and NodeJs for my base docker image. This image is the only one that takes quite some time to build.

Dockerfile

Dockerfile
FROM debian:jessie

# Get the dependencies for Octopress page generation
##
## Notes:
##
## - Python 2.7 is required for using pygments gem.
## - NodeJs is required for execjs Gem
##
RUN apt-get update && \
    apt-get --no-install-recommends -y install \
    autoconf \
    bison \
    build-essential \
    libssl-dev \
    libyaml-dev \
    locales \
    libreadline6-dev \
    zlib1g-dev \
    libncurses5-dev \
    libffi-dev \
    libgdbm3 \
    libgdbm-dev \
    nodejs \
    python2.7 \
    wget \
    ca-certificates \
    curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Set LOCALE to UTF8
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
    locale-gen en_US.UTF-8 && \
    dpkg-reconfigure --frontend=noninteractive locales && \
    /usr/sbin/update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

# Install ruby (adopted from https://hub.docker.com/r/liaisonintl/ruby-1.9.3/~/dockerfile/)
ENV RUBY_MAJOR=1.9 \
    RUBY_VERSION=1.9.3-p551 \
    RUBY_DOWNLOAD_SHA256=bb5be55cd1f49c95bb05b6f587701376b53d310eb1bb7c76fbd445a1c75b51e8 \
    RUBYGEMS_VERSION=2.6.6 \
    PATH=/usr/local/bundle/bin:$PATH
RUN set -ex && \
    curl -SL -o ruby.tar.gz "http://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.gz" && \
    echo "$RUBY_DOWNLOAD_SHA256 ruby.tar.gz" | sha256sum -c - && \
    mkdir -p /usr/src/ruby && \
    tar -xzf ruby.tar.gz -C /usr/src/ruby --strip-components=1 && \
    rm -f ruby.tar.gz && \
    cd /usr/src/ruby && \
    autoconf && \
    ./configure --disable-install-doc --sysconfdir=/etc/ && \
    make && \
    make install && \
    gem update --system $RUBYGEMS_VERSION && \
    rm -rf /usr/src/ruby

# Create soft link for python
RUN ln -s /usr/bin/python2.7 /usr/bin/python

Here is a short description of what happens in this Dockerfile:

RUN apt-get ...

…retrieves required packages from the debian package repository.

RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen ...

…ensures the default system language uses UTF8 (required by some packages).

RUN set -ex && curl... && make ...

…downloads, compiles and installs Ruby from scratch (this step takes some time!).

RUN ln -s /usr/bin/python2.7 /usr/bin/python

…creates a soft link to Python2.7.

Docker build

To execute the previous Dockerfile, run ./build-image.sh.

build-image.sh
#!/bin/bash
docker build -t draptik/ruby1.9.3-python2.7-nodejs:0.1 .

Make the file executable (chmod 744 build-image.sh).

Be sure to replace draptik with some other string (f.ex. your name, initials, or company) to build your own image. F.ex. docker build -t homersimpson/ruby1.9.3-python2.7-nodejs:0.1 .

Since this image is going to be the base image for the next step, be sure to always use the same name (f.ex. homersimpson).

You can verify that the docker build step worked as expected by listing all docker images using docker images. The output should be similar to:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
homersimpson/ruby1.9.3-python2.7-nodejs   0.1                 641ca1a59e87        8 days ago          486 MB
debian              jessie              e5599115b6a6        4 weeks ago         123 MB

Docker 01: user

Here is where things start getting difficult: sharing a folder from the host system with Docker, and keeping permissions/users in sync…

Some things to know about sharing a volume in docker

Sharing data between host and docker container is normally accomplished by docker run -v host-location/folder:container-location/folder.

Be aware, though:

  • The volume will be owned by the user inside the container
  • The container’s default user is root (UID/GID 0)!
  • Files the container creates in the volume show up on the host owned by the container’s UID/GID!
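The underlying reason is that file ownership is stored as numeric IDs, not names: the container writes its own UID/GID into the shared folder, and the host merely resolves those numbers against its local user database. A quick local illustration of the two views (no Docker needed):

```shell
# Ownership is recorded numerically; names are resolved per machine.
# Compare the name view and the numeric view of the same file:
tmpfile=$(mktemp)
ls -l  "$tmpfile"    # owner/group shown as names (resolved on this host)
ls -ln "$tmpfile"    # owner/group shown as raw UID/GID (what is actually stored)
rm -f "$tmpfile"
```

A file created by root inside the container therefore appears on the host as owned by UID 0, whatever name (if any) that maps to.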

My workaround

I found this post by Deni Bertovic. In short, the post proposes using Docker’s ENTRYPOINT: a bash script (entrypoint.sh, see below) that creates a new user matching the host user’s UID and then executes the container’s command as that user. This is where I start walking on very thin ice… Nevertheless, I created another docker image based on the base image from the previous step.

Dockerfile

Make sure to replace draptik in the FROM string…

Dockerfile
FROM draptik/ruby1.9.3-python2.7-nodejs:0.1

# For details see https://denibertovic.com/posts/handling-permissions-with-docker-volumes/

RUN apt-get update && apt-get -y --no-install-recommends install \
    ca-certificates \
    curl

RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.10/gosu-$(dpkg --print-architecture)" \
    && curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.10/gosu-$(dpkg --print-architecture).asc" \
    && gpg --verify /usr/local/bin/gosu.asc \
    && rm /usr/local/bin/gosu.asc \
    && chmod +x /usr/local/bin/gosu

COPY entrypoint.sh /usr/local/bin/entrypoint.sh

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

For further details about the above Dockerfile refer to aforementioned post by Deni.

The entrypoint.sh file should be located beside the Dockerfile:

entrypoint.sh
#!/bin/bash

# Add local user
# Either use the LOCAL_USER_ID if passed in at runtime or
# fallback

USER_ID=${LOCAL_USER_ID:-9001}

echo "Starting with UID : $USER_ID"
useradd --shell /bin/bash -u $USER_ID -o -c "" -m user
export HOME=/home/user

exec /usr/local/bin/gosu user "$@"

Docker build

…and the corresponding docker build command (again, wrapped in a file):

build-image.sh
#!/bin/bash
docker build -t draptik/ruby1.9.3-python2.7-nodejs-user:0.1 .

…again, make sure to replace draptik

Docker 02: octopress

In addition to mounting the content of my blog, I also mount the blog engine itself (using docker run -v <orig-location>:<container-location>). That means I have to execute an initial script within the mounted folder to set up the blog engine. To prepare the environment for this script, I create customized ~/.gemrc and ~/.bashrc files in the Dockerfile. The script itself (post-install.sh) is mounted by the docker run script and must be executed from within the container.

Dockerfile

(Make sure to replace draptik in the FROM string…)

Dockerfile
FROM draptik/ruby1.9.3-python2.7-nodejs-user:0.1

# I am not really sure why this is needed, because we have an ENTRYPOINT in the parent image.
RUN useradd -ms /bin/bash user

# Setup ruby/bundler to work with non-admin user
RUN echo "gem: --user-install" > /home/user/.gemrc && chown user:user /home/user/.gemrc
RUN echo "PATH=\"/home/user/.gem/ruby/1.9.1/bin:$PATH\"" >> /home/user/.bashrc && chown user:user /home/user/.bashrc

WORKDIR /octopress

You might be wondering why I am explicitly creating a new user (RUN useradd -ms /bin/bash user). Valid question. In the next 2 lines I write some config values to files which are located in the /home/user/ folder. I was not able to do this without first explicitly creating the user. Probably not best practice, but it works. I would be very grateful for feedback on this issue.

Docker build

(…again, make sure to replace draptik…)

build-image.sh
#!/bin/bash
docker build -t draptik/octopress:0.1 .

Starting the final image as docker container

The following script starts the docker container:

run-container.sh
#!/bin/bash
docker run \
    --rm \
    -it \
    -e LOCAL_USER_ID=`id -u $USER` \
    -p 4001:4001 \
    -v ${PWD}/../share/octopress:/octopress \
    -v ${PWD}/post-install.sh:/home/user/post-install.sh \
    draptik/octopress:0.1 \
    /bin/bash

Some notes about the docker run options:

  • --rm ensures that the docker container is removed once exited
  • -it runs an interactive terminal as soon as the container starts
  • -e LOCAL_USER... sets the host user’s ID within the docker container
  • -p ... maps the port numbers
  • -v ${PWD}/../share/octopress:/octopress mounts the blog volume
  • -v ${PWD}/post-install.sh:/home/user/post-install.sh mounts the post install script

Mounting volumes in Docker using docker-machine or Docker for Windows requires some extra path-tweaking on Windows. I intend to add these tweaks in the future…

post-install

Yet another step… After Docker has mounted the external volumes from the host, they have to be initialized (including our blogging engine).

That is the reason for the post-install.sh script. It must be run from within the container!

IMPORTANT: Be sure to replace the git user name/email in post-install.sh. Otherwise you will not be able to deploy!

post-install.sh
#!/bin/bash
#
# This script must be executed in ~ folder (not in /octopress)!
gem install --no-ri --no-rdoc \
  bundler \
  execjs

cd /octopress
# Important: use `--path...`!
bundle install --path $HOME/.gem

git config --global user.name "FirstName LastName"
git config --global user.email "your@mail.com"

Final usage for Octopress users

All images are published to docker hub.

Just run docker pull draptik/octopress:0.1.

  • Create a folder for your blog: mkdir blog && cd blog
  • Create a folder for the blog content: mkdir share && cd share

Initially, clone Octopress into the share folder:

git clone -b source <octopress-git-repo> octopress
cd octopress
git clone -b master <octopress-git-repo> _deploy

Then, run the run-container.sh script.

From within the newly created docker container, follow the steps from the post-install section above.

You should now be able to use Octopress from within the docker container (i.e. rake new_post["test"], rake generate, rake preview, rake deploy, etc.) while being able to edit your posts on the host machine.

Summary

It helps if you have a Linux background, since all Docker images are Linux-based. Setting up a customized Docker image can be a bit tedious (especially configuring user privileges and mounting host folders), but once the image works you have an automated and reproducible environment. I think this makes it worth the effort.

Obviously I am just starting with docker, so take my example above with a grain of salt. But maybe the example gives you a starting point for your own docker experiments.

As always: Thankful for feedback!

Links

You can find the complete source code on GitHub: https://github.com/draptik/octopress-docker

The docker images are hosted at Docker Hub: https://hub.docker.com/u/draptik/

Learning by Meeting People

In my day job I’m a full-stack .NET developer. In the past I mainly learned new programming concepts by reading books and blog posts, watching video tutorials (e.g. Pluralsight), listening to podcasts, and visiting conferences. Lately I have started to expand my horizon by meeting people face to face. This has turned out to be very rewarding.

These kinds of meetups, which often take place in your spare time, are a very efficient way of learning to think outside the box. Meeting other developers in person — in an informal setting — enables me to interact directly: asking questions as soon as they pop into my mind (everything from “what was that keyboard shortcut?” to “Stop, I did not understand what a monad is”).

Here is a short summary of some of the meetings I’ve been to in the last couple of weeks.

Softwerkskammer – Mocking Frameworks

https://www.softwerkskammer.org/

  • Open for all: yes
  • Audience: Software developers (PHP, Java, .NET, Ruby, Python, etc)
  • Audience size: 10-15
  • Time: 3h
  • Action: Code Kata

I learned: Object Calisthenics; a cool and simple kata; cool crowd!

Make Night – Arduino 101

Health Hackers Erlangen

  • Open for all: yes
  • Audience: Nice mix of medical researchers, a few physicists, some software developers
  • Audience size: ~20
  • Time: 3h
  • Action: Arduino basics with different sensors, organizer provided kits for everybody

I learned: The organizers want to accomplish something: real medical research supported by “hackers” (RPi, Arduino, etc.). Very interdisciplinary!

JUG Nuernberg – Dropwizard

JUG Nuernberg

  • Open for all: yes
  • Audience: Software developers
  • Audience size: ~10
  • Time: 2h
  • Action: presentation & discussion

I learned: Wow, a very impressive framework. I was really fascinated by the so-called DevOps features (healthchecks, logging, etc.)! I wish there were something similar in .NET!

Hackerkegeln DATEV – Scala Kata

  • Open for all: no (internal company event)
  • Audience: Software developers (C/C++, JS, Java, .NET, COBOL, etc)
  • Audience size: ~10
  • Time: 4h
  • Action: interactive Kata in Scala

I learned: Scala is a cool language! Learned pattern matching (now also available in C# 7). Very relaxed audience ;–)

Thanks to Latti for the invitation!

MATHEMA Freitag – Microservices

  • Open for all: no (internal company event)
  • Audience: Software developers
  • Audience size: ~20
  • Time: 6h
  • Action: presentations, discussions, interactive microservice setup

I learned: incredibly cool tooling: Consul, Ribbon, Hystrix, Graphite, ELK

What to expect?

First time at such an event? Ask a lot of questions. It might be that you are the expert! If not: you are likely to find an expert. Or somebody will point you to a local expert.

One of the main differences between these local user group meetings and big conferences is that the audience is a lot smaller. Everybody wants to share and/or gain knowledge at these meetings. The format is often very open: sometimes there is no official topic for the event. Instead, people decide on the spot: “I have no idea how X works. Can somebody explain it?” — “Sure”. And then the knowledge transfer starts…

How to find meetings in your area

  • google <your technology> user group <your town> for example java user group london
  • Meetup is a platform for meeting people face to face. Try it: https://www.meetup.com/
  • google Software Craftsmanship <your town>
  • Germany: Softwerkskammer
  • google Unconference or BarCamp in <your town>

What do you do to learn new technologies and stay up-to-date?

Pi Hole - Simple Ad Blocker for Your Network

Advertisements on web pages can be a nuisance. But they are a necessary evil, because companies and bloggers producing high-quality content have to earn a living. Troy Hunt recently wrote a nice post on the subject.

Most desktop browsers provide “ad blockers” as plugins. These plugins normally also have configurable whitelists (a whitelist is a list of sites excluded from blocking), which you should use for sites that (a) provide high-quality content and (b) rely on advertisements for a living.

Other devices in our home network also communicate with the web:

  • cell/smart phones (f.ex. iPhone, Android)
  • smart TVs (f.ex. Samsung)
  • and all those new IoT devices (IoT: Internet of Things): those “smart” home automation devices that communicate with “the cloud” and your “smart phone”

We can’t install “ad blockers” on these devices.

Here is where Pi Hole comes into play: We can plug a Raspberry Pi straight into our router!

My experience:

On my phone (an old Galaxy S4), web pages load a hell of a lot faster when I’m in my home network than outside of it.

The Pi Hole web interface looks like this:

As you can see in the screenshot, ~20% of the traffic is blocked (!), and you can configure whitelists just like with those browser plugins. Nobody in my household noticed that 20% of the traffic was blocked.

Installing the software on your Raspberry Pi is straightforward: Just follow the instructions on the Pi Hole website.

Configuring your router is another beast: instructions for OpenWRT and DD-WRT routers are available (example – short YouTube promo).
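Once your router hands out the Pi’s address as the network’s DNS server, you can check the blocking from any machine. Here is a hedged sketch (the Pi-hole address 192.168.1.2 and the queried ad domain are placeholders for illustration, not values from my setup):

```shell
# Ask the Pi-hole (placeholder IP) to resolve a typical ad domain.
# A domain on the blocklist is answered with a sinkhole address
# instead of the real one.
nslookup doubleclick.net 192.168.1.2

# For comparison: ask a public resolver, which returns the real address.
nslookup doubleclick.net 8.8.8.8
```

If both queries return the same real address, your clients are probably not using the Pi-hole as their DNS server yet.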

Bash Tricks

Over the holidays I stumbled across two neat bash tricks that simplify navigating between folders:

  • autocd
  • autojump

Neither of these features is new.

autocd

autocd is very simple: it’s a built-in bash feature that prepends cd when you enter a valid directory path.

$ cd /etc
$ /etc # the same

Setup

Add this to your ~/.bashrc:

shopt -s autocd

autojump

A cd command that learns – easily navigate directories from the command line

Directly from the source at autojump:

Usage:

j is a convenience wrapper function around autojump. Any option that can be used with autojump can be used with j and vice versa.

Jump To A Directory That Contains foo:

j foo

Jump To A Child Directory:

Sometimes it’s convenient to jump to a child directory (sub-directory of current directory) rather than typing out the full name.

jc bar

Open File Manager To Directories (instead of jumping):

Instead of jumping to a directory, you can open a file explorer window (Mac Finder, Windows Explorer, GNOME Nautilus, etc.) at that directory.

jo music

Opening a file manager to a child directory is also supported:

jco images

Using Multiple Arguments:

Let’s assume the following database:

30   /home/user/mail/inbox
10   /home/user/work/inbox

j in would jump into /home/user/mail/inbox, as it is the higher-weighted entry. However, you can pass multiple arguments to autojump to prefer a different entry. In the above example, j w in would change directory to /home/user/work/inbox.

For more options, refer to the help:

autojump --help
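One setup note: autojump only learns directories you actually visit, which it does via a small shell hook that must be sourced from your ~/.bashrc after installing the package. A minimal sketch, assuming a typical Linux package layout (the exact path varies by distro and install method, so check where your package put the script):

```shell
# ~/.bashrc -- enable autojump's shell integration
# (the path below is an assumption; adjust it to your installation)
[ -s /etc/profile.d/autojump.bash ] && source /etc/profile.d/autojump.bash
```

Without this hook, j will not find any directories because the weight database stays empty.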

Testing That Different Objects Have the Same Properties

Sometimes you want to ensure that 2 unrelated objects share a set of properties — without using an interface.

Here is an example:

namespace Demo
{
    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

First thought for C# developers: AutoMapper

Let’s do that:

using AutoMapper;

namespace Demo
{
    public class MyMapping
    {
        public static IMapper Mapper;

        public static void Init()
        {
            var cfg = new MapperConfiguration(x =>
            {
                x.CreateMap<Customer, Person>();
            });
            Mapper = cfg.CreateMapper();
        }
    }
}

Now we can write a unit test to see if we can convert a Customer to a Person:

using FluentAssertions;
using Xunit;

namespace Demo
{
    public class SomeTests
    {
        [Fact]
        public void Given_Customer_Should_ConvertTo_Person()
        {
            // Arrange
            const string firstname = "foo";
            const string lastname = "bar";

            var customer = new Customer
            {
                FirstName = firstname,
                LastName = lastname
            };

            MyMapping.Init();

            // Act
            var person = MyMapping.Mapper.Map<Customer, Person>(customer);

            // Assert
            person.FirstName.Should().Be(firstname);
            person.LastName.Should().Be(lastname);
        }
    }
}

This test passes.

But what happens when we want to ensure that a new Customer property (for example Email) is reflected in the Person object?

namespace Demo
{
    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; } // <-- new property
    }

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

Our unit test still passes. ☹

Wouldn’t it be nice to have our unit test fail if the classes are not in sync?

Here is where FluentAssertions’ ShouldBeEquivalentTo comes in handy:

using FluentAssertions;
using Xunit;

namespace Demo
{
    public class SomeTests
    {
        [Fact]
        public void Given_Customer_Should_ConvertTo_Person_With_CurrentProperties()
        {
            // Arrange
            const string firstname = "foo";
            const string lastname = "bar";

            var customer = new Customer
            {
                FirstName = firstname,
                LastName = lastname,
                Email = "foo@bar.com"
            };

            MyMapping.Init();

            // Act
            var person = MyMapping.Mapper.Map<Customer, Person>(customer);

            // Assert
            customer.ShouldBeEquivalentTo(person);
        }
    }
}

This test fails with the following message:

Subject has a member Email that the other object does not have.

Cool: This is the kind of message I want to have from a unit test!

ShouldBeEquivalentTo also takes an optional lambda expression in case you need more fine-grained control over which properties are included in the comparison. Here is an example where we exclude the Email property on purpose:

using FluentAssertions;
using Xunit;

namespace Demo
{
    public class SomeTests
    {
        [Fact]
        public void Given_Customer_Should_ConvertTo_Person_With_CurrentProperties_Excluding_Email()
        {
            // Arrange
            const string firstname = "foo";
            const string lastname = "bar";

            var customer = new Customer
            {
                FirstName = firstname,
                LastName = lastname,
                Email = "foo@bar.com"
            };

            MyMapping.Init();

            // Act
            var person = MyMapping.Mapper.Map<Customer, Person>(customer);

            // Assert
            customer.ShouldBeEquivalentTo(person,
                options =>
                    options.Excluding(x => x.Email));
        }
    }
}

This test passes.

The complete documentation for FluentAssertions’ ShouldBeEquivalentTo method can be found here.

Source code for this post

You can clone a copy of this project here: https://github.com/draptik/blog-demo-shouldbeequivalentto.

git clone https://github.com/draptik/blog-demo-shouldbeequivalentto.git