draptik

mostly tech stuff

Docker and Octopress

This post describes how I created my first customized docker image(s).

I have been watching the docker space for a while and finally found a private use-case: This blog uses Octopress, which is a Ruby-based convenience wrapper around Jekyll, the static site generator behind GitHub Pages. Octopress requires some old libs: Ruby 1.9.3, Python 2.7, and NodeJs.

So, to use Octopress on any machine, I have to either:

  • configure the machine to use specific versions of Ruby, Python and NodeJs. Works.
    • Drawback: Other projects using different versions of Ruby, Python, NodeJs won’t work out of the box.
  • use version managers for Ruby, Python and NodeJs (e.g. rvm, virtualenv, nvm). Works.
    • Drawback: Tedious setup which differs between OSes.
  • use a virtual machine. Works.
    • Drawback: Not easily portable due to size of virtual machine image.
  • Or, I could use docker.

I decided to give docker a spin.

My primary goal was to be able to blog from any (Linux) machine running docker.

From a bird's-eye view my goal is to:

  • install a docker image on any machine
  • and run a docker container with my blog mounted as shared folder (so I can edit the content on the host system, but compilation, preview and publishing is accomplished from within the docker container)

My secondary goal was to get my hands dirty with docker :-)

Obviously docker also has potential for other development setups (e.g. testing application code in a local docker container before pushing to CI, to reduce round-trip time).

Overview

I created 3 docker images, which build upon each other:

  • Docker 00: base image including Ruby, Python and NodeJs
  • Docker 01: image with docker ENTRYPOINT
  • Docker 02: image optimized for octopress usage

Here is the folder structure:

├── 00_ruby_base
│   ├── build-image.sh
│   └── Dockerfile
├── 01_user
│   ├── build-image.sh
│   ├── Dockerfile
│   └── entrypoint.sh
├── 02_octopress
│   ├── build-image.sh
│   ├── Dockerfile
│   ├── post-install.sh
│   └── run-container.sh
└── share
    └── octopress

Each image (00*, 01*, 02*) contains a Dockerfile and a build-image.sh file. Only the last image (02*) contains a run-container.sh file.

  • Dockerfiles contain the instructions for building a docker image.
  • build-image.sh files wrap the corresponding docker build call.

Docker 00: base image

Since I couldn’t find a simple Ruby 1.9.3 image on Docker Hub, I decided to create my own.

Knowing my use-case (Octopress), I also installed Python 2.7 and NodeJs in my base docker image. This image is the only one that takes quite some time to build.

Dockerfile

Dockerfile
FROM debian:jessie

# Get the dependencies for Octopress page generation
##
## Notes:
##
## - Python 2.7 is required for using pygments gem.
## - NodeJs is required for execjs Gem
##
RUN apt-get update && \
    apt-get --no-install-recommends -y install \
    autoconf \
    bison \
    build-essential \
    libssl-dev \
    libyaml-dev \
    locales \
    libreadline6-dev \
    zlib1g-dev \
    libncurses5-dev \
    libffi-dev \
    libgdbm3 \
    libgdbm-dev \
    nodejs \
    python2.7 \
    wget \
    ca-certificates \
    curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Set LOCALE to UTF8
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
    locale-gen en_US.UTF-8 && \
    dpkg-reconfigure --frontend=noninteractive locales && \
    /usr/sbin/update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

# Install ruby (adopted from https://hub.docker.com/r/liaisonintl/ruby-1.9.3/~/dockerfile/)
ENV RUBY_MAJOR=1.9 \
    RUBY_VERSION=1.9.3-p551 \
    RUBY_DOWNLOAD_SHA256=bb5be55cd1f49c95bb05b6f587701376b53d310eb1bb7c76fbd445a1c75b51e8 \
    RUBYGEMS_VERSION=2.6.6 \
    PATH=/usr/local/bundle/bin:$PATH
RUN set -ex && \
    curl -SL -o ruby.tar.gz "http://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.gz" && \
    echo "$RUBY_DOWNLOAD_SHA256 ruby.tar.gz" | sha256sum -c - && \
    mkdir -p /usr/src/ruby && \
    tar -xzf ruby.tar.gz -C /usr/src/ruby --strip-components=1 && \
    rm -f ruby.tar.gz && \
    cd /usr/src/ruby && \
    autoconf && \
    ./configure --disable-install-doc --sysconfdir=/etc/ && \
    make && \
    make install && \
    gem update --system $RUBYGEMS_VERSION && \
    rm -rf /usr/src/ruby

# Create soft link for python
RUN ln -s /usr/bin/python2.7 /usr/bin/python

Here is a short description of what happens in this Dockerfile:

RUN apt-get ...

…retrieves the required packages from the Debian package repository.

RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen ...

…ensures the default system language uses UTF8 (required by some packages).

RUN set -ex && curl... && make ...

…downloads, compiles and installs Ruby from scratch (this step takes some time!).

RUN ln -s /usr/bin/python2.7 /usr/bin/python

…creates a soft link to Python2.7.

Docker build

To execute the previous Dockerfile, run ./build-image.sh.

build-image.sh
#!/bin/bash
docker build -t draptik/ruby1.9.3-python2.7-nodejs:0.1 .

Make the file executable (chmod 744 build-image.sh).

Make sure to replace draptik with some other string (e.g. your name, initials or company) when building your own image, e.g. docker build -t homersimpson/ruby1.9.3-python2.7-nodejs:0.1 .

Since this image is going to be the base image for the next step, make sure to always use the same name (e.g. homersimpson).

You can verify that the docker build step worked as expected by listing all docker images using docker images. The output should be similar to:

$ docker images
REPOSITORY                                TAG       IMAGE ID       CREATED       SIZE
homersimpson/ruby1.9.3-python2.7-nodejs   0.1       641ca1a59e87   8 days ago    486 MB
debian                                    jessie    e5599115b6a6   4 weeks ago   123 MB

Docker 01: user

Here is where things start getting difficult. Sharing a folder from the host system with docker. And keeping permissions/users in sync…

Some things to know about sharing a volume in docker

Sharing data between the host and a docker container is normally accomplished with docker run -v /host/folder:/container/folder.

Be aware, though:

  • The volume will be owned by the container
  • The container’s default user is root (UID/GID 0)!
  • Files written by the container show up on the host with the container user’s UID/GID!
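
A minimal sketch of the problem (folder names are just examples): any file created in the shared volume by the container's root user ends up on the host owned by root:

mkdir -p /tmp/shared
docker run --rm -v /tmp/shared:/data debian:jessie touch /data/created-by-root
ls -l /tmp/shared/created-by-root   # owned by root on the host, not by your user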

My workaround

I found this post by Deni Bertovic. In short, the post proposes using docker’s ENTRYPOINT: every command the container runs is piped through the ENTRYPOINT, which in turn is a bash script (entrypoint.sh, see below) that creates a new user at runtime and executes the command as that user. This is where I start walking on very thin ice… Nevertheless, I created another docker image based on the base image from the previous step.

Dockerfile

Make sure to replace draptik in the FROM string…

Dockerfile
FROM draptik/ruby1.9.3-python2.7-nodejs:0.1

# For details see https://denibertovic.com/posts/handling-permissions-with-docker-volumes/

RUN apt-get update && apt-get -y --no-install-recommends install \
    ca-certificates \
    curl

RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.10/gosu-$(dpkg --print-architecture)" \
    && curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.10/gosu-$(dpkg --print-architecture).asc" \
    && gpg --verify /usr/local/bin/gosu.asc \
    && rm /usr/local/bin/gosu.asc \
    && chmod +x /usr/local/bin/gosu

COPY entrypoint.sh /usr/local/bin/entrypoint.sh

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

For further details about the above Dockerfile refer to the aforementioned post by Deni.

The entrypoint.sh file should be located beside the Dockerfile:

entrypoint.sh
#!/bin/bash

# Add local user
# Either use the LOCAL_USER_ID if passed in at runtime or
# fallback

USER_ID=${LOCAL_USER_ID:-9001}

echo "Starting with UID : $USER_ID"
useradd --shell /bin/bash -u $USER_ID -o -c "" -m user
export HOME=/home/user

exec /usr/local/bin/gosu user "$@"

Docker build

…and the corresponding docker build command (again, wrapped in a file):

build-image.sh
#!/bin/bash
docker build -t draptik/ruby1.9.3-python2.7-nodejs-user:0.1 .

…again, make sure to replace draptik
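
To sanity-check the entrypoint, you can for example run a throwaway container and pass in your host UID (using your own image name instead of draptik if you changed it); id should then report the dynamically created user instead of root:

docker run --rm -e LOCAL_USER_ID=$(id -u) draptik/ruby1.9.3-python2.7-nodejs-user:0.1 id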

Docker 02: octopress

In addition to the content of my blog, I also mount the blog engine itself (using docker run -v <orig-location>:<container-location>). Because of this, an initial script has to be executed within the mounted folder to set up the blog engine. To prepare the environment for this script, the Dockerfile creates customized ~/.gemrc and ~/.bashrc files. The script itself (post-install.sh) is mounted into the container by the docker run wrapper and must be executed from within the container.

Dockerfile

(Make sure to replace draptik in the FROM string…)

Dockerfile
FROM draptik/ruby1.9.3-python2.7-nodejs-user:0.1

# I am not really sure why this is needed, because we have an ENTRYPOINT in the parent image.
RUN useradd -ms /bin/bash user

# Setup ruby/bundler to work with non-admin user
RUN echo "gem: --user-install" > /home/user/.gemrc && chown user:user /home/user/.gemrc
RUN echo "PATH=\"/home/user/.gem/ruby/1.9.1/bin:$PATH\"" >> /home/user/.bashrc && chown user:user /home/user/.bashrc

WORKDIR /octopress

You might be wondering why I am explicitly creating a new user (RUN useradd -ms /bin/bash user). Valid question. In the next 2 lines I write some config values to files which are located in the /home/user/ folder. I was not able to do this without first explicitly creating the user. Probably not best practice, but it works. I would be very grateful for feedback on this issue.

Docker build

(…again, make sure to replace draptik…)

build-image.sh
#!/bin/bash
docker build -t draptik/octopress:0.1 .

Starting the final image as docker container

The following script starts the docker container:

run-container.sh
#!/bin/bash
docker run \
    --rm \
    -it \
    -e LOCAL_USER_ID=`id -u $USER` \
    -p 4001:4001 \
    -v ${PWD}/../share/octopress:/octopress \
    -v ${PWD}/post-install.sh:/home/user/post-install.sh \
    draptik/octopress:0.1 \
    /bin/bash

Some notes about the docker run options:

  • --rm ensures that the docker container is removed once exited
  • -it runs an interactive terminal as soon as the container starts
  • -e LOCAL_USER... sets the host user’s ID within the docker container
  • -p ... maps the port numbers
  • -v ${PWD}/../share/octopress:/octopress mounts the blog volume
  • -v ${PWD}/post-install.sh:/home/user/post-install.sh mounts the post install script

Mounting volumes in docker using docker-machine or Docker for Windows on Windows requires some extra path-tweaking. I intend to add these tweaks in the future…

post-install

Yet another step… After docker has mounted the external volumes from the host, the blog engine inside them still has to be set up.

That is the reason for the post-install.sh script. It must be run from within the container!

IMPORTANT: Make sure to replace the git user name/email in post-install.sh. Otherwise you will not be able to deploy!

post-install.sh
#!/bin/bash
#
# This script must be executed in ~ folder (not in /octopress)!
gem install --no-ri --no-rdoc \
  bundler \
  execjs

cd /octopress
# Important: use `--path...`!
bundle install --path $HOME/.gem

git config --global user.name "FirstName LastName"
git config --global user.email "your@mail.com"
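
Running it from inside the container then looks something like this (the script is mounted into the user's home directory by run-container.sh above):

cd ~
bash post-install.sh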

Final usage for Octopress users

All images are published to docker hub.

Just run docker pull draptik/octopress:0.1.

  • Create a folder for your blog: mkdir blog && cd blog
  • Create a folder for the blog content: mkdir share && cd share

Initially, clone Octopress into the share folder:

git clone -b source <octopress-git-repo> octopress
cd octopress
git clone -b master <octopress-git-repo> _deploy

Then, run the run-container.sh script.

From within the newly created docker container, follow the steps from the post-install section above.

You should now be able to use Octopress from within the docker container (e.g. rake new_post["test"], rake generate, rake preview, rake deploy, etc.) while editing your posts on the host machine.

Summary

It helps if you have a Linux background, since all docker images are Linux-based. Setting up a customized docker image can be a bit tedious (especially configuring user privileges and mounting host folders), but once the image works you have an automated and reproducible environment. I think this makes it worth the effort.

Obviously I am just starting with docker, so take my example above with a grain of salt. But maybe the example gives you a starting point for your own docker experiments.

As always: Thankful for feedback!

Links

You can find the complete source code at Github here: https://github.com/draptik/octopress-docker

The docker images are hosted at Docker Hub: https://hub.docker.com/u/draptik/

Learning by Meeting People

In my day job I’m a full-stack .NET developer. In the past I mainly learned new programming concepts by reading books and blog posts, watching video tutorials (e.g. Pluralsight), listening to podcasts, and visiting conferences. Lately I have started to expand my horizon by meeting people face to face. This has turned out to be very rewarding.

These kinds of meetups, which often take place in your spare time, are a very efficient way of learning to think outside the box. Meeting other developers in person — in an informal setting — enables me to interact directly: asking questions as soon as they pop into my mind (everything from “what was that keyboard shortcut?” to “stop, I did not understand what a monad is”).

Here is a short summary of some of the meetings I’ve been to in the last couple of weeks.

Softwerkskammer – Mocking Frameworks

https://www.softwerkskammer.org/

  • Open for all: yes
  • Audience: Software developers (PHP, Java, .NET, Ruby, Python, etc)
  • Audience size: 10-15
  • Time: 3h
  • Action: Code Kata

I learned: Object Calisthenics; cool and simple kata; cool crowd!

Make Night – Arduino 101

Health Hackers Erlangen

  • Open for all: yes
  • Audience: Nice mix of medical researchers, a few physicists, some software developers
  • Audience size: ~20
  • Time: 3h
  • Action: Arduino basics with different sensors, organizer provided kits for everybody

I learned: The organizers want to accomplish something. Real medical research supported by “hackers” (RPi, Arduino, etc). Very interdisciplinary!

JUG Nuernberg – Dropwizard

JUG Nuernberg

  • Open for all: yes
  • Audience: Software developers
  • Audience size: ~10
  • Time: 2h
  • Action: presentation & discussion

I learned: Wow, very impressive framework. I was really fascinated by the so-called devops features (healthchecks, logging, etc.)! Wish there was something similar in .NET!

Hackerkegeln DATEV – Scala Kata

  • Open for all: no (internal company event)
  • Audience: Software developers (C/C++, JS, Java, .NET, COBOL, etc)
  • Audience size: ~10
  • Time: 4h
  • Action: interactive Kata in Scala

I learned: Scala is a cool language! Learned pattern matching (now also available in C# 7). Very relaxed audience ;-)

Thanks to Latti for the invitation!

MATHEMA Freitag – Microservices

  • Open for all: no (internal company event)
  • Audience: Software developers
  • Audience size: ~20
  • Time: 6h
  • Action: presentations, discussions, interactive microservice setup

I learned: incredible cool tooling: Consul, Ribbon, Hystrix, Graphite, ELK

What to expect?

First time at such an event? Ask a lot of questions. It might be that you are the expert! If not: you are likely to find an expert. Or somebody will point you to a local expert.

One of the main differences between these local user group meetings and big conferences is that the audience is a lot smaller. Everybody wants to share and/or gain knowledge at these meetings. The format is often very open: Sometimes there is no official topic for the event. Instead, people decide on the spot: “I have no idea how X works. Can somebody explain it?” — “Sure”. And then the knowledge transfer starts…

How to find meetings in your area

  • google <your technology> user group <your town> for example java user group london
  • Meetup is a platform for meeting people face to face. Try it: https://www.meetup.com/
  • google Software Craftsmanship <your town>
  • Germany: Softwerkskammer
  • google Unconference or BarCamp in <your town>

What do you do to learn new technologies and stay up-to-date?

Pi Hole - Simple Ad Blocker for Your Network

Advertisements in web pages can be a nuisance. But they are a necessary evil because companies/bloggers producing high quality content have to earn a living. Troy Hunt recently wrote a nice post on the subject.

Most desktop browsers provide “ad blockers” as plugins. These plugins normally also have configurable whitelists (whitelist: a list of sites excluded from being blocked) — which you should use for those sites which (a) provide high quality content and (b) rely on advertisements for a living.

Other devices in our home network also communicate with the web:

  • cell/smart phones (e.g. iPhone, Android)
  • smart TVs (e.g. Samsung)
  • and all those new IoT devices (IoT: Internet of Things): those “smart” home automation devices that communicate with “the cloud” and your “smart phone”

We can’t install “ad blockers” on these devices.

Here is where Pi Hole comes into play: We can plug a Raspberry Pi, acting as a DNS-based ad blocker for the whole network, straight into our router!

My experience:

On my phone (an old Galaxy-S4) web pages load a hell of a lot faster when I’m in my home network compared to outside of my home network.

The Pi Hole web interface looks like this:

As you can see in the screenshot: ~20% of the traffic is blocked (!) and you can configure whitelists just like with those browser plugins. Nobody in my household noticed that 20% of the traffic was blocked.

Installing the software on your Raspberry Pi is straightforward: Just follow the instructions on the Pi Hole website.
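
At the time of writing, the install boiled down to a one-liner (check the Pi Hole website for the current command before piping anything into your shell):

curl -sSL https://install.pi-hole.net | bash

Whitelisting also works from the command line (e.g. pihole -w example.com), in addition to the web interface.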

Configuring your router is another beast: OpenWRT and DD-WRT router instructions are available (example – short youtube promo).

Bash Tricks

Over the holidays I stumbled across 2 neat bash tricks to simplify navigation between folders:

  • autocd
  • autojump

Neither of these features is new.

autocd

autocd is very simple: It’s a built-in bash option. If you enter a valid directory path as a command, bash prepends cd automatically.

$ cd /etc
$ /etc # the same

Setup

Add this to your ~/.bashrc:

shopt -s autocd

autojump

A cd command that learns – easily navigate directories from the command line
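
Setup

On Debian-based systems the setup is typically something like this (package name and script location may differ on your distribution):

sudo apt-get install autojump
echo '. /usr/share/autojump/autojump.sh' >> ~/.bashrc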

Directly from the source at autojump:

Usage:

j is a convenience wrapper function around autojump. Any option that can be used with autojump can be used with j and vice versa.

Jump To A Directory That Contains foo:

j foo

Jump To A Child Directory:

Sometimes it’s convenient to jump to a child directory (sub-directory of current directory) rather than typing out the full name.

jc bar

Open File Manager To Directories (instead of jumping):

Instead of jumping to a directory, you can open a file explorer window (Mac Finder, Windows Explorer, GNOME Nautilus, etc.) to the directory instead.

jo music

Opening a file manager to a child directory is also supported:

jco images

Using Multiple Arguments:

Let’s assume the following database:

30   /home/user/mail/inbox
10   /home/user/work/inbox

j in would jump into /home/user/mail/inbox as the higher weighted entry. However you can pass multiple arguments to autojump to prefer a different entry. In the above example, j w in would then change directory to /home/user/work/inbox.

For more options refer to help:

autojump --help

Testing That Different Objects Have the Same Properties

Sometimes you want to ensure that 2 unrelated objects share a set of properties — without using an interface.

Here is an example:

namespace Demo
{
    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

First thought for C# developers: AutoMapper

Let’s do that:

using AutoMapper;

namespace Demo
{
    public class MyMapping
    {
        public static IMapper Mapper;

        public static void Init()
        {
            var cfg = new MapperConfiguration(x =>
            {
                x.CreateMap<Customer, Person>();
            });
            Mapper = cfg.CreateMapper();
        }
    }
}

Now we can write a unit test to see if we can convert a Customer to a Person:

using FluentAssertions;
using Xunit;

namespace Demo
{
    public class SomeTests
    {
        [Fact]
        public void Given_Customer_Should_ConvertTo_Person()
        {
            // Arrange
            const string firstname = "foo";
            const string lastname = "bar";

            var customer = new Customer
            {
                FirstName = firstname,
                LastName = lastname
            };

            MyMapping.Init();

            // Act
            var person = MyMapping.Mapper.Map<Customer, Person>(customer);

            // Assert
            person.FirstName.Should().Be(firstname);
            person.LastName.Should().Be(lastname);
        }
    }
}

This test passes.

But what happens when we want to ensure that a new Customer property (for example Email) is reflected in the Person object?

namespace Demo
{
    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; } // <-- new property
    }

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

Our unit test still passes. ☹

Wouldn’t it be nice to have our unit test fail if the classes are not in sync?

Here is where FluentAssertions’ ShouldBeEquivalentTo comes in handy:

using FluentAssertions;
using Xunit;

[Fact]
public void Given_Customer_Should_ConvertTo_Person_With_CurrentProperties()
{
    //Arrange
    const string firstname = "foo";
    const string lastname = "bar";

    var customer = new Customer
    {
        FirstName = firstname,
        LastName = lastname,
        Email = "foo@bar.com"
    };

    MyMapping.Init();

    // Act
    var person = MyMapping.Mapper.Map<Customer, Person>(customer);

    // Assert
    customer.ShouldBeEquivalentTo(person);
}

Running this test fails with a message along these lines:

Subject has a member Email that the other object does not have.

Cool: This is the kind of message I want to have from a unit test!

ShouldBeEquivalentTo also takes an optional lambda expression in case you need more fine-grained control over which properties are included in the comparison. Here is an example where we exclude the Email property on purpose:

using FluentAssertions;
using Xunit;

[Fact]
public void Given_Customer_Should_ConvertTo_Person_With_CurrentProperties_Excluding_Email()
{
    //Arrange
    const string firstname = "foo";
    const string lastname = "bar";

    var customer = new Customer
    {
        FirstName = firstname,
        LastName = lastname,
        Email = "foo@bar.com"
    };

    MyMapping.Init();

    // Act
    var person = MyMapping.Mapper.Map<Customer, Person>(customer);

    // Assert
    customer.ShouldBeEquivalentTo(person,
        options =>
            options.Excluding(x => x.Email));
}

This test passes.

The complete documentation for FluentAssertions’ ShouldBeEquivalentTo method can be found here.

Source code for this post

You can clone a copy of this project here: https://github.com/draptik/blog-demo-shouldbeequivalentto.

git clone https://github.com/draptik/blog-demo-shouldbeequivalentto.git

Remotely Measuring Temperatures With a Raspberry Pi Using Radio Frequency Modules From Ciseco (Part 3: UI)

Part 1 describes how to set up the hardware, part 2 describes how to record/persist the sensor information.

In this post I’ll describe how to display the data.

TL;DR

Should be similar to http://rpi-temperatures-website-demo.firebaseapp.com/.

Choosing the right technology stack

This really depends on your individual needs. Here are some points to consider:

  • How many people will be accessing the site?
  • Do you have to access the site from outside of your LAN? Do you need a login mechanism?
  • Which technology stack are you comfortable with? Which technology stack is supported on the server?
  • Database interaction possible (this demo uses SQLite3)?

If you know that you’ll have many requests I would discourage you from using the Raspberry Pi (RPi) as a web server.

Otherwise, the RPi is a good choice for a web server.

Some of the technology stacks available on the RPi are:

  • JVM: Java, Scala
  • .NET/Mono: C#, F#
  • Python
  • JS: Node.js

Since I only want to display data in my LAN, I decided to use JavaScript: Node.js in combination with the Express framework provides all the features I need and is very lightweight.

No matter which stack you choose: Running the web site on the same RPi as the temperature recording from the previous posts saves you the hassle of installing software on a different machine. And it obviously saves energy, since the RPi is running 24/7 anyway recording temperature data.

User Interface

My primary goal was explorative data visualization. For this purpose I decided to show 2 plots:

  • an overview plot showing the past 14 days
  • and a detail plot, showing the selection of the overview plot

You can test the website with some sample data at http://rpi-temperatures-website-demo.firebaseapp.com/

Some of the UI features:

  • The detail plot can be dragged and the overview plot has a selection region which can be resized and dragged.
  • Changes to either plot are reflected in the other.
  • Mouse movement in the detail plot updates the legend.

All charting features are implemented using Flot.

Prerequisites: Node.js

Here is a very concise manual on how to install Node.js on the RPi (this gives you a more up-to-date version of Node.js than default Raspbian does): http://weworkweplay.com/play/raspberry-pi-nodejs/

Installation

All further instructions are expected to be executed on the RPi.

Download and unzip the source code from

https://github.com/draptik/rpi-temperature-website/archive/v1.0.zip

cd ~
mkdir website && cd website
wget https://github.com/draptik/rpi-temperature-website/archive/v1.0.zip
unzip *.zip
cd rpi*

Install backend packages (node packages are defined in package.json):

npm install

Node.js packages are installed to folder node_modules.

  • the folder app_server contains the basic web site.
  • the folder app_api provides the REST backend.

Install frontend packages (bower packages are defined in bower.json):

bower install

Bower packages are installed to folder public/vendor/bower.

You should now be able to start the application (using the provided sample data in folder sample_data):

npm start

Configuration (development vs production)

The application uses a single switch between development mode and production mode:

NODE_ENV

This information is currently used in the following places in the application:

REST URL

Setting the URL for the REST service (in app.js):

var url = process.env.NODE_ENV === 'production' ? 'http://camel:3000' : 'http://localhost:3000';

Within my LAN, the RPi is named camel.

And in case you’re not familiar with the syntax:

var result = someCondition ? 'string1' : 'string2';

It’s just shorthand for:

var result;
if (someCondition) {
  result = 'string1';
} else {
  result = 'string2';
}

Database location

Setting the database location (in app_api/models/db.js):

var dbLocation = process.env.NODE_ENV === 'production' ? '/var/www/templog.db' : 'sample_data/templog.db';

Usage

Once you’ve configured the LAN URL and the database location, you can set the environment variable NODE_ENV to production and start the application:

export NODE_ENV=production
npm start

Customizing

You will probably want to customize the UI, as my design skills are limited at best. ;-)

Here’s an overview of the project, so you know where to change things:

├── app_api                     //  REST API
│   ├── controllers
│   │   └── temperatures.js
│   ├── models
│   │   ├── db.js               //  DATABASE
│   └── routes
│       └── index.js
├── app.js                      //  MAIN ENTRY POINT FOR THE APPLICATION
├── app_server                  //  web server (express.js)
│   ├── controllers
│   │   ├── main.js
│   │   └── temperatures.js
│   ├── models
│   ├── routes
│   │   ├── index.js
│   │   ├── temperatures.js
│   └── views
│       ├── error.jade
│       ├── index.jade
│       ├── layout.jade
│       └── temperatures-list.jade
├── bower.json                  //  Bower configuration (frontend)
├── node_modules                //  Location of node modules
├── nodemon.json                //  nodemon configuration
├── package.json                //  node configuration
├── public                      //  frontend stuff...
│   ├── images                  //  images
│   ├── scripts                 //  Javascript code
│   │   ├── chart.js            //  This file includes all charting code
│   │   ├── rest.js             //  wrapper code to access REST API
│   │   └── suncalc.js          //  calc sunrise/sunset on the fly
│   ├── stylesheets             //  ...
│   │   ├── app.css
│   │   ├── chart.css
│   └── vendor                  //  3rd party libraries
│       ├── bower               //  ...installed via bower
│       └── custom              //  ...other 3rd party libraries
├── sample_data                 //  sample data
│   └── templog.db              //  sqlite3 sample data set

That’s it. Have fun!

Remotely Measuring Temperatures With a Raspberry Pi Using Radio Frequency Modules From Ciseco (Part 2: Software)

In the previous post I described how to set up the hardware for measuring temperatures at home. This post will use a small Python program to save the recorded temperatures to a database.

For simplicity’s sake we’ll be using SQLite3 as our database.

Aside from our (indoor) temperature sensors, we’ll also retrieve outdoor temperatures using the free service Weather Underground (registration required).

Note: In case your Raspberry Pi does not have access to the internet, you can just exclude the Weather Underground parts in the code below.

All following code is intended to run on the Raspberry Pi receiving the data.

Create database (SQLite3)

The database schema is simple. We need one table to store the temperatures, and another to store sensor information.

Create a new file called templog.db:

$ touch templog.db

There are different ways to interact with SQLite3 (interactive CLI, script, API).

The goal is to execute the following sql statements:

CREATE TABLE sensors
(
    name TEXT NOT NULL,
    id TEXT NOT NULL,
    baudrate INTEGER,
    port TEXT NOT NULL,
    active INTEGER
);
CREATE TABLE temps
(
    timestamp TEXT,
    temp REAL,
    ID TEXT
);

The simplest solution is to open the newly created file templog.db with the interactive sqlite3 CLI …

sqlite3 templog.db

…and paste the previous code block. It should look like this:

$ sqlite3 templog.db
SQLite version 3.8.10.2 2015-05-20 18:17:19
Enter ".help" for usage hints.
sqlite> CREATE TABLE sensors
   ...> (
   ...>     name TEXT NOT NULL,
   ...>     id TEXT NOT NULL,
   ...>     baudrate INTEGER,
   ...>     port TEXT NOT NULL,
   ...>     active INTEGER
   ...> );
sqlite> CREATE TABLE temps
   ...> (
   ...>     timestamp TEXT,
   ...>     temp REAL,
   ...>     ID TEXT
   ...> );

We’ve created a database.
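
Alternatively, the same schema can be created non-interactively by saving the CREATE TABLE statements to a file (here called create-tables.sql, an arbitrary name) and piping it into sqlite3:

sqlite3 templog.db < create-tables.sql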

The important table is temps: It will contain the measurements.

The other table (sensors) contains information about the sensors; it is currently only needed for the Weather Underground ‘sensor’. And yes: the column names/types are not optimal.

Note: timestamp TEXT will bite us in the ass, but SQLite3 does NOT have any date type.

Monitor script

I found this nice script somewhere on GitHub. So thank you, kal001!

Here’s my gist link for the script below.

Place this script in the same folder as templog.db.

Save it as monitor.py.

You will probably want to modify the values for dbname, TIMEOUT, debug.txt, etc.

#!/usr/bin/env python

import sqlite3
import threading
from time import time, sleep, gmtime, strftime

import serial
import requests

# global variables

# sqlite database location
dbname = 'templog.db'

# serial device
DEVICE = '/dev/ttyAMA0'
BAUD = 9600

ser = serial.Serial(DEVICE, BAUD)

# timeout in seconds for waiting to read temperature from sensors
TIMEOUT = 30

# weather underground data
WUKEY = ''
STATION = ''
# time between weather underground samples in seconds
SAMPLE = 30 * 60


def log_temperature(temp):
    """
    Store the temperature in the database.
    """

    conn = sqlite3.connect(dbname)
    curs = conn.cursor()

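    # Note: plain string formatting is used here for simplicity; since the
    # values come from our own sensors this works, but a parameterized
    # query would be the safer choice.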
    curs.execute("INSERT INTO temps values(datetime('now', 'localtime'), '{0}', '{1}' )".format(temp['temperature'], temp['id']))

    conn.commit()
    conn.close()


def get_temp():
    """
    Retrieves the temperature from the sensor.

    Returns -100 on error, or the temperature as a float.
    """

    global ser

    tempvalue = -100
    deviceid = '??'
    voltage = 0

    fim = time() + TIMEOUT

    while (time() < fim) and (tempvalue == -100):
        n = ser.inWaiting()
        if n != 0:
            data = ser.read(n)
            nb_msg = len(data) / 12
            for i in range(0, nb_msg):
                msg = data[i*12:(i+1)*12]
                deviceid = msg[1:3]

                if msg[3:7] == "TMPA":
                    tempvalue = msg[7:]

                if msg[3:7] == "BATT":
                    voltage = msg[7:11]
                    if voltage == "LOW":
                        voltage = 0
        else:
            sleep(5)

    return {'temperature':tempvalue, 'id':deviceid}


def get_temp_wu():
    """
    Retrieves temperature(s) from weather underground (wu) and stores it to the database
    """

    try:
        conn = sqlite3.connect(dbname)
        curs = conn.cursor()
        query = "SELECT baudrate, port, id, active FROM sensors WHERE id like 'W_'"

        curs.execute(query)
        rows = curs.fetchall()

        #print(rows)

        conn.close()

        if rows != None:
            for row in rows[:]:
                WUKEY = row[1]
                STATION = row[0]

                if int(row[3]) > 0:
                    try:
                        url = "http://api.wunderground.com/api/{0}/conditions/q/{1}.json".format(WUKEY, STATION)
                        r = requests.get(url)
                        data = r.json()
                        log_temperature({'temperature': data['current_observation']['temp_c'], 'id': row[2]})
                    except Exception as e:
                        raise

    except Exception as e:
        text_file = open("debug.txt", "a+")
        text_file.write("{0} ERROR:\n{1}\n".format(strftime("%Y-%m-%d %H:%M:%S", gmtime()), str(e)))
        text_file.close()


def main():
    """
    Program starts here.
    """

    get_temp_wu()
    t = threading.Timer(SAMPLE, get_temp_wu)
    t.start()

    while True:
        temperature = get_temp()

        if temperature['temperature'] != -100:
            log_temperature(temperature)

        if t.is_alive() == False:
            t = threading.Timer(SAMPLE, get_temp_wu)
            t.start()

if __name__ == "__main__":
    main()

Run the script. Open another shell, take a look inside the database (new data arriving once an hour?).
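
For example, a quick way to peek at the latest measurements:

sqlite3 templog.db "SELECT * FROM temps ORDER BY timestamp DESC LIMIT 5;"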

Don’t forget to restart the script after rebooting the Raspberry Pi. Or include the script in your boot process (init, systemd).
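
A low-tech way to do the latter is a cron @reboot entry (paths are just examples; adjust them to wherever you placed monitor.py):

(crontab -l 2>/dev/null; echo "@reboot cd /home/pi && python monitor.py >> monitor.log 2>&1") | crontab -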

Part 3 will provide a UI for the collected data.

Remotely Measuring Temperatures With a Raspberry Pi Using Radio Frequency Modules From Ciseco (Part 1: Hardware)

Since I’m a software developer, I’ve always wanted to do some hardware stuff (including some soldering) with my Raspberry Pi. So, for starters, I picked something that was useful and only involved sensors (no actuators – yet):

Measuring temperatures at home.

Neither rocket science nor cool IoT “coffee is ready when I get out of the shower in the morning”, I know.

The sensor(s) should send a signal once an hour, without cable, and run on battery.

As the whole IoT thing is still relatively new, there are no standards yet. I picked the product line from Ciseco (currently being rebranded to Wireless Things (www.wirelessthings.com)). Affordable and good documentation. And, more importantly: These people are passionate about their product!

Before we get started, here is a quick preview of what we want to accomplish (for details about the UI see part 3): http://rpi-temperatures-website-demo.firebaseapp.com/

So let’s get started:

  • 2 battery powered sensors transmitting temperature data once per hour via radio frequency.
  • Raspberry Pi is powered 24/7 and records signals from sensors.

Parts & Costs

Total: £56.54

If you only want a single sensor (1 Slice of Pi, 1 Sensor, 2 XRF modules): £36.16

Setup overview

  • On the left is the Raspberry Pi with an XRF module mounted to the Slice of Pi.
    • The Slice of Pi acts as a breakout board.
    • The XRF module will receive data.
    • The Raspberry Pi will be continuously monitoring input and storing the information locally in an SQLite3 database. See part 2 for details.
  • On the right are two sensors with XRF modules.
    • These modules will send temperature data once per hour to the Raspberry Pi.
    • These modules run on battery power.
    • You can have as many of these sensors as you want.

Each XRF module requires a unique id (‘XRF ID’ in the image above).

Hardware: Fitting the pieces & soldering

The official documentation is pretty good. If you know what you’re doing.

In case you are not really sure what you are doing with the soldering iron: Keep calm. There is a lot of information on the internet. I found the following step-by-step instruction useful:

Raspberry Pi – Assemble your temperature THERMISTOR with an XRF transmitter probe

Here is what the assembled sensor looks like:

Here is a picture of the sensor in the box. You have to create the holes in the top of the box yourself. I just used the soldering iron to melt both holes, which is probably not considered best practice ;-)

And here is a picture of the Raspberry Pi with an XRF module mounted on the Slice of Pi:

Note: The sensor comes with a box. In case you want to place the sensor inside the box, make sure not to solder the thermistor too close to the board. You will want to make a hole in the box and have the thermistor stick out. Otherwise the thermistor will sit inside the closed box and measure the temperature inside the box (instead of outside). Here is a picture illustrating the issue:

OS

This is one of the reasons I picked Ciseco for my simple project: They provide a standard Linux distribution (Raspbian) including their drivers.

This means the “Slice of Pi” works out of the box.

And the rest of the system behaves like a normal Raspbian system. There is no vendor “lock-in”.

Ciseco’s patched version of Raspbian is available here. Currently this is http://openmicros.org/Download/2015-05-05-raspbain-wheezy-raswik-shrunk.img.zip. I used the slightly older version http://openmicros.org/Download/2014-12-24-wheezy-raspbian-ciseco-shrunk.img.zip.

Software (well, actually Firmware)

All parts are soldered and the RPi has the correct operating system.

Our next steps (from a bird’s eye perspective) are:

  • uploading the correct firmware onto the sensors’ XRF module
  • configuring the sensors’ XRF module

The following instructions are mostly taken from Sean Landsman’s excellent tutorial.

Upload firmware to the sensors’ XRF modules

Why do we have to do this?

The sensor board is generic and can be configured for working with different types of sensors. We’re using a thermistor, in case you forgot… The thing with the antenna is the XRF module. We have 3 XRF modules: 1 for receiving data, 2 for sending data. We’ll take care of the sending modules first.

The tool for uploading the appropriate firmware to XRF modules is called xrf_uploader.

Download the xrf_uploader source code from Ciseco’s Github page at https://github.com/CisecoPlc/XRF-Uploader to the Raspberry Pi.

Then compile the file xrf_uploader.cpp:

g++ xrf_uploader.cpp -o xrf_uploader
chmod +x xrf_uploader

Next, get a copy of the thermistor firmware from Ciseco’s Github page at https://github.com/CisecoPlc/XRF-Firmware-downloads/tree/master/XRFV2%20ARF%20SRF%20-%20LLAP. At the time of writing, the most current version of the thermistor firmware was llapThermistor-V0.73-24MHz.bin.

  • Shutdown the Raspberry Pi (sudo init 0)
  • Connect the first XRF ‘sensor’ module with the Slice of Pi
  • Start the Raspberry Pi again

Copy the thermistor firmware into the same folder as the xrf_uploader and upload the firmware to the newly connected XRF module:

./xrf_uploader -d /dev/ttyAMA0 -f llapThermistor-V0.73-24MHz.bin

Note: /dev/ttyAMA0 is the Raspberry Pi’s location of the UART.

The upload should look something like this:

pi@raspberrypi ~/xrf_loader $ ./xrf_uploader -d /dev/ttyAMA0 -f llapThermistor-V0.73-24MHz.bin
Writing new firmware file llapThermistor-V0.50-24MHz.bin to device /dev/ttyAMA0 with baud rate 9600...
Reading firmware file...
Read 1300 lines from firmware file
Opening device...
Setting serial parameters...
Waiting for device to settle...

<> Entering command modus
<- OK
-> ATVR
<- 0.63B XRF
<- OK
-> ATPG
<- OK
<> Sent 1300 of 1300 lines...

All OK, XRF successfully reprogrammed!

Waiting for device to settle...

<> Entering command modus
<- OK
-> ATVR
<- 0.50B APTHERM
<- OK

  • Shutdown the Raspberry Pi
  • Detach the freshly configured XRF ‘sensor’ module and replace it with the XRF module which will be receiving temperature information (this XRF module is called the ‘pass-through’).
  • Connect the ‘sensor’ XRF module used for temperature measurement with the thermistor board.
  • Do not insert the battery yet!
  • Start the Raspberry Pi again.

Configure sensors (XRF modules)

We now have

  • fully equipped sensor(s) without battery power
  • Raspberry Pi with receiving sensor

Again: Do not insert batteries into the sensors yet.

We first have to install the tooling for talking to the ‘pass-through’ (aka ‘receiving’) XRF device, which in turn communicates with the ‘sensor’ modules.

Download pySerial to the Raspberry Pi.

Unpack and install pySerial:

$ tar xvzf pyserial-2.5.tar.gz
$ cd pyserial-2.5
$ sudo python setup.py install

Using pySerial/miniterm

pySerial comes with miniterm.py, a small serial terminal. Attach to the terminal:

$ python ~/pyserial-2.5/examples/miniterm.py /dev/ttyAMA0

Press Ctrl+T, followed by Ctrl+E to enable the echo area. This helps during debugging.

Note: The terminal is not intended for typing commands: Always paste the commands from somewhere else.

Note: Once the battery is inserted it will drain very quickly during debugging. Make sure to unplug the battery when it’s not needed.

miniterm: First contact

While miniterm is running, insert the battery. The output should look something like this:

a--STARTED--a--STARTED--a--STARTED--a--STARTED--a--STARTED--a--STARTED--

What we are seeing is an example of LLAP (Ciseco’s lightweight local automation protocol). A complete documentation of the protocol can be found here and here.

From Ciseco’s documentation:

The message structure

[...] the message is 12 characters long and in 3 distinct sections. To illustrate the 3 separate parts to the message, see this example:

aXXDDDDDDDD
1.     "a" is lower case and shows the start of the message

2.     XX is the device identifier (address A-Z & A-Z)

3.      DDDDDDDDDD is the data being exchanged.

Note: Only the first "a" character is lowercase, the remaining message always uses uppercase.

Paste a--HELLO---- into the miniterm. In case you’re wondering: the default identifier is --. We’ll change that later.

$ python ~/pyserial-2.5/examples/miniterm.py /dev/ttyAMA0
--- Miniterm on /dev/ttyAMA0: 9600,8,N,1 ---
--- Quit: Ctrl+]  |  Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
--- local echo active ---
a--HELLO----

If the remote device is running and configured correctly the output should immediately change to:

$ python ~/pyserial-2.5/examples/miniterm.py /dev/ttyAMA0
--- Miniterm on /dev/ttyAMA0: 9600,8,N,1 ---
--- Quit: Ctrl+]  |  Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
--- local echo active ---
a--HELLO----a--HELLO----

Note the duplicate a--HELLO---- in the last line. The second a--HELLO---- is the answer from the remote device.

miniterm: Change device ID

Since all devices have the same initial ID, it is a good idea to change the device ID in case you intend to use more than one remote device.

The following code changes the device ID to ZZ:

a--CHDEVIDZZ
a--REBOOT---

The terminal output should look like this:

$ python ~/pyserial-2.5/examples/miniterm.py /dev/ttyAMA0
--- Miniterm on /dev/ttyAMA0: 9600,8,N,1 ---
--- Quit: Ctrl+]  |  Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
--- local echo active ---
a--CHDEVIDZZ
a--REBOOT---aZZSTARTED--aZZSTARTED--aZZSTARTED--aZZSTARTED--aZZSTARTED--aZZSTARTED--

miniterm: Read temperature

Assuming the device ID is ZZ, reading the temperature is accomplished by aZZTEMP-----:

$ python ~/pyserial-2.5/examples/miniterm.py /dev/ttyAMA0
--- Miniterm on /dev/ttyAMA0: 9600,8,N,1 ---
--- Quit: Ctrl+]  |  Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
--- local echo active ---
aZZTEMP-----aZZTMPA24.21

In the above answer from the remote device the temperature is 24.21 degrees Celsius.

miniterm: Configure measurement interval

Currently the remote device is continuously sending information, draining the battery. To preserve battery power, the device can be configured to send information periodically using the a--INTVL command. The interval is defined with a 3-digit number followed by the time period: S (seconds), M (minutes), H (hours), D (days). For example, the command aZZINTVL005S would set the interval of the remote device to 5 seconds.

Additionally, the device should be sent to sleep in between cycles by issuing the command aZZCYCLE----:

$ python ~/pyserial-2.5/examples/miniterm.py /dev/ttyAMA0
--- Miniterm on /dev/ttyAMA0: 9600,8,N,1 ---
--- Quit: Ctrl+]  |  Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
--- local echo active ---
aZZINTVL001H
aZZCYCLE----

The above example sets the interval to one hour (001H).

Repeat the sensor setup and configuration for each sensor which needs to send data. The ‘pass-through’ (aka receiving) sensor does not have to be configured.

Part 2 describes a simple program to monitor the data being produced by this setup.

.NET Backend Providing REST

TL;DR

My AngularJS demo app has a new backend implementation using .NET Web API.

Recap

Our goals:

  • server side: minimal working REST API providing
    • GET dummy
    • CRUD users
  • client side (angular): communicate with server side

Setup

Creating a Web API project is straightforward: Just follow the instructions at

The final project structure will look like this:

Adding Models

Create two new POCOs for User and Dummy:

Models/Dummy.cs
namespace WebService.Models
{
    public class Dummy
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

Models/User.cs
namespace WebService.Models
{
    public class User
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

Adding Service Layer

Create a new folder Service and add a UserService with corresponding interface IUserService.

Service/IUserService.cs
using System.Collections.Generic;
using WebService.Models;

namespace WebService.Service
{
    public interface IUserService
    {
        ICollection<User> GetAllUsers();
        User GetById(int userId);
        User UpdateUser(User user);
        User CreateNewUser(User user);
        void RemoveUserById(int userId);
    }
}

Service/UserService.cs
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using WebService.Models;

namespace WebService.Service
{
    public class UserService : IUserService
    {
        public UserService()
        {
            this.Users = new Collection<User>();
            this.CreateUsers();
        }

        private ICollection<User> Users { get; set; }

        public ICollection<User> GetAllUsers()
        {
            return this.Users;
        }

        public User GetById(int userId)
        {
            return this.Users.SingleOrDefault(x => x.Id.Equals(userId));
        }

        public User UpdateUser(User user)
        {
            var u = this.Users.SingleOrDefault(x => x.Id.Equals(user.Id));
            if (u != null) {
                u.FirstName = user.FirstName;
                u.LastName = user.LastName;
            }
            return u;
        }

        public User CreateNewUser(User user)
        {
            var newUser = new User
            {
                Id = this.Users.Max(x => x.Id) + 1,
                FirstName = user.FirstName,
                LastName = user.LastName
            };

            this.Users.Add(newUser);

            return newUser;
        }

        public void RemoveUserById(int userId)
        {
            this.Users.Remove(this.Users.SingleOrDefault(x => x.Id.Equals(userId)));
        }

        private void CreateUsers()
        {
            const int numberOfUsers = 10;
            for (int id = 1; id <= numberOfUsers; id++) {
                this.Users.Add(new User {Id = id, FirstName = "Foo" + id, LastName = "Bar" + id});
            }
        }
    }
}

Note: This is just a quick and dirty setup to get a working REST API without much overhead. In a real application the service will probably be a bit more fine-grained. For example: in this demo app the users are simply stored in memory and not persisted to a database.

Adding IoC for Web API

Add inversion of control (IoC) to Web API:

IoC/UnityResolver.cs
using System;
using System.Collections.Generic;
using System.Web.Http.Dependencies;
using Microsoft.Practices.Unity;

namespace WebService.IoC
{
    /// <summary>
    /// http://www.asp.net/web-api/overview/extensibility/using-the-web-api-dependency-resolver
    /// </summary>
    public class UnityResolver : IDependencyResolver
    {
        private readonly IUnityContainer container;

        public UnityResolver(IUnityContainer container)
        {
            if (container == null) {
                throw new ArgumentNullException("container");
            }
            this.container = container;
        }

        public void Dispose()
        {
            this.container.Dispose();
        }

        public object GetService(Type serviceType)
        {
            try {
                return this.container.Resolve(serviceType);
            }
            catch (ResolutionFailedException) {
                return null;
            }
        }

        public IEnumerable<object> GetServices(Type serviceType)
        {
            try {
                return this.container.ResolveAll(serviceType);
            }
            catch (ResolutionFailedException) {
                return new List<object>();
            }
        }

        public IDependencyScope BeginScope()
        {
            var child = this.container.CreateChildContainer();
            return new UnityResolver(child);
        }
    }
}

Adding Controllers

Create controllers UsersController and DummyController:

Controllers/Dummy.cs
using System.Web.Http;
using System.Web.Http.Cors;
using WebService.Models;

namespace WebService.Controllers
{
    [EnableCors(origins: "http://localhost:9000", headers: "*", methods: "*")]
    public class DummyController : ApiController
    {
        public Dummy Get()
        {
            return new Dummy
            {
                Id = 0,
                FirstName = "JonFromREST",
                LastName = "Doe"
            };
        }
    }
}

Controllers/UsersController.cs
using System.Collections.Generic;
using System.Web.Http;
using System.Web.Http.Cors;
using WebService.Models;
using WebService.Service;

namespace WebService.Controllers
{
    [EnableCors(origins: "http://localhost:9000", headers: "*", methods: "*")]
    public class UsersController : ApiController
    {
        private readonly IUserService userService;

        public UsersController(IUserService userService)
        {
            this.userService = userService;
        }

        public ICollection<User> Get()
        {
            return this.userService.GetAllUsers();
        }

        public User Get(int id)
        {
            return this.userService.GetById(id);
        }

        public User Put(User user)
        {
            return this.userService.UpdateUser(user);
        }

        public User Post(User user)
        {
            return this.userService.CreateNewUser(user);
        }

        public void Delete(int id)
        {
            this.userService.RemoveUserById(id);
        }
    }
}

Putting the pieces together

Within WebApiConfig.cs:

  • configure the IoC container
  • activate CORS
  • return JSON
  • configure routes

App_Start/WebApiConfig.cs
using System.Net.Http.Headers;
using System.Web.Http;
using Microsoft.Practices.Unity;
using WebService.IoC;
using WebService.Service;

namespace WebService
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // IoC container
            //
            // http://www.asp.net/web-api/overview/extensibility/using-the-web-api-dependency-resolver
            var container = new UnityContainer();
            // Note: for this demo we want the user service to be a singleton ('ContainerControlledLifetimeManager' in Unity syntax)
            container.RegisterType<IUserService, UserService>(new ContainerControlledLifetimeManager());
            config.DependencyResolver = new UnityResolver(container);

            // Web API configuration and services

            config.EnableCors();

            // Return JSON instead of XML http://stackoverflow.com/a/13277616/1062607
            config.Formatters.JsonFormatter.SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/html"));

            // Web API routes
            config.MapHttpAttributeRoutes();

            const string baseUrl = "ngdemo/web";

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: baseUrl + "/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}

Returning Lower Case JSON from .NET Web API

C# conventionally uses upper-case (PascalCase) property names. JavaScript conventionally uses lower-case (camelCase) property names.

To automatically convert between both worlds you can add a ContractResolver to your Global.asax.cs:

Global.asax.cs
using System.Web;
using System.Web.Http;
using Newtonsoft.Json.Serialization;

namespace WebService
{
    public class WebApiApplication : HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);

            // lower case property names in serialized JSON: http://stackoverflow.com/a/22130487/1062607
            GlobalConfiguration.Configuration
                .Formatters
                .JsonFormatter
                .SerializerSettings
                .ContractResolver = new CamelCasePropertyNamesContractResolver();
        }
    }
}

Done?

Almost: For the ASP.NET backend to be reachable by the same URL as the other backends (NodeJs backend and Java backend) we have to change the default port of the application to 8080:

Now we can start the Web API backend from Visual Studio (F5).

In the newly opened browser, check the URL http://localhost:8080/ngdemo/web/dummy/. The dummy JSON object should be visible:
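
You can also exercise the API from the command line, for example with curl (assuming the port and route configuration from above):

curl http://localhost:8080/ngdemo/web/dummy
curl http://localhost:8080/ngdemo/web/users
curl -X POST -H "Content-Type: application/json" -d '{"firstName":"Homer","lastName":"Simpson"}' http://localhost:8080/ngdemo/web/users
curl -X DELETE http://localhost:8080/ngdemo/web/users/1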

Check the API

Start the backend

Start the Web API backend from Visual Studio.

Start the frontend

Note: For setting up the frontend, you will need to install NodeJS and Grunt. Please have a look at the README.md file in the frontend folder for further details.

Open a command prompt and navigate to the frontend folder.

Run grunt server.

>grunt server
Running "server" task

Running "clean:server" (clean) task
Cleaning .tmp...OK

Running "concurrent:server" (concurrent) task

Running "coffee:dist" (coffee) task

Done, without errors.

Running "copy:styles" (copy) task


Done, without errors.

Running "compass:server" (compass) task
directory .tmp/styles/
       create .tmp/styles/main.css (1.718s)
    Compilation took 1.802s

Done, without errors.

Running "autoprefixer:dist" (autoprefixer) task
File ".tmp/styles/main.css" created.

Running "connect:livereload" (connect) task
Started connect web server on localhost:9000.

Running "open:server" (open) task

Running "watch" task
Waiting...

Visit URL http://localhost:9000/#/dummy:

That’s it.

Source code for this post

You can clone a copy of this project here: https://github.com/draptik/angulardemorestful.

To checkout the correct version for this demo, use the following code:

git clone git@github.com:draptik/angulardemorestful.git
cd angulardemorestful
git checkout -f step7-aspnet-webapi-backend

In case you are not using git you can also download the project as ZIP or tar.gz file here: https://github.com/draptik/angulardemorestful/releases/tag/step7-aspnet-webapi-backend

Link Collection #4

I’m a bit behind on this section, I know. Here’s the first batch:

currently reading:

c# stuff:

sql stuff:

meta:

‘Important’ images/quotes:

other fun stuff:

not so fun:

And the best tv show I’ve seen in ages is called True Detective.