Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Oh no, more logs, start with logstash

Sun, 11/10/2013 - 20:26

How many posts have you seen about logging? And how many have you read about logging? Recently logging became cool again. Nowadays everybody talks about logstash, elasticsearch and kibana, and it feels like everybody is playing with these tools. If you are not among the people playing around with them yet, then this is your blog post. I am going to help you get started with logstash: getting familiar with the configuration and configuring the input as well as the output. When you are familiar with the concepts and know how to play around with logstash, I move on to storing things in elasticsearch. There are some interesting steps to take there as well. When you have a way to put data in elasticsearch, we move on to looking at the data. Before you can understand the power of Kibana, you have to create some queries on your own; I'll help you there as well. In the end we will also have a look at Kibana.


Logstash comes with a number of different components. You can run them all using the executable jar. But logstash is very pluggable, so you can also use other components to replace the internal logstash ones. Logstash contains the following components:

  • Shipper – sends events to logstash
  • Broker/indexer – sends events to an output, elasticsearch for instance
  • Search/storage – provides search capabilities using an internal elasticsearch
  • Web interface – provides a GUI using a version of Kibana

Logstash is created using JRuby, so you need a JVM to run it. When you have the executable jar, all you need to do is create a basic config file and you can start experimenting. The config file consists of three main parts:

  • Input – the way we receive messages or events
  • Filters – how we leave out or convert messages
  • Output – the way to send out messages

The next code block gives the most basic config: use the standard input from the terminal where you run logstash and output the messages to the same console.

input {
  stdin { }
}

output {
  stdout { }
}

Time to run logstash using this config:

java -jar logstash-1.2.2-flatjar.jar agent -v -f basic.conf

Then when typing Hello World!! we get (I did remove some debug info):

2013-11-08T21:58:13.178+0000 Hello World!!

With the 1.2.2 release it is still annoying that you cannot just stop the agent using ctrl+c; I have to really kill it with the -9 option.

It is important to understand that the input and output sections consist of plugins. We can add plugins to handle other input sources as well as plugins for outputting data. One of those, the elasticsearch output, we will see later on.
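As an illustration of this pluggability, a hedged sketch: the stdout output could be complemented by a file output that writes events to a path of your choosing (the path below is just an example, not from the post):

```
output {
  file {
    path => "/tmp/logstash-events.log"
  }
}
```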


I am not going to explain how to install elasticsearch. There are so many resources available online, especially on the elasticsearch website. So I take it you have a running elasticsearch installation. Now we are going to update the logstash configuration to send all events to elasticsearch. The following code block shows the config for sending the events entered on standard in to elasticsearch.

input {
  stdin { }
}

output {
  stdout { }

  elasticsearch {
    cluster => "logstash"
  }
}

Now, when we have elasticsearch running with auto discovery enabled and the cluster name set to logstash, we can start logstash again. The output should resemble the following.

[~/javalibs/logstash-indexer]$ java -jar logstash-1.2.2-flatjar.jar agent -v -f basic.conf 
Pipeline started {:level=>:info}
New ElasticSearch output {:cluster=>"logstash", :host=>nil, :port=>"9300-9305", :embedded=>false, :level=>:info}
log4j, [2013-11-09T17:51:30.005]  INFO: org.elasticsearch.node: [Nuklo] version[0.90.3], pid[32519], build[5c38d60/2013-08-06T13:18:31Z]
log4j, [2013-11-09T17:51:30.006]  INFO: org.elasticsearch.node: [Nuklo] initializing ...
log4j, [2013-11-09T17:51:30.011]  INFO: org.elasticsearch.plugins: [Nuklo] loaded [], sites []
log4j, [2013-11-09T17:51:31.897]  INFO: org.elasticsearch.node: [Nuklo] initialized
log4j, [2013-11-09T17:51:31.897]  INFO: org.elasticsearch.node: [Nuklo] starting ...
log4j, [2013-11-09T17:51:31.987]  INFO: org.elasticsearch.transport: [Nuklo] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/]}
log4j, [2013-11-09T17:51:35.052]  INFO: org.elasticsearch.cluster.service: [Nuklo] detected_master [jc-server][wB-3FWWEReyhWYRzWyZggw][inet[/]], added {[jc-server][wB-3FWWEReyhWYRzWyZggw][inet[/]],}, reason: zen-disco-receive(from master [[jc-server][wB-3FWWEReyhWYRzWyZggw][inet[/]]])
log4j, [2013-11-09T17:51:35.056]  INFO: org.elasticsearch.discovery: [Nuklo] logstash/5pzXIeDpQNuFqQasY6jFyw
log4j, [2013-11-09T17:51:35.056]  INFO: org.elasticsearch.node: [Nuklo] started

Now when typing Hello World !! in the terminal the following is logged to the standard out.

output received {:event=>#"Hello World !!", "@timestamp"=>"2013-11-09T16:55:19.180Z", "@version"=>"1", "host"=>""}>, :level=>:info}
2013-11-09T16:55:19.180+0000 Hello World !!

This time, however, the same event is also sent to elasticsearch. We can check that by doing the following query to determine the index that was created.

curl -XGET "http://localhost:9200/_mapping?pretty"

The response then is the mapping. The index name contains today's date, there is a type called logs, and it has the fields that are also written out in the console.

  "logstash-2013.11.09" : {
    "logs" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date",
          "format" : "dateOptionalTime"
        },
        "@version" : {
          "type" : "string"
        },
        "host" : {
          "type" : "string"
        },
        "message" : {
          "type" : "string"
        }
      }
    }
  }

Now that we know the name of the index, we can create a query to see if our message got through.

curl -XGET "http://localhost:9200/logstash-2013.11.09/logs/_search?q=message:hello&pretty"

The response for this query now is

{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.19178301,
    "hits" : [ {
      "_index" : "logstash-2013.11.09",
      "_type" : "logs",
      "_id" : "9l8N2tIZSuuLaQhGrLhg7A",
      "_score" : 0.19178301, "_source" : {"message":"Hello World !!","@timestamp":"2013-11-09T16:55:19.180Z","@version":"1","host":""}
    } ]
  }
}

There you go: we can enter messages in the console where logstash is running and query elasticsearch to see that the messages are actually in the system. Not sure if this is useful, but at least you have seen the steps. The next step is to have a look at our data using a tool called kibana.


There are a multitude of ways to install kibana. Depending on your environment, one is easier than the other. I like to install kibana as a plugin in elasticsearch on a development machine: in the plugins folder, create the folder kibana/_site and store all the content of the downloaded kibana tar in there. Now browse to http://localhost:9200/_plugin/kibana. In the first screen, look for the logstash dashboard. When you open the dashboard it probably looks a bit different from mine; I made some changes to make it easier to present on the screen. Later on I will show how to create your own dashboard and panels. The following screen shows Kibana.

[Screenshot: the Kibana logstash dashboard]

logstash also comes with an option to run kibana from the logstash executable. I personally prefer to have it as a separate install; that way you can always use the latest and greatest version.

Using tomcat access logs

This is all nice, but we are not implementing a system like this to enter a few messages; therefore we want to attach another input source to logstash. I am going to give an example with tomcat access logs. If you want to obtain access logs in tomcat, you need to add a valve to the configured host in server.xml.

<Valve className="org.apache.catalina.valves.AccessLogValve"
       pattern="%h %t %S &quot;%r&quot; %s %b" />

Example output from these logs follows; the table after it explains what each element of the pattern means.

0:0:0:0:0:0:0:1 [2013-11-10T16:28:00.580+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/partials/dashboard.html HTTP/1.1" 200 988
0:0:0:0:0:0:0:1 [2013-11-10T16:28:00.580+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/api/settings HTTP/1.1" 200 90
0:0:0:0:0:0:0:1 [2013-11-10T16:28:02.753+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/partials/users.html HTTP/1.1" 200 7160
0:0:0:0:0:0:0:1 [2013-11-10T16:28:02.753+0100] C054CED0D87023911CC07DB00B2F8F75 "GET /admin/api/users HTTP/1.1" 200 1332
%h – remote host
%t – timestamp
%S – session id
%r – first line of the request
%s – HTTP status code of the response
%b – bytes sent
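To make the pattern concrete, here is a small Python sketch of parsing one of the access-log lines above. The regex and field names are my own illustration, not something the post provides:

```python
import re

# Regex mirroring the valve pattern "%h %t %S "%r" %s %b".
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) '              # %h remote host
    r'\[(?P<timestamp>[^\]]+)\] '  # %t timestamp
    r'(?P<session>\S+) '           # %S session id
    r'"(?P<request>[^"]+)" '       # %r first line of the request
    r'(?P<status>\d{3}) '          # %s HTTP status code of the response
    r'(?P<bytes>\d+|-)'            # %b bytes sent ("-" when none)
)

line = ('0:0:0:0:0:0:0:1 [2013-11-10T16:28:00.580+0100] '
        'C054CED0D87023911CC07DB00B2F8F75 '
        '"GET /admin/api/settings HTTP/1.1" 200 90')

fields = LOG_PATTERN.match(line).groupdict()
print(fields)
```

This is essentially what the grok filter further down does for us inside logstash itself.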

If you want more information about the logging options, check the tomcat configuration documentation.

The first step is to get the contents of this file into logstash. Therefore we have to change the config to add an input coming from a file.

input {
  stdin { }

  file {
    type => "tomcat-access"
    path => ["/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt"]
  }
}

output {
  stdout { }

  elasticsearch {
    cluster => "logstash"
  }
}

The debug output now becomes:

output received {:event=>#"0:0:0:0:0:0:0:1 [2013-11-10T17:15:11.028+0100] 9394CB826328D32FEB5FE1F510FD8F22 \"GET /static/js/mediaOverview.js HTTP/1.1\" 304 -", "@timestamp"=>"2013-11-10T16:15:20.554Z", "@version"=>"1", "type"=>"tomcat-access", "host"=>"", "path"=>"/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt"}>, :level=>:info}

Now we have stuff in elasticsearch, but we have just one string: the message. We know there is more interesting data in that message. Let us move on to the next component in logstash: filtering.

Logstash filtering

You can use filters to enhance the received events. The following configuration shows how to extract the client, timestamp, session id, method, uri path, uri param, protocol, status code and bytes. As you can see, we use grok to match these fields from the input.

input {
  stdin { }

  file {
    type => "tomcat-access"
    path => ["/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt"]
  }
}

filter {
  if [type] == "tomcat-access" {
    grok {
      match => ["message","%{IP:client} \[%{TIMESTAMP_ISO8601:timestamp}\] (%{WORD:session_id}|-) \"%{WORD:method} %{URIPATH:uri_path}(?:%{URIPARAM:uri_param})? %{DATA:protocol}\" %{NUMBER:code} (%{NUMBER:bytes}|-)"]
    }
  }
}

output {
  stdout { }

  elasticsearch {
    cluster => "logstash"
  }
}

Now compare the new output.

output received {:event=>#"0:0:0:0:0:0:0:1 [2013-11-10T17:46:19.000+0100] 9394CB826328D32FEB5FE1F510FD8F22 \"GET /static/img/delete.png HTTP/1.1\" 304 -", "@timestamp"=>"2013-11-10T16:46:22.112Z", "@version"=>"1", "type"=>"tomcat-access", "host"=>"", "path"=>"/Users/jcoenradie/temp/dpclogs/localhost_access_log.txt", "client"=>"0:0:0:0:0:0:0:1", "timestamp"=>"2013-11-10T17:46:19.000+0100", "session_id"=>"9394CB826328D32FEB5FE1F510FD8F22", "method"=>"GET", "uri_path"=>"/static/img/delete.png", "protocol"=>"HTTP/1.1", "code"=>"304"}>, :level=>:info}

Now if we go back to kibana, we can see that we have more fields. The message is now replaced with the mentioned fields, so we can easily filter on, for instance, session_id. The following image shows that we can select the new fields.

[Screenshot: selecting the new fields in Kibana]

That is it for now, later on I’ll blog about more logstash options and creating dashboards with kibana.

The post Oh no, more logs, start with logstash appeared first on Gridshore.

Categories: Architecture, Programming

Setting up keys to sign emails in Samsung’s Android email app

Mon, 09/30/2013 - 17:37
Introduction For almost half a year now, I’ve been the proud owner of a Samsung Galaxy SIII Mini (bought it just before the release of the S4, because my phone died and I couldn’t wait for the S4). Since then I’ve got it doing most of what I want it to do, except sign my outgoing emails when I want it to (sign them cryptographically, obviously — I got it to add a text signature within two seconds). The problem here is that setting up the Samsung stock mail app (I don’t use the GMail app) is not immediately obvious. But today I finally got it working, after a long and frustrating day. Read on to find out how…

To sign or not to sign… First of all, let's take a look at the basic infrastructure for securing your outgoing mail in Samsung's mail client. This infrastructure is found in the mail application's settings, which you access using the menu key once you start the mail client. [Screenshot: opening the settings] After you access the settings, find the security options item and tap it:


You should now see a screen like this: [Screenshot: the security options screen] Hooray, you can manage keys that allow you to sign and/or encrypt your mails!

But this is where things start to get awkward. There are two competing standards out there (both endorsed by the IETF) for signing and encrypting mail. First, there is S/MIME, which uses the same PKI infrastructure also used to secure web traffic and which requires you to use RSA keypairs and signed certificates. On the other hand there is Pretty Good Privacy (PGP), which uses many types of keypairs, keyservers and a web of trust. So the first question that you run into here is: which do you use? The answer is that you use PGP, because S/MIME is not supported by this mail client except for Exchange servers. But you have to dig long and hard on the web to find that out, because there is no official documentation to tell you that.

So your next move is going to be to use a tool like GPG to generate your public/private keypair with a passphrase, publish it on a server if you wish and export the public and private keys as .ASC files. After that, you can follow the instructions you find all across the web to place these files in the root of your SD card and import the keys. Which you do by going to Private keys or Public keys in the menu shown above, hitting the menu button and selecting Import keys. And then you will discover that this does not work, because no key file is found. You see, for some bizarre reason Samsung chose not to use the onboard key management facilities of Android to manage their keys, instead opting to roll their own. To import the keys into the Samsung mail client, place your key files on your SD card in the directory
Yes, that is correct, export. Then, make sure your keyfiles have the correct name. They should be called
<your email address>_<your name as you filled it in in the mail account settings>_0x<the ID of your PGP keypair>_Private_Key.asc


<your email address>_<your name as you filled it in in the mail account settings>_0x<the ID of your PGP keypair>_Public_Key.asc

respectively for the private and public keys. If you use other names, the mail app will not find them. You can generate an example if you want: in the mail app, use the Create keys option and export the keys to see what the names look like. You'll have to get the ID from your GPG tool.
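If you want to avoid typos in these rather elaborate names, a small script can build them for you. This Python sketch is my own; the email address, name and key ID below are made-up examples:

```python
# Hypothetical helper that builds the file names the Samsung mail app
# expects, following the naming template described above.
def samsung_key_filename(email, name, key_id, private=True):
    kind = "Private" if private else "Public"
    return "{0}_{1}_0x{2}_{3}_Key.asc".format(email, name, key_id, kind)

print(samsung_key_filename("jan@example.org", "Jan Jansen", "A1B2C3D4"))
```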

After all that, you should be able to import your keys. Then use the Set default key option to choose a default keypair. You can either select to sign all your mails, or use the per-mail settings to sign and/or encrypt. Don't lose your passphrase; you have to fill it in every time you sign a mail!

The post Setting up keys to sign emails in Samsung’s Android email app appeared first on Gridshore.

Categories: Architecture, Programming

Cloning an OpenSuSE 12.3 virtual machine using GRUB2, VirtualBox and cryptfs

Mon, 08/19/2013 - 10:49

So I just bought myself a new laptop (an Asus N76VB-T4038H), which I love so far. It's a great machine and real value for money. Of course it comes preloaded with Windows 8, which I hate so far, but am holding on to for the occasional multimedia thingie and because I might have to install Microsoft Office at some point. But my main OS is still Linux, and in particular OpenSuSE. And given the way I want to use this laptop in the near future, the idea hit me that it would be a good idea to virtualize my Linux machines so that I can clone them and set up new and separate environments when needed. So I installed VirtualBox on my host OS, downloaded the OpenSuSE 12.3 ISO, set up a new machine to create a golden image, pointed it to the ISO and installed OpenSuSE. Smooth sailing, no problems. Even set up an encrypted home partition without any pain. And then I tried to clone the machine in VirtualBox and start the clone… and found that it couldn't find my cloned hard drive.

This is actually a reasonably well-known problem in OpenSuSE and some other Linux distros. You'll find another blog about what is going on here, and there are several forum posts. But none of the ones I found cover a solution involving GRUB2, so I thought I'd post my experiences here.

The problem

The basic problem is this: Linux has a single file system tree which spans all of the available hard drives, partitions and so on in one large virtual structure. To make this happen, different parts of the physical storage infrastructure of your machine are mapped to branches of the file system tree (so-called mount points). So, for example, the /home directory in your file system may be designated a mount point for the second partition on your primary SCSI drive. Which means that you can type /home and under the hood the OS will start looking at the partition also known as /dev/sda2. This trick can be applied at any time, by the way, using the mount command. This is also what happens, for instance, when you insert a USB drive or DVD: a previously normal directory like /media/USB suddenly and magically becomes the file system location for the USB drive.

Now, recently, Linux has acquired different ways of naming partitions and drives. It all used to be /dev/hdaX and /dev/sdaX, but nowadays several distributions have introduced additional naming schemes using symlinks. For example, on my system there is a directory /dev/disk which includes several subdirectories containing symlinks to the actual device files, all using different naming schemes:

bzt@linux-akf6:/dev/disk> ll
total 0
drwxr-xr-x 2 root root 260 Aug 19 11:27 by-id
drwxr-xr-x 2 root root 140 Aug 19 11:27 by-path
drwxr-xr-x 2 root root 120 Aug 19 11:27 by-uuid

The by-id directory for instance includes symbolic names for the device files:

bzt@linux-akf6:/dev/disk> ll by-id/
total 0
lrwxrwxrwx 1 root root  9 Aug 19 11:27 ata-VBOX_CD-ROM_VB2-01700376 -> ../../sr0
lrwxrwxrwx 1 root root  9 Aug 19 11:27 ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 19 11:27 ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 19 11:27 ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 19 11:27 ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Aug 19 11:27 dm-name-cr_home -> ../../dm-0
lrwxrwxrwx 1 root root 10 Aug 19 11:27 dm-uuid-CRYPT-LUKS1-22b6b0f3ad0c433e855383ca2e64bef1-cr_home -> ../../dm-0
lrwxrwxrwx 1 root root  9 Aug 19 11:27 scsi-SATA_VBOX_HARDDISK_VB3ceba069-ef9ac38e -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 19 11:27 scsi-SATA_VBOX_HARDDISK_VB3ceba069-ef9ac38e-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 19 11:27 scsi-SATA_VBOX_HARDDISK_VB3ceba069-ef9ac38e-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 19 11:27 scsi-SATA_VBOX_HARDDISK_VB3ceba069-ef9ac38e-part3 -> ../../sda3

Some of these mount points are fixed, meaning that the operating system automatically remounts them on system reboot. These mount points are recorded in the /etc/fstab file (file system table, a whitespace-separated table of the information needed to mount the mount points) and in the boot loader configuration (because the boot loader has to know where to start the operating system from). The boot loader files are in /boot/grub2 if you are using GRUB2.
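For illustration, here is a hypothetical /etc/fstab entry in both notations; the by-id name reuses the VirtualBox disk from the listing above, while the mount options are just an example:

```
# Symbolic (by-id) notation, as generated by the installer:
/dev/disk/by-id/ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part2  /home  ext4  defaults  1 2

# The same entry using the plain device file name:
/dev/sda2  /home  ext4  defaults  1 2
```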

Now, some Linux distributions (including OpenSuSE 12.3) have chosen to generate these configuration files using the new, symbolic names for device files (found in /dev/disk/by-id) rather than the actual device file names (e.g. /dev/sda2). Which is usually no problem. Except when you are on a virtual computer which has been cloned and is using a virtual disk which has also been cloned. Because when you clone a disk, its symbolic name changes. But the configuration files are not magically updated during cloning to reflect this change.

The solution

The solution to this problem is to switch back to the device file names (because those are correct, even after cloning).

To make your Linux virtual machine with GRUB2 clone-ready, perform the following steps:

  1. Take a snapshot of your virtual machine.
  2. Become root on your machine.
  3. Examine the /etc/fstab file. Find all the symbolic names that occur in that file.
  4. Refer to the /dev/disk/by-id directory and examine the symlinks to figure out which device file is equivalent to which symbolic name.
  5. Use vi to edit the /etc/fstab file.
  6. Replace all symbolic names with the correct device file names.
  7. Save the new /etc/fstab file.
  8. Go to the /boot/grub2 directory.
  9. Make the same changes to the grub.cfg file.
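Steps 3 through 7 can also be scripted. The following Python sketch is my own, not from the post; it applies a symbolic-name-to-device-file mapping to the text of an fstab file, reusing the by-id names from the listing earlier:

```python
# Mapping from symbolic by-id names to plain device file names; fill this
# in from your own /dev/disk/by-id listing.
replacements = {
    "/dev/disk/by-id/ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part2": "/dev/sda2",
    "/dev/disk/by-id/ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part3": "/dev/sda3",
}

def rewrite(text, replacements):
    # Replace every symbolic name with its device file name.
    for symbolic, device in replacements.items():
        text = text.replace(symbolic, device)
    return text

fstab = "/dev/disk/by-id/ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part2 /home ext4 defaults 1 2\n"
print(rewrite(fstab, replacements))
```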
You should now be able to clone the machine, boot the clone and have it find your drives.

Encrypted partitions

There is one exception, though: if you have encrypted partitions, they still will not work. This is because Linux uses some indirection in the file system table for encrypted file systems. To get your encrypted partitions to work, you have to edit the /etc/crypttab file and make the same symbolic-name-for-device-file-name substitutions there.
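For example, a hypothetical /etc/crypttab entry before and after the substitution; the cr_home name and by-id path come from the listing earlier in the post, the remaining fields are examples:

```
# Before:
cr_home  /dev/disk/by-id/ata-VBOX_HARDDISK_VB3ceba069-ef9ac38e-part3  none  none

# After:
cr_home  /dev/sda3  none  none
```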

The post Cloning an OpenSuSE 12.3 virtual machine using GRUB2, VirtualBox and cryptfs appeared first on Gridshore.

Categories: Architecture, Programming