
Xebia Blog

Dazzle Your Audience By Doodling

Sun, 09/28/2014 - 10:29

When we were kids, we loved to doodle. Most of us did anyway. I doodled all the time, everywhere, and, to the dismay of my mother, on everything. I still love to doodle. In fact, I believe doodling is essential.

The tragedy of the doodle lies in its definition: "A doodle is an unfocused or unconscious drawing made while a person's attention is otherwise occupied." That's why most of us have been taught not to doodle. Seems logical, right? A teacher sees you doodling, assumes you're not paying attention in class and thus not learning as much as you should, so he puts a stop to it. Trouble is, though, it's wrong. And it's not just a little bit wrong, it's totally and utterly wrong. Exactly how wrong was shown in a study by Jackie Andrade: she discovered that doodlers have 29% better recall. So, if you don't doodle, you're doing yourself a disservice.

And you're not just doing yourself a disservice, you're also doing your audience a disservice. Neurologists have discovered a phenomenon dubbed "mirror neurons." When you see something, the same neurons fire as if you were doing it. So, if someone shows you a picture, let's say a slide in a presentation, it is as if you're showing that picture to yourself.

Wait, what? That doesn't sound special at all, now does it? That's why presentations using only slides can be so unintentionally relaxing.

Now, if you see someone write or draw something on a flip chart, dry erase board or any other surface in plain sight, it is as if you're writing or drawing it yourself. And that ensures 29% better recall. Better yet, you'll remember what the presenter wants you to remember. Especially if he can trigger an emotional response.

Now, why is that? At EUVIZ in Berlin last month, I attended a presentation by Barbara Siegel from Look2Listen that changed my life. Barbara talked about the latest insights from neuroscience that prove that everyone feels first and thinks later. So, if you want your audience to tune in to your talk, show some emotion! Want people to remember specific points of your talk? Trigger and capture emotion by writing and drawing in real-time. Emotion runs deep and draws firm neurological paths in the brain that help you recreate the memory. Memories are recreated, not stored and retrieved.

Another thing that helps you draw firm neurological paths is exercise. If you get your audience to stand up and move, you increase their brain activity by 7%, heightening alertness and motivation. By getting your audience to sit down again after physical exercise, you trigger a rebalancing of neurotransmitters and other neurochemicals, so the newly spawned neurons in their brains can combine into memories of your talk. Now that got me running every other day! Well, jogging is more like it, but hey: I'm hitting my target heart rate regularly!

How does this help you become a better public speaker? Remember these two key points:

  1. At the start of your speech, get your audience to stand up and move to ensure 7% more brain activity and prime them for maximum recall.
  2. Make sure to use visuals and metaphors and create most, if not all, of them in real-time to leverage the mirror neuron effect and increase recall by 29%.

Xebia KnowledgeCast Episode 4: Scrum Day Europe 2013, OpenSpace Knowledge Exchange, and Fun With Stickies!

Mon, 09/22/2014 - 16:44

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this fourth episode, we share some impressions of Scrum Day Europe 2013 and Xebia's OpenSpace Knowledge Exchange. And of course, Serge Beaumont will have Fun With Stickies! First, we interview Frank Bakker and Evelien Roos at Scrum Day Europe 2013. Then, Adriaan de Jonge and Jeroen Leenarts talk about continuous delivery and iOS development at the OpenSpace XKE. And in between, Serge Beaumont has Fun With Stickies!

Frank Bakker and Evelien Roos give their first impressions of the Keynotes at Scrum Day Europe 2013. Yes, that was last year, I know. New, more current interviews are coming soon. In fact, this is the last episode in which I use interviews that were recorded last year.

In this episode's Fun With Stickies, Serge Beaumont talks about hypothesis stories. Using those ensures you keep your Agile really agile. A very relevant topic, in my opinion, and it jells nicely with my missing line of the Agile Manifesto: Experimentation over implementation!

Adriaan de Jonge explains how automation in general, and test automation in particular, is useful for continuous delivery. He warns we should focus on the process and customer interaction, not the tool(s). That's right before I can't help myself and ask him which tool to use.

Jeroen Leenarts talks about iOS development. Listening to the interview, which was recorded a year ago, it's amazing to realize that, with the exception of iOS8 having come out in the mean time, all of Jeroen's comments are as relevant today as they were last year. How's that for a world class developer!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!

Credits

Hands-on Test Automation Tools session wrap-up - Part 1

Sun, 09/21/2014 - 15:57

Last week we had our first Hands-on Test Automation sessions.
Developers and Testers were challenged to show and tell their experiences in Test Automation.
That resulted in lots of in-depth discussions and hands-on Test Automation tool shoot-outs.

In this blog post we'll share the outcomes of the different sessions, like the famous Cucumber vs. FitNesse debate.

Stay tuned for upcoming updates!

Test Automation Frameworks

The following Test Automation Frameworks were demoed and discussed:

1. FitNesse

FitNesse is a test management and execution tool.
You'll have to write/use fixture code if you want to use Selenium / WebDriver, web services and databases in your tests.

Pros and Cons
You get a good drill-down into test results.
You can make use of scenarios and scenario libraries to make test automation steps reusable.
But refactoring is hard when scenarios are used extensively, since there is no IDE support (yet). A small illustrative example follows below.
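
For readers new to FitNesse, a minimal decision table might look like this sketch. The LoginValidation fixture name and its columns are made up for illustration; you would back such a table with your own fixture code.

!|LoginValidation|
|username|password|valid?|
|admin|secret|true|
|admin|wrong|false|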

2. Cucumber

Cucumber is a specification tool that describes how software should behave.
You'll have to write/use step definitions if you want to use Selenium / WebDriver, web services and databases in your tests.

Pros and Cons
Cucumber forces you to write specifications / tests as scenarios (behaviour in a human-readable language).
You can drill down into test results, but you'll need reporting libraries like Cucumber Reporting.
We recommend using IntelliJ IDEA with the Cucumber plugin, since it supports Cucumber seamlessly.
Refactoring becomes less problematic since you're using an IDE. A minimal scenario is sketched below.
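
To give a feel for the human-readable language, here is a minimal, made-up Gherkin scenario; the feature and steps are purely illustrative, and the How would live in the step definitions:

Feature: Account withdrawal

  Scenario: Withdraw with sufficient balance
    Given an account with a balance of 100 euro
    When I withdraw 40 euro
    Then the remaining balance should be 60 euro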

3. Selenium / WebDriver IDE

Selenium / WebDriver automates human interactions with a web browser.
With the Selenium IDE you can record and play back your tests in Firefox.

Pros and Cons
It can get you started very quickly: you can record and play back test scripts without writing any code.
Reusability of test automation code is not possible. You'll need to export your scripts into an IDE to introduce reusability.

Must-haves in Test Automation Tools

During the parallel sessions we collected the following must-haves in test automation tools.

Testers and Developers becoming best friends

When developers do not feel comfortable with the test automation tool, testers will try to fill the gap all by themselves. Most of the time these efforts result in hard-to-maintain test automation code. At some point test automation will then become a bottleneck in Continuous Delivery. So when picking a test automation tool, consider each other's needs and pick a tool together. Feeling comfortable writing and maintaining test automation code is really important to make test automation beneficial.

Separating What from How

Tools like FitNesse and Cucumber were designed to separate the What from the How in test automation. When you combine both in those tools, you'll end up getting lost in details and lose focus on what you are testing.
Use tools like FitNesse and Cucumber to describe What you are testing, and put all details about How you are testing in test automation code (like fixture code and step definitions).

Other interesting tools
  • Thucydides: Reporting tests and results (including functional coverage)
  • Vagrant: Provisioning System Under Test instances
  • Liquibase: Treating database schema changes as 'code'

Stay tuned for upcoming updates!

Installing Oracle on Docker (Part 1)

Fri, 09/19/2014 - 12:56

I spent Xebia's Innovation Day last August experimenting with Docker in a team with two of my colleagues and a guest. We thought Docker sounded really cool, especially if your goal is to run software that doesn't require lots of infrastructure and can be easily installed, e.g. because it runs from a jar file. We wondered, however, what would happen if we tried to run enterprise software, like an Oracle database: software that is notoriously difficult to install and choosy about the infrastructure it runs on. Hence our aim for the day: install an Oracle database on CoreOS and Docker.

We chose CoreOS because of its small footprint and the fact that it is easily installed in a VM using Vagrant (see https://coreos.com/docs/running-coreos/platforms/vagrant/). We used the default Vagrantfile and CoreOS files with one modification: $vb_memory = 2024 in config.rb, which gives the VM enough memory for Oracle's pre-installer to run. The config files we used can be found here: https://github.com/jvermeir/OraDocker/

Starting with a default CoreOS install we then implemented the steps described here: http://www.tecmint.com/oracle-database-11g-release-2-installation-in-linux/.
Below is a snippet from the first version of our Dockerfile (tag: b0a7b56).
FROM centos:centos6
# Step 1: Setting Hostname
ENV HOSTNAME oracle.docker.xebia.com
# Step 2
RUN yum -y install wget
RUN wget --no-check-certificate https://public-yum.oracle.com/public-yum-ol6.repo -O /etc/yum.repos.d/public-yum-ol6.repo
RUN wget --no-check-certificate https://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
RUN yum -y install oracle-rdbms-server-11gR2-preinstall

Note that this takes a while, because the pre-installer downloads a number of CentOS packages that are missing in CoreOS.
Execute this in a shell:
vagrant up
vagrant ssh core-01
cd ~/share/docker
docker build -t oradocker .

This seemed like a good time to do a commit to save our work in Docker:
docker ps # note the container-id and substitute for the last parameter in the line below this one.
docker commit -m "executed pre installer" 07f7389c811e janv/oradocker

At this point we studiously ignore some of the advice listed under 'Step 2' in Tecmint's install manual, namely adding the HOSTNAME to /etc/sysconfig/network, allowing access to the xhost (what would anyone want that for?) and mapping an IP address to a hostname in /etc/hosts (setting the hostname through 'ENV HOSTNAME' had no real effect as far as we could tell). We tried all that, but it didn't seem to work. Denying reality and substituting our own, we just ploughed on...

Next we added commands to the Dockerfile that create the oracle user, copy the relevant installer files and unzip them. Docker starts by sending a build context to the Docker daemon. This takes quite some time, because the Oracle installer files are large. There's probably some way to avoid this, but we didn't look into it. Unfortunately, Docker copies the installer files each time you run docker build -t ..., only to conclude later on that nothing changed.
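
A sketch of what those additions might look like; the file names and paths here are illustrative, not the exact lines we used:

# Create the oracle user (groups like oinstall/dba come from the preinstall package)
RUN useradd -m -g oinstall -G dba oracle
# Copying the large installer files into the image is what makes
# sending the build context so slow
COPY linux_11gR2_database_1of2.zip /home/oracle/
COPY linux_11gR2_database_2of2.zip /home/oracle/
RUN yum -y install unzip && \
    cd /home/oracle && \
    unzip linux_11gR2_database_1of2.zip && \
    unzip linux_11gR2_database_2of2.zip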

The next version of our Dockerfile sort of works, in the sense that it starts up the installer. The installer then complains about missing swap space. We fixed this temporarily at the CoreOS level by running the following commands:
sudo su -
swapfile=$(losetup -f)
truncate -s 1G /swap
losetup $swapfile /swap
mkswap $swapfile
swapon $swapfile

found here: http://www.spinics.net/lists/linux-btrfs/msg28533.html
This works, but it doesn't survive a reboot.
Now the installer continues, only to conclude that networking is configured improperly (one of those IgnoreAndContinue decisions coming back to bite us):
[FATAL] PRVF-0002 : Could not retrieve local nodename

For this to work you need to change /etc/hosts which our Docker version doesn’t allow us to do. Apparently this is fixed in a later version, but we didn’t get around to testing that. And maybe changing /etc/sysconfig/network is even enough, but we didn't try that either.

The latest version of our work is on GitHub (tag: d87c5e0). The repository does not include the Oracle installer files. If you want to try for yourself, you can download the files here: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index-092322.html and adapt the Dockerfile if necessary.

Below is a list of ToDo’s:

  1. Avoid copying large installer files if they're not going to be used anyway.
  2. Find out what should happen when you set 'ENV HOSTNAME oracle.docker.xebia.com'.
  3. Make the swap file setting permanent on CoreOS.
  4. Upgrade the Docker version so we can change /etc/hosts.

All in all this was a useful day, even though the end result was not a running database. We hope to continue working on the ToDo’s in the future.

Full width iOS Today Extension

Thu, 09/18/2014 - 10:57

Apple decided that the new Today Extensions of iOS 8 should not be the full width of the notification center. They state in their documentation:

Because space in the Today view is limited and the expected user experience is quick and focused, you shouldn’t create a widget that's too big by default. On both platforms, a widget must fit within the width of the Today view, but it can increase in height to display more content.

Source: https://developer.apple.com/library/ios/documentation/General/Conceptual/ExtensibilityPG/NotificationCenter.html

This means that developers who create Today Extensions can only use a width of 273 points instead of the full 320 points (for iPhones before the iPhone 6) and have a left offset of the remaining 47 points. However, with the release of iOS 8, several apps like Dropbox and Evernote do seem to have a Today Extension that uses the full width. This raises the question whether Apple noticed this, and why it made it through the approval process. Does Apple not care?

Should you want to create a Today Extension with the full width yourself as well, here is how to do it (in Swift):

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    if let superview = view.superview {
        // Stretch the widget's superview to the left edge of the screen:
        // move its x origin to 0 and add that offset to its width
        var frame = superview.frame
        frame = CGRectMake(0, CGRectGetMinY(frame), CGRectGetWidth(frame) + CGRectGetMinX(frame), CGRectGetHeight(frame))
        superview.frame = frame
    }
}

This changes the superview (the Today view) of your Today Extension's view. It doesn't use any private APIs, but Apple might reject it for not following their rules. So think carefully before you use it.

Become high performing. By being happy.  

Thu, 09/18/2014 - 03:59

The summer holidays are over. Fall is coming. Like the start of every new year, a good moment for new inspiration.

Recently, I went twice to the Boston area for a client of Xebia. There I met with (I dislike the word "assessments"...) a number of experienced Scrum teams. They had an excellent understanding of Scrum, but were not able to convert this into excellent performance. Actually, they were somewhat frustrated, and their performance was slightly going down.

So, they were great teams with great team members, and their agile processes were running smoothly, but still not a single winning team. Which left, in my opinion, only one option: a lack of Spirit. Spirit is the fertilizer of Scrum and actually of every framework, methodology and innovation. But how to boost the spirit?

Until a few years ago, I would "just" organize teambuilding sessions to boost this, in parallel with fixing/escalating the root causes. Noble, but far from effective. It's much more about mindset and happiness, and about taking your own responsibility there. Let me explain this a little bit more here.

These are definitely awkward times: terrible wars and epidemics we can't turn our backs on anymore, an economic system that hardly survives, and an ever-accelerating, highly demanding society. In all of which we have to find "time" for our friends, family, ourselves and job or study. The last ones are essential to regain balance in a more and more depressing world. But how?

One of the most important building blocks of the agile mindset and life is happiness. Happiness is the fuel of acceleration and success. But what is happiness? Happiness is the ultimate moment when you're not thinking, but enjoying the moment and forgetting the world around you. Craftsmanship, for example, will do this to you: losing track of time while exercising the craft you love.

But too often we prevent ourselves from being happy. Why should I be happy in this crazy world? With this mentality you're kept hostage by your worrying mind and ignore the ultimate state you were born in: pure, happy, ready to explore the world (and make mistakes!). It's not a bad thing to be egocentric sometimes and to switch off your dominant mind now and then. Every human being has the state of mind and the ability to do this. But we do it too rarely.

On the other hand, it's also not a bad thing to be angry, frightened or sad sometimes. These emotions will help you enjoy your moments of happiness more. But often your mind will resist these emotions. They are perceived as a sign of weakness or as something negative you should avoid. A wrong assumption: the harder you try to resist these emotions, the longer they will stay in your system and prevent you from being happy.

Being aware of the mechanisms I've explained above will make you happier, more productive and better company for your family, friends and colleagues. Parties will no longer be a forced way of trying to create happiness, but a celebration of personal responsibility, success and happiness.

Continuous Delivery is about removing waste from the Software Delivery Pipeline

Wed, 09/17/2014 - 15:44

On October 22nd I will be speaking at the Continuous Delivery and DevOps Conference in Copenhagen, where I will share experiences from a very successful implementation of a new website serving about 20 million page views a month.

Components and content for this site were developed by five(!) different vendors, and for this project the customer took the initiative to work according to DevOps principles and implement a fully automated Software Delivery Process as they went along. This was a big win for the project, as development teams could now focus on delivering new software instead of fixing issues within the delivery process itself, and I was the lucky one who got to implement it.

This blog post is about visualizing the 'waste' we addressed within the project; you might find the diagrams handy when communicating Continuous Delivery principles within your own organization.

To enable yourself to work according to Continuous Delivery principles, an effective starting point is to remove waste from the Software Delivery Process. If you look at a traditional Software Delivery Process you'll find that there are probably many areas in your existing process that do not add any value for the customer at all.

These areas should be seen as pure waste: they add no value for your customer and cost you time or money (or both), over and over again. Each time new features are developed and pushed to production, many people perform a lot of costly manual work and run into the same issues again. The diagram below provides an example of common areas where you might find waste in your existing Software Delivery Pipeline. Imagine this process repeating every time a development team delivers new software. Within your conversation, you might want to use a similar diagram to explain the pain points in your current Software Delivery Process.

[Diagram: a traditional software delivery process]

Automation of the Software Delivery Process within this project was all about eliminating known waste as much as possible. This resulted in setting up an Agile project structure and starting to work according to DevOps principles, enabling the team to deliver software on a frequent basis. Next to that, we automated the central build with Jenkins CI, which checks out code from a Git version management system, compiles code using Maven, stores components in Apache Archiva, kicks off static, unit and functional tests covering both the JEE and PHP codebases, and creates Deployment Units for further processing down the line. Deployment automation itself was implemented by introducing XL Deploy. By doing so, every time a developer pushed new JEE or PHP code into Git, freshly baked deployment units were instantly deployed to the target landscape, which in its turn was managed by Puppet. An abstract diagram of this approach and the chosen tooling is provided below.

[Diagram: overview of tooling for automating the software delivery process]

When paving the way for Continuous Delivery, I often like to refer to this as working on the six A's: setting up Agile (product-focused) delivery teams, Automating the build, Automating tests, Automating Deployments, Automating the Provisioning Layer, and clean, easy-to-handle Software Architectures. The A for Architecture is about making sure that the software being delivered actually supports automation of the Software Delivery Process itself and puts the customer in the position to work according to Continuous Delivery principles. This A is not visible in the diagram.

After automation of the Software Delivery Process, the customer's software delivery behaved like the optimized process below, giving the team the opportunity to push out a constant and fluent flow of new features to the end user. Within your conversation, you might want to use this diagram to explain the advantages to your organization.

[Diagram: an optimized software delivery process]

As we automated the Software Delivery Pipeline for the customer, we positioned them to go live at the press of a button. And on the go-live date it was just that: a press of the button, and five minutes later the site was completely live, making it the most boring go-live event I've ever experienced. The project itself was really good fun though! :)

Needless to say, subsequent updates are now moved into the live state in a matter of minutes, as the whole process has become very reliable; deploying code has become a non-event. More details on how we made this project a complete success, how we implemented this environment, the project setting, and the chosen tooling along with technical details, I will happily share at the Continuous Delivery and DevOps Conference in Copenhagen. But of course you can also contact me directly. For now, I just hope to meet you there.

Michiel Sens.

iOS Today Widget in Swift - Tutorial

Sat, 09/13/2014 - 18:51

Since the new app extensions of iOS 8 and Swift are both fairly new, I created a sample app that demonstrates how a simple Today extension can be made for iOS 8 in Swift. This widget will show the latest blog posts of the Xebia Blog within the Today view of the notification center.

The source code of the app is available on GitHub. The app is not available on the App Store because it's an example app, though it might be quite useful for anyone following this Blog.

In this tutorial, I'll go through the steps below.

New Xcode project

Even though an extension for iOS 8 is a separate binary from an app, it's not possible to create an extension without an app. That unfortunately makes it impossible to create standalone widgets, which this sample would otherwise be, since its only purpose is to show the latest posts in the Today view. So we create a new project in Xcode and implement a very simple view. The only thing the app will do for now is tell the user to swipe down from the top of the screen to add the widget.

[Screenshot: the app instructing the user to add the widget]

Time to add our extension target. From the File menu we choose New > Target... and from the Application Extension templates we choose Today Extension.

[Screenshot: choosing the Today Extension template in Xcode]

We'll name our target XebiaBlogRSSWidget and of course use Swift as Language.


The created target will have the following files:

      • TodayViewController.swift
      • MainInterface.storyboard
      • Info.plist

Since we'll be using a storyboard approach, we're fine with this setup. If, however, we wanted to create the view of the widget programmatically, we would delete the storyboard and replace the NSExtensionMainStoryboard key in the Info.plist with NSExtensionPrincipalClass, with TodayViewController as its value. Since (at the time of this writing) Xcode cannot find Swift classes as extension principal classes, we would also have to add the following line to our TodayViewController:

@objc (TodayViewController)

Update: Make sure to set the "Embedded Content Contains Swift Code" build setting of the main app target to YES. Otherwise your widget written in Swift will crash.

Add dependencies with cocoapods

The widget will get the latest blog posts from the RSS feed of the blog: http://blog.xebia.com/feed/. That means we need something that can read and parse this feed. A search for RSS on CocoaPods gives us the BlockRSSParser as the first result. It seems to do exactly what we want, so we don't need to look any further and create our Podfile with the following contents:

platform :ios, "8.0"

target "XebiaBlog" do

end

target "XebiaBlogRSSWidget" do
pod 'BlockRSSParser', '~> 2.1'
end

It's important to add the dependency only to the XebiaBlogRSSWidget target, since Xcode builds two binaries: one for the app itself and a separate one for the widget. If we added the dependency to all targets, it would be included in both binaries, increasing the total download size of our app. Always add only the necessary dependencies to each of your app and widget targets.

Note: CocoaPods or Xcode might give you problems when you have a target without any pod dependencies. In that case you can add a dependency to your main target and run pod install, after which you might be able to delete it again.

BlockRSSParser is written in Objective-C, which means we need to add an Objective-C bridging header in order to use it from Swift. We add the file XebiaBlogRSSWidget-Bridging-Header.h to our target and add the import:

#import "RSSParser.h"

We also have to tell the Swift compiler about it in our build settings:

[Screenshot: the Objective-C Bridging Header build setting]

Load RSS feed

Finally, time to do some coding. The generated TodayViewController has a function called widgetPerformUpdateWithCompletionHandler. This function gets called every once in a while to ask for new data. It also gets called right after viewDidLoad, when the widget is displayed. The function has a completion handler as a parameter, which we need to call when we're done loading data. A completion handler is used instead of a return value so we can load our feed asynchronously.

In objective-c we would write the following code to load our feed:

[RSSParser parseRSSFeedForRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://blog.xebia.com/feed/"]] success:^(NSArray *feedItems) {
  // success
} failure:^(NSError *error) {
  // failure
}];

In Swift this looks slightly different. Here is the complete implementation of widgetPerformUpdateWithCompletionHandler:

func widgetPerformUpdateWithCompletionHandler(completionHandler: ((NCUpdateResult) -> Void)!) {
  let url = NSURL(string: "http://blog.xebia.com/feed/")
  let req = NSURLRequest(URL: url)

  RSSParser.parseRSSFeedForRequest(req,
    success: { feedItems in
      self.items = feedItems as? [RSSItem]
      completionHandler(.NewData)
    },
    failure: { error in
      println(error)
      completionHandler(.Failed)
  })
}

We assign the result to a new optional variable of type RSSItem array:

var items : [RSSItem]?

The completion handler gets called with either NCUpdateResult.NewData if the call was successful or NCUpdateResult.Failed when the call failed. A third option is NCUpdateResult.NoData which is used to indicate that there is no new data. We'll get to that later in this post when we cache our data.

Show items in a table view

Now that we have fetched our items from the RSS feed, we can display them in a table view. We replace the normal View Controller with a Table View Controller in our storyboard, change the superclass of TodayViewController accordingly and add three labels to the prototype cell. This is no different than in iOS 7, so I won't go into too much detail here (the complete project is on GitHub).

We also create a new Swift class for our custom Table View Cell subclass and create outlets for our 3 labels.

import UIKit

class RSSItemTableViewCell: UITableViewCell {

    @IBOutlet weak var titleLabel: UILabel!
    @IBOutlet weak var authorLabel: UILabel!
    @IBOutlet weak var dateLabel: UILabel!

}

Now we can implement our Table View Data Source functions.

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return items.count
  }
  return 0
}

Since items is an optional, we use Optional Binding to check that it's not nil and then assign it to a temporary non-optional variable: let items. It's fine to give the temporary variable the same name as the class variable.

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  let cell = tableView.dequeueReusableCellWithIdentifier("RSSItem", forIndexPath: indexPath) as RSSItemTableViewCell

  if let item = items?[indexPath.row] {
    cell.titleLabel.text = item.title
    cell.authorLabel.text = item.author
    cell.dateLabel.text = dateFormatter.stringFromDate(item.pubDate)
  }

  return cell
}

In our storyboard we've set the type of the prototype cell to our custom class RSSItemTableViewCell and used RSSItem as its identifier, so here we can dequeue a cell as an RSSItemTableViewCell without being afraid it will be nil. We then use Optional Binding to get the item at our row index. We could also use forced unwrapping, since we know for sure that items is not nil here:

let item = items![indexPath.row]

But the optional binding makes our code safer and prevents a future crash in case our code changes.

We also need to create the date formatter that we use above to format the publication dates in the cells:

let dateFormatter : NSDateFormatter = {
    let formatter = NSDateFormatter()
    formatter.dateStyle = .ShortStyle
    return formatter
}()

Here we use a closure to create the date formatter and to initialise it with our preferred date style. The return value of the closure is then assigned to the property.

Preferred content size

To make sure that we can actually see the table view, we need to set the preferred content size of the widget. We'll add a new function to our class that does this.

func updatePreferredContentSize() {
  preferredContentSize = CGSizeMake(CGFloat(0), CGFloat(tableView(tableView, numberOfRowsInSection: 0)) * CGFloat(tableView.rowHeight) + tableView.sectionFooterHeight)
}

Since widgets all have a fixed width, we can simply specify 0 for the width. The height is calculated by multiplying the number of rows by the row height. Since this sets the preferred height greater than the maximum allowed height of a Today widget, the widget will automatically shrink. We also add the sectionFooterHeight to our calculation, which is 0 for now, but we'll add a footer later on.

When the preferred content size of a widget changes, the resizing of the widget is animated. To have the table view animate nicely along with this transition, we add the following function to our class, which gets called automatically:

override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
  coordinator.animateAlongsideTransition({ context in
    self.tableView.frame = CGRectMake(0, 0, size.width, size.height)
  }, completion: nil)
}

Here we simply set the size of the table view to the size of the widget, which is the first parameter.

Of course we still need to call our update method, as well as reloadData on our tableView. So we add these two calls to the success closure in which we load the items from the feed:

success: { feedItems in
  self.items = feedItems as? [RSSItem]
  
  self.tableView.reloadData()
  self.updatePreferredContentSize()

  completionHandler(.NewData)
},

Let's run our widget:

[Screenshot: the first run of the widget]

It works, but we can make it look better. Table views by default have a white background color and black text color, and that's no different within a Today widget. We'd like to match the style of the standard iOS Today widgets, so we give the table view a clear background and make the text of the labels white. Unfortunately, that makes our labels practically invisible in the storyboard, since the storyboard editor in Xcode still shows a white background for views that have a clear background color.

If we run again, we get a much better looking result:

[Screenshot: the widget with white text on a clear background]

Open post in Safari

To open a blog post in Safari when tapping an item, we need to implement the tableView:didSelectRowAtIndexPath: function. In a normal app we would then use the openURL: method of UIApplication, but that's not available within a Today extension. Instead we need to use the openURL:completionHandler: method of NSExtensionContext. We can retrieve this context through the extensionContext property of our view controller.

override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
  if let item = items?[indexPath.row] {
    if let context = extensionContext {
      context.openURL(item.link, completionHandler: nil)
    }
  }
}
More and Less buttons

Right now our widget takes up a bit too much space in the notification center. So let's change this by showing only 3 items by default and 6 items at most. Toggling between the default and expanded state can be done with a button that we'll add to the footer of the table view. When the user closes and reopens the notification center, we want to show the widget in the same state as before, so we need to remember the expanded state. We can use NSUserDefaults for this. Using a computed property that reads from and writes to the user defaults is a nice way to do this:

let userDefaults = NSUserDefaults.standardUserDefaults()

var expanded : Bool {
  get {
    return userDefaults.boolForKey("expanded")
  }
  set (newExpanded) {
    userDefaults.setBool(newExpanded, forKey: "expanded")
    userDefaults.synchronize()
  }
}

This allows us to use it just like any other property, without noticing that it gets stored in the user defaults. We'll also add variables for our button and for the default and maximum number of rows:

let expandButton = UIButton()

let defaultNumRows = 3
let maxNumberOfRows = 6

Based on the current value of the expanded property, we determine the number of rows our table view should have. Of course it should never display more than the actual number of items we have, so we take that into account and change our function into the following:

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return min(items.count, expanded ? maxNumberOfRows : defaultNumRows)
  }
  return 0
}

Then the code to make our button work:

override func viewDidLoad() {
  super.viewDidLoad()

  updateExpandButtonTitle()
  expandButton.addTarget(self, action: "toggleExpand", forControlEvents: .TouchUpInside)
  tableView.sectionFooterHeight = 44
}

override func tableView(tableView: UITableView, viewForFooterInSection section: Int) -> UIView? {
  return expandButton
}

Depending on the current value of our expanded property, we will show either "Show less" or "Show more" as the button title.

func updateExpandButtonTitle() {
  expandButton.setTitle(expanded ? "Show less" : "Show more", forState: .Normal)
}

When we tap the button, we'll invert the expanded property, update the button title and preferred content size and reload the table view data.

func toggleExpand() {
  expanded = !expanded
  updateExpandButtonTitle()
  updatePreferredContentSize()
  tableView.reloadData()
}

And as a result we can now toggle the number of rows we want to see.

[Screenshots: the widget collapsed and expanded]

Caching

At the moment, each time we open the widget, we first get an empty list and only see the items once the feed is loaded. To improve this, we can cache the retrieved items and display those when the widget is opened, before we load the items from the feed. The TMCache library makes this possible with little effort. We add it to our Podfile and bridging header the same way we did for the BlockRSSParser library.

Here too, a computed property works nicely for caching the items and hiding the actual implementation:

var cachedItems : [RSSItem]? {
  get {
    return TMCache.sharedCache().objectForKey("feed") as? [RSSItem]
  }
  set (newItems) {
    TMCache.sharedCache().setObject(newItems, forKey: "feed")
  }
}

Since the RSSItem class of the BlockRSSParser library conforms to the NSCoding protocol, we can use it directly with TMCache. When we retrieve the items from the cache for the first time, we'll get nil, since the cache is empty. Therefore cachedItems needs to be an optional, as does the downcast, which is why we use the as? operator.

We can now update the cache once the items are loaded simply by assigning a value to the property. So in our success closure we add the following:

self.cachedItems = self.items

And then to load the cached items, we add two more lines to the end of viewDidLoad:

items = cachedItems
updatePreferredContentSize()

And we're done. Now each time the widget is opened it will first display the cached items.

There is one last thing we can do to improve our widget. As mentioned earlier, the completionHandler of widgetPerformUpdateWithCompletionHandler can also be called with NCUpdateResult.NoData. Now that we have the items we loaded previously, we can compare newly loaded items with the old ones and use NoData when they haven't changed. Here is our final implementation of the success closure:

success: { feedItems in
  if self.items == nil || self.items! != feedItems {
    self.items = feedItems as? [RSSItem]
    self.tableView.reloadData()
    self.updatePreferredContentSize()
    self.cachedItems = self.items
    completionHandler(.NewData)
  } else {
    completionHandler(.NoData)
  }
},

And since it's Swift, we can simply use the != operator to check whether the arrays have different content.

Source code on GitHub

As mentioned at the beginning of this post, the source code of the project is available on GitHub, with some minor changes that are not essential to this blog post. Of course pull requests are always welcome. Also let me know in the comments below if you'd like to see this widget released on the App Store.

React In Modern Web Applications: Part 2

Fri, 09/12/2014 - 22:32

As mentioned in part 1 of this series, React is an excellent Javascript library for writing highly dynamic client-side components. However, React is not meant to be a complete MVC framework. Its strength is the View part of the Model-View-Controller pattern. AngularJS is one of the best known and most powerful Javascript MVC frameworks. One of the goals of the course was to show how to integrate React with AngularJS.

Integrating React with AngularJS

Developers can create reusable components in AngularJS by using directives. Directives are very powerful but complex, and the code quickly tends to become messy and hard to maintain. Therefore, replacing the directive content in your AngularJS application with React components is an excellent use case for React. In the course, we created a Timesheet component in React. To use it with AngularJS, create a directive that loads the WeekEntryComponent:

angular.module('timesheet')
    .directive('weekEntry', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                React.renderComponent(new WeekEntryComponent({
                    companies: scope.companies,
                    model: scope.model
                }), element.find('#weekEntry')[0]);
            }
        };
    });

As you can see, a simple bit of code. Since this is AngularJS code, I prefer not to use JSX and write the React-specific code in plain Javascript. However, if you prefer, you could use JSX syntax (do not forget to add the directive at the top of the file!). We are basically creating a component and rendering it on the element with id weekEntry. We pass two values from the directive scope to the WeekEntryComponent as properties. The component will render based on the values passed in.

Reacting to changes in AngularJS

However, the code shown above has one fatal flaw: the component will not re-render when the companies or model values change in the scope of the directive. The reason is simple: React does not know these values are dynamic, and the component gets rendered only once, when the directive is initialised. React re-renders a component in only two situations:

  • If the value of a property changes: this allows parent components to force re-rendering of child components
  • If the state object of a component changes: a component can create a state object and modify it

State should be kept to a minimum, and preferably be located in only one component. Most components should be stateless. This makes for simpler, easier-to-maintain code, as the sketch below illustrates.
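
As a minimal, illustrative example of such a stateless component (the name and property are made up), everything it renders comes in through its props:

// A stateless component: no state object, its output depends only on this.props
var HoursCell = React.createClass({
  render: function () {
    return React.DOM.td(null, this.props.hours);
  }
});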

To make the interaction between AngularJS and React dynamic, the WeekEntryComponent must be re-rendered explicitly by AngularJS whenever a value changes in the scope. AngularJS provides the watch function for this:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(new WeekEntryComponent({
                rows: newData[1].companies,
                clients: newData[0]
              }
            ), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

In this code, AngularJS watches two values in the scope, clients and model. When one of the values is changed by a user action, the function gets called, re-rendering the React component only when both values are defined. If most of the scope is needed in the component, it is better to pass in the complete scope as a property and put a watch on the scope itself. Remember, React is very smart about re-rendering to the browser: only if a change in the scope properties leads to a real change in the DOM will the browser be updated!

Accessing AngularJS code from React

When your component needs to use functionality defined in AngularJS, you need to use a function callback. For example, in our code, saving the timesheet is handled by AngularJS, while the save button is managed by React. We solved this by creating a saveTimesheet function and adding it to the scope of the directive. In the WeekEntryComponent we added a new property: saveMethod:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(WeekEntryComponent({
              rows: newData[1].companies,
              clients: newData[0],
              saveMethod: scope.saveTimesheet
            }), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

Since this function is not going to change, it does not need to be watched. Whenever the save button in the TimeSheetComponent is clicked, the saveTimesheet function is called and the new state of the timesheet is saved, as sketched below.
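
On the React side this boils down to wiring the callback property into an event handler. A minimal sketch (the button and handler here are illustrative, not the actual course code):

// Sketch: the save button delegates to the function the directive passed in
var WeekEntryComponent = React.createClass({
  handleSave: function () {
    // saveMethod is scope.saveTimesheet, handed in as a property
    this.props.saveMethod();
  },
  render: function () {
    return React.DOM.button({onClick: this.handleSave}, 'Save');
  }
});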

What's next

In the next part we will look at how to deal with state in your components, and how to avoid passing callbacks through your entire component hierarchy when changing state.

Xcode 6 GM & Learning Swift (with the help of Xebia)

Thu, 09/11/2014 - 07:32

I guess re-iterating the announcements Apple did on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is the fact that iOS 8 has now reached GM status and Apple sent out the call to bring in your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift. But obviously somebody has done that already:
https://github.com/Wolg/awesome-swift
(And best of all, if you find/notice an epic Swift resource out there, submit a pull request to that repo, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special-purpose update trainings hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Running unit tests on iOS devices

Tue, 09/09/2014 - 09:48

When running a unit test target that needs an entitlement (keychain access), things do not work out of the box in Xcode: you get a somewhat descriptive error in the console about a "missing entitlement". Everything works fine on the Simulator though.

Often this is a case of the executable bundle's code signature no longer being valid because a testing bundle was added/linked to the executable before deployment to the device. The easiest fix is to add a new "Run Script Build Phase" with the content:

codesign --verify --force --sign "$CODE_SIGN_IDENTITY" "$CODESIGNING_FOLDER_PATH"


Now try (cleaning and) running your unit tests again. There's a good chance it now works.

Xebia KnowledgeCast Episode 3

Mon, 09/08/2014 - 15:41

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this third episode, we get a bit more technical, with me interviewing some of the most excellent programmers in the known universe: Age Mooy and Barend Garvelink. Then, I talk education and Smurfs with Femke Bender. Also, I toot my own horn a bit by providing you with a summary of my PMI Netherlands Summit session on Building Your Parachute On The Way Down. And of course, Serge will have Fun With Stickies!

It's been a while, and for those that are still listening to this feed: the Xebia KnowledgeCast has been on hold due to personal circumstances. Last year, my wife lost her job, my father and my mother-in-law died, and we had to take our son out of the daycare center he was in because of the way they were treating him there. So, that had a bit of an effect on my level of podfever. That said, I'm back! And my podfever is up!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!

Credits

Getting started with Salt

Mon, 09/08/2014 - 09:07

A couple of days ago I had the chance to spend a full day working with Salt(Stack). On my current project we are using a different configuration management tool, and my colleagues there claimed that Salt was simpler and more productive. The challenge was easily set: they claimed that a couple of people with no Salt experience, albeit with a little configuration management knowledge, would be productive in a single day.

My preparation was easy: I had to know nothing about Salt... done!

During the day I worked side by side with another colleague who knew little to nothing about Salt. When the day started, the original plan was to do a quick one-hour introduction to Salt. But as we like to dive in head first, this intro was skipped in favor of just getting started. We used an existing Vagrant box that spun up a Salt master and minion we could work on. The target was to get Salt to provision a machine for XL Deploy, complete with the customizations we were doing at our client. Think of custom plugins, logging configuration and custom libraries.

So we got cracking, running down the steps we needed to get XL Deploy installed. The steps were relatively simple, create a user & group, get the installation files from somewhere, install the server, initialize the repository and run it as a service.

The first thing I noticed is that we could simply get started. For the tasks we needed to do (downloading, unzipping etc.) we didn't need any additional states. Actually, during the whole exercise we never downloaded any additional states: everything we needed was provided by Salt from the get-go. Granted, we weren't doing anything special, but it's not a given that everything is available.

During the day we approached the development of our Salt state like we would a shell script. We started from the top and added the steps needed. When we ran into issues with the order of the steps we'd simply move things around to get it to work. Things like creating a user before running a service as that user were easily resolved this way.

Salt uses YAML to define a state, and that was fairly straightforward to use. Sometimes the naming used was strange. For example, the archive state uses the parameter "source" for its source location but "name" for its destination. It's clearly stated in the docs what each parameter is used for, but it's a strange convention nonetheless.
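
A sketch of such an archive state, with made-up paths, to show the convention:

unpack_xl_deploy:
  archive.extracted:
    - name: /opt/xl-deploy            # "name" is the destination directory
    - source: salt://xldeploy/xl-deploy-server.zip
    - archive_format: zip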

We also found that the feedback provided by Salt can be scarce. On more than one occasion we'd enter a command and nothing would happen for a good while. Sometimes there would eventually be a lot of output, but sometimes there wasn't. This would be my biggest gripe with Salt: you don't always get the feedback you'd like.

Things like using templates and hierarchical data (the so-called pillars) proved easy to use. Salt uses Jinja2 as its templating engine; since we only needed simple variable replacement, it's hard to comment on how useful Jinja is. For our purposes it was fine. Using pillars proved equally straightforward. The only issue we encountered here was that we needed to add our pillar to our machine role in the top.sls. Once we did that, we could use the pillar data where needed.
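
A sketch of the kind of pillar-driven templating this enables; the keys and values are illustrative:

# pillar/xldeploy.sls: hierarchical data, referenced from the top.sls
xldeploy:
  user: xld
  home: /opt/xl-deploy

# state file: Jinja2 replaces the variables with the pillar data
xld_user:
  user.present:
    - name: {{ pillar['xldeploy']['user'] }}
    - home: {{ pillar['xldeploy']['home'] }}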

The biggest (and only real) problem we encountered was getting XL Deploy to run as a service. We tried two approaches: one using the default service mechanism on Linux and the other using Upstart. Upstart made it very easy to get the service started, but it wouldn't stop properly. Using the default mechanism we couldn't get the service to start during a Salt run; when we sent it specific commands it would start (and stop) properly, but not during a run. We eventually added a post-stop script to the Upstart config to make sure all the (child) processes stopped properly.

At the end of the day we had a state running that provisioned a machine with XL Deploy, including all the customizations we wanted. Salt basically did what we wanted; apart from the service, everything went smoothly. Granted, we didn't do anything exotic and stuck to rudimentary tasks like downloading, unzipping and copying, but implementing these simple tasks remained simple and straightforward. Overall, Salt did what one might expect.

From my perspective the goal of being productive in a single day was easily achieved. Because of how straightforward it was to implement I feel confident about using Salt for more complex stuff. So, all in all Salt left a positive impression and I would like to do more with it.

Are Testers still relevant?

Sun, 09/07/2014 - 22:07

Last week I talked to one of my colleagues about a tester in his team. He told me that the tester was bored because he had nothing to do: all the developers wrote and executed their tests themselves. Which makes sense, because the tester 2.0 tries to make the team test-infected.

So what happens if every developer in the team has the Testivus? Are you still relevant on the Continuous Delivery train?
Come and join the discussion at the Open Kitchen Test Automation: Are you still relevant?

Testers 1.0

Remember the days when software testers were consulted after everything was built and released for testing. Testing was a big fat QA phase, which was a project by itself. The QA department consisted of test managers analyzing the requirements first. Logical test cases were created and handed to test executors, who created physical test cases and executed them manually. Testers discovered conflicting requirements and serious issues in technical implementations. Which is good, obviously: you don't want to deliver low-quality software. Right?

So product releases were delayed, and the QA department documented everything in a big fat test report. And we all knew it: the QA department would have to do it all over again after the next release.

I remember being a tester during those days. I always asked myself: why am I always the last one thinking about ways to break the system? Does the developer know how easily this functionality can be broken? Does the product manager know that this requirement does not make sense at all?
Everyone hated our QA department. We were portrayed as slow, always delivering bad news and holding back the delivery cycle. But the problem was not delivering the bad news. The timing was.

The way of working needed to be changed.

Testers 2.0

We started training testers to help Agile teams deliver high-quality software during development: the birth of the Tester 2.0, the Agile Tester.
These testers master the Agile Manifesto and the processes and methods that come with it. Collaboration about quality is the key here. Agile Testing is a mindset, and everyone is responsible for the quality of the product. Testers 2.0 helped teams get (more) test-infected. They thought like researchers instead of quality gatekeepers. They became part of the software development and delivery teams, and they looked into possibilities to speed up testing efforts. So they practiced several exploratory testing techniques, focused on reasonable and required tests, given the constraints of a sprint.

When we look back at several Agile teams having a shared understanding of Agile Testing, we saw many multidisciplinary teams become two separate teams: one for developers and the other for QA / testers.
I personally never felt comfortable in those teams. Putting testers with a group of developers is not Agile Testing. Developers still left testing to testers, and testers passively waited for developers to deploy something to be tested. At some point testers became a bottleneck again, so they invested in test automation. Testers became test automators and built their own test code in a separate code base from the development code. Test automators also picked tools that did not foster team responsibility. Therefore test automation got completely separated from product development. We found ways to help testers, test automators and developers collaborate by improving the process, but that was treating the symptom of the problem. Developers were not taking responsibility for automated tests, and testers did not help developers design testable software.

We want test automators and developers to become the same people.

Testers 3.0

If you want to accelerate your business you'll need to become iterative, delivering value to production as soon as possible by doing Continuous Delivery properly. So we need multidisciplinary teams to shorten feedback loops coming from different points of view.

Testers 3.0 are therefore required to accelerate the business by working in the following areas:

Requirement inspection: Building the right thing

The Tester 3.0 tries to understand the business problem behind the requirements by describing examples. It's important to reach a common understanding between the business and technical contexts. So the tester verifies the requirements as soon as possible and uses them as input for Test Automation, applying BDD as a technique that fosters a Human Readable Language. These testers work on automated acceptance tests as soon as possible.

Test Automation: Boosting the software delivery process

When common understanding is reached and the delivery team is ready to implement the requested feature, the tester needs programming skills to get the acceptance tests into a clean code state. The Tester 3.0 uses appropriate Acceptance Test Driven Development tools (like Cucumber), which the whole team supports. But the tester keeps an eye out for better, faster and easier automated testing frameworks to support the team.
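To make that concrete, here is a minimal sketch of what such an executable acceptance test could look like with Cucumber. The feature, the shipping rule and all names are invented for this example; the step definitions use the Javascript flavour (cucumber-js), where this refers to the scenario's World:

Feature: Free shipping
  Scenario: Order above the free-shipping threshold
    Given a cart worth 50 euro
    When the customer checks out
    Then shipping is free

And a matching step definition file:

// features/step_definitions/shipping_steps.js
// A sketch: the 25 euro threshold is an invented business rule.
var assert = require('assert');

module.exports = function () {
  this.Given(/^a cart worth (\d+) euro$/, function (amount, callback) {
    this.cart = { total: parseInt(amount, 10) };
    callback();
  });

  this.When(/^the customer checks out$/, function (callback) {
    this.shippingCost = this.cart.total >= 25 ? 0 : 4.95;
    callback();
  });

  this.Then(/^shipping is free$/, function (callback) {
    assert.equal(this.shippingCost, 0); // a thrown assertion fails the step
    callback();
  });
};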

At the Xebia Craftsmanship Seminar (a couple of months ago) someone asked me if testers should learn how to write code.
I told him that no one is good at everything. But the Tester 3.0 has good testing skills and enough technical background to write automated acceptance tests. Continuous Delivery teams have a shared responsibility, and they automate all boring steps like manual test scripts, performance and security tests. It's very important to know how to automate; otherwise you'll slow down the team. You'll be treated the same as anyone else in the delivery team.
Testers 3.0 try to get developers to think about clean code and ensure high quality code. They look into (and keep up with) popular development frameworks and address their testability. Even the test code is evaluated for quality attributes continuously. It needs the same love and care as code that goes into production.

Living documentation: Treating tests as specifications

At some point you'll end up with a huge set of automated tests telling you everything is fine. The Tester 3.0 treats these tests as specifications and tries to create a living document, which is used for long term requirements gathering. No one will complain about these tests when they are all green and passing. The problem starts when tests start failing and no one can understand why. Testers 3.0 think about their colleagues when they write a specification or test. They need to clearly specify what is being tested in a Human Readable Language.
They are used to changing requirements and specifications. And they don't make a big deal out of it. They understand that stakeholders can change their mind once a product comes alive. So the tester makes sure that important decisions made during new requirement inspections and development are stored and understood.

Relevant test results: Building quality into the process

Testers 3.0 focus on getting extremely fast feedback to determine the quality of software products. Every day. Every night. Every second.
Testers want to deploy new working software features into production more often. So they do whatever it takes to build a high quality pipeline that decreases the quality feedback time during development.
Everyone in your company deserves to have confidence in the software delivery pipeline at any moment. Testers 3.0 think about how they communicate these types of feedback to the business. They provide ways to automatically report these test results about quality attributes. Testers 3.0 ask the business to define quality. Knowing everything was built right, how can they measure they've built the right thing? What do we need to measure when the product is in production?

How to stay relevant as a Tester

So what happens when all of your teammates are completely focused on high quality software using automation?

Testing no longer requires you to manually click through, copy and paste boring scripted test steps you didn't want to do in the first place. You were hired to be skeptical and to make sure that all risks are addressed. It's still important to keep being a researcher for your team and to apply your curiosity to testing accordingly.

Besides being curious, analytical and having great communication skills, you need to learn how to code. Don't work harder. Work smarter by analyzing how you can automate all the boring checks, so you'll have more time to discover other things using your curiosity.

Since testing drives software development, and should no longer be treated as a separate phase in the development process, it's important that teams put test automation at the center of all design decisions. We need Test Automation to boost software delivery by building quality sensors into every step of the process. Every day. Every night. Every second!

Do you want to discuss this topic with other Testers 3.0?  Come and join the Open Kitchen: Test Automation and get on board the Continuous Delivery Train!

 

Welcome to the Swift Playground

Thu, 09/04/2014 - 19:16

Imagine... You're working on Swift code and you need to explain something to a co-worker. Easiest would be to just explain it and show the code, right? So you grab your trusty editor and type some Markdown.

let it = "be awesome"

Great!

So now you have a file filled with content (screenshot: a plain Markdown file with embedded Swift code).

But it would be better if it looked like this (screenshot: the same content rendered as a real Swift playground).

Well, you can, and it's super simple. All you need is some Markdown and:

npm install -g swift-playground-builder

After that it's a matter of running:

playground my-super-nice-markdown-with-swift-code.md
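For reference, a minimal input file could look something like this (the contents are an invented illustration; the builder turns fenced Swift code blocks into editable code and the surrounding Markdown into documentation):

# Explaining Swift to a co-worker

Regular Markdown becomes the documentation of the playground.

```swift
let it = "be awesome"
println(it)
```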

Enjoy!

More info: https://github.com/jas/swift-playground-builder

React in modern web applications: Part 1

Tue, 09/02/2014 - 12:00

At Xebia we love to share knowledge! One of the ways we do this is by organizing 1-day courses during the summer. Together with Frank Visser we decided to do a training about full stack development with Node.js, AngularJS and Facebook's React. The goal of the training was to show the students how one could create a simple timesheet application. This application would use nothing but modern Javascript technologies while also teaching them best practices with regards to setting up and maintaining it.

To further share the knowledge gained during the creation of this training we'll be releasing several blog posts. In this first part we'll talk about why to use React, what React is and how you can incorporate it into your Grunt lifecycle.

This series of blog posts assumes that you're familiar with the Node.js platform and the Javascript task runner Grunt.

What is React?


React is a Javascript library for creating user interfaces made by Facebook. It is their answer to the V in MVC. As it only takes care of the user interface part of a web application React can be (and most often will be) combined with other frameworks (e.g. AngularJS, Backbone.js, ...) for handling the MC part.

In case you're unfamiliar with the MVC architecture, it stands for model-view-controller and it is an architectural pattern for dividing your software into 3 parts with the goal of separating the internal representation of data from the representation shown to the actual user of the software.

Why use React?

There are quite a lot of Javascript MVC frameworks which also allow you to model your views. What are the benefits of using React instead of for example AngularJS?

What sets React apart from other Javascript MVC frameworks like AngularJS is the way React handles UI updates. To dynamically update a web UI you have to apply DOM updates whenever data in your UI changes. These DOM updates, compared to reading data from the DOM, are expensive operations which can drastically slow down your application's responsiveness if you do not minimize the number of updates. React takes a clever approach to minimizing the number of DOM updates by diffing a virtual DOM (not to be confused with the browser's shadow DOM).

In contrast to the normal DOM consisting of nodes, the virtual DOM consists of lightweight Javascript objects that represent your different React components. This representation is used to determine the minimum number of steps required to go from the previous render to the next. React prevents unnecessary re-renders by only re-rendering components whose state has changed: by calling the setState method you mark a component 'dirty', which essentially tells React to update the UI for this component. When setState is called, the component rebuilds the virtual DOM for all its children. React then compares this to the current virtual sub-tree for the same component to determine the changes, and thus the minimum set of updates to apply.

Besides efficiently updating only sub-trees, React batches these virtual DOM changes into real DOM updates. At the end of the React event loop, React looks up all components marked as dirty and re-renders them.
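To make this concrete, here is a small invented example: clicking the button calls setState, marking the component dirty, after which React re-renders the virtual sub-tree and applies only the minimal real DOM change (the changed text node):

/** @jsx React.DOM */
var Counter = React.createClass({
  getInitialState: function() {
    return {count: 0};
  },
  handleClick: function() {
    // setState marks this component 'dirty'; at the end of the event loop
    // React diffs the rebuilt virtual sub-tree against the previous one
    this.setState({count: this.state.count + 1});
  },
  render: function() {
    return <button onClick={this.handleClick}>Clicked {this.state.count} times</button>;
  }
});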

How does React compare to AngularJS?

It is important to note that you can perfectly mix the usage of React with other frameworks like AngularJS for creating user interfaces. You can of course also decide to only use React for the UI and keep using AngularJS for the M and C in MVC.

In our opinion, using React for simple components does not give you an advantage over using AngularJS. We believe the true strength of React lies in demanding components that re-render a lot. React tends to really outperform AngularJS (and a lot of other frameworks) when it comes to UI elements that require a lot of re-rendering. This is due to how React handles UI updates internally as explained above.

JSX

JSX is an XML-like syntax extension for Javascript, recommended for use with React. It lets you describe your UI with familiar HTML-like tags directly inside your Javascript code. Although JSX and React are independent technologies, JSX was built with React in mind. React works without JSX out of the box, but the React team recommends using it. Some of the many reasons for using JSX:

  • It's easier to visualize the structure of the DOM
  • Designers are more comfortable making changes
  • It's familiar for those who have used MXML or XAML

If you decide to go for JSX you will have to compile the JSX to Javascript before running your application. Later on in this article I'll show you how you can automate this using a Grunt task. Besides Grunt there are a lot of other build tools that can compile JSX. To name a few, there are plugins for Gulp, Broccoli or Mimosa.

An example JSX file for creating a simple link looks as follows:

/** @jsx React.DOM */
var link = React.DOM.a({href: 'http://facebook.github.io/react'}, 'React');

Make sure never to forget the starting comment, or your JSX file will not be picked up by the JSX transformer.

Components

With React you can construct UI views using multiple, reusable components. You can separate the different concepts of your application by creating modular components and thus get the same benefits when using functions and classes. You should strive to break down the different common elements in your UI into reusable components that will allow you to reduce boilerplate and keep it DRY.

You can construct component classes by calling React.createClass(). Each component has a well-defined interface and can receive data (in the form of props) specific to that component. A component can have ownership over other components; in React, the owner of a component is the one setting its props. An owner, or parent component, can access its children through this.props.children.
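As a small invented illustration of ownership: the Menu component below owns a Link component and sets its props, while the link text arrives at the child through this.props.children:

/** @jsx React.DOM */
var Link = React.createClass({
  render: function() {
    // href is set by the owner; the element's content is available as this.props.children
    return <a href={this.props.href}>{this.props.children}</a>;
  }
});

var Menu = React.createClass({
  render: function() {
    return <div><Link href="http://facebook.github.io/react">React</Link></div>;
  }
});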

Using React you could create a hello world application as follows:

/** @jsx React.DOM */
var HelloWorld = React.createClass({
  render: function() {
    return <div>Hello world!</div>;
  }
});

Creating a component does not mean it will get rendered automatically. You have to define where you would like to render your different components using React.renderComponent as follows:

React.renderComponent(<HelloWorld />, targetNode);

By using for example document.getElementById or a jQuery selector, you target the DOM node where you would like React to render your component, and you pass it on as the targetNode parameter.
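For example, assuming your page contains an element with id 'app':

var targetNode = document.getElementById('app');
React.renderComponent(<HelloWorld />, targetNode);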

Automating JSX compilation in Grunt

To automate the compilation of JSX files you will need to install the grunt-react package using Node.js' npm installer:

npm install grunt-react --save-dev

After installing the package you have to add a bit of configuration to your Gruntfile.js so that the task knows where your JSX source files are located and where and with what extension you would like to store the compiled Javascript files.

react: {
  dynamic_mappings: {
    files: [
      {
        expand: true,
        src: ['scripts/jsx/*.jsx'],
        dest: 'app/build_jsx/',
        ext: '.js'
      }
    ]
  }
}

To speed up development you can also configure the grunt-contrib-watch package to keep an eye on JSX files. Watching for JSX files will allow you to run the grunt-react task whenever you change a JSX file resulting in continuous compilation of JSX files while you develop your application. You simply specify the type of files to watch for and the task that you would like to run when one of these files changes:

watch: {
  jsx: {
    files: ['scripts/jsx/*.jsx'],
    tasks: ['react']
  }
}

Last but not least you will want to add the grunt-react task to one or more of your grunt lifecycle tasks. In our setup we added it to the serve and build tasks.

grunt.registerTask('serve', function (target) {
  if (target === 'dist') {
    return grunt.task.run(['build', 'connect:dist:keepalive']);
  }

  grunt.task.run([
    'clean:server',
    'bowerInstall',
    'react',
    'concurrent:server',
    'autoprefixer',
    'configureProxies:server',
    'connect:livereload',
    'watch'
  ]);
});

grunt.registerTask('build', [
  'clean:dist',
  'bowerInstall',
  'useminPrepare',
  'concurrent:dist',
  'autoprefixer',
  'concat',
  'react',
  'ngmin',
  'copy:dist',
  'cdnify',
  'cssmin',
  'uglify',
  'rev',
  'usemin',
  'htmlmin'
]);

Conclusion

Due to React's different approach to handling UI changes, it is highly efficient at re-rendering UI components. Besides that, it's easily configurable and integrates into your build lifecycle.

What's next?

In the next article we'll be discussing how you can use React together with AngularJS, how to deal with state in your components and how to avoid passing through your entire component hierarchy using callbacks when updating state.

CocoaHeadsNL @ Xebia on September 16th

Thu, 08/28/2014 - 11:20

On Tuesday the 16th the Dutch CocoaHeads will be visiting us. It promises to be a great night for anybody doing iOS or OSX development. The night starts at 17:00, dinner at 18:00.

Are you an iOS/OSX developer who would like to meet fellow developers? Come join the CocoaHeads on September 16th at our office. More details are on the CocoaHeadsNL meetup page.

What is your next step in Continuous Delivery? Part 1

Wed, 08/27/2014 - 21:15

Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!

No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.

But also if you are just getting started with Continuous Delivery, there is a feasible step to take you forward.

In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took

This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post. This is because the great majority of you has gone through these steps already.

Step 0: Your very first lines of code

Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.

None of us was born with an innate skill for delivering software in a certain way. However, many of us are taught a way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control

At some point during your study or career, you were introduced to Version Control. I remember starting with CVS, migrating to Subversion, and I am currently using Git. Each of these systems is an improvement over the previous one.

It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And for your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process

Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.

Even if you deliver to production less than twice a year, you are better off than a company that delivers its code unpredictably, untested and unmanaged. Or worse, a company that edits its code directly on a production machine.

In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.

So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts

Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?

Let me guess, you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organization is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.

My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?

These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and it lacks management support.

If you don't have this guy working for you, you may have a bigger step to take when continuing towards the next step of Continuous Delivery. To successfully adopt Continuous Delivery in the long run, you are going to need someone like him.

Following steps

In the next parts, we will look at the following steps towards becoming world champion delivering software:

  • Step 4: Continuous Delivery
  • Step 5: Continuous Deployment
  • Step 6: "Hands-off"
  • Step 7: High Scalability

Stay tuned for the following posts.

Synchronize the Team

Tue, 08/26/2014 - 13:52

How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?   

The problem

The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members and with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized when the sprint progresses. 

The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?

Specify the story

In the planning session, after a story is considered ready enough to be pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called 'Specification by Example'. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.

Having the whole team work out the specifications and examples helps the team stay focused on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks. Writing the specifications will also help to determine whether a story is ready enough. When the sprint progresses and all the tests are green, the story should be done as far as building the functionality is concerned.

You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view of the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn down charts, the functional tests provide a good and accurate view of the sprint progress.
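For example (an invented story; the numbers are illustrative only), a specification with examples could look like this in Cucumber:

Scenario Outline: Volume discount
  Given a customer orders goods worth <amount> euro
  When the order is submitted
  Then the discount is <discount> euro

  Examples:
    | amount | discount |
    | 99     | 0        |
    | 100    | 5        |
    | 500    | 25       |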

Design the solution

Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details to prevent big up-front designs and losing the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.

The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of invisible mental images in the heads of team members, there is a tangible image shared with everyone.

The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.

The design can easily be turned into a digital artefact by taking a photo of it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.

Conclusion

The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding within the team is more visible and can be validated throughout the sprint. The team has become more transparent.

It might be a good practice to have the developers write the specification, and the testers or analysts draw the designs on the board. This is to provoke more communication, by getting the people out of their comfort zone and forcing them to ask more questions.

There are more compelling reasons to implement (or not) something like Specification by Example, or to have the team make design overviews. But it also helps the team stay on the same page, when there are visible and testable artefacts to rely on during the sprint.

Vert.x with core.async. Handling asynchronous workflows

Mon, 08/25/2014 - 12:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and Javascript promises.

Clojure's answer comes in the form of core.async: An implementation of CSP for both Clojure and Clojurescript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event-handler.

Vert.x is a young, light-weight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow into a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan hardness of the rock
  3. Use the hardness to calibrate and fire the laser. Report back status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            ;; alts!, timeout and go-loop are used in the later examples
            [clojure.core.async :refer [go go-loop chan put! <! alts! timeout]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function which returns a channel on which the result of eb/send will be put. Apart from wrapping the body in a go block and taking each result with <!, the code now reads almost like a synchronous version.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we can use composition, this is not very performant, as we do not need to wait for the reply from service-A in order to call service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data includes the temperature, humidity and wind speed, which are requested from three different independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback has fired? We need to keep some memory (mutable state) which is updated when each callback fires, and process the data when the last one returns.

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))

3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result on the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request finished, there are no more references to it and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels cannot be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives us the ability to wait on a channel up to a maximum amount of time before giving up. Let's adjust example 2 a bit:

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go-loop [n 3 data {}]
        (if (pos? n)
          ;; alts! returns a [value channel] pair, so take the value with first;
          ;; a nil value means the timeout channel closed before all replies arrived
          (if-some [result (first (alts! [ch t-ch]))]
            (recur (dec n) (merge data result))
            (eb/fail 408 "Request timed out"))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish; otherwise we reply with a timeout failure. Notice that we did not put the timeout parameter in the Vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts much more easily than adding timeout parameters and failure-callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me, and was the original motivation for this blog post, is whether core.async can play nicely with Vert.x. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go-blocks or state machine callbacks. Since the dispatched go-blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.