
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Go: First attempt at channels

Mark Needham - Sat, 12/24/2016 - 11:45

In a previous blog post I mentioned that I wanted to extract blips from The ThoughtWorks Radar into a CSV file and I thought this would be a good mini project for me to practice using Go.

In particular I wanted to try using channels and this seemed like a good chance to do that.

I watched a talk by Rob Pike on designing concurrent applications where he uses the following definition of concurrency:

Concurrency is a way to structure a program by breaking it into pieces that can be executed independently.

He then demonstrates this with the following diagram:

[diagram from the talk]

I broke the scraping application down into four parts:

  1. Find the links of blips to download ->
  2. Download the blips ->
  3. Scrape the data from each page ->
  4. Write the data into a CSV file

I don’t think we gain much by parallelising steps 1) or 4) but steps 2) and 3) seem easily parallelisable. Therefore we’ll use a single goroutine for steps 1) and 4) and multiple goroutines for steps 2) and 3).

We’ll create two channels:

  • filesToScrape
  • filesScraped

And they will interact with our components like this:

  • 2) will write the path of the downloaded files into filesToScrape
  • 3) will read from filesToScrape and write the scraped content into filesScraped
  • 4) will read from filesScraped and put that information into a CSV file.


I decided to write a completely serial version of the scraping application first so that I could compare it to the parallel version. I had the following common code:

scrape/scrape.go

package scrape

import (
	"bufio"
	"io"
	"log"
	"net/http"
	"os"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func checkError(err error) {
	// log.Fatal already prints the error before exiting.
	if err != nil {
		log.Fatal(err)
	}
}

type Blip struct {
	Link  string
	Title string
}

func (blip Blip) Download() File {
	parts := strings.Split(blip.Link, "/")
	fileName := "rawData/items/" + parts[len(parts)-1]

	if _, err := os.Stat(fileName); os.IsNotExist(err) {
		resp, err := http.Get("http://www.thoughtworks.com" + blip.Link)
		checkError(err)
		body := resp.Body

		file, err := os.Create(fileName)
		checkError(err)

		// Copy straight into the file; a bufio.Writer here would need an
		// explicit Flush before Close or the tail of the page could be lost.
		_, err = io.Copy(file, body)
		checkError(err)
		file.Close()
		body.Close()
	}

	return File{Title: blip.Title, Path: fileName}
}

type File struct {
	Title string
	Path  string
}

func (fileToScrape File) Scrape() ScrapedFile {
	file, err := os.Open(fileToScrape.Path)
	checkError(err)

	doc, err := goquery.NewDocumentFromReader(bufio.NewReader(file))
	checkError(err)
	file.Close()

	var entries []map[string]string
	doc.Find("div.blip-timeline-item").Each(func(i int, s *goquery.Selection) {
		entry := make(map[string]string)
		entry["time"] = s.Find("div.blip-timeline-item__time").First().Text()
		entry["outcome"] = strings.Trim(s.Find("div.blip-timeline-item__ring span").First().Text(), " ")
		entry["description"] = s.Find("div.blip-timeline-item__lead").First().Text()
		entries = append(entries, entry)
	})

	return ScrapedFile{File: fileToScrape, Entries: entries}
}

type ScrapedFile struct {
	File    File
	Entries []map[string]string
}

func FindBlips(pathToRadar string) []Blip {
	blips := make([]Blip, 0)

	file, err := os.Open(pathToRadar)
	checkError(err)
	defer file.Close()

	doc, err := goquery.NewDocumentFromReader(bufio.NewReader(file))
	checkError(err)

	doc.Find(".blip").Each(func(i int, s *goquery.Selection) {
		item := s.Find("a")
		title := item.Text()
		link, _ := item.Attr("href")
		blips = append(blips, Blip{Title: title, Link: link })
	})

	return blips
}

Note that we’re using the goquery library to scrape the HTML files that we download.

A Blip is used to represent an item that appears on the radar e.g. .NET Core. A File is a representation of that blip on my local file system and a ScrapedFile contains the local representation of a blip and has an array containing every appearance the blip has made in radars over time.

Let’s have a look at the single threaded version of the scraper:

cmd/single/main.go

package main

import (
	"fmt"
	"encoding/csv"
	"os"
	"github.com/mneedham/neo4j-thoughtworks-radar/scrape"
)


func main() {

	blips := scrape.FindBlips("rawData/twRadar.html")

	var filesToScrape []scrape.File
	for _, blip := range blips {
		filesToScrape = append(filesToScrape, blip.Download())
	}

	var filesScraped []scrape.ScrapedFile
	for _, file := range filesToScrape {
		filesScraped = append(filesScraped, file.Scrape())
	}

	blipsCsvFile, _ := os.Create("import/blipsSingle.csv")
	writer := csv.NewWriter(blipsCsvFile)
	defer blipsCsvFile.Close()

	writer.Write([]string{"technology", "date", "suggestion" })
	for _, scrapedFile := range filesScraped {
		fmt.Println(scrapedFile.File.Title)
		for _, blip := range scrapedFile.Entries {
			writer.Write([]string{scrapedFile.File.Title, blip["time"], blip["outcome"] })
		}
	}
	writer.Flush()
}

rawData/twRadar.html is a local copy of the A-Z page which contains all the blips. This version is reasonably simple: we create an array containing all the blips, download and scrape them into another array, and then write that array into a CSV file. And if we run it:

$ time go run cmd/single/main.go 

real	3m10.354s
user	0m1.140s
sys	0m0.586s

$ head -n10 import/blipsSingle.csv 
technology,date,suggestion
.NET Core,Nov 2016,Assess
.NET Core,Nov 2015,Assess
.NET Core,May 2015,Assess
A single CI instance for all teams,Nov 2016,Hold
A single CI instance for all teams,Apr 2016,Hold
Acceptance test of journeys,Mar 2012,Trial
Acceptance test of journeys,Jul 2011,Trial
Acceptance test of journeys,Jan 2011,Trial
Accumulate-only data,Nov 2015,Assess

It takes a few minutes, and most of the time is spent in the blip.Download() function - work which is easily parallelisable. Let's have a look at the parallel version, where goroutines use channels to communicate with each other:

cmd/parallel/main.go

package main

import (
	"os"
	"encoding/csv"
	"github.com/mneedham/neo4j-thoughtworks-radar/scrape"
)

func main() {
	var filesToScrape chan scrape.File = make(chan scrape.File)
	var filesScraped chan scrape.ScrapedFile = make(chan scrape.ScrapedFile)
	defer close(filesToScrape)
	defer close(filesScraped)

	blips := scrape.FindBlips("rawData/twRadar.html")

	for _, blip := range blips {
		go func(blip scrape.Blip) { filesToScrape <- blip.Download() }(blip)
	}

	for i := 0; i < len(blips); i++ {
		select {
		case file := <-filesToScrape:
			go func(file scrape.File) { filesScraped <- file.Scrape() }(file)
		}
	}

	blipsCsvFile, _ := os.Create("import/blips.csv")
	writer := csv.NewWriter(blipsCsvFile)
	defer blipsCsvFile.Close()

	writer.Write([]string{"technology", "date", "suggestion" })
	for i := 0; i < len(blips); i++ {
		select {
		case scrapedFile := <-filesScraped:
			for _, blip := range scrapedFile.Entries {
				writer.Write([]string{scrapedFile.File.Title, blip["time"], blip["outcome"] })
			}
		}
	}
	writer.Flush()
}

Let's remove the files we just downloaded and give this version a try.

$ rm rawData/items/*

$ time go run cmd/parallel/main.go 

real	0m6.689s
user	0m2.544s
sys	0m0.904s

$ head -n10 import/blips.csv 
technology,date,suggestion
Zucchini,Oct 2012,Assess
Reactive Extensions for .Net,May 2013,Assess
Manual infrastructure management,Mar 2012,Hold
Manual infrastructure management,Jul 2011,Hold
JavaScript micro frameworks,Oct 2012,Trial
JavaScript micro frameworks,Mar 2012,Trial
NPM for all the things,Apr 2016,Trial
NPM for all the things,Nov 2015,Trial
PowerShell,Mar 2012,Trial

So we're down from 190 seconds to 7 seconds, pretty cool! One interesting thing is that the order of the values in the CSV file will be different since the goroutines won't necessarily come back in the same order that they were launched. We do end up with the same number of values:

$ wc -l import/blips.csv 
    1361 import/blips.csv

$ wc -l import/blipsSingle.csv 
    1361 import/blipsSingle.csv

And we can check that the contents are identical:

$ cat import/blipsSingle.csv  | sort > /tmp/blipsSingle.csv

$ cat import/blips.csv  | sort > /tmp/blips.csv

$ diff /tmp/blips.csv /tmp/blipsSingle.csv 


The code in this post is all on github. I'm sure I've made some mistakes/there are ways that this could be done better so do let me know in the comments or I'm @markhneedham on twitter.

Categories: Programming

Logically Fallacious Friday

Herding Cats - Glen Alleman - Fri, 12/23/2016 - 23:09

Most people know nothing about learning; many despise it. Dummies reject as too hard whatever is not dumb - Thomas More, Utopia

The Fallacy - We can't know much of anything about the Future. But in fact, the future is always knowable to some degree of precision and accuracy unless it is truly Unknowable.

Software developers and the IT managers they work for operate in the future. Many of these practitioners view the future as an esoteric, abstract, impractical realm. But the future is where the value of the software is earned. The future is where the cost to develop that software is paid back. The future is where the users of the system will be satisfied with the provided capabilities.

The primary job of management and those developing value for management is to find the future, not just the future in general, but the specific futures for the customers. - Al Ries, in Positioning: The Battle for Your Mind.

This future is not in the resulting products and services but in the cost, schedule, and technical attributes needed to produce these products and services. Knowing this future can be on large-grained boundaries - sometimes called waterfall. Or on fine-grained boundaries - sometimes called spiral, incremental commit, and agile.

All Project Work Operates in the Presence of Uncertainty

All work, even production line work, operates in the presence of uncertainty. Uncertainty comes in two forms - Aleatory and Epistemic.

[chart: aleatory and epistemic uncertainty]

Managing in the presence of these two uncertainties requires one of two approaches:

  • For Irreducible Uncertainties - margin is required since the uncertainty cannot be reduced. This can be schedule margin, cost margin, or technical margin. How much margin is needed must be determined as well. This is typically done with a Monte Carlo Simulation of the underlying statistical processes that are creating the Aleatory uncertainty.
  • For Reducible Uncertainties - redundancy, experiments, prototyping, fault tolerance, fail-safe design, and a variety of other processes to protect the system when the uncertainty creates a risk or fault.

Now to the Logical Fallacy

In the presence of Aleatory and Epistemic uncertainty, risk is created to the cost, schedule, and technical performance of the project. To assess the impact of that risk and to devise the protective actions needed to address it, ESTIMATING is required. Without estimates, the Aleatory and Epistemic uncertainty that exists on ALL projects will go unaddressed and the probability of project success will be significantly reduced, perhaps to Zero.

Without estimating both the probability of occurrence (for reducible uncertainties) and the statistical processes (for irreducible uncertainties), informed decisions cannot be made.

The fallacy - that decisions can be made in the presence of these uncertainties, which exist on all project work, without estimating - willfully ignores these principles.

There are two types of errors made, to which this fallacy adheres:

  • An Error of Omission - a mistake that consists of not doing something you should have done, or not including something such as a value or fact that should be included. I didn't know I should be estimating.
  • An Error of Commission - a mistake that consists of doing something wrong on purpose, such as including a wrong value, or including an amount that is knowingly wrong. I willfully ignored that I should be estimating.


[1] Software Cost Estimation with COCOMO II, Barry Boehm, et al

[2] The Incremental Commitment Spiral Model, Barry Boehm and Jo Ann Lane

[3] The Economics of Iterative Software Development, Walker Royce

[4] Facts and Fallacies of Software Engineering, Robert Glass

[5] Facts and Fallacies of Estimating Software Cost and Schedule

[6] Distinguishing Two Dimensions of Uncertainty, Craig Fox and Gülden Ülkümen, in Perspectives on Thinking, Judging, and Decision Making

[7] Decision Analysis for Professionals, Peter McNamee and John Celona


Categories: Project Management

Go: cannot execute binary file: Exec format error

Mark Needham - Fri, 12/23/2016 - 19:24

In an earlier blog post I mentioned that I’d been building an internal application to learn a bit of Go and I wanted to deploy it to AWS.

Since the application was only going to live for a couple of days I didn't want to spend a long time building anything fancy, so my plan was just to build the executable, copy it to my AWS instance, and then run it.

My initial (somewhat naive) approach was to just build the project on my Mac and upload and run it:

$ go build

$ scp myapp ubuntu@aws...

$ ssh ubuntu@aws...

$ ./myapp
-bash: ./myapp: cannot execute binary file: Exec format error

That didn’t go so well! By reading Ask Ubuntu and Dave Cheney’s blog post on cross compilation I realised that I just needed to set the appropriate environment variables before running go build.

The following did the trick:

env GOOS=linux GOARCH=amd64 GOARM=7 go build

And that’s it! I’m sure there’s more sophisticated ways of doing this that I’ll come to learn about but for now this worked for me.

Categories: Programming

Neo4j: Graphing the ThoughtWorks Technology Radar

Mark Needham - Fri, 12/23/2016 - 18:40

For a bit of Christmas holiday fun I thought it’d be cool to create a graph of the different blips on the ThoughtWorks Technology Radar and how the recommendations have changed over time.

I wrote a script to extract each blip (e.g. .NET Core) and the recommendation made in each radar that it appeared in. I ended up with a CSV file:

|----------------------------------------------+----------+-------------|
|  technology                                  | date     | suggestion  |
|----------------------------------------------+----------+-------------|
|  AppHarbor                                   | Mar 2012 | Trial       |
|  Accumulate-only data                        | Nov 2015 | Assess      |
|  Accumulate-only data                        | May 2015 | Assess      |
|  Accumulate-only data                        | Jan 2015 | Assess      |
|  Buying solutions you can only afford one of | Mar 2012 | Hold        |
|----------------------------------------------+----------+-------------|

I then wrote a Cypher script to create the following graph model:

2016 12 23 16 52 08

WITH ["Hold", "Assess", "Trial", "Adopt"] AS positions
UNWIND RANGE (0, size(positions) - 2) AS index
WITH positions[index] AS pos1, positions[index + 1] AS pos2
MERGE (position1:Position {value: pos1})
MERGE (position2:Position {value: pos2})
MERGE (position1)-[:NEXT]->(position2);

load csv with headers from "file:///blips.csv" AS row
MATCH (position:Position {value:  row.suggestion })
MERGE (tech:Technology {name:  row.technology })
MERGE (date:Date {value: row.date})
MERGE (recommendation:Recommendation {
  id: tech.name + "_" + date.value + "_" + position.value})
MERGE (recommendation)-[:ON_DATE]->(date)
MERGE (recommendation)-[:POSITION]->(position)
MERGE (recommendation)-[:TECHNOLOGY]->(tech);

match (date:Date)
SET date.timestamp = apoc.date.parse(date.value, "ms", "MMM yyyy");

MATCH (date:Date)
WITH date
ORDER BY date.timestamp
WITH COLLECT(date) AS dates
UNWIND range(0, size(dates)-2) AS index
WITH dates[index] as month1, dates[index+1] AS month2
MERGE (month1)-[:NEXT]->(month2);

MATCH (tech)<-[:TECHNOLOGY]-(reco:Recommendation)-[:ON_DATE]->(date)
WITH tech, reco, date
ORDER BY tech.name, date.timestamp
WITH tech, COLLECT(reco) AS recos
UNWIND range(0, size(recos)-2) AS index
WITH recos[index] AS reco1, recos[index+1] AS reco2
MERGE (reco1)-[:NEXT]->(reco2);

Note that I installed the APOC procedures library so that I could convert the string representation of a date into a timestamp using the apoc.date.parse function. The blips.csv file needs to go in the import directory of Neo4j.

Now we’re ready to write some queries.

The Technology Radar has 4 positions that can be taken for a given technology: Hold, Assess, Trial, and Adopt:

  • Hold: Proceed with Caution
  • Assess: Worth exploring with the goal of understanding how it will affect your enterprise.
  • Trial: Worth pursuing. It is important to understand how to build up this capability. Enterprises should try this technology on a project that can handle the risk.
  • Adopt: We feel strongly that the industry should be adopting these items. We use them when appropriate on our projects.

I was curious whether there had ever been a technology where the advice was initially to ‘Hold’ but had later changed to ‘Assess’. I wrote the following query to find out:

MATCH (pos1:Position {value:"Hold"})<-[:POSITION]-(reco)-[:TECHNOLOGY]->(tech),
      (pos2:Position {value:"Assess"})<-[:POSITION]-(otherReco)-[:TECHNOLOGY]->(tech),
      (reco)-[:ON_DATE]->(recoDate),
      (otherReco)-[:ON_DATE]->(otherRecoDate)
WHERE (reco)-[:NEXT]->(otherReco)
RETURN tech.name AS technology, otherRecoDate.value AS dateOfChange;

╒════════════╤══════════════╕
│"technology"│"dateOfChange"│
╞════════════╪══════════════╡
│"Azure"     │"Aug 2010"    │
└────────────┴──────────────┘

Only Azure! The page doesn’t have any explanation for the initial ‘Hold’ advice in April 2010 which was presumably just before ‘the cloud’ became prominent. What about the other way around? Are there any technologies where the suggestion was initially to ‘Assess’ but later to ‘Hold’?

MATCH (pos1:Position {value:"Assess"})<-[:POSITION]-(reco)-[:TECHNOLOGY]->(tech),
      (pos2:Position {value:"Hold"})<-[:POSITION]-(otherReco)-[:TECHNOLOGY]->(tech),
      (reco)-[:ON_DATE]->(recoDate),
      (otherReco)-[:ON_DATE]->(otherRecoDate)
WHERE (reco)-[:NEXT]->(otherReco)
RETURN tech.name AS technology, otherRecoDate.value AS dateOfChange;

╒═══════════════════════════════════╤══════════════╕
│"technology"                       │"dateOfChange"│
╞═══════════════════════════════════╪══════════════╡
│"RIA"                              │"Apr 2010"    │
├───────────────────────────────────┼──────────────┤
│"Backbone.js"                      │"Oct 2012"    │
├───────────────────────────────────┼──────────────┤
│"Pace-layered Application Strategy"│"Nov 2015"    │
├───────────────────────────────────┼──────────────┤
│"SPDY"                             │"May 2015"    │
├───────────────────────────────────┼──────────────┤
│"AngularJS"                        │"Nov 2016"    │
└───────────────────────────────────┴──────────────┘

A couple of these are JavaScript libraries/frameworks, so presumably the advice is now to use React instead. Let’s check:

MATCH (t:Technology)<-[:TECHNOLOGY]-(reco)-[:ON_DATE]->(date), (reco)-[:POSITION]->(pos)
WHERE t.name contains "React.js"
RETURN pos.value, date.value 
ORDER BY date.timestamp

╒═══════════╤════════════╕
│"pos.value"│"date.value"│
╞═══════════╪════════════╡
│"Assess"   │"Jan 2015"  │
├───────────┼────────────┤
│"Trial"    │"May 2015"  │
├───────────┼────────────┤
│"Trial"    │"Nov 2015"  │
├───────────┼────────────┤
│"Adopt"    │"Apr 2016"  │
├───────────┼────────────┤
│"Adopt"    │"Nov 2016"  │
└───────────┴────────────┘

Ember is also popular:

MATCH (t:Technology)<-[:TECHNOLOGY]-(reco)-[:ON_DATE]->(date), (reco)-[:POSITION]->(pos)
WHERE t.name contains "Ember"
RETURN pos.value, date.value 
ORDER BY date.timestamp

╒═══════════╤════════════╕
│"pos.value"│"date.value"│
╞═══════════╪════════════╡
│"Assess"   │"May 2015"  │
├───────────┼────────────┤
│"Assess"   │"Nov 2015"  │
├───────────┼────────────┤
│"Trial"    │"Apr 2016"  │
├───────────┼────────────┤
│"Adopt"    │"Nov 2016"  │
└───────────┴────────────┘

Let’s go off on a different tangent: how many technologies were introduced in the most recent radar?

MATCH (date:Date {value: "Nov 2016"})<-[:ON_DATE]-(reco)
WHERE NOT (reco)<-[:NEXT]-()
RETURN COUNT(*) 

╒══════════╕
│"COUNT(*)"│
╞══════════╡
│"45"      │
└──────────┘

Wow, 45 new things! How were they spread across the different positions?

MATCH (date:Date {value: "Nov 2016"})<-[:ON_DATE]-(reco)-[:TECHNOLOGY]->(tech), 
      (reco)-[:POSITION]->(position)
WHERE NOT (reco)<-[:NEXT]-()
WITH position, COUNT(*) AS count, COLLECT(tech.name) AS technologies
ORDER BY LENGTH((position)-[:NEXT*]->()) DESC
RETURN position.value, count, technologies

╒════════════════╤═══════╤══════════════════════════════════════════════╕
│"position.value"│"count"│"technologies"                                │
╞════════════════╪═══════╪══════════════════════════════════════════════╡
│"Hold"          │"1"    │["Anemic REST"]                               │
├────────────────┼───────┼──────────────────────────────────────────────┤
│"Assess"        │"28"   │["Nuance Mix","Micro frontends","Three.js","Sc│
│                │       │ikit-learn","WebRTC","ReSwift","Vue.js","Elect│
│                │       │ron","Container security scanning","wit.ai","D│
│                │       │ifferential privacy","Rapidoid","OpenVR","AWS │
│                │       │Application Load Balancer","Tarantool","IndiaS│
│                │       │tack","Ethereum","axios","Bottled Water","Cass│
│                │       │andra carefully","ECMAScript 2017","FBSnapshot│
│                │       │Testcase","Client-directed query","JuMP","Cloj│
│                │       │ure.spec","HoloLens","Android-x86","Physical W│
│                │       │eb"]                                          │
├────────────────┼───────┼──────────────────────────────────────────────┤
│"Trial"         │"13"   │["tmate","Lightweight Architecture Decision Re│
│                │       │cords","APIs as a product","JSONassert","Unity│
│                │       │ beyond gaming","Galen","Enzyme","Quick and Ni│
│                │       │mble","Talisman","fastlane","Auth0","Pa11y","P│
│                │       │hoenix"]                                      │
├────────────────┼───────┼──────────────────────────────────────────────┤
│"Adopt"         │"3"    │["Grafana","Babel","Pipelines as code"]       │
└────────────────┴───────┴──────────────────────────────────────────────┘

Lots of new things to explore over the holidays! The CSV files, import script, and queries used in this post are all available on github if you want to play around with them.

Categories: Programming

Stuff The Internet Says On Scalability For December 23rd, 2016

Hey, it's HighScalability time:

 

A wondrous ethereal mix of technology and art. Experience of "VOID"
If you like this sort of Stuff then please support me on Patreon.
  • 2+ billion: Google lines of code distributed over 9+ million source files; $3.6 bn: lower Google taxes using Dutch Sandwich; $14.6 billion: aggregate value of all cryptocurrencies; 2x: graphene-fed silkworms produce silk that conducts electricity; < 100: scientists looking for extraterrestrial life; 48: core Qualcomm server SoC; 455: original TV series in 2016;

  • Quotable Quotes:
    • Ben Thompson: It's so easy to think of tech with an 80s mindset with all the upstarts. We still glorify people in garages. The garage is gone...Our position in the world is not the scrappy upstart. It is the establishment.
    • The Attention Merchants: True brand advertising is therefore an effort not so much to persuade as to convert. At its most successful, it creates a product cult, whose loyalists cannot be influenced by mere information
    • @seldo: Speed of development always wins. Performance problems will (eventually) get engineered away. This is nearly always how technology changes.
    • @evgenymorozov: How Silicon Valley can support basic income: give everyone a bot farm so that we can make advertising $ from fake traffic to their platforms
    • @avdi: Apple has 33 Github repos and 56 contributors. Microsoft now has ~1,200 repos and 2,893 contributors.
    • Peter Norvig: Understanding the brain is a fascinating problem but I think it’s important to keep it separate from the goal of AI which is solving problems ... If you conflate the two it’s like aiming at two mountain peaks at the same time—you usually end up in the valley between them .... We don’t need to duplicate humans ... We want humans and machines to partner and do something that they cannot do on their own.
    • Brave New Greek: Unbounded anything—whether its queues, message sizes, queries, or traffic—is a resilience engineering anti-pattern. Without explicit limits, things fail in unexpected and unpredictable ways. Remember, the limits exist, they’re just hidden. By making them explicit, we restrict the failure domain giving us more predictability, longer mean time between failures, and shorter mean time to recovery at the cost of more upfront work or slightly more complexity.
    • Naren Shankar (Expanse): Everybody feels like they can look at the show and find parts of themselves in it. When you can give people collective ownership of the creative product you get the best from people. At the end of the day it shows. People work their asses off and accomplish the impossible.
    • Richard Jones: a corollary of Moore’s law (sometimes called Rock’s Law). This states that the capital cost of new generations of semiconductor fabs is also growing exponentially
    • Waterloo: His [Napoleon] strategy was simple. It was to divide his enemies, then pin one down while the other was attacked hard and, like a boxing match, the harder he punched the quicker the result. Then, once one enemy was destroyed, he would turn on the next. The best defense for Napoleon in 1815 was attack, and the obvious enemy to attack was the closest.
    • Daniel Lemire: beyond a certain point, reducing the possibility of a fault becomes tremendously complicated and expensive… and it becomes far more economical to minimize the harm due to expected faults
    • @greglinden: “For some products at Baidu, the main purpose is to acquire data from users, not revenue.” - @stuhlmueller
    • strebler:  Deep Learning has made some very fundamental advances, but that doesn't mean it's going to make money just as magically!
    • sulam: Twitter clearly doesn't have growth magic (or they'd be growing faster) -- but is that an engineer's fault? At the end of the day, any user facing engineering is beholden to the product team. Engineers at Twitter can run experiments, but they can't get those experiments shipped unless a PM is behind it.
    • Gil Tene: The right way to read "99%'ile latency of a" is "1 or a 100 of occurrences of 'a' took longer than this. And we have no idea how long". That is the only information captured by that metric. It can be used to roughly deduce "what is the likelihood that 'a' will take longer than that?". But deducing other stuff from it usually simply doesn't work.
    • @esh: Unheralded tiny features like AWS Lambda inside Kinesis Firehose streams replace infrastructure monstrosities with a few lines of code
    • @postwait: Listening to this twitter caching talk... *so* glad my OS doesn't even contemplate OOMs. How is that shit still in Linux? A literal WTF.
    • SomeStupidPoint: Mostly, it was just a choice to save $1-2k on a laptop (every 1-2 years) and spend the money on cellphone data and lattes.
Stuff The Internet Says On Scalability For December 23rd, 2016

Hey, it's HighScalability time:

 

A wondrous ethereal mix of technology and art. Experience of "VOID"
If you like this sort of Stuff then please support me on Patreon.
  • 2+ billion: Google lines of code distributed over 9+ million source files; $3.6 bn: lower Google taxes using Dutch Sandwich; $14.6 billion: aggregate value of all cryptocurrencies; 2x: graphene-fed silkworms produce silk that conducts electricity; < 100: scientists looking for extraterrestrial life; 48: core Qualcomm server SoC; 455: original TV series in 2016;

  • Quotable Quotes:
    • Ben Thompson~ It's so easy to think of tech with an 80s mindset with all the upstarts. We still glorify people in garages. The garage is gone...Our position in the world is not the scrappy upstart. It is the establishment.
    • The Attention Merchants: True brand advertising is therefore an effort not so much to persuade as to convert. At its most successful, it creates a product cult, whose loyalists cannot be influenced by mere information
    • @seldo: Speed of development always wins. Performance problems will (eventually) get engineered away. This is nearly always how technology changes.
    • @evgenymorozov: How Silicon Valley can support basic income: give everyone a bot farm so that we can make advertising $ from fake traffic to their platforms
    • @avdi: Apple has 33 Github repos and 56 contributors. Microsoft now has ~1,200 repos and 2,893 contributors.
    • Peter Norvig: Understanding the brain is a fascinating problem but I think it’s important to keep it separate from the goal of AI which is solving problems ... If you conflate the two it’s like aiming at two mountain peaks at the same time—you usually end up in the valley between them .... We don’t need to duplicate humans ... We want humans and machines to partner and do something that they cannot do on their own.
    • Brave New Greek: Unbounded anything—whether its queues, message sizes, queries, or traffic—is a resilience engineering anti-pattern. Without explicit limits, things fail in unexpected and unpredictable ways. Remember, the limits exist, they’re just hidden. By making them explicit, we restrict the failure domain giving us more predictability, longer mean time between failures, and shorter mean time to recovery at the cost of more upfront work or slightly more complexity.
    • Naren Shankar (Expanse): Everybody feels like they can look at the show and find parts of themselves in it. When you can give people collective ownership of the creative product you get the best from people. At the end of the day it shows. People work their asses off and accomplish the impossible.
    • Richard Jones: a corollary of Moore’s law (sometimes called Rock’s Law). This states that the capital cost of new generations of semiconductor fabs is also growing exponentially
    • Waterloo: His [Napoleon] strategy was simple. It was to divide his enemies, then pin one down while the other was attacked hard and, like a boxing match, the harder he punched the quicker the result. Then, once one enemy was destroyed, he would turn on the next. The best defense for Napoleon in 1815 was attack, and the obvious enemy to attack was the closest.
    • Daniel Lemire: beyond a certain point, reducing the possibility of a fault becomes tremendously complicated and expensive… and it becomes far more economical to minimize the harm due to expected faults
    • @greglinden: “For some products at Baidu, the main purpose is to acquire data from users, not revenue.” - @stuhlmueller
    • strebler:  Deep Learning has made some very fundamental advances, but that doesn't mean it's going to make money just as magically!
    • sulam: Twitter clearly doesn't have growth magic (or they'd be growing faster) -- but is that an engineer's fault? At the end of the day, any user facing engineering is beholden to the product team. Engineers at Twitter can run experiments, but they can't get those experiments shipped unless a PM is behind it.
    • Gil Tene: The right way to read "99%'ile latency of a" is "1 or a 100 of occurrences of 'a' took longer than this. And we have no idea how long". That is the only information captured by that metric. It can be used to roughly deduce "what is the likelihood that 'a' will take longer than that?". But deducing other stuff from it usually simply doesn't work.
    • @esh: Unheralded tiny features like AWS Lambda inside Kinesis Firehose streams replace infrastructure monstrosities with a few lines of code
    • @postwait: Listening to this twitter caching talk... *so* glad my OS doesn't even contemplate OOMs. How is that shit still in Linux? A literal WTF.
    • SomeStupidPoint: Mostly, it was just a choice to save $1-2k on a laptop (every 1-2 years) and spend the money on cellphone data and lattes.
    • @timbray: Oracle trying to monetize Java... Golang/Rust/Elixir all looking better. Assume all JVM langs are potential targets.
    • Kathryn S. McKinley: In programming languages research, the most revolutionary change on the horizon is probabilistic programming, in which developers produce models that estimate the real world and explicitly reason about uncertainty in data and computations. 
    • cindy sridharan: Four Golden Signals 1) Latency 2) Traffic 3) Errors 4) Saturation
    • @FioraAeterna: as a tech company grows in size, the probability of it developing its own in-house bug tracking system approaches 1
    • The Attention Merchants: In 1928, Paley made a bold offer to the nation’s many independent radio stations. The CBS network would provide any of them all of its sustaining content for free—on the sole condition that they agree to carry the sponsored content as well

  • philips: Essentially I see the world broken down into four potential application types: 1) Stateless applications: trivial to scale at a click of a button with no coordination. These can take advantage of Kubernetes deployments directly and work great behind Kubernetes Services or Ingress Services. 2) Stateful applications: postgres, mysql, etc which generally exist as single processes and persist to disks. These systems generally should be pinned to a single machine and use a single Kubernetes persistent disk. These systems can be served by static configuration of pods, persistent disks, etc or utilize StatefulSets. 3) Static distributed applications: zookeeper, cassandra, etc which are hard to reconfigure at runtime but do replicate data around for data safety. These systems have configuration files that are hard to update consistently and are well-served by StatefulSets. 4) Clustered applications: etcd, redis, prometheus, vitess, rethinkdb, etc are built for dynamic reconfiguration and modern infrastructure where things are often changing. They have APIs to reconfigure members in the cluster and just need glue to be operated natively and seamlessly on Kubernetes, and thus the Kubernetes Operator concept

  • Top 5 uses for Redis: content caching; user session store; job & queue management; high speed transactions; notifications.

  • Is machine learning being used in the wild? The answer appears to be yes. Ask HN: Where is AI/ML actually adding value at your company? Many uses you might expect and some unexpected: predicting if a part scanned with an acoustic microscope has internal defects; find duplicate entries in a large, unclean data set; product recommendations; course recommendations; topic detection; pattern clustering; understand the 3D spaces scanned by customers; dynamic selection of throttle threshold; EEG interpretation; predict which end users are likely to churn for our customers; automatic data extraction from web pages; model complex interactions in electrical grids in order to make decisions that improve grid efficiency; sentiment classification; detecting fraud; credit risk modeling; spend prediction; loss prediction; fraud and AML detection; intrusion detection; email routing; bandit testing; optimizing planning/task scheduling; customer segmentation; face and document detection; search/analytics; chat bots; topic analysis; churn detection; phenotype adjudication in electronic health records; asset replacement modeling; lead scoring; semantic segmentation to identify objects in the user's environment to build better recommendation systems and to identify planes (floor, wall, ceiling) to give us better localization of the camera pose for height estimates; classify bittorrent filenames into media categories; predict how effective a given CRISPR target site will be; check volume, average ticket $, credit score and things of that nature to determine the quality and lifetime of a new merchant account; anomaly detection; identify available space in kit from images; optimize email marketing campaigns; investigate & correlate events, initially for security logs; moderate comments; building models of human behavior to provide interactive intelligent agents with a conversational interface; automatically grading kids' essays; predict probability of car accidents based on the sensors of your smartphone; predict how long JIRA tickets are going to take to resolve; voice keyword recognition; produce digital documents in legal proceedings; PCB autorouting.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Go: Templating with the Gin Web Framework

Mark Needham - Fri, 12/23/2016 - 15:30

I spent a bit of time over the last week building a little internal web application using Go and the Gin Web Framework. It took me a while to get the hang of the templating language, so I thought I’d write up some examples.

Before we get started, I’ve got my GOPATH set to the following path:

$ echo $GOPATH
/Users/markneedham/projects/gocode

And the project containing the examples sits inside the src directory:

$ pwd
/Users/markneedham/projects/gocode/src/github.com/mneedham/golang-gin-templating-demo

Let’s first install Gin:

$ go get gopkg.in/gin-gonic/gin.v1

It gets installed here:

$ ls -lh $GOPATH/src/gopkg.in
total 0
drwxr-xr-x   3 markneedham  staff   102B 23 Dec 10:55 gin-gonic

Now let’s create a main function to launch our web application:

demo.go

package main

import (
	"github.com/gin-gonic/gin"
	"net/http"
)

func main() {
	router := gin.Default()
	router.LoadHTMLGlob("templates/*")

	// our handlers will go here

	router.Run("0.0.0.0:9090")
}

We’re launching our application on port 9090 and the templates live in the templates directory which is located relative to the file containing the main function:

$ ls -lh
total 8
-rw-r--r--  1 markneedham  staff   570B 23 Dec 13:34 demo.go
drwxr-xr-x  4 markneedham  staff   136B 23 Dec 13:34 templates
Arrays

Let’s create a route which will display the values of an array in an unordered list:

	router.GET("/array", func(c *gin.Context) {
		var values []int
		for i := 0; i < 5; i++ {
			values = append(values, i)
		}

		c.HTML(http.StatusOK, "array.tmpl", gin.H{"values": values})
	})
templates/array.tmpl

<ul>
  {{ range .values }}
    <li>{{ . }}</li>
  {{ end }}
</ul>

And now we'll cURL our application to see what we get back:

$ curl http://localhost:9090/array
  • 0
  • 1
  • 2
  • 3
  • 4
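Under the hood, Gin's c.HTML renders with Go's standard html/template package, so the {{ range }} action can be exercised standalone without a web server. A minimal, self-contained sketch (the function name and inline template string here are mine, for illustration only):

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// renderValues executes the same {{ range }} action that Gin evaluates
// in array.tmpl, using the standard html/template package directly.
func renderValues(values []int) string {
	tmpl := template.Must(template.New("array").Parse(
		"<ul>{{ range .values }}<li>{{ . }}</li>{{ end }}</ul>"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]interface{}{"values": values}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(renderValues([]int{0, 1, 2}))
	// <ul><li>0</li><li>1</li><li>2</li></ul>
}
```

The data shape is the same one the templates above receive: gin.H is simply a map[string]interface{}.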

What about if we have an array of structs instead of just ints?

import "strconv"

type Foo struct {
	Value1 int // fields must be exported (capitalised) or the template can't read them
	Value2 string
}

	router.GET("/arrayStruct", func(c *gin.Context) {
		var values []Foo
		for i := 0; i < 5; i++ {
			values = append(values, Foo{Value1: i, Value2: "value " + strconv.Itoa(i)})
		}

		c.HTML(http.StatusOK, "arrayStruct.tmpl", gin.H{"values": values})
	})

templates/arrayStruct.tmpl

<ul>
  {{ range .values }}
    <li>{{ .Value1 }} -> {{ .Value2 }}</li>
  {{ end }}
</ul>

cURL time:

$ curl http://localhost:9090/arrayStruct
  • 0 -> value 0
  • 1 -> value 1
  • 2 -> value 2
  • 3 -> value 3
  • 4 -> value 4
Maps

Now let's do the same for maps.

	router.GET("/map", func(c *gin.Context) {
		values := make(map[string]string)
		values["language"] = "Go"
		values["version"] = "1.7.4"

		c.HTML(http.StatusOK, "map.tmpl", gin.H{"myMap": values})
	})
templates/map.tmpl

<ul>
  {{ range .myMap }}
    <li>{{ . }}</li>
  {{ end }}
</ul>

And cURL it:

$ curl http://localhost:9090/map
  • Go
  • 1.7.4

What if we want to see the keys as well?

	router.GET("/mapKeys", func(c *gin.Context) {
		values := make(map[string]string)
		values["language"] = "Go"
		values["version"] = "1.7.4"

		c.HTML(http.StatusOK, "mapKeys.tmpl", gin.H{"myMap": values})
	})
templates/mapKeys.tmpl

<ul>
  {{ range $key, $value := .myMap }}
    <li>{{ $key }} -> {{ $value }}</li>
  {{ end }}
</ul>
$ curl http://localhost:9090/mapKeys
  • language -> Go
  • version -> 1.7.4
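One templating detail worth calling out: unlike a plain for ... range loop over a Go map, the template {{ range }} action visits map keys of basic types in sorted key order, which is why the output above is deterministic. A standalone sketch using text/template (html/template behaves the same way for ordering; the function name is my own):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderMap mirrors mapKeys.tmpl: {{ range }} over a map with string
// keys visits the keys in sorted order, so output is reproducible.
func renderMap(values map[string]string) string {
	tmpl := template.Must(template.New("mapKeys").Parse(
		"{{ range $key, $value := . }}{{ $key }} -> {{ $value }}\n{{ end }}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, values); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(renderMap(map[string]string{"language": "Go", "version": "1.7.4"}))
	// language -> Go
	// version -> 1.7.4
}
```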

And finally, what if we want to select specific values from the map?

	router.GET("/mapSelectKeys", func(c *gin.Context) {
		values := make(map[string]string)
		values["language"] = "Go"
		values["version"] = "1.7.4"

		c.HTML(http.StatusOK, "mapSelectKeys.tmpl", gin.H{"myMap": values})
	})
templates/mapSelectKeys.tmpl

<ul>
  <li>Language: {{ .myMap.language }}</li>
  <li>Version: {{ .myMap.version }}</li>
</ul>
$ curl http://localhost:9090/mapSelectKeys
  • Language: Go
  • Version: 1.7.4

I've found the Hugo Go Template Primer helpful for figuring this out, so that's a good reference if you get stuck. You can find a go file containing all the examples on GitHub if you want to use that as a starting point.

Categories: Programming

Will Agile be trashed?

Xebia Blog - Fri, 12/23/2016 - 10:19
Agile is hot. Almost every Fortune 500 company is “Doing the Agile Thing”. But along with the success, criticism is also growing rapidly. The post “Agile is Dead” from Matthew Kern was extremely popular. Many of his arguments are dead right. For example, Agile has become a brand name and a hype and the original Agile Manifesto has

Quote of the Day

Herding Cats - Glen Alleman - Fri, 12/23/2016 - 05:57

The surprising discovery of Newton's is just this, the clear separation of laws of nature on one hand and initial conditions on the other - Eugene Wigner, in Newton's Principia for the Common Reader, S. Chandrasekhar

There are 5 immutable principles of project success. Without the initial conditions of these five principles, the project has little chance of success.

Categories: Project Management

Annual Tune-up: Improvement Is The Only Option!

Getting older and getting wiser!

At the end of the year I take time for reflection, introspection, and retooling; an activity that I highly recommend. The question I often ask myself as I reflect is how I can become more effective and efficient. For the sake of clarity, I define effectiveness as the ability to deliver desired results. Effectiveness means that we have to know what we are trying to deliver and that what we are delivering matches the need when it’s delivered. Being “effective” is more complicated than just doing what you were asked to do, because that might not be what is needed when you get to the end of a piece of work. Being effective requires efficient execution and carefully listening to feedback. Efficiency is a far simpler topic. Efficiency is doing useful work with the least amount of energy. For knowledge workers, the most significant input into efficiency is their time. A few evenings ago, as my wife and I talked over a glass of wine, cider and a few tacos (it was taco night) about plans for the new year, she chided me on wanting to write more columns and extend the podcast franchise. As Kevin Kruse (SPaMCAST 398) says, there are only 1,440 minutes in a day, and without a time machine it is nearly impossible to generate more. Efficient and effective use of our minutes is more than an academic question; it is a matter directly tied to meeting our goals and feeling fulfilled. Over the years I have found nine improvement areas that commonly can be capitalized on at a personal level. I will openly admit that each item on the list is an area where I strive to be better almost on a daily basis.

  1. Multi-projecting versus Multitasking
  2. Filtering Work
  3. Delegation
  4. Automation
  5. Time Boxing
  6. Blocking Electronic Distractions
  7. Re-planning
  8. Thinly Slice Work (One Step At a Time)
  9. Random Time Accounting Tips

While these items are all useful at a personal level, as we explore each item in more detail I believe you will find that they can be used at a team or group level as well.

The first improvement areas in detail:

  1. Multi-projecting versus Multitasking. Over the years I have been convinced of the evils of even thinking that I can multitask. Multitasking reduces efficiency due to the extra effort of trying to do two things at once and reduces effectiveness due to task confusion and contention. Multi-projecting is a more difficult issue. Multi-projecting occurs when you are asked to work on two or more projects during the same timeframe. I often multi-project. For example, I work on a new podcast every week, a new set of blog entries, and typically one client-facing project, all during the same timeframe. I compartmentalize each project and work on them individually. Done correctly, there is no overlap or contention for resources. Multi-projecting does lose some efficiency due to starting and stopping, but it helps reduce burnout due to over-focus and provides a mechanism to fill odd bits of time, such as when I am sitting in a restaurant alone on the road.

Tools I use:

  • Trello – I use Kanban to track work in order to minimize startup time.
  • Evernote – I typically email notes (which I dictate – we will discuss this during automation) to Evernote about other projects or ideas that pop into mind when I am not working on them. I also capture non-project tasks in a to-do list in Evernote.

(Does anyone know how I can do some level of integration between Trello and Evernote?)

I do not think I am making an overly fine distinction between multitasking and multi-projecting. Multi-projecting requires adopting a time-box mentality and then keeping different projects in those time boxes. I recognize that there is some tradeoff between efficiency (switching costs) and effectiveness (staying fresh and making visible continuous progress). Finding a workable pattern requires constant reappraisal based on feedback.

Entries in the Annual Tune-Up Theme for 2016!


Categories: Process Management

CloudFoundry Route-Service Demo

Ben Wilcock - Thu, 12/22/2016 - 12:06
CloudFoundry Route-Service Demo

This code-demo is an example of a Cloud Foundry Route Service written with Spring Boot.

This application does the following to each request:

  1. Intercepts the incoming request
  2. Logs information about that incoming request
  3. Allows the request to continue to its original destination
  4. Intercepts the response
  5. Logs information about that outgoing response
  6. Allows the response to continue to the intended recipient
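The demo itself is written with Spring Boot, but the six steps above amount to a logging ("wiretap") reverse proxy. Purely as an illustration, here is a rough sketch of the same idea in Go using httputil.ReverseProxy. The names are mine, and note a simplifying assumption: a real Cloud Foundry route service forwards to the URL in the X-CF-Forwarded-Url request header, not to a fixed target as here.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// newWiretapProxy wraps a reverse proxy so that each request is logged on
// the way in and each response on the way out, then passed through.
func newWiretapProxy(target *url.URL) http.Handler {
	proxy := httputil.NewSingleHostReverseProxy(target)
	proxy.ModifyResponse = func(resp *http.Response) error {
		log.Printf("response: %d", resp.StatusCode) // steps 4-5
		return nil                                  // step 6: pass the response on unchanged
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("request: %s %s", r.Method, r.URL.Path) // steps 1-2
		proxy.ServeHTTP(w, r)                              // step 3: continue to the destination
	})
}

// demo stands up a hypothetical backend and routes one request through the proxy.
func demo() string {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello from backend")
	}))
	defer backend.Close()

	target, _ := url.Parse(backend.URL)
	frontend := httptest.NewServer(newWiretapProxy(target))
	defer frontend.Close()

	resp, err := http.Get(frontend.URL + "/demo")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	log.Printf("client got: %q", demo())
}
```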

The rest of this article and the code itself are on GitHub here: https://github.com/benwilcock/pcf-wiretap-route-service

About the Author

Ben Wilcock works for Pivotal as a Senior Solutions Architect. Ben has a passion for microservices, cloud and mobile applications and helps Pivotal’s Cloud Foundry customers to become more responsive, innovate faster and gain greater returns from their software investments. Ben is a respected technology blogger whose articles have featured in DZone, Java Code Geeks, InfoQ, Spring Blog and more.


Categories: Architecture, Programming

Risk Management is How Adults Manage Projects

Herding Cats - Glen Alleman - Wed, 12/21/2016 - 20:18

All project work is uncertain. Uncertainty comes in two types - Reducible (Epistemic) and Irreducible (Aleatory). These uncertainties create the risk to the success of all projects. Without managing in the presence of risk, the probability of project success is significantly reduced, most likely reduced to zero.

First a definition

A risk is an issue or event that could prevent a program or project from meeting its technical, schedule, cost, or safety objectives.

Management in the presence of risk has the following steps: [1]

  1. Identification - is the process of transforming uncertainties about an event or task into distinct risks that can be described, measured, and acted upon. A risk statement is prepared to describe the risk context, condition, consequence, and general time-interval. The context section provides the what, how, when, where, and why of the risk statement. The condition is a single phrase that briefly describes the key circumstances and situations causing concern, doubt, or anxiety. The consequence is a phrase that describes the negative outcome(s) that may occur due to the condition. The identified risk is then submitted as a candidate and either accepted or closed by the program.
  2. Analyze - includes assessing the likelihood and consequences of each risk, determining the timeframe needed to mitigate each risk, grouping or classifying each risk, and prioritizing identified risks. Likelihood assessments use specific criteria to score risks from 1 (very low likelihood of happening) to 5 (nearly certain to happen).
  3. Plan - selects an appropriate risk owner who will be responsible for the risk and for applying one of four handling strategies – research, accept, watch, or mitigate.
    • A research strategy seeks more information to determine the most effective way to reduce the risk’s likelihood or consequence.
    • The accept strategy applies when the risk’s consequences are tolerable or the risk cannot be reasonably mitigated in a cost-effective manner. When a risk is accepted, the risk owner must document a complete acceptance rationale in the risk database.
    • A watch strategy applies when the program chooses not to accept the risk or commit resources and requires a metric to indicate a change in conditions or scoring.
    • Some mitigation plans may require a fallback plan in case the primary mitigation does not achieve risk reduction. A recovery plan may be established for a risk that has a high confidence of becoming a problem or that has a high consequence.
    • The recovery plan is invoked should the risk actually occur and allows the program to plan for future problems proactively.
  4. Track - is a fundamental step in controlling risks. Data, including measures of actual versus planned progress, qualitative descriptions, and quantitative measures, is collected, compiled, and reported so that management can decide whether to update risk mitigation actions, adopt an alternative mitigation approach or handling strategy, analyze other risks, or initiate new risks. For example, management may track quantitative measures of the residual probability that a risk will occur and assess those measures periodically to decide whether to continue mitigation, change the mitigation approach, accept, or close the risk.
  5. Control - management evaluates risk mitigation tracking reports for progress (actual versus planned) and verifies that appropriate tasks and handling plans are in place. If actual progress differs significantly from planned progress, the risk owner should escalate the risk to the next higher review level. Typical decisions made during the step are: continue as planned; re-plan (develop a new or updated mitigation plan); change the primary plan to the fallback plan; accept the risk; or close. The appropriate management level must concur with the closure rationale before a risk is closed. If the residual risk has a score greater than 3, the risk should not be closed but undergo further mitigation or be accepted. Any risk with a score of 3 or lower is assumed to be sufficiently mitigated and may be closed without expending additional resources. Decisions are captured in a program’s risk database.
  6. Communication - communication and documentation occur in all process steps and ensure risks are properly understood, all consequences are considered, and all options for action are identified and prioritized accurately. Risks are documented in the database appropriate to the risk priority. For example, Top Program Risks are documented in the Active Risk Manager database while lower-level risks can be documented in a database at the organizational level responsible for the risk. Each risk database has the ability to produce summary and detailed reports, which facilitate communication between program stakeholders and managers to enable risk-informed decisions.
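To make the scoring and closure rules concrete, here is a toy sketch in Go. The types and the take-the-higher-of-likelihood-and-consequence scoring are my simplifying assumptions (real programs use a full 5x5 risk matrix), but the closure rule follows the control step: a residual score above 3 must not be closed.

```go
package main

import "fmt"

// Risk is an illustrative model of the process above: likelihood and
// consequence are each scored from 1 (very low) to 5 (nearly certain / severe).
type Risk struct {
	Name        string
	Likelihood  int // 1..5
	Consequence int // 1..5
}

// Score takes the higher of the two dimensions -- a simplifying
// assumption standing in for a real 5x5 risk matrix.
func (r Risk) Score() int {
	if r.Likelihood > r.Consequence {
		return r.Likelihood
	}
	return r.Consequence
}

// CanClose encodes the control rule: a residual risk scoring above 3
// must undergo further mitigation or be accepted, not closed.
func (r Risk) CanClose() bool {
	return r.Score() <= 3
}

func main() {
	risks := []Risk{
		{Name: "schedule slip", Likelihood: 4, Consequence: 3},
		{Name: "vendor delay", Likelihood: 2, Consequence: 3},
	}
	for _, r := range risks {
		fmt.Printf("%s: score %d, closable: %v\n", r.Name, r.Score(), r.CanClose())
	}
	// schedule slip: score 4, closable: false
	// vendor delay: score 3, closable: true
}
```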

For this process to work, each activity - in the presence of uncertainty - must be estimated.

So in the end, if we're going to be adults when managing projects, especially projects funded by other people's money, we need to act like adults and estimate. Without estimates of the uncertainties, the risks created by those uncertainties, the effectiveness of our risk handling processes - research, accept, watch, or mitigate in the NASA paradigm - the effectiveness of the controlling processes, and even the effectiveness of the communication processes, there will be little chance of success for our risk management process.

Let's change Tim Lister's quote and call it as it is

Estimating is how adults manage projects. No estimates, no adult management.

and the story in our neighborhood when our sons were in the Scouts...

what's the difference between our organization and the Boy Scouts? The Boy Scouts have adult supervision.

[1] NASA's approach to Continuous Risk Management, described in "NASA's Management of the Orion Multi-Purpose Crew Vehicle Program," September 2016

Related articles Bayesian Statistics and Project Work Making Decisions in the Presence of Uncertainty Making Decisions In The Presence of Uncertainty Five Estimating Pathologies and Their Corrective Actions The Use, Misuse, and Abuse of Complexity and Complex Capabilities Based Planning Just Because You Say Words, It Doesn't Make Them True
Categories: Project Management

Sponsored Post: Loupe, New York Times, ScaleArc, Aerospike, Scalyr, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • The New York Times is looking for a Software Engineer for its Delivery/Site Reliability Engineering team. You will also be a part of a team responsible for building the tools that ensure that the various systems at The New York Times continue to operate in a reliable and efficient manner. Some of the tech we use: Go, Ruby, Bash, AWS, GCP, Terraform, Packer, Docker, Kubernetes, Vault, Consul, Jenkins, Drone. Please send resumes to: technicaljobs@nytimes.com
Fun and Informative Events
  • Your event here!
Cool Products and Services
  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • ScaleArc's database load balancing software empowers you to “upgrade your apps” to consumer grade – the never down, always fast experience you get on Google or Amazon. Plus you need the ability to scale easily and anywhere. Find out how ScaleArc has helped companies like yours save thousands, even millions of dollars and valuable resources by eliminating downtime and avoiding app changes to scale. 

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Docker: Unknown – Unable to query docker version: x509: certificate is valid for

Mark Needham - Wed, 12/21/2016 - 08:11

I was playing around with Docker locally and somehow ended up with this error when I tried to list my docker machines:

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           Unknown   Unable to query docker version: Get https://192.168.99.101:2376/v1.15/version: x509: certificate is valid for 192.168.99.100, not 192.168.99.101

My Google-fu was weak: I couldn't find any suggestions for what this might mean, so I tried shutting the machine down and starting it again!

On the restart I actually got some helpful advice:

$ docker-machine stop
Stopping "default"...
Machine "default" was stopped.
$ docker-machine start
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.

So I tried that:

$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.101:2376": x509: certificate is valid for 192.168.99.100, not 192.168.99.101
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.

And then regenerated my certificates:

$ docker-machine regenerate-certs
Regenerate TLS machine certs?  Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

And now everything is happy again!

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.9.0
Categories: Programming

Post-Agile Age: The Age of Aquarius (Something Better is Beginning)

Shanghai Skyline

Change is in the skyline!

Modernism, which dated from the 1800s to the early 20th century, was replaced by postmodernism. Art Deco was replaced by less extravagant architecture. All movements come to an end and are at some point replaced by something else. The mainstay of software development, waterfall development, introduced in the late 1950s and then documented in 1970, was supplanted by Agile. Organizations are now adopting Agile at a slower rate and with more adaptations that fall outside the intent of the principles in the Agile Manifesto. We have reached, or are reaching, the end of Agile as a movement. Stating that we are approaching Agile's end has elicited a number of responses. For example, I had the following exchange on Facebook:

Michael Miller So Agile, in the end, is just another fad?

Thomas Cagley Jr Not exactly. The point is more that for a variety of reasons Agile as a movement is over even though some are still in the adoption mode. At some point, another movement will catch fire and build from the base that Agile and DevOps have wrought.

Agile as a set of frameworks and techniques is not at risk of going away, but the principal drive of the movement, as described in the Agile Manifesto, is being weighed down by four things:

  • Method Lemmings – Just doing Agile, and therefore often doing Agile inappropriately.
  • Proscriptive Norms – Defining boundaries around methods that reduce flexibility.
  • Brand Driven Eco-Systems – Splintering of schools of thought driven by economic competition.
  • A Lack of Systems Thinking/Management – A resurgence of managing people and steps rather than taking a systems view.

The question then becomes what is next, because there will always be something next. Which is exactly what Woody Zuill asked on Twitter.

Woody Zuill @TCagley Do you have a definition of "post-agile" that you can share?

I am not the first person to use the term "post-agile". Woody uses the term to mean "We accept all the good things that Agile brings, and are ready to explore and innovate beyond". I am far less sanguine. The four drivers suggest that we have reached high tide. But when a wave breaks, while some water rolls back to the sea, some sinks into the beach to feed the animals and water the plants. The Agile movement has helped many teams and organizations to take steps toward unlocking the power of teams. Tools and techniques such as collaboration, kaizen events, retrospectives and frequent planning are part of our basic vocabulary. These tools provide the basis for what is to come, which to some extent makes Woody's definition correct. The definition I use for the post-agile age is the age of improvement punctuated with innovation. The age of improvement will not be limited to process, technology, or even people improvement individually, but will be an age where organizations change how they work based on a process of learning and adapting. When Mike Miller stated:

Michael Miller It sounds like after 50 years we don’t really know how to do SW development — we keep coming up with new methodologies.

He was correct. Of course we don't really know how to do software development. After 50 years of software development, it is time to try new things, combining everything we have learned before with what we will learn tomorrow. The post-agile age is an age of improvement, even if we are all starting at a different point with different constraints and capabilities.

Essays in Post Agile Age Arc include:

  1. Post Agile Age: The Movement Is Dead
  2. Post Agile Age: Drivers of the End of the Agile Movement and Method Lemmings
  3. Proscriptive Norms
  4. Brand Driven Ecosystems
  5. A Lack of Systems Thinking/Management
  6. The Age of Aquarius (Something Better is Beginning) – Current

Categories: Process Management


SE-Radio Episode 278: Peter Hilton on Naming

Felienne talks with Peter Hilton about how to name things. The discussion covers: why naming is much harder than we think, why naming matters in programming and program comprehension, how to create good names and recognize bad names, and how to improve your naming skills. Venue: Felienne's residence, Rotterdam. Related Links: To camelcase or under_score by […]
Categories: Programming

Consider Rolling Wave Roadmap and Backlog Planning

Many agile teams attempt to plan for an entire quarter at a time.

Sometimes, that works quite well. You have deliverables, and everyone understands the order in which you need to deliver them. You use agile because you can receive feedback about the work as you proceed.

You might make small adjustments, and you manage to stay on track with the work. In fact, you often complete what you thought you could complete in a quarter. (Congratulations to you!)

I rarely meet teams like that.

Instead, I meet and work with teams who discover something in the first or second iteration that means the entire rest of the quarter is suspect. As they proceed through those first few features/deliverables, they, including the PO, realize they don’t know what they thought they knew. They discovered something important.

Sometimes, the managers in the organization realize they want this team to work on a different project sometime in the quarter. Or, they want the team to alternate features (in flow) or projects (in iterations) so the managers can re-assess the project portfolio. Or, something occurs outside the organization and the managers need different deliverables.

If you’re like me, you then view all the planning you did for the rest of the quarter as waste. I don’t want to spend time planning for work I’m not going to do. I might need to know something about where the product is headed, but I don’t want to write stories or refine backlogs or even estimate work I’m not going to do.

If you are like me, we have alternatives if we use rolling wave, deliverable-based planning with decreased specific plans.

In this one-quarter roadmap example, you can see how the teams completed the first iteration. That completion changes the color from pink to white. Notice how the last month of the quarter is grayed out. That’s what we think will happen, and we’re not sure.

We only have specific plans for two iterations. As the team completes this iteration, the PO and the team will refine and plan what goes into the third iteration from now (the end of the second month). As the team completes work, the PO (and the PO Value team) can reassess what should go into the last part of this quarter and the final month.

If you work in flow, it’s the same idea if you keep your demos on a cadence.

What if you need a shorter planning horizon? Maybe you don’t need one-quarter plans. You can do the same thing with two-month plans or one-month plans.

This is exactly what happened with a team I’m working with. They tried to plan for a quarter at a time. And, often, it was the PO who needed to change things partway through the quarter. Or, the PO Value team realized they did not have a perfect crystal ball and needed to change the order of the features partway through the quarter.

They tried to move to two-month horizons, and that didn't help. They moved to one-month horizons, and almost always changed the contents for the last half of the second month. In the example above, notice how the Text Transfer work moved farther out, and the secure login work moved closer in.

You might have the same kind of problem. If so, don’t plan details for the quarter. Plan details as far out as you can see, and that might be only one or two iterations in duration. Then, take the principles of what you want (next part of the engine, or next part of search, or whatever it is that you need) and plan the details just in time.

Rolling wave deliverable-based planning works for agile. In fact, you might think it’s the way agile should work.

If you like this approach to roadmapping, please join my Practical Product Owner workshop. All the details are on that page.

Categories: Project Management

Start building Actions on Google

Android Developers Blog - Tue, 12/20/2016 - 18:17

Posted by Jason Douglas, PM Director for Actions on Google

The Google Assistant brings together all of the technology and smarts we've been building for years, from the Knowledge Graph to Natural Language Processing. To be a truly successful Assistant, it should be able to connect users across the apps and services in their lives. This makes enabling an ecosystem where developers can bring diverse and unique services to users through the Google Assistant really important.

In October, we previewed Actions on Google, the developer platform for the Google Assistant. Actions on Google further enhances the Assistant user experience by enabling you to bring your services to the Assistant. Starting today, you can build Conversation Actions for Google Home and request to become an early access partner for upcoming platform features.

Conversation Actions for Google Home

Conversation Actions let you engage your users to deliver information, services, and assistance. And the best part? It really is a conversation -- users won't need to enable a skill or install an app, they can just ask to talk to your action. For now, we've provided two developer samples of what's possible: just say "Ok Google, talk to Number Genie" or try "Ok Google, talk to Eliza" for the classic 1960s AI exercise.
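To make the request/response cycle behind a Conversation Action concrete, here is a minimal sketch (not an official sample) of a fulfillment helper that asks the user a question and keeps the conversation open. The field names are illustrative assumptions modeled on the early Conversation API webhook JSON and should be checked against the Actions on Google documentation:

```python
def ask(prompt, conversation_token=""):
    """Build a minimal Conversation API webhook response that speaks
    `prompt` and then waits for the user's next utterance.

    All field names here are assumptions for illustration, not a
    verbatim API contract.
    """
    return {
        # Opaque state the Assistant echoes back on the next turn.
        "conversation_token": conversation_token,
        # True keeps the conversation (and microphone) open.
        "expect_user_response": True,
        "expected_inputs": [{
            "input_prompt": {
                "initial_prompts": [{"text_to_speech": prompt}],
            },
            "possible_intents": [{"intent": "actions.intent.TEXT"}],
        }],
    }


# A "Number Genie"-style opening turn:
response = ask("I'm thinking of a number from 0 to 100. What's your guess?")
```

Served from the HTTPS fulfillment endpoint registered for your action, a response shaped like this is what tells the Assistant both what to say and that it should listen for the user's reply.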

You can get started today by visiting the Actions on Google website for developers. To help create a smooth, straightforward development experience, we worked with a number of development partners, including conversational interaction development tools API.AI and Gupshup, analytics tools DashBot and VoiceLabs, and consulting companies such as Assist, Notify.IO, Witlingo and Spoken Layer. We also created a collection of samples and voice user interface (VUI) resources, or you can check out the integrations from our early access partners as they roll out over the coming weeks.

Introduction to Conversation Actions by Wayne Piekarski

Coming soon: Actions for Pixel and Allo + Support for Purchases and Bookings

Today is just the start, and we're excited to see what you build for the Google Assistant. We'll continue to add more platform capabilities over time, including the ability to make your integrations available across the various Assistant surfaces like Pixel phones and Google Allo. We'll also enable support for purchases and bookings as well as deeper Assistant integrations across verticals. Developers who are interested in creating actions using these upcoming features should register for our early access partner program and help shape the future of the platform.

Build, explore and let us know what you think about Actions on Google! And to stay in the loop, be sure to sign up for our newsletter, join our Google+ community, and use the "actions-on-google" tag on StackOverflow.
Categories: Programming

Blockchain for Software Developers

From the Editor of Methods & Tools - Tue, 12/20/2016 - 09:18
In a lot of software developer conferences, there are talks about the technical aspects of blockchains, such as how to develop smart contracts on top of Ethereum. But before looking at those, it is crucial to take a step back and understand what the blockchain is, what it brings to the table […]