
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



The Architecture of Algolia’s Distributed Search Network

Guest post by Julien Lemoine, co-founder & CTO of Algolia, a developer friendly search as a service API.

Algolia started in 2012 as an offline search engine SDK for mobile. At this time we had no idea that within two years we would have built a worldwide distributed search network.

Today Algolia serves more than 2 billion user-generated queries per month from 12 regions worldwide. Our average server response time is 6.7ms and 90% of queries are answered in less than 15ms. Our unavailability rate on search is below 10⁻⁶, which represents less than 3 seconds of downtime per month.

The challenges we faced with the offline mobile SDK were technical limitations imposed by the nature of mobile. These challenges forced us to think differently when developing our algorithms because classic server-side approaches would not work.

Our product has evolved greatly since then. We would like to share our experiences with building and scaling our REST API built on top of those algorithms.

We will explain how we use distributed consensus for high availability and synchronization of data across different regions around the world, and how we route queries to the closest location via anycast DNS.
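The majority-quorum idea at the heart of such consensus protocols can be sketched in a few lines (an illustrative toy, not Algolia's actual implementation; the replica model and function names are invented for the example):

```python
def replicate(write, replicas):
    """Send a write to every replica and count the acknowledgements.
    Each replica is modelled as a callable returning True on success."""
    return sum(1 for replica in replicas if replica(write))

def is_committed(acks, cluster_size):
    """A write is durable once a strict majority of replicas hold it:
    any two majorities overlap, so it survives a minority of failures."""
    return acks >= cluster_size // 2 + 1

# Three replicas, one of which is unreachable.
replicas = [lambda w: True, lambda w: True, lambda w: False]
acks = replicate({"op": "set", "key": "query", "value": "result"}, replicas)
```

Here the write gathers two of three acknowledgements, so it commits; real consensus protocols such as Raft or Paxos layer leader election and log ordering on top of this quorum rule.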

The data size misconception
Categories: Architecture

Python: scikit-learn/lda: Extracting topics from QCon talk abstracts

Mark Needham - 11 hours 30 min ago

Following on from Rik van Bruggen’s blog post on a QCon graph he’s created ahead of this week’s conference, I was curious whether we could extract any interesting relationships between talks based on their abstracts.

Talks are already grouped by their hosting track but there’s likely to be some overlap in topics even for talks on different tracks.
I therefore wanted to extract topics and connect each talk to the topic that describes it best.

My first attempt followed an example using Non-Negative Matrix Factorization (NMF), which worked very well for extracting topics but didn’t seem to provide an obvious way to link those topics back to individual talks.

Instead I ended up looking at the lda library which uses Latent Dirichlet Allocation and allowed me to achieve both goals.

I already had some code to run TF/IDF over each of the talks so I thought I’d be able to feed the matrix output from that into the LDA function. This is what I started with:

import csv
 
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF
from collections import defaultdict
from bs4 import BeautifulSoup, NavigableString
from soupselect import select
 
def uri_to_file_name(uri):
    return uri.replace("/", "-")
 
sessions = {}
with open("data/sessions.csv", "r") as sessions_file:
    reader = csv.reader(sessions_file, delimiter = ",")
    next(reader)  # skip the header row
    for row in reader:
        session_id = int(row[0])
        filename = "data/sessions/" + uri_to_file_name(row[4])
        with open(filename) as session_page:
            page = session_page.read()
        soup = BeautifulSoup(page, "html.parser")
        abstract = select(soup, "div.brenham-main-content p")
        if abstract:
            sessions[session_id] = {"abstract" : abstract[0].text, "title": row[3] }
        else:
            abstract = select(soup, "div.pane-content p")
            sessions[session_id] = {"abstract" : abstract[0].text, "title": row[3] }
 
corpus = []
titles = []
for id, session in sorted(sessions.items(), key=lambda t: int(t[0])):
    corpus.append(session["abstract"])
    titles.append(session["title"])
 
n_topics = 15
n_top_words = 50
n_features = 6000
 
vectorizer = TfidfVectorizer(analyzer='word', ngram_range=(1,1), min_df = 0, stop_words = 'english')
matrix =  vectorizer.fit_transform(corpus)
feature_names = vectorizer.get_feature_names()
 
import lda
import numpy as np
 
vocab = feature_names
 
model = lda.LDA(n_topics=20, n_iter=500, random_state=1)
model.fit(matrix)
topic_word = model.topic_word_
n_top_words = 20
 
for i, topic_dist in enumerate(topic_word):
    topic_words = np.array(vocab)[np.argsort(topic_dist)][:-n_top_words:-1]
    print('Topic {}: {}'.format(i, ' '.join(topic_words)))

And if we run it?

Topic 0: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 1: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 2: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 3: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 4: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 5: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 6: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 7: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 8: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 9: 10 faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 10: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 11: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 12: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 13: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 14: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 15: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 16: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 17: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 18: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure
Topic 19: zoosk faced exposing expression external externally extra extraordinary extreme extremes face facebook facilitates faster factor factors fail failed failure

As you can see, every topic has the same set of words, which isn’t what we want. The lda library models raw word counts, so let’s switch out our TF/IDF vectorizer for a simpler count-based one that produces the integer matrix it expects:

vectorizer = CountVectorizer(analyzer='word', ngram_range=(1,1), min_df = 0, stop_words = 'english')

The rest of the code stays the same and these are the topics that get extracted:

Topic 0: time people company did writing real way used let cassandra soundcloud successful know web organization audio lives swift stuck
Topic 1: process development delivery platform developer continuous testing rapidly deployment implementing release demonstrate paas advice hard light predictable radically introduce
Topic 2: way open space kind people change meetings ll lead powerful practice times everyday simple qconlondon organization unconference track extraordinary
Topic 3: apache apis processing open spark distributed leading making environments solr cases brooklyn components existing ingestion contributing data target evolved
Topic 4: management million effective cost halo gameplay player billion ad catastrophic store microsoft final music influence information launch research purchased
Topic 5: product look like use talk problems working analysis projects challenges 2011 functionality useful spread business deep inside happens sensemaker
Topic 6: ll computers started principles free focus face smaller atlas control uses products avoid computing ground billions mean volume consistently
Topic 7: code end users developers just application way line apps mobile features sites hours issues applications write faster game better
Topic 8: ve development teams use things world like time learned lessons think methods multiple story say customer developer experiences organisations
Topic 9: software building docker built challenges monitoring gilt application discuss solution decision talk download source center critical decisions bintray customers
Topic 10: years infrastructure tools language different service lot devops talk adoption scala popular clojure advantages introduced effectively looking wasn includes
Topic 11: high does latency session requirements functional performance real world questions problem second engineering patterns gravity explain discuss expected time
Topic 12: business make build technology technologies help trying developers parts want interfaces small best centres implementations critical moo databases going
Topic 13: need design systems large driven scale software applications slow protocol change needs approach gets new contracts solutions complicated distributed
Topic 14: architecture service micro architectures increasing talk microservices order market value values new present presents services scalable trading practices today
Topic 15: java using fast robovm lmax ios presentation really jvm native best exchange azul hardware started project slowdowns goal bring
Topic 16: data services using traditional create ways support uk large user person complex systems production impact art organizations accessing mirage
Topic 17: agile team experience don work doing processes based key reach extra defined pressure machines nightmare practices learn goals guidance
Topic 18: internet new devices programming things iot big number deliver day connected performing growing got state thing provided times automated
Topic 19: cloud including deploy session api government security culture software type attack techniques environment digital secure microservice better creation interaction

Some of the groupings seem to make sense: Topic 11 contains words related to high-performance, low-latency code, and Topic 15 covers Java, the JVM and related words. Others are more difficult to decipher, e.g. both Topic 14 and Topic 19 talk about micro services, but the latter also mentions ‘government’ and ‘security’, so perhaps the talks linked to that topic come at micro services from a different angle altogether.

Next let’s see which topics a talk is most likely to be about. We’ll look at the first ten:

doc_topic = model.doc_topic_
for i in range(0, 10):
    print("{} (top topic: {})".format(titles[i], doc_topic[i].argmax()))
    print(doc_topic[i].argsort()[::-1][:3])
 
To the Moon (top topic: 8)
[ 8  0 11]
Evolutionary Architecture and Micro-Services - A Match Enabled by Continuous Delivery (top topic: 14)
[14 19 16]
How SoundCloud uses Cassandra (top topic: 0)
[0 6 5]
DevOps and the Need for Speed (top topic: 18)
[18  5 16]
Neuro-diversity and agile (top topic: 7)
[17  7  2]
Java 8 in Anger (top topic: 7)
[ 7 15 12]
APIs that Change Lifestyles (top topic: 9)
[ 9  6 19]
Elasticsearch powers the Citizen Advice Bureau (CAB) to monitor trends in society before they become issues (top topic: 16)
[16 12 19]
Architecture Open Space (top topic: 2)
[ 2 19 18]
Don’t let Data Gravity crush your infrastructure (top topic: 11)
[11 16  3]

So our third talk on the list ‘How SoundCloud uses Cassandra’ does end up being tagged with topic 0 which mentions SoundCloud so that’s good!

Topic 0: time people company did writing real way used let cassandra soundcloud successful know web organization audio lives swift stuck

Its next two topics are 5 & 6, which contain the following words…

Topic 5: product look like use talk problems working analysis projects challenges 2011 functionality useful spread business deep inside happens sensemaker
Topic 6: ll computers started principles free focus face smaller atlas control uses products avoid computing ground billions mean volume consistently

…which are not as intuitive. What about Java 8 in Anger? It’s been tagged with topics 7, 15 and 12:

Topic 7: code end users developers just application way line apps mobile features sites hours issues applications write faster game better
Topic 15: java using fast robovm lmax ios presentation really jvm native best exchange azul hardware started project slowdowns goal bring
Topic 12: business make build technology technologies help trying developers parts want interfaces small best centres implementations critical moo databases going

Topic 15 makes sense since it mentions Java, and perhaps Topics 12 and 7 do as well, since they both mention developers.

So while the topics pulled out are not horrendous, I don’t think they’re particularly useful yet either. These are some of the areas I need to do more research around:

  • How do you measure the success of topic modelling? I’ve been eyeballing the output of the algorithm but I imagine there’s an automated way to do that.
  • How do you determine the right number of topics? I found an article written by Christophe Grainger which explains a way of doing that which I need to look at in more detail.
  • It feels like I would be able to pull out better topics if I had an ontology of computer science/software words and then ran the words through that to derive topics.
  • Another approach suggested by Michael is to find the most popular words using the CountVectorizer and tag talks with those instead.
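On the first of those questions, one automated proxy is topic coherence, for example the UMass measure, which rewards topics whose top words actually co-occur in documents. A minimal sketch (the toy corpus and word lists here are invented for illustration):

```python
from itertools import combinations
from math import log

def umass_coherence(topic_words, documents):
    """UMass coherence: sum over word pairs of
    log((co-document frequency + 1) / document frequency).
    Values closer to zero mean the topic's words co-occur more often."""
    doc_sets = [set(doc) for doc in documents]
    def doc_freq(word):
        return sum(1 for d in doc_sets if word in d)
    def co_doc_freq(w1, w2):
        return sum(1 for d in doc_sets if w1 in d and w2 in d)
    score = 0.0
    for w1, w2 in combinations(topic_words, 2):
        df = doc_freq(w2)
        if df:  # skip words absent from the corpus
            score += log((co_doc_freq(w1, w2) + 1) / df)
    return score

docs = [
    ["java", "jvm", "bytecode", "garbage", "collector"],
    ["java", "jvm", "performance", "latency"],
    ["agile", "scrum", "standup", "retrospective"],
    ["agile", "pairing", "tdd", "scrum"],
]
coherent = umass_coherence(["java", "jvm"], docs)      # words co-occur
incoherent = umass_coherence(["java", "scrum"], docs)  # words never co-occur
```

Averaging this score across topics for different values of n_topics is also one rough way of approaching the second question.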

If you have any suggestions let me know. The full code is on GitHub if you want to play around with it.
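Michael’s suggestion from the last bullet might look something like this (a toy sketch; the corpus and function names are made up, and real abstracts would need the same stop-word filtering as above):

```python
from collections import Counter

def top_words(corpus, n=5):
    """The most frequent words across all abstracts: a crude,
    model-free alternative to topic modelling."""
    counts = Counter(word for doc in corpus for word in doc.split())
    return [word for word, _ in counts.most_common(n)]

def tag_talk(abstract, popular_words):
    """Tag a talk with whichever of the popular words its abstract contains."""
    present = set(abstract.split())
    return [word for word in popular_words if word in present]

abstracts = ["java jvm java scala", "java agile scrum", "scala agile jvm"]
popular = top_words(abstracts, n=3)
tags = tag_talk("java agile scrum", popular)
```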

Categories: Programming

Lightweight software architecture - an interview with Fog Creek

I recently did a short interview with the folks from Fog Creek (creators of Stack Exchange, Trello, FogBugz, etc) about lightweight approaches to software architecture, my book and so on. The entire interview is only about 8 minutes in length and you can watch/listen/read it on the Fog Creek blog.

Read more...

Categories: Architecture

Google Play services 7.0 - Places Everyone!

Android Developers Blog - Thu, 03/05/2015 - 02:43

Posted by Ian Lake, Developer Advocate

Today, we’re bringing you new tools to build better apps with the rollout of Google Play services 7.0. With this release, we’re delivering improvements to location settings experiences, a brand new API for place information, new fitness data, Google Play Games, and more.

Location Settings Dialog

While the FusedLocationProviderApi combines multiple sensors to give you the optimal location, the accuracy of the location your app receives still depends greatly on what settings are enabled on the device (e.g. GPS, wifi, airplane mode, etc). In Google Play services 7.0, we’re introducing a standard mechanism to check that the necessary location settings are enabled for a given LocationRequest to succeed. If there are possible improvements, you can display a one touch control for the user to change their settings without leaving your app.

This API is a great opportunity to deliver a much better user experience, particularly when location information is critical to your app. Google Maps, for example, saw a dramatic increase in the number of users in a good location state after integrating the Location Settings dialog.

Places API

Location can be so much more than a latitude and longitude: the new Places API makes it easy to get details from Google’s database of places and businesses. The built-in place picker makes it easy for the user to pick their current place and provides all the relevant place details including name, address, phone number, website, and more.

If you prefer to provide your own UI, the getCurrentPlace() API returns places directly around the user’s current location. Autocomplete predictions are also provided to allow a low latency search experience directly within your app.

You can also manually add places with the addPlace() API and report that the user is at a particular place, ensuring that even the most explorative users can input and share their favorite new places.

The Places API will also be available cross-platform: in a few days, you’ll be able to apply for the Places API for iOS beta program to ensure a great and consistent user experience across mobile platforms.

Google Fit

Google Fit makes building fitness apps easier with fitness-specific APIs for retrieving sensor data like current location and speed, collecting and storing activity data in Google Fit’s open platform, and automatically aggregating that data into a single view of the user’s fitness data.

In Google Play services 7.0, the previous Fitness.API that you passed into your GoogleApiClient has now been replaced with a number of APIs, matching the high level set of Google Fit Android APIs:

  • SENSORS_API to access raw sensor data via SensorsApi
  • RECORDING_API to record data via RecordingApi
  • HISTORY_API for inserting, deleting, or reading data via HistoryApi
  • SESSIONS_API for managing sessions via SessionsApi
  • BLE_API to interact with Bluetooth Low Energy devices via BleApi
  • CONFIG_API to access custom data types and settings for Google Fit via ConfigApi

This change significantly reduces the memory requirement for Google Fit enabled apps running in the background. Like always, apps built on previous versions of Google Play services will continue to work, but we strongly suggest you rebuild your Google Fit enabled apps to take advantage of this change.

Having all the data can be an empowering part of making meaningful changes and Google Fit is augmenting their existing data types with the addition of body fat percentage and sleep data.

Google Mobile Ads

Since we launched Google Analytics in AdMob last year, we’ve found the integration of AdMob and Google Analytics a powerful combination for analyzing how your users really use your app. This new release enables any Google Mobile Ads SDK implementation to automatically get Google Analytics integration, giving you the number of users and sessions, session duration, operating systems, device models, geography, and automatic screen reporting without any additional development work.

In addition, we’ve made numerous improvements across the SDK including ad request prefetching (saving battery usage and improving apparent latency) and making the SDK MRAIDv2 compliant.

Google Play Games

Announced at Game Developers Conference (GDC), we’re offering new tools to supercharge your games on Google Play. Included in Google Play services 7.0 is the Nearby Connections API, allowing games to seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV.

App Indexing

App Indexing lets Google index apps just like websites, enabling Google search results to deep-link directly into your native app. We've simplified the App Indexing API to make this integration even easier for you by combining the existing view()/viewEnd() and action()/end() flows into a single start() and end() API.

Changes to GoogleApiClient

GoogleApiClient serves as the common entry point for accessing Google APIs. For this release, we’ve made retrieval of Google OAuth 2.0 tokens part of GoogleApiClient, making it much easier to request server auth codes to access Google APIs.

SDK Coming Soon!

We will be rolling out Google Play services 7.0 over the next few days. Expect an update to this blog post, published documentation, and the availability of the SDK once the rollout is completed.

To learn more about Google Play services and the APIs available to you through it, visit the Google Services section on the Android Developer site.

Categories: Programming

Dogma Driven Development

Actively Lazy - Wed, 03/04/2015 - 21:24

We really are an arrogant, opinionated bunch, aren’t we? We work in an industry where there aren’t any right answers. We pretend what we do is computer “science”, when in reality it’s more art than science. It certainly isn’t engineering. Engineering suggests an underlying physics, mathematical models of how the world works. Is there a mathematical model of how to build software at scale? No. Do we understand the difference between what makes good software and bad software? No. Are there papers with published proofs of whether this idea or that idea has any observable difference on written software, as practised by companies the world over? No. It turns out this is a difficult field: software is weird stuff. And yet we work in an industry full of close-minded people, convinced that their way is The One True Way. It’s not science, it’s basically art. Our industry is dominated by fashion.

Which language we work in is fashion: should we use Ruby, or Node.js, or maybe Clojure? Hey, Go seems pretty cool. By which I mean “I read about it on the internet, and I’d quite like to put it on my CV, so can I please f*** up your million pound project in a big experiment of whether I can figure out all the nuances of the language faster than the project can derail?”

If it’s not the language we’re using, it’s architectural patterns. The dogma attached to REST. Jesus H Christ. It’s just a bunch of HTTP requests, no need to get so picky! For a while it was SOA. Then that became the old legacy thing, so now it’s all micro-services, which are totally different. Definitely. I read it on the internet, it must be true.

Everyone has their opinions. Christ, we’ve got our opinions. Thousands of blogs and wankers on twitter telling you what they think about the world (exactly like this one). As if one person’s observations are useful for anything more than being able to replicate their past success, should you ever by mistake find yourself on their timeline from about six weeks ago.

For example: I wrote a post recently about pairing, and some fine specimen of internet based humanity felt the need to tell me that people who need to pair are an embarrassment to the profession, that we should find another line of work. Hahaha I know, don’t read the comments. Especially when it’s in reply to something you wrote. But seriously now, is it necessary to share your close minded ignorance with the world?

I shouldn’t get worked up about some asshat on the internet. But it’s not just some asshat on the internet. There are hundreds of thousands of these asshats with their closed minds and dogmatic views on the world. And not just asshats spouting off on the internet, but asshats getting paid to build the software that increasingly runs all our lives. When will we admit that we have no idea what we’re doing? The only way to get better is to learn as many tools and techniques as we can and hopefully, along the way, we’ll learn when to apply which techniques and when not to.

For example, I’ve worked with some people that don’t get TDD. Ok, fine – some people just aren’t “test infected”. And a couple of guys that really would rather gut me and fry my liver for dinner than pair with me. Do I feel the need to evangelise to them as though I’ve just found God? No. Does it offend me that they don’t follow my religion? No. Do I feel the need to suicide bomb their project? No. It’s your call. It’s your funeral. When I have proof that my way is The One True Way and yours is a sham, you can damn well bet I’ll be force feeding it to you. But given that ain’t gonna happen: I think we’re all pretty safe. If you don’t wanna pair, you put your headphones on and disappear into your silent reverie. Those of us that like pairing will pair, those of us that don’t, won’t. I’m fine with that.

The trouble is, this is a farcical echo chamber of an industry, where the lessons of 40 years ago still haven’t been learnt properly. Where we keep repeating the mistakes of 20 years ago. Of 10 years ago. Of 5 years ago. Of 2 years ago. Of last week. For Christ’s sake people, can we not just learn a little of what’s gone before? All we have is mindless opinion, presented as fact. Everyone’s out to flog you their new shiny products, or whatever bullshit service they’re offering this week. No, sorry, it’s all utter bollocks. We know less about building decent software now than we did 40 years ago. It’s just now we build a massive amount more of it. And it’s even more shit than it ever was. Only now we have those crazy bastards that otherwise would stand on street corners telling me that Jesus would save me if only I would let him; except now they’re selling me scrum master training or some other snake oil.

All of this is unfortunately entirely indistinguishable from reasoned debate, so youngsters entering the industry have no way to know that it’s just a bunch of wankers arguing over which colour to paint this new square wheel they invented. After a few years they become as jaded and cynical as the rest of us and decide to take advantage of all the other dumb fools out there. They find their little niche, their little way of making the world a little bit worse but themselves a little bit richer. And so the cycle repeats. Fashion begets fashion. Opinion begets opinion.

There aren’t any right answers in creating software. I know what I’ve found works some of the time. I get paid to put into practice what I know. I hope you do, too. But we’ve all had a different set of experiences which means we often don’t agree on what works and what doesn’t. But this is all we have. The plural of anecdote is not data.

All we have is individual judgement, borne out of individual experience. There is no grand unified theory of Correct Software Development. The best we can hope to do is learn from each other and try as many different approaches as possible. Try and fail safely and often. The more techniques you’ve tried the better the chance you can find the right technique at the right time.

Call it craftsmanship if you like. Call it art if you like. But it certainly isn’t science. And I don’t know about you, but it’s a very long time since I saw any engineering round these parts.


Categories: Programming, Testing & QA

10 Reasons to Consider a Multi-Model Database

This is a guest post by Nikhil Palekar, Solutions Architect, FoundationDB.

The proliferation of NoSQL databases is a response to the needs of modern applications. Still, not all data can be shoehorned into a particular NoSQL model, which is why so many different database options exist in the market. As a result, organizations are now facing serious database bloat within their infrastructure.

But a new class of database engine recently has emerged that can address the business needs of each of those applications and use cases without also requiring the enterprise to maintain separate systems, software licenses, developers, and administrators.

These multi-model databases can provide a single back end that exposes multiple data models to the applications it supports. In that way, multi-model databases eliminate fragmentation and provide a consistent, well-understood backend that supports many different products and applications. The benefits to the organization are extensive, but some of the most significant benefits include:

1. Consolidation
Categories: Architecture

The Use, Misuse, and Abuse of Complexity and Complex

Herding Cats - Glen Alleman - Wed, 03/04/2015 - 16:54

Our world is complex and becoming more complex all the time. We are connected to, and in turn driven by, a complex web of interacting technology and processes. These interacting technologies and processes are implemented by information and communication technologies that are themselves complex. It is difficult to apply such a broad topic as complexity to the equally broad topic of developing software systems, or the even broader topic of engineered systems.

Measuring complexity in engineered systems is a highly varying concept. [1]

These complex systems often create complexity in turn. But care is needed in tossing around words like complex and complexity. If these systems are in fact engineered, rather than simply left to emerge on their own, we can apply some principles to control the unwanted complexity of these complex systems.

First, some definitions. These are not the touchy-feely definitions found in places like Cynefin, where the units of measure of complex, complexity, and chaos are nowhere to be found. Cynefin was developed by David Snowden in the context of management and organizational strategy. We're interested in the system complexity of things, the people who build them, and the environments where they are deployed. But this also means measuring complex and complexity in units meaningful to the decision makers. These units must somehow be connected to the cost, schedule, and probability of success for those paying for the work.

Complexity has turned out to be very difficult to define. The dozens of definitions that have been offered all fall short in one respect or another, classifying something as complex which we intuitively would see as simple, or denying an obviously complex phenomenon the label of complexity. Moreover, these definitions are either only applicable to a very restricted domain, such as computer algorithms or genomes, or so vague as to be almost meaningless. (From Principia Cybernetica) 

Some more background about complex systems and their complexity: [4]

  • A System is a set of interacting components - whether human-made, naturally-occurring, or a combination of both.
  • By "interact" we mean the exchange of physical force, energy, mass flow, or information, such that one component can change the state of another. For software systems, this is the exchange of information or state knowledge, or an impact on the outcome of another component.
  • The technologies or natures of these systems may be mechanical, chemical, electronic, biological, informational (software), or combinations of these or others.
  • The behavior of a system can include "emergent" aspects that are not a characteristic of any individual component, but arise from their combination.
  • Emergent properties can be valuable (e.g., delivery of new services) or undesired (e.g., dangerous or unstable).
  • The behavior of a system is often not easily predicted from the behavior of its individual components, and may also be more complex.
  • The complexity of human-engineered systems is growing, in response to demands for increased sophistication, in response to market or government competition, and enabled by technologies.
  • It has become relatively easy to construct systems which cannot be so readily understood. 

What is complexity then?

The words chaos and complexity have been used to mean disorder and complications for centuries. Only in the last thirty years have they been used to refer to mathematical and scientific bodies of knowledge. [6]

Often something is called complex when we can't fully understand its structure or behavior. It is uncertain, unpredictable, complicated, or just difficult to understand. The inability of a human mind to grasp the whole of a complex problem and predict the outcome is often described as Subjective Complexity.5

The complex and complexity I'm speaking about are for Engineered Systems: products and services used by organizations, but engineered for their use. Their use can be considered complex, can even create complexity, and is many times emergent. But the Cynefin approach to complexity is ethereal, without the principled basis found in engineering and, more importantly, Systems Engineering.3 This appears to be why agilists toss around the terms found in Cynefin: engineering of the software is not a core principle of agile development; rather, emergent design and architecture are the basis of the Agile Manifesto.

Here's an example of the engineering side of complex systems, from Dr. Sheard's presentation "Twelve Roles and Three Types of Systems Engineering," NASA Goddard Space Flight Center, Systems Engineering Seminar, Complexity and Systems Engineering, April 5, 2011,

[Slide from "Twelve Roles and Three Types of Systems Engineering"]

Before applying these definitions to problems for developing software, there is more work to do.

In complex systems there are entities that participate in the system2

  • The technical system being designed and built.
  • The socio-technical systems that are building the systems - the project team or production team.
  • The technological Environment into which the system will be inserted when the system is complete and deployed. The socio-political system related to the technological environment. This is generally the interaction of the system stakeholders with the resulting system.
  • The subjective human experience when thinking about, designing, or using the system, called Cognition.

Cynefin does not make these distinctions; instead it separates the system into Complex, Complicated, Chaotic, and Obvious, without distinguishing which engineered portion of the system these apply to.

So when we hear about Complex Adaptive Systems in the absence of a domain and the mathematics of such a system, care is needed. It is likely no actionable information will be available in units of measure meaningful to the decision makers to help them make decisions, just a set of words.

References

1 "Complexity Types: from Science to Systems Engineering," Sarah Sheard and Ali Mostashari, Proceedings of the 21st Annual International Symposium of the International Council on Systems Engineering

2 "Systems Engineering in Complexity Context," Sarah Sheard, Proceedings of the 23rd Annual International Symposium of the International Council on Systems Engineering

3 Systems Engineering Principles and Practices, 2nd Edition, Alexander Kossiakoff, William N. Sweet, Samuel J. Seymour, and Steven M. Biemer, John Wiley & Sons.

4 The Challenge of Complex Systems, INCOSE Crossroads of America Chapter.

5 “On Systems architects and systems architecting: some thoughts on explaining the art and science of system architecting” H. G. Sillitto, Proceedings of INCOSE IS 2009, Singapore, 20-23 July. 

6 Practical Applications of Complexity Theory for Systems Engineers, Sarah Sheard, Proceedings of the 15th Annual International Symposium of the International Council on Systems Engineering

Categories: Project Management

Failure is not an Option

Herding Cats - Glen Alleman - Wed, 03/04/2015 - 04:30

There is a popular notion in the agile world, and among some business gurus, that failure is encouraged as part of the learning process. What is not stated is when and where this failure can take place.

The picture to the left is from the flight of the last Titan IV launch vehicle. I was outside the SCIF, but got to see everything up to the 2nd stage separation.

The Martin Company’s launch vehicle built a five-decade legacy that goes back to the earliest rockets designed and built in the United States. The Intercontinental Ballistic Missile (ICBM) program; Project Gemini, NASA’s 2nd human spaceflight program; the Mars Viking landers; the Voyager deep space probes; communications and reconnaissance satellites: all of these programs and more relied on the Titan for a safe and dependable launch.

The final version flew when the program was retired after delivering a National Reconnaissance Office payload to orbit on October 19, 2005. A total of 368 Titans were flown, with capabilities ranging from Earth reconnaissance and military and civil communications to human and robotic exploration.

In this domain, failure is not an option. Many would correctly say failures were found before use. That is correct: Design, Development, Test, and Evaluation (DDT&E) is the basis of assuring the system works when commanded to do so.

In domains without a capability that must perform on demand, fail fast and fail often may be applicable.

Choose the domain before suggesting a process idea is applicable.

Categories: Project Management

The House of Lean, or Is That The House of Quality?

House of Lean

Lean is the systematic removal of waste, known as muda in Japanese, within a process. Much of our understanding of lean as a process improvement tool is a reflection of the Toyota Production System (TPS). In the parlance of TPS, the container for lean ideas and concepts is the House of Quality. Larman, Leffingwell and others have modified the House of Quality metaphor that Toyota leveraged into the House of Lean (HoL), in order to focus on the flow of work. That focus on flow makes lean concepts an important component for scaling Agile in frameworks like SAFe.

Even without the rebranding, the core of lean provides a focus on how work is done, which improves the flow or smoothness of work. The focus on flow reduces variance (known as mura in Japanese). Lean identifies variance by comparing the outcome of work to development standards, exposing existing problems so waste can be reduced. I am assuming that once waste is exposed, you do something about it.

The concept of the House of Lean or the House of Quality has many variants. All that I have studied are valuable; however, we will use the Larman/Leffingwell version of the HoL, as this version seems to have found fertile ground in the software development field. The House of Lean we will explore consists of six components. They are:

  1. A ceiling or roof, which represents the goal. The goal is typically defined as delivering the maximum value in the shortest sustainable lead-time while providing the highest value and quality to customers, people and society.
  2. Pillar one (typically shown on the left side of the House of Lean) represents respect for people. Work of any sort is built around people. People need to be empowered to assess and evolve how they work within the standards of the organization. Agile reflects this respect for people in the principle of self-organization.
  3. Pillar two (typically shown on the right side of the House of Lean) represents continuous improvement. Continuous improvement, often termed Kaizen or good change, is the relentless removal of inefficiencies. Inefficiencies (waste) detract from or keep an organization from attaining the goal.
  4. Between the two pillars are housed:
    1. Delivery practices that reflect the techniques used to deploy lean, such as great engineers, cadence, cross-functional teams, team rooms and process visualization. In the SAFe version of the HoL, the 14 Lean Principles often subsume a discussion of delivery practices. The inclusion in the HoL of specific types of lean and Agile delivery practices helps practitioners clearly see the linkage between the theory in the 14 Lean Principles and the two pillars of lean, and the practice of developing software.
    2. 14 Lean Principles (See my interview with Don Reinertsen on SPaMCAST and my review of his book, The Principles of Product Development Flow: Second Generation Lean Product Development). The 14 Lean Principles espoused by Reinertsen are a mechanism to remove waste from the flow of work. In the original TPS versions of the HoQ, this was reflected by an element called Reduction of Mudas (reduction of wastes). Reinertsen provides a set of principles that are more easily translated to software development and maintenance.
  5. A base which represents lean/Agile leadership. Many of the representations of the HoL/HoQ call the base management support. Leadership is far stronger than support. Leadership reflects a management that is trained in lean and Agile AND believes in lean and Agile. Belief is reflected in decisions that are philosophically in sync with the 12 Agile Principles and the 14 Principles of Product Development.

The House of Lean is a convenient container to hold the concepts and ideas that began as the Toyota Production System and have evolved into tools that are less manufacturing-oriented. The evolution of the HoL to include concepts and techniques familiar to Agile practitioners has not only helped to reduce muda and muri, but has also made it a useful tool to help reduce overhead when scaling Agile using frameworks like SAFe.


Categories: Process Management

Off-Time: It’s OK to Do Nothing

NOOP.NL - Jurgen Appelo - Tue, 03/03/2015 - 18:37

On my final trip last year, I had been looking forward to running in Rio de Janeiro, along Ipanema Beach and Copacabana. But British Airways lost my luggage. I had no running gear and no time to purchase alternative shoes and clothes. I felt a bit sad and disappointed.

The post Off-Time: It’s OK to Do Nothing appeared first on NOOP.NL.

Categories: Project Management

Sponsored Post: Apple, InMemory.Net, Sentient, Couchbase, VividCortex, Internap, Transversal, MemSQL, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple is hiring a Software Engineer for Maps Services. The Maps Team is looking for a developer to support and grow some of the core backend services that support Apple Map's Front End Services. Ideal candidate would have experience with system architecture, as well as the design, implementation, and testing of individual components but also be comfortable with multiple scripting languages. Please apply here.

  • Sentient Technologies is hiring several Senior Distributed Systems Engineers and a Senior Distributed Systems QA Engineer. Sentient Technologies is a privately held company seeking to solve the world’s most complex problems through massively scaled artificial intelligence running on one of the largest distributed compute resources in the world. Help us expand our existing million+ distributed cores to many, many more. Please apply here.

  • Linux Web Server Systems Engineer at Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role is to design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • UI Engineer at AppDynamics. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data at AppDynamics. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Rise of the Multi-Model Database. FoundationDB Webinar: March 10th at 1pm EST. Do you want a SQL, JSON, Graph, Time Series, or Key Value database? Or maybe it’s all of them? Not all NoSQL databases are created equal. The latest development in this space is the Multi Model Database. Please join FoundationDB for an interactive webinar as we discuss the Rise of the Multi Model Database and what to consider when choosing the right tool for the job.
Cool Products and Services
  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Top Enterprise Use Cases for NoSQL. Discover how the largest enterprises in the world are leveraging NoSQL in mission-critical applications with real-world success stories. Get the Guide.
    http://info.couchbase.com/HS_SO_Top_10_Enterprise_NoSQL_Use_Cases.html

  • VividCortex Developer edition delivers a groundbreaking performance management solution to startups, open-source projects, nonprofits, and other organizations free of charge. It integrates high-resolution metrics on queries, metrics, processes, databases, and the OS and hardware to deliver an unprecedented level of visibility into production database activity.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Aerospike demonstrates RAM-like performance with Google Compute Engine Local SSDs. After scaling to 1 M Writes/Second with 6x fewer servers than Cassandra on Google Compute Engine, we certified Google’s new Local SSDs using the Aerospike Certification Tool for SSDs (ACT) and found RAM-like performance and 15x storage cost savings. Read more.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free! (See how Scalyr is different if you're looking for a Splunk alternative.)

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

The Cost Estimating Problem

Herding Cats - Glen Alleman - Tue, 03/03/2015 - 17:32


In probability theory, de Finetti's theorem† explains why exchangeable observations are conditionally independent given some latent variable to which an epistemic probability distribution would then be assigned. It is named in honor of Bruno de Finetti.

It states that an exchangeable sequence of Bernoulli random variables is a "mixture" of independent and identically distributed (i.i.d.) Bernoulli random variables – while the individual variables of the exchangeable sequence are not themselves i.i.d., only exchangeable, there is an underlying family of i.i.d. random variables.

Thus, while observations need not be i.i.d. for a sequence to be exchangeable, there are underlying, generally unobservable, quantities which are i.i.d. – exchangeable sequences are (not necessarily i.i.d.) mixtures of i.i.d. sequences.

All of this actually has importance. When we start to assess risks using probabilistic processes based on statistical methods, we need to be very careful to understand the underlying mathematics.

There are four approaches to saying what we mean when we say “probability”:

  1. Logical – weak implications
  2. Propensity – physical properties
  3. Frequency – attributed to sequences of observations
  4. Subjective – personal opinion

† de Finetti’s Theorem is at the heart of estimating random variables. Cost is a random variable, as are schedule durations and the technical outcomes from the effort based on cost and schedule. In statistical assessment of cost and schedule, Frequentist (counting) statistics is one approach. The second is Bayesian inference (used in most science). The exchangeability of the random variables is critical to building time series of sampled data from the project to forecast future performance.
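To make the mixture claim concrete, here is a minimal Python sketch (the uniform prior and the trial count are illustrative assumptions, not from the post): draw a latent success probability p once per sequence, then draw Bernoulli trials i.i.d. given p. The resulting trials are exchangeable but not independent, exactly as the theorem describes.

```python
import random

random.seed(42)

# Sketch of de Finetti's mixture construction (uniform prior assumed):
# first draw a latent success probability p, then draw i.i.d. Bernoulli
# trials given p. Marginally the trials are exchangeable but dependent.

def exchangeable_pairs(trials=200_000):
    """Draw length-2 sequences with p ~ Uniform(0, 1) as the latent variable."""
    pairs = []
    for _ in range(trials):
        p = random.random()  # latent i.i.d. parameter
        pairs.append((int(random.random() < p), int(random.random() < p)))
    return pairs

pairs = exchangeable_pairs()

# Exchangeability: P(1,0) equals P(0,1); both are 1/6 under a uniform prior.
p10 = sum(1 for x in pairs if x == (1, 0)) / len(pairs)
p01 = sum(1 for x in pairs if x == (0, 1)) / len(pairs)

# Dependence: P(X2=1 | X1=1) > P(X2=1), because an early success is
# evidence that the latent p is large (2/3 vs 1/2 under a uniform prior).
p2 = sum(x[1] for x in pairs) / len(pairs)
p2_given_1 = sum(x[1] for x in pairs if x[0] == 1) / sum(x[0] for x in pairs)

print(round(p10, 2), round(p01, 2))        # both near 0.17
print(round(p2, 2), round(p2_given_1, 2))  # near 0.5 and 0.67
```

This is why forecasting from project samples is defensible even when the observations are not i.i.d.: exchangeability alone guarantees an underlying i.i.d. family to condition on.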

The Bigger Problem in Estimating


Exchangeability and de Finetti's Theorem

Finite Exchangeable Sequences

Categories: Project Management

101 Proven Practices for Focus

“Lack of direction, not lack of time, is the problem. We all have twenty-four hour days.” -- Zig Ziglar

Here is my collection of 101 Proven Practices for Focus.   It still needs work to improve it, but I wanted to share it, as is, because focus is one of the most important skills we can develop for work and life.

Focus is the backbone of personal effectiveness, personal development, productivity, time management, leadership skills, and just about anything that matters.   Focus is a key ingredient to helping us achieve the things we set out to do, and to learn the things we need to learn.

Without focus, we can’t achieve great results.

I have a very healthy respect for the power of focus to amplify impact, to create amazing breakthroughs, and to make things happen.

The Power of Focus

Long ago one of my most impactful mentors said that focus is what separates the best from the rest.  In all of his experience, what exceptional people had, that others did not, was focus.

Here are a few relevant definitions of focus:
A main purpose or interest.
A center of interest or activity.
Close or narrow attention; concentration.

I think of focus simply as the skill or ability to direct and hold our attention.

Focus is a Skill

Too many people think of focus as something you are either good at or you are not.  It’s just like delayed gratification.

Focus is a skill you can build.

Focus is actually a skill and you can develop it.   In fact, you can develop it quite a bit.  For example, I helped a colleague get themselves off of their ADD medication by learning some new ways to retrain their brain.   It turned out that the medication only helped so much, the side effects sucked, and in the end, what they really needed was coping mechanisms for their mind, to better direct and hold their attention.

Here’s the surprise, though.  You can actually learn how to direct your attention very quickly.  Simply ask new questions.  You can direct your attention by asking questions.   If you want to change your focus, change the question.

101 Proven Practices at a Glance

Here is a list of the 101 Proven Practices for Focus:

  1. Align your focus and your values
  2. Ask new questions to change your focus
  3. Ask yourself, “What are you rushing through for?”
  4. Beware of random, intermittent rewards
  5. Bite off what you can chew
  6. Breathe
  7. Capture all of your ideas in one place
  8. Capture all of your To-Dos all in one place
  9. Carry the good forward
  10. Change your environment
  11. Change your physiology
  12. Choose one project or one thing to focus on
  13. Choose to do it
  14. Clear away all distractions
  15. Clear away external distractions
  16. Clear away internal distractions
  17. Close your distractions
  18. Consolidate and batch your tasks
  19. Create routines to help you focus
  20. Decide to finish it
  21. Delay gratification
  22. Develop a routine
  23. Develop an effective startup routine
  24. Develop an effective shutdown routine
  25. Develop effective email routines
  26. Develop effective renewal activities
  27. Develop effective social media routines
  28. Direct your attention with skill
  29. Do less, focus more
  30. Do now what you could put off until later
  31. Do things you enjoy focusing on
  32. Do worst things first
  33. Don’t chase every interesting idea
  34. Edit later
  35. Exercise your body
  36. Exercise your mind
  37. Expand your attention span
  38. Find a way to refocus
  39. Find the best time to do your routine tasks
  40. Find your flow
  41. Finish what you started
  42. Focus on what you control
  43. Force yourself to focus
  44. Get clear on what you want
  45. Give it the time and attention it deserves
  46. Have a time and place for things
  47. Hold a clear picture in your mind of what you want to accomplish
  48. Keep it simple
  49. Keep your energy up
  50. Know the tests for success
  51. Know what’s on your plate
  52. Know your limits
  53. Know your personal patterns
  54. Know your priorities
  55. Learn to say no – to yourself and others
  56. Limit your starts and stops
  57. Limit your task switching
  58. Link it to good feelings
  59. Make it easy to pick back up where you left off
  60. Make it relentless
  61. Make it work, then make it right
  62. Master your mindset
  63. Multi-Task with skill
  64. Music everywhere
  65. Narrow your focus
  66. Pair up
  67. Pick up where you left off
  68. Practice meditation
  69. Put the focus on something bigger than yourself
  70. Rate your focus each day
  71. Reduce friction
  72. Reduce open work
  73. Reward yourself along the way
  74. See it, do it
  75. Set a time frame for focus 
  76. Set goals
  77. Set goals with hard deadlines
  78. Set mini-goals
  79. Set quantity limits
  80. Set time limits
  81. Shelve things you aren’t actively working on
  82. Single Task
  83. Spend your attention with skill
  84. Start with WHY
  85. Stop starting new projects
  86. Take breaks
  87. Take care of the basics
  88. Use lists to avoid getting overwhelmed or overloaded
  89. Use metaphors
  90. Use Sprints to scope your focus
  91. Use the Rule of Three
  92. Use verbal cues
  93. Use visual cues
  94. Visualize your performance
  95. Wake up at the same time each day
  96. Wiggle your toes – it’s a fast way to bring yourself back to the present
  97. Write down your goals
  98. Write down your steps
  99. Write down your tasks
  100. Write down your thoughts
  101. Work when you are most comfortable

When you go through the 101 Proven Practices for Focus, don’t expect it to be perfect.  It’s a work in progress.   Some of the practices for focus need to be fleshed out better.   There is also some duplication and overlap, as I re-organize the list and find better ways to group and label ideas.

In the future, I’m going to revamp this collection to have some more precision, better naming, and some links to relevant quotes, and some science where possible.   There is a lot more relevant science that explains why some of these techniques work, and why some work so well.

What’s important is that you find the practices that resonate for you, and the things that you can actually practice.

Getting Started

You might find that from all the practices, only one or two really resonate or help you change your game.   And that’s great.   The idea of having a large list is that there is more to choose from.  The bigger your toolbox, the more you can choose the right tool for the job.  If you only have a hammer, then everything looks like a nail.

If you don’t consider yourself an expert in focus, that’s fine.  Everybody has to start somewhere.  In fact, you might even use one of the practices to help you get better:  Rate your focus each day.

Simply rate yourself, on a scale of 1-10, where 10 is awesome and 1 means you’re a squirrel with a sugar high, dazed and confused, and chasing all the shiny objects that come into sight.   And then see if your focus improves over the course of a week.

If you adopt just one practice, try either Align your focus and your values or Ask new questions to change your focus.

Feel Free to Share It With Friends

At the bottom of the 101 Proven Practices for Focus, you’ll find the standard sharing buttons for social media to make it easier to share.

Share it with friends, family, your world, the world.

The ability to focus is really a challenge for a lot of people.   The way to improve your attention and focus is through proven practices, techniques, and skill building.  Too many people hope the answer lies in a pill, but pills don’t teach you skills.

Even if you struggle a bit in the beginning, remind yourself that growth feels awkward.   You will get better with practice.  Practice deliberately.  In fact, the side benefit of focusing on improving your focus is, well, you guessed it: you’ll improve your focus.

What we focus on expands, and the more we focus our attention, and apply deliberate practice, the deeper our ability to focus will grow.

Grow your focus with skill.

You Might Also Like

The Great Inspirational Quotes Revamped

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Motivational Quotes Revamped

The Great Personal Development Quotes Collection Revamped

The Great Positive Thinking Quotes Collection

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

A product manager's perfection....

Xebia Blog - Tue, 03/03/2015 - 15:59

is achieved not when there are no more features to add, but when there are no more features to take away. -- Antoine de Saint Exupéry

Not only was Antoine a brilliant writer, philosopher and pilot (well, arguably, since he crashed in the Mediterranean), but most of all he had a sharp mind for engineering, and I frequently quote him when I train product owners, product managers or, in general, product companies about what makes a good product great. I also tell them the most important word in their vocabulary is "no". But the question then becomes: what are the criteria to say "yes"?

Typically we will look at the value of a feature and use different ways to prioritise and rank different features, break them down to their minimal proposition and get the team going. But what if you already have a product, and its rate of development is slowing? Features have been stacked on each other for years or even decades, and it has become more and more difficult for the teams to wade through the proverbial swamp the code has become.

Too many features


Turns out there are a number of criteria that you can follow:

1.) Working software means it’s actually being used.

Though it may sound obvious, it’s not that easy to figure out. I was once part of a team that had to rebuild a rather large piece (read huge) of software for an air traffic control system. The managers assured us that every piece of functionality was a must keep, but the cost would have been prohibitively high.

One of the functions of the system was a record and replay mode for legal purposes. It basically registers all events throughout the system to serve as evidence that picture compilers would be accountable, or at least verifiable. One of our engineers had the bright insight that we could catalogue this data anonymously to figure out which functions were used and which were not.

Turned out the Standish Group was pretty right in their claim that 80% of software is never used. Carving that out was met with fierce resistance, but it was easier to convince management (and users) with data than with gut.

Another upside? We also knew what functions they were using a lot, and figured out how to improve those substantially.
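A sketch of that usage-mining idea in Python (the feature names and log record format here are hypothetical, invented for illustration): count invocation events per catalogued feature, then report the features that never appear in the log.

```python
from collections import Counter

# Hypothetical sketch of mining an anonymised event log for unused
# features. The feature names and record format are invented for
# illustration; a real system would also handle timestamps and sessions.

CATALOGUED_FEATURES = {"zoom", "replay", "export", "conflict_alert", "label_edit"}

event_log = [
    {"event": "zoom"}, {"event": "zoom"}, {"event": "replay"},
    {"event": "conflict_alert"}, {"event": "zoom"},
]

# Count how often each catalogued feature was actually invoked.
usage = Counter(rec["event"] for rec in event_log)

# Features that never appear in the log are candidates for carving out.
never_used = sorted(CATALOGUED_FEATURES - set(usage))

print(never_used)            # ['export', 'label_edit']
print(usage.most_common(1))  # [('zoom', 3)]
```

The same counts, read the other way round, identify the heavily used functions worth improving, which is the "another upside" the post mentions.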

2.) The cost of platforms

Yippee, we got it running on a gazillion platforms! And boy do we have reach; the marketing guys are going into a frenzy. Even if it is the right choice at the time, you need to revisit this assumption all the time, and be prepared to clean up! This is often looked upon as a disinvestment: “we spent so much money on finally getting Blackberry working” or “it’s so cost effective that we can also offer it on platform XYZ”.

In the web world it’s often the number of browsers we support, but for larger software systems it is more often operating systems, database versions or even hardware. For one customer we would refurbish hardware systems, simply because it was cheaper than moving on to a more modern machine.

Key take away: If the platform is deprecated, remove it entirely from the codebase, it will bog the team down and you need their speed to respond to an ever increasing pace of new platforms.

3.) Old strategies

Every market and every product company pivots at least every few years (or dies). Focus shifts from consumer groups to types of clients, types of delivery, a shift to services, or something else which is novel, hip and, most of all, profitable. Code bases tend to have a certain inertia. The larger the product, the bigger the inertia, and before you know it there are tons of features in there that are far less valuable in the new situation. Cutting away perfectly good features is always painful, but at some point you end up with the toolbars of Microsoft Word: nice features, but complete overkill for the average user.

4.) The cause and effect trap

When people are faced with an issue they tend to focus on fixing the issue as it manifests itself. It's hard for our brain to think in problems; it tries to think in solutions. There is an excellent blog post here that provides a powerful method to overcome this phenomenon by asking "why" five times.

  • "We need the system to automatically export account details at the end of the day."
  • "Why?"
  • "So we can enter the records into the finance system"
  • "So it sounds like the real problem is getting the data into the finance system, not exporting it. Exporting just complicates the issue. Let's implement a data feed that automatically feeds the data to the finance system"

The hard job is to continuously keep evaluating your features and remove those that are no longer valuable. It may seem like you're throwing away good code, but ultimately it is not the largest product that survives, but the one that is able to adapt fast enough to the changing market. (Freely after Darwin)

 

The Virtue of Purgatory in Software Development

From the Editor of Methods & Tools - Tue, 03/03/2015 - 14:51
Having some decades of experience in software development behind me, I have had the time to accumulate a lot of mistakes. One of the recurring patterns in these failures was the ambition to solve code issues too quickly. This was especially the case when the problem was related to code that I wrote, which made me feel responsible for the situation. Naturally, I also often thought that my code couldn’t be bad and somebody must have changed it after I delivered it, but that is another story ;O) When you detect ...

Four Tips for Managing Performance in Agile Teams

I’ve been talking with clients recently about their managers’ and HR’s transition to agile. I hear this common question: “How do we manage performance of the people on our agile teams?”

  1. Reframe “manage performance” to “career development.” People on agile teams don’t need a manager to manage their performance. If they are retrospecting at reasonable intervals, they will inspect-and-adapt to work better together. Well, they will if managers don’t interfere with their work by creating experts or moving people off project teams.
  2. The manager creates a trusting relationship with each person on the team. That means having a weekly or bi-weekly one-on-one with each person. At the one-on-one, the manager provides tips for feedback and offers coaching (if the person needs it or wants it from the manager). The person might want to know where else he or she can receive coaching. The manager removes obstacles if the person has them. They discuss career development.
  3. When managers discuss career development, each person needs to see an accurate view of the value they bring to the organization. That means each person has to know how to give and receive feedback. They each have to know how to ask for and accept coaching. The manager provides meta-feedback and meta-coaching.
  4. If you, as a manager, meet with each person at least once every two weeks, no problem is a problem for too long. The people in the team have another person to discuss issues with. The manager sees the system and can change it to help the people on the team.

Now, what does this mean for raises?

I like to separate the raise from the feedback. People need feedback all the time, not just once a year. That’s why I like weekly or biweekly one-on-ones. Feedback isn’t just from the manager to the employee; it’s two-way feedback. If people have trouble working in the current environment, the managers might have a better chance to change it than an employee who is not a manager.

What about merit raises? This is tricky. So many managers and HR people continue to think one person is a star. No, on well-functioning agile teams, the team is the star—not individuals. You have options:

  • Make sure you pay each person at parity. This might not be trivial. You need expertise criteria for each job level.
  • When it comes to merit raises, provide a pot of money for the team and ask them to distribute it.
  • Distribute the merit money to each person equally. Explain that you are doing this, so people provide feedback to each other.
  • Here’s something radical: When people think they are ready for a raise or another level, have a discussion with the team. Let the team vote on it.

Managers have to not get in the way when it comes to “performance management.” The people on the team are adult humans. They somehow muddle through the rest of their lives, successfully providing and receiving feedback. They know the worth of things outside work. It’s just inside work that we keep salary secret.

It might not fit for you to have open-book salaries. On the other hand, how much do your managers and HR do that interferes with a team? You have to be careful about this.

If you reward individuals and ask people to work together as a team, how long do you think they will work together as a team? I don’t know the answer to that question.

Long ago, my managers asked me to be a “team player.” When one guy got a huge raise—and I didn’t, although I had saved his tush several times—I stopped working as a “team” member. I got my big raise the following year. (A year!) This incongruent approach is why people leave organizations: the stated way “we work here” is not congruent with the stated goals of agile and self-organizing teams.

What do you want? Managers and HR to manage people? Or, to lead people using servant leadership, and let the teams solve their problems and manage their own performance?

If teams don’t know how to improve, that’s one thing. But, I bet your teams do know how to improve. You don’t have to manage their performance. You need to create an environment in which people can do their best work—that’s the manager’s job and the secret to “managing performance.”


Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Mon, 03/02/2015 - 20:09

Galileo Galilei, Letter to the Grand Duchess Christina of Tuscany (1615)

.... Considering the force exerted by logical deductions, they may ascertain that it is not in the power of the professors of demonstrative sciences to change their opinions at will and apply themselves first to one side and then to the other.

There is a great difference between commanding a mathematician or a philosopher and influencing a lawyer or a merchant, for demonstrated conclusions about things in nature or in the heavens cannot be changed with the same facility as opinions about what is or is not lawful in a contract, bargain, or bill of exchange.

If those suggesting we abandon the principles of the microeconomics of software development (decision making in the presence of scarcity, abundance, and economic value)† claim that decisions made today, whose impacts extend to future outcomes, can be made without estimating those impacts probabilistically, they should think again.

It Just Ain't So

† Software Project Effort Estimation: Foundations and Best Practice Guidelines for Success, May 7, 2014 by Adam Trendowicz and Ross Jeffery

Categories: Project Management

New Tools to Supercharge Your Games on Google Play

Android Developers Blog - Mon, 03/02/2015 - 19:29

Posted by Greg Hartrell, Senior Product Manager of Google Play Games

Everyone has a gaming-ready device in their pocket today. In fact, of the one billion Android users in more than 190 countries, three out of four of them are gamers. This allows game developers to reach a global audience and build a successful business. Over the past year, we paid out more than $7 billion to developers distributing apps and games on Google Play.

At our Developer Day during the Game Developers Conference (GDC) taking place this week, we announced a set of new features for Google Play Games and AdMob to power great gaming. Rolling out over the next few weeks, these launches can help you better measure and monetize your games.

Better measure and adapt to player needs

“Player Analytics has helped me hone in on BombSquad’s shortcomings, right the ship, and get to a point where I can financially justify making the games I want to make.”

Eric Froemling, BombSquad developer

Google Play Games is a set of services that help game developers reach and engage their audience. To further that effort, we’re introducing Player Analytics, giving developers access to powerful analytics reports to better measure overall business success and understand in-game player behavior. Launching in the next few weeks in the Google Play Developer Console, the new tool will give indie developers and big studios better insight into how their players are progressing, spending, and churning; access to critical metrics like ARPPU and sessions per user; and assistance setting daily revenue targets.

BombSquad, created by a one-person game studio in San Francisco, was able to more than double its revenue per user on Google Play after implementing design changes informed during beta testing Player Analytics.

Optimizing ads to earn the most revenue

After optimizing your game for performance, it’s important to build a smarter monetization experience tailored to each user. That’s why we’re announcing three important updates to the AdMob platform:

  • Native Ads: Currently available as a limited beta, participating game developers will be able to show ads in their app from Google advertisers, and then customize them so that users see ads that match the visual design of the game. Atari is looking to innovate on its games, like RollerCoaster Tycoon 4 Mobile, and more effectively engage users with this new feature.
  • In-App Purchase House Ads Beta: Game developers will be able to smartly grow their in-app purchase revenue for free. AdMob can now predict which users are more likely to spend on in-app purchases, and developers will be able to show these users customized text or display ads promoting items for sale. Currently in beta, this feature will be coming to all AdMob accounts in the next few weeks.
  • Audience Builder: A powerful tool that enables game developers to create lists of audiences based on how they use their game. They will be able to create customized experiences for users, and ultimately grow their app revenue.

"Atari creates great game experiences for our broad audience. We're happy to be partnering with Google and be the first games company to take part in the native ads beta and help monetize games in a way that enhances our users' experience."

Todd Shallbetter, Chief Operating Officer, Atari

New game experiences powered by Google

Last year, we launched Android TV as a way to bring Android into the living room, optimizing games for the big screen. The OEM ecosystem is growing, with announced smart TVs and micro-consoles from partners like Sony, TPVision/Philips and Razer.

To make gaming even more dynamic on Android TV, we’re launching the Nearby Connections API with the upcoming update of Google Play services. With this new protocol, games can seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV. Beach Buggy Racing is a fun and competitive multiplayer racing game on Android TV that plans to use Nearby Connections in their summer release, and we are looking forward to more living room multiplayer games taking advantage of mobile devices as second screen controls.

At Google I/O last June, we also unveiled Google Cardboard with the goal of making virtual reality (VR) accessible to everyone. With Cardboard, we are giving game developers more opportunities to build unique and immersive experiences from nothing more than a piece of cardboard and your smartphone. The Cardboard SDKs for Android and Unity enable you to easily build VR apps or adapt your existing app for VR.

Check us out at GDC

Visit us at the Google booth #502 on the Expo floor to get hands-on experience with Project Tango, Niantic Labs and Cardboard starting on Wednesday, March 4. Our teams from AdMob, AdWords, Analytics, Cloud Platform and Firebase will also be available to answer any of your product questions.

For more information on what we’re doing at GDC, please visit g.co/dev/gdc2015.

Join the discussion on

+Android Developers
Categories: Programming

Change Your Life With this Free Blogging Course

Making the Complex Simple - John Sonmez - Mon, 03/02/2015 - 17:00

Around mid-December 2014, I decided to launch a completely free blogging course, delivered via email over 3 weeks. I had no idea how popular and successful that blogging course would turn out to be. At the time of writing this post, almost 3,000 software developers have signed up ... Read More

The post Change Your Life With this Free Blogging Course appeared first on Simple Programmer.

Categories: Programming

Tutum, first impressions

Xebia Blog - Mon, 03/02/2015 - 16:40

Tutum is a platform to build, run and manage your Docker containers. After briefly playing with it some time ago, I decided to take a more serious look at it this time. This article describes my first impressions of the platform, looking at it specifically from a continuous delivery perspective.

The web interface

The first thing to notice is the clean and simple web interface. Basically there are two main sections: services and nodes. The services view lists the services or containers you have deployed, with status information and two buttons: one to stop (or start) the container and one to terminate it, which means to throw it away.

You can drill down to a specific service, which provides you with more detailed information per service. The detail page gives you information about the containers, a slider to scale up and down, endpoints, logging, some metrics for monitoring, and more.


The second view is a list of nodes. The list contains the VMs on which containers can be deployed, again with two simple buttons to start/stop and to terminate the node. For each node it displays useful information about the current status, where it runs, and how many containers are deployed on it.

The node page also allows you to drill down to get more information on a specific node.  The screenshot below shows some metrics in fancy graphs for a node, which can potentially be used to impress your boss.


 

Creating a new node

You’ll need a node to deploy containers on. In the node view you see two big green buttons. One states: “Launch new node cluster”. This brings up a form with four popular providers: Amazon, Digital Ocean, Microsoft Azure and SoftLayer. If you have linked your account(s) in the settings, you can select a provider from a dropdown box. It only takes a few clicks to get a node up and running. In fact you create a node cluster, which allows you to easily scale up or down by adding or removing nodes from the cluster.

You also have an option to ‘Bring your own node’. This allows you to add your own Ubuntu Linux systems as nodes to Tutum. You need to install an agent onto your system and open up a firewall port to make your node available to Tutum. Again, very easy and straightforward.

Creating a new service

Once you have created a node, you probably want to do something with it. Tutum provides jumpstart images with popular types of services for storage, caching, queueing and more, providing for example MongoDB, Elasticsearch or Tomcat. Using a wizard, it takes only four steps to get a particular service up and running.

Besides the jumpstart images that Tutum provides, you can also search public repositories for your image of choice. Eventually you would like to have your own images running your homegrown software. You can upload your image to a Tutum private registry. You can either pull it from Docker Hub or upload your local images directly to Tutum.

Automating

We all know real (wo)men (and automated processes) don’t use GUIs. Tutum provides a nice and extensive command line interface for both Linux and Mac. I installed it using brew on my MBP, and seconds later I was logged in and doing all kinds of cool stuff with the command line.


The CLI actually makes REST calls, so you can skip the CLI altogether and talk HTTP directly to the REST API, or, if it pleases you, use the Python API to create scripts that are actually maintainable. You can pretty much automate all management of your nodes, containers, and services using the API, which is a must-have in this era of continuous everything.
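As a sketch of what talking to the API directly could look like, here is a curl invocation composed in shell. Note the hedges: the `/api/v1/service/` endpoint path and the `ApiKey user:key` Authorization scheme are assumptions about Tutum's REST API rather than verified details, and the credentials are placeholders; check the official API docs before relying on either.

```shell
# Hypothetical direct REST call to list services, bypassing the CLI.
# The endpoint path and the "ApiKey user:key" auth header are assumptions;
# TUTUM_USER and TUTUM_APIKEY are placeholder credentials.
TUTUM_USER="myuser"
TUTUM_APIKEY="0123456789abcdef"
TUTUM_API="https://dashboard.tutum.co/api/v1"

# Compose and print the request; swap the leading `echo` for the real
# command once real credentials are filled in.
list_services() {
  echo curl -s \
    -H "Authorization: ApiKey ${TUTUM_USER}:${TUTUM_APIKEY}" \
    -H "Accept: application/json" \
    "${TUTUM_API}/service/"
}

list_services
```

The same pattern extends to creating or scaling services: any operation the web interface or CLI performs maps to an HTTP call you can script.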

A simple deployment example

So let's say we've built a new version of our software on our build server. Now we want to get this software deployed to do some integration testing, or, if you're feeling lucky, just drop it straight into production.

Build the Docker image:

tutum build -t test/myimage .

Upload the image to the Tutum registry:

tutum image push <image_id>

Create the service:

tutum service create <image_id>

Run it on a node:

tutum service run -p <port> -n <name> <image_id>

That's it. Of course there are lots of options to play with, for example the deployment strategy, memory settings, auto starting, etc. But the above steps are enough to get your image built, deployed and running. Most of the time I spent was waiting while uploading my image over the flaky-but-expensive hotel wifi.
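On a build server you would chain these steps into a single script. A minimal sketch, assuming the tutum CLI is on the PATH; the image name, port and service name are placeholder values, and DRY_RUN (defaulting to 1 here, since this is only a sketch) prints the commands instead of executing them:

```shell
#!/bin/sh
# Sketch of a deploy script chaining the tutum CLI steps above.
# IMAGE, PORT and NAME are placeholder values, not real ones.
set -e

IMAGE="test/myimage"
PORT=80
NAME="myservice"
DRY_RUN="${DRY_RUN:-1}"   # default to dry-run: print commands, don't execute

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"
  else
    "$@"
  fi
}

deploy() {
  run tutum build -t "$IMAGE" .                          # build the docker image
  run tutum image push "$IMAGE"                          # upload it to the Tutum registry
  run tutum service run -p "$PORT" -n "$NAME" "$IMAGE"   # create and run the service
}

deploy
```

Set DRY_RUN=0 on a machine where the tutum CLI is installed and authenticated to actually run the pipeline; hooking this into the build server's post-build step gives you the continuous delivery flow described above.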

Conclusion for now

Tutum is clean and simple, and it just works. I'm impressed with the ease and speed with which you can get your containers up and running. It takes only minutes to get from zero to running using the jumpstart services, or even your own containers. Although they still call it beta, everything I did just worked, without the need to read through lots of complex documentation. The web interface is self-explanatory, and the REST API or CLI provides everything you need to integrate Tutum into your build pipeline, so you can get your new features into production at automation speed.

I'm wondering how challenging managing would be at a scale of hundreds of nodes and even more containers, when using the web interface. You'd need a meta-overview or aggregate view or something. But then again, you have a very nice API to