
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Swift optional chaining and method argument evaluation

Xebia Blog - Tue, 04/21/2015 - 08:21

Everyone who has been programming in Swift knows that you can call a method on an optional object using a question mark (?). This is called optional chaining. But what if that method takes arguments whose values you need to get from the same optional? Can you safely force unwrap those values?

A common use case is a UIViewController that runs some code within a closure after some delay or after a network call. We keep a weak reference to self within that closure to be sure we don't create a reference cycle in case the closure is retained. Besides, we (usually) don't need to run that piece of code at all if the view controller was dismissed before the closure got executed.

Here is a simplified example:

class ViewController: UIViewController {

    let finishedMessage = "Network call has finished"
    let messageLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()

        someNetworkCall { [weak self] in
            self?.finished(self?.finishedMessage) // does not compile: finished expects a non-optional String
        }
    }

    func finished(message: String) {
        messageLabel.text = message
    }
}

Here we call the someNetworkCall function that takes a () -> () closure as argument. Once the network call is finished it will call that closure. Inside the closure, we would like to change the text of our label to a finished message. Unfortunately, the code above will not compile. That's because the finished method takes a non-optional String as parameter, not the optional returned by self?.finishedMessage.

I used to fix such problems by wrapping the code in an if let statement:

if let this = self {
    this.finished(this.finishedMessage)
}

This works quite well, especially when there are multiple lines of code that you want to skip if self has become nil (e.g. the view controller got dismissed and deallocated). But I always wondered whether it was safe to force unwrap the method arguments even when self is nil:

self?.finished(self!.finishedMessage)

The question here is: does Swift evaluate the method arguments even if it does not call the method?

I went through the Swift Programming Guide looking for information about this but couldn't find an answer. Luckily it's not hard to find out ourselves.

Let's add a method that prints a message and returns the finishedMessage, and then call the finished method on an object that we know for sure is nil.

override func viewDidLoad() {
    super.viewDidLoad()

    let vc: ViewController? = nil
    vc?.finished(printAndGetFinishedMessage())
}

func printAndGetFinishedMessage() -> String {
    println("Getting message")
    return finishedMessage
}

When we run this, we see that nothing gets printed to the console. So now we know that Swift will not evaluate the method arguments when the method is not invoked. Therefore we can change our original code to the following:

someNetworkCall { [weak self] in
    self?.finished(self!.finishedMessage)
}

R: Numeric keys in the nested list/dictionary

Mark Needham - Tue, 04/21/2015 - 06:59

Last week I described how I’ve been creating fake dictionaries in R using lists and I found myself using the same structure while solving the dice problem in Think Bayes.

The dice problem is described as follows:

Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die. If you have ever played Dungeons & Dragons, you know what I am talking about.

Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?

Here’s a simple example of the nested list that I started with:

dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = dice
 
> priors
  4   6   8  12  20 
0.2 0.2 0.2 0.2 0.2

I wanted to retrieve the prior for the 8-sided die, which I tried to do like this:

> priors[8]
<NA> 
  NA

That comes back with 'NA' because priors[8] is interpreted as the element at numeric position 8, which doesn't exist in our five-element vector, rather than the element named '8'.
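To make the distinction concrete, the 8-sided die's prior sits at numeric position 3 of the vector, so positional indexing does reach it:

> priors[3]
  8 
0.2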

As far as I understand, if we want to look up values by name we have to use a string, so I tweaked the code to store the names as strings:

dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = sapply(dice, paste)
 
> priors["8"]
  8 
0.2

That works much better, but with some experimentation I realised I didn't even need to run 'dice' through the sapply function. Assigning numbers as names coerces them to strings anyway, so it already worked the way it was:

dice = c(4,6,8,12,20)
priors = rep(1.0 / length(dice), length(dice))
names(priors) = dice
 
> priors["8"]
  8 
0.2

Now that we've got that working, we can write a likelihood function which takes in the dice and the observed rolls and tells us how likely it is that we rolled each type of die. We start simply by copying the above code into a function:

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1
  4   6   8  12  20 
0.2 0.2 0.2 0.2 0.2

Next we'll update the score for a particular die to 0 if one of the observed rolls is greater than that die's maximum value:

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        if(name < observation) {
          scores[paste(name)]  = 0       
        }
      }
    }  
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1
  4   6   8  12  20 
0.0 0.2 0.2 0.2 0.2

The 4-sided die has been ruled out since we've rolled a 6! Now let's put in the else condition, which updates our score by the probability of getting the observed roll with each of the valid dice, i.e. we have a 1/20 chance of rolling any particular number with the 20-sided die, a 1/8 chance with the 8-sided die, etc.

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        if(name < observation) {
          scores[paste(name)]  = 0
        } else {
          scores[paste(name)] = scores[paste(name)] *  (1.0 / name)
        }        
      }
    }  
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1
         4          6          8         12         20 
0.00000000 0.03333333 0.02500000 0.01666667 0.01000000

And finally let’s normalise those scores so they’re a bit more readable:

> l1 / sum(l1)
        4         6         8        12        20 
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471
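
Because the priors are uniform here, these normalised scores are already the posterior probabilities asked for in the dice problem; with non-uniform priors we would multiply the priors in before normalising. A minimal sketch of that final Bayes step, reusing the priors and l1 from above:

posterior = priors * l1
 
> posterior / sum(posterior)
        4         6         8        12        20 
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471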
Categories: Programming

Approximating for Improved Understanding

Herding Cats - Glen Alleman - Mon, 04/20/2015 - 22:05

The world of projects, project management, and the products or services produced by those projects is uncertain. It's never certain. Seeking certainty is not only naive, it's simply not possible.

Making decisions in the presence of this uncertainty is part of our job as project managers, engineers, and developers working on behalf of those paying for our work.

It's also the job of the business, whose money is being spent on the projects to produce tangible value in exchange for that money.

From the introduction of the book (Sanjoy Mahajan's The Art of Insight in Science and Engineering)...

Science and engineering, our modern ways of understanding and altering the world, are said to be about accuracy and precision. Yet we best master the complexity of our world by cultivating insight rather than precision. We need insight because our minds are but a small part of the world. An insight unifies fragments of knowledge into a compact picture that fits in our minds. But precision can overflow our mental registers, washing away the understanding brought by insight. This book shows you how to build insight and understanding first, so that you do not drown in complexity.

So what does this mean for our project world?

  • The future is uncertain. It is always uncertain. It can't be anything but uncertain. Assuming certainty is a waste of time. Managing in the presence of uncertainty is unavoidable. To do this we must estimate; this too is unavoidable. To suggest otherwise willfully ignores the basis of all management practices.
  • This uncertainty creates risk to our project: risk to the cost, the schedule, and the delivered capabilities of the project or product development. To manage with a closed loop process, estimates are needed. This is unavoidable as well.
  • Uncertainty is either reducible or irreducible
    • Reducible uncertainty can be reduced with new information. We can buy down this uncertainty.
    • Irreducible uncertainty - the natural variations in what we do - can only be handled with margin.

In both of these conditions we need to get organized to address the underlying uncertainties. We need to put structure in place in some manner. Decomposing the work is a common way in the project domain. From a Work Breakdown Structure to simple sticky notes on the wall, breaking problems down into smaller parts is a known successful way to address a problem.

With this decomposition, now comes the hard part. Making decisions in the presence of this uncertainty.

Probabilistic Reasoning

Reasoning about things that are uncertain is done with probability and statistics. Probability is a degree of belief. 

I believe we have a 80% probability of completing on or before the due date for the migration of SQL Server 2008 to SQL Server 2012.

Why do we have this belief? Is it based on our knowledge from past experience? Is this knowledge sufficient to establish that 80% confidence?

  • Do we have some external model of the work effort needed to perform the task?
  • Is there a parametric model of similar work that can be applied to this work?
  • Could we decompose the work to smaller chunks that could then be used to model the larger set of tasks?
  • Could I conduct some experiments to improve my knowledge?
  • Could I build a model from intuition that could be used to test the limits of my confidence?

The answers to each of these informs our belief. 
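
One way to ground such a belief in numbers is a simple Monte Carlo model of the work. Here is a minimal sketch in R; the task durations are entirely invented for illustration and stand in for whatever reference data or parametric model we actually have:

# Hypothetical model: the migration is three sequential tasks whose
# durations (in days) are uncertain; the ranges below are invented.
set.seed(42)
trials = 10000
total = runif(trials, 5, 9) + runif(trials, 3, 7) + runif(trials, 8, 14)
 
# Confidence of completing on or before a due date of day 27:
mean(total <= 27)

The fraction of simulated futures that finish on or before the due date is the degree of belief we can defend, rather than a number pulled from the air.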

Chaos, Complexity, Complex, Structured?

A well known agile thought leader made a statement today

I support total chaos in every domain

This is unlikely to result in sound business decisions in the presence of uncertainty. There may be domains where chaos produces usable results, but when we need some degree of confidence that the money being spent will produce the needed capabilities (on or before the need date, at or below the budget needed to be profitable, and with all the capabilities needed to accomplish the mission or meet the business case), we're going to need to know how to manage our work to achieve those outcomes.

So let's assume, with a high degree of confidence, that we need to manage in the presence of uncertainty but have little interest in encouraging chaos. Here's one approach.

So In The End

Since all the world is a set of statistical processes, producing probabilistic outcomes, which in turn create risk to any expected results when not addressed properly, the notion that decisions can be made in the presence of this condition without estimates can only be explained by willful ignorance of the basic facts of the physics of project work.

Related articles:

  • The Difference Between Accuracy and Precision
  • Making Decisions in the Presence of Uncertainty
  • Managing in Presence of Uncertainty
  • Herding Cats: Risk Management is How Adults Manage Projects
  • Herding Cats: Decision Analysis and Software Project Management
  • Five Estimating Pathologies and Their Corrective Actions
Categories: Project Management

Estimates

Herding Cats - Glen Alleman - Mon, 04/20/2015 - 22:00

Estimation and measurement of project attributes are critical success factors for designing, building, modifying, and operating products and services. †

Good estimates are the key to project success. Estimates provide information to the decision makers to assess adherence to performance specifications and plans, make decisions, revise designs and plans, and improve future estimates and processes.

We use estimates and measurements to evaluate the feasibility and affordability of products being built, choose between alternative designs, assess risk, and support business decisions. Engineers compare estimates of technical baselines against observed performance to decide if the product meets its functional and performance requirements. These are used by management to control processes and detect compliance problems. Process managers use capability baselines to improve production processes.

Developers, engineers, and planners estimate the resources needed to develop, maintain, enhance and deploy products. Project planners use estimates for staffing and facilities. Planners and managers use resource estimates to determine project cost and schedule and to prepare budgets and plans.

Managers compare estimates (cost and schedule baselines) against actual values to determine deviations from plan and to understand the root causes of those deviations, which is needed to take corrective action. Estimates of product, project, and process characteristics provide baselines to assess progress during the project.

Bad estimates affect all participants in the project or product development process. Incomplete and inaccurate estimates mean inadequate time and money are available for increasing the probability of project success.

The Nature of Estimation

The verb estimate means to produce a statement of the approximate value of some quantity that describes or characterizes an object. The noun estimate refers to the value produced by the verb. The object can be an artifact - software, hardware, documents - or an activity - planning, development, testing, or process.

We make estimates when we cannot directly measure the value of the quantity, because:

  • The object is inaccessible
  • The object does not exist yet
  • The measurement would be too expensive

Reasons to Estimate and Measure Size, Cost and Schedule

  • Evaluate feasibility of requirements.
  • Analyze alternative designs and implementations.
  • Determine required capacity and speed of produced results.
  • Evaluate performance - accuracy, speed, reliability, availability and other ...ilities.
  • Identify and assess technical risks.
  • Provide technical baselines for tracking and guiding.

Reasons to Estimate Effort, Cost, and Schedule

  • Determine project feasibility in terms of cost and schedule.
  • Identify and assess risks.
  • Negotiate achievable commitments.
  • Prepare realistic plans and budgets.
  • Evaluate business value - cost versus benefit.
  • Provide cost and schedule baselines for tracking and guiding.

Reasons to Estimate Capability and Performance

  • Predict resource consumption and efficiency.
  • Establish norms for expected performance.
  • Identify opportunities for improvement.

There are many sources of data for making estimates, some reliable, some not. Estimates based on human subject matter expertise have been shown to be the least reliable, accurate and precise, due to the biases involved in the human processes of developing the estimate. Estimates based on past performance, while useful, must be adjusted for the statistical behaviors of the past and the uncertainty of the future.

If the estimate is misused in any way, this is not the fault of the estimate (both noun and verb) but simply bad management. Fix that first, then apply proper estimating processes.

If your project or product development effort does none of these activities or has no need for information on which to make a decision, then estimating is likely a waste of time.

But before deciding estimates are the smell of dysfunction, with NO root cause identified for corrective action, check with those paying your salary first to see what they have to say about your desire to spend their money, in the presence of uncertainty, without an estimate.

† This post is extracted from Estimating Software Intensive Systems: Projects, Products and Processes, Dr. Richard Stutzke, Addison Wesley. This book is a mandatory read for anyone working in a software domain on any project that is mission critical. This means if you need to show up on or before the need date, at or below your planned cost, with the needed capabilities (the Key Performance Parameters without which the project will get cancelled), then you're going to need to estimate all the parameters of your project. If your project doesn't need to show up on time, stay on budget, or can provide less than the needed capabilities, there is no need to estimate. Just spend your customer's money; she'll tell you when to stop.

Related articles:

  • Capability Maturity Levels and Implications on Software Estimating
  • Incremental Delivery of Features May Not Be Desirable
  • Capabilities Based Planning First Then Requirements
Categories: Project Management

Root Cause Analysis

Herding Cats - Glen Alleman - Mon, 04/20/2015 - 16:07

Root Cause Analysis is a means to answer why we keep seeing the same problems over and over again. When we treat only the symptoms, the root cause remains.

In Lean there is a supporting process, the 5S's. The 5S's are a workplace organization method built on a list of five words: seiri, seiton, seiso, seiketsu, and shitsuke. This list describes how to organize a work place for efficiency and effectiveness by identifying and storing the items used, maintaining the areas and items, and sustaining the new order. The decision making process usually comes from a dialogue about standardization, which builds understanding among the employees of how they should do their work.

At one client we are installing Microsoft Team Foundation Server for development, Release Management and Test Management. The current process relies on the heroics of many on the team every Thursday night to get the release out the door.

We started the improvement of the development, test, and release process with Root Cause Analysis. In this domain cyber and data security are paramount, so when there is a security issue, RCA is the core process used to address it.

The results of the RCA have shown that the work place is chaotic at times, the code poorly managed, the testing struggling against deadlines, and the configuration of the release baseline inconsistent. It was clear we were missing tools, but the human factors were also a source of the problem, with symptoms of latent defects and a break/fix paradigm.

There are many ways to ask and answer the 5 Whys and apply the 5 S's, but until that is done and the actual causes determined, and the work place cleaned up, the symptoms will continue to manifest in undesirable ways. 

If we're going to start down the path of 5 Whys and NOT actually determine the Root Cause and develop a corrective action plan, then that is in itself a waste. 

Related articles:

  • Five Estimating Pathologies and Their Corrective Actions
  • Economics of Software Development
Categories: Project Management

The Programmer’s Guide to Networking at a Conference

Making the Complex Simple - John Sonmez - Mon, 04/20/2015 - 16:00

One of the best reasons to go to a conference is the networking opportunities that are present—if you know how to take advantage of them. Sometimes the best thing about a conference is everyone you meet in the hallways, not the actual talks or sessions themselves. So much so, that people often refer to this […]

The post The Programmer’s Guide to Networking at a Conference appeared first on Simple Programmer.

Categories: Programming

R: non-numeric argument to binary operator

Mark Needham - Mon, 04/20/2015 - 00:08

When debugging R code, given my Java background, I often find myself trying to print out the state of variables along with an appropriate piece of text like this:

names = c(1,2,3,4,5,6)
> print("names: " + names)
Error in "names: " + names : non-numeric argument to binary operator

We might try this next:

> print("names: ", names)
[1] "names: "

which doesn’t actually print the names variable – only the first argument to the print function is printed.

We’ll find more success with the paste function:

> print(paste("names: ", names))
[1] "names:  1" "names:  2" "names:  3" "names:  4" "names:  5" "names:  6"

This is an improvement but it repeats the ‘names:’ prefix multiple times which isn’t what we want. Introducing the toString function gets us over the line:

> print(paste("names: ", toString(names)))
[1] "names:  1, 2, 3, 4, 5, 6"
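
For what it's worth, sprintf is another way to get the same result, with the spacing under our control:

> print(sprintf("names: %s", toString(names)))
[1] "names: 1, 2, 3, 4, 5, 6"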
Categories: Programming

SPaMCAST 338 – Stephen Parry, Adaptive Organizations, Lean and Agile Thinking

Software Process and Measurement Cast - Sun, 04/19/2015 - 22:00

www.spamcast.net

Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 338 features our new interview with Stephen Parry.  We discussed adaptable organizations. Stephen recently wrote: “Organizations which are able to embrace and implement the principles of Lean Thinking are inevitably known for three things: vision, imagination and – most importantly of all – implicit trust in their own people.” We discussed why trust, vision and imagination have to be more than just words in a vision or mission statement to get value out of lean and Agile.

Need more Stephen Parry?  Check out our first interview.  We discussed adaptive thinking and command and control management!

Stephen’s Bio

Stephen Parry is an international leader and strategist on the design and creation of adaptive-lean enterprises. He has a world-class reputation for passionate leadership and organisational transformation by changing the way employees, managers and leaders think about their business and their customers.

He is the author of Sense and Respond: The Journey to Customer Purpose (Palgrave), a highly regarded book written as a follow-up to his award-winning organisational transformations. His change work was recognised when he received Best Customer Service Strategy at the National Business Awards. The judges declared his strategy had created organisational transformations which demonstrated an entire cultural change around the needs of customers and could, as a result, demonstrate significant business growth, innovation and success. He is the founder and senior partner at Lloyd Parry a consultancy specialising in Lean organisational design and business transformation.

Stephen believes that organisations must be designed around the needs of customers through the application of employee creativity, innovation and willing contribution. This was recognised when his approach received awards from the European Service Industry for the Best People Development Programme and a personal award for Innovation and Creativity. Stephen has since become a judge at the National Business Awards and the National Customer Experience Awards. He is also a Fellow at the Lean Systems Society.

Website: www.lloydparry.com
Call to action!

Reviews of the Podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

QAI Quest 2015
April 20-21, Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our essay on demonstrations!  Demonstrations are an important tool for teams to gather feedback to shape the value they deliver.  Demonstrations provide a platform for the team to show the stories that have been completed so the stakeholders can interact with the solution. It is unfortunate that many teams mess them up.  We can help demonstrate what a good demo is all about.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Faster Word Puzzles with Neo4J

Mistaeks I Hav Made - Nat Pryce - Sun, 04/19/2015 - 21:08
When I used Neo4J to create and solve Word Morph puzzles, I brute-forced the algorithm to find and link words that differ by one letter. I was lucky – my dataset only contained four-letter words and so was small enough for my O(n²) algorithm to run in a reasonable amount of time. But what happens when I expand my dataset to include words of 4, 5 and 6 letters? Obviously, I have to change my Cypher to only relate words that are the same length:

match (w1), (w2)
where w2.word > w1.word and length(w1.word) = length(w2.word)
with w1, w2, length([i in range(0,length(w1.word)) where substring(w1.word,i,1) <> substring(w2.word,i,1)]) as diffCount
where diffCount = 1
create (w1)-[:STEP]->(w2)
create (w2)-[:STEP]->(w1)

But with the larger dataset, this query takes a very long time to run. I don’t know how long – I’ve never been patient enough to wait for the query to complete. I need a better algorithm.

Nikhil Kuriakose has also written about solving these puzzles with Neo4J. He used a more sophisticated algorithm to create the graph. He first grouped words into equivalence classes, each of which contains words that are the same except at one letter position. So, for instance, the class P_PE would contain PIPE, POPE, etc., the class PIP_ would contain PIPE, PIPS, etc., and so on. He then created relationships between all the words in each equivalence class.

This also has a straightforward representation as a property graph. An equivalence class can be represented by an Equivalence node with a pattern property, and a word’s membership of an equivalence class can be represented by a relationship from the Word node to the Equivalence node.

Words related via equivalence classes

Nikhil implemented the algorithm in Java, grouping words with a HashMap and ArrayLists before loading them into Neo4J. But by modelling equivalence classes in the graph, I can implement the algorithm in Cypher – no Java required.

For each Word in the database, my Cypher query calculates the patterns of the equivalence classes that the word belongs to, creates Equivalence nodes for those patterns, and creates an :EQUIV relationship from the Word node to each Equivalence node. The trick is to only create an Equivalence node for a pattern once, when one doesn’t yet exist, and subsequently use the same Equivalence node for the same pattern. This is achieved by creating Equivalence nodes with Cypher’s MERGE clause. MERGE either matches existing nodes and binds them, or it creates new data and binds that. It’s like a combination of MATCH and CREATE that additionally allows you to specify what happens if the data was matched or created.

Before using MERGE, I must define a uniqueness constraint on the pattern property of the Equivalence nodes that will be used to identify nodes in the MERGE command. This makes Neo4J create an index for the property and ensures that the merge has reasonable performance.

create constraint on (e:Equivalence) assert e.pattern is unique

Then I relate all the Word nodes in my database to Equivalence nodes:

match(w:Word)
unwind [i in range(0,length(w.word)-1) | substring(w.word,0,i)+"_"+substring(w.word,i+1)] as pattern
merge (e:Equivalence {pattern:pattern})
create (w)-[:EQUIV]->(e)

This takes about 15 seconds to run. Much less time for my large dataset than my first, brute-force approach took for the small dataset of only four-letter words.

Now that the words are related to their equivalence classes, there is no need to create relationships between the words directly. I can query via the Equivalence nodes:

match (start {word:'halt'}), (end {word:'silo'}),
      p = shortestPath((start)-[*]-(end))
unwind [n in nodes(p)|n.word] as step
with step where step is not null
return step

Giving:

step
----
halt
hilt
silt
silo

Returned 4 rows in 897 ms.

And it now works for longer words:

step
----
candy
bandy
bands
bends
beads
bears
hears
heart

Returned 8 rows in 567 ms.

Organising the Data During Import

The Cypher above organised Word nodes that I had already loaded into my database. But if starting from scratch, I can organise the data while it is being imported, by using MERGE and CREATE clauses in the LOAD CSV command.

load csv from "file:////puzzle-words.csv" as l
create (w:Word{word:l[0]})
with w
unwind [i in range(0,length(w.word)-1) | substring(w.word,0,i)+"_"+substring(w.word,i+1)] as pattern
merge (e:Equivalence {pattern:pattern})
create (w)-[:EQUIV]->(e)
Categories: Programming, Testing & QA

Open positions for a Sr. dev and a data scientist in Appsflyer data group

As you may know, I’ve recently joined AppsFlyer as Chief Data Officer. AppsFlyer, in case you don’t know, is a well funded ($20M round B just last January) and very exciting startup that is already the market leader in mobile attribution. In any event, one of the tasks that I have at hand is to establish the data group within the company. The data group’s role is to unlock the potential of the data handled by AppsFlyer and increase the value for its customers. I am looking for both a data scientist and a senior backend developer to join that team. Below you can find the blurb describing the roles. If you think you are a good fit and are interested, drop me a line and/or send your CV to jobs@appsflyer.com. Note that the positions are in our HQ, which is in Herzliya, Israel, and also that we have additional openings for R&D (in Israel) and sales/account management (in multiple locations worldwide) which you can find here

Data Scientist:

Are you looking for an opportunity to play with vast, random and unstructured social data to create insights, patterns, and models, and then leverage that for a groundbreaking software platform? Are you excited by the prospect of working on a small, highly technical team while enjoying significant responsibility and learning something new almost daily? Do you have a highly quantitative advanced degree and experience in mathematics and perhaps statistics? In this role you will:

  • Use machine learning techniques to create scalable solutions for business problems.
  • Analyze and extract key insights from the rich store of online social media data.
  • Design, develop and evaluate highly innovative models for predictive learning.
  • Work closely with the core development team to deploy models seamlessly, as part of the production system.
  • Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation.
  • Research and implement novel machine learning algorithms for new business problems.

Requirements:  

  • PhD or MSc in Computer Science, Math or Statistics with a focus on machine learning
  • Leadership and team skills
  • Hands-on experience in predictive modeling and analysis of large volumes of data
  • Strong problem-solving ability
  • Strong programming skills (Clojure or other functional language preferred)
  • Experience with large scale distributed programming paradigms – experience with the Hadoop/Spark and SQL stacks
  • Experience with mobile analytics
  • An absolute love for hacking and tinkering with data is the basic requirement

Perks:

AppsFlyer is a fast growing startup providing mobile advertising analytics and attribution in real time. The R&D team takes an active part of the Israeli development community, including a range of meetups and such outlets as Reversim. We are focused on functional programming and on releasing open source. Get immediate satisfaction from your work—get feedback from clients in hours or even minutes.

Senior Backend Developer:

If you’re a Senior Backend Developer and you can’t remember when was the last time you did something for the first time, then the AppsFlyer R&D team is the place for you. Are you looking to propel your career to the next level? You’ll experience the excitement of handling 3.2 billion events per day, in real time using technologies like Kafka, Spark, Couchbase, Clojure, Redis etc. Our micro-service architecture is built to support the scale that went from 200 million to 3.2 billion in a year.

You have the skills but lack proven experience with this stack? Come work with the best.

Requirements:  

  • At least 5 years of software development experience
  • Experience working with live production
  • Experience with architecture design, technology evaluation and performance tuning
  • Passion to learn cutting edge technologies
  • Team player, ownership and sense of urgency
  • “Can-do approach”

Perks:

AppsFlyer is a fast growing startup providing mobile advertising analytics and attribution in real time. The R&D team takes an active part in the Israeli development community, including a range of meetups and such outlets as Reversim. We are focused on functional programming and on releasing open source. Get immediate satisfaction from your work – it usually takes only hours from the minute the code is ready until the clients use it in production. Also get the benefit of end-to-end system ownership, including design, technologies, code, quality and production liveliness.

Categories: Architecture

Economics of Software Development

Herding Cats - Glen Alleman - Sun, 04/19/2015 - 16:21

Economics is called the Dismal Science. Economics is the branch of knowledge concerned with the production, consumption, and transfer of wealth. Economics is generally about the behaviors of humans and markets as they seek, given the scarcity of means, to achieve certain ends.

How does economics apply to software development? We're not a market, and we don't create wealth, at least not directly; we create products and services that may create wealth. Microeconomics is the branch of economics that studies the behavior of individuals and their decision making on the allocation of limited resources. It's the scarcity of resources that is the basis of microeconomics, and software development certainly operates in the presence of scarce resources, so microeconomics is closer to what we need to make decisions in the presence of uncertainty. The general economic processes are of little interest here, so starting with big picture economics books is not much use.

Software economics is a subset of engineering economics. A key aspect of microeconomics applied to engineering problems is the application of Statistical Decision Theory: making decisions in the presence of uncertainty. Uncertainty comes in two types:

  • Aleatory uncertainty - the naturally occurring variances in the underlying processes.
  • Epistemic uncertainty - the lack of information about a probabilistic event in the future.

Aleatory uncertainty can be addressed by adding margin to our work: time and money. With epistemic uncertainty, the missing information has economic value to our decision making processes. That is, the missing information itself carries economic value when making decisions in the presence of uncertainty.

The cost of this missing information can be bought down with simple solutions: prototypes, for example, or short deliverables to test an idea or confirm an approach. Both are the basis of Agile and were discussed in depth in Software Engineering Economics, Barry Boehm, Prentice Hall, 1981.

Engineering economics is the application of economic techniques to the evaluation of design and engineering alternatives. Engineering economics assesses the appropriateness of a given project, estimates of its value, and justification of the project (or product) from an engineering standpoint.

This involves the time value of money and cash-flow concepts, such as compound and continuous interest. It continues with the economic practices and techniques used to evaluate and optimize decisions on the selection of strategies for project success.

When I hear "I read that book and it's about counting lines of code," the reader has failed to comprehend the difference between principles and practices. The sections on Statistical Decision Theory are about the Expected Value of Perfect Information and how to make decisions with imperfect information.
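
As a small illustration of that idea, here is a sketch in R with made-up probabilities and payoffs (my numbers, purely illustrative, not from Boehm's book): the Expected Value of Perfect Information is what the best-per-state choice would be worth, minus the best we can do choosing up front.

# Two alternatives (A, B) and two future states; the probabilities
# and payoffs below are invented purely for illustration.
p = c(0.6, 0.4)                 # probabilities of the two states
payoffs = rbind(A = c(100, 20), # payoff of alternative A in each state
                B = c(40, 80))  # payoff of alternative B in each state
 
ev = payoffs %*% p                                  # expected value of A (68) and B (56)
best_without_info = max(ev)                         # choose A up front: 68
with_perfect_info = sum(apply(payoffs, 2, max) * p) # best choice per state: 92
evpi = with_perfect_info - best_without_info        # 24: the most the information is worth

If buying down the epistemic uncertainty (a prototype, a short deliverable) costs less than the EVPI, it is economically rational to buy it.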

Statistical Decision Theory is about making choices: identifying the values, uncertainties and other issues relevant to a given decision, its rationality, and the resulting optimal decision. In Statistical Decision Theory, the underlying statistical processes and the resulting probabilistic outcomes require us to estimate in the presence of uncertainty.

Writing software for money, other people's money, requires us to estimate how much money is needed, when we'll be done spending it, and what will result from that spend.

This is the foundation of the Microeconomics of Software Development.

If there is no scarcity of resources (time, cost, technical performance), then estimating is not necessary. Just start the work, spend the money, and you'll be done when you're done. If, however, those resources are scarce, as they almost always are, then estimates are needed to decide how best to spend them.

Related articles:

  • Five Estimating Pathologies and Their Corrective Actions
  • Critical Success Factors of IT Forecasting
Categories: Project Management

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 9  


Part of the reason I embarked on Re-Read Saturdays was to refresh myself on a number of books that have had a significant impact on my career. Re-grounding myself was somewhat of a selfish idea; however, at the same time as refreshing myself on important concepts, Re-Read Saturday has provided a platform to share those ideas with a wider audience. As we begin the second half of the re-read of The Goal, I have been struck by how many people have been exposed to the ideas in The Goal and how many of those ideas they have put into action, even if they can’t recite the Theory of Constraints verbatim. For example, the development and test manager who recognized the handoff from development to test as a bottleneck and worked out a prioritization scheme that maximized throughput of critical projects. Earlier entries in this re-read are:

Part 1       Part 2       Part 3      Part 4      Part 5      Part 6      Part 7      Part 8

Chapter 23: Alex meets with Ted Spencer, the supervisor for the heat-treat area, who is asking Alex to get the “computer guy” (Ralph) off his back. Ralph is asking Ted to keep significantly better records of when parts enter and exit the heat-treat process. Ted indicates that he does not know why Ralph wants the data. In the past few chapters we have seen the power of transparency and communication to support process changes; in this scenario the lack of transparency has generated conflict. When Alex meets with Ralph he finds that Ralph has been trying to use the process data to understand why more shipments have not been completed, and has noticed SIGNIFICANT variability in the times that parts enter and exit the heat-treat process. Often parts that have been completed sit until someone has time to unload the furnace. This originally led him to question the validity of the data, so he requested that Ted generate better data. The issue turns out to be that parts are often not immediately unloaded after the heat-treat process due to timing and staffing issues. The furnace is loaded and then heats the parts for four or five hours before being ready to be unloaded. In order to maximize efficiency, people are assigned other jobs during the heating process, which causes the timing and resource contention problems. The heat-treat bottleneck is not being run at or near maximum capacity, which reduces the output of the plant. As Ralph leaves he mentions that he believes they can predict when orders ship based on the bottlenecks (this is a bit of foreshadowing; an area of further reading you might consider is our discussion of Little’s Law). Note that the problem with the heat-treat process was identified through measurement and analysis of the data. The problem Ralph identified is a reflection of the ripple effect of other changes, and shows that as the process is refined, better information is exposed.

Alex and Bob Donovan, the production supervisor, meet (loudly) over the discovery that the heat-treat process is not being used to maximum capacity. The discussion unearths that similar timing and resource problems are happening at the NCX-10, even though the new work rules ensure that no breaks are taken during machine set-up. The problem occurs when the machine stops and before the set-up begins again. They decide to staff both bottlenecks 24×7 so there is no downtime. The problem of who will staff the NCX-10 and heat-treat immediately exposes a new set of constraints, this time a reflection of the overall organization’s policies on pay and hiring. Hiring, including layoff callbacks, is currently frozen, therefore Alex and his team need to rob Peter to pay Paul. (A bit of foreshadowing: the impact of change can ripple through other process steps.) In an overall sense, the efficiency of both the heat-treat and NCX-10 steps as measured in terms of cost per unit is being reduced, while the overall effectiveness of the plant is being increased by the changes being made.

One of the other steps taken as part of the new changes to staff the bottlenecks is that Alex has let the foremen in the heat-treat area know that they will be rewarded for changes that improve the output of the process. The third-shift foreman makes two process changes that have a significant impact. He has broken high priority orders down and batched parts that require the same treatment together, and has prepped the material so that it is staged to be loaded as soon as the furnace is ready. He also points out to Alex that, with a little help from engineering, they can modify the loading process so that parts can be wheeled in and wheeled out rather than lifted in and out by a crane. The foreman is immediately shifted to first shift to work with engineering to make the changes and document the process. In Agile frameworks like Scrum this is EXACTLY why the teams doing the work need to reflect on how they are doing the work and take steps to improve their processes.

Chapter 24 begins on an up note! The plant has been able to ship more orders with a higher sales value than ever before, while reducing the level of work-in-progress. Champagne flows! During the celebration Bill Peach (Alex’s boss) calls and delivers praise from one of the clients, who has noticed that his orders are getting delivered. In light of the continuing celebration, Alex is driven home by a female member of staff, which leads to complications with Julie, his estranged wife, who just happens to be waiting to surprise Alex. As if Alex’s marriage problems were not enough, the next workday Alex discovers that, like a virus, the bottlenecks have spread. Changes to the process to make the parts that flow through the bottleneck more available have caused process steps for other parts to become bottlenecks.

Alex tracks down Johan and briefs him on what they have done. Johan suggests a visit to see what has been accomplished.

The Goal is truly about the Theory of Constraints; however, Johan’s role is the same as an Agile coach’s. Johan rarely solves the plant’s problems directly, but rather asks the questions that lead to a solution. The Goal provides a great side benefit of reinforcing one of the central ideas of Agile coaching.

Summary of The Goal so far:

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions, however performance is falling even further behind and fear has become a central feature of the corporate culture.

Chapters 4 through 6 shift the focus from steps in the process to the process as a whole, and move us down the path of identifying the ultimate goal of the organization (in this book). The goal is making money and embracing the big picture of systems thinking. In this section, the authors point out that we are often caught up with pursuing interim goals, such as quality, efficiency or even employment, to the exclusion of the ultimate goal. The burning platform identified in the first few pages of the book, the impending closure of the plant and perhaps the division, reminds us that in the long run an organization must make progress towards its ultimate goal, or it won’t exist.

Chapters 7 through 9 show Alex’s commitment to change: he seeks more precise advice from Johan, brings his closest reports into the discussion and begins a dialog with his wife (remember, this is a novel). In this section of the book the concept that you get what you measure is addressed. Here we see measures of efficiency being used at the level of part production, but not at the level of whole orders or even sales. We discover the corollary to the adage ‘you get what you measure’: if you measure the wrong thing, you get the wrong thing. We begin to see Alex’s urgency and commitment to make a change.

Chapters 10 through 12 mark a turning point in the book. Alex has embraced a more systems view of the plant and recognized that the measures used to date have been focused on optimizing parts of the process to the detriment of the overall goal of the plant. What has not fallen into place is how to take that new knowledge and change how the plant works. The introduction of the concepts of dependent events and statistical variation begins to shift the conceptual understanding from what to measure towards how the management team can actually use that information.

Chapters 13 through 16 drive home the point that dependent events and statistical variation impact the performance of the overall system. In order for the overall process to be more effective you have to understand the capability and capacity of each step and then take a systems view. These chapters establish the concepts of bottlenecks and constraints without directly naming them, and show that focusing on local optimums causes more trouble than benefit.

Chapters 17 through 18 introduce the concept of bottlenecked resources. The effect of the combination of dependent events and statistical variability flowing through bottlenecked resources makes delivery unpredictable and substantially more costly. The variability in flow through the process exposes bottlenecks that limit our ability to catch up, making projects and products late or, worse, generating technical debt when corners are cut in order to make the date or budget.

Chapters 19 through 20 begin with Johan coaching Alex’s team to help them identify a palette of possible solutions. They discover that every time the capacity of a bottleneck is increased, more product can be shipped. Changing the capacity of a bottleneck includes reducing down time and the amount of waste the process generates. The impact of a bottleneck is not the cost of an individual part, but the cost of the whole product that cannot be shipped. Alex and his team implement changes incrementally rather than waiting until they can deliver all of the changes at once.

Chapters 21 through 22 are a short primer on change management. Just telling people to do something different does not generate support. Significant change requires transparency, communication and involvement. One of Deming’s 14 Principles is constancy of purpose. Alex and his team engage the workforce through a wide range of communication tools while staying focused on implementing the changes needed to stay in business.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version

Categories: Process Management

R: Removing for loops

Mark Needham - Sun, 04/19/2015 - 00:53

In my last blog post I showed the translation of a likelihood function from Think Bayes into R and in my first attempt at this function I used a couple of nested for loops.

likelihoods = function(names, mixes, observations) {
  scores = rep(1, length(names))
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        scores[name] = scores[name] *  mixes[[name]][observation]      
      }
    }  
  return(scores)
}
Names = c("Bowl 1", "Bowl 2")
 
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
Mixes
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
l = likelihoods(Names, Mixes, Observations)
 
> l / sum(l)
  Bowl 1   Bowl 2 
0.627907 0.372093

We pass in a vector of bowls, a nested dictionary describing the mixes of cookies in each bowl and the observations that we’ve made. The function tells us that there’s an almost 2/3 probability of the cookies coming from Bowl 1 and just over 1/3 of being Bowl 2.

In this case there probably won’t be much of a performance improvement by getting rid of the loops but we should be able to write something that’s more concise and hopefully idiomatic.

Let’s start by getting rid of the inner for loop. That can be replaced by a call to the Reduce function like so:

likelihoods2 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  for(name in names) {
    scores[name] = Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1)
  }  
  return(scores)
}
l2 = likelihoods2(Names, Mixes, Observations)
 
> l2 / sum(l2)
  Bowl 1   Bowl 2 
0.627907 0.372093

So that’s good, we’ve still got the same probabilities as before. Now to get rid of the outer for loop. The Map function helps us out here:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  return(scores)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
> l3
$`Bowl 1`
  vanilla 
0.1054688 
 
$`Bowl 2`
vanilla 
 0.0625

We end up with a list instead of a vector which we need to fix by using the unlist function:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  return(unlist(scores))
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
Bowl 1.vanilla Bowl 2.vanilla 
      0.627907       0.372093

Now we just have this annoying ‘vanilla’ in the name. That’s fixed easily enough:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  result = unlist(scores)
  names(result) = names
 
  return(result)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
  Bowl 1   Bowl 2 
0.627907 0.372093

A slightly cleaner alternative makes use of the sapply function:

likelihoods3 = function(names, mixes, observations) {
  scores = sapply(names, function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1))
  names(scores) = names
 
  return(scores)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
  Bowl 1   Bowl 2 
0.627907 0.372093

That’s the best I’ve got for now, but I wonder if we could write a version of this using matrix operations somehow. A rough sketch of one idea is below, but a proper treatment is for next time!
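
The sketch is my own (the function name likelihoods4 is an invention of mine, reusing the inputs defined above): stack the mixes into a matrix, count how often each flavour was observed, raise each probability to its count, and take row-wise products. Because the observations are exchangeable, only the counts matter, not the order.

likelihoods4 = function(names, mixes, observations) {
  mixMatrix = do.call(rbind, mixes[names])                 # one row per bowl, one column per flavour
  counts = table(factor(observations, levels = colnames(mixMatrix)))
  exponents = matrix(counts, nrow = nrow(mixMatrix),
                     ncol = ncol(mixMatrix), byrow = TRUE) # repeat the counts for every bowl
  apply(mixMatrix ^ exponents, 1, prod)                    # multiply across each row
}
 
l4 = likelihoods4(Names, Mixes, Observations)
 
> l4 / sum(l4)
  Bowl 1   Bowl 2 
0.627907 0.372093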

Categories: Programming

F# Exchange 2015

Phil Trelford's Array - Sat, 04/18/2015 - 12:47

This Friday saw the first ever F# eXchange, a one-day, two-track conference dedicated to all things F#, hosted at Skills Matter in London and attracting developers from across Europe.

There was a strong focus on open source projects throughout the day, including MBrace (data scripting for the cloud), FAKE (a DSL for build tasks), Paket (a dependency manager for .NET), the F# Power Tools and FunScript (an F# to JavaScript compiler). In fact, all the presenters used the open source project FsReveal to generate their slides!

Keynote

Tomas Petricek opened proceedings with a keynote on The Big F# and Open Source Love Story.

Slow development

One of Tomas’s observations, on slow development for open source projects, resonated with many: successful projects often start as just a simple script that fulfils a specific need, then slowly gather momentum over time.

As an example, in Steffen Forkmann’s presentation he talked about how FAKE had started as a simple F# script and over the years has seen more and more contributors and downloads, with the addition of high quality documentation having a huge impact:

FAKE in numbers: 4278 commits, 137 contributors, 157252 downloads! Amazing work by @sforkmann & contributors at #fsharpx

— Tomas Petricek (@tomaspetricek) April 17, 2015

Talks

All the talks were recorded, and all the videos are already online!

Steffen also took advantage of his talk to make a special announcement about Paket:

Yay. It's done. I just released @PaketManager 1.0 live on stage at #fsharpex - http://t.co/STtMAzh3BT

— Steffen Forkmann (@sforkmann) April 17, 2015

Panel

The day ended with some pizza, drinks and a panel discussion organized by prolific F# contributor Don Syme.

Each panel member pitched why they thought F# was good in their core domain area, from cloud, games, design, data science and scripting through to web.

There were some interesting discussions, and some mentions of the recent fsharpWorks-led F# Survey.

2016

The date for next year’s F# eXchange 2016, the 16th of April, is already in the calendar. Hope to see you there, and please take advantage of the early bird ticket offer: only 85 GBP up until the 16th of June!

Categories: Programming

Life Quotes That Will Change Your Life

Life’s better with the right words.

And life quotes can help us live better.

Life quotes are a simple way to share some of the deepest insights on the art of living, and how to live well.

While some people might look for wisdom in a bottle, or in a book, or in a guru at the top of a mountain, surprisingly, a lot of the best wisdom still exists as quotes.

The problem is they are splattered all over the Web.

The Ultimate Life Quotes Collection

My ultimate Life Quotes collection is an attempt to put the best quotes right at your fingertips.

I wanted this life quotes collection to answer everything from “What is the meaning of life?” to “How do you live the good life?” 

I also wanted this life quotes collection to dive deep into all angles of life including dealing with challenges, living with regrets, how to find your purpose, how to live with more joy, and ultimately, how to live a little better each day.

The World’s Greatest Philosophers at Your Fingertips

Did I accomplish all that?

I’m not sure.  But I gave it the old college try.

I curated quotes on life from an amazing set of people including Dr. Seuss, Tony Robbins, Gandhi, Ralph Waldo Emerson, James Dean, George Bernard Shaw, Virginia Woolf, Buddha, Lao Tzu, Lewis Carroll, Mark Twain, Confucius, Jonathan Swift, Henry David Thoreau, and more.

Yeah, it’s a pretty serious collection of life quotes.

Don’t Die with Your Music Still In You

There are many messages and big ideas among the collection of life quotes.  But perhaps one of the most important messages is from the late, great Henry David Thoreau:

“Most men lead lives of quiet desperation and go to the grave with the song still in them.” 

And, I don’t think he meant play more Guitar Hero.

If you’re waiting for your chance to rise and shine, chances come to those who take them.

Not Being Dead is Not the Same as Being Alive

E.E. Cummings reminds us that there is more to living than simply existing:

“Unbeing dead isn’t being alive.” 

And the trick is to add more life to your years, rather than just add more years to your life.

Define Yourself

Life quotes teach us that living life on your terms starts with defining yourself.  Here are big, bold words from Harvey Fierstein that remind us of just that:

“Never be bullied into silence. Never allow yourself to be made a victim. Accept no one’s definition of your life; define yourself.”

Now is a great time to re-imagine all that you’re capable of.

We Regret the Things We Didn’t Do

It’s not usually the things that we do that we regret.  It’s the things we didn’t do:

“Of all sad words of tongue or pen, the saddest are these: ‘It might have been.’” – John Greenleaf Whittier

Have you answered to your calling?

Leave the World a Better Place

One sure-fire way that many people find their path is they aim to leave the world at least a little better than they found it.

“To laugh often and much; to win the respect of intelligent people and the affection of children; to leave the world a better place; to know even one life has breathed easier because you have lived. This is to have succeeded.” – Ralph Waldo Emerson

It’s a reminder that we can measure our life by the lives of the people we touch.

You Might Also Like

7 Habits of Highly Motivated People

10 Leadership Ideas to Improve Your Influence and Impact

Boost Your Confidence with the Right Words

The Great Inspirational Quotes Revamped

The Great Leadership Quotes Collection Revamped

Categories: Architecture, Programming

No.

NOOP.NL - Jurgen Appelo - Fri, 04/17/2015 - 23:01

Say No.

Last week, I had a nice Skype call with a reader who was seeking my advice on becoming an author and speaker, and I gave him some pointers. I normally don’t schedule calls with random people asking for a favor, but this time I made an exception. I had a good reason.

The post No. appeared first on NOOP.NL.

Categories: Project Management

Australia - July/August 2015

Coding the Architecture - Simon Brown - Fri, 04/17/2015 - 17:42

It's booked! Following on from my trip to Australia and the YOW! 2014 conference in December last year, I'll be back in Australia during July and August. The rough plan is to arrive in Perth and head east, visiting at least Melbourne, Brisbane and Sydney again. I'm hoping to schedule some user group talks and, although there probably won't be any public workshops, I'll be running a limited number of in-house 1-day workshops and/or talks along the way too.

If you're interested in me visiting your office/company during my trip, please just drop me a note at simon.brown@codingthearchitecture.com. Thanks!

Categories: Architecture

Stuff The Internet Says On Scalability For April 17th, 2015

Hey, it's HighScalability time:

A fine tribute on Silicon Valley & hilarious formula evaluating Peter Gregory's positive impact on humanity.

  • 118/196: nations becoming democracies since the mid-19th century; $70K: a nice minimum wage; 70 million: monthly StackExchange visitors; 1 billion: drone-planted trees; 1,000 years: longest-exposure camera shot ever

  • Quotable Quotes:

    • @DrQz: #Performance modeling is really about spreading the guilt around.

    • @natpryce: “What do we want?” “More levels of indirection!” “When do we want it?” “Ask my IDateTimeFactoryImplBeanSingletonProxy!”

    • @BenedictEvans: In the late 90s we were euphoric about what was possible, but half what we had sucked. Now everything's amazing, but we worry about bubbles

    • Calvin Zito on Twitter: "DreamWorks Animation: One movie, 250 TB to make. 10 movies in production at one time, 500 million files per movie. Wow."

    • Twitter: Some of our biggest MySQL clusters are over a thousand servers.

    • @SaraJChipps: It's 2015: open source your shit. No one wants to steal your stupid CRUD app. We just want to learn what works and what doesn't.

    • Calvin French-Owen: And as always: peace, love, ops, analytics.

    • @Wikipedia: Cut page load by 100ms and you save Wikipedia readers 617 years of wait annually. Apply as Web Performance Engineer

    • @IBMWatson: A person can generate more than 1 million gigabytes of health-related data.

    • @allspaw: "We’ve learned that automation does not eliminate errors." (yes!)  

    • @Obdurodon: Immutable data structures solve everything, in any environment where things like memory allocators and cache misses cost nothing.

    • KaiserPro: Pixar is still battling with lots of legacy cruft. They went through a phase of hiring the best and brightest directly from MIT and the like.

    • @abt_programming: "Duplication is far cheaper than the wrong abstraction" - @sandimetz

    • @kellabyte: When I see places running 1,200 containers for fairly small systems I want to scream "WHY?!"

    • chetanahuja: One of the engineers tried running our server stack on a Raspberry Pi for a laugh... I was gobsmacked to hear that the whole thing just worked (it's a custom networking protocol stack running in userspace), if just a bit slower than usual.

  • Chances are if something can be done with your data, it will be done. @RMac18: Snapchat is using geofilters specific to Uber's headquarters to poach engineers.

  • Why (most) High Level Languages are Slow. Exactly this by masterbuzzsaw: If manual memory management is cancer, what is manual file management, manual database connectivity, manual texture management, etc.? C# may have “saved” the world from the “horrors” of memory management, but it introduced null reference landmines and took away our beautiful deterministic C++ destructors.

  • Why NFS instead of S3/EBS? nuclearqtip with a great answer: Stateful; Mountable AND shareable; Actual directories; On-the-wire operations (I don't have to download the entire file to start reading it, and I don't have to do anything special on the client side to support this); Shared unix permission model; Tolerant of network failures; Locking!; Better caching; Big files without the hassle.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Is LeSS more than SAFe?

Xebia Blog - Fri, 04/17/2015 - 14:48

(Large) Dutch companies looking for a way to scale up the benefits their Agile teams bring mostly use the Scaled Agile Framework (SAFe) as a reference model. The model has been set up to be very accessible, also for managers, and training courses and certified consultants are available. As early as 2009, Craig Larman and Bas Vodde described their experiences applying Scrum in large organisations (among others Nokia) in their books 'Scaling Lean & Agile Development' and 'Practices for Scaling Lean & Agile Development'. They called the method Large Scale Scrum, LeSS for short.
LeSS has led an unobtrusive existence in recent years. Recently it was decided to put this valuable body of thought more into the spotlight. A third book is due this summer, the site less.works has been launched, a training tour has started, and Craig and Bas are putting in an appearance at the leading conferences. Bas, for instance, will deliver a keynote at Xebicon 2015 in Amsterdam on June 4th. Is LeSS more or less than SAFe? Or more or less SAFe?

What is LeSS?
LeSS, then, is a method for structuring a large(r) organisation around Agile teams. As the name suggests, Scrum is the starting point. There are two flavours: 'plain' LeSS, for up to 8 teams, and LeSS Huge, for 8 teams and more. LeSS builds on mandatory rules, for example: "An Overall Retrospective is held after the Team Retrospectives to discuss cross-team and system-wide issues, and create improvement experiments. This is attended by Product Owner, ScrumMasters, Team Representatives, and managers (if there are any).” In addition, LeSS has principles (design criteria). The principles form the frame of reference for taking the right design decisions. Finally, there are the Guidelines and Experiments: the things that have, or have not, proven successful in practice at organisations. Beyond the basic framework, LeSS goes deeper into:

  • Structure (the organisational structure)
  • Management (the changing role of management)
  • Technical Excellence (strongly based on XP and Continuous Delivery)
  • Adoption (the transformation towards the LeSS organisation).

LeSS in a nutshell
The foundation of LeSS is that Large Scale Scrum = Scrum! Like SAFe, LeSS looks at how Scrum can be applied to a group of, say, 100 people. LeSS stays closest to Scrum: there is 1 sprint, with 1 Product Owner, 1 product backlog, 1 planning and 1 sprint review, in which 1 product is delivered. This differs from SAFe, which defines an inflated sprint (the Product Increment). Pulling off this single-sprint implementation requires, besides a very strong whole-product focus, a technical platform that supports it. Where SAFe pragmatically allows a gradual introduction of Agile at scale, LeSS is stricter in its ready-to-start requirements. A structure must be put in place that breaks the culture of the 'contract game': the culture of over-asking, pressure, ambiguity, surprises, and blame-driven accountability.

LeSS is more and less SAFe
The recent effort to make LeSS accessible will undoubtedly lead to sharply growing attention for this appealing approach to organising Agile at scale. LeSS differs from SAFe, although the two models also have a lot in common, particularly in their sources of inspiration.
The two models take a different approach on, for example:

  • how to apply Scrum to a cluster of teams
  • the approach to the transformation towards Agile at scale
  • how solutions are presented: SAFe gives the solution, LeSS the pros and cons of the choices

It is also notable that SAFe (with its portfolio level) explains how to connect strategy to the backlogs, whereas LeSS pays more attention to the transformation (Adoption) and to Agile at very large scale (LeSS Huge).

Whether an organisation chooses LeSS or SAFe will depend on what fits the organisation best: what fits its ambition for change and its 'agility' at the moment of starting. Strongly 'blue' organisations will choose SAFe; organisations that dare to take a convincing step towards an Agile organisation will sooner choose LeSS. In either case it pays to take note of the solutions the other method offers.