
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Top Developer Gifts (And Tech Geek Gifts)

Making the Complex Simple - John Sonmez - Mon, 11/17/2014 - 16:00

With the holidays around the corner, I thought it would be a good time to do another round-up of what I think are some of the best gifts for software developers, programmers and other technology geeks for this year. As you know, it can be very difficult to buy gifts for software developers, because, well, […]

The post Top Developer Gifts (And Tech Geek Gifts) appeared first on Simple Programmer.

Categories: Programming

Measures of Program Performance

Herding Cats - Glen Alleman - Mon, 11/17/2014 - 14:29

In a sufficiently complex project we need measures of progress that go beyond burning down a list of same-sized stories - which, by the way, require non-trivial work to make and keep the same size. And of course, if this same-sizedness does not have a sufficiently small variance, all that effort is wasted.

But assuming we're not working on a project small enough that same-sized work efforts can be developed, we need measures of progress related to the Effectiveness of the deliverables and the Performance of those deliverables in producing that effectiveness for the customer.

Here's a recent webinar on this topic.

Measurement News Webinar from Glen Alleman

And of course we need to define in which domain this approach can be applied, in which domain it is too much, and in which domain it is actually not enough.

Paradigm of agile project management from Glen Alleman

Then the actual conversation about any approach to Increasing the Probability of Success for our work efforts can start, along with identifying the underlying Root Causes of any impediments to that goal that exist today and the corrective actions needed to remove them. Without knowing the root cause and corrective actions, any suggested solution has little value, as it is speculative at best and nonsense at worst.
Categories: Project Management

R: ggmap – Overlay shapefile with filled polygon of regions

Mark Needham - Mon, 11/17/2014 - 01:53

I’ve been playing around with plotting maps in R over the last week and got to the point where I wanted to have a google map in the background with a filled polygon on a shapefile in the foreground.

The first bit is reasonably simple – we can just import the ggmap library and make a call to get_map:

> library(ggmap)
> sfMap = get_map(location = 'San Francisco', zoom = 12)

Next I wanted to show the outlines of the different San Francisco zip codes and came across a blog post by Paul Bidanset on Baltimore neighbourhoods which I was able to adapt.

I downloaded a shapefile of San Francisco’s zip codes from the DataSF website and then loaded it into R using the readOGR and spTransform functions from the rgdal package:

> library(rgdal)
> library(ggplot2)
> library(magrittr)  # provides the %>% pipe used below
> sfn = readOGR(".","sfzipcodes") %>% spTransform(CRS("+proj=longlat +datum=WGS84"))
> ggplot(data = sfn, aes(x = long, y = lat, group = group)) + geom_path()

sfn is a spatial type of data frame…

> class(sfn)
[1] "SpatialPolygonsDataFrame"
attr(,"package")
[1] "sp"

…but we need a normal data frame to be able to easily merge other data onto the map and then plot it. We can use ggplot2’s fortify command to do this:

> names(sfn)
[1] "OBJECTID" "ZIP_CODE" "ID"   
 
> sfn.f = sfn %>% fortify(region = 'ZIP_CODE')
 
> SFNeighbourhoods = merge(sfn.f, sfn@data, by.x = 'id', by.y = 'ZIP_CODE')

I then made up some fake values for each zip code so that we could have different colour shadings for each zip code on the visualisation:

> library(dplyr) 
 
> postcodes = SFNeighbourhoods %>% select(id) %>% distinct()
 
> values = data.frame(id = c(postcodes),
                      value = c(runif(postcodes %>% count() %>% unlist(),5.0, 25.0)))

I then merged those values onto SFNeighbourhoods:

> sf = merge(SFNeighbourhoods, values, by.x='id')
 
> sf %>% group_by(id) %>% do(head(., 1)) %>% head(10)
Source: local data frame [10 x 10]
Groups: id
 
      id      long      lat order  hole piece   group OBJECTID    ID     value
1  94102 -122.4193 37.77515     1 FALSE     1 94102.1       14 94102  6.184814
2  94103 -122.4039 37.77006   106 FALSE     1 94103.1       12 94103 21.659752
3  94104 -122.4001 37.79030   255 FALSE     1 94104.1       10 94104  5.173199
4  94105 -122.3925 37.79377   293 FALSE     1 94105.1        2 94105 15.723456
5  94107 -122.4012 37.78202   504 FALSE     1 94107.1        1 94107  8.402726
6  94108 -122.4042 37.79169  2232 FALSE     1 94108.1       11 94108  8.632652
7  94109 -122.4139 37.79046  2304 FALSE     1 94109.1        8 94109 20.129402
8  94110 -122.4217 37.73181  2794 FALSE     1 94110.1       16 94110 12.410610
9  94111 -122.4001 37.79369  3067 FALSE     1 94111.1        9 94111 10.185054
10 94112 -122.4278 37.73469  3334 FALSE     1 94112.1       18 94112 24.297588

Now we can easily plot those colours onto our shapefile by calling geom_polygon instead of geom_path:

> ggplot(sf, aes(long, lat, group = group)) + 
    geom_polygon(aes(fill = value))


And finally let’s wire it up to our google map:

> ggmap(sfMap) + 
    geom_polygon(aes(fill = value, x = long, y = lat, group = group), 
                 data = sf,
                 alpha = 0.8, 
                 color = "black",
                 size = 0.2)

I spent way too long with the alpha value set to '0' on this last plot, wondering why I wasn't seeing any shading - don't make that mistake!

Categories: Programming

When the Solution to Bad Management is a Bad Solution

Herding Cats - Glen Alleman - Mon, 11/17/2014 - 01:40

Much has been written about the Estimating Problem, the optimism bias, the planning fallacy, and other related issues with estimating in the presence of Dilbert-esque management. The notion that the solution to the estimating problem is not to estimate, but to start work, measure the performance of that work, and use it to forecast completion dates and efforts is essentially falling into the trap Steve Martin did in LA Story.

He used yesterday's weather because he was too lazy to make tomorrow's forecast.

By the way, each of those issues has a direct and applicable solution. So next time you hear someone use them as the basis of a new idea, ask if they have tried the known-to-work solutions to the planning fallacy, estimating bias, optimism bias, and the myriad of other project issues with known solutions.

All that measuring performance to date does is measure yesterday's weather. This yesterday's weather paradigm has been well studied. If in fact your project behaves like a Climate, then yesterday's weather is likely a good indicator of tomorrow's weather.

The problem of course with the yesterday's weather approach, is the same problem Steve Martin had in LA Story when he used a previously recorded weather forecast for the next day. 

Today's weather turned out not to be like yesterday's weather.

Those posting that stories settle down to a rhythm assume - and we know what assume means: it makes an Ass out of U and Me - that the variances in the work efforts are settling down as well. That's a hugely naive approach without actual confirmation that the variances are small enough not to impact the past performance. And when you have statistical processes looking like this, from small-sample projects in the absence of an actual reference class - in this case a self-reference class - you're also being hugely naive about the possible behaviours of stochastic processes.

(chart: items per sprint)

Then when you slice the work into same-sized efforts - this is actually the process used in the domains we work in: DOD, DOE, ERP - you're actually estimating future performance based on a reference class and calling it Not Estimating.

So when you hear examples of Bad Management - over-commitment of work, assigning a project manager to a project 100's of times larger than anything that PM has ever experienced and expecting success, getting a credible estimate and cutting it in half, or any other Dilbert-style management process - the proposed fix starts with dropping the core process needed to increase the probability of success.

This approach is itself contrary to good project management principles, which are quite simple:

Principles and Practices of Performance-Based Project Management® from Glen Alleman  

If we start with a solution to a problem of Bad Management before assuring that the Principles and Practices of Good Management are in place, we'll be paving the cow path, as we say in our enterprise, space, and defense domain. This means the solution will not actually have fixed the problem. It will not have treated the root cause of the problem, just addressed the symptoms.

There is no substitute for Good Management.

And when you hear there is a smell of bad management, with no enumeration of the root causes and the corrective actions for those root causes, remember Inigo Montoya's retort to Vizzini's statement:

You keep using that word. I do not think it means what you think it means.

Those words - dysfunction, smell, root cause - are all used while missing the actual enumerated root causes, an assessment of the possible corrective actions, and the resulting removal of the symptoms.

I speak about this approach from my hands-on experience working Performance Assessment and Root Cause Analysis on programs that are in the headlines.

Related articles:
  • Should I Be Estimating My Work?
  • Estimating Guidance
  • Assessing Value Produced By Investments
  • Basis of #NoEstimates are 27 Year Old Reports
Categories: Project Management

Spark: Parse CSV file and group by column value

Mark Needham - Sun, 11/16/2014 - 23:53

I've found myself working with large CSV files quite frequently and, realising that my existing toolset didn't let me explore them quickly, I thought I'd spend a bit of time looking at Spark to see if it could help.

I’m working with a crime data set released by the City of Chicago: it’s 1GB in size and contains details of 4 million crimes:

$ ls -alh ~/Downloads/Crimes_-_2001_to_present.csv
-rw-r--r--@ 1 markneedham  staff   1.0G 16 Nov 12:14 /Users/markneedham/Downloads/Crimes_-_2001_to_present.csv
 
$ wc -l ~/Downloads/Crimes_-_2001_to_present.csv
 4193441 /Users/markneedham/Downloads/Crimes_-_2001_to_present.csv

We can get a rough idea of the contents of the file by looking at the first row along with the header:

$ head -n 2 ~/Downloads/Crimes_-_2001_to_present.csv
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,"(41.75017626412204, -87.55494559131228)"

I wanted to do a count of the ‘Primary Type’ column to see how many of each crime we have. Using just Unix command line tools this is how we’d do that:

$ time tail +2 ~/Downloads/Crimes_-_2001_to_present.csv | cut -d, -f6  | sort | uniq -c | sort -rn
859197 THEFT
757530 BATTERY
489528 NARCOTICS
488209 CRIMINAL DAMAGE
257310 BURGLARY
253964 OTHER OFFENSE
247386 ASSAULT
197404 MOTOR VEHICLE THEFT
157706 ROBBERY
137538 DECEPTIVE PRACTICE
124974 CRIMINAL TRESPASS
47245 PROSTITUTION
40361 WEAPONS VIOLATION
31585 PUBLIC PEACE VIOLATION
26524 OFFENSE INVOLVING CHILDREN
14788 CRIM SEXUAL ASSAULT
14283 SEX OFFENSE
10632 GAMBLING
8847 LIQUOR LAW VIOLATION
6443 ARSON
5178 INTERFERE WITH PUBLIC OFFICER
4846 HOMICIDE
3585 KIDNAPPING
3147 INTERFERENCE WITH PUBLIC OFFICER
2471 INTIMIDATION
1985 STALKING
 355 OFFENSES INVOLVING CHILDREN
 219 OBSCENITY
  86 PUBLIC INDECENCY
  80 OTHER NARCOTIC VIOLATION
  12 RITUALISM
  12 NON-CRIMINAL
   6 OTHER OFFENSE
   2 NON-CRIMINAL (SUBJECT SPECIFIED)
   2 NON - CRIMINAL
 
real	2m37.495s
user	3m0.337s
sys	0m1.471s

This isn’t too bad but it seems like the type of calculation that Spark is made for so I had a look at how I could go about doing that. To start with I created an SBT project with the following build file:

name := "playground"
 
version := "1.0"
 
scalaVersion := "2.10.4"
 
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
 
libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"
 
ideaExcludeFolders += ".idea"
 
ideaExcludeFolders += ".idea_modules"

I downloaded Spark and after unpacking it launched the Spark shell:

$ pwd
/Users/markneedham/projects/spark-play/spark-1.1.0/spark-1.1.0-bin-hadoop1
 
$ ./bin/spark-shell
...
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.1.0
      /_/
 
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
...
Spark context available as sc.
 
scala>

I first import some classes I’m going to need:

scala> import au.com.bytecode.opencsv.CSVParser
import au.com.bytecode.opencsv.CSVParser
 
scala> import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.RDD

Now, following the quick start example, we’ll create a Resilient Distributed Dataset (RDD) from our Crime CSV file:

scala> val crimeFile = "/Users/markneedham/Downloads/Crimes_-_2001_to_present.csv"
crimeFile: String = /Users/markneedham/Downloads/Crimes_-_2001_to_present.csv
 
scala> val crimeData = sc.textFile(crimeFile).cache()
14/11/16 22:31:16 INFO MemoryStore: ensureFreeSpace(32768) called with curMem=0, maxMem=278302556
14/11/16 22:31:16 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 265.4 MB)
crimeData: org.apache.spark.rdd.RDD[String] = /Users/markneedham/Downloads/Crimes_-_2001_to_present.csv MappedRDD[1] at textFile at <console>:17

Our next step is to process each line of the file using our CSV Parser. A simple way to do this would be to create a new CSVParser for each line:

scala> crimeData.map(line => {
         val parser = new CSVParser(',')
         parser.parseLine(line).mkString(",")
       }).take(5).foreach(println)
14/11/16 22:35:49 INFO SparkContext: Starting job: take at <console>:23
...
14/11/16 22:35:49 INFO SparkContext: Job finished: take at <console>:23, took 0.013904 s
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,(41.75017626412204, -87.55494559131228)
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN,SIDEWALK,false,false,0413,004,8,48,03,1191060,1844959,2014,01/18/2014 12:39:56 AM,41.729576153145636,-87.57568059471686,(41.729576153145636, -87.57568059471686)
9460339,HX113740,01/14/2014 04:44:00 AM,040XX W MAYPOLE AVE,1310,CRIMINAL DAMAGE,TO PROPERTY,RESIDENCE,false,true,1114,011,28,26,14,1149075,1901099,2014,01/16/2014 12:40:00 AM,41.884543798701515,-87.72803579358926,(41.884543798701515, -87.72803579358926)
9461467,HX114463,01/14/2014 04:43:00 AM,059XX S CICERO AVE,0820,THEFT,$500 AND UNDER,PARKING LOT/GARAGE(NON.RESID.),false,false,0813,008,13,64,06,1145661,1865031,2014,01/16/2014 12:40:00 AM,41.785633535413176,-87.74148516669783,(41.785633535413176, -87.74148516669783)

That works but it’s a bit wasteful to create a new CSVParser each time so instead let’s just create one for each partition that Spark splits our file up into:

scala> crimeData.mapPartitions(lines => {
         val parser = new CSVParser(',')
         lines.map(line => {
           parser.parseLine(line).mkString(",")
         })
       }).take(5).foreach(println)
14/11/16 22:38:44 INFO SparkContext: Starting job: take at <console>:25
...
14/11/16 22:38:44 INFO SparkContext: Job finished: take at <console>:25, took 0.015216 s
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,(41.75017626412204, -87.55494559131228)
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN,SIDEWALK,false,false,0413,004,8,48,03,1191060,1844959,2014,01/18/2014 12:39:56 AM,41.729576153145636,-87.57568059471686,(41.729576153145636, -87.57568059471686)
9460339,HX113740,01/14/2014 04:44:00 AM,040XX W MAYPOLE AVE,1310,CRIMINAL DAMAGE,TO PROPERTY,RESIDENCE,false,true,1114,011,28,26,14,1149075,1901099,2014,01/16/2014 12:40:00 AM,41.884543798701515,-87.72803579358926,(41.884543798701515, -87.72803579358926)
9461467,HX114463,01/14/2014 04:43:00 AM,059XX S CICERO AVE,0820,THEFT,$500 AND UNDER,PARKING LOT/GARAGE(NON.RESID.),false,false,0813,008,13,64,06,1145661,1865031,2014,01/16/2014 12:40:00 AM,41.785633535413176,-87.74148516669783,(41.785633535413176, -87.74148516669783)

You’ll notice that we’ve still got the header being printed which isn’t ideal – let’s get rid of it!

I expected there to be a ‘drop’ function which would allow me to do that but in fact there isn’t. Instead we can make use of our knowledge that the first partition will contain the first line and strip it out that way:

scala> def dropHeader(data: RDD[String]): RDD[String] = {
         data.mapPartitionsWithIndex((idx, lines) => {
           if (idx == 0) {
             // the header is the first line of the first partition;
             // drop(1) advances the iterator past it in place
             lines.drop(1)
           }
           lines
         })
       }
dropHeader: (data: org.apache.spark.rdd.RDD[String])org.apache.spark.rdd.RDD[String]

Now let’s grab the first 5 lines again and print them out:

scala> val withoutHeader: RDD[String] = dropHeader(crimeData)
withoutHeader: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[7] at mapPartitionsWithIndex at <console>:14
 
scala> withoutHeader.mapPartitions(lines => {
         val parser = new CSVParser(',')
         lines.map(line => {
           parser.parseLine(line).mkString(",")
         })
       }).take(5).foreach(println)
14/11/16 22:43:27 INFO SparkContext: Starting job: take at <console>:29
...
14/11/16 22:43:27 INFO SparkContext: Job finished: take at <console>:29, took 0.018557 s
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,(41.75017626412204, -87.55494559131228)
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN,SIDEWALK,false,false,0413,004,8,48,03,1191060,1844959,2014,01/18/2014 12:39:56 AM,41.729576153145636,-87.57568059471686,(41.729576153145636, -87.57568059471686)
9460339,HX113740,01/14/2014 04:44:00 AM,040XX W MAYPOLE AVE,1310,CRIMINAL DAMAGE,TO PROPERTY,RESIDENCE,false,true,1114,011,28,26,14,1149075,1901099,2014,01/16/2014 12:40:00 AM,41.884543798701515,-87.72803579358926,(41.884543798701515, -87.72803579358926)
9461467,HX114463,01/14/2014 04:43:00 AM,059XX S CICERO AVE,0820,THEFT,$500 AND UNDER,PARKING LOT/GARAGE(NON.RESID.),false,false,0813,008,13,64,06,1145661,1865031,2014,01/16/2014 12:40:00 AM,41.785633535413176,-87.74148516669783,(41.785633535413176, -87.74148516669783)
9460355,HX113738,01/14/2014 04:21:00 AM,070XX S PEORIA ST,0820,THEFT,$500 AND UNDER,STREET,true,false,0733,007,17,68,06,1171480,1858195,2014,01/16/2014 12:40:00 AM,41.766348042591375,-87.64702037047671,(41.766348042591375, -87.64702037047671)

We’re finally in good shape to extract the values from the ‘Primary Type’ column and count how many times each of those appears in our data set:

scala> withoutHeader.mapPartitions(lines => {
         val parser=new CSVParser(',')
         lines.map(line => {
           val columns = parser.parseLine(line)
           Array(columns(5)).mkString(",")
         })
       }).countByValue().toList.sortBy(-_._2).foreach(println)
14/11/16 22:45:20 INFO SparkContext: Starting job: countByValue at <console>:30
14/11/16 22:45:20 INFO DAGScheduler: Got job 7 (countByValue at <console>:30) with 32 output partitions (allowLocal=false)
...
14/11/16 22:45:30 INFO SparkContext: Job finished: countByValue at <console>:30, took 9.796565 s
(THEFT,859197)
(BATTERY,757530)
(NARCOTICS,489528)
(CRIMINAL DAMAGE,488209)
(BURGLARY,257310)
(OTHER OFFENSE,253964)
(ASSAULT,247386)
(MOTOR VEHICLE THEFT,197404)
(ROBBERY,157706)
(DECEPTIVE PRACTICE,137538)
(CRIMINAL TRESPASS,124974)
(PROSTITUTION,47245)
(WEAPONS VIOLATION,40361)
(PUBLIC PEACE VIOLATION,31585)
(OFFENSE INVOLVING CHILDREN,26524)
(CRIM SEXUAL ASSAULT,14788)
(SEX OFFENSE,14283)
(GAMBLING,10632)
(LIQUOR LAW VIOLATION,8847)
(ARSON,6443)
(INTERFERE WITH PUBLIC OFFICER,5178)
(HOMICIDE,4846)
(KIDNAPPING,3585)
(INTERFERENCE WITH PUBLIC OFFICER,3147)
(INTIMIDATION,2471)
(STALKING,1985)
(OFFENSES INVOLVING CHILDREN,355)
(OBSCENITY,219)
(PUBLIC INDECENCY,86)
(OTHER NARCOTIC VIOLATION,80)
(NON-CRIMINAL,12)
(RITUALISM,12)
(OTHER OFFENSE ,6)
(NON-CRIMINAL (SUBJECT SPECIFIED),2)
(NON - CRIMINAL,2)

We get the same results as with the Unix commands, except it took less than 10 seconds to calculate, which is pretty cool!

Categories: Programming

SPaMCAST 316 – David Rico, Agile Cost of Quality

Software Process and Measurement Cast - Sun, 11/16/2014 - 23:00

Listen to the Software Process and Measurement Cast 316

SPaMCAST 316 features a return visit from Dr. David Rico. We talked about the cost of quality and Agile. Does Agile impact the cost of quality? The cost of quality is a measure of the time and cost that is required to ensure that what is delivered meets quality standards. Dr. Rico walks us through the evidence that not only does Agile improve customer satisfaction, but it also improves the cost of quality.

Dr. Rico has been a technical leader in support of NASA, U.S. Air Force, U.S. Navy, and U.S. Army for over 30 years. He has led numerous projects based on Cloud Computing, Lean Thinking, Agile Methods, SOA, Web Services, Six Sigma, FOSS, ISO 9001, CMMI, Baldrige, TQM, Enterprise Architecture, DoDAF, and DoD 5000. He specializes in IT investment analysis, IT portfolio valuation, and IT enabled change. He has been an international keynote speaker, presented at leading industry conferences, written seven textbooks, published numerous articles, and is a reviewer for multiple systems engineering journals. He is a Certified PMP, CSEP, ACP, CSM, and SAFe Agilist, and teaches at four Washington, DC-area universities. He has been in the field of information systems since 1983.

Contact Dr Rico
Blog:  davidfrico.com
Email: dave1@davidfrico.com
Twitter: @dr_david_f_rico

Call to action!

We are in the middle of a re-read of John Kotter's classic Leading Change on the Software Process and Measurement Blog. Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book is next. We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast. Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog. Second, we will use the list to drive future "Re-read" Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th. Feel free to choose your platform: send an email, leave a message on the blog or on Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 317 will tackle a wide range of frequently asked questions, ranging from the possibility of an acceleration trap and the relevance of function points to whether teams have peak loads and how to run safe-to-fail experiments.

We will also have the next instalment of Kim Pries’s column, The Software Sensei!

 

Upcoming Events

DCG Webinars:

How to Split User Stories
Date: November 20th, 2014
Time: 12:30pm EST
Register Now

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes to bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

 

 


Categories: Process Management


Re-read Saturday: Leading Change, John P. Kotter Chapter 2

You need to climb each stage to reach the top!

 

We began exploring Leading Change by John P. Kotter with the reasons organizational change fails. Chapter two turns to successful change and the forces that drive it: an introduction to Kotter's famous eight-stage process model and the role of leadership in change.

The eight-stage process for creating major change is a direct reflection of the eight common errors described in chapter one. The model describes a sequence of activities needed to generate, implement and then instantiate change within an organization. The eight steps are:

  1. Establishing a sense of urgency.
  2. Creating a guiding coalition.
  3. Developing a vision and strategy.
  4. Communicating the change vision.
  5. Empowering broad-based action.
  6. Generating short-term wins.
  7. Consolidating gains and producing more change.
  8. Anchoring new approaches in the culture.

Each step in the model builds on the step before. Jumping into the model by communicating a vision and strategy without a power base and a sense of organizational urgency is like putting the cart before the horse. The strategy and vision you are trying to communicate will not have motivational power and will easily run out of gas. When considering the stages in the model, recognize that Kotter conceived of the model as a sequence and that each step needs to be addressed.

Kotter talked briefly in the chapter about projects within projects. The idea is that most major changes are really a group of inter-related changes. An IT program, for example, is a group of related projects managed in a coordinated fashion; if we looked at a cross section of the program at any specific time, some projects would be starting while others complete. Similarly, any individual change project following Kotter's process within a larger group of changes will be at the stage needed by that project.

The second major theme in this chapter is a discussion of leadership and the differences between leadership and management. Leadership provides the vision and direction needed to build a power base for change and then galvanize the organization into action. Almost by definition, a leader conceives of a vision of the future and then acts as a catalyst to make that vision a reality. Leadership is transformational in nature. The difficulty is that many change programs are led by managers rather than leaders. Management is concerned with organizing, planning and controlling work; almost by definition, management is a tool to resist change. Management is important to day-to-day activities, but without the vision of leadership there would be nothing to manage. Where leadership transforms, management translates.

While the dichotomy of leadership and management seems black and white, both are always required in any organization. As the rate of change increases (or at least as the need for the rate of change increases), the need for vision and leadership increases. Alternately, during periods in which there is little pressure on a firm's business model, the need for managers and management tends to rise into ascendancy over the need for leadership. The late 1940s and 1950s were such a period in the United States. That is not the environment we find ourselves in today: change is a fact of life. Kotter's eight-stage process model provides a structure for applying leadership in a consistent manner that identifies why change needs to occur, builds a base, delivers change and makes sure it sticks.


Categories: Process Management


Social Media and Process Improvement

Collaboration tools are now nearly ubiquitous in the home, in the workplace and on your belt. These tools are wielded by a generation that has grown up using text/SMS messages.

A piece of advice I once received was that you either make things happen or let them happen to you. That advice is as true now as it was then. We live in an era of great change inside and outside the information technology community. The advent of social media and its incorporation into the workplace to foster collaboration is only one manifestation of the change that is going on. The explosion of social media such as LinkedIn, Facebook, blogs and podcasts carries many messages for the process improvement (PI) community. The first and perhaps the most important message is transmitted by the extraordinary fact that these tools exist at all. Their existence (not even reflecting on the fact that there are five more every day) shows that change is occurring and that it is unstoppable, because it fits human nature. These changes are not only unstoppable, but gaining speed. The change driven by social media is a very human kind of change, affecting how people interact and how work is accomplished. In the process improvement arena both you and your customers are the change. This is a statement on many levels. I would like to focus on what the changing environment says about a changing vision of control. Control is a critical concept in the process improvement community, and the change I see in our industry is redefining the concept of control, or at the very least challenging how control can be applied.

At its basic level, a control is a gate that interjects "permissions" between two groups. For example, defining a process for tailoring a defined methodology for use on a project creates a control gate that regulates how work will be accomplished. The process defines the ritual required to be granted permission to change the standard process. The world that created these control gates is quickly being overtaken by a new set of rules that govern collaboration and social interaction. Workers in this new world can easily view permissions as a hurdle between them and getting the work done. This is most true when control gates add drag to the process without adding perceived value. It conflicts with an alternative, albeit more classic, view of organizational control, in which relying on the wisdom of crowds or distributing authority widely is viewed as contributing to anarchy. That second paradigm is one we have been trained to accept as true by models such as the CMMI or tools like Six Sigma: standardization is good, control is required for standardization, and therefore anything that challenges control leads to a suboptimal outcome. I suggest that a change in paradigm is at our door; it is knocking, and it isn't going to leave. Change is inevitable, and since we are the change, we can either help guide change in our organizations and our industry or ride the wave and let change happen to us.


Categories: Process Management

Building Games with Google Cast is Easy and Powerful

Google Code Blog - Fri, 11/14/2014 - 20:03
By Nathan Camarillo, Chromecast Product Manager for Games

This week, we announced several Google Cast games that were built with the Google Cast SDK. With a Google Cast endpoint like a Chromecast or Android TV, developers can now use Google Cast to bring a whole new style of multi-screen social gaming experiences to the living room.
What makes a Google Cast game unique? It enables multi-screen gameplay between mobile devices and a television, and it transforms users’ mobile devices into amazing game controllers. Anyone with a compatible Android or iOS phone or tablet can join the game. There’s no hassle or expense with extra controllers or peripherals; your very own iPhone or Android phone simply works and everyone can join in.

The innovative part of creating a Google Cast game is all the ways you can use mobile devices to create a variety of controls with the television as a shared screen. The Accelerometer can be used for motion controls ranging from subtle, to dramatic, to rhythmic. No-Look Controls and virtual controllers can allow you to focus on the television as you compete against friends. Direct target manipulation through touch controls can create intense gameplay moments that temporarily focus on the mobile device and then return the focus to the television where all players can share and compare the results. You can even use the microphone or other input methods to create games for everyone in the home. Whether cooperative or competitive, Google Cast enables you to create fun moments for everyone using the devices they already have and know how to use.

Now it’s your turn! Go make something fun with the Google Cast Design Principles. The experiences you create using Google Cast will entertain gamers and inspire a whole community of developers embracing a new revolution in multi-screen gaming.

Categories: Programming

Fixing the #1 problem with Xamarin.Forms

Eric.Weblog() - Eric Sink - Fri, 11/14/2014 - 19:00
Update

I'm not sure what I expected when I wrote this blog entry. I just figured I would "throw it out there" and see what happened. Maybe the Xamarin.Forms team would talk about my idea at their next standup.

What actually happened is that within an hour, Jason Smith (the lead developer of Xamarin.Forms) invited me to a Skype call, shared his screen, brought up The Code, and we talked about ways it could be improved.

I didn't expect that.

(But it was glorious, as nerd conversations about code so often are.)

The discussion gave me a decent idea of what kind of changes Jason intended to make, but I don't think it is my place to say more about the details. Suffice it to say that I'm probably not getting exactly what I suggested, but it looks like I'll be getting what I need. If Jason and his team end up confirming the viability of the changes he and I discussed, then it will definitely become possible to get much better performance in a Xamarin.Forms app.

Anyway, huge kudos and thanks to Jason for a response I consider way "above and beyond the call".

And the #1 problem is...

It's slow.

To be fair...

It's not always slow. There are plenty of use cases where it works fine.

And not all of Xamarin.Forms is slow. The biggest offender is the layout system.

But this can be a pretty big problem. It's not too hard to find stories of people who gave up on Xamarin.Forms simply because there was no way to make their app perform acceptably.

Positive remarks to balance my complaining

Xamarin.Forms is terribly exciting. I've spent much of the last three months working with it, and I love it. Like any young technology trying to solve very hard problems, it has plenty of rough edges. But it shows high potential for awesomeness.

And lots of people seem to agree. In fact, I daresay that the level of excitement around Xamarin.Forms has taken some people at Xamarin by surprise. I can't quote verbatim, but I'm pretty sure I heard somebody at Evolve say that they changed their plans for the training and sessions to include a lot more Xamarin.Forms content because of the buzz.

People want this technology. And they want it to be great.

That's my goal in writing this piece. I just want to help make Xamarin.Forms great.

BTW

If the code were open source, this blog entry would be a pull request. And it wouldn't be my only one.

Someone has made the decision to not open source Xamarin.Forms, and I shall not criticize or second-guess this choice. But I can't resist saying that if it were open source, I would be actively spending my time helping make it better.

This is a very difficult problem

Cross-platform stuff is really hard. And if you add "UI" into the previous sentence then you have to add at least two more "reallys".

I've been doing cross platform development for a long time. I've used dozens of different UI toolkits and APIs. After you master the first ten, they all start to look the same.

Until you try to wrap them in a common API, and then the differences stop whispering and start screaming.

Xamarin.Forms is venturing into territory where dozens of others have failed. We're going to have to be patient.

But good grief the layout code is slow

Seriously. Write something non-trivial with Xamarin.Forms and then bring it up under a profiler.

But that's okay...

... for two reasons.

  • The layout code is doing a lot of work for you. I don't mind code that takes time when it's doing stuff I need done. That's life. The layout system is powerful, and convenience always comes at a price.

  • The code will get better. I've spent an awful lot of time reading Xamarin.Forms.Core.dll in the Xamarin Studio Assembly Browser. This code simply hasn't had much attention given to optimization yet. When it does, it will get faster.

I'm not complaining about the layout code being slow. I'm complaining that I always have to use it.

Child views vs. Layouts

The real problem here is that Xamarin.Forms gives you no way to have child views without using a subclass of Layout. What I want is the ability to write a subclass of View that can have child views.

Right now I can do something like this:

public class myParentView : Layout<View>
{
    private Rectangle getBox(View child)
    {
        // whatever
    }

    protected override void LayoutChildren (double x, double y, double width, double height)
    {
        foreach (View v in Children) {
            Rectangle r = getBox (v);
            v.Layout (r);
        }
    }
}

But for some situations, the Layout system is just way too much. Events propagating up and down. Height and width requests. Real estate negotiations between every child view and its parent.

And for the sake of making sure the layout always gets updated when it needs to be, the code triggers a layout cycle in response to, well, almost everything.

Add or remove a child view? Relayout everything.

Here's my favorite: Put a Label in a Frame in a StackLayout in another StackLayout in a ScrollView in a Whatever. The details don't matter. Just build a hierarchy several levels deep with a Label down near the bottom. Then change the Text property of that label from "foo" to "bar". This triggers a layout cycle, because the label might want to request more size, and the parent might want to care about that request, so the whole negotiation starts over.

For extra fun, bind that Label's Text property to something that is also bound to the Text property of an Entry. Now you can trigger a complete relayout on every keystroke.
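
To make the scenario in the last two paragraphs concrete, here is a minimal sketch. The page class and its name are my own illustration, not code from the post; the controls are standard Xamarin.Forms types:

using Xamarin.Forms;

// Hypothetical page reproducing the relayout-per-keystroke scenario.
public class BindingDemoPage : ContentPage
{
    public BindingDemoPage()
    {
        var label = new Label { Text = "foo" };
        var entry = new Entry();

        // Bind the Label's Text to the Entry's Text: every keystroke in
        // the Entry now changes label.Text...
        label.BindingContext = entry;
        label.SetBinding(Label.TextProperty, "Text");

        // ...and because the label sits several levels deep, each change
        // invalidates measurements all the way up the hierarchy and
        // triggers a complete layout cycle.
        Content = new ScrollView {
            Content = new StackLayout {
                Children = {
                    new StackLayout {
                        Children = { new Frame { Content = label } }
                    },
                    entry
                }
            }
        };
    }
}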

Xamarin.Forms currently suffers from a problem that is very typical for cross-platform UI toolkits at the toddler stage: It is constantly triggering layout updates "just in case" one might be needed. In general, it is better to have your forums filled with people saying "layout is slow" rather than people saying "I changed something and layout didn't update".

We need a lower-level mechanism

I would like to rewrite my example above to look something like this:

public class myParentView : View
{
    private List<View> myChildren = new List<View>();

    private Rectangle getBox(View child)
    {
        // whatever
    }

    public void LayoutMyChildren()
    {
        foreach (View v in myChildren) {
            Rectangle r = getBox (v);
            v.Layout (r);
        }
    }
}
  • Instead of subclassing Layout, I have subclassed View.

  • Since I no longer inherit a Children collection, I have defined my own.

  • Whereas previously I did an override of the abstract LayoutChildren() method, now I just provide a method called LayoutMyChildren().

  • Whenever I know I need to arrange all my child views, instead of triggering a layout cycle, I call LayoutMyChildren().

  • If I only need to arrange some of my child views, I can write a method that does less work.

  • Not shown, but I probably need to override OnSizeAllocated() and call LayoutMyChildren(). (A sketch of that override follows this list.)

  • Since I own myChildren, inserts and deletes from this collection will not automatically trigger a layout cycle. If anything needs to be updated when my set of child views changes, I have to do that myself.
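
The OnSizeAllocated() override mentioned in the list above might look like the following sketch. OnSizeAllocated(double, double) is the standard Xamarin.Forms virtual; wiring it to LayoutMyChildren() is my assumption about what myParentView needs:

// Inside myParentView: re-run my own layout pass whenever my size changes.
protected override void OnSizeAllocated(double width, double height)
{
    base.OnSizeAllocated(width, height);
    LayoutMyChildren();
}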

In other words, I don't want any help laying things out. I just want the ability to have child views, and I don't want anybody meddling with my right to parent those children however I see fit.

A full implementation of myParentView is going to be rather complicated. But it's going to be fast, because myParentView has all the knowledge it needs to allow it to make smart decisions and avoid doing things that don't need to be done.

The design of child views in Xamarin.Forms

The concept of "child views" should be distinct from the concept of "deciding where those child views should be placed". Thinking over the various UI toolkits I have used, the separation of these two concepts is a common pattern.

But the example above doesn't work in Xamarin.Forms 1.2. My child views never get a renderer assigned to them, so they never show up. The element hierarchy on the PCL side is fine. It just never gets mirrored on the platform side.

When I explained this to a coworker, I said, "The whole idea almost works. In fact, it all works, right up until the end, when it doesn't work at all."

Interestingly, Xamarin.Forms is very close to the design I want it to have. In researching this problem, I expected to find the child view management stuff in Xamarin.Forms.Layout, but it's just not there. As it turns out, the ability to have child views is actually not specific to Xamarin.Forms.Layout. Rather, this concept is implemented in Xamarin.Forms.Element (right where it belongs, IMNSHO).

The Element class has three important members. It does not actually manage a collection of child views. Rather, it exposes a way for a subclass to do so. OnChildAdded() and OnChildRemoved() need to be called when the child views collection has changed. These methods set the Parent property of the new child view, but more importantly, they trigger an event. The platform code is listening to this event (in VisualElementPackager), and that is where child views get their renderer assigned.

So what I need to do is call OnChildAdded() whenever I add something to myChildren. Now my layout wannabe class looks like this:

public class myParentView : View
{
    private List<View> myChildren = new List<View>();

    private Rectangle getBox(View child)
    {
        // whatever
    }

    private void addChild(View child)
    {
        myChildren.Add(child);
        OnChildAdded(child);
    }

    public void LayoutMyChildren()
    {
        foreach (View v in myChildren) {
            Rectangle r = getBox (v);
            v.Layout (r);
        }
    }
}

And this almost works, but not quite. As I said above, Element has three important members which are used to communicate with the renderer about child views. The first two (discussed above) are OnChildAdded() and OnChildRemoved(). The third one is a property called LogicalChildren. This property is, well, the collection of child views. The two OnChild* methods are just there to keep things up to date, but LogicalChildren is the starting point.

So what I need to do in myParentView is just override the LogicalChildren property, which is virtual. No problem, right?

Here's the problem: LogicalChildren is marked "internal", so a subclass cannot override it unless it was written by Xamarin.

My request

Broadly speaking, I am asking the Xamarin.Forms team for the ability to have a View subclass which can have and manage its own child views without being a subclass of Layout.

My suggested implementation of that idea is to just change Element.LogicalChildren from "internal" to "protected internal".
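
With that change, the override in myParentView could look something like this sketch. It assumes LogicalChildren is a virtual read-only collection of Element (and that, per the usual C# rules, an override in another assembly declares it protected); it will not compile against shipping Xamarin.Forms 1.2, where the property is still internal:

// Sketch only (assumes using System.Collections.ObjectModel at the top of
// the file): expose my own children as the element's logical children, so
// the platform side can assign renderers to them.
protected override ReadOnlyCollection<Element> LogicalChildren
{
    get { return new ReadOnlyCollection<Element>(myChildren.ConvertAll(v => (Element)v)); }
}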

And I've spent some time trying to prove that my suggestion would work. Purely as a proof of concept, I used Mono.Cecil to create a hacked-up copy of Xamarin.Forms.Core.dll which has my suggested change:

#r "Mono.Cecil.dll"

open Mono.Cecil

let a = AssemblyDefinition.ReadAssembly("Xamarin.Forms.Core.dll")
for t in a.MainModule.Types do
    if t.FullName = "Xamarin.Forms.Element" then
        for p in t.Properties do
            if p.Name = "LogicalChildren" then
                let m = p.GetMethod
                m.IsFamilyOrAssembly <- true
a.Write("Better.Xamarin.Forms.Core.dll")

And then I wrote a myParentView class which has an override for LogicalChildren. The result is a huge performance increase for my particular use case, because myParentView doesn't need all the general-purpose stuff that the Layout subclasses provide. My contention is that there are lots of similar use cases which can benefit from this approach.

So that's my suggested change, but I don't actually care how this gets done. If the Xamarin.Forms team were to ship 1.3 with my "child views without layouts" concept exposed in some other fashion, I would be just as happy.

Moving forward

The Xamarin.Forms.Layout classes still need to get faster. And they will.

But regardless of when this happens, the concept of "child views" deserves to exist on its own.

 

Stuff The Internet Says On Scalability For November 14th, 2014

Hey, it's HighScalability time:


Spectacular rendering of the solar system to scale. (Roberto Ziche)

 

  • 700: number of low-orbit satellites in a sidecar cheap internet; 130 terabytes: AdRoll ad data processed daily; 15 billion: daily Weather Channel forecasts; 1 million: AWS customers
  • Quotable Quotes:
    • @benkepes: Each AWS data center has typically 50k to 80k physical servers. Up to 102Tbps provisioned networking capacity #reinvent
    • @scottvdp: AWS just got rid of infrastructure behind any application tier. Lambda for async distributed events, container engine for everything else.
    • @wif: AWS is handling 7 trillion DynamoDB requests per month in a single region. 4x over last year. same jitter. #reinvent
    • Philae: If my path was off by even half a degree the humans would have had to abort the mission.
    • Al Aho: Well, you can get a stack of stacks, basically. And the nested stack automaton has sort of an efficient way of implementing the stack of stacks, and you can think of it as sort of almost like a cactus. That's why some people are calling it cactus automata, at the time.
    • Gilt: Someone spent $30K on an Acura & LA travel package on their iPhone.
    • @cloudpundit: Gist of Jassy's #reinvent remarks: Are you an enterprise vendor? Do you have a high-margin product/service? AWS is probably coming for you.
    • @mappingbabel: Things coming out from the AWS #reinvent analyst summit - Amazon has minimum 3 million servers & lights up own globe-spanning fibre.
    • @cloudpundit: James Hamilton says mobile hardware design patterns are future of servers. Single-chip offerings, semiconductor-level innovation. #reinvent
    • @rightscale: RT @owenrog: AWS builds its own electricity substations simply because the power companies can't build fast enough to meet demand #reInvent
    • @timanderson: New C4 instances #reinvent up to 36 cores up to 16TB SSD
    • @holly_cummins: L1 cache is a beer in hand, L3 is fridge, main memory is walking to the store, disk access is flying to another country for beer. 
    • @ericlaw: Sample HTTP compression ratios observed on @Facebook: -1300%, -34.5%, -14.7%, -25.4%. ProTip: Don't compress two byte responses. #webperf
    • @JefClaes: It's not the concept that defines the invariants but the invariants that define the concept.

  • It's hard to imagine that just a few short years ago AWS did not exist. Now it has 1 million customers, runs 70 million hours of software per month, and its AWS re:Invent conference has a robust 13,500 attendees. Re:Invent shows that if Amazon is going to be disrupted, a lack of innovation will not be the cause. The key talking point is that AWS is not just IaaS anymore, AWS is a Platform. The underlying subtext is lock-in. Minecraft-like, Amazon is building out their platform brick by brick. Like GCE, AWS announced a Docker-based container service. Intel designed a special new cloud processor for AWS, which will be available in a new C4 instance type. There's Aurora, a bigger, badder MySQL. To the joy of many, EBS is getting bigger and faster. The world is getting more reactive: S3 is emitting events. With less fanfare comes an impressive suite of code deployment and management tools. There's also a key management service, a configuration manager, and a service catalog. Most provocative is Lambda, or PaaS++, which as the name suggests is the ability to call a function in response to events. Big deal? It could be, though it is quite limited currently, supporting only Node.js and a few event types. You can't, for example, terminate a REST request. What it could grow into is promising: a complete abstraction layer over the cloud, so any sense of machines and locations is removed. They aren't the first, but that hardly matters.

  • It's not a history of the Civil War. Or WWI. Or the Dust Bowl. But An Oral History of Unix. Yes, that much time has passed. Interviewed are many names you'll recognize and some you've probably never heard of. A fascinating window into the deep, deep past.

  • No surprise at all. Plants talk to each other using an internet of fungus: We suggest that tomato plants can 'eavesdrop' on defense responses and increase their disease resistance against potential pathogen...the phantom orchid, get the carbon they need from nearby trees, via the mycelia of fungi that both are connected to...Other orchids only steal when it suits them. These "mixotrophs" can carry out photosynthesis, but they also "steal" carbon from other plants...The fungal internet exemplifies one of the great lessons of ecology: seemingly separate organisms are often connected, and may depend on each other. 

  • How do you persist a 200 thousand messages/second data stream while guaranteeing data availability and redundancy? Tadas Vilkeliskis shows you how with Apache Kafka. It excels at high write rates, compression saves a lot of network traffic, and a custom C++ HTTP-to-Kafka service keeps up with the load; a minimal producer sketch follows below.
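
To make the write path concrete, here is a minimal producer sketch in C++ against the librdkafka C API. This is not the code from the linked article; the broker address, topic name, and payload are placeholder assumptions.

#include <librdkafka/rdkafka.h>
#include <cstring>

int main() {
  char errstr[512];

  // Configure the producer; compression trades CPU for network traffic.
  rd_kafka_conf_t *conf = rd_kafka_conf_new();
  rd_kafka_conf_set(conf, "compression.codec", "snappy", errstr, sizeof(errstr));

  // rd_kafka_new takes ownership of conf.
  rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
  rd_kafka_brokers_add(rk, "localhost:9092");                     // placeholder broker
  rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, "events", NULL); // placeholder topic

  // Asynchronous produce; librdkafka batches and compresses internally.
  const char *msg = "{\"event\":\"click\"}";
  rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                   (void *)msg, strlen(msg), NULL, 0, NULL);

  // Serve delivery callbacks and wait for the send queue to drain.
  while (rd_kafka_outq_len(rk) > 0) rd_kafka_poll(rk, 100);

  rd_kafka_topic_destroy(rkt);
  rd_kafka_destroy(rk);
  return 0;
}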

Don't miss all that the Internet has to say on Scalability: click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...

Categories: Architecture

DON’T PANIC! (For Everyone Who Fears Failure)

NOOP.NL - Jurgen Appelo - Fri, 11/14/2014 - 09:45
Don't Panic!

It happened just after I told my friend Lisette never to panic. When something goes wrong during your presentation, your workshop, your travels, or in any other endeavor, DO NOT PANIC! Your panic is most likely to cause the thing that you fear most.

The post DON’T PANIC! (For Everyone Who Fears Failure) appeared first on NOOP.NL.

Categories: Project Management

A Simple Tool: A pep talk

It takes a good attitude to be a good leader.

As a leader of people, it is easy to become weary to your bones after trying to convince a reticent organization, team or person to become a butterfly when all they want to do is stay in their nice, safe cocoon. The forces lined up against you can be daunting. Don’t work against yourself by making your attitude part of the problem. Your attitude is one of your primary tools for lending credibility to your message and convincing people to engage with and befriend you. There are three attributes you need to start managing immediately: negativism, sarcasm and partisanship.

Negativism is a habitual attitude of skepticism or resistance to the suggestions, orders or instructions of others. This includes resistance to change and skepticism that change is warranted or even possible. Leading change requires believing that you can succeed, both to motivate yourself and to motivate those you are trying to influence. Without a belief that you can succeed, it will be difficult to get up in the morning and impossible to motivate others. I must admit that I sometimes find it easy to confuse being highly rational with negativism. In the wee hours of the night, make sure you evaluate which side of the line you are on, and make corrections if you have strayed.

Behavior such as sarcasm might be acceptable amongst friends, but its impact is far less predictable when the conversation involves people who do not know you or who have a different cultural filter. How many times have you heard, “Hey, can’t they take a joke?” The answer is: maybe not, if the joke is only funny from your point of view. Frankly, just dropping sarcasm from your portfolio of communication techniques might be the best idea.

Another critical mistake that can be traced back to attitude is the need to have an enemy to strike against. Creating a “we/they” environment builds barriers between groups that make finding common ground more difficult. There are rarely benefits when one side is forced to capitulate to another (it is difficult to compromise with someone you view as the enemy). You must recognize that, as a leader and a negotiator, your goal is to find the best solution for your organization.

Negativism, sarcasm and partisanship will minimize your effectiveness as a leader in the long run and will add to the burden you need to shoulder in order to make change happen. Leading change is not an easy job. Don't make it harder than it needs to be. Your attitude can be either a simple, powerful tool or a concrete block to tow behind you.


Categories: Process Management

Accelerate business growth with Startup Launch & Launchpad Online

Google Code Blog - Thu, 11/13/2014 - 21:08
Last June, we launched Startup Launch, a program to help tech startups at all stages become successful on the Google Developers platform and open-source technologies. So far, we’ve helped more than 3,000 entrepreneurs transform their ideas into burgeoning websites, services, apps, and products in 150 countries. Hear some of their stories from the Czech Republic, Poland, Kenya, Brazil and Mexico in our Global Spotlight playlist.

Launchpad Online

Today, we’re bringing the program to a wider audience with a new web series called Launchpad Online, to share knowledge based on questions we’ve had from entrepreneurs using our products. The series kicks off with technical instruction from Developer Advocate Wesley Chun on getting started with Google developer tools and APIs, and over time will expand to include topics covering design and distribution.
This show accompanies our established "Root Access" and “How I:” series, which bring perspective and best practices to developers and entrepreneurs on a weekly basis.

Launchpad Events

Launchpad Online follows the curriculum set out by our ongoing Launchpad events, week-long bootcamps for startups in select cities. In 2014, over 200 startups participated in events in Tel Aviv, London, Rio de Janeiro, Berlin, and Paris, which consisted of workshops on product strategy, UX/UI, engineering, digital marketing and presentation skills. Check out our videos covering recent events in Paris and Berlin here.

You’re invited

In addition to events and online content, the program offers product credits to participants, from $500 of Cloud Platform and AdWords credits for startups that are just starting off, up to Google’s Cloud Platform startup offer of $100,000 USD in Cloud Credit offerings for startups ready to scale their business. You can apply for these benefits, and to be selected for future Launchpad events, at g.co/launch. Startup Launch runs in conjunction with our Google Business Groups and Google Developer Groups on the ground. Together, these communities have hosted more than 5,000 events in 543 cities and 104 countries this year, helping startups connect with other developers and entrepreneurs. Attend an upcoming business or developer event near you. We hope to see you there!

Posted by Amir Shevat, Global Startup Outreach Program Manager, Google Developer Relations
Categories: Programming

Geometry Math Library for C++ Game Developers: MathFu

Google Code Blog - Thu, 11/13/2014 - 18:45
Today we're announcing the 1.0 release of MathFu, a cross-platform geometry math library for C++ game developers. MathFu was developed primarily for games and focuses on simplicity and efficiency.

It provides a suite of vector, matrix and quaternion classes to perform basic geometry suitable for game developers. This functionality can be used to construct geometry for graphics libraries like OpenGL or perform calculations for animation or physics systems.
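
As a quick feel for the API, here is a minimal sketch using the vector class; the header path and exact signatures are taken from the MathFu documentation as best we can tell, so treat them as assumptions rather than a definitive reference.

#include "mathfu/vector.h"
#include <cstdio>

int main() {
  // Two 3D float vectors.
  mathfu::Vector<float, 3> a(1.0f, 2.0f, 3.0f);
  mathfu::Vector<float, 3> b(4.0f, 5.0f, 6.0f);

  // Basic geometry: dot product, cross product, normalization.
  float dot = mathfu::Vector<float, 3>::DotProduct(a, b);
  mathfu::Vector<float, 3> cross = mathfu::Vector<float, 3>::CrossProduct(a, b);
  mathfu::Vector<float, 3> unit = a.Normalized();

  std::printf("dot=%f cross=(%f, %f, %f) |unit|=%f\n",
              dot, cross[0], cross[1], cross[2], unit.Length());
  return 0;
}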

The library is written in portable C++ with SIMD compiler intrinsics and has been tested on Android, Linux, OS X and Windows.

You can download the latest open source release from our GitHub page. We invite you to contribute to the project and join our discussion list!

By Stewart Miles, Fun Propulsion Labs at Google*

*Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.
Categories: Programming

INVEST “Slider”

Software Requirements Blog - Seilevel.com - Thu, 11/13/2014 - 17:00
In the Scrum view of the world, the Product Owner (PO) has accountability for the Product Backlog.  This includes the responsibility for the Product Backlog items to be clearly expressed, and be in the right order.  I’ve had the opportunity to work with several companies where the PO was located remotely from the Scrum teams […]
Categories: Requirements

The Obstacle Is The Way

Making the Complex Simple - John Sonmez - Thu, 11/13/2014 - 16:00

In this video, I review the book “The Obstacle Is the Way” by Ryan Holiday.

The post The Obstacle Is The Way appeared first on Simple Programmer.

Categories: Programming

Speaking in Australia - YOW! 2014

Coding the Architecture - Simon Brown - Thu, 11/13/2014 - 10:37

For my final trip of the year, I'm heading to Australia at the end of this month for the YOW! 2014 series of conferences. I'll be presenting Agility and the essence of software architecture in Melbourne, Brisbane and Sydney. Plus I'll be running my Simple sketches for diagramming your software architecture workshop in Melbourne and Sydney. I can't wait; see you there!

Categories: Architecture