Software Development Blogs: Programming, Software Testing, Agile, Project Management



Feed aggregator

Managing for Happiness FAQ

NOOP.NL - Jurgen Appelo - Thu, 07/14/2016 - 13:55
Managing for Happiness cover (front)

In June 2016, John Wiley & Sons released my “new” book Managing for Happiness, which is a re-release of last year’s #Workout book. Some people asked me questions about that.

Why do you re-release the #Workout book with a publisher?

My aim is to be a full-time writer. That means I must sell more books so that I can earn a full income from writing. (Right now, I don’t.) A global publisher can help me with that. A second reason is that I want to reach as many people as possible with my message of better management with fewer managers. A third reason is that wider availability of the book (in bookstores and libraries) is not only good for new readers but also for my reputation as a public speaker.

Categories: Project Management

Using JMESPath queries with the AWS CLI

Agile Testing - Grig Gheorghiu - Wed, 07/13/2016 - 22:05
The AWS CLI, based on the boto3 Python library, is the recommended way of automating interactions with AWS. In this post I'll show some examples of more advanced AWS CLI usage using the query mechanism based on the JMESPath JSON query language.

Installing the AWS CLI tools is straightforward. On Ubuntu via apt-get:

# apt-get install awscli

Or via pip:

# apt-get install python-pip
# pip install awscli

The next step is to configure awscli by specifying the AWS Access Key ID and AWS Secret Access Key, as well as the default region and output format:

# aws configure
AWS Access Key ID: your-aws-access-key-id
AWS Secret Access Key: your-aws-secret-access-key
Default region name [us-west-2]: us-west-2
Default output format [None]: json
The configure command creates a ~/.aws directory containing two files: config and credentials.
You can specify more than one pair of AWS keys by creating profiles in these files. For example, in ~/.aws/credentials you can have:
[profile1]
aws_access_key_id = key1
aws_secret_access_key = secretkey1

[profile2]
aws_access_key_id = key2
aws_secret_access_key = secretkey2

(Note that in the credentials file the section headers are the bare profile names; the "profile" prefix is only used in the config file.) In ~/.aws/config you can have:

[profile profile1]
region = us-west-2

[profile profile2]
region = us-east-1
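As a side note, these profile files are plain INI, so a profile lookup is easy to sketch with Python's standard-library configparser. The profile names and key values below are the hypothetical ones from the example above:

```python
# A minimal sketch: AWS profile files are plain INI, so Python's
# standard-library configparser can read them. The profile names and
# key values below are the hypothetical ones from the example above;
# the CLI expects bare profile names and lowercase keys in
# ~/.aws/credentials.
import configparser

credentials = """
[profile1]
aws_access_key_id = key1
aws_secret_access_key = secretkey1

[profile2]
aws_access_key_id = key2
aws_secret_access_key = secretkey2
"""

parser = configparser.ConfigParser()
parser.read_string(credentials)

# Look up the keys for a given profile, much as the CLI does with --profile:
profile = parser["profile2"]
print(profile["aws_access_key_id"])  # key2
```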
You can specify a given profile when you run awscli:
# aws --profile profile1 <command>
Let's assume you want to write a script using awscli that deletes EBS snapshots older than N days. Let's go through this one step at a time.
Here's how you can list all snapshots owned by you:
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[]"
Note the use of the --query option. It takes a parameter representing a JMESPath JSON query string. It's not trivial to figure out how to build these query strings, and I advise you to spend some time reading over the JMESPath tutorial and JMESPath examples.
In the example above, the query string is simply "Snapshots[]", which represents all the snapshots that are present in the AWS account associated with the profile profile1. The default output in our case is JSON, but you can specify --output text at the aws command line if you want to see each snapshot on its own line of text.
Let's assume that when you create the snapshots, you specify a description that contains PROD or STAGE for EBS volumes attached to production and stage EC2 instances respectively. If you want to only display snapshots containing the string PROD, you would do:
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[?contains(Description, \`PROD\`) == \`true\`]" --output text
The Snapshots[] array now contains a condition represented by the question mark ?. The condition uses the contains() function included in the JMESPath specification, and is applied against the Description field of each object in the Snapshots[] array, verifying that it contains the string PROD.  Note the use of backquotes surrounding the strings PROD and true in the condition. I spent some quality time troubleshooting my queries when I used single or double quotes, to no avail. The backquotes also need to be escaped so that the shell doesn't interpret them as commands to be executed.
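To make the semantics concrete, here is the same contains() filter mirrored in plain Python over a few snapshot records (the IDs and descriptions are invented for illustration):

```python
# Mirroring the JMESPath query
#   Snapshots[?contains(Description, `PROD`) == `true`]
# in plain Python, over made-up snapshot records.
snapshots = [
    {"SnapshotId": "snap-1111", "Description": "PROD db backup"},
    {"SnapshotId": "snap-2222", "Description": "STAGE db backup"},
    {"SnapshotId": "snap-3333", "Description": "weekly PROD image"},
]

# contains(Description, `PROD`) keeps only the PROD snapshots:
prod_snapshots = [s for s in snapshots if "PROD" in s["Description"]]
print([s["SnapshotId"] for s in prod_snapshots])  # ['snap-1111', 'snap-3333']
```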
To restrict the PROD snapshots even further, to the ones older than say 7 days ago, you can do something like this:
DAYS=7
TARGET_DATE=`date --date="$DAYS day ago" +%Y-%m-%d`
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[?contains(Description, \`PROD\`) == \`true\`]|[?StartTime < \`$TARGET_DATE\`]" --output text
Here I used the StartTime field of the objects in the Snapshots[] array and compared it against the target date. In this case, string comparison is good enough for the query to work.
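The string comparison is good enough because ISO 8601 timestamps sort lexicographically; a quick plain-Python sketch (the StartTime values are invented for illustration):

```python
# ISO 8601 timestamps sort lexicographically, which is why the plain
# string comparison `StartTime < $TARGET_DATE` works in the query above.
from datetime import date, timedelta

days = 7
target_date = (date.today() - timedelta(days=days)).isoformat()  # 'YYYY-MM-DD'

# Invented StartTime values:
old = "2016-01-01T10:00:00.000Z"
recent = "2016-07-07T10:00:00.000Z"
print(old < "2016-07-06")     # True: the old snapshot sorts before the target date
print(recent < "2016-07-06")  # False: the recent snapshot does not
```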
In all the examples above, the aws command returned a subset of the Snapshots[] array and displayed all fields for each object in the array. If you wanted to display specific fields, let's say the ID, the start time and the description of each snapshot, you would run:
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[?contains(Description, \`PROD\`) == \`true\`]|[?StartTime < \`$TARGET_DATE\`].[SnapshotId,StartTime,Description]" --output text
To delete old snapshots, you can use the aws ec2 delete-snapshot command, which needs a snapshot ID as a parameter. You could use the command above to list only the SnapshotId for snapshots older than N days, then for each of these IDs, run something like this:
# aws ec2 delete-snapshot --profile profile1 --snapshot-id $id
All this is well and good when you run these commands interactively at the shell. However, I had no luck running them out of cron. The backquotes resulted in boto3 syntax errors. I had to do it the hard way, by listing all snapshots first, then going all in with sed and awk:
aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --output=text --query "Snapshots[].[SnapshotId,StartTime,Description]"  > $TMP_SNAPS
DAYS=7
TARGET_DATE=`date --date="$DAYS day ago" +%Y-%m-%d`
cat $TMP_SNAPS | grep PROD | sed 's/T[0-9][0-9]:[0-9][0-9]:[0-9][0-9].000Z//' | awk -v target_date="$TARGET_DATE" '{if ($2 < target_date){print}}' > $TMP_PROD_SNAPS
for sid in `awk '{print $1}' $TMP_PROD_SNAPS`; do
  echo Deleting PROD snapshot $sid
  aws ec2 delete-snapshot --profile $PROFILE --region $REGION --snapshot-id $sid
done
Ugly, but it works out of cron. Hope it helps somebody out there.
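For what it's worth, that grep/sed/awk pipeline could also be sketched in plain Python. The listing lines below are invented, with the tab-separated SnapshotId, StartTime and Description fields that the --output text query produces:

```python
# A plain-Python sketch of the grep/sed/awk pipeline: keep PROD snapshots
# whose StartTime date (time portion stripped) is older than the target
# date. The listing lines are invented for illustration.
listing = """\
snap-1111\t2016-01-01T10:00:00.000Z\tPROD db backup
snap-2222\t2016-07-01T10:00:00.000Z\tSTAGE db backup
snap-3333\t2016-02-15T10:00:00.000Z\tweekly PROD image
"""

target_date = "2016-07-06"  # in the script this comes from `date --date="7 day ago"`

old_prod_ids = []
for line in listing.splitlines():
    snapshot_id, start_time, description = line.split("\t")
    start_date = start_time.split("T")[0]  # strip the time portion, like the sed step
    if "PROD" in description and start_date < target_date:
        old_prod_ids.append(snapshot_id)

print(old_prod_ids)  # ['snap-1111', 'snap-3333']
# Each of these IDs would then be passed to `aws ec2 delete-snapshot --snapshot-id ...`
```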

July 14th 2016: I initially forgot to include this very good blog post from Joseph Lawson on advanced JMESPath usage with the AWS CLI.

Why Amazon Retail Went to a Service Oriented Architecture

When Lee Atchison arrived at Amazon, Amazon was in the process of moving from a large monolithic application to a Service Oriented Architecture.

Lee talks about this evolution in an interesting interview on Software Engineering Daily: Scalable Architecture with Lee Atchison, about Lee's new book: Architecting for Scale: High Availability for Your Growing Applications.

This is a topic Adrian Cockcroft has talked a lot about in relation to his work at Netflix, but it's a powerful experience to hear Lee talk about how Amazon made the transition, knowing what Amazon would later become. 

Amazon was running into the problems of success. Not so much from a scaling to handle the requests perspective, but they were suffering from the problem of scaling the number of engineers working in the same code base.

At the time their philosophy was based on the Two Pizza team. A small group owns a particular piece of functionality. The problem is it doesn’t work to have hundreds of pizza teams working on the same code base. It became very difficult to innovate and add new features. It even became hard to build the application, pass the test suites, and deploy the software.

The solution: move to a Service Oriented Architecture (not microservices).

Organizing around services allowed individual teams to truly own the code base, the support responsibility, and top to bottom responsibility for the functionality.

The result: a dramatic increase in innovation and the ability to grow. After a while the Amazon retail site grew with a constant stream of new capabilities. Maybe too many :-)

The process requires a culture shift, really more of an ownership shift, from being part of a larger group to being an entity of its own that has responsibilities outside of its group as well as responsibilities inside the group.

While the strategy of consciously exploiting team organization as a means of speeding up product development and encouraging innovation is not a new idea now, in the early 2000s it would have been one ballsy move.

Related Articles 
Categories: Architecture

Pairing, Swarming, and Mobbing

A colleague asked about mobbing last week on Twitter. Here’s the short answer, including pairing so you can see everything in one place:

  • Swarming has a WIP (work in progress) limit of 1, where the team collaborates to get the one item to done.
  • Mobbing has a WIP limit of 1 for an entire team with one keyboard.
  • Pairing is two people and one keyboard, often with a WIP limit of 1.

A WIP limit of one means the team or pair works on just one story/feature at a time. Sometimes, that feature is large, as in the team that worked as a swarm on very large stories. (See the post Product Owners and Learning, Part 2 for how one team finishes very large features over a couple of days.)

Here are some examples of what I’ve seen on projects.

Team 1 swarms. They get together as a team and discuss the item (WIP limit of 1) with the product owner. They talk among themselves for a couple of minutes to decide on their “plan of attack” (their words) and then scatter. The testers develop the automated tests and take notes on the exploratory tests. The developers work on the code. 

Aside: On another team I know, the UI, platform, and middleware devs get together to discuss for a couple of minutes and then write code together, each on their own computer. (They collaborate but do not pair/mob together.) On another team, those people work together, on one keyboard for the platform/middleware work. The UI person works alone, checking in when she is done. Everyone checks their work into the code base, as they complete the work. Teams develop their own “mobbing” as sub-teams, which works, too.

Team 1 has an agreement to return every 25 minutes to check in with each other. They do this with this kind of a report: “I’m done with this piece. I need help for this next piece. Who’s available?” Or “I’m done as much as I can be for now. Anyone need another pair of eyes?” (Note: you might want more or less than 25 minutes. They chose that time because they have smallish stories and want to make sure they maintain a reasonable pace/momentum.)

As people finish their work, they help other people in whatever way they can. Early in Team 1’s agile days, they had a ton of automated test “technical debt.” (I would call it insufficient test automation, but whatever term you like is fine.) The developers finished their stories and helped the testers bootstrap their test automation.

Team 2 mobs. The entire team sits around a table with one keyboard. The monitor output goes to a projector so everyone can see what the person typing is doing. This team has a guideline that they trade off keyboarding every 15 minutes. (You might like a slightly longer or slightly shorter time. In my experience, shorter times are better, but maybe that’s just me.) Sometimes, the tester leads, developing automated tests. Sometimes, the developer leads. This team often uses TDD, so the tests guide their development. 

Team 2 checks in at least as often as they change keyboarders. Sometimes, more often.

Notice that the work in progress (WIP) is small, one story. In both swarming and mobbing, the teams work on one story. That’s it. Their focus is doing the work that gets that story to done.

Pairing is one keyboard, one machine, two pairs of eyes. The keyboarder is the driver; the watcher is the navigator. You get continuous review on the work product as you proceed. I often ask what I consider “stupid” questions when I am the navigator. Sometimes, the questions aren’t stupid—they prompt us as a pair to understand the item better. Sometimes, they are. I’m okay with that. I find that when I pair, I learn a ton about the domain.

Here’s the value of swarming or mobbing:

  • The team limits their WIP, which helps them focus on getting work to done.
  • The team can learn together in swarming and does learn together in mobbing.
  • The team collaborates, so they reinforce their teamwork. They learn who can do what, and who learns what.
  • The team has multiple eyes on small chunks of work, so they get the benefit of review.

If you work feature-by-feature, I urge you to consider swarming or mobbing. (Yes, you can swarm or mob in any life cycle, as long as you work feature-by-feature.) Either will help you move stories to done faster because of the team focus on that one story.

I wrote a post about pairing and swarming and how they will help your projects a couple of years ago.

Categories: Project Management

Cracking the Code Interview

From the Editor of Methods & Tools - Wed, 07/13/2016 - 14:03
Software development interviews are a different breed from other interviews and, as such, require specialized skills and techniques. This talk will teach you how to prepare for coding interviews, what top companies like Google, Amazon, and Microsoft really look for, and how to tackle the toughest programming and algorithm problems. This is not a fluffy […]

Five Reasons Quality Is Important


Software quality is a nuanced concept; understanding it requires a framework that addresses the functional and structural aspects of the software as well as the process of software delivery.  Each of these three aspects contributes to the quality of the software being delivered and therefore has to be observed and analyzed. Measurement of each aspect is a key tool for understanding whether we are delivering a quality product and whether our efforts to improve quality are having the intended impact. Balancing the effort required to measure quality requires understanding the reasons for measuring it.  Five reasons quality is important to measure:

  1. Safety – Poor quality in software can be hazardous to human life and safety. Quality problems can impact the functionality of the software products we are building. Jon Quigley discussed the impact of configuration defects that affected safety in SPaMCAST 346. 
  2. Cost – Simply put, quality issues cost money to fix.  Whether you believe that a defect is 100x more costly to fix late in the development cycle or not, doing work over because it is defective does not deliver more value than doing it right once.
  3. Customer Satisfaction (internal) – Poor quality leads stakeholders to look for someone else to do your job, or perhaps to ship your job and all your friends’ jobs somewhere else.  Recognize that the stakeholders’ experience as the software is being developed, tested and implemented is just as critical as the raw number of defects.     
  4. Customer Satisfaction (external) – Software products that don’t work, are hard to use (when they don’t need to be), or are buggy tend not to last long in the marketplace.  Remember that in today’s social media driven world every complaint that gets online has a ripple effect.  Our goal should be to avoid problems that can be avoided.
  5. Future Value – Avoiding quality problems increases the amount of time available for the next project or the next set of features.  Increasing quality also improves team morale, and improved team morale is directly correlated with increased productivity (which will increase customer satisfaction and reduce cost).  

In October 2015 Forbes predicted the Top Ten CIO Concerns for 2016; the list is littered with words and phrases like ‘alignment with the business’, ‘time-to-market’, ‘productivity’, and ‘efficiency’. While I am cherry-picking a bit, the broad definition of quality that includes functional, structural and process aspects is ABSOLUTELY at the core of the value quality delivers.  Because quality is so important, measuring quality has become a passionate topic in many organizations.  There are lots of reasons to measure quality, but one major reason is to find a lever to improve it.  The problem with the passion for measuring quality is that there is a huge number of measures and metrics that people have tried, learned from, and possibly abandoned.  Almost every measure and metric used to understand quality meets a specific need, so deciding which set of measures to use requires matching business needs with your stomach for measurement.  We will begin that discussion next. 

May I ask a favor?   I am pitching a presentation and exercise for Agile DC 2016.  Can you click the link, read the abstract and if you would like me to do the session “like” (click the heart).  The link is:


Note:  We will circle back to complete the discussion of how Agile fares in a world full of moral hazards in two weeks.

Categories: Process Management

Android Wear 2.0 Developer Preview 2

Android Developers Blog - Tue, 07/12/2016 - 18:26

Posted by Hoi Lam, Android Wear Developer Advocate

At Google I/O 2016, we launched the Android Wear 2.0 Developer Preview, which gives developers early access to the next major release of Android Wear. Since I/O, feedback from the developer community has helped us identify bugs and shape our product direction. Thank you!

Today, we are releasing the second developer preview with new functionalities and bug fixes. Prior to the consumer release, we plan to release additional updates, so please send us your feedback early and often. Please keep in mind that this preview is a work in progress, and is not yet intended for daily use.

What’s new?
  • Platform API 24 - We have incremented the Android Platform API version number to 24 to match Nougat. You can now update your Android Wear 2.0 Preview project’s compileSdkVersion to API 24, and we recommend that you also update targetSdkVersion to API 24.
  • Wearable Drawers Enhancements - We launched the wearable drawers as part of the Android Wear 2.0 Preview 1, along with UX guidelines on how to best integrate the navigation drawer and action drawer in your Android Wear app. In Preview 2, we have added additional support for wearable drawer peeking, to make it easier for users to access these drawers as they scroll. Other UI improvements include automatic peek view and navigation drawer closure and showing the first action in WearableActionDrawer’s peek view. For developers that want to make custom wearable drawers, we’ve added peek_view and drawer_content attributes to WearableDrawerView. And finally, navigation drawer contents can now be updated by calling notifyDataSetChanged.
  • Wrist Gestures: Since last year, users have been able to scroll through the notification stream via wrist gestures. We have now opened this system to developers to use within their applications. This helps improve single hand usage, for when your users need their other hand to hold onto their shopping or their kids. See the code sample below to get started with gestures in your app:
 public class MainActivity extends Activity {  
   @Override /* KeyEvent.Callback */  
   public boolean onKeyDown(int keyCode, KeyEvent event) {  
     switch (keyCode) {  
       case KeyEvent.KEYCODE_NAVIGATE_NEXT:  
         // Advance to the next item.  
         Log.d(TAG, "Next");  
         return true;  
       case KeyEvent.KEYCODE_NAVIGATE_PREVIOUS:  
         // Go back to the previous item.  
         Log.d(TAG, "Previous");  
         return true;  
     }  
     // If you did not handle, then let it be handled by the next possible element as deemed by  
     // Activity.  
     return super.onKeyDown(keyCode, event);  
   }  
 }  

Get started and give us feedback!

The Android Wear 2.0 Developer Preview includes an updated SDK with tools and system images for testing on the official Android emulator, the LG Watch Urbane 2nd Edition LTE, and the Huawei Watch.

To get started, follow these steps:

  1. Take a video tour of the Android Wear 2.0 developer preview
  2. Update to Android Studio v2.1.1 or later
  3. Visit the Android Wear 2.0 Developer Preview site for downloads and documentation
  4. Get the emulator system images through the SDK Manager or download the device system images from the developer preview downloads page
  5. Test your app with your supported device or emulator
  6. Give us feedback

We will update this developer preview over the next few months based on your feedback. The sooner we hear from you, the more we can include in the final release, so don't be shy!

Categories: Programming

SE Radio Episode 262: Software Quality with Bill Curtis

Sven Johann talks with Bill Curtis about Software Quality. They start with what software quality is and then discuss examples of systems which failed to achieve their quality goals (e.g. ObamaCare) and its consequences. They then go on with the role of architecture in the overall quality of the system and how to achieve it […]
Categories: Programming

Vertical Scaling Works for Bits and Bites

This is just too delicious a parallel to pass up. 

Here we have Google building a new four-story datacenter (Scaling Up: Google Building Four-Story Data Centers):


And here we have a new vertical farm from AeroFarms


Both have racks of consumables. One is a rack of bits, the other is a rack of bites. Both used to sprawl horizontally across huge swaths of land and now are building up. Both designs are driven by economic efficiency, extracting the most value per square foot. Both are expanding to meet increased demand. It's a strange sort of convergence.

Categories: Architecture

Python: Scraping elements relative to each other with BeautifulSoup

Mark Needham - Mon, 07/11/2016 - 07:01

Last week we hosted a Game of Thrones based intro to Cypher at the Women Who Code London meetup and in preparation had to scrape the wiki to build a dataset.

I’ve built lots of datasets this way and it’s a painless experience as long as the pages make liberal use of CSS classes and/or IDs.

Unfortunately the Game of Thrones wiki doesn’t really do that so I had to find another way to extract the data I wanted – extracting elements based on their position relative to more prominent elements on the page.

For example, I wanted to extract Arya Stark‘s allegiances which look like this on the page:


We don’t have a direct route to her allegiances but we do have an indirect path via the h3 element with the text ‘Allegiance’.

The following code gets us the ‘Allegiance’ element:

from bs4 import BeautifulSoup
file_name = "Arya_Stark"
wikia = BeautifulSoup(open("data/wikia/characters/{0}".format(file_name), "r"), "html.parser")
allegiance_element = [tag for tag in wikia.find_all('h3') if tag.text == "Allegiance"]
> print allegiance_element
[<h3 class="pi-data-label pi-secondary-font">Allegiance</h3>]

Now we need to work out the relative position of the div containing the houses. It’s inside the same parent div so I thought it’d probably be the next sibling:

next_element = allegiance_element[0].next_sibling
> print next_element

Nope. Nothing! Hmmm, wonder why:

> print next_element.name, type(next_element)
None <class 'bs4.element.NavigableString'>

Ah, empty string. Maybe it’s the one after that?

next_element = allegiance_element[0].next_sibling.next_sibling
> print next_element.contents
[<a href="/wiki/House_Stark" title="House Stark">House Stark</a>, <br/>, <a href="/wiki/Faceless_Men" title="Faceless Men">Faceless Men</a>, u' (Formerly)']

Hoorah! After this it became a case of working out how the text was structured and pulling out what I wanted.

The code I ended up with is on github if you want to recreate it yourself.

Categories: Programming

Neo4j 3.0 Drivers – Failed to save the server ID and the certificate received from the server

Mark Needham - Mon, 07/11/2016 - 06:21

I’ve been using the Neo4j Java Driver on various local databases over the past week and ran into the following certificate problem a few times:

org.neo4j.driver.v1.exceptions.ClientException: Unable to process request: General SSLEngine problem
	at org.neo4j.driver.internal.connector.socket.SocketClient.start(SocketClient.java:88)
	at org.neo4j.driver.internal.connector.socket.SocketConnection.<init>(SocketConnection.java:63)
	at org.neo4j.driver.internal.connector.socket.SocketConnector.connect(SocketConnector.java:52)
	at org.neo4j.driver.internal.pool.InternalConnectionPool.acquire(InternalConnectionPool.java:113)
	at org.neo4j.driver.internal.InternalDriver.session(InternalDriver.java:53)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1431)
	at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
	at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1214)
	at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1186)
	at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.wrap(TLSSocketChannel.java:270)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runHandshake(TLSSocketChannel.java:131)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:95)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:77)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:70)
	at org.neo4j.driver.internal.connector.socket.SocketClient$ChannelFactory.create(SocketClient.java:251)
	at org.neo4j.driver.internal.connector.socket.SocketClient.start(SocketClient.java:75)
	... 14 more
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
	at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:304)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1497)
	at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:212)
	at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:919)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:916)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1369)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runDelegatedTasks(TLSSocketChannel.java:142)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.unwrap(TLSSocketChannel.java:203)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runHandshake(TLSSocketChannel.java:127)
	... 19 more
Caused by: java.security.cert.CertificateException: Unable to connect to neo4j at `localhost:10003`, because the certificate the server uses has changed. This is a security feature to protect against man-in-the-middle attacks.
If you trust the certificate the server uses now, simply remove the line that starts with `localhost:10003` in the file `/Users/markneedham/.neo4j/known_hosts`.
The old certificate saved in file is:
The New certificate received is:
	at org.neo4j.driver.internal.connector.socket.TrustOnFirstUseTrustManager.checkServerTrusted(TrustOnFirstUseTrustManager.java:153)
	at sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:936)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1484)
	... 28 more

I got a bit lazy and just nuked the file it mentions in the error message – /Users/markneedham/.neo4j/known_hosts – which led to this error the next time I called the driver in my application:

Failed to save the server ID and the certificate received from the server to file /Users/markneedham/.neo4j/known_hosts.
Server ID: localhost:10003
Received cert:

I recreated the file with no content and tried again and it worked fine. Alternatively we can choose to turn off encryption when working with local databases and avoid the issue:

Config config = Config.build().withEncryptionLevel( Config.EncryptionLevel.NONE ).toConfig();
try ( Driver driver = GraphDatabase.driver( "bolt://localhost:7687", config );
      Session session = driver.session() )
{
    // use the driver
}
Categories: Programming

SPaMCAST 402 – Ulises Torres, Benefits of CMMI and Agile Together



Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 402 features our interview with Ulises Torres.  Ulises and I talked about how his firm, Intellego, has leveraged Agile and the CMMI to improve quality, increase customer satisfaction and business. Ulises makes a strong argument that for his company, Agile and the CMMI are better together.

Ulises Torres has over 24 years of experience in IT, either as a Developer, Team Leader, Project Manager or as an Architect, analyzing, designing, building and implementing a large number of applications, mainly with regard to retail, manufacturing, logistics/distribution and financials.  He has worked in software factories, running different projects at the same time and has formal training and proficiency in QA, Scrum, Lean Kanban, Six sigma, OOP, UML, RUP,  CMMI and PMI frameworks.

Ulises works at Intellego, a solutions development and information management services company with offices in México, Chile, Colombia, Brazil, Perú, and the USA.

Contact Information:

Email: utorres@intellego.com.mx

Web: http://www.grupointellego.com/en/the-company/

Re-Read Saturday News

This week we continue the Re-read Saturday of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 6 and 7.  Practices, Beck notes, represent endpoints that need to be pursued using “baby steps,” but they are at the core of how we practice XP. Use the link to XP Explained in the show notes when you buy your copy to read along, to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.


The next Software Process and Measurement Cast will feature our essay on Agile practices at scale. We will also have a visit from the Software Sensei, Kim Pries, and Gene Hughson will bring his Form Follows Function blog to the Software Process and Measurement Cast.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


A collection of #NoEstimates Responses

Herding Cats - Glen Alleman - Sun, 07/10/2016 - 19:40

The question of the viability of #NoEstimates starts with a simple, principles-based notion.

Can you make a non-trivial (NOT de minimis) decision in the presence of uncertainty without estimating?

The #NoEstimates advocates didn’t start there. They started with “estimates are a waste, stop doing them.” Those advocates also started from the notion that estimates are a waste for the developers, not considering that those who pay their salaries have a fiduciary obligation to know something about the cash demands and the profit resulting from a decision in the future.

The size of the “value at risk” is also the starting point for estimates. If the project is small (de minimis), meaning no one cares if we overrun significantly, then estimating is likely a waste as well. The size of the project, whether small or multi-million dollar, doesn't by itself settle the question; the decision to estimate is determined by the “value at risk,” and that is determined by those paying, NOT by those consuming. So the fact that I personally work on larger projects does not remove the principle of “value at risk.” Any client’s (internal or external) V@R may be much different than my personal experience – but it’s not our money.

Next comes the original post from Woody – “you can make decisions with No Estimates.” If we are having a principles-based conversation (which the NE advocates are not), that statement violates the principles of microeconomics. When making decisions in the presence of uncertainty (and I’m assuming all projects of interest have uncertainty), estimates are needed to make the decision. In microeconomics, those decisions are based on opportunity cost, and making the best choice for the project involves assessing the probable outcomes of those choices – which requires estimating.
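The opportunity-cost argument can be made concrete with a toy expected-value comparison. This is a minimal sketch with invented numbers, not anything from the post: choosing between two options requires estimates of cost, payoff, and probability for both, because the forgone option's value is the opportunity cost.

```python
# Toy decision under uncertainty: each option has an estimated cost and a
# set of possible payoffs with estimated probabilities. All figures invented.

def expected_value(outcomes):
    """Expected payoff of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

option_a = {"cost": 50_000, "outcomes": [(0.6, 120_000), (0.4, 30_000)]}
option_b = {"cost": 20_000, "outcomes": [(0.9, 40_000), (0.1, 10_000)]}

def net_benefit(option):
    # Expected payoff minus cost; comparing the two nets is what surfaces
    # the opportunity cost of picking one over the other.
    return expected_value(option["outcomes"]) - option["cost"]

best = max([("A", option_a), ("B", option_b)], key=lambda kv: net_benefit(kv[1]))
print("pick option", best[0], "with expected net benefit", net_benefit(best[1]))
```

Strip out the estimates (the probabilities, costs, or payoffs) and the comparison cannot be made at all, which is the microeconomic point.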

Real options is a similar estimating-based process applied in IT. Vasco stated long ago he was using real options along with Bayesian decision making. I suspect he was tossing around buzzwords without knowing what they actually mean.

From the business side, the final principle is Managerial Finance. This is the basis of a business’s management of its financial assets. The balance sheet is one place these are found. Since the future returns from today’s expenses, and the “potential” expenses of the future, are carried on that balance sheet, estimating is needed there as well for the financial well-being of the firm.

These three principles Value at Risk, MicroEconomics of Decision Making, and Managerial Finance are ignored by the NE advocates when they start with the conjecture that “decisions can be made without estimates,” and continue on with “estimates are a waste of developers time, they should be coding not estimating.”

It’s the view of the world that, as a developer, “it’s all about me,” never looking at the paycheck and asking where the money came from. That’s a common attitude, and one I held when I started my career 35 years ago as a FORTRAN developer for missile defense radar systems. Our boss had us take out our paychecks (a paper check in those days) and look at the upper left-hand corner: “That money doesn’t come from the Bank of America, El Segundo Blvd, Redondo Beach; it comes from the US Air Force. You young pups need to stay on schedule and make this thing work as it says in our contract.”

In the end, the NE conversation can be about the issues in estimating, and there are many; Steve McConnell speaks to those. I work on large federal acquisition programs – IT and embedded systems. Many times the root cause of an “over target baseline” is “bad estimating.” But the root cause of those bad estimates is not corrected by not estimating, as #NoEstimates would have us believe.

As posted on this blog before and sourced from the Director of “Performance Assessment and Root Cause Analysis,” unanticipated growth in cost has 4 major sources:

1. Unrealistic performance expectations – that will cost more money.
2. Unrealistic cost and schedule estimates – the source of poor estimating.
3. Inadequate assessment of risk and unmitigated risk exposure.
4. Unanticipated technical issues.

Research from IDA, where I sometimes work (www.ida.org), has shown these are core to the problems of cost overruns in nearly every domain, from ERP to embedded software-intensive systems. It is likely those 4 root causes are universal.

So what’s #NoEstimates trying to fix? They don’t say; they’re just “exploring new ways.” In what governance frameworks? They don’t say. They don’t actually say much of anything, just “estimates are waste, stop doing them and get back to coding.”

As my boss in 1978 reminded us newly minted Master’s-degreed coders, “it ain’t your money, it’s the USAF’s money; act accordingly – give me an estimate of when this thing you’re building can be used to find SS-9s coming our way.” Since then I’ve never forgotten his words: “business (in that case TRW) needs to know how much, when, and what, if it’s going to stay in business.”


Categories: Project Management

Extreme Programming Explained, Second Edition: Week 4

XP Explained

This week we begin getting into the proverbial weeds of Extreme Programming by tackling chapters six and seven in Kent Beck’s Extreme Programming Explained, Second Edition (2005). Chapters six and seven explore the practices that operationalize the values and principles we have explored in previous installments.

Chapter 6: Practices

Practices are the mechanism for applying an idea or method.  While values and principles provide the why, practices deliver the how.  In Chapter 3 we described the relationship as a bridge between two cliffs: values on one side, practices on the other, and principles as the bridge between them. Linking practices through principles to values is a nice metaphor to illustrate that all three components are needed to ensure that practices don’t become rote and lose value.

Chapter six provides a transition from the high level to the pragmatic.  In this chapter, Beck notes that the practices represent endpoints that need to be pursued using “baby steps” (one of the principles we identified in chapter 5). XP, like most other methodologies, provides the most value if all of the practices are practiced together; however, teams and organizations implementing XP can get value by implementing the practices one at a time.

Chapter 7: Primary Practices

The primary practices are the foundation of XP, but they are not the whole of it; Chapter 9 builds on the foundation introduced in Chapter 7 with corollary practices.  The primary practices are:

  1.      Sit together.  Sitting together helps people interact and work together. Death to cube farms that keep people apart.  Teams need both a collaborative space so they can talk and interact and private space so that individuals can focus when needed.  This simple practice is one of the most difficult for most organizations.  Distributed teams can’t sit together, and even when teams are collocated it is difficult to find the right balance between togetherness and alone time. Creeping up on getting people to sit together (Beck’s words) is a good idea so that teams can find the right balance.  Recognize that the balance can change based on team composition and the type of work being tackled.
  2.      Whole team. Use cross-functional teams that include all the skills and perspectives necessary for the project to succeed. Recognize that the definition of what constitutes the whole team is dynamic. Teams in XP include whole people; fractional people are a bad idea because of the huge productivity penalty caused by task switching.
  3.      Informative workspace. The workspace is not just a shell to hold people, but should also be part of telling the story of the work to the stakeholders and the team members. Story cards hung on the walls, lists of team norms and other big information radiators show the team’s goals and their progress.
  4.      Energized work. Team members should work as many hours as they can be productive and only as many as can be consistently sustained.  During one stretch of my career, I worked 80 or more hours a week to keep up with a colleague nicknamed DEFCON Don (he was an ex-missileer). While the competition was fun, I am not sure our productivity was as high as it could have been if we had worked a more reasonable schedule. Beck puts it succinctly, 

    “Software development is a game of insight, and insight comes to the prepared, rested, relaxed mind.”

  5.      Pair programming. Pair programming is easily one of the most controversial practices in XP.  The practice of pair programming occurs when two people and one keyboard collaborate on programming.  Typically one person talks and watches while the other types, rotating roles frequently.  This practice can be useful for almost any function, including analysis, design, and testing. When pairing, rotate pairs frequently; respect culture, personal space, and personal hygiene rules; and don’t pair if you have the flu!
  6.      Stories. Plan work using user stories, which are units of customer-visible functionality. The concept of the user story shifts the discussion of needs away from the permanence and obligatory nature of the word requirement. Estimate user stories as soon as they are written to provide the business and technical perspectives a chance to interact while the story is still top of mind. User stories have migrated into almost every Agile implementation.
  7.      Weekly cycle. XP suggests planning work a week at a time. While Scrum has settled roughly on a two-week cycle, XP recommends re-planning every week.   The use of a single week ensures that teams break work into small chunks and generate feedback quickly.  XP’s work cycle begins by writing tests followed by coding and finally running the tests to prove the solution.  The cycle repeats until the tests are passed.  XP introduced the concept of the planning meeting now used in Scrum. The weekly heartbeat gives you a convenient, frequent and predictable platform.
  8.      Quarterly cycle. Plan work a quarter at a time. The quarterly cycle allows the team to reflect and consider the big picture so they stay aligned with larger organizational or program goals.  The quarterly cycle is very much akin to the planning increment in SAFe.
  9.      Slack. Teams require slack to deal with issues that will invariably pop up.  Overcommitment causes waste as teams are stressed to make up time by pushing too hard or cutting corners.  The innovation sprint in SAFe is an example of planning slack into the process.
  10.  10-minute build. Build the whole system and run all of the tests in 10 minutes. The concept of the 10-minute build is that builds that take substantially longer will be done less often, reducing the chance for feedback.
  11.  Continuous integration. Integrate all changes into the whole application after no more than a couple of hours. Continuous integration provides feedback to prove that the system works. Beck recommends using synchronous builds in which all team members commit their code to the build at specific times.  The build is then tested, and if it passes coding begins anew (if not, a fix is needed).  Teams often begin with asynchronous builds because they require less coordination and then evolve to synchronous builds.
  12.  Test-first programming. Test-first programming is a powerful practice that begins by writing the tests that the developer will use to prove they have solved the business problem.  Test-first programming reduces scope creep, increases technical cohesion, trust, and rhythm. I have found that test-first development also helps to ensure that testers, business analysts, and developers collaborate.
  13.  Incremental design. Invest in the design every day using the knowledge that was gained the previous day as a mechanism, so the design continually evolves based on need. Incremental design is a form of real options theory.  Making small decisions helps reduce the cost of change and allows decisions to be made closer to the point of need.

The 13 primary practices provide the foundation for the practice of XP.  Even if an organization is not practicing XP, many of the primary practices are used to augment less technical Agile frameworks, such as Scrum.
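The test-first rhythm described in practice 12 can be sketched in miniature. This is a hypothetical example, not one of Beck's; the function name and business rule are invented. The test is written first, stating the rule before any production code exists, and then just enough code is written to make it pass.

```python
# Step 1: state the business rule as a test, before the code exists.
# Running it at this point would fail with a NameError -- that failure is
# the signal that there is real work to do.
def test_order_total_applies_bulk_discount():
    # 12 units at 10 each = 120, minus a 10% bulk discount = 108
    assert order_total(unit_price=10, quantity=12) == 108

# Step 2: write just enough code to make the test pass.
def order_total(unit_price, quantity):
    total = unit_price * quantity
    if quantity >= 10:
        total -= total * 10 // 100  # 10% bulk discount (integer math)
    return total

test_order_total_applies_bulk_discount()
```

The payoff Beck describes comes from repeating this loop in small steps: the tests accumulate into a safety net, and scope stays pinned to what a test actually demands.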

Previous installments of Extreme Programming Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3 

Week 3, Chapters 4 -5 


Categories: Process Management

Baseball as Incremental and Iterative Development

Herding Cats - Glen Alleman - Sat, 07/09/2016 - 16:50


Sitting in our seats at last night's Rockies v. Phillies game, it dawned on me that there's an analogy between the Moneyball strategy and good management of software development. In Moneyball, Billy Beane was faced with a limited budget for players.  He hired a statistics guy and they figured out that getting on base was just as important as hitting a home run, and a hell of a lot cheaper.

Last night there were a few home runs. But most of the action was singles and a few doubles. If you do the simple-minded math, when the lineup all get singles with batting averages of 0.303, then there'll be runs scored every inning. Four singles equal a home run. Getting on base is the key to winning games.

Getting incrementally more software out the door - assuming it's the right software needed by the customer and that customer can put the software to work - then progress toward the win is being made.

So the strawman of Waterfall and Big Bang is just that, a strawman. The strawman of No Projects is also nonsense, along with No Estimates. The manager of the Rockies has a plan in the presence of uncertainty and emerging situations. That's why he's called the MANAGER: he manages in the presence of uncertainty. And in doing that job he makes estimates of the probability of success of the emerging play options.

We can learn a lot from baseball about managing projects. First, get on base. You can't score unless you're on base. First, second, third, then home. You can't count on hitting home runs to win the game. It doesn't work that way. Offense is good, but so is defense. Managing the risks is defense. Defense in baseball is more than just putting players on the field. It's how those players react when the ball is hit. Go for the out at first? Try for a double play? Hold the ball after a single bounce in the outfield?

While baseball is not a contact sport, it still requires teamwork. I played competitive volleyball in college – the ultimate team sport, since you're only as strong as the weakest player – much like a software development team. But all teams have a strategy, a game plan that changes as the game emerges, and most of all – as Peter Kretzman has reminded some NE advocates who have likely never played baseball – all the players are making estimates all the time in order to catch the ball, keep control of the ball, and read the emerging situation of the game.

Categories: Project Management

R: Sentiment analysis of morning pages

Mark Needham - Sat, 07/09/2016 - 07:36

A couple of months ago I came across a cool blog post by Julia Silge where she runs a sentiment analysis algorithm over her tweet stream to see how her tweet sentiment has varied over time.

I wanted to give it a try but couldn’t figure out how to get a dump of my tweets so I decided to try it out on the text from my morning pages writing which I’ve been experimenting with for a few months.

Here’s an explanation of morning pages if you haven’t come across it before:

Morning Pages are three pages of longhand, stream of consciousness writing, done first thing in the morning.

*There is no wrong way to do Morning Pages* – they are not high art.

They are not even “writing.” They are about anything and everything that crosses your mind– and they are for your eyes only.

Morning Pages provoke, clarify, comfort, cajole, prioritize and synchronize the day at hand. Do not over-think Morning Pages: just put three pages of anything on the page…and then do three more pages tomorrow.

Most of my writing is complete gibberish but I thought it’d be fun to see how my mood changes over time and see if it identifies any peaks or troughs in sentiment that I could then look into further.

I’ve got one file per day so we’ll start by building a data frame containing the text, one row per day:

# assumes library(syuzhet), library(dplyr), library(lubridate), library(ggplot2),
# library(scales) and library(reshape2) are loaded; `root` is the directory
# holding one text file per day, and get_text_as_string() reads a file into a
# single string (both defined earlier in the post)
files = list.files(root)
df = data.frame(file = files, stringsAsFactors = FALSE)
df$fullPath = paste(root, df$file, sep = "/")
df$text = sapply(df$fullPath, get_text_as_string)

We end up with a data frame with 3 fields:

> names(df)
[1] "file"     "fullPath" "text"

Next we’ll run the sentiment analysis function – syuzhet#get_nrc_sentiment – over the data frame and get a score for each type of sentiment for each entry:

get_nrc_sentiment(df$text) %>% head()
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     7           14       5    7   8       6        6    12       14       27
2    11           12       2   13   9      10        4    11       22       24
3     6           12       3    8   7       7        5    13       16       21
4     5           17       4    7  10       6        7    13       16       37
5     4           13       3    7   7       9        5    14       16       25
6     7           11       5    7   8       8        6    15       16       26

Now we’ll merge these columns into our original data frame:

df = cbind(df, get_nrc_sentiment(df$text))
df$date = ymd(sapply(df$file, function(file) unlist(strsplit(file, "[.]"))[1]))
df %>% select(-text, -fullPath, -file) %>% head()
  anger anticipation disgust fear joy sadness surprise trust negative positive       date
1     7           14       5    7   8       6        6    12       14       27 2016-01-02
2    11           12       2   13   9      10        4    11       22       24 2016-01-03
3     6           12       3    8   7       7        5    13       16       21 2016-01-04
4     5           17       4    7  10       6        7    13       16       37 2016-01-05
5     4           13       3    7   7       9        5    14       16       25 2016-01-06
6     7           11       5    7   8       8        6    15       16       26 2016-01-07

Finally we can build some ‘sentiment over time’ charts like Julia has in her post:

posnegtime <- df %>% 
  group_by(date = cut(date, breaks="1 week")) %>%
  summarise(negative = mean(negative), positive = mean(positive)) %>% 
  melt(measure.vars = c("negative", "positive"))   # reshape2::melt to long form
names(posnegtime) <- c("date", "sentiment", "meanvalue")
posnegtime$sentiment = factor(posnegtime$sentiment, levels(posnegtime$sentiment)[c(2,1)])
ggplot(data = posnegtime, aes(x = as.Date(date), y = meanvalue, group = sentiment)) +
  geom_line(size = 2.5, alpha = 0.7, aes(color = sentiment)) +
  geom_point(size = 0.5) +
  ylim(0, NA) + 
  scale_colour_manual(values = c("springgreen4", "firebrick3")) +
  theme(legend.title=element_blank(), axis.title.x = element_blank()) +
  scale_x_date(breaks = date_breaks("1 month"), labels = date_format("%b %Y")) +
  ylab("Average sentiment score") + 
  ggtitle("Sentiment Over Time")

2016 07 05 06 47 12

So overall it seems like my writing displays more positive sentiment than negative which is nice to know. The chart shows a rolling one week average and there isn’t a single week where there’s more negative sentiment than positive.

I thought it’d be fun to drill into the highest negative and positive days to see what was going on there:

> df %>% filter(negative == max(negative)) %>% select(date)
1 2016-03-19
> df %>% filter(positive == max(positive)) %>% select(date)
1 2016-01-05
2 2016-06-20

On the 19th March I was really frustrated because my boiler had broken down and I had to buy a new one – I’d completely forgotten how annoyed I was, so thanks sentiment analysis for reminding me!

I couldn’t find anything particularly positive on the 5th January or 20th June. The 5th January was the day after my birthday so perhaps I was happy about that but I couldn’t see any particular evidence that was the case.

Playing around with the get_nrc_sentiment function it does seem to identify positive sentiment when I wouldn’t say there is any. For example here’s some example sentences from my writing today:

> get_nrc_sentiment("There was one section that I didn't quite understand so will have another go at reading that.")
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     0            0       0    0   0       0        0     0        0        1
> get_nrc_sentiment("Bit of a delay in starting my writing for the day...for some reason was feeling wheezy again.")
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     2            1       2    2   1       2        1     1        2        2

I don’t think there’s any positive sentiment in either of those sentences but the function claims 3 bits of positive sentiment! It would be interesting to see if I fare any better with Stanford’s sentiment extraction tool which you can use with syuzhet but requires a bit of setup first.

I’ll give that a try next but in terms of getting an overview of my mood I thought I might get a better picture if I looked for the difference between positive and negative sentiment rather than absolute values.

The following code does the trick:

difftime <- df %>% 
  group_by(date = cut(date, breaks="1 week")) %>%
  summarise(diff = mean(positive) - mean(negative))
ggplot(data = difftime, aes(x = as.Date(date), y = diff)) +
  geom_line(size = 2.5, alpha = 0.7) +
  geom_point(size = 0.5) +
  ylim(0, NA) + 
  scale_colour_manual(values = c("springgreen4", "firebrick3")) +
  theme(legend.title=element_blank(), axis.title.x = element_blank()) +
  scale_x_date(breaks = date_breaks("1 month"), labels = date_format("%b %Y")) +
  ylab("Average sentiment difference score") + 
  ggtitle("Sentiment Over Time")
2016 07 09 07 05 34

This one identifies peak happiness in mid January/February. We can find the peak day for this measure as well:

> df %>% mutate(diff = positive - negative) %>% filter(diff == max(diff)) %>% select(date)
1 2016-02-25

Or if we want to see the individual scores:

> df %>% mutate(diff = positive - negative) %>% filter(diff == max(diff)) %>% select(-text, -file, -fullPath)
  anger anticipation disgust fear joy sadness surprise trust negative positive       date diff
1     0           11       2    3   7       1        6     6        3       31 2016-02-25   28

After reading through the entry for this day I’m wondering if the individual pieces of sentiment might be more interesting than the positive/negative score.

On the 25th February I was:

  • quite excited about reading a distributed systems book I’d just bought (I know?!)
  • thinking about how to apply the tag clustering technique to meetup topics
  • preparing my submission to PyData London and thinking about what was gonna go in it
  • thinking about the soak testing we were about to start doing on our project

Each of those is a type of anticipation so it makes sense that this day scores highly. I looked through some other days which specifically rank highly for anticipation and couldn’t figure out what I was anticipating so even this is a bit hit and miss!

I have a few avenues to explore further but if you have any other ideas for what I can try next let me know in the comments.

Categories: Programming

Changes to Trusted Certificate Authorities in Android Nougat

Android Developers Blog - Fri, 07/08/2016 - 18:41

Posted by Chad Brubaker, Android Security team

In Android Nougat, we’ve changed how Android handles trusted certificate authorities (CAs) to provide safer defaults for secure app traffic. Most apps and users should not be affected by these changes or need to take any action. The changes include:

  • Safe and easy APIs to trust custom CAs.
  • Apps that target API Level 24 and above no longer trust user or admin-added CAs for secure connections, by default.
  • All devices running Android Nougat offer the same standardized set of system CAs—no device-specific customizations.

For more details on these changes and what to do if you’re affected by them, read on.

Safe and easy APIs

Apps have always been able to customize which certificate authorities they trust. However, we saw apps making mistakes due to the complexities of the Java TLS APIs. To address this, we improved the APIs for customizing trust.

User-added CAs

Protection of all application data is a key goal of the Android application sandbox. Android Nougat changes how applications interact with user- and admin-supplied CAs. By default, apps that target API level 24 will—by design—not honor such CAs unless the app explicitly opts in. This safe-by-default setting reduces application attack surface and encourages consistent handling of network and file-based application data.

Customizing trusted CAs

Customizing the CAs your app trusts on Android Nougat is easy using the Network Security Config. Trust can be specified across the whole app or only for connections to certain domains, as needed. Below are some examples for trusting a custom or user-added CA, in addition to the system CAs. For more examples and details, see the full documentation.
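As background (a detail assumed here, not spelled out above): an app opts into a Network Security Config by declaring the config file in its manifest. The file name below is a placeholder of your choosing under `res/xml/`.

```xml
<!-- AndroidManifest.xml: point the app at res/xml/network_security_config.xml -->
<application android:networkSecurityConfig="@xml/network_security_config">
    <!-- activities, services, etc. -->
</application>
```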

Trusting custom CAs for debugging

To allow your app to trust custom CAs only for local debugging, include something like this in your Network Security Config. The CAs will only be trusted while your app is marked as debuggable.

<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <!-- Trust user added CAs while debuggable only -->
            <certificates src="user"/>
        </trust-anchors>
    </debug-overrides>
</network-security-config>
Trusting custom CAs for a domain

To allow your app to trust custom CAs for a specific domain, include something like this in your Network Security Config.

<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">internal.example.com</domain>
        <trust-anchors>
            <!-- Only trust the CAs included with the app
                 for connections to internal.example.com -->
            <certificates src="@raw/cas"/>
        </trust-anchors>
    </domain-config>
</network-security-config>
Trusting user-added CAs for some domains

To allow your app to trust user-added CAs for multiple domains, include something like this in your Network Security Config.

<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">userCaDomain.com</domain>
        <domain includeSubdomains="true">otherUserCaDomain.com</domain>
        <trust-anchors>
            <!-- Trust preinstalled CAs -->
            <certificates src="system"/>
            <!-- Additionally trust user added CAs -->
            <certificates src="user"/>
        </trust-anchors>
    </domain-config>
</network-security-config>
Trusting user-added CAs for all domains except some

To allow your app to trust user-added CAs for all domains, except for those specified, include something like this in your Network Security Config.

<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- Trust preinstalled CAs -->
            <certificates src="system"/>
            <!-- Additionally trust user added CAs -->
            <certificates src="user"/>
        </trust-anchors>
    </base-config>
    <domain-config>
        <domain includeSubdomains="true">sensitive.example.com</domain>
        <trust-anchors>
            <!-- Only allow sensitive content to be exchanged
                 with the real server and not any user or
                 admin configured MiTMs -->
            <certificates src="system"/>
        </trust-anchors>
    </domain-config>
</network-security-config>
Trusting user-added CAs for all secure connections

To allow your app to trust user-added CAs for all secure connections, add this in your Network Security Config.

<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- Trust preinstalled CAs -->
            <certificates src="system"/>
            <!-- Additionally trust user added CAs -->
            <certificates src="user"/>
        </trust-anchors>
    </base-config>
</network-security-config>
Standardized set of system-trusted CAs

To provide a more consistent and more secure experience across the Android ecosystem, beginning with Android Nougat, compatible devices trust only the standardized system CAs maintained in AOSP.

Previously, the set of preinstalled CAs bundled with the system could vary from device to device. This could lead to compatibility issues when some devices did not include CAs that apps needed for connections as well as potential security issues if CAs that did not meet our security requirements were included on some devices.

What if I have a CA I believe should be included on Android?

First, be sure that your CA needs to be included in the system. The preinstalled CAs are only for CAs that meet our security requirements because they affect the secure connections of most apps on the device. If you need to add a CA for connecting to hosts that use that CA, you should instead customize your apps and services that connect to those hosts. For more information, see the Customizing trusted CAs section above.

If you operate a CA that you believe should be included in Android, first complete the Mozilla CA Inclusion Process and then file a feature request against Android to have the CA added to the standardized set of system CAs.

Categories: Programming

Stuff The Internet Says On Scalability For July 8th, 2016

Hey, it's HighScalability time:

Juno: 165,000mph, 1.7 billion miles, missed orbit by 10 miles. Dang buggy software. 


If you like this sort of Stuff then please support me on Patreon.
  • $3B: damages awarded to HP from Oracle; 37%: when to stop looking through your search period; 70%: observed Annualized Failure Rate (AFR) in production datacenters for some models of SSDs; 

  • Quotable Quotes:
    • spacerodent: After Christmas there was this huge excess capacity and that is when I first learned of the EC2 project. It was my belief EC2 came out of a need to utilize those extra Gurupa servers during the off season:)
    • bcantrill: That said, I think Sun's problem was pretty simple: we thought we were a hardware company long after it should have been clear that we were a systems company. As a result, we made overpriced, underperforming (and, it kills me to say, unreliable) hardware. And because we were hardware-fixated, we did not understand the economic disruptive force of either Intel or open source until it was too late. 
    • @cmeik: I am not convinced the blockchain and CRDTs *work.*
    • daly: Managers make decisions. Only go to management with your need for a decision and always present the options. They went to management with what was, in essence, a complaint. Worse, it was a complaint that had nothing to do with the business. Clearly they were not keeping the business uppermost in their priority queue. So management made a business decision and fixed the problem.
    • @colettecello: Architect: "we should break this down into 6 microservices" Me: "you have 6 teams who hate each other?" Architect: "how did you know that?"
    • Matt Stats: The differences between BSD and Linux all derive from basic philosophical differences. Once you understand those, everything else falls into place pretty neatly.
    • @wattersjames: "Last year, Johnson & Johnson turned off its last mainframe"
    • Allan Kelly: But in the world of software development this mindset [economies of scale] is a recipe for failure and under performance. The conflict between economies of scale thinking and diseconomies of scale working will create tension and conflict.
    • Jeff G: Today, a large part of my business is migrating companies off the monolithic Java EE containers into lightweight modular containers. Yes, even the tried and true banking and financial industries are moving away from Java EE. 
    • collyw: "Weeks of programming can save hours of planning" is a favorite quote of mine.
    • xiongchiamiov: when I see a team responsible for hundreds of microservices, it's not at all surprising when I find they're completely underwater and struggling to keep up with maintenance, much less new features.
    • Robert Plomin: We're always talking about differences. The only genetics that makes a difference is that 1 percent of the 3 billion base pairs. But that is over 10 million base pairs of DNA. We're looking at these differences and asking to what extent they cause the differences that we observe. 
    • @jmferdegue: Micro services as a cost reduction strategy for project delivery. Marco Cullen from @OpenCredo at #micromanchester
    • J.R.R. Tolkien: I like, and even dare to wear in these dull days, ornamental waistcoats.
    • @johnregehr: HN commenter has reached enlightenment: In both cases, after about a year, we found ourselves wishing we had not rewritten the network stack.
    • @nigelbabu: OH: 9.9999% uptime is still five 9s.
    • @jessfraz: "We are going to need a floppy and a shaman" @ryanhuber
    • Peter Cohen: So, why the cloud? Because, the developer.
    • @CompSciFact: 'The fastest algorithm can frequently be replaced by one that is almost as fast and much easier to understand.' -- Douglas W. Jones
    • @igrigorik: Improved font loading in WebKit: http://bit.ly/29eaxV2  - tl;dr: 3s timeout, WOFF2, unicode-range, Font Loading API. hooray!
    • @danielbryantuk: "I've worked on teams with 200+. We had 3 people just to make JPA work" @myfear on scaling issues #micromanchester
    • AWS Origin Story: Jassy tells of an executive retreat at Jeff Bezos’ house in 2003. It was there that the executive team conducted an exercise identifying the company’s core competencies
    • @sheeshee: ".. you are charged for every 100ms your code executes and the number of times your code is triggered." the 1970ies are back. (aws lambda)
    • @KentBeck: accepting mediocrity as the price of scaling misunderstands the power law distribution of payoffs.
    • @cowtowncoder: that is: cost efficiency from AWS et al is for SMALL deployments, and at some point it always, invariably becomes cheaper to DIY
    • Exascale Computing Research priorities: Total power requirements suggest that CPUs will not be suitable commodity processors for supercomputers in the future.

  • Here's how Instagram does it. Instagram + Android: Four Years Later: At the core of this principle is the idea that the Instagram app is simply a renderer of server-provided data, much like a web browser. Almost all complex business logic happens server-side, where it is easier to fix bugs and add new features. We rely on the server to be perfect, enforced through continuous integration testing, and dispense with null-checking or data-consistency checking on the client.

  • Good story on how WePay is moving from a monolith to a services based architecture on top of Kubernetes. Advantages: autoscaling, rolling updates, a pure model independent of software assigned to specific machines.  WePay on Kubernetes: ‘It Changed Our Business’.

  • Julia Ferraioli with a really fun explanation of Kubernetes using Legos.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Accounting for Software Development - Products or Projects

Herding Cats - Glen Alleman - Fri, 07/08/2016 - 15:33

There appears to be a resurgence in the No Projects conversation, similar to the No Estimates notion that has been around for a while.

I’m going to suggest that most of the disconnects around ideas of software development ‒ from No Estimates to No Projects to whatever ‒ start with developers and the assumption that it's their money. It's not their money. If it were, they could do with it as they please, and no one would care.

There are standard accounting processes in any business that creates products or services -- including software -- for money. Unless the developers are staff augmentation (just labor) for another firm, the accounting processes are defined in several places: FASB 86, FASB 10, and GAAP rules for capitalizing and expensing the cost of developing a product for internal use or for sale externally.

Here's a start …

The separation of Products from Projects at the software development process level is understandable. I work a Program that has both, each for good reasons. A Product Line enhancement usually continues an existing system – Version 9 of a legacy system is a Product Line extension of a system that has been in place for 11 years. A “new” product, Version 1.0 in the same domain, is treated as a Project. The Project is to establish Version 1.0, which will then be extended over its life.

If the No Projects approach goes the same way as the No Estimates approach, those paying for the work will be intentionally excluded from the conversation. When I asked one of the originators of the #NoEstimates conjecture to go check the idea with the CFO, I got silence. 

Follow the Money is advice I received from a colleague, a former NASA Cost Director.

This is good advice for anyone proposing an initiative that pretends to change the status quo, tilt at the windmill of supposed bad management, or remedy any other supposed evil as seen by those spending money provided by someone else. Remember this when making suggestions...

It's not Your Money, act accordingly. Your opinion should be considered as appropriate, but it's not your money. Those whose money it is have a fiduciary responsibility to spend it in a manner compliant with the accounting principles of the firm, be that private or public.

Categories: Project Management