
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

More nxlog logging tricks

Agile Testing - Grig Gheorghiu - Tue, 04/14/2015 - 00:11
In a previous post I talked about "Sending Windows logs to Papertrail with nxlog". In the meantime I had to work through a couple of nxlog issues that weren't quite obvious to solve -- hence this quick post.

Scenario 1: You don't want to send a given log file to Papertrail

My solution:

In this section:

# Monitor MyApp1 log files
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if $Message =~ /GET \/ping/ drop();
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>

add a line which drops the current log line if the file name contains the pattern you are looking to skip. For example, for a file named skip_this_one.log (from the same log directory), the new stanza would be:
# Monitor MyApp1 log files
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if $Message =~ /GET \/ping/ drop();
 Exec if file_name() =~ /skip_this_one.log/ drop();
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>
Scenario 2: You want to prefix certain log lines depending on their directory of origin
Assume you have a test app and a dev app running on the same box, with the same exact log format, but with logs saved in different directories, so that in the Input sections you would have 
File 'C:\\MyTestApp\\logs\\*.log' for the test app and File 'C:\\MyDevApp\\logs\\*.log' for the dev app.
The only solution I found so far was to declare a filewatcher_transformer Processor section for each app. The default filewatcher_transformer section I had before looked like this:

<Processor filewatcher_transformer>
 Module pm_transformer
 # Uncomment to override the program name
 # Exec $SourceName = 'PROGRAM NAME';
 Exec $Hostname = hostname();
 OutputFormat syslog_rfc5424
</Processor>
I created instead these 2 sections:
<Processor filewatcher_transformer_test>
 Module pm_transformer
 # Uncomment to override the program name
 # Exec $SourceName = 'PROGRAM NAME';
 Exec $SourceName = "TEST_" + $SourceName;
 Exec $Hostname = hostname();
 OutputFormat syslog_rfc5424
</Processor>

<Processor filewatcher_transformer_dev>
 Module pm_transformer
 # Uncomment to override the program name
 # Exec $SourceName = 'PROGRAM NAME';
 Exec $SourceName = "DEV_" + $SourceName;
 Exec $Hostname = hostname();
 OutputFormat syslog_rfc5424
</Processor>
As you can see, I chose to prefix $SourceName, which is the name of the log file in this case, with either TEST_ or DEV_ depending on the app.
There is one thing remaining, which is to define a specific route for each app. Before, I had a common route for both apps:
<Route 2>
 Path MyAppTest, MyAppDev => filewatcher_transformer => syslogout
</Route>
I replaced the common route with the following 2 routes, each connecting an app with its respective Processor section.
<Route 2>
 Path MyAppTest => filewatcher_transformer_test => syslogout
</Route>

<Route 3>
 Path MyAppDev => filewatcher_transformer_dev => syslogout
</Route>
At this point, I restarted the nxlog service and I started to see log filenames in Papertrail of the form DEV_errors.log and TEST_errors.log.

The Realtime API: In memory mode, debug tools, and more

Google Code Blog - Mon, 04/13/2015 - 21:20

Posted by Cheryl Simon Retzlaff, Software Engineer on the Realtime API team

Originally posted to the Google Apps Developer blog

Real-time collaboration is a powerful feature for getting work done inside Google docs. We extended that functionality with the Realtime API to enable you to create Google-docs style collaborative applications with minimal effort.

Integration of the API becomes even easier with a new in memory mode, which allows you to manipulate a Realtime document using the standard API without being connected to our servers. No user login or authorization is required. This is great for building applications where Google login is optional, writing tests for your app, or experimenting with the API before configuring auth.

The Realtime debug console lets you view, edit and debug a Realtime model. To launch the debugger, simply execute gapi.drive.realtime.debug(); in the JavaScript console in Chrome.

Finally, we have refreshed the developer guides to make it easier for you to learn about the API as a new or advanced user. Check them out at https://developers.google.com/drive/realtime.

For details on these and other recent features, see the release note.

Categories: Programming

Best Creativity Books

NOOP.NL - Jurgen Appelo - Mon, 04/13/2015 - 20:34

After my lists of mindfulness books and happiness books, here you can find the 20 Best Creativity Books in the World.

This list is created from the books on GoodReads tagged with “creativity”, sorted using an algorithm that favors the number of reviews, average rating, and recent availability.

The post Best Creativity Books appeared first on NOOP.NL.

Categories: Project Management

Mobile Sync for Mongo

Eric.Weblog() - Eric Sink - Mon, 04/13/2015 - 19:00

We here at Zumero have been exploring the possibility of a mobile sync solution for MongoDB.

We first released our Zumero for SQL Server product almost 18 months ago, and today there are bunches of people using mobile apps which sync using our solution.

But not everyone uses SQL Server, so we often wonder what other database backends we should consider supporting. In this blog entry, I want to talk about some progress we've made toward a "Zumero for Mongo" solution and "think out loud" about the possibilities.

Background: Mobile Sync

The basic idea of mobile sync is to keep a partial copy of the database on the mobile device so the app doesn't have to go back to the network for every single CRUD operation. The benefit is an app that is faster, more reliable, and works offline. The flip side of that coin is the need to keep the mobile copy of the database synchronized with the data on the server.

Sync is tricky, but as mobile continues its explosive growth, this approach is gaining momentum:

If the folks at Mongo are already working on something in this area, we haven't seen any sign of it. So we decided to investigate some ideas.

Pieces of the puzzle

In addition to the main database (like SQL Server or MongoDB or whatever), a mobile sync solution has three basic components:

Mobile database
  • Runs on the mobile device as part of the app

  • Probably an embedded database library

  • Keeps a partial replica of the main database

  • Wants to be as similar as possible to the main database

Sync server
  • Monitors changes made by others to the main database

  • Sends incremental changes back and forth between clients and the main database

  • Resolves conflicts, such as when two participants want to change the same data

  • Manages authentication and permissions for mobile clients

  • Filters data so that each client only gets what it needs

Sync client
  • Monitors changes made by the app to the mobile database

  • Talks over the network to the sync server

  • Pushes and pulls incremental changes to keep the mobile database synchronized

    For this blog entry, I want to talk mostly about the mobile database. In our Zumero for SQL Server solution, this role is played by SQLite. There are certainly differences between SQL Server and SQLite, but on the whole, SQLite does a pretty good job pretending to be SQL Server.

    What embedded database could play this role for Mongo?

    This question has no clear answer, so we've been building a lightweight Mongo-compatible database. Right now it's just a prototype, but its development serves the purpose of helping us explore mobile sync for Mongo.

    Embeddable Lite Mongo

    Or "Elmo", for short.

    Elmo is a database that is designed to be as Mongo-compatible as it can be within the constraints of mobile devices.

    In terms of the status of our efforts, let me begin with stuff that does NOT work:

    • Sharding is an example of a Mongo feature that Elmo does not support and probably never will.

    • Elmo also has no plans to support any feature which requires embedding a JavaScript engine, since that would violate Apple's rules for the App Store.

    • We do hope to support full text search ($text, $meta, etc), but this is not yet implemented.

    • Similarly, we have not yet implemented any of the geo features, but we consider them to be within the scope of the project.

    • Elmo does not support capped collections, and we are not yet sure if it should.

    Broadly speaking, except for the above, everything works. Mostly:

    • All documents are stored in BSON

    • Except for JS code, all BSON types are supported

    • Comparison and sorting of BSON values (including different types) works

    • All basic CRUD operations are implemented

    • The update command supports all the update operators except $isolated

    • The update command supports upsert as well

    • The findAndModify command includes full support for its various options

    • Basic queries are fully functional, including query operators, projection, and sorting

    • The matcher supports Mongo's notion of query predicates matching any element of an array

    • CRUD operations support resolution of paths into array subobjects, like x.y to {x:[{y:2}]}

    • Regex works, with support for the i, s, and m options

    • The positional operator $ works in update and projection

    • Cursors and batchSize are supported

    • The aggregation pipeline is supported, including all expression elements and all stages (except geo)

    More caveats:

    • Support for indexes is being implemented, but they don't actually speed anything up yet.

    • The dbref format is tolerated, but is not [yet] resolved.

    • The $explain feature is not implemented yet.

    • For the purpose of storing BSON blobs, Elmo is currently using SQLite. Changing this later will be straightforward, as we're basically just using SQLite as a key-value store, so the API between all of Elmo's CRUD logic and the storage layer is not very wide.

    Notes on testing:

    • Although Elmo, being mobile-focused, does not need an actual server, it has one, simply so that we can run the jstests suite against it.

    • The only test suite sections we have worked on are jstests/core and jstests/aggregation.

    • Right now, Elmo can pass 311 of the test cases from jstests.

    • We have never tried contacting Elmo with any client driver except the mongo shell. So this probably doesn't work yet.

    • Elmo's server only supports the new style protocol, including OP_QUERY, OP_GET_MORE, OP_KILL_CURSORS, and OP_REPLY. None of the old "fire and forget" messages are implemented.

    • Where necessary to make a test case pass, Elmo tries to return the same error numbers as Mongo itself.

    • All effort thus far has been focused on making Elmo functional, with no effort spent on performance.

    How Elmo should work:

    • In general, our spec for Elmo's behavior is the MongoDB documentation plus the jstests suite.

    • In cases where the Mongo docs seem to differ from the actual behavior of Mongo, we try to make Elmo behave like Mongo does.

    • In cases where the Mongo docs are silent, we often stick a proxy in front of the Mongo server and dump all the messages so we can see exactly what is going on.

    • We occasionally consult the Mongo server source code for reference purposes, but no Mongo code has been copied into Elmo.

    Notes on the code:

    • Elmo is written in F#, which was chosen because it's an insanely productive environment and we want to move quickly.

    • But while F# is a great language for this exploratory prototype, it may not be the right choice for production, simply because it would confine Elmo use cases to Xamarin, and Miguel's world domination plan is not quite complete yet. :-)

    • The Elmo code is now available on GitHub at https://github.com/zumero/Elmo. Currently the license is GPLv3, which makes it incompatible with production use on mobile platforms, which is okay for now, since Elmo isn't ready for production use anyway. We'll revisit licensing issues later.

    Next steps:

    • Our purpose in this blog entry is to start conversations with others who may be interested in mobile sync solutions for Mongo.

    • Feel free to post a question or comment or whatever as an issue on GitHub: https://github.com/zumero/Elmo/issues

    • Or email me: eric@zumero.com

    • Or Tweet: @eric_sink

    • If you're interested in a face-to-face conversation or a demo, we'll be at MongoDB World in NYC at the beginning of June.

     

Three Fast Data Application Patterns

This is a guest post by John Piekos, VP of Engineering at VoltDB. I understand this is a little PRish, but I think the ideas are solid.

The focus of many developers and architects in the past few years has been on Big Data, specifically mining historical intelligence from the Data Lake (usually a Hadoop stack containing terabytes to petabytes of data).

Now, product architects are asking how they can use this business intelligence for competitive advantage. As a result, application developers have come to see the value of using and acting in real time on streams of fast data; using OLAP reporting wisdom, they can realize the benefits of both fast data and Big Data. Consequently, a new set of application patterns has emerged: applications designed to capture value from fast-moving streaming data before it reaches Hadoop.

At VoltDB we call this new breed of applications "fast data" applications. The goal of these fast data applications is not just to push data into Hadoop as fast as possible, but also to capture real-time value from the data the moment it arrives.

Because traditional databases historically haven't been fast enough, developers have been forced to go to great effort to build fast data applications: complex multi-tier systems, often involving a handful of tools and typically a dozen or more servers. However, a new class of database technology, especially NewSQL offerings, has changed this equation.

If you have a relational database that is fast enough, highly available, and able to scale horizontally, the ability to build fast data applications becomes less esoteric and much more manageable. Three new real-time application patterns have emerged as the necessary dataflows to implement real-time applications. These patterns, enabled by new, fast database technology, are:

Categories: Architecture

The Flaw of Averages and Not Estimating

Herding Cats - Glen Alleman - Mon, 04/13/2015 - 16:04

There is a popular notion in the #NoEstimates paradigm that empirical data is the basis of forecasting the future performance of a development project. In principle this is true, but the concept is not complete in the way it is used. Let's start with the data source used for this conjecture.

There are 12 samples in the example used by #NoEstimates, in this case stickies per week. From this time series an average is calculated for the future. This is the empirical data used to estimate in the No Estimates paradigm. The average is 18.1667, or just 18 stickies per week.

(Image: the stickies-per-week sample data)

 

But we all have read, or should have read, Sam Savage's The Flaw of Averages. This is a very nice populist book. By populist I mean an easily accessible text with little or no mathematics in it, although Savage's own work is highly mathematical, with a tool set to match.

There is a simple set of tools that can be applied to time series analysis, using past performance to forecast the future performance of the system that created the time series. The tool is R, and it is free for all platforms.

Here's the R code for performing a statistically sound forecast to estimate the possible range of values the past empirical stickies data can take on in the future.

Put the time series in an Excel file, save it as TEXT named BOOK1, and load it into R as Book1. Then:

> library(forecast)        # the forecast() function comes from the forecast package
> SPTS=ts(Book1)           # apply the time series function in R to convert this data to a time series
> SPFIT=arima(SPTS)        # apply the simple ARIMA function to the time series
> SPFCST=forecast(SPFIT)   # build a forecast from the ARIMA outcome
> plot(SPFCST)             # plot the results

Here's that plot, showing the 80% and 90% confidence bands for the possible future outcomes derived from past performance - the empirical data.

The 80% range is 27 to 10 and the 90% range is 30 to 5.

(Image: forecast plot with confidence bands)
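For anyone who wants to reproduce the workflow end to end, here is a self-contained sketch. The 12 weekly counts below are made up for illustration (the post's actual values aren't reproduced here), and auto.arima() from the forecast package is used in place of the bare arima() call above so the model order is selected from the data:

library(forecast)                 # provides auto.arima() and forecast()

stickies <- c(20, 15, 19, 22, 14, 17, 21, 16, 25, 18, 13, 18)  # hypothetical weekly counts
SPTS <- ts(stickies)              # convert the vector to a time series
SPFIT <- auto.arima(SPTS)         # fit an ARIMA model, selecting the order from the data
SPFCST <- forecast(SPFIT, h = 4)  # forecast the next 4 weeks
summary(SPFCST)                   # point forecasts plus prediction intervals
plot(SPFCST)                      # plot; forecast() draws 80% and 95% bands by default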

So the killer question.

Would you bet your future on a probability of success with a +65% to -72% range (the 90% band of 30 to 5 stickies, relative to the average of 18.2) for the cost, schedule, or technical performance of the outcomes?

I hope not. This is a flawed example, I know: too small a sample, no adjustment of the ARIMA factors, just a quick raw assessment of the data used in some quarters as a replacement for actually estimating future performance. But this assessment shows how empirical data COULD support making decisions about future outcomes in the presence of uncertainty using past time series, once the naive assumptions of sample size and wide variances are corrected.

The End

If you hear you can make decisions without estimating, that's pretty much a violation of all established principles of microeconomics and statistical forecasting. When the answer comes back "we used empirical data," take your time series of empirical data, download R, install the needed packages, put the data in a file, apply the functions above, and see if you really want to commit to spending other people's money with a confidence range of +65% to -72% of performing like you did in the past. I sure hope not!!

Related articles:
  • Flaw of Averages
  • Estimating Probabilistic Outcomes? Of Course We Can!
  • Critical Success Factors of IT Forecasting
  • Herding Cats: Empirical Data Used to Estimate Future Performance
  • Some More Background on Probability, Needed for Estimating
  • Forecast, Automatic Routines vs. Experience
  • Five Estimating Pathologies and Their Corrective Actions
Categories: Project Management

Why Comments Are Stupid, a Real Example

Making the Complex Simple - John Sonmez - Mon, 04/13/2015 - 16:00

Nothing seems to stir up religious debate more so than when I write a post or do a YouTube video that mentions how most of the time comments are not necessary and are actually more harmful than helpful. I first switched sides in this debate when I read the second edition of Code Complete. In […]

The post Why Comments Are Stupid, a Real Example appeared first on Simple Programmer.

Categories: Programming

By: Excellent Resources for Business Analysts | Practical Analyst

Software Requirements Blog - Seilevel.com - Mon, 04/13/2015 - 14:11

[…] Karl Wiegers are participants. I learn something new every time I visit the Seilevel board. The blog is also […]

Categories: Requirements

Who Builds a House without Drawings?

Herding Cats - Glen Alleman - Mon, 04/13/2015 - 05:46

This month's issue of Communications of the ACM has a Viewpoint article titled "Who Builds a House without Drawing Blueprints?" in which two ideas are presented:

  • It is a good idea to think about what we are about to do before we do it.
  • If we're going to write a good program, we need to think above the code level.

The example from the last bullet is that there are many coding methods - test-driven development, agile programming, and others ...

If the only sorting algorithm we know is a bubble sort, no coding method will produce code that sorts in O(n log n) time.

Not only do we need to have some sense of what capabilities the software needs to deliver in exchange for its cost, but also: do those capabilities meet the needs? What are the Measures of Effectiveness and Measures of Performance the software must fulfill? In what order must these be fulfilled? What supporting documentation is needed for the resulting product or service in order to maintain it over its life cycle?

If we do not start with a specification, every line of code we write is a patch.†

This notion brings up several other gaps in our quest to build software that fulfills the needs of those paying. There are several conjectures floating around that willfully ignore the basic principles of providing solutions acceptable to the business. Since the business operates on the principles of Microeconomics of decision making, let's look at developing software from the point of view of those paying for our work. It is conjectured that ...

  • Good code is its own documentation.
  • We can develop code just by sitting down and doing it. Our mob of coders can come up with the best solution as they go.
  • We don't need to estimate the final cost and schedule; we'll just use some short-term, highly variable empirical data to show us the average progress and project that.
  • All elements of the software can be sliced to a standard size and we'll use Kanban to forecast future outcomes.
  • We're bad at estimating and our managers misuse those numbers, so the solution is to Not Estimate, and that will fix the root cause of those symptoms of Bad Management.

There are answers to each of these in the literature on the immutable principles of project management, but I came across a dialog that illustrates the naïveté around spending other people's money to develop software without knowing how much, what, and when.

Here's a conversation - following Galileo Galilei's Dialogue Concerning the Two Chief World Systems - between Salviati, who argues for the principles of celestial mechanics, and Simplicio, a dedicated follower who holds that those principles have no value for him, as he sees them as an example of dysfunction.

I'll co-opt the actual social media conversation and use the words of Salviati and Simplicio as the actors. The two people on social media are both fully qualified to be Salviati. Galileo used Simplicio as a double entendre to make his point, so neither is Simplicio here:

  • Simplicio - My first born is a novice software developer but is really bad at math, especially statistics, and at those pesky estimating requests asked by the managers he works for. He's thinking he needs to find a job where they let him develop code, where there is #NoMath needed to make those annoying estimates.
  • Salviati - Maybe you tell him you're not suggesting he not learn math, but simply reduce his dependence on math in his work, since it is hard and he's not very good at it.
  • Simplicio - Yes, lots of developers struggle with answering estimate questions based on statistics and other known and tested approaches. I'm suggesting he find some alternative to having to make estimates, since he's so bad at them.
  • Salviati - I'll agree for the moment, since he doesn't appear to be capable of learning the needed math. Perhaps he should seek other ways to answer the questions asked of him by those paying his salary. Ways in which he can apply #NoMath to answering those questions needed by the business people to make decisions.
  • Simplicio - Maybe he can just pick the most important thing to work on first, do that, then go back and start the next most important thing, and do that until he is done. Then maybe those paying him will stop asking when will you be done and how much will it cost - and oh yes, when that day arrives, all that code you developed will meet the needed capabilities I'm paying you to develop, right?
  • Salviati - Again, this might be a possible solution to your son's dilemma. After all, we're not all good at using statistics and other approaches to estimate those numbers needed to make business decisions. Since we really like to just start coding, maybe the idea of #NoMath is a good one and he can just be an excellent coder. Those paying for his work really only want it to work on the needed day, for the expected cost, and to provide the needed capabilities - all within the confidence levels needed to fulfill their business case so they can stay in business.
  • Simplicio - He heard of this idea on the internet. Collect old data and use it for projecting the new data. That'd of course not be the same as analyzing the future risks, changing sizes of work, and all the other probabilistic outcomes. Yeah, that's work; just add up all the past estimates, find the average, and use that.
  • Salviati - That might be useful for him, but make sure you caution him that those numbers from the past may not represent the numbers in the future if he doesn't assess what capabilities are needed in the future and what the structure of the solution is for those capabilities. And while he's at it, make sure the uncertainties in the future are the same as the uncertainties in the past; otherwise those past numbers are pretty much worthless for making decisions about the future.
  • Simplicio - Sure, but at his work his managers abuse those numbers, take them as point values and ignore the probabilistic ranges he places on them. His supervisor - the one with the pointy hair - simply doesn't recognize that all project work is probabilistic and wants his developers to just do it.
  • Salviati - Maybe your son can ask his supervisor's boss - the one that provides the money for his work - the Five Whys as to why he even needs an estimate. Maybe that person will be happy to have your son spend his money with no need to know how much it will cost in the end, or when he'll be done, or really what will be done when the money and time run out.
  • Simplicio - Yes, that's the solution. All those books, classes, and papers he should have read, all those tools he could have used, really don't matter any more. He can go back and tell the person paying for the work that he can produce the result without using any math whatsoever. Just take whatever he is producing, one slice at a time, and eventually he'll get what he needs to fulfill his business case, hopefully before time and money run out.

† Viewpoint: Who Builds a House without Drawing Blueprints?, Leslie Lamport, CACM, Vol.58 No.4, pp. 38-41.

Categories: Project Management

SPaMCAST 337 - Agile Release Plan, Baselining Software, Executing Communication

 www.spamcast.net

Listen Now

Subscribe on iTunes

In this episode of the Software Process and Measurement Cast we feature three columns! The first is our essay on Agile release plans. Even after 12 years or more with Agile we are still asked what we will deliver, when features will be delivered and how much the project will cost. Agile release plans are a tool to answer those questions. Our second column this week is from the Software Sensei, Kim Pries. Kim asks why baselining is so important, and posits that if we do not baseline, we cannot tell whether a change is negative, positive, or indifferent - we simply do NOT know. Finally Jo Ann Sweeney will complete the communication cycle in her Explaining Change column by discussing delivery with a special focus on social media.

Call to action!

Reviews of the podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

QAI Quest 2015
April 20-21, Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Stephen Parry. Stephen is a returning interviewee. We discussed adaptable organizations. Stephen recently wrote: "Organizations which are able to embrace and implement the principles of Lean Thinking are inevitably known for three things: vision, imagination and - most importantly of all - implicit trust in their own people." We discussed why trust, vision and imagination have to be more than just words in a vision or mission statement to get value out of lean and Agile.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


R: Creating an object with functions to calculate conditional probability

Mark Needham - Sun, 04/12/2015 - 08:55

I’ve been working through Allen Downey’s Think Bayes and I thought it’d be an interesting exercise to translate some of the code from Python to R.

The first example is a simple one about conditional probability. The author creates a class ‘PMF’ (Probability Mass Function) to solve the following problem:

Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 of each.

Now suppose you choose one of the bowls at random and, without looking, select a cookie at random. The cookie is vanilla.

What is the probability that it came from Bowl 1?
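For reference, the answer can be worked by hand with Bayes’ theorem before touching any code (writing V for “the cookie is vanilla”):

P(Bowl 1 | V) = P(Bowl 1) × P(V | Bowl 1) / [P(Bowl 1) × P(V | Bowl 1) + P(Bowl 2) × P(V | Bowl 2)]
              = (0.5 × 0.75) / (0.5 × 0.75 + 0.5 × 0.5)
              = 0.375 / 0.625
              = 0.6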

In Python the code looks like this:

pmf = Pmf()
pmf.Set('Bowl 1', 0.5)
pmf.Set('Bowl 2', 0.5)
 
pmf.Mult('Bowl 1', 0.75)
pmf.Mult('Bowl 2', 0.5)
 
pmf.Normalize()
 
print pmf.Prob('Bowl 1')

The ‘PMF’ class is defined here.

  • ‘Set’ defines the prior probability of picking a cookie from either bowl, i.e. in our case it’s random.
  • ‘Mult’ defines the likelihood of picking a vanilla biscuit from either bowl.
  • ‘Normalize’ applies a normalisation so that our posterior probabilities add up to 1.

We want to create something similar in R. The actual calculation is straightforward:

pBowl1 = 0.5
pBowl2 = 0.5
 
pVanillaGivenBowl1 = 0.75
pVanillaGivenBowl2 = 0.5
 
> (pBowl1 * pVanillaGivenBowl1) / ((pBowl1 * pVanillaGivenBowl1) + (pBowl2 * pVanillaGivenBowl2))
0.6
 
> (pBowl2 * pVanillaGivenBowl2) / ((pBowl1 * pVanillaGivenBowl1) + (pBowl2 * pVanillaGivenBowl2))
0.4

The problem is we have quite a bit of duplication and it doesn’t read as cleanly as the Python version.

I’m not sure of the idiomatic way of handling this type of problem with mutable state in R, but it seems like we can achieve this using functions.

I ended up writing the following function which returns a list of other functions to call.

create.pmf = function() {
  priors <<- c()       # note: <<- at this level creates global variables
  likelihoods <<- c()
  list(
    prior = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      priors <<- c(priors, l)            # append a named prior probability
    },
    likelihood = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      likelihoods <<- c(likelihoods, l)  # append a named likelihood
    },
    posterior = function(option) {
      names = names(priors)
      normalised = 0.0
      for(name in names) {               # normalising constant: sum of prior * likelihood
        normalised = normalised + (priors[name] * likelihoods[name])
      }

      (priors[option] * likelihoods[option]) / normalised  # Bayes' theorem
    }
  )
}

I couldn’t work out how to get ‘priors’ and ‘likelihoods’ to be lexically scoped so I’ve currently got those defined as global variables. I’m using a list as a kind of dictionary following a suggestion on Stack Overflow.

The code doesn’t handle the unhappy path very well but it seems to work for the example from the book:

pmf = create.pmf()
 
pmf$prior("Bowl 1", 0.5)
pmf$prior("Bowl 2", 0.5)
 
pmf$likelihood("Bowl 1", 0.75)
pmf$likelihood("Bowl 2", 0.5)
 
> pmf$posterior("Bowl 1")
Bowl 1 
   0.6 
> pmf$posterior("Bowl 2")
Bowl 2 
   0.4
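Incidentally, the global-variable workaround mentioned above seems avoidable: if ‘priors’ and ‘likelihoods’ are created with plain <- inside create.pmf, the inner <<- assignments find and update those enclosing bindings rather than globals, and each call to create.pmf() then gets its own independent state. A sketch of just that change:

create.pmf = function() {
  priors <- c()        # local to this call's environment...
  likelihoods <- c()   # ...and captured by the closures below
  list(
    prior = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      priors <<- c(priors, l)  # <<- now updates the enclosing binding, not a global
    },
    likelihood = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      likelihoods <<- c(likelihoods, l)
    },
    posterior = function(option) {
      normalised = sum(priors * likelihoods[names(priors)])  # vectorised normalising constant
      (priors[option] * likelihoods[option]) / normalised
    }
  )
}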

How would you solve this type of problem? Is there a cleaner/better way?

Categories: Programming

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 8


I first read The Goal: A Process of Ongoing Improvement when I became actively involved in process improvement. I was a bit late to the party; however, since my first read of this business novel, a copy has always graced my bookshelf. The Goal uses the story of Alex Rogo, plant manager, to illustrate the theory of constraints and how the wrong measurement focus can harm an organization. The focus of the re-read is less on the story and more on the ideas that have shaped lean thinking. Even though it is set in a manufacturing plant, the ideas are useful in understanding how all projects and products can be delivered more effectively. Earlier entries in this re-read are:

Part 1                Part 2                  Part 3                      Part 4                Part 5           Part 6             Part 7

 

Chapter 21: This chapter is bookended by Alex's marital travails. The chapter opens with Alex making a date with Julie (Alex's wife). I remember the first time I read the book being worried about Alex's ability to set a date and time and meet it . . . heck, his plant can't meet dates.

When Alex's team identifies the late orders (which was easy), they find that the majority of the late orders are processed through one or both of the bottleneck steps. Some people conceive of the manufacturing sector as a set of assembly lines in which the process seldom, if ever, varies. The description of manufacturing in The Goal is much more akin to a job shop, in which the process steps vary depending on the job. This is VERY similar to much of the work done in software development, enhancement and maintenance. If you only surface-read The Goal you might not see the applicability to software development or other business processes because of this false impression.

Alex tasks his team to ensure that the work flowing through the bottlenecks is focused on the critical orders. The team sets out to implement the initial wave of changes they discussed with Jonah. As with any change, unless it is specifically addressed, those asked to change do not automatically understand why, which causes resistance. The Goal uses two situations to push the ideas of communication and transparency as a change management tool. The first occurs when Alex finds the NCX-10 is not running. The inventory for the next critical order is not at the machine, therefore the operator is waiting; the material is at an earlier stage. The operator did not recognize the material as being important, and therefore opted to follow standard operating procedure. This scenario generates one of the best lines in the book: "if you have to break the rules to do the right thing, maybe the rules are not the right rules." Similarly, the union head immediately pushed back when asked to change the work rules needed to keep the NCX-10 running during lunch and breaks. Alex recognizes that everyone needs to see the big picture and needs a signal to know which work behavior is needed. Based on these two issues, Alex immediately implements two changes. The first is a signal card (red tag) to indicate priority orders (very similar to Kanban), and second, Alex and his staff begin briefing EVERYONE in the plant (transparency, one of the pillars of Scrum) on the impact of the changes they were being asked to make. The transparency of the management team about the reasons the plant was being asked to change helped sway the union. After Alex personally addressed everyone in the plant in small meetings, the union agreed to the work rule changes and the other changes began to take hold.

Alex's date with Julie? In the end Alex showed up on time and they went on the date.

Chapter 22: Alex reviews the progress with his team. The situation has improved by ensuring that the priority orders move through the bottlenecks first and by moving quality control to before both of the bottlenecks. Before the changes the latest order was 58 days behind; now the latest order is only 44 days behind. The problem, as Alex sees it, is that this is not enough. Goldratt, speaking through Alex, says, "a few weeks ago we were limping along; now we're walking, but we ought to be jogging." The changes generated some progress, but not enough. Alex pushes Donovan to address the other ideas that they had discussed with Jonah. These included outsourcing some of the processing through the bottlenecks to generate capacity.

As they use the signaling system (red card = a critical order), they begin to discover problems. For example, parts that are queued for processing through the bottleneck steps are not easily distinguishable before and after the process, risking mistakes. A yellow flag is added to the red card to signal that processing is complete (the changes to the process are a reflection of an attitude of continuous process improvement). Alex points out that he does not want stop-gap measures and is assured that his team will get to the bottom of the problem, fix the process and re-train the staff (dealing with the deeper issue is a reflection of a culture where root-cause analysis is performed so that problems aren't just glossed over). The tweaks to the process improve throughput, but don't generate the quantum change the plant needs.

The chapter ends with Donovan, who has been missing in action, showing up with old machines of the type that the NCX-10 replaced. Even though the machines he had scavenged from another plant were, as they put it, "state of the art circa 1942," they provided extra capacity for the bottlenecked NCX-10. The added capacity, when implemented, will increase the capacity of the bottlenecked step. Remember that the overall process capacity is directly governed by the capacity of the bottleneck.

I was talking recently with a Scrum team that included a product owner, a coach, four coders and one tester. The tester was the only person allowed to test. It had been six months since the team had consistently been able to complete a story during the sprint in which it was accepted. They wondered whether increasing the sprint duration from one week to three would solve their consistency problem. We built a Kanban board to visualize the flow of work through the team. Once the board was built it was immediately apparent that the bottleneck was the single tester. The question I left them to wrestle with was whether the answer would be to reduce the number of coders (implement work-in-progress limits) or to increase the number of testers (add capacity). This is a real-life example of how the ideas and concepts expressed in The Goal are just as relevant in the world of software development as they are in manufacturing.

Summary of The Goal so far:

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions; however, performance is falling even further behind and fear has become a central feature of the corporate culture.

Chapters 4 through 6 shift the focus from steps in the process to the process as a whole, moving us down the path of identifying the ultimate goal of the organization (in this book, making money) and embracing the big picture of systems thinking. In this section, the authors point out that we are often caught up with pursuing interim goals, such as quality, efficiency or even employment, to the exclusion of the ultimate goal. We are reminded by the burning platform identified in the first few pages of the book - the impending closure of the plant and perhaps the division - that in the long run an organization must make progress towards its ultimate goal, or it won't exist.

Chapters 7 through 9 show Alex's commitment to change: he seeks more precise advice from Jonah, brings his closest reports into the discussion and begins a dialog with his wife (remember, this is a novel). In this section of the book the concept "that you get what you measure" is addressed. We see measures of efficiency being used at the level of part production, but not at the level of whole orders or even sales. We discover the corollary to the adage "you get what you measure": if you measure the wrong thing, you get the wrong thing. We begin to see Alex's urgency and commitment to make a change.

Chapters 10 through 12 mark a turning point in the book. Alex has embraced a more systems view of the plant and recognizes that the measures used to date have been focused on optimizing parts of the process to the detriment of the overall goal of the plant. What has not fallen into place is how to take that new knowledge and change how the plant works. The introduction of the concepts of dependent events and statistical variation begins to shift the conceptual understanding of what to measure towards how the management team can actually use that information.

Chapters 13 through 16 drive home the point that dependent events and statistical variation impact the performance of the overall system. In order for the overall process to be more effective you have to understand the capability and capacity of each step and then take a systems view. These chapters establish the concepts of bottlenecks and constraints without directly naming them, and show that focusing on local optimums causes more trouble than benefit.

Chapters 17 and 18 introduce the concept of bottlenecked resources. The combined effect of dependent events and statistical variability flowing through bottlenecked resources makes delivery unpredictable and substantially more costly. The variability in flow through the process exposes bottlenecks that limit our ability to catch up, making projects and products late or, worse, generating technical debt when corners are cut in order to make the date or budget.

Chapters 19 and 20 begin with Jonah coaching Alex's team to help them identify a palette of possible solutions. They discover that every time the capacity of a bottleneck is increased, more product can be shipped. Changing the capacity of a bottleneck includes reducing downtime and the amount of waste the process generates. The impact of a bottleneck is not the cost of an individual part, but the cost of the whole product that cannot be shipped. Instead of waiting until they can deliver all of the changes, Alex and his team implement changes incrementally.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

Competent People and Conference Keynotes

James Bach’s Blog - Sat, 04/11/2015 - 14:16

My colleague and friend Anne-Marie Charrett has a thing about women. A) She is one. B) She feels that not enough of them are speaking at testing conferences. (See also Fiona Charles’ post on this subject.) I support Anne-Marie’s cause partly because I support the woman herself and it would make her happy. This is how humanity works: we are tribal creatures. Don’t deny it, you will just sound silly.

There is another reason I support their cause, though. It’s related to the fact that we people are not only tribal creatures. We are also creatures of myth, story, and principle. Each of us lives inside a story, and we want that story to “win,” whatever that may mean to us. Apart from tribal struggles, there is a larger meta-tribal struggle over what constitutes the “correct” or “good” or “moral” story.

In other words, it isn’t only whom we like that motivates us, but also what seems right. I’m not religious, so I won’t bother to talk about that aspect of things. But in the West, the professional status of women is a big part of the story of good and proper society; about what seems right.

The story I’m living, these days, is about competence. And I think most people speaking at testing conferences are not competent enough. A lot of what’s talked about at testing conferences is the muttering of idiots. By idiot, I mean functionally stupid people: people who choose not to use their minds to find excellent solutions to important problems, but instead speak ritualistically and uncritically about monsters and angels and mathematically invalid metrics and fraudulent standards and other useless or sinister tools that are designed to amaze and confuse the ignorant.

I want at least 50% of the people speaking at conferences to be competent. That's my goal. I think it is achievable, but it will take a lot of work. We are up against an entrenched and powerful interest: the promoters-of-ineptness (few of whom realize the damage they do) who run the world and impose themselves on our craft.

Why are there so many idiots and why do they run the world? The roots and dynamics of idiocracy are deep. It’s a good question but I don’t want to go into it here and now.

What I want to say is that Anne-Marie and Fiona, along with some others, can help me, and I can help them. I want to encourage new voices to take a place in the Great Conversation of testing because I do believe there is an under-tapped pool of talent among the women of testing. I am absolutely opposed to quotas or anything that simply forces smaller people with higher voices onto the stage for the sake of it. Instead let’s find and develop talent that leads us into a better future. This is what SpeakEasy is all about.

Maybe, if we can get more women speaking and writing in the craft, we will be able to imagine a world where more than 50% of keynote speakers are not spouting empty quotes from great thinkers and generic hearsay about projects and incoherent terminology and false dichotomies and ungrounded opinions and unworkable heuristics presented in the form of “best practices.”

I am not a feminist. I’m not going to be one. This is why I have work to do. I am not naturally biased in favor of considering women, and even if I were, can I be so sure that I’m not biased in favor of the attractive ones? Or against them? Research suggests no one can be complacent about overcoming our biology and gender identity. So, it’s a struggle. Any honest man will tell you that. And, I must engage that struggle while maintaining my implacable hostility to charlatans and quacks. The story I am living tells me this is what I must do. Also, Anne-Marie has asked for my help.

Here’s my call to action: To bring new beautiful minds forth to stand against mediocrity, we need to make the world a better, friendlier place especially for the women among us. I’m asking all you other non-feminists out there to consider working with me on this.

Categories: Testing & QA

Word Puzzles with Neo4J

Mistaeks I Hav Made - Nat Pryce - Sat, 04/11/2015 - 12:04
Stuck on a long train journey with only a single newspaper, my companion and I ran out of reading material and turned to the puzzle pages. One puzzle caught our attention: the "Word Morph". Start with a four-letter word and turn it into another four-letter word in a fixed number of steps, where at each step you can change only one letter in the word to create a new, valid word. For example, a puzzle may be to turn "HALT" into "SILO" in four steps. The solution is "HALT", "SALT", "SILT" and finally "SILO".

What interested me about this puzzle was that the domain can be represented as a graph, with a node for every four-letter word and edges between every pair of words that differ by a single letter. Puzzles can then be solved by a path-finding algorithm.

(Image: Word Morph puzzles represented as a graph)

So, I popped open my laptop and fired up Neo4J to experiment. I first needed a list of all four-letter words. Linux stores dictionary files in /usr/share/dict/ and I could easily extract all the four-letter words with grep:

grep -E '^[[:alpha:]]{4}$' /usr/share/dict/british-english

The data was a bit dirty: it contained a mix of upper and lower case and some letters had accents. I cleaned it up with the unaccent command, normalised it to lowercase with tr, and ensured there were no duplicates with sort -u:

grep -E '^[[:alpha:]]{4}$' /usr/share/dict/british-english \
  | unaccent utf-8 \
  | tr [[:upper:]] [[:lower:]] \
  | sort -u \
  > puzzle-words

I then imported the data into Neo4J as CSV data (the list of words is a valid single-column CSV file). The Cypher command to load it into the database is:

load csv from "file:////puzzle-words" as cols
create (:Word{word: cols[0]})

That gave me about 3000 nodes in my database, one for each word, but they were not yet linked by any relationships. I needed to link the words that are different by a single letter. The database is tiny, so brute-forcing the calculation for all pairs of words was practical in the time available, despite being O(n²) in the number of words.

The trickiest bit was calculating the number of letter differences between two words. Unsurprisingly, Cypher doesn't have a built-in function for this. Unlike SQL, Cypher does have some nice functional programming constructs that can be used to perform calculations: list comprehensions, EXTRACT (Cypher's term for map), REDUCE (fold), COLLECT (used to turn rows of a result set into a collection) and UNWIND (used to turn a collection into rows in the result set). I used a list comprehension to create a list of indices where the letters of two words differ; the number of differences is then the length of this list. Given all pairs of words and the number of letter differences between the words in each pair, I created edges between the words (both ways) if there was a single letter difference:

match (w1:Word), (w2:Word)
where w2.word > w1.word
with w1, w2,
     length([i in range(0,3) where substring(w1.word,i,1) <> substring(w2.word,i,1)]) as diffCount
where diffCount = 1
create (w1)-[:STEP]->(w2)
create (w2)-[:STEP]->(w1)

That completed the data model. Now solving a puzzle was a simple Cypher query using the shortestPath function:

match (start {word:'halt'}), (end {word:'silo'}),
      p = shortestPath((start)-[*..3]->(end))
unwind nodes(p) as w
return w.word as step

Giving:

step
----
halt
salt
silt
silo

Returned 4 rows in 121 ms.

Success! But my travelling companion was not happy. She didn't want a computer program to solve the puzzle. She wanted to solve it herself. I'd ruined that. It looked like the rest of the train journey was going to pass amid a somewhat frosty atmosphere! But the same graph model that solves puzzles can be used to generate new puzzles:

match (w)
with collect(w) as words
with words[toInt(rand()*length(words))] as start
match p = start -[*3]-> (end)
with collect(p) as puzzles
with extract(n in nodes(puzzles[toInt(rand()*length(puzzles))]) | n.word) as puzzle
unwind puzzle as step
return step

Again, this is a brute-force approach to selecting random words and puzzles that only works because of the tiny dataset [1]. And the query picks a random starting word without looking at its relationships with other words, so it sometimes picks a word that cannot be morphed into another in three steps and returns an empty result set. So, not perfect, but good enough to pass the time until the end of the journey.

For a more serious application of Neo4J, James Richardson and I gave a talk at ACCU2014 on how we used Neo4J to analyse the heap memory use of embedded Java code in Sky's PVR system.

Update: I found another implementation of this puzzle, which uses Java to build the graph and the Neo4J Java traversal API to solve puzzles. I'm struck by how far Neo4J and Cypher have come since 2014, when that article was written. Cypher's CSV import and functional programming constructs make it possible to solve the puzzle without any Java programming at all. Neo4J has become one of my go-to tools for quick-but-not-dirty data analysis.

[1] It would be great if a future version of Neo4J added an operator for returning random rows of the result set.
Categories: Programming, Testing & QA

How to Improve OKRs (Flow, not Sync)

NOOP.NL - Jurgen Appelo - Sat, 04/11/2015 - 09:37

After failing dramatically with my professional OKRs in the first quarter of this year (hint: I was no exception in my team), I want to take a moment to evaluate how the practice works for me, and how it doesn’t. Let’s do a Perfection Game with OKRs!

The post How to Improve OKRs (Flow, not Sync) appeared first on NOOP.NL.

Categories: Project Management

Stuff The Internet Says On Scalability For April 10th, 2015

Hey, it's HighScalability time:


Beautiful, isn't it? It's the cerebral cortex of a rat that is organized like a mini-Internet.
  • $47 million: value of Cannabis per square km; $3.7 trillion: worldwide IT spending in 2014;  $41B: spend on spectrum; 48,000 square km: How Much Land Would it Take to Power the US via Solar; 2,000: Hadoop clusters in the world; 650 pounds: projected size of ET
  • Quotable Quotes:
    • John Hugg: The number one rule of 21st century data management: If a problem can be solved with an instance of MySQL, it’s going to be.
    • @sarahnovotny: "there is no compression algorithm for experience" - great quote from Andy Jassy at #AWSSummit
    • Steve Martin: I did stand-up comedy for eighteen years. Ten of those years were spent learning, four years were spent refining, and four were spent in wild success.
    • Yossi Vardi: Revenues kill the dream.
    • @AWSSummits: AdRoll's retargeting and real-time betting operates at 6 billion impressions/day at 100ms latency on #AWS #AWSSummit 
    • @AWS_Partners: Nike is operating 70+ services as production loads in #aws today #AWSSummit 
    • @bernardgolden: S3 usage up 102% YOY, ec2 93%: #AWSSummit
    • @bernardgolden: AWS growing over 40% yoy. Next earnings announcement s/b v interesting. #awssummit 
    • @AlexBalk: Here is my Apple Watch review: Your life is largely meaningless. No gadget can obscure its emptiness. You are dying every day.
    • Jonas: Google: all apps become search. Facebook: all apps become feeds. 
    • @jon_moore: most scalable/fast/reliable systems follow these principles: elastic; responsive; resilient; message-driven. #phillyete
    • mrmondo: NVMe [Non-Volatile Memory Express] is one of the most important changes to storage over the past decade.
    • Peter Thiel: Often the smarter people are more prone to trendy, fashionable thinking because they can pick up on things, they can pick up on cues more easily, and so they’re even more trapped by it than people of average ability
    • @nickstenning: The women and men who wrote the nearly bug-free code that controlled a $4Bn space shuttle and the lives of astronauts worked 8am to 5pm.

  • Have you been let down by miracle materials like carbon nanotubes, buckyballs, and graphene? MOFs (metal–organic frameworks) are here and they are real. This Nature podcast and article tell you all about them (about 13 minutes in). MOFs are scaffolds made of metal-containing nodes linked by carbon-based struts. They are pieces that you can plug together and build up into big networks which have spaces in between. It's those spaces that make MOFs useful. You can trap things in those holes and do things to the molecules while they are trapped. You can store gases like methane and hydrogen. You can separate mixtures of things by varying the pore sizes. Carbon capture is one big use. They can also be used as chemical sensors, maybe in some future version of your watch, and perhaps even as write-once-read-many memory.

  • Is Amazon recreating the Sun ecosystem in the cloud? We now have the Amazon Elastic File System, so everything is remote-mounted. WorkSpaces feels like diskless workstations. Storage is over on some NAS. The database is somewhere on the network. And so on. Let's hope NFS lock contention failures and network UI jitter don't also make a comeback. OK, I don't remember having anything like Amazon Machine Learning back then.

  • Etsy is giving Facebook's HipHop Virtual Machine (HHVM) for PHP a try. Why? Their API and web code was diverging under parallel development pressures. And they were developing many small API endpoints that used many small requests instead of larger requests that do more work per request. And instead of sharing state in an inherently shared-nothing architecture, they went with the strategy of just making things faster. This is where HHVM comes in.

  • OK, that's impressive. Migrating from Heroku to AWS (using Docker). It took two engineers about one month. Performance increased 2x: average API response time dropped from around 220ms to under 100ms, background task execution times were cut in half, and half the number of servers were needed.

  • I was excited to see AWS is opening up Lambda. It's close to some ideas I've been talking about for a while (Building Super Scalable Systems, What Google App Engine Price Changes Say About The Future Of Web Architecture). When it first came out I rehabbed my atrophied node.js skills and gave it a shot. I played around a bit and got some code working, but the problem was that Lambda only exposed a few integration points, and none of those were anything I cared about. Now they've made Lambda much more general and in the process much more useful. Worth another look. I also suspect their NFS product was necessary to generalize Lambda: code could be instantly available on every machine via a mount point. Just like back in the day.

  • How Early Adopters Are Using Unikernels - With and Without Containers: Anil Madhavapeddy, the creator of MirageOS, and his group are working on a new tool stack called Jitsu (Just-in-Time Summoning of Unikernels), which can start a unikernel in ~20ms in response to a network request. See also Towards Heroku for Unikernels: Part 2 - Self Scaling Systems.

Don't miss all that the Internet has to say on Scalability: click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...

Categories: Architecture

Top Project Management Blogs of 2015

Herding Cats - Glen Alleman - Fri, 04/10/2015 - 15:20

The list, in alphabetical order, includes this blog. Thanks.

Categories: Project Management

Bringing apps to the workplace with Google Play for Work

Android Developers Blog - Fri, 04/10/2015 - 11:19
Posted by Matt Goodridge, Google Play team
Work doesn’t just happen in an office from 9 to 5 anymore. Today’s workers are mobile workers, and they need to be able to get things done as efficiently and collaboratively as possible, at any time. That’s why the Android for Work initiative is bringing together partners across the ecosystem, from device and app makers to networking and management solutions, to provide businesses with a secure, flexible and reliable mobility platform that users already know and love.
Google Play for Work allows businesses to securely deploy and manage enterprise-grade apps, across all of their users running Android for Work. Google Play for Work simplifies the process of distributing apps to employees and ensures that IT approves every deployed app. For developers, this is an opportunity to reach a new audience at scale through bulk installs or purchasing, which enables easy installation of your app across enterprises.
How to join Google Play for Work

Free apps will be available on Google Play for Work at launch with no action needed on your part. If you have a paid app, you’ll soon be able to opt in to make your app available for bulk purchase on Google Play for Work in the Developer Console during the app publishing process. Find out more about publishing in the Google Play Developer Help Center.
Designing great apps for Android for Work

Apps that are installed from Google Play for Work will function without code changes. However, please note that some of the controls that Android for Work offers IT admins could affect how your app works. To ensure the best possible experience for your users, watch the first in our series of Android for Work DevBytes below to understand the best practices you should be following in developing your app.

More DevBytes will be posted to our YouTube channel soon. Find out more about Android for Work.
Join the discussion on +Android Developers
Categories: Programming

Beta Channel for the Android WebView

Android Developers Blog - Fri, 04/10/2015 - 11:19
Posted by Richard Coles, Software Engineer, Google London
Many Android apps use a WebView for displaying HTML content. In Android 5.0 Lollipop, Google has the ability to update WebView independently of the Android platform. Beginning today, developers can use a new beta channel to test the latest version of WebView and provide feedback.
WebView updates bring numerous bug fixes, new web platform APIs and updates from Chromium. If you’re making use of the WebView in your app, becoming a beta channel tester will give you an early start with new APIs as well as the chance to test your app before the WebView rolls out to your users.
The first version offered in the beta channel will be based on Chrome 40, and you can find a full list of changes in the Chromium blog entry.
To become a beta tester, join the community which will enable you to sign up for the Beta program; you’ll then be able to install the beta version of the WebView via the Play Store. If you find any bugs, please file them on the Chromium issue tracker.
Join the discussion on +Android Developers
Categories: Programming