
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Manage Your Job Search is Out; You Get the Presents

I am happy to announce that Manage Your Job Search is available on all platforms: electronic and print. And, you get the presents!

For one week, I am running a series of special conference calls to promote the book. Take a look at my celebration page.

I also have special pricing on Hiring Geeks That Fit as a bundle at the Pragmatic Bookshelf, leanpub, and on Amazon. Why? Because you might want to know how great managers hire.

Please do join me on the conference calls. They’ll be short, sweet, and a ton of fun.

Categories: Project Management

Using Key Performance Indicators (KPI) and Features in Project Success Measurements

Software Requirements Blog - Seilevel.com - Wed, 04/09/2014 - 12:27
As I pointed out in a prior post on project success measurements, overall project success and the success of the related IT development effort can be mutually exclusive of each other.  A business can achieve the objectives for a certain initiative regardless of whether the related IT effort succeeds or not.  Similarly, an IT initiative […]

Using Key Performance Indicators (KPI) and Features in Project Success Measurements is a post from: http://requirements.seilevel.com/blog

Categories: Requirements

Engagement and Early Feedback and the Success of Agile Implementation

Engagement and feedback are interrelated like the bricks in the aqueduct.

In Senior Management and the Success of Agile Implementation, I described the results of a survey asking experienced process improvement personnel, testers and developers what they felt contributes to a successful Agile implementation. Tied for second place in the survey were team engagement and generating early feedback. These two concepts are curiously interrelated.

Team engagement is a reflection of motivated and capable individuals working together.  Agile provides teams with the tools to instill unity of purpose. Working with the business on a continuous basis provides the team a clear understanding of the project’s purpose. Short iterations provide the team with a sense of progress. Self-management and retrospectives provide teams with a degree of control over how they tackle impediments.  Finally, the end-of-sprint demonstrations provide early feedback. Feedback helps reinforce the team’s sense of purpose, which reinforces motivation.

Early feedback was noted in the survey as often as team engagement. In classic software development projects, work would progress from requirements through analysis, design, coding and testing before customers would see functional code. Progress in these methods is conveyed through process documents (e.g. requirements documents) and status reports. On the other hand, one of the most important principles of Agile states:

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Delivering functional software provides all of the project’s stakeholders with explicit proof of progress and a chance to give feedback based on code they can execute. Early feedback increases stakeholder engagement and satisfaction, which also helps to motivate the team. As importantly, since stakeholders see incremental progress, any required course corrections are also incremental. Incremental course corrections help to ensure that, when the project is complete, the most value possible has been delivered.

Team engagement and early feedback are both important to successful Agile implementations. Interestingly, both concepts are intertwined. Feedback helps to generate engagement and motivation. As one of the respondents to the survey stated, “Agile succeeds when it instills ‘unity of purpose’ and builds a ‘community of trust’ within an organization.” Team engagement and early feedback provide a platform for Agile success.


Categories: Process Management

Data Types

Eric.Weblog() - Eric Sink - Tue, 04/08/2014 - 19:00

(This entry is part of a series. The audience: SQL Server developers. The topic: SQLite on mobile devices.)

Different types of, er, types

At the SQL language level, the biggest difference with SQLite is the way it deals with data types. There are three main differences to be aware of:

  1. There are only a few types

  2. And types are dynamic

  3. (But not entirely, because they have affinity)

  4. And type declarations are weird

Okay, so actually that's FOUR things, not three. But the third one doesn't really count, so I'm not feeling terribly obligated to cursor all the way back up to the top just to fix the word "three". Let's keep moving.

Only a few types

SQLite values can be one of the following types:

  • INTEGER

  • REAL

  • TEXT

  • BLOB

The following table shows roughly how these compare to SQL Server types:

  • tinyint, smallint, int, bigint, bit → INTEGER. In SQLite, all integers are up to 64 bits wide (like bigint), but smaller values are stored more efficiently.
  • real, float → REAL. In SQLite, all floating point numbers are 64 bits wide.
  • char, varchar, nchar, nvarchar, text, ntext → TEXT. In SQLite, all strings are Unicode, and it doesn't care about widths on TEXT columns.
  • binary, varbinary, image → BLOB. Width doesn't matter here either.
  • decimal, numeric, money, smallmoney → INTEGER (?). These are problematic, as SQLite 3 does not have a fixed-point type. (In Zumero, we handle synchronization of these by mapping them to INTEGER and handling the scaling.)
  • date, datetime, datetime2, datetimeoffset, smalldatetime, time → (your choice). SQLite has no data types for dates or times. However, it does have a rich set of built-in functions for manipulating date/time values represented as text (ISO-8601 format), integer (Unix time) or real (Julian day).
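
The date/time row deserves a quick illustration. Here is a minimal sketch of the store-dates-as-TEXT option using SQLite's built-in date functions (the table and column names are invented for the example):

CREATE TABLE [orders] (id INTEGER, created_at TEXT);
INSERT INTO [orders] (id, created_at) VALUES (1, datetime('now'));
SELECT id, created_at, date(created_at, '+7 days') FROM [orders];
-- e.g. 1|2014-04-08 19:00:00|2014-04-15

datetime('now') produces an ISO-8601 style string, and functions such as date(), time() and strftime() can then filter, format and do arithmetic on it directly.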

Types are dynamic

In SQL Server, the columns in a table are strictly typed. If you define a column to be of type smallint, then every value in that column must be a 16 bit signed integer.

In contrast, SQLite's approach might be called "dynamic typing". Quoting from its own documentation: "In SQLite, the datatype of a value is associated with the value itself, not with its container."

For example, the following code will fail on SQL Server:

CREATE TABLE [foo] (a smallint);
INSERT INTO [foo] (a) VALUES (3);
INSERT INTO [foo] (a) VALUES (3.14);
INSERT INTO [foo] (a) VALUES ('pi');

But on SQLite, it will succeed. The value in the first row is an INTEGER. The value in the second row is a REAL. The value in the third row is a TEXT string.

sqlite> SELECT a, typeof(a) FROM foo;
3|integer
3.14|real
pi|text

The column [a] is a container that simply doesn't care what you place in it.

Type affinity

Well, actually, it does care. A little.

A SQLite column does not have a type requirement, but it does have a type preference, called an affinity. I'm not going to reiterate the type affinity rules from the SQLite website here. Suffice it to say that sometimes SQLite will change the type of a value to match the affinity of the column, but you probably don't need to know this, because:

  • If you declare a column of type TEXT and always insert TEXT into it, nothing weird will happen.

  • If you declare a column of type INTEGER and always insert INTEGER into it, nothing weird will happen.

  • If you declare a column of type REAL and always insert REAL into it, nothing weird will happen.

In other words, just store values of the type that matches the column. This is the way you usually do things anyway.
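
If you're curious what that preference looks like in practice, here is a small sketch (table and column names invented) following the affinity rules from the SQLite documentation; typeof() shows what actually got stored:

CREATE TABLE [t] (a INTEGER, b TEXT);
INSERT INTO [t] (a, b) VALUES ('3', 3);
INSERT INTO [t] (a, b) VALUES ('pi', 3.14);
SELECT a, typeof(a), b, typeof(b) FROM [t];
-- 3|integer|3|text
-- pi|text|3.14|text

The INTEGER column converted the text '3' into an integer but left 'pi' alone, and the TEXT column turned both numbers into strings, which is exactly the preference-rather-than-requirement behavior described above.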

Type declarations are weird

In a column declaration, SQLite has a rather funky set of rules for how it parses the type. It uses these rules to try its very best to Do The Right Thing when somebody ports SQL code from another database.

For example, all of the columns in the following table end up with TEXT affinity, which is probably what was intended:

CREATE TABLE [foo] 
(
[a] varchar(50),
[b] char(5),
[c] nchar,
[d] nvarchar(5),
[e] nvarchar(max),
[f] text
);

But in some cases, the rules are funky. Here are more declarations which all end up with TEXT affinity, even though none of them look right:

CREATE TABLE [foo] 
(
[a] characters,
[b] textish,
[c] charbroiled,
[d] context
);

And if you want to be absurd, SQLite will let you. Here's an example of a declaration of a column with INTEGER affinity:

CREATE TABLE [foo] 
(
[d] My wife and I went to Copenhagen a couple weeks ago
    to celebrate our wedding anniversary 
    and I also attended SQL Saturday while I was there
    and by the way we saw
    Captain America The Winter Soldier 
    there as well which means I got to see it 
    before all my friends back here in Illinois 
    and the main reason this blog entry is late is 
    because I spent most of the following week gloating
);

SQLite will accept nearly anything as a type name. Column [d] ends up being an INTEGER because its ridiculously long type name contains the characters "INT" (in "Winter Soldier").

Perhaps we can agree that this "feature" could be easily abused.

There are only four types anyway. Pick a name for each type and stick to it. Once again, the official names are:

  • INTEGER

  • REAL

  • TEXT

  • BLOB

(If you want a little more latitude, you can use INT for INTEGER. Or VARCHAR for TEXT. But don't stray very far, mkay?)

Pretend like these are the only four things that SQLite will allow, and then it will never surprise you.

Summary

SQLite handles types very differently from SQL Server, but its approach is mostly a superset of your existing habits. The differences explained above might look like a big deal, but in practice, they probably won't affect you all that much.

 

Microservices - Not a free lunch!

This is a guest post by Benjamin Wootton, CTO of Contino, a London based consultancy specialising in applying DevOps and Continuous Delivery to software delivery projects. 

Microservices are a style of software architecture that involves delivering systems as a set of very small, granular, independent collaborating services.

Though they aren't a particularly new idea, Microservices seem to have exploded in popularity this year, with articles, conference tracks, and Twitter streams waxing lyrical about the benefits of building software systems in this style.

This popularity is partly off the back of trends such as Cloud, DevOps and Continuous Delivery coming together as enablers for this kind of approach, and partly off the back of great work at companies such as Netflix who have very visibly applied the pattern to great effect.

Let me say up front that I am a fan of the approach. Microservices architectures have lots of very real and significant benefits:

Categories: Architecture

Don't Start With Requirements Start With Capabilities

Herding Cats - Glen Alleman - Tue, 04/08/2014 - 15:08

Lots of myths are floating around about requirements elicitation and management. Starting with requirements is not how large, complex, mission-critical DOD and NASA programs work. Start with Capabilities. This can be directly transferred to Enterprise IT.

Here's a map of the planned capabilities for an ERP system. This figure is from Performance-Based Project Management® where all the details of this and other principles, practices, and processes needed for project success can be found.

Here each business systems capability is outlined in the order needed to maximize the business value. In agile parlance, the customer has prioritized the deliverables. But in fact the prioritization is part of the strategic planning needed to assure that the investment returns the maximum value and that the business maintains a positive ROI throughout the life of the project.

Capabilities Map

The first step is to identify the needed capabilities. Here are the steps:

Capabilities Based Planning

Only when we have the capabilities defined for each stage of the project can we start on the requirements. All the capabilities need to be identified and sequenced, otherwise we can't be assured the business goals can be met for the planned investment. With the planned capabilities in place, we can start on the requirements.

Requirements Steps

With requirements in place for each capability, we can then start development. This is done incrementally and iteratively using our favorite agile method. The specific method doesn't matter as long as the approach delivers value incrementally.

Categories: Project Management

Google Play App Translation Service

Android Developers Blog - Tue, 04/08/2014 - 07:04

Posted by Ellie Powers, Google Play team

Today we are happy to announce that the App Translation Service, previewed in May at Google I/O, is now available to all developers. Every day, more than 1.5 million new Android phones and tablets around the world are turned on for the first time. Each newly activated Android device is an opportunity for you as a developer to gain a new user, but frequently, that user speaks a different language from you.

To help developers reach users in other languages, we launched the App Translation Service, which allows developers to purchase professional app translations through the Google Play Developer Console. This is part of a toolbox of localization features you can (and should!) take advantage of as you distribute your app around the world through Google Play.

We were happy to see that many developers expressed interest in the App Translation Service pilot program, and it has been well received by those who have participated so far, with many repeat customers.

Here are several examples from developers who participated in the App Translation Service pilot program: The developers of Zombie Ragdoll used this tool to launch their new game simultaneously in 20 languages in August 2013. When they combined app translation with local marketing campaigns, they found that 80% of their installs came from non-English-language users. Dating app SayHi Chat expanded into 13 additional languages using the App Translation Service. They saw 120% install growth in localized markets and improved user reviews of the professionally translated UI. The developer of card game G4A Indian Rummy found that the App Translation Service was easier to use than their previous translation methods, and saw a 300% increase in user engagement with localized apps. You can read more about these developers’ experiences with the App Translation Service in Developer Stories: Localization in Google Play.

To use the App Translation Service, you’ll want to first read the localization checklist. You’ll need to get your APK ready for translation, and select the languages to target for translation. If you’re unsure about which languages to select, Google Play can help you identify opportunities. First, review the Statistics section in the Developer Console to see where your app has users already. Does your app have a lot of installs in a certain country where you haven’t localized to their language? Are apps like yours popular in a country where your app isn’t available yet? Next, go to the Optimization Tips section in the Developer Console to make sure your APK, store listing, and graphics are consistently translated.

You’ll find the App Translation Service in the Developer Console at the bottom of the APK section — you can start a new translation or manage an existing translation here. You’ll be able to upload your app’s file of string resources, select the languages you want to translate into, select a professional translation vendor, and place your order. Pro tip: you can put your store listing text into the file you upload to the App Translation Service. You’ll be able to communicate with your translator to be sure you get a great result, and download your translated string files. After you do some localization testing, you’ll be ready to publish your newly translated app update on Google Play — with localized store listing text and graphics. Be sure to check back to see the results on your user base, and track the results of marketing campaigns in your new languages using Google Analytics integration.

Good luck! Bonne chance ! ご幸運を祈ります! 행운을 빌어요 ¡Buena suerte! Удачи! Boa Sorte!

Join the discussion on

+Android Developers
Categories: Programming

Improved App Insight by Linking Google Analytics with Google Play

Android Developers Blog - Tue, 04/08/2014 - 02:02

Posted by Ellie Powers, Google Play team

A key part of growing your app’s installed base is knowing more about your users — how they discover your app, what devices they use, what they do when they use your app, and how often they return to it. Understanding your users is now made easier through a new integration between Google Analytics and the Google Play Developer Console.

Starting today, you can link your Google Analytics account with your Google Play Developer Console to get powerful new insights into your app’s user acquisition and engagement. In Google Analytics, you’ll get a new report highlighting which campaigns are driving the most views, installs, and new users in Google Play. In the Developer Console, you’ll get new app stats that let you easily see your app’s engagement based on Analytics data.

This combined data can help you take your app business to the next level, especially if you’re using multiple campaigns or monetizing through advertisements and in-app products that depend on high engagement. Linking Google Analytics to your Developer Console is straightforward — the sections below explain the new types of data you can get and how to get started.

In Google Analytics, see your app’s Google Play referral flow

Once you’ve linked your Analytics account to your Developer Console, you’ll see a new report in Google Analytics called Google Play Referral Flow. This report details each of your campaigns and the user traffic that they drive. For each campaign, you can see how many users viewed your listing page in Google Play and how many went on to install your app and ultimately launch it on their mobile devices.

With this data you can track the effectiveness of a wide range of campaigns — such as blogs, news articles, and ad campaigns — and get insight into which marketing activities are most effective for your business. You can find the Google Play report by going to Google Analytics and clicking on Acquisitions > Google Play > Referral Flow.

In the Developer Console, see engagement data from Google Analytics

If you’re already using Google Analytics, you know how important it is to see how users are interacting with your app. How often do they launch it? How much do they do with it? What are they doing inside the app?

Once you link your Analytics account, you’ll be able to see your app’s engagement data from Google Analytics right in the Statistics page in your Developer Console. You’ll be able to select two new metrics from the drop-down menu at the top of the page:

  • Active users: the number of users who have launched your app that day
  • New users: the number of users who have launched your app for the first time that day

These engagement metrics are integrated with your other app statistics, so you can analyze them further across other dimensions, such as by country, language, device, Android version, app version, and carrier.

How to get started

To get started, you first need to integrate Google Analytics into your app. If you haven’t done this already, download the Google Analytics SDK for Android and then take a look at the developer documentation to learn how to add Analytics to your app. Once you’ve integrated Analytics into your app, upload the app to the Developer Console.

Next, you’ll need to link your Developer Console to Google Analytics. To do this, go to the Developer Console and select the app. At the bottom of the Statistics page, you’ll see directions about how to complete the linking. The process takes just a few moments.

That’s it! You can now see both the Google Play Referral Flow report in Google Analytics and the new engagement metrics in the Developer Console.

Join the discussion on

+Android Developers
Categories: Programming

Senior Management and the Success of Agile Implementation

Senior leadership needs to lead by example.

Over the past few weeks I have been asking friends and colleagues to formally answer the following question:

What are the top reasons you think an organization succeeds in implementing Agile?

The group that participated in this survey is a highly experienced cohort of process improvement personnel, testers and developers. Not all of the respondents were sure Agile and success belonged in the same sentence (more on that later in the week). There was a rich range of answers; however, after the first dozen responses a consensus formed. Today I would like to explore the most important success factor reported in this survey: senior leadership support.

Senior management support was the most frequently mentioned factor influencing Agile success. The most significant nuance of that support was exhibiting a true understanding of Agile. In particular, senior managers must understand what Agile really is rather than falling prey to buzzword bingo. One of the respondents suggested that, “I feel most senior leaders that I have dealt with don’t have a full understanding of what is needed and it trickles down to the rest of the organization.” Senior leaders need to walk the talk when it comes to Agile if they expect to implement it successfully. They need to prove to both team members and middle managers that they understand how Agile impacts the flow of work through a sprint and that Agile teams are expected to self-organize. Senior leaders will help pull the transition to Agile forward by asking questions that elicit proof that teams are acting Agile. For example, asking to see a team’s burn-down chart rather than a report-based status update sends a strong message that leads behavior.

In many organizations, following the process is as important as the outcome of any specific project. This is based on the presumption that precisely following the process foreshadows success. In the role of process champion, senior leaders own one of the more significant barriers to change. Senior leadership needs to give teams an incentive to try new processes such as Scrum. Senior managers need to understand that Agile frameworks are scaffolds that need to be tailored to fit project needs and requirements. Providing an incentive for teams to experiment will create an environment of flexibility so that teams can decide how to address impediments as soon as they are encountered.

Teams need support from senior leadership to allow innovation, or Agile implementations will fail. Support for Agile innovation derives from the expectation of senior management that teams will use Agile techniques. These expectations need to be part of the annual goals and objectives and be in evidence in the questions that leaders ask of middle managers and project teams. Asking questions that require teams to prove they are using Agile is very powerful evidence of a senior leader’s expectations.


Categories: Process Management

Why does it work in staging but not in production?

Agile Testing - Grig Gheorghiu - Mon, 04/07/2014 - 23:07
This is a question that I am sure every developer and operations engineer out there has faced. There can be multiple answers to it, and I'll try to offer some of the ones we arrived at. They have to do mainly with our Chef workflow, but I think they can be applied to any other configuration management tool.

1) A Chef cookbook version in staging is different from the version in production

This is a common scenario, and it's supposed to work this way. You do want to test out new versions of your cookbooks in staging first, then update the version of the cookbook in production.

2) A feature flag is turned on in staging but turned off in production

We have Chef attributes defined in attributes/default.rb that serve as feature flags. If a certain attribute is true, some recipe code or template section gets included which wouldn't be included if the attribute were false. The situation can occur where a certain attribute is set to true in the staging environment but is set to false in the production environment, at which point things can get out of sync. Again, this is expected, as you do want to test new features out in staging first, but don't forget to turn them on in production at some point.
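
For illustration, here is a minimal sketch of what such a flag can look like (the cookbook, attribute and file names are invented, not our actual code):

# attributes/default.rb -- the feature flag, overridable per environment
default['myapp']['enable_new_feature'] = false

# recipes/default.rb -- only include the new config when the flag is on
if node['myapp']['enable_new_feature']
  template '/etc/myapp/new_feature.conf' do
    source 'new_feature.conf.erb'
    mode '0644'
  end
end

# environments/stg.rb -- turn the feature on in staging first
default_attributes 'myapp' => { 'enable_new_feature' => true }

When production is ready, the same flag gets flipped in environments/prod.rb; nothing in the recipe or template needs to compare node.chef_environment against 'stg' or 'prod'.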

3) A block of code or template is included in staging but not in production

We had this situation very recently. Instead of using attributes as feature flags, we were directly comparing the environment against 'stg' or 'prod' inside an if block in a template, and only including that template section if the environment was 'stg'. So things were working perfectly in staging, but mysteriously the template section wasn't even there in production. An added difficulty was that the template in question was peppered with non-indented if blocks, so it took us a while to figure out what was going on.

Two lessons here:

a) Make your templates readable by indenting code blocks.

b) Use attributes as feature flags, and don't compare directly against the current environment. This way, it's easier to always look at the default attribute file and see if a given feature flag is true or false.

4) A modification is made to the cookbook version in production directly on the Chef server

I blogged about this issue in the past. Suppose you have an environments file that pins a given cookbook (let's designate it as cookbook C) to 1.0.1 in staging and to 1.0.0 in production. You want to upgrade production to 1.0.1, because it was tested in staging and it worked fine. However, instead of i) modifying the environments/prod.rb file and pinning the cookbook C to 1.0.1, ii) updating the Chef server via "knife environment from file environments/prod.rb" and iii) committing your changes in git, you modify the version of the cookbook C directly on the Chef server with "knife environment edit prod".

Then, the next time you or somebody else modifies environments/prod.rb to bump up another cookbook to the next version, the version of cookbook C in that file is still 1.0.0, so when you upload environments/prod.rb to the Chef server, it will downgrade cookbook C from 1.0.1 to 1.0.0. Chaos will ensue the next time chef-client runs on the nodes that have recipes from cookbook C. Production will be broken, while staging will still happily work.

Here are 2 other scenarios not related directly to staging vs production, but instead having the potential to break production altogether.

You forget to upload the new version of the cookbook to the Chef server

You make all of your modifications to the cookbook, you commit your code to git, but for some reason you forget to upload the cookbook to the Chef server. Particularly if you keep the same version of the cookbook that is in staging (and possibly in production), then your modifications won't take effect and you may spend some quality time pulling your hair.

You upload a cookbook to the Chef server without bumping its version

There is another, even worse, scenario though: you do upload your cookbook to the Chef server, but you realize that you didn't bump up the version number compared to what is currently pinned to production. As a consequence, all the nodes in production that have recipes from that cookbook will be updated the next time they run chef-client. That's a nasty one. It does happen. So make sure you pay attention to your cookbook versioning process and stick to it!
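
Putting the last few scenarios together, here is one way to sequence the steps so that none of them bites you (cookbook C and the version numbers are from the example above):

# 1. Bump the version in the cookbook's metadata.rb (1.0.0 -> 1.0.1) and commit it
# 2. Upload the new cookbook version to the Chef server
knife cookbook upload C
# 3. Pin the new version in environments/prod.rb, e.g.  cookbook 'C', '= 1.0.1'
#    then push the environment to the Chef server and commit the change
knife environment from file environments/prod.rb
git add environments/prod.rb && git commit -m "Pin cookbook C at 1.0.1 in prod"

Editing the environment only through the file (never with "knife environment edit") keeps git and the Chef server in agreement, which is what scenario 4 above is really about.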

Announcing FrontRowAgile.com for Video Training

Mike Cohn's Blog - Mon, 04/07/2014 - 22:14

I’m happy to announce the release of a new website, FrontRowAgile.com. FrontRowAgile.com will provide the highest quality video training on agile and Scrum.

The site launches with two courses from me and with courses from others soon to follow. In addition to hosting all my current and upcoming video courses, FrontRowAgile.com will soon feature:

  • Ken Rubin on Agile Portfolio Management
  • Ilan Goldstein on Scrum Shortcuts: Agile Tactics, Tools and Tips
  • Mitch Lacey on The Scrum Field Guide Online and Scrum for Managers
  • Pete Deemer on Distributed Scrum Primer and The Manager and Scrum

Currently, FrontRowAgile.com hosts my Agile Estimating and Planning video course plus a new course I’m happy to announce: The Scrum Repair Guide.

The Scrum Repair Guide will help you overcome some of the most common and difficult problems that ScrumMasters and their teams face.

Featuring 36 videos split into short, easily watched segments, each video addresses one topic. Watch them all or watch just the ones you’re interested in. With the same number of Emmy Award nominations as Season 8 of Game of Thrones (none so far), you’re sure to find this course informative and entertaining.

The Agile Estimating and Planning course has been a favorite since it was published on the Mountain Goat Software site. It is now available exclusively on www.FrontRowAgile.com.

In it you will find advice on sprint planning; release planning; story points vs. ideal time; fixed-date, fixed-scope, and fixed-price plans; and estimating on multi-team projects.

 

[Note: If you own a license to Agile Estimating and Planning on the Mountain Goat site, please continue logging in here and watching the course as you have been. We have plans to migrate all Mountain Goat Software video course owners to Front Row Agile as soon as we can.]

Looking to earn PDUs toward your PMI-ACP or PMP credentials? Or pursuing a Certified Scrum Professional (CSP) designation? These courses each earn you four PDUs and SEUs.

Additionally, each course will earn you a valuable certificate of completion and badges you can share on your own website, resume, or social media profiles.


And there’s more good news: With the move to Front Row Agile, all streaming licenses are being moved from six-month licenses to permanent licenses. That’s right—you’ll be able to watch these videos long after Doctor Who goes off the air.

And to celebrate, I’ve cut the price of Agile Estimating and Planning licenses in half. It used to be $200 for a six-month streaming license. Now it’s $100 for a permanent streaming license. The Scrum Repair Guide is available at the same price. Each of these courses offers well over 3 hours of video you can watch over and over.

If you want to watch without an Internet connection, download licenses are still available for each course. Quantity discounts are available and we have an innovative approach to company (site) licenses—email us at info@frontrowagile.com and we’ll tell you more.

FrontRowAgile.com is committed to becoming the leading provider of video-based training on agile and Scrum. Sample videos from each course are available on the site.

Press Inquiries:

If you are interested in publishing a review of FrontRowAgile.com or of either course, please email us for one of a limited number of promotional licenses we are making available. Let us know which course you’re interested in reviewing and a link to where you would publish the review.


Is There Such a Thing As Making Decisions Without Knowing the Cost?

Herding Cats - Glen Alleman - Mon, 04/07/2014 - 18:46

Extensive research has shown that, once a current project is more than fifteen percent complete, the overrun at completion will not be less than the overrun incurred to date, and the percent overrun at completion will be greater than the percent overrun incurred to date. Assuming no change in scope or reduction in delivered capabilities, this overrun is locked in.

Without knowing the original Estimate At Complete (EAC), the funders of the project have no way of making decisions about the project's total cost, its incremental cost, or how to adjust scope and duration to meet the expected cost incurred in exchange for the expected value produced from this cost. Without this cost information, and the related schedule and technical performance information, the notion of decision making is nonsense.

We can't make a decision without knowing the cost and benefits of the resulting decision.

The absence of an estimated cost, duration, and delivered capability prevents the business from knowing if they are making the right decision about the investment in the project. So if decision-making is what management does, not knowing this information prevents credible decisions from being made.

Not estimating these three data items – cost, schedule, and technical performance (delivered capabilities) – is simply bad business management. Whatever unfavorable outcomes follow – overruns, failed business capabilities, unhappy customers (the ACA being the most recent example) – are well deserved.

The notion that we can make decisions in the absence of estimates of their cost and benefit appears to be unfounded conjecture, with no evidence of its validity.

Categories: Project Management

Google Finds: Centralized Control, Distributed Data Architectures Work Better than Fully Decentralized Architectures

For years a war has been fought in the software architecture trenches between the ideal of decentralized services and the power and practicality of centralized services. Centralized architectures, at least at the management and control plane level, are winning. And Google not only agrees, they are enthusiastic adopters of this model, even in places you don't think it should work.

Here's an excerpt from Google Lifts Veil On “Andromeda” Virtual Networking, an excellent article by Timothy Morgan, that includes a money quote from Amin Vahdat, distinguished engineer and technical lead for networking at Google:

Like many of the massive services that Google has created, the Andromeda network has centralized control. By the way, so did the Google File System and the MapReduce scheduler that gave rise to Hadoop when it was mimicked, so did the BigTable NoSQL data store that has spawned a number of quasi-clones, and even the B4 WAN and the Spanner distributed file system that have yet to be cloned.

 

"What we have seen is that a logically centralized, hierarchical control plane with a peer-to-peer data plane beats full decentralization,” explained Vahdat in his keynote. “All of these flew in the face of conventional wisdom,” he continued, referring to all of those projects above, and added that everyone was shocked back in 2002 that Google would, for instance, build a large-scale storage system like GFS with centralized control. “We are actually pretty confident in the design pattern at this point. We can build a fundamentally more efficient system by prudently leveraging centralization rather than trying to manage things in a peer-to-peer, decentralized manner.

The context of the article is Google's impressive home brew SDN (software defined network) system that uses a centralized control architecture instead of the Internet's decentralized Autonomous System model, which thinks of the Internet as individual islands that connect using routing protocols.

SDN completely changes that model as explained by Greg Ferro:

Categories: Architecture

Scrum Masters: What Makes a Good One?

Making the Complex Simple - John Sonmez - Mon, 04/07/2014 - 16:00

Yes, that’s right. I am writing a blog post today about Scrum and Scrum Masters. No, I haven’t lost my mind. I just realized that out of everything I’ve written about Agile and Scrum, I never talked about what makes a good Scrum Master. I’ve both been a Scrum Master and I’ve worked on a […]

The post Scrum Masters: What Makes a Good One? appeared first on Simple Programmer.

Categories: Programming

Software Development Mantra

From the Editor of Methods & Tools - Mon, 04/07/2014 - 14:51
We believe developers should have a particular attitude when writing code. There are actually several we’ve come up with over time – all being somewhat consistent with each other but saying things a different way. The following are the ones we’ve held to date:

  • Avoid over- and under-design.
  • Minimize complexity and rework.
  • Never make your code worse (the Hippocratic Oath of Coding). Only degrade your code intentionally.
  • Keep your code easy to change, robust, and safe to change.

Source: “Essential Skills for the Agile Developer – A Guide to Better Programming and Design”, Alan Shalloway, Scott ...

Workshop Management 3.0

NOOP.NL - Jurgen Appelo - Mon, 04/07/2014 - 11:52
Workshop Management 3.0

The approach to organizing my new one-day workshops is a bit weird, I admit. That’s because I don’t care about doing things the “normal” way. What I care about is exploring new ways to organize events. What I also care about is practicing what I preach in terms of modern management practices.

The post Workshop Management 3.0 appeared first on NOOP.NL.

Categories: Project Management

Google Play Services 4.2

Android Developers Blog - Mon, 04/07/2014 - 04:27

Google Play services 4.2 is now available on Android devices worldwide. It introduces the full release of the Google Cast SDK, for developing and publishing Google Cast-ready apps, and other new APIs.

You can get started developing today by downloading the Google Play services SDK from the SDK Manager.

Google Cast SDK

The Google Cast SDK makes it easy to bring your content to the TV. There’s no need to create a new app — just incorporate the SDK into your existing mobile and web apps. You are in control of how and when you publish your Google Cast-ready app to users through the Google Cast developer console.

You can find out more about the Cast SDK by reading Ready to Cast on the Google Developers Blog. For complete information about the Cast SDK and how to use the Cast APIs, see the Google Cast developer page.

Google Drive

The Google Drive API introduced in Google Play services 4.1 has graduated from developer preview. The latest version includes refinements to the API as well as improvements for performance and stability.

Google client API

This release introduces a new Google API client that unifies the connection model across Google services. Instead of needing to work with separate client classes for each API you wanted to use, you can now work with a single client API model. This makes it easier to build Google services into your apps and provides a more continuous user experience when you are using multiple services.

For an introduction to the new Google client API and what it means for your app, start by reading New Client API in Google Play Services.

More About Google Play Services

To learn more about Google Play services and the APIs available to you through it, visit the Google Services area of the Android Developers site. Details on the APIs are available in the API reference.

For information about getting started with Google Play services APIs, see Set Up Google Play Services SDK.

Join the discussion on

+Android Developers
Categories: Programming

Unlocking the Power of Google for Your Games, at GDC

Android Developers Blog - Mon, 04/07/2014 - 04:26

By Greg Hartrell, Google Play Games team

Today, everyone is a gamer — in fact, 3 in every 4 Android users are playing games, allowing developers to reach an unprecedented audience of players in an Android ecosystem that’s activated over one billion devices. This has helped Google Play Games — Google’s cross-platform game service and SDK for Android, iOS and the web (which lets you easily integrate features like achievements, leaderboards, multiplayer and cloud save into your games) — grow at tremendous speed. The momentum continues on Google Play, where four times more money was paid out to developers in 2013 than in 2012.

With the Game Developers Conference (GDC) this week, we announced a number of new features for Google Play Games and other Google products. As they launch over the coming weeks, these new services and tools will help you unlock the power of Google to take your games to the next level.

Power your game and get discovered

With game gifts, players in your games can send virtual in-game objects to anyone in their circles or through multiplayer search.

To help players get the most out of your games, Play Games will be expanding engagement and discovery options.

We'll be introducing game gifts, a new service that lets players send virtual in-game objects to anyone in their circles or through player search. The Play Games app now supports multiplayer invites directly, further helping players discover your game and keep them playing. And the Google Play Store will also feature 18 new game categories, making it easier for players to find games they'll love.

Tools to take your game to the next level

Further enhancing Google Play Game services, we're expanding multiplayer to support iOS, bringing turn-based and real-time multiplayer capabilities to both Android and iOS.

To further help with cross platform game development, we're updating our Play Games Unity Plug-in to support cross-platform multiplayer services, and introducing an early Play Games C++ SDK to support achievements and leaderboards.

In addition, we're launching enhanced Play Games statistics on the Google Play Developer Console, providing easy game analytics for Play Games adopters. Developers will gain a daily dashboard that visualizes player and engagement statistics for signed-in users, including daily active users, retention analysis, and achievement and leaderboard performance.

Ad features to better optimize your business

Of course, once you build a great gaming experience, it's important to get rewarded for your work, which is why we'll also be introducing new features to the AdMob platform. We're making Google Analytics available directly in the AdMob interface, so you can gain deeper insights into how users are interacting with your app. Turning those insights into effective action is vital, so we're excited by the opportunities that in-app purchase ads will offer — enabling you to target users with specific promotions to buy items in your game. Advertising continues to be a core vehicle driving many game developers' success, so we're also bringing you new ways to optimize your ads to earn the most revenue.

Watch the Google Sessions at GDC

Check out the stream from our Google Developer Day sessions at GDC 2014. Learn more about how to reach and engage with hundreds of millions of users on Google Play, build Games that scale in the cloud, grow in-game advertising businesses with AdMob, track revenue with Google Analytics, as well as explore new gaming frontiers, like Glass.



Join the discussion on

+Android Developers
Categories: Programming

install4j and AppleScript: Creating a Mac OS X Application Bundle for a Java application

Mark Needham - Mon, 04/07/2014 - 01:04

We have a few internal applications at Neo which can be launched using ‘java -jar ‘ and I always forget where the jars are so I thought I’d wrap a Mac OS X application bundle around it to make life easier.

My favourite installation pattern is the one where when you double click the dmg it shows you a window where you can drag the application into the ‘Applications’ folder, like this:

(Screenshot: the mounted dmg window showing the application icon and an Applications folder shortcut.)

I’m not a fan of the installation wizards and the installation process here is so simple that a wizard seems overkill.

I started out learning about the structure of an application bundle which is well described in the Apple Bundle Programming guide. I then worked my way through a video which walks you through bundling a JAR file in a Mac application.

I figured that bundling a JAR was probably a solved problem and had a look at App Bundler, JAR Bundler and Iceberg before settling on Install4j which we used for Neo4j desktop.

I started out by creating an installer using Install4j and then manually copying the launcher it created into an Application bundle template but it was incredibly fiddly and I ended up with a variety of indecipherable messages in the system error log.

Eventually I realised that I didn’t need to create an installer and that what I actually wanted was a Mac OS X single bundle archive media file.

After I’d got install4j creating that for me I just needed to figure out how to create the background image telling the user to drag the application into their ‘Applications’ folder.

Luckily I came across this StackOverflow post which provided some AppleScript to do just that and with a bit of tweaking I ended up with the following shell script which seems to do the job:

#!/bin/bash
 
# Remove the previous archive and rebuild it with install4j
rm target/DBench_macos_1_0_0.tgz
/Applications/install4j\ 5/bin/install4jc TestBench.install4j
 
title="DemoBench"
backgroundPictureName="graphs.png"
applicationName="DemoBench"
finalDMGName="DemoBench.dmg"
 
# Stage the dmg contents: the app bundle, the background image and an Applications symlink
rm -rf target/dmg && mkdir -p target/dmg
tar -C target/dmg -xvf target/DBench_macos_1_0_0.tgz
cp -r src/packaging/.background target/dmg
ln -s /Applications target/dmg
 
cd target
rm "${finalDMGName}"
umount -f /Volumes/"${title}"
 
# Create a writable dmg from the staging directory and mount it
hdiutil create -volname ${title} -size 100m -srcfolder dmg/ -ov -format UDRW pack.temp.dmg
device=$(hdiutil attach -readwrite -noverify -noautoopen "pack.temp.dmg" | egrep '^/dev/' | sed 1q | awk '{print $1}')
 
sleep 5
 
# Ask Finder (via AppleScript) to set the window size, icon positions and background picture
echo '
   tell application "Finder"
     tell disk "'${title}'"
           open
           set current view of container window to icon view
           set toolbar visible of container window to false
           set statusbar visible of container window to false
           set the bounds of container window to {400, 100, 885, 430}
           set theViewOptions to the icon view options of container window
           set arrangement of theViewOptions to not arranged
           set icon size of theViewOptions to 72
           set background picture of theViewOptions to file ".background:'${backgroundPictureName}'"
           set position of item "'${applicationName}'" of container window to {100, 100}
           set position of item "Applications" of container window to {375, 100}
           update without registering applications
           delay 5
           eject
     end tell
   end tell
' | osascript
 
# Unmount the volume and convert the image to a compressed, read-only dmg
hdiutil detach ${device}
hdiutil convert "pack.temp.dmg" -format UDZO -imagekey zlib-level=9 -o "${finalDMGName}"
rm -f pack.temp.dmg
 
cd ..

To summarise, this script creates a symlink to ‘Applications’, puts a background image in a directory titled ‘.background’, sets that as the background of the window and positions the symlink and application appropriately.

Et voila:

(Screenshot: the finished DemoBench dmg window with the custom background image.)

The Firefox guys wrote a couple of blog posts detailing their experiences writing an installer which were quite an interesting read as well.

Categories: Programming