
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

SPaMCAST 361 – Why Software Measurement, Who Needs Architects

Software Process and Measurement Cast - Sun, 09/27/2015 - 22:00

This week’s Software Process and Measurement Cast includes two columns. The first is our essay on software measurement. When we measure we are sending an explicit message about what is important to the organization, and therefore sending an explicit signal about how we expect people to act. Remember the old adage, “you get what you measure.”

Our second column this week is from Gene Hughson and his Form Follows Function blog. In this installment Gene throws down the gauntlet to ask the question, “Who needs architects?”

Call to Action!

For the remainder of September let’s try something a little different. Forget about iTunes reviews, and tell a friend or a coworker about the Software Process and Measurement Cast. Word of mouth will help grow the audience for the podcast. After all, the SPaMCAST provides you with value, so why keep it to yourself?

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing. This week we tackle the essay titled “Sharp Tools”! Check out the new installment at the Software Process and Measurement Blog.


Upcoming Events

Agile Development Conference East
November 8-13, 2015
Orlando, Florida

I will be speaking on November 12th on the topic of Agile Risk. Let me know if you are going and we will have a SPaMCAST Meetup.

Agile Philly - AgileTour 2015
October 5, 2015

I will be speaking on Agile Risk Management

 More conferences next week, including Agile DC!


The next Software Process and Measurement Cast features our interview with Chris Nurre.  Chris is a developer and Agile Coach extraordinaire. We explored the role of a coach from the point of view of someone that is actively involved in changing the world, one team at a time.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


Risk Management is How Adults Manage Projects

Herding Cats - Glen Alleman - Sun, 09/27/2015 - 20:59

Risk Management is How Adults Manage Projects - Tim Lister

Let's start with some background on Risk Management

Tim's quote sets the paradigm for managing the impediments to success in all our endeavors.

It says volumes about project management and project failure. It also means that managing risk is managing in the presence of uncertainty. And managing in the presence of uncertainty means making estimates about the impacts of our decisions on future outcomes. So you can invert the statement when you hear that we can make decisions in the absence of estimates.

Tim's update is titled Risk Management is Project Management for Grownups.

For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, here's a collection from the office library, in no particular order.

Here's a summary from a recent meeting on decision making in the presence of risk:

Earning value from risk (v4 full charts) from Glen Alleman
Categories: Project Management

Complex, Complexity, Complicated (Update)

Herding Cats - Glen Alleman - Sun, 09/27/2015 - 16:14

The popular notion that Cynefin can be applied in the software development domain, as a way of discussing the problems involved in writing software for money, misses the profession of Systems Engineering. From Wikipedia, Cynefin is...

The framework provides a typology of contexts that guides what sort of explanations or solutions might apply. It draws on research into complex adaptive systems theory, cognitive science, anthropology, and narrative patterns, as well as evolutionary psychology, to describe problems, situations, and systems.

While Cynefin uses the terms Complexity and Complex Adaptive System, it applies them from the observational point of view. That is, the system exists outside of our influence to control its behavior - we are observers of the system, not engineers of solutions in the form of a system that provides needed capabilities to solve a problem.

Read carefully the original paper on Cynefin, The New Dynamics of Strategy: Sense Making in a Complex and Complicated World. This post is NOT about those types of systems, but about the conjecture that the development of software is by its nature Chaotic. This argument is used by many in the agile world to avoid the engineering disciplines of INCOSE-style Systems Engineering.

There are certainly engineered systems that transform into complex adaptive systems with emergent behaviors that cause the system to fail; an example is below. This is not likely to be the case when engineering principles are applied in the domains of Complex and Complicated.

A good starting point for the complex, complicated, and chaotic view of engineered systems is Complexity and Chaos - State of the Art: List of Works, Experts, Organizations, Projects, Journals, Conferences, and Tools. There is a reference to Cynefin as organization modeling. While organizational modeling is important - I suspect Cynefin advocates would suggest it is the only important item - the engineered aspects of applying Systems Engineering to complex, complicated, and emergent systems are mandatory for any organization to get the product out the door on time, on budget, and on specification.

For another view of the complex systems problem, Principles of Complex Systems for Systems Engineering is a good place to start, along with resources from INCOSE and AIAA like Complexity Primer for Systems Engineers, Engineering Complex Systems, Complex System Classification, and many others.

So Let's Look At the Agile Point of View

In the agile community it is popular to use the terms complex, complexity, complicated, and complex adaptive system interchangeably - and many times wrongly - to assert that we can't possibly plan ahead, know what we're going to need, or establish a cost and schedule, because the system is complex and emergent.

These terms are many times overloaded with an agenda used to push a process or even a method. As well, in the agile community it is popular to claim we have no control over the system, so we must adapt to its emerging behavior. This is likely the case in one condition - the chaotic behaviors of Complex Adaptive Systems. But this is only the case when we fail to establish the basis for how the CAS was formed, what sub-systems are driving those behaviors, and most importantly what the dynamics of the interfaces between those subsystems - the System of Systems architecture - are that create the chaotic behaviors.

It is highly unlikely those working in the agile community actually work on complex systems that evolve AND at the same time are engineered at the lower levels to meet specific capabilities and the resulting requirements of the system owner. They've simply let the work and the resulting outcomes emerge and become Complex, Complicated, and create Complexity. They are observers of the outcomes, not engineers of the outcomes.

Here's one example of an engineered system that actually did become a CAS because of the poor efforts of the Systems Engineers. I worked on the Class I and II sensor platforms. Eventually FCS was canceled, for all the right reasons. But even for small teams of agile developers, the outcomes become complex when the Systems Engineering processes are missing. Cynefin partitions beyond Obvious emerge, for the most part, when Systems Engineering is missing.


First, some definitions:

  • Complex - consisting of many different and connected parts; not easy to analyze or understand; complicated or intricate. When a system or problem is considered complex, analytical approaches, like dividing it into parts to make the problem tractable, are not sufficient, because it is the interactions of the parts that make the system complex, and without these interconnections the system no longer functions.
  • Complex System - a functional whole, consisting of interdependent and variable parts. Unlike conventional systems, the parts need not have fixed relationships, fixed behaviors or fixed quantities, and their individual functions may be undefined in traditional terms.
  • Complicated - containing a number of hidden parts, which must be revealed separately because they do not interact. Mutual interaction of the components creates nonlinear behaviors of the system. In principle all systems are complex. The number of parts or components is irrelevant in the definition of complexity. There can be complexity - nonlinear behaviour - in small systems or large systems.
  • Complexity - there is no standard definition of complexity. It is a view of systems that suggests simple causes result in complex effects. Complexity as a term is generally used to characterize a system with many parts whose interactions with each other occur in multiple ways. Complexity can occur in a variety of forms:
    • Complex behaviour
    • Complex mechanisms
    • Complex situations
    • Complex systems
    • Complex data
  • Complexity Theory - states that critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties. This theory takes the view that systems are best regarded as wholes, and studied as such, rejecting the traditional emphasis on simplification and reduction as inadequate techniques on which to base this sort of scientific work.

One more item we need is the types of Complexity:

  • Type 1 - fixed systems, where the structure doesn't change as a function of time.
  • Type 2 - systems where time causes changes. This can be repetitive cycles or change with time.
  • Type 3 - moves beyond repetitive systems into organic where change is extensive and non-cyclic in nature.
  • Type 4 - self-organizing systems, where we can combine the internal constraints of closed systems, like machines, with the creative evolution of open systems, like people.

And Now To The Point

When we hear complex, complexity, complex systems, or complex adaptive system, pause to ask: what kind of complex are you talking about? What Type of complex system? In what system are you applying the term complex? Have you classified that system in a way that actually matches a real system? Don't accept anyone saying the system is emerging and becoming too complex to manage unless in fact that is the case after all the Systems Engineering activities have been exhausted. It's a cheap excuse for simply not doing the hard work of engineering the outcomes.

It is common to use the terms complex, complicated, and complexity interchangeably, and software development is classified - or mis-classified - as one, two, or all three. It is also common to toss around these terms with no actual understanding of their meaning or application.

We need to move beyond buzzwords - words like Systems Thinking. Building software is part of a system. There are interacting parts that, when assembled, produce an outcome - hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties: a Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.

The understanding that software development is an uncertain (stochastic) process has been well known since the 1980s, starting with COCOMO [1]. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.

[1] Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
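To make the Monte Carlo idea concrete, here is a minimal sketch in R (chosen to match the code elsewhere on this page) that simulates a small sequential activity network. The activities, durations, and distributions are invented for illustration only, not taken from any real project or from Boehm.

set.seed(42)
n <- 10000                                      # number of simulated project executions

# Three sequential activities with uncertain (lognormal) durations, in days.
dur_a <- rlnorm(n, meanlog = log(10), sdlog = 0.25)
dur_b <- rlnorm(n, meanlog = log(15), sdlog = 0.25)
dur_c <- rlnorm(n, meanlog = log(5),  sdlog = 0.25)

total <- dur_a + dur_b + dur_c                  # A -> B -> C, so durations add

# The estimate is a probability distribution, not a point value. Read off the
# 50th/80th/95th percentile completion times to set schedule margin against
# the irreducible uncertainty.
quantile(total, probs = c(0.50, 0.80, 0.95))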

Categories: Project Management

Sharp Tools, Re-Read Saturday: The Mythical Man-Month, Part 12


Sharp Tools is the twelfth essay of The Mythical Man-Month by Fred P. Brooks. Brooks begins this essay with a discussion of a development environment where each person has his or her own tools, tools that were either handcrafted or acquired as their careers progressed. To the individual, this scenario feels efficient and effective; however, at the team or program level things are not as sanguine. Brooks likens developers to master craftsmen that develop their own tools in order to build works of art. Brooks wrote this essay from the point of view of his experience building operating systems, and when I re-read the essay I initially was unsure whether the message was relevant to the 21st century. However, reflection while jogging is a powerful tool for making connections. The relevance was brought home as I reflected on the time I spend with software development teams. People in these teams self-identify as coders, testers, business analysts, Scrum masters and even an occasional project manager. Each person assembles a set of tools and languages that they are comfortable using. Many of those tools are mandated by the organization or software platform they use. However, just as often development practitioners have accumulated open source tools, or even in some cases hand-crafted utilities, over their careers. In any practical sense the world described by Brooks is no different from the world today.

In a development project, the manager establishes the development philosophy and sets aside resources for the building of common tools. In today’s environment you might acquire a common toolset instead of building it; however, someone must provide guidance and direction. Many organizations feel the need to centralize tool decisions and then inform everyone involved of the toolset they will use. This type of decision process is often considered efficient (reasons can include central research and leverage in purchasing), but centralization often ignores the specialized needs of each team. Invariably this leads to team members sneaking in their own tools or doing work at home in less controlled environments. Brooks suggests a better answer is for each team to have its own tool/environment builder guided by the program leader’s philosophy (Brooks used the term manager) rather than a central tool team. A community of practice based on the manager’s or organization’s philosophy delivers similar results using more current leadership philosophies. Again Brooks was ahead of the curve.

Regardless of whether you are involved in the development or enhancement of functional software, you need access to a functional computer environment with a core set of tools. Brooks identifies four basics that every software effort requires:

  1. A computer facility with machines and a scheduling philosophy
  2. An operating system and service philosophy
  3. Language and a language philosophy
  4. Utilities

I will not belabor this re-read entry with a detailed explanation of each category. What is more important is that efficient and effective delivery requires an integrated environment where the target (production environment) and vehicle (development/QA environments) are closely related, with the right tools available to get the job done. Examples of organizations that still ignore one or more of these four categories abound: for example, teams trying to do Agile without tools to automatically test builds, or adopting the concept of user stories without defining testable user acceptance criteria. The combination of environments, tools and processes based on a consistent philosophy is required to implement user stories and builds with automated testing.

When I began re-reading Sharp Tools I was concerned whether the message was still current. A discussion with a team about their adoption of continuous builds and automated testing using Jenkins reminded me that the 20th century developers and development teams that Brooks described are not materially different in nature from 21st century developers (Agile or waterfall). If we don’t have tools we are comfortable with close at hand, we find new tools. In order to work together in a team or a team of teams, we need latitude to react within the constraints that a manager, architect or visionary provides as philosophy.

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?

Calling the Shot

Ten Pounds in a Five-Pound Package

The Documentary Hypothesis

Plan to Throw One Away

Categories: Process Management

Why Do Projects Fail?

Herding Cats - Glen Alleman - Sat, 09/26/2015 - 17:25

We all wish that there was a simple answer to this question, but there is not. Anyone suggesting there is doesn't understand the complexities of non-trivial projects in any domain.

There are enough opinions to paper the side of a battleship. With all these opinions, nobody has a straightforward answer that is applicable to all projects. There are two fundamental understandings though: (1) everyone has a theory, and (2) there is no singular cause that is universally applicable.

In fact most of the suggestions on project failures have little in common. With that said, I'd suggest there is a better way to view the project failure problem.

What are the core principles, processes, and practices for project success?

I will suggest there are three common denominators consistently mentioned in the literature that are key to a project's success:

  1. Requirements management. Success was not just defined by well-documented technical requirements, but by well-defined programmatic requirements/thresholds. Requirements creep is a challenge for all projects, no matter what method is used to develop the products or services from those projects. Requirements creep comes in many forms, but the basis for dealing with it starts with a Systems Engineering strategy to manage those requirements. Most IT and business software projects don't know about Systems Engineering, and that's a common cause failure mode.
  2. Early and continuous risk management, with specific steps defined for managing the risk once identified.
  3. Project planning. Without incredible luck, no project will succeed without a realistic and thorough plan for that success. It's completely obvious (at least to those managing successful projects): the better the planning, the more likely the outcome will match the plan.

Of the 155 defense project failures studied in "The core problem of project failure," T. Perkins, The Journal of Defense Software Engineering, Vol. 3.11, pp. 17, June 2006:

  • 115 – Project managers did not know what to do.
  • 120 – Project managers overlooked implementing a project management principle.
  • 125 – PMs allowed constraints imposed at higher levels to prevent them from doing what they should do.
  • 130 – PMs do not believe the project management principles add value.
  • 145 – Policies / directives prevented PMs from doing what they should do.
  • 150 – Statutes prevented PMs from doing what they should do.
  • 140 – PMs' primary goal was other than project success.
  • 135 – PMs believed a project management principle was flawed.

From this research these numbers can be summarized into two larger classes:

  • Lack of knowledge - the project managers and the development team did not know what to do.
  • Improper application of this knowledge - this starts with ignoring or overlooking a core principle of project success. This covers most of the sins of Bad Management, from compressed schedules and limited budgets to failing to produce credible estimates for the work.

So where do we start?

Let's start with some principles. But first, a recap:

  • Good management doesn't simply happen. It takes qualified managers - on both the buyer and supplier side - to appropriately apply project management methods.
  • Good planning doesn't simply happen. Careful planning of work scope, WBS, realistic milestones, realistic metrics, and a realistic cost baseline is needed.
  • It is hard work to provide accurate data about schedule, work performed, and costs on a periodic basis. Constant communication and trained personnel are necessary.

Five Immutable Principles of Project Success


  1. What capabilities are needed to fulfill the Concept of Operations, the Mission and Vision, or the Business System Requirements? Without knowing the answers to these questions, requirements, features, and deliverables have no home. They have no testable reasons for being in the project.
  2. What technical and operational requirements are needed to deliver these capabilities? With the needed capabilities confirmed by those using the outcomes of the project, the technical and operational requirements can be defined. These can be stated up front, or they can emerge as the project progresses. The Capabilities are stable; all other things can change as discovery takes place. If you keep changing the capabilities, you're going to be on a Death March project.
  3. What schedule delivers the product or services on time to meet the requirements? Do you have enough money, time, and resources to show up as planned? No credible project is without a deadline and a set of mandated capabilities. Knowing there is sufficient everything on day one, and every day after that, is the key to managing in the presence of uncertainty.
  4. What impediments to success have mitigations, retirement plans, or "buy downs" in place to increase the probability of success? "Risk Management is how Adults Manage Projects" - Tim Lister - is a good place to start. The uncertainties of all project work come in two types - reducible and irreducible. For irreducible uncertainties we need margin. For reducible uncertainties we need specific retirement activities.
  5. What periodic measures of physical percent complete assure progress to plan? This question is based on a critical principle: how long are we willing to wait before we find out we're late? This period varies, but whatever it is, it must be short enough to take corrective action and still arrive as planned. Agile does this every two to four weeks. In formal DOD procurement, measures of physical percent complete are taken every four weeks. The advantage of Agile is that working products must be produced every period. That is not the case in larger, more formal processes.

With these Principles, here are five Practices that can put them to work:


  1. Identify Needed Capabilities to achieve program objectives or the particular end state. Define these capabilities through scenarios, from the customer's point of view, in units of Measures of Effectiveness (MoE) meaningful to the customer.
    • Describe the business function that will be enabled by the outcomes of the project.
    • Assess these functions in terms of Effectiveness and Performance.
  2. Define the Technical And Operational Requirements that must be fulfilled for the system capabilities to be available to the customer at the planned time and planned cost. Define these requirements in terms that are isolated from any implementation technical products or processes. Only then bind the requirements with technology.
    • This can be a formal Work Breakdown Structure or an Agile Backlog.
    • The planned work is described in terms of deliverables.
    • Describe the technical and operational Performance measures for each feature.
  3. Establish the Performance Measurement Baseline describing the work to be performed, the budgeted cost for this work, the organizational elements that produce the deliverables from this work, and the Measures of Performance (MoP) showing this work is proceeding according to cost, schedule, and technical performance. (A small sketch of these measures follows this list.)
  4. Execute the PMB’s Work Packages in the planned order, assuring all performance assessments are 0%/100% complete before proceeding. No rework, no transfer of activities to the future. Assure every requirement is traceable to work and all work is traceable to requirements.
    • If there is no planned order, the work processes will be simple.
    • This is rare on any enterprise or non-trivial project, since the needed capabilities usually have some sequential dependencies - for example, accept the Purchase Request before issuing the Purchase Order.
  5. Perform Continuous Risk Management for each Performance-Based Project Management® process area to Identify, Analyze, Plan, Track, Control, and Communicate programmatic and technical risk.
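Here is the promised sketch of such measures: a minimal example, written in R to match the code elsewhere on this page, of the standard earned value indices computed over a handful of work packages. The package names and numbers are invented for illustration.

# Hypothetical work packages: budgeted cost, physical percent complete, actual cost.
wp <- data.frame(
  package  = c("WP-1", "WP-2", "WP-3"),
  budgeted = c(100, 80, 120),   # budgeted cost of work scheduled to date (PV)
  complete = c(1.0, 0.5, 0.0),  # physical percent complete
  actual   = c(110, 60, 0)      # actual cost of work performed to date (AC)
)

earned <- sum(wp$budgeted * wp$complete)  # budgeted cost of work performed (EV)
spi <- earned / sum(wp$budgeted)          # schedule performance index: EV / PV
cpi <- earned / sum(wp$actual)            # cost performance index: EV / AC
c(SPI = spi, CPI = cpi)                   # either index below 1 signals trouble

An SPI or CPI below 1 is the "we're late / over cost" signal that the periodic cadence in Principle 5 is meant to surface early enough to take corrective action.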

The integration of these five Practices is the foundation of Performance-Based Project Management®. Each Practice stands alone and at the same time is coupled with the other Practice areas. Each Practice contains specific steps for producing beneficial outcomes for the project, while establishing the basis for overall project success.

Each Practice can be developed to the level needed for specific projects. All five Practices are critical to the success of any project. If a Practice area is missing or poorly developed, the capability to manage the project will be jeopardized, possibly in ways not known until the project is too far along to be recovered.

Each Practice provides information needed to make decisions about the ongoing flow of the project. This actionable information is the feedback mechanism needed to keep a project under control. These control processes are not impediments to progress, but are the tools needed to increase the probability of success.

Why All This Formality? Why Not Just Start Coding and Let the Customer Tell Us to Stop?

All businesses work on managing the flow of cost in exchange for value. All businesses have a fiduciary responsibility to spend wisely. Visibility into the obligated spend is part of Managerial Finance. Opportunity Cost is the basis of the Microeconomics of decision making.

The 5 Principles and 5 Practices are the basis of good business management of the scarce resources of all businesses. 

This is how adults manage projects.

Categories: Project Management

Software Engineering Economics

Herding Cats - Glen Alleman - Sat, 09/26/2015 - 04:51

When confronted with making decisions on software projects in the presence of uncertainty, we can turn to an established and well tested set of principles found in Software Engineering Economics.

First, a definition from the Guide to the Systems Engineering Body of Knowledge (SEBoK):

Software Engineering Economics is concerned with making decisions within the business context to align technical decisions with the business goals of an organization. Topics covered include fundamentals of software engineering economics (proposals, cash flow, the time-value of money, planning horizons, inflation, depreciation, replacement and retirement decisions); not-for-profit decision-making (cost-benefit analysis, optimization analysis); estimation, economic risk and uncertainty (estimation techniques, decisions under risk and uncertainty); and multiple attribute decision making (value and measurement scales, compensatory and non-compensatory techniques).

Engineering Economics is one of the Knowledge Areas for educational requirements in Software Engineering defined by INCOSE, along with Computing Foundations, Mathematical Foundations, and Engineering Foundations. 

A critical success factor for all software development is to model the system under development as a holistic, value-providing entity, an approach that has been gaining recognition as a central process of systems engineering. The use of modeling and simulation during the early stages of the system design of complex systems and architectures can:

  • Document needed system capabilities, functions and requirements,
  • Assess the mission performance,
  • Estimate costs, schedule, and needed product performance capabilities,
  • Evaluate tradeoffs,
  • Provide insights to improve performance, reduce risk, and manage costs.

The process above can be performed in any lifecycle duration, from the formal top-down INCOSE VEE to Agile software development. The process rhythm is independent of the principles.

This is a critical communication factor: the separation of Principles, Practices, and Processes establishes the basis for comparing these Principles, Practices, and Processes across a broad spectrum of domains, governance models, methods, and experiences. Without a shared set of Principles, it's hard to have a conversation.

Engineering Economics

Developing products or services with other people's money means we need a paradigm to guide our activities. Since we are spending other people's money, the economics of that process is guided by Engineering Economics.

Engineering economic analysis concerns techniques and methods that estimate output and evaluate the worth of products and services relative to their costs. (We can't determine the value of our efforts without knowing the cost to produce that value.) Engineering economic analysis is used to evaluate system affordability. Fundamental to this knowledge area are value and utility, classification of cost, time value of money, and depreciation. These are used to perform cash flow analysis, financial decision making, replacement analysis, break-even and minimum cost analysis, accounting and cost accounting. Additionally, this area involves decision making involving risk and uncertainty and estimating economic elements. [SEBoK, 2015]
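As a concrete illustration of the time value of money mentioned in that definition, here is a minimal net present value calculation, written in R to match the code elsewhere on this page. The discount rate and cash flows are invented.

# Net present value: future cash flows are worth less than cash in hand today.
npv <- function(rate, cashflows) {
  # cashflows[1] occurs now (t = 0), cashflows[2] after one period, and so on.
  sum(cashflows / (1 + rate) ^ (seq_along(cashflows) - 1))
}

# Invented example: spend 100 now to receive 40 per period for three periods.
npv(0.10, c(-100, 40, 40, 40))   # about -0.5: marginally value-destroying at 10%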

The Microeconomic aspects of the decision-making process are guided by the principles of making decisions regarding the allocation of limited resources. In software development we always have limited resources - time, money, staff, facilities, and the performance limitations of software and hardware.

If we are going to increase the probability of success for software development projects we need to understand how to manage in the presence of the uncertainty surrounding time, money, staff, facilities, performance of products and services and all the other probabilistic attributes of our work.

To make decisions in the presence of these uncertainties, we need to make estimates about the impacts of those decisions. This is an unavoidable consequence of how the decision making process works.

The opportunity cost of any decision between two or more choices means there is a cost for NOT choosing one or more of the available choices. This is the basis of microeconomics of decision making. What's the cost of NOT selecting an alternative?

So when it is conjectured that we can make a decision in the presence of uncertainty without estimating the impact of that decision, it's simply NOT true.

That notion violates the principles of Microeconomics.
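To illustrate with invented numbers: choosing between two uncertain alternatives requires an estimate of both, because the expected value of the forgone option is the opportunity cost of the chosen one. A minimal sketch in R:

set.seed(7)
n <- 10000

# Estimated (uncertain) net benefit of two alternatives, in made-up units.
option_a <- rnorm(n, mean = 120, sd = 40)   # higher return, higher variance
option_b <- rnorm(n, mean = 100, sd = 10)   # lower return, more predictable

# Choosing A means forgoing B, so B's expected value is A's opportunity cost.
c(expected_a = mean(option_a),
  expected_b = mean(option_b),
  opportunity_cost_of_choosing_a = mean(option_b))

# The same estimates also price the risk: how often would A underperform B?
mean(option_a < option_b)

Without estimates of both distributions there is no basis for the choice - which is the point.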

Categories: Project Management

Develop a sweet spot for Marshmallow: Official Android 6.0 SDK & Final M Preview

Android Developers Blog - Sat, 09/26/2015 - 00:04

By Jamal Eason, Product Manager, Android

Android 6.0 Marshmallow

Whether you like them straight out of the bag, roasted to a golden brown exterior with a molten center, or in fluff form, who doesn’t like marshmallows? We definitely like them! Since the launch of the M Developer Preview at Google I/O in May, we’ve enjoyed all of your participation and feedback. Today with the final Developer Preview update, we're introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow.

Get your apps ready for Android Marshmallow

The final Android 6.0 SDK is now available to download via the SDK Manager in Android Studio. With the Android 6.0 SDK you have access to the final Android APIs and the latest build tools so that you can target API 23. Once you have downloaded the Android 6.0 SDK into Android Studio, update your app project’s compileSdkVersion to 23 and you are ready to test your app with the new platform. You can also update your app’s targetSdkVersion to 23 to test out API 23-specific features like auto-backup and app permissions.

Along with the Android 6.0 SDK, we also updated the Android Support Library to v23. The new Android Support library makes it easier to integrate many of the new platform APIs, such as permissions and fingerprint support, in a backwards-compatible manner. This release contains a number of new support libraries including: customtabs, percent, recommendation, preference-v7, preference-v14, and preference-leanback-v17.

Check your App Permissions

Along with the new platform features like fingerprint support and Doze power saving mode, Android Marshmallow features a new permissions model that streamlines the app install and update process. To give users this flexibility and to make sure your app behaves as expected when an Android Marshmallow user disables a specific permission, it’s important that you update your app to target API 23, and test the app thoroughly with Android Marshmallow users.

How to Get the Update

The Android emulator system images and developer preview system images have been updated for supported Nexus devices (Nexus 5, Nexus 6, Nexus 9 & Nexus Player) to help with your testing. You can download the device system images from the developer preview site. Also, similar to the previous developer update, supported Nexus devices will receive an Over-the-Air (OTA) update over the next couple days.

Although the Android 6.0 SDK is final, the device system images are still developer preview versions. The preview images are near final but they are not intended for consumer use. Remember that when Android 6.0 Marshmallow launches to the public later this fall, you'll need to manually re-flash your device to a factory image to continue to receive consumer OTA updates for your Nexus device.

What is New

Compared to the previous developer preview update, you will find this final API update fairly incremental. You can check out all the API differences here, but a few of the changes since the last developer update include:

  • Android Platform Change:
    • Final Permissions User Interface – we updated the permissions user interface and enhanced some of the permissions behavior.
  • API Change:
    • Updates to the Fingerprint API – which enables better error reporting, a better fingerprint enrollment experience, plus enumeration support for greater reliability.

Upload your Android Marshmallow apps to Google Play

Google Play is now ready to accept your API 23 apps via the Google Play Developer Console on all release channels (Alpha, Beta & Production). At the consumer launch this fall, the Google Play store will also be updated so that the app install and update process supports the new permissions model for apps using API 23.

To make sure that your updated app runs well on Android Marshmallow and older versions, we recommend that you use Google Play’s newly improved beta testing feature to get early feedback, then do a staged rollout as you release the new version to all users.

Categories: Programming

Run Apps Script code from anywhere using the Execution API

Google Code Blog - Fri, 09/25/2015 - 20:39

Originally posted to the Google Apps Developer blog

Posted by Edward Jones, Software Engineer, Google Apps Script and Wesley Chun, Developer Advocate, Google Apps

Have you ever wanted a server API that modifies cells in a Google Sheet, a way to execute a Google Apps Script app from outside of Google Apps, or a way to use Apps Script as an API platform? Today, we’re excited to announce you can do all that and more with the Google Apps Script Execution API.

The Execution API allows developers to execute scripts from any client (browser, server, mobile, or any device). You provide the authorization, and the Execution API will run your script. If you’re new to Apps Script, it’s simply JavaScript code hosted in the cloud that can access authorized Google Apps data using the same technology that powers add-ons. The Execution API extends the ability to execute Apps Script code and unlocks the power of Docs, Sheets, Forms, and other supported services for developers.
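Since the API is a plain HTTPS endpoint, any client language can call it. Here is a rough sketch in R using the httr package (R to stay consistent with the code elsewhere on this page). The script ID, access token, and function name are placeholders, and the URL shown is the v1 scripts.run method as documented at launch, so verify it against the current docs before relying on it.

library(httr)

script_id <- "YOUR_SCRIPT_ID"            # hypothetical: the ID of your Apps Script project
token     <- "YOUR_OAUTH2_ACCESS_TOKEN"  # hypothetical: token with the scopes the script needs

response <- POST(
  paste0("https://script.googleapis.com/v1/scripts/", script_id, ":run"),
  add_headers(Authorization = paste("Bearer", token)),
  body = list(`function` = "myFunctionName", parameters = list()),
  encode = "json"
)

content(response)   # the script's return value (or an error) as parsed JSON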

One of our launch partners, Pear Deck, used the new API to create an interactive presentation tool that connects students to teachers by converting slide decks into interactive experiences. Their app calls the Execution API to automatically generate a Google Doc customized for each student, so everyone gets a personalized set of notes from the presentation. Without the use of Apps Script, their app would be limited to using PDFs and other static file types. Check out the video below to see how it works.

Bruce McPherson, a Google Developer Expert (GDE) for Google Apps, says: “The Execution API is a great tool for enabling what I call ‘incremental transition’ from Microsoft Office (and VBA) to Apps (and Apps Script). A mature Office workflow may involve a number of processes currently orchestrated by VBA, with data in various formats and locations. It can be a challenge to move an entire workload in one step, especially an automated process with many moving parts. This new capability enables the migration of data and process in manageable chunks.” You can find some of Bruce’s sample migration code using the Execution API here.

The Google Apps Script Execution API is live and ready for you to use today. To get started, check out the developer documentation and quickstarts. We invite you to show us what you build with the Execution API!

Categories: Programming

Stuff The Internet Says On Scalability For September 25th, 2015

Hey, it's HighScalability time:

 How long would you have lasted? Loved The Martian. Can't wait for the game, movie, and little potato action figures. Me, I would have died on the first level.

  • 60 miles: new record distance for quantum teleportation; 160: size of minimum viable Mars colony; $3 trillion: assets managed by hedge funds; 5.6 million: fingerprints stolen in cyber attack; 400 million: Instagram monthly active users; 27%: increase in conversion rate from mobile pages that are 1 second faster; 12BN: daily Telegram messages; 1800 B.C: oldest beer recipe; 800: meetings booked per day at Facebook; 65: # of neurons it takes to walk with 6 legs

  • Quotable Quotes:
    • @bigdata: assembling billions of pieces of evidence: Not even the people who write algorithms really know how they work
    • @zarawesome: "This is the most baller power move a billionaire will pull in this country until Richard Branson finally explodes the moon."
    • @mtnygard: An individual microservice fits in your head, but the interrelationships among them exceeds any human's ability. Automate your awareness.
    • Ben Thompson~ The mistake that lots of BuzzFeed imitators have made is to imitate the BuzzFeed article format when actually what should be imitated from BuzzFeed is the business model. The business model is creating portable content that will live and thrive on all kinds of different platforms. The BuzzFeed article is relatively unsophisticated, it's mostly images and text, and mostly images.
    • For more Quotable Quotes please see the full article.

  • Is what Volkswagen did really any different from what happens on benchmarks all the time? Cheating and benchmarks go together like a clear conscience and rationalization. Clever subterfuge is part of the software ethos. There are many, many examples. "Cars are now software" is a slick meme, but that transformation has deep implications. The software culture and the manufacturing culture are radically different.

  • Can we ever trust the fairness of algorithms? Of course not. Humans in relation to their algorithms are now in the position of priests trying to divine the will of god. Computer Scientists Find Bias in Algorithms: Many people believe that an algorithm is just a code, but that view is no longer valid, says Venkatasubramanian. “An algorithm has experiences, just as a person comes into life and has experiences.”

  • Stuff happens, even to the best. But maybe having a significant percentage of the world's services on the same platform is not wise or sustainable. Summary of the Amazon DynamoDB Service Disruption and Related Impacts in the US-East Region.

  • According to patent drawings what does the Internet look like? Noah Veltman has put together a fun list of examples: it's a cloud, or a bean, or a web, or an explosion, or a highway, or maybe a weird lump.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Do You Need To Know More Than One Language?

Making the Complex Simple - John Sonmez - Fri, 09/25/2015 - 13:00

I had just committed career suicide. Well, that’s what I had been told. My coworkers had just learned that I would be leaving the company. Most understood that. What bothered them was that my current company was a Windows shop and we wrote code in […] However, the position I was moving into developed in […]

The post Do You Need To Know More Than One Language? appeared first on Simple Programmer.

Categories: Programming

Working Effectively with Legacy Tests

Mistaeks I Hav Made - Nat Pryce - Fri, 09/25/2015 - 12:00
At the Agile Cambridge conference on the 1st of October, Duncan McGregor and I are running a workshop entitled Working Effectively with Legacy Tests.

Legacy what‽

In Working Effectively with Legacy Code, Michael Feathers defined “legacy code” as “code without tests”. But not all test code is a panacea. I have often encountered what I can only describe as legacy tests. I define legacy tests as: you have legacy tests if you usually change the code then fix the tests to match the code.

Legacy tests are still useful. They reduce risk by catching unexpected regressions. But it is often difficult to know why a regression occurs, and time is wasted diagnosing the underlying problem. Legacy tests don’t pull their weight. They add a continual overhead to every change you make.

Symptoms of legacy tests I have encountered include:

  • Programmers have to read the production code to work out what the system does, rather than the tests.
  • When a test fails, programmers usually have to fix the test, not the production code.
  • When a test fails, programmers have to step through code in the debugger to work out what failed.
  • It takes a lot of effort to write a test, and after a few tests, writing another test isn’t any easier.
  • Programmers have to carefully study the code of a test to work out what it is actually testing.
  • Programmers have to carefully study the code of a test to work out what it is not testing.
  • Tests are named after issue identifiers in the company’s issue tracker. Bonus points if the issue tracker no longer exists.
  • Test failures are ignored because the team know they are transient or meaningless.

Does any of that sound familiar? If so, please bring some examples of your tests to our workshop. We will run the workshop as a cross between a mob programming session and a clinic. Bring some legacy tests from your own projects. We will work together, in a mob programming style, to improve them. Along the way we will share techniques for writing effective tests and refactoring tests from burdensome legacy to a strategic asset.

Cartoon by Thomas Rowlandson, uploaded to Flickr by Adam Asar. Used under the Creative Commons Attribution 2.0 Generic license.
Categories: Programming, Testing & QA

R: Querying a 20 million line CSV file – data.table vs data frame

Mark Needham - Fri, 09/25/2015 - 07:28

As I mentioned in a couple of blog posts already, I’ve been exploring the Land Registry price paid data set and although I’ve initially been using SparkR I was curious how easy it would be to explore the data set using plain R.

I thought I’d start out by loading the data into a data frame and running the same queries using dplyr.

I’ve come across Hadley Wickham’s readr library before but hadn’t used it and since I needed to load a 20 million line CSV file this seemed the perfect time to give it a try.

The goal of readr is to provide a fast and friendly way to read tabular data into R.

Let’s get started:

> library(readr)
> system.time(read_csv("pp-complete.csv", col_names = FALSE))
   user  system elapsed 
127.367  21.957 159.963 
> df = read_csv("pp-complete.csv", col_names = FALSE)

So it took a little over 2 minutes to process the CSV file into a data frame. Let’s take a quick look at its contents:

> head(df)
Source: local data frame [6 x 16]
                                      X1     X2     X3       X4    X5    X6    X7    X8    X9
                                   (chr)  (int) (date)    (chr) (chr) (chr) (chr) (chr) (chr)
1 {0C7ADEF5-878D-4066-B785-0000003ED74A} 163000   <NA>  UB5 4PJ     T     N     F   106      
2 {35F67271-ABD4-40DA-AB09-00000085B9D3} 247500   <NA> TA19 9DD     D     N     F    58      
3 {B20B1C74-E8E1-4137-AB3E-0000011DF342} 320000   <NA>   W4 1DZ     F     N     L    58      
4 {7D6B0915-C56B-4275-AF9B-00000156BCE7} 104000   <NA> NE61 2BH     D     N     F    17      
5 {47B60101-B64C-413D-8F60-000002F1692D} 147995   <NA> PE33 0RU     D     N     F     4      
6 {51F797CA-7BEB-4958-821F-000003E464AE} 110000   <NA> NR35 2SF     T     N     F     5      
Variables not shown: X10 (chr), X11 (chr), X12 (chr), X13 (chr), X14 (chr), X15 (chr), address (chr)

Now let’s query the data frame to see which postcode has the highest average sale price. We’ll need to group by the ‘X4’ column before applying some aggregate functions:

> library(dplyr)
> system.time(df %>% 
                group_by(X4) %>% 
                summarise(total = sum(as.numeric(X2)), count = n(), ave = total / count) %>%
                arrange(desc(ave)))
   user  system elapsed 
122.557   1.135 124.211 
Source: local data frame [1,164,396 x 4]
         X4     total count      ave
      (chr)     (dbl) (int)    (dbl)
1   SW7 1DW  39000000     1 39000000
2  SW1W 0NH  32477000     1 32477000
3   W1K 7PX  27000000     1 27000000
4  SW1Y 6HD  24750000     1 24750000
5   SW6 1BA  18000000     1 18000000
6  SW1X 7EE 101505645     6 16917608
7    N6 4LA  16850000     1 16850000
8  EC4N 5AE  16500000     1 16500000
9    W8 7EA  82075000     6 13679167
10  W1K 1DP  13500000     1 13500000

What if, instead of the average price by post code, we want to find the most expensive property ever sold?

> system.time(df %>% group_by(X4) %>% summarise(max = max(X2)) %>% arrange(desc(max)))
   user  system elapsed 
 35.438   0.478  36.026 
Source: local data frame [1,164,396 x 2]
         X4      max
      (chr)    (int)
1  SW10 9SU 54959000
2   SW7 1QJ 50000000
3  SW1X 8HG 46013365
4   SW7 1DW 39000000
5  SW1W 0NH 32477000
6  SW1X 7LJ 29350000
7    W8 7EA 27900000
8   SW3 3SR 27750000
9   W1K 7PX 27000000
10 SW1X 7EE 25533000
..      ...      ...

Interestingly that one was much quicker than the first one even though it seems like we only did slightly less work.

At this point I mentioned my experiment to Ashok who suggested I give data.table a try to see if that fared any better. I’d not used it before but was able to get it up and running reasonably quickly:

> library(data.table)
> system.time(fread("pp-complete.csv", header = FALSE))
Read 20075122 rows and 15 (of 15) columns from 3.221 GB file in 00:01:05
   user  system elapsed 
 59.324   5.798  68.956 
> dt = fread("pp-complete.csv", header = FALSE)
> head(dt)
                                       V1     V2               V3       V4 V5 V6 V7  V8 V9
1: {0C7ADEF5-878D-4066-B785-0000003ED74A} 163000 2003-02-21 00:00  UB5 4PJ  T  N  F 106   
2: {35F67271-ABD4-40DA-AB09-00000085B9D3} 247500 2005-07-15 00:00 TA19 9DD  D  N  F  58   
3: {B20B1C74-E8E1-4137-AB3E-0000011DF342} 320000 2010-09-10 00:00   W4 1DZ  F  N  L  58   
4: {7D6B0915-C56B-4275-AF9B-00000156BCE7} 104000 1997-08-27 00:00 NE61 2BH  D  N  F  17   
5: {47B60101-B64C-413D-8F60-000002F1692D} 147995 2003-05-02 00:00 PE33 0RU  D  N  F   4   
6: {51F797CA-7BEB-4958-821F-000003E464AE} 110000 2013-03-22 00:00 NR35 2SF  T  N  F   5   
               V10         V11         V12                          V13            V14 V15
3:   WHELLOCK ROAD                  LONDON                       EALING GREATER LONDON   A

So we’ve already gained one minute in the parsing time which is pretty nice. Let’s try and find the postcode with the highest average price:

> dt[,list(length(V2), sum(V2)), by=V4][, V2 / V1, by=V4][order(-V1)][1:10]
Error in sum(V2) : invalid 'type' (character) of argument

Hmmm, seems like we need to make column ‘V2’ numeric. Let’s do that:

> dt = dt[, V2:= as.numeric(V2)]
> system.time(dt[,list(length(V2), sum(V2)), by=V4][, V2 / V1, by=V4][order(-V1)][1:10])
   user  system elapsed 
  5.108   0.670   6.183 
          V4       V1
 1:  SW7 1DW 39000000
 2: SW1W 0NH 32477000
 3:  W1K 7PX 27000000
 4: SW1Y 6HD 24750000
 5:  SW6 1BA 18000000
 6: SW1X 7EE 16917608
 7:   N6 4LA 16850000
 8: EC4N 5AE 16500000
 9:   W8 7EA 13679167
10:  W1K 1DP 13500000

That’s quite a bit faster than our data frame version – ~5 seconds compared to ~2 minutes. We have lost the total sales and number of sales columns but I expect that’s just because my data.table foo is weak and we could keep them if we wanted.
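I believe computing everything in a single j expression would keep those columns – something like this (a sketch I haven’t timed against the full data set):

> dt[, .(total = sum(V2), count = .N, ave = sum(V2) / .N), by = V4][order(-ave)][1:10]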

But a good start in terms of execution time. Now let’s try the maximum sale price by post code query:

> system.time(dt[,list(max(V2)), by=V4][order(-V1)][1:10])
   user  system elapsed 
  3.684   0.358   4.132 
          V4       V1
 1: SW10 9SU 54959000
 2:  SW7 1QJ 50000000
 3: SW1X 8HG 46013365
 4:  SW7 1DW 39000000
 5: SW1W 0NH 32477000
 6: SW1X 7LJ 29350000
 7:   W8 7EA 27900000
 8:  SW3 3SR 27750000
 9:  W1K 7PX 27000000
10: SW1X 7EE 25533000

We’ve got the same results as before and this time it took ~4 seconds compared to ~35 seconds.

We can actually do even better if we set the postcode column as a key:

> setkey(dt, V4)
> system.time(dt[,list(length(V2), sum(V2)), by=V4][, V2 / V1, by=V4][order(-V1)][1:10])
   user  system elapsed 
  1.500   0.047   1.548 
> system.time(dt[,list(max(V2)), by=V4][order(-V1)][1:10])
   user  system elapsed 
  0.578   0.026   0.604

And that’s as far as I’ve got with my experiment. If there’s anything else I can do to make either of the versions quicker do let me know in the comments.

Oh and for a bit of commentary on what we can learn from the queries…Knightsbridge is a seriously expensive area to live!

Categories: Programming

Your DI framework is killing your code

Actively Lazy - Fri, 09/25/2015 - 06:45

I read a really interesting post recently looking at the difference between typical OO code and a more functional style. There’s a lot to be said for the functional style of coding, even in OO languages like Java and C#. The biggest downside I find is always one of code organisation: OO gives you a discoverable way of organising large amounts of code, while in a functional style you might have less code, but it’s harder to organise it clearly.

It’s not Mike’s conclusion that I want to challenge, it’s his starting point: he starts out with what he describes as “typical object oriented C# code”. Unfortunately I think he’s bang on: even in this toy example, it is exactly like almost all so-called “object oriented” code I see in the wild. My issue with code like this is: it isn’t in the least bit object oriented. It’s procedural code haphazardly organised into classes. Just cos you’ve got classes doesn’t make it OO.

What do I mean by procedural code? The typical pattern I see is a codebase made up of two types of classes:

  1. Value objects, holding all the data with no business logic.
    Extra credit here if it’s an object from your persistence layer, NHibernate or the like.
  2. Classes with one or two public functions – they act on the value objects with no state of their own.
    These are almost always “noun-verbers”.

A noun-verber is the sure-fire smell of poor OO code: OrderProcessor, AccountManager, PriceCalculator. No, calling it an OrderService doesn’t make it any better; it’s still a noun-verber you’re hiding behind the meaningless word “service”. A noun-verber means it’s all function and no state; it’s acting on somebody else’s state. It’s almost certainly a sign of feature envy.

The other design smell with these noun-verbers is that they’re almost always singletons. Oh, you might not realise they’re singletons, because you’ve cleverly hidden that behind your dependency injection framework: but it’s still a singleton. If there’s no state on it, it might as well be a singleton. It’s a static method in all but name. Oh sure, it’s more testable than if you’d actually used the word “static”. But it’s still a static method. If you’d not lied to yourself with your DI framework and written this as a static method, you’d recoil in horror. But because you’ve tarted it up and called it a “dependency”, you think it’s ok. Well it isn’t. It’s still crap code. What you’ve got is procedures, arbitrarily grouped into classes you laughably call “dependencies”. It sure as hell isn’t OO.

One of the properties of good OO design is that code that operates on data is located close to the data. The way the data is actually represented can be hidden behind behaviours. You focus on the behaviour of objects, not the representation of the data. This allows you to model the domain in terms of nouns that have behaviours. A good OO design should include classes that correspond to nouns in the domain, with behaviours that are verbs acting on those nouns: Order.SubmitForPicking(), UserAccount.UpdatePostalAddress(), Basket.CalculatePriceIncludingTaxes().

These are words that someone familiar with the domain but not software would still understand. Does your customer know what an OrderPriceStrategyFactory is for? No? Then it’s not a real thing. It’s some bullshit you made up.

The unloved value objects are, ironically, where the real answer to the design problem lies. These are almost always actual nouns in the domain. They just lack any behaviours which would make them useful. Back to Mike’s example: he has a Customer class – its public interface is just an email address property. A classic value object: all state and no behaviour. [If you want to play along at home, I’ve copied Mike’s code into a git repo, along with my refactored version.]

Customer sounds like a good noun in this domain. I bet the customer would know what a Customer was. If only there were some behaviours this domain object could have. What do Customers do in Mike’s world? Well, this example is all about creating and sending a report. A report is made for a single customer, so I think we could ask the Customer to create its own report. By moving the method from ReportBuilder onto Customer, it is right next to the data it operates on. Suddenly the public email property can be hidden – this seems a useful change: if we change how we contact customers, only Customer needs to change, not the ReportBuilder as well. It’s almost like a single change in the design should be contained within a single class. Maybe someone should write a principle about this single responsibility malarkey…

By following a pattern like this, moving methods from noun-verbers onto the nouns on which they operate, we end up with a clearer design. A Customer can CreateReport(), a Report can RenderAsEmail(), and an Email can Send(). In a domain like this, these seem like obvious nouns and obvious behaviours for those nouns to have. They are all meaningful outside of the made-up world of software. The design models the domain, and it should be clear how the domain model must change in response to each new requirement – since each will represent a change in our understanding of the domain.

So why is this pattern so uncommon? I blame the IoC frameworks. No seriously, they’re completely evil. The first thing I hit when refactoring Mike’s example, even using poor man’s DI, was that my domain objects needed dependencies. Because I’ve now moved the functionality to email a report from ReportingService onto the Report domain object, my domain object now needs to know about Emailer. In the original design it could be injected in, but with an object that’s new’d up, how can I inject the dependency? I either need to pass Emailer into my Report at construction time or when sending as email. When refactoring this I opted to pass in the dependency when it was used, but only because passing it in at construction time is cumbersome without support.
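To make that concrete, here is a minimal sketch of the refactored shape – in Java rather than Mike’s C#, since the idea translates directly. The class and method names follow the post; the signatures, the Emailer interface and the report content are my own assumptions, with the dependency passed in at the point of use:

// A sketch only: the nouns own their behaviour, and the Emailer
// dependency arrives when it is needed, not at construction time.
class Customer {
    private final String emailAddress; // can now be private

    Customer(String emailAddress) {
        this.emailAddress = emailAddress;
    }

    // Behaviour lives next to the data it operates on.
    Report createReport() {
        return new Report(emailAddress, "...report content...");
    }
}

class Report {
    private final String recipient;
    private final String body;

    Report(String recipient, String body) {
        this.recipient = recipient;
        this.body = body;
    }

    Email renderAsEmail() {
        return new Email(recipient, body);
    }
}

class Email {
    private final String to;
    private final String body;

    Email(String to, String body) {
        this.to = to;
        this.body = body;
    }

    // The dependency is passed in at the point of use.
    void send(Emailer emailer) {
        emailer.deliver(to, body);
    }
}

interface Emailer {
    void deliver(String to, String body);
}

Calling code then reads the way the domain does: customer.createReport().renderAsEmail().send(emailer).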

Almost all DI frameworks make this a royal pain. If I want to inject dependencies into a class that also has state (like the details of the customer’s report), it’s basically impossible, so nobody does it. It’s better, simpler, quicker to just pull report creation onto a ReportBuilder and leave Customer alone. But it’s wrong. Customer deserves to have some functionality. He wants to be useful. He’s tired of just being a repository for values. If only there were a way of injecting dependencies into your nouns that wasn’t completely bullshit.

For example, using NInject – pretty typical of DI frameworks – you can create a domain object that requires both injected dependencies and state, but only by string typing. Seriously? In the 21st century, you want me to abandon type safety and put the names of parameters into strings. No. Just say no.

No wonder, when faced with these compromises, people settle for noun-verbers. The alternatives are absolutely horrific. The only DI framework I’ve seen which lets you do this properly is Guice’s assisted injection. Everybody else, as far as I can see, is just plain wrong.
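For contrast, here is a rough sketch of what Guice’s assisted injection looks like, reusing the hypothetical types from the sketch above – this is the constructor-injection variant of Report that the other frameworks make painful. The container supplies the Emailer, the caller supplies the state, and Guice generates the factory implementation, with no strings involved:

import com.google.inject.AbstractModule;
import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;
import com.google.inject.assistedinject.FactoryModuleBuilder;

// Guice generates the implementation of this factory.
interface ReportFactory {
    Report create(Customer customer);
}

class Report {
    private final Emailer emailer;   // dependency, supplied by the container
    private final Customer customer; // state, supplied by the caller

    @Inject
    Report(Emailer emailer, @Assisted Customer customer) {
        this.emailer = emailer;
        this.customer = customer;
    }
}

class ReportModule extends AbstractModule {
    @Override
    protected void configure() {
        // A real module would also bind Emailer to an implementation.
        install(new FactoryModuleBuilder().build(ReportFactory.class));
    }
}

Anywhere a Report is needed, you inject a ReportFactory and call create(customer) – dependencies and state arrive through one type-safe constructor.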

Is your DI framework killing your code? Almost certainly: if you’ve got value objects and noun-verbers, your design is rubbish. And it’s your DI framework’s fault for making a better design too hard.

Categories: Programming, Testing & QA

Charters and Success: Constraints

Constraints encourage creativity

Charters in Agile can be used to help a team bond and focus. As efforts scale up to require teams of teams, the charter is also useful as a communication vehicle. The four categories of information typically included in Agile charters are:

  1. Envisioning Success
  2. Behavior
  3. Timing
  4. Constraints

Each category addresses different concepts that are important to help a team, or a team-of-teams in a scaled effort, act in a coordinated manner. We complete our tour of the four basic topics of Agile charters by reviewing the components that can be used to address constraints.

Summarized below are the most common components used to address constraints. A quick definition is included for each component, along with a recommendation on whether the component should be used for a team or a scaled Agile charter. Yes means the component should typically be used, meh is an unenthusiastic maybe, and no means don’t.

Constraints: Components in this category identify resources, tools, people, money and other assets available to the effort and the boundaries on the use of these items. The definition of assets and the limitations of those assets are inextricably linked. Understanding the true constraints a team faces makes planning and delivering easier!

  1. Out of Scope identifies the functions, needs or requirements that will not be addressed by the effort.
    1. Scaled Charter Recommendation: Meh – Identifying what is out of scope is typically needed when product ownership isn’t a strong practice within the organization. Efforts with weak product owners are often susceptible to scope creep.
    2. Team Charter Recommendation: Meh – A section for out of scope is really only needed when the product owner doesn’t understand their role and does not own the backlog.
  2. Constraints describe boundaries within which the effort will operate. Constraints can include people, resources, budget, calendar and technology.
    1. Scaled Charter Recommendation: Yes – Scaled efforts typically need to establish and communicate a common set of expectations.
    2. Team Charter Recommendation: Meh or No – In stable environments, common constraints are generally well understood and do not need to be documented.
  3. Assumptions are ideas, concepts or presumptions that are accepted as true without proof. In software development, assumptions are generally shortcuts taken based on perceptions of the development and business environment. Once upon a time a wise boss informed me that the word assume could be broken down into the words ass, u and me. His point was that making unrecognized assumptions would make an ass out of us all.
    1. Scaled Charter Recommendation: Yes – In scaled efforts it is easy for a wide range of assumptions to be made and then not shared. Documenting those assumptions is critical for ensuring they are shared. Capturing and recognizing assumptions can be the difference between delivering value and delivering…crap.
    2. Team Charter Recommendation: Yes or Meh – In stable environments, and with teams that have worked together for at least a few months, common assumptions are generally well understood and do not need to be documented. When the environment or team changes, consider documenting assumptions.
  4. Resources can be many things, ranging from hardware and office space to people. Agile projects have just as many resource constraints as any other type of project. Addressing any constraint begins with recognizing it.
    1. Scaled Charter Recommendation: Yes – Documenting resource constraints allows wide groups of people to be cognizant of them. Note: Resource constraints that translate to risks should be treated as risks (see below).
    2. Team Charter Recommendation: Meh – Common resource constraints such as the size of the team are givens and should be reflected in planning activities rather than in a document. Only document resource constraints IF they are out of the ordinary. A simple documentation technique is to write the constraint on a flip chart and post it on the wall.
  5. Risks are the potential for something to go wrong that would reduce the value an effort can deliver. Agile efforts have a different risk profile than classic projects; however, the risks that exist must still be recognized. Rather than putting risks in a charter, they should be incorporated into the backlog or into the definition of done.
    1. Scaled Charter Recommendation: No
    2. Team Charter Recommendation: No
  6. Involved Groups define the interface between the effort and external groups. Documenting involved groups can be helpful to ensure the correct conversations and handoffs occur.
    1. Scaled Charter Recommendation: Yes – As Agile efforts are scaled up, the number of interfaces between groups inside (and potentially outside) of the effort increases.
    2. Team Charter Recommendation: No – The only reason for documenting involved groups at a team level is if something very out of the ordinary is going on.

Use the components that make sense for your effort. Almost no Agile effort will need every component. As mentioned for team-level charters, start with the minimum subset of norms, elevator speech, product box and success criteria. For scaled efforts, evaluate the sections we identified as Yes in each of the four categories. A charter for a team or a scaled effort is not anti-Agile if the information captured addresses real needs and facilitates delivery of value.

Categories: Process Management

When Should I List A Programming Language On My Resume?

Making the Complex Simple - John Sonmez - Thu, 09/24/2015 - 16:00

In this episode, I talk about when you should list a programming language on your resume. Full transcript: John: Hey, John Sonmez from I got a question about resumes here and about when you can put a job, or rather a programming language, on your resume. This question is from Catherine and Catherine says, […]

The post When Should I List A Programming Language On My Resume? appeared first on Simple Programmer.

Categories: Programming

Better Density and Lower Prices for Azure’s SQL Elastic Database Pools

ScottGu's Blog - Scott Guthrie - Wed, 09/23/2015 - 21:41

A few weeks ago, we announced the preview availability of the new Basic and Premium Elastic Database Pool tiers with our Azure SQL Database service.  Elastic Database Pools enable you to run multiple, isolated and independent databases that can be auto-scaled across a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.

Today, we are announcing some nice changes to the pricing structure of Elastic Database Pools as well as changes to the density of elastic databases within a pool.  These changes make it even more attractive to use Elastic Database Pools to build your applications.

Specifically, we are making the following changes:

  • Finalizing the eDTU price – With Elastic Database Pools you purchase units of capacity that we call eDTUs – which you can then use to run multiple databases within a pool.  We have decided not to increase the price of eDTUs as we go from preview to GA.  This means that you’ll be able to pay a much lower price (about 50% less) for eDTUs than many developers expected.
  • Eliminating the per-database fee – In addition to lower eDTU prices, we are also eliminating the per-database fee that we have had during the preview. This means you no longer need to pay a per-database charge to use an Elastic Database Pool, which makes the pricing much more attractive for scenarios where you want to have lots of small databases.
  • Pool density – We are announcing increased density limits that enable you to run many more databases per Elastic Database Pool. See the chart below under “Maximum databases per pool” for specifics. This change will take effect at the time of general availability, but you can design your apps around these numbers.  The increased pool density limits will make Elastic Database Pools even more attractive.



Below are the updated parameters for each of the Elastic Database Pool options with these new changes:


For more information about Azure SQL Database Elastic Database Pools and management tools, go to the technical overview here.

Hope this helps,

Scott

Categories: Architecture, Programming

Games developer, Dots, share their Do’s and Don’ts for improving your visibility on Google Play

Android Developers Blog - Wed, 09/23/2015 - 18:35
Posted by Lily Sheringham, Developer Marketing at Google Play
Editor’s note: A few weeks ago we shared some tips from game developer, Seriously, on how they’ve been using notifications successfully to drive ongoing engagement. This week, we’re sharing tips from Christian Calderon at US game developer, Dots, on how to successfully optimize your Play Store Listing. -Ed.

A well thought-out Google Play store listing can significantly improve the discoverability of your app or game and drive installations. With the recent launch of Store Listing Experiments on the Google Play Developer Console, you can now conduct A/B tests on the text and graphics of your store listing page and use the data to make more informed decisions.

Dots is a US-founded game developer which released the popular game, Dots, and its addictive sequel, TwoDots. Dots used its store listings to showcase its brands and improve conversions by letting players know what to expect.

Christian Calderon, Head of Marketing for Dots, shared his top tips with us on store listings and visibility on Google Play.

Do’s and Don’ts for optimizing store listings on Google Play
  • Do be creative and unique with the icon. Try to visually convince the user that your product is interesting and in alignment with what they are looking for.
  • Don’t spam keywords in your app title. Keep the title short, original and thoughtful, and keep your brand in mind when representing your product offering.
  • Do remember to respond quickly to reviews and implement a scalable strategy to incorporate feedback into your product offering. App ratings are important social proof that your product is well liked.
  • Don’t overload the ‘short description’. Keep it concise. It should be used as a call-to-action that addresses your product’s core value proposition and invites the user to install the application. Remember to consider SEO best practices.
  • Do invest in a strong overall paid and organic acquisition strategy. More downloads will make your product seem more credible to users, increasing the likelihood that a user will install your app.
  • Don’t overuse text in your screenshots. They should create a visual narrative for what’s in your game and help users visualize your product offering, using localization where possible.
  • Do link your Google Play store listing to your website, social media accounts, press releases and any of your consumer-facing channels that may drive organic visibility to your target market. This can impact your search positioning.
  • Don’t have a negative, too short or confusing message in your “What’s New” copy. Let users know what updates, product changes or bug fixes have been implemented in new versions. Keep your copy buoyant, informative, concise and clear.
  • Do use video to narrate the core value proposition. For TwoDots, our highest converting videos consist of gameplay, showcasing features and events within the game that let the player know exactly what to expect.
  • Don’t flood the user with information in the page description. Keep the body of the page description organized and concise, and test different structural patterns to find what works best for you and your product!

Use Google Play Store Listing Experiments to increase your installs

As part of the 100 Days of Google Dev video series, Kobi Glick from the Google Play team explains how to test different graphics and text on your app or game’s Play Store listing to increase conversions using the new Store Listing Experiments feature in the Developer Console.

Find out more about using Store Listing Experiments to turn more of your visits into installs.
Categories: Programming

How will new memory technologies impact in-memory databases?

This is a guest post by Yiftach Shoolman, Co-founder & CTO of redislabs. Will 3D XPoint change everything? Not as much as you might hope...

Recently, investors, analysts, partners and customers have asked me how the announcement from Intel and Micron about their new 3D XPoint memory technology will affect the in-memory database market. In these discussions, a common question was “Who needs an in-memory database if all the non-in-memory databases can achieve similar performance with 3D XPoint technology?” Well, I think that’s a valid question, so I’ve decided to take a moment to describe how we think this technology will influence our market.

First, a little background...

The motivation of Intel and Micron is clear -- DRAM is expensive and hasn’t changed much during the last few years (as shown below). In addition, there are currently only three major makers of DRAM on the planet (Samsung Electronics, Micron and SK Hynix), which means the competition between them is not as cutthroat as it was several years ago, when four or five major manufacturers were in the market.

DRAM Price Trends
Categories: Architecture

My Business is Software

Making the Complex Simple - John Sonmez - Wed, 09/23/2015 - 13:00

When I attended Drury University, I was a bit of an enigma. My peer group in the business school didn’t understand why mathematics and computation fascinated me so strongly. My other peer group in the computer science department called that other building across the street “B-School,” and in their minds that school hadn’t even […]

The post My Business is Software appeared first on Simple Programmer.

Categories: Programming