Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Sponsored Post: Pier 1, Aerospike, Clubhouse, Stream, Scalyr, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring? 
  • Pier 1 Imports is looking for an amazing Sr. Website Engineer to join our growing team!  Our customer continues to evolve the way she prefers to shop, speak to, and engage with us at Pier 1 Imports.  Driving us to innovate more ways to surprise and delight her expectations as a Premier Home and Decor retailer.  We are looking for a candidate to be another key member of a driven agile team. This person will inform and apply modern technical expertise to website site performance, development and design techniques for Pier.com. To apply please email cmwelsh@pier1.com. More details are available here.

  • Etleap is looking for Senior Data Engineers to build the next-generation ETL solution. Data analytics teams need solid infrastructure and great ETL tools to be successful. It shouldn't take a CS degree to use big data effectively, and abstracting away the difficult parts is our mission. We use Java extensively, and distributed systems experience is a big plus! See full job description and apply here.

  • Advertise your job here! 
Fun and Informative Events
  • DBTA Roundtable OnDemand Webinar: Leveraging Big Data with Hadoop, NoSQL and RDBMS. Watch this recent roundtable discussion hosted by DBTA to learn about key differences between Hadoop, NoSQL and RDBMS. Topics include primary use cases, selection criteria, when a hybrid approach will best fit your needs and best practices for managing, securing and integrating data across platforms. Brian Bulkowski, CTO and Co-founder of Aerospike, presented along with speakers from Cask Data and Splice Machine. View now.

  • Advertise your event here!
Cool Products and Services
  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • Etleap provides a SaaS ETL tool that makes it easy to create and operate a Redshift data warehouse at a small fraction of the typical time and cost. It combines the ability to do deep transformations on large data sets with self-service usability, and no coding is required. Sign up for a 30-day free trial.

  • InMemory.Net provides a native .NET in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM and ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

  • Working on a software product? Clubhouse is a project management tool that helps software teams plan, build, and deploy their products with ease. Try it free today or learn why thousands of teams use Clubhouse as a Trello alternative or JIRA alternative.

  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5 minute interactive tutorial. Stream is free up to 3 million feed updates so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring Devops and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure, including apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • VividCortex is a SaaS database monitoring product that provides the best way for organizations to improve their database performance, efficiency, and uptime. Currently supporting MySQL, PostgreSQL, Redis, MongoDB, and Amazon Aurora database types, it's a secure, cloud-hosted platform that eliminates businesses' most critical visibility gap. VividCortex uses patented algorithms to analyze and surface relevant insights, so users can proactively fix future performance problems before they impact customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Advertise your product or service here!

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.

Categories: Architecture

Why Getting to Done Is So Important

Mike Cohn's Blog - Tue, 04/11/2017 - 17:00

One of the tenets of Scrum is the value of getting work done. At the start of a sprint, the team selects some set of product backlog items and takes on the goal of completing them.

A good Scrum team realizes they are better off finishing 5 product backlog items than being half done with 10.

But why?

Faster Feedback

One reason to emphasize getting work to done is that it shortens feedback cycles. When something is done, users can touch it and see it. And they can provide better feedback.

Teams should still seek feedback as early as possible from users, including while developing the functionality. But feedback is easier to provide, more informed, and more reliable when a bit of functionality is finished rather than half done.

Faster Payback

A second reason to emphasize finishing features is that finished features can be sold; unfinished features cannot.

All projects represent an economic investment--time and money are invested in developing functionality.

An organization cannot begin regaining its investment by delivering partially developed features. A product with 10 half-done features can be thought of as inventory sitting on a warehouse floor. That inventory cannot be sold until each feature is complete.

In contrast, a product with 5 finished features is sellable. It can begin earning money back against the investment.

Progress Is Notoriously Hard to Estimate

A third reason for emphasizing getting features all the way to done is that progress is notoriously hard to estimate.

Suppose you ask a developer how far along he or she is. And the developer says “90% done.”

Great, you think, it’s almost done. A week later you return to speak with the same developer. You are now expecting the feature to be done--100% complete. But the developer again informs you that the feature is 90% done.

How can this be?

It’s because the size of the problem has grown. When you first asked, the developer truly was 90% done with what he or she could see of the problem. A week later the developer could see more of the problem, so the size of the work grew. And the developer is again confident in thinking 90% of the work is done.

This leads to what is known as the 90% syndrome: Software projects are 90% done for 90% of their schedules.
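The dynamic is easy to reproduce with a toy model. This is a hedged sketch -- the weekly growth numbers are invented for illustration, not taken from any real project: each week the developer makes genuine progress, but the visible scope grows almost as fast, so the honestly reported fraction stays stuck near 90%.

```python
# Toy model of the 90% syndrome: completed work grows each week,
# but so does the visible scope, so "percent done" barely moves.

def report(completed, visible_scope):
    """What the developer honestly reports: done / what they can see."""
    return completed / visible_scope

completed, visible_scope = 90.0, 100.0  # week 0: reports "90% done"
for week in range(1, 5):
    completed += 8.0       # real progress every week...
    visible_scope += 9.0   # ...but newly discovered work grows the scope
    print(f"week {week}: {report(completed, visible_scope):.0%} done")
```

Every week the rounded report reads the same, even though real work is being finished -- which is exactly why "not started" and "done" are the only trustworthy states.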

Not Started and Done

In agile, we avoid the 90% syndrome by making sure that at the end of each iteration, all work is either:

  • Not started
  • Done

We’re really good at knowing when we haven’t started something. We’re pretty good at knowing when we’re done with something. We’re horrible anywhere in between.

What’s Your Experience?

Have you experienced problems with teams being 90% done? How have you overcome these problems? Please share your thoughts in the comments below.

Code Health: Google's Internal Code Quality Efforts

Google Testing Blog - Tue, 04/11/2017 - 01:52
By Max Kanat-Alexander, Tech Lead for Code Health and Author of Code Simplicity

There are many aspects of good coding practices that don't fall under the normal areas of testing and tooling that most Engineering Productivity groups focus on in the software industry. For example, having readable and maintainable code is about more than just writing good tests or having the right tools -- it's about having code that can be easily understood and modified in the first place. But how do you make sure that engineers follow these practices while still allowing them the independence that they need to make sound engineering decisions?

Many years ago, a group of Googlers came together to work on this problem, and they called themselves the "Code Health" group. Why "Code Health"? Well, many of the other terms used for this in the industry -- engineering productivity, best practices, coding standards, code quality -- have connotations that could lead somebody to think we were working on something other than what we wanted to focus on. What we cared about was the processes and practices of software engineering in full -- any aspect of how software was written that could influence the readability, maintainability, stability, or simplicity of code. We liked the analogy of having "healthy" code as covering all of these areas.

This is a field that many authors, theorists, and conference speakers touch on, but not an area that usually has dedicated resources within engineering organizations. Instead, in most software companies, these efforts are pushed by a few dedicated engineers in their extra time or led by the senior tech leads. However, every software engineer is actually involved in code health in some way. After all, we all write software, and most of us care deeply about doing it the "right way." So why not start a group that helps engineers with that "right way" of doing things?

This isn't to say that we are prescriptive about engineering practices at Google. We still let engineers make the decisions that are most sensible for their projects. What the Code Health group does is work on efforts that universally improve the lives of engineers and their ability to write products with shorter iteration time, decreased development effort, greater stability, and improved performance. Everybody appreciates their code getting easier to understand, their libraries getting simpler, etc. because we all know those things let us move faster and make better products.

But how do we accomplish all of this? Well, at Google, Code Health efforts come in many forms.

There is a Google-wide Code Health Group composed of 20% contributors who work to make engineering at Google better for everyone. The members of this group maintain internal documents on best practices and act as a sounding board for teams and individuals who wonder how best to improve practices in their area. Once in a while, for critical projects, members of the group get directly involved in refactoring code, improving libraries, or making changes to tools that promote code health.

For example, this central group maintains Google's code review guidelines, writes internal publications about best practices, organizes tech talks on productivity improvements, and generally fosters a culture of great software engineering at Google.

Some of the senior members of the Code Health group also advise engineering executives and internal leadership groups on how to improve engineering practices in their areas. It's not always clear how to implement effective code health practices in an area -- some people have more experience than others making this happen broadly in teams, and so we offer our consulting and experience to help make simple code and great developer experiences a reality.

In addition to the central group, many products and teams at Google have their own Code Health group. These groups tend to work more closely on actual coding projects, such as addressing technical debt through refactoring, making tools that detect and prevent bad coding practices, creating automated code formatters, or making systems for automatically deleting unused code. Usually these groups coordinate and meet with the central Code Health group to make sure that we aren't duplicating efforts across the company and so that great new tools and systems can be shared with the rest of Google.

Throughout the years, Google's Code Health teams have had a major impact on the ability of engineers to develop great products quickly at Google. But code complexity isn't an issue that only affects Google -- it affects everybody who writes software, from one person writing software on their own time to the largest engineering teams in the world. So in order to help out everybody, we're planning to release articles in the coming weeks and months that detail specific practices that we encourage internally -- practices that can be applied everywhere to help your company, your codebase, your team, and you. Stay tuned here on the Google Testing Blog for more Code Health articles coming soon!

Categories: Testing & QA

Changes to Device Identifiers in Android O

Android Developers Blog - Mon, 04/10/2017 - 23:33
Posted by Giles Hogben, Privacy Engineer

Android O introduces some improvements to help provide user control over the use of identifiers. These improvements include:

  • limiting the use of device-scoped identifiers that are not resettable
  • updating the Android O Wi-Fi stack in conjunction with changes to the Wi-Fi chipset firmware used by Pixel, Pixel XL and Nexus 5X phones to randomize MAC addresses in probe requests
  • updating the way that applications request account information and providing more user-facing control

Device identifier changes
Here are some of the device identifier changes for Android O:

Android ID
In O, Android ID (Settings.Secure.ANDROID_ID or SSAID) has a different value for each app and each user on the device. Developers requiring a device-scoped identifier should instead use a resettable identifier, such as Advertising ID, giving users more control. Advertising ID also provides a user-facing setting to limit ad tracking.

Additionally in Android O:

  • The ANDROID_ID value won't change on package uninstall/reinstall, as long as the package name and signing key are the same. Apps can rely on this value to maintain state across reinstalls.
  • If an app was installed on a device running an earlier version of Android, the Android ID remains the same when the device is updated to Android O, unless the app is uninstalled and reinstalled.
  • The Android ID value only changes if the device is factory reset or if the signing key rotates between uninstall and reinstall events.
  • This change is only required for device manufacturers shipping with Google Play services and Advertising ID. Other device manufacturers may provide an alternative resettable ID or continue to provide ANDROID_ID.

Build.SERIAL
To be consistent with runtime permissions required for access to IMEI, use of android.os.Build.SERIAL is deprecated for apps that target Android O or newer. Instead, they can use a new Android O API, Build.getSerial(), which returns the actual serial number, as long as the caller holds the PHONE permission. In a future version of Android, apps targeting Android O will see Build.SERIAL as "UNKNOWN". To avoid breaking legacy app functionality, apps targeting prior versions of Android will continue to see the device's serial number, as before.

Net.Hostname
Net.Hostname provides the network hostname of the device. In previous versions of Android, the default value of the network hostname and the value of the DHCP hostname option contained Settings.Secure.ANDROID_ID. In Android O, net.hostname is empty and the DHCP client no longer sends a hostname, following IETF RFC 7844 (anonymity profile).

Widevine ID
For new devices shipping with O, the Widevine Client ID returns a different value for each app package name and web origin (for web browser apps).

Unique system and settings properties
In addition to Build.SERIAL, there are other settings and system properties that aren't available in Android O. These include:

  • ro.runtime.firstboot: Millisecond-precise timestamp of first boot after last wipe or most recent boot
  • htc.camera.sensor.front_SN: Camera serial number (available on some HTC devices)
  • persist.service.bdroid.bdaddr: Bluetooth MAC address property
  • Settings.Secure.bluetooth_address: Device Bluetooth MAC address. In O, this is only available to apps holding the LOCAL_MAC_ADDRESS permission.

MAC address randomization in Wi-Fi probe requests
We collaborated with security researchers1 to design robust MAC address randomization for Wi-Fi scan traffic produced by the chipset firmware in Google Pixel and Nexus 5X devices. The Android Connectivity team then worked with manufacturers to update the Wi-Fi chipset firmware used by these devices.

Android O integrates these firmware changes into the Android Wi-Fi stack, so that devices using these chipsets with updated firmware and running Android O or above can take advantage of them.

Here are some of the changes that we've made to Pixel, Pixel XL and Nexus 5X firmware when running O+:

  • For each Wi-Fi scan while it is disconnected from an access point, the phone uses a new random MAC address (whether or not the device is in standby).
  • The initial packet sequence number for each scan is also randomized.
  • Unnecessary Probe Request Information Elements have been removed: Information Elements are limited to the SSID and DS parameter sets.
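The firmware implementation itself is not shown here, but the general technique of generating a fresh address per scan can be sketched in a few lines. This is a hypothetical illustration, not the Pixel/Nexus firmware's actual algorithm: draw random bytes, then set the locally-administered bit and clear the multicast bit of the first octet so the result is a valid unicast MAC address.

```python
import os

def random_probe_mac() -> str:
    """Return a random locally administered unicast MAC address.

    A sketch of the general randomization technique only -- not the
    actual algorithm used by the Pixel/Nexus Wi-Fi chipset firmware.
    """
    octets = bytearray(os.urandom(6))
    # Set the locally-administered bit (0x02) so the address does not
    # collide with any vendor-assigned OUI, and clear the multicast
    # bit (0x01) so it is a valid unicast source address.
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

# A device would draw a fresh address like this for each disconnected scan.
print(random_probe_mac())
```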

Changes in the getAccounts API
In Android O and above, the GET_ACCOUNTS permission is no longer sufficient to gain access to the list of accounts registered on the device. Applications must use an API provided by the app managing the specific account type or the user must grant permission to access the account via an account chooser activity. For example, Gmail can access Google accounts registered on the device because Google owns the Gmail application, but the user would need to grant Gmail access to information about other accounts registered on the device.

Apps targeting Android O or later should either use AccountManager#newChooseAccountIntent() or an authenticator-specific method to gain access to an account. Applications with a lower target SDK can still use the current flow.

In Android O, apps can also use the AccountManager.setAccountVisibility()/ getVisibility() methods to manage visibility policies of accounts owned by those apps.

In addition, the LOGIN_ACCOUNTS_CHANGED_ACTION broadcast is deprecated, but still works in Android O. Applications should use addOnAccountsUpdatedListener() to get updates about accounts at runtime for a list of account types that they specify.

Check out Best Practices for Unique Identifiers for more information.


Notes
  1. Glenn Wilkinson and team at Sensepost, UK; Célestin Matte, Mathieu Cunche: University of Lyon, INSA-Lyon, CITI Lab, Inria Privatics; Mathy Vanhoef, KU Leuven
Categories: Programming


Five things we’ve learned about monitoring containers and their orchestrators

This is a guest post by Apurva Davé, who is part of the product team at Sysdig.

Having worked with hundreds of customers on building a monitoring stack for their containerized environments, we’ve learned a thing or two about what works and what doesn’t. The outcomes might surprise you - including the observation that instrumentation is just as important as the application when it comes to monitoring.

In this post, I wanted to cover some details around what it takes to build a scale-out, highly reliable monitoring system to work across tens of thousands of containers. I’ll share a bit about what our infrastructure looks like, the design choices we made, and tradeoffs. The five areas I’ll cover:

  • Instrumenting the system

  • Relating your data to your applications, hosts, and containers.

  • Leveraging orchestrators

  • Deciding what data to store

  • How to enable troubleshooting in containerized environments

For context, Sysdig is the container monitoring company. We’re based on the open source Linux troubleshooting project by the same name. The open source project allows you to see every single system call down to process, arguments, payload, and connection on a single host. The commercial offering turns all this data into thousands of metrics for every container and host, aggregates it all, and gives you dashboarding, alerting, and an htop-like exploration environment.

Ok, let’s get into the details, starting with the impact containers have had on monitoring systems.

Why do containers change the rules of the monitoring game?
Categories: Architecture

Book of the Month

Herding Cats - Glen Alleman - Mon, 04/10/2017 - 16:41

I've been working in the probabilistic estimating business for a decade or two.

One of the seminal books that started it all is Short-Term Forecasting. This is the basis of the Box-Jenkins method, and of George Box's quote that is universally misquoted by most people in the Agile community:

All Models are Wrong, Some Models are Useful

There is also - of course - the nonsense that forecasts are not estimates, popularized by #NoEstimates advocates.

The Box-Jenkins modeling process expands to the ARIMA (Auto-Regressive Integrated Moving Average) models we use for cost and schedule modeling. This approach uses past performance to forecast future outcomes. This empirical method is used nearly universally to forecast future outcomes from past time series, from stock markets to Estimates to Complete and Estimates at Completion. In our domain, we archive all the performance numbers and then compare them against the planned performance models. The result is then used to adjust the Plan or the Estimate based on the Root Cause of the variance.

This is called Closed-Loop Control, and it works in all domains where stochastic processes underlie the work, from software development to process control to flight control systems to natural systems.
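The closed loop can be sketched in a few lines of code. This is an illustrative toy with invented numbers, not a full Box-Jenkins/ARIMA model (a real model also handles trend, seasonality, and autocorrelation): forecast the next period from past performance, compare the forecast against the actual, and fold the variance back into the next estimate.

```python
# Toy closed-loop forecast: exponential smoothing over past performance.
# Each step compares plan (forecast) vs. actual and folds the variance
# back into the next estimate -- the essence of closed-loop control.

def smooth_forecast(actuals, alpha=0.4):
    """Forecast the next value from observed past performance."""
    forecast = actuals[0]
    for actual in actuals[1:]:
        variance = actual - forecast            # compare plan vs. actual
        forecast = forecast + alpha * variance  # fold the variance back in
    return forecast

weekly_velocity = [21, 18, 24, 19, 22]  # invented past performance data
remaining_work = 60
next_week = smooth_forecast(weekly_velocity)
weeks_to_complete = remaining_work / next_week  # a crude Estimate to Complete
```

The smoothing factor alpha controls how aggressively the estimate chases recent variances; the value 0.4 here is an arbitrary choice for illustration.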

Related articles:
  • The Flaw of Empirical Data Used to Make Decisions About the Future
  • Do The Math
  • Humpty Dumpty and #NoEstimates
  • The Flaw of Averages and Not Estimating
  • Herding Cats: Median, Mean, Mode without Variance is Worthless
  • Doing the Math
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
  • Deadlines Always Matter
Categories: Project Management


De-mystifying Jest Snapshot Test Mocks

Xebia Blog - Mon, 04/10/2017 - 12:48

So, let's say you have a nice React Native setup with the Jest testing library. You want to snapshot-test all your components of course! But you're getting seemingly unrelated errors when you tried to mock a third party module in your snapshots and you're lost in all that API documentation. Let's dig into an example […]

The post De-mystifying Jest Snapshot Test Mocks appeared first on Xebia Blog.

JaxDevops 2017

I had the chance to attend JaxDevOps London; here is a valuable session from Daniel Bryant about common mistakes made with microservices…

  1.  7 (MORE) DEADLY SINS:
    1. Lust [Use the Unevaluated Latest and Greatest Tech]:
      1. Be an expert on Evaluation
      2. Spine Model: going up the spine solves the problem - don't stop at the first step (Tools), but work through Practices, Principles, Values, and Needs.
    2. Gluttony: Communication Lock-In
      1. Don’t rule out RPC [eg. GRPC]
      2. Stick to the Principle of Least Surprise: [Json over Https]
      3. Don’t let API Gateway murphing into EBS
      4. Check the cool tools: Mulesoft,Kong, Apigee, AWS API Gateway
    3. Greed: What Is Mine [within the Org]
      1. "We've decided to reform our teams around Squads, Chapters, and Guilds": be aware of Cargo-Culting
    4. Sloth: Getting Lazy with NFRs
      1. The "ilities" - Availability, Scalability, Auditability, Testability - can become afterthoughts
      2. Security: Aaron Grattafiori DockerCon 2016 Talk/InfoQ
      3. ThoughtWorks: AppSec & Microservices
      4. Build Pipeline:
        1. Performance and load testing:
          1. Gatling/JMeter
          2. Flood.IO [upload Gatling script/scale]
        2. Security Testing:
          1. FindSecBugs/OWASP Dependency-Check
          2. BDD-Security (OWASP ZAP)/Arachni
          3. Gauntlt/Serverspec
          4. Docker Bench for security/Clair
    5. Wrath: Blowing Up When Bad Things Happen
      1. Michael Nygard (Release It!): turn ops into a Simian Army
      2. Distributed Transactions:
        1. Don’t push transactional scope into Single Service
        2. Supervisor/Process Manager: Erlang OTP, Akka, EIP
      3. Focus on What Matters:
        1. CI/CD
        2. Mechanical Sympathy
        3. Logging
        4. Monitoring
      4. Consider:
        1. DEIS
        2. CloudFoundry
        3. OpenShift
    6. Envy: The Shared Single Domain and (Data Store) Fallacy
      1. Know your DDD:
        1. Entities
        2. Value Objects
        3. Aggregates and Roots
        4. Book:
          1. Implementing Domain-Driven Design
          2. Domain-Driven Design Distilled [high level]
            1. Context Mapping [Static] & Event Storming [Dynamic]
              1. infoq
              2. ziobrando
            2. Data Stores:
              1. RDBMS:
              2. Cassandra
              3. Graph -> Neo4j, Titan
              4. Support! Ops overhead
    7. Pride: Testing in the World
      1. Testing Strategies in a Microservice Architecture [Martin Fowler]
      2. Andrew Morgan [Virtual API Service Testing]
      3. Service Virtualisation:
        1. Classic Ones:
          1. CA Service Virtualization
          2. Parasoft Virtualize
          3. HPE Service Virtualization
          4. IBM Test Virtualization Server
        2. New kids:
          1. [SpectoLabs] Hoverfly: Lightweight
            1. Fault Injection
            2. Chaos Monkey
          2. Wiremock
          3. VCR/BetaMax
          4. MounteBank
          5. Mirage


Categories: Programming

SPaMCAST 437 Steven Adams, Five Dysfunctions of a Team

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 437 features a discussion of our recent re-read of  The Five Dysfunctions of a Team by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing) with Steven Adams.  Steve has participated on nearly all of the re-reads, providing his unique wisdom.  It was a great talk that helped me understand why the book has (and continues to have) such a large impact on how I view Agile and software development. Steve also has some advice on how to get the most out of the re-read feature.

Steve lives in the San Francisco Bay Area (a.k.a. Silicon Valley), where he has a successful career in software development.  Steve has worked for Hewlett Packard, Access Systems Inc., Trilliant Inc., and Sony Mobile Communications, and has consulted at Cisco Systems.  Steve has a computer science degree from California State University at Chico, learned software project management at Hewlett-Packard, and, in 2009, started his Agile journey with Sony Ericsson.  Steve enjoys listening to technical podcasts, and SPaMCAST was one of the first and is a favorite!  Steve is also an avid (road) bicyclist and is on track to log over 3,500 miles in 2016.

Blog: https://sadams510.wordpress.com/

Twitter: @stevena510

Re-Read Saturday News

This week we begin our read of Holacracy with a few logistics and a review of the introduction.  We have a short entry this week that will give you time to buy a copy today and read along!  If you have not listened to my interview with Jeff Dalton on Software Process and Measurement Cast 433, I would suggest a quick listen. Jeff has practical experience with using the concepts of holacracy in his company and as a tool in his consultancy.  

Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson was published by Henry Holt and Company in 2015.  The book comprises a foreword, 10 chapters in three parts, notes, acknowledgments, and an index.  My plan is to read and review one chapter per week.  We will move on to a new book in approximately 12 weeks.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on leveraging sizing in testing. Size can be a useful tool for budgeting and planning both at the portfolio level and the team level.

We will also have a new column from Gene Hughson, who brings his Form Follows Function blog to the Cast, and a new column from Kim Pries, the Software Sensei.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.

 


Categories: Process Management


Median, Mean, Mode without Variance is Worthless

Herding Cats - Glen Alleman - Sun, 04/09/2017 - 16:54

Just had a conversation of sorts where it was stated, "I look at the median EQF of a portfolio as one gauge of the quality of the overall underlying data." The problem is that without the variance of any single-point metric, that metric is pretty much worthless.

Here's a little exercise we use at conferences and training sessions.

  • I'm going to send you on a mission to determine the Mode of a measurement. You'll get a clipboard, a hat, a folding chair, and a few other items. You'll go to two locations for 365 days and record the high temperature of the day.
  • Let's start with Trinidad-Tobago. You've got your clipboard, a nice folding chair sitting on the beach, a good sun hat, and sunscreen.
  • Next, you'll go to Cody, Wyoming, just north of my home, and do the same thing.

Now let's look at the Mode of those numbers. Why the Mode? It's the most recurring value in a time series. This is what we use when modeling - Estimating - variables in project planning. The Mode of a Task's work duration is the Most Likely value drawn from a sample of possible values in the Probability Distribution Function. It is the value that occurs most often in the Task's life. If we have a similar Task - through a Reference Class Forecasting process - we want to use the Mode of the duration. Not the Mean (the average) or the Median (the middlemost). The Mode is what the Task will take most of the time.

So now let's look at the data.

  • For Trinidad-Tobago, the number that occurs most often across the 365 days of sitting in your chair writing down numbers is 84°F
  • In Cody, Wyoming, the number that occurs most often in the 365 days is - wait for it - 84°F

Now, these are the Mode numbers. The Most Likely To Occur. For temperatures, this is a bit of a stretch because temperatures are driven by a periodic cycle of seasons and weather.

But for task durations, this is a legitimate starting point for estimating. What is the most likely duration for this work? Without the variance on the Mode, the Mean, or the Median (the middlemost number), there can be no confidence you wouldn't freeze to death in February if you go to Wyoming in your shorts and tee shirt.
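The point of the exercise can be reproduced in a few lines. The temperature samples below are invented stand-ins for the 365-day series:

```python
# Two synthetic daily-high series with the same mode but very different spread
# (invented numbers standing in for the Trinidad and Cody thought experiment).
from statistics import mode, pstdev

trinidad = [82, 83, 84, 84, 84, 85, 86, 84, 83, 84]
cody     = [-10, 5, 20, 84, 84, 95, 84, 40, 60, 84]

print(mode(trinidad), mode(cody))      # 84 84 - the same "most likely" value
print(pstdev(trinidad), pstdev(cody))  # ~1 vs ~36 - wildly different variance
```

The single-point statistic is identical for both locations; only the variance tells you which one can freeze you to death.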

Related articles How to Avoid the "Yesterday's Weather" Estimating Problem Want To Learn How To Estimate? Capabilities Based Planning
Categories: Project Management


Same Same but Different †

Herding Cats - Glen Alleman - Sun, 04/09/2017 - 16:47

Much of the conversation in social media around agile techniques seems to be based on the differences between the variety of choices of a method, a process, or a practice, and on definitions of terms for those choices. There seems to be little in the way of shared principles when, in fact, a great deal of principle is shared.

I work in a domain where systems are engineered for the customer. These systems fall into the Software Intensive System of Systems (SISoS) category. In this domain innovation is the basis of success. The notion that innovation and engineering - software engineering - are somehow in conflict is common. 

... creativity is simply the production of novel, appropriate ideas, from science, to the arts, to education, to business, to everyday life. The ideas must be novel - different from what's been done before - but they can't be simply bizarre; they must be appropriate to the problem or opportunity presented. Creativity is the first step in innovation, which is the successful implementation of those novel, appropriate ideas. And innovation is absolutely vital for long-term corporate success. - "Motivating Creativity in Organizations: On Doing What You Love and Loving What You Do," [2]

In many conversations about managing in the presence of uncertainty - the ubiquitous condition for all non-trivial software development projects - the notion appears in some quarters that the principles, processes, and practices of Engineered Systems are the antithesis of Agile software development.

Both agile advocates and engineered systems advocates practice innovation and creativity. Both fields are supported by a history of creating innovations - and in fact, they collaborate on many of the programs I work on. In the software engineering domain, as in the developer domain, design is the basis of innovation. Design from a generalized perspective is purposeful ...

... thinking, problem-solving, drawing, talking, consulting and responding to a range of practical and aesthetic constraints to create - ideally - the most appropriate solution(s) under the given circumstances. - [3]

So why is there a great divide between the traditional engineered software-intensive system of systems and the current agile development paradigm? Why are the basic principles of developing value-based products from capital investments - using the core principles of managerial finance and the microeconomics of decision making in the presence of uncertainty - rejected by many in the agile development community?

Why does the engineered system paradigm reject many of the practices of the agile community as undisciplined and ad hoc, with little or no basis in the principles of management?

Let's start with five core organizational principles of agile...

  1. Regular delivery of incremental business value - defined in a Product Roadmap, scheduled in a Cadence or Capability release plan.
  2. Iterative development focused on the delivery of Features that enable the needed Capabilities that fulfill the business case or accomplish the mission of the software system.
  3. Responding to changing requirements and priorities to assure continuous release of products needed to enable the needed Capabilities to be fulfilled.
  4. Engaging in multiple levels of planning with detailed planning occurring at the Sprint level to produce working software for needed Capabilities produced as planned in the Product Roadmap.
  5. Open and regular collaboration across teams and stakeholders to assure a shared vision of the desired project outcome.

The engineered SISoS domain has the same paradigm in principle because these principles and five implementation principles are Immutable. The five implementation principles are...

  1. What does done look like described in units of measure meaningful to the decision makers?
  2. What is the plan to reach done for the cost and delivery date to produce the needed value to those paying for the work?
  3. What resources are needed to provide the needed capabilities to fulfill the business case or accomplish the mission?
  4. What impediments will be encountered along the way to Done and what handling activities will be applied to remove or reduce these impediments?
  5. How will progress to plan be measured so the decision makers will have confidence their investment will be returned as planned?

So what's the disconnect we hear between Agile and Traditional systems development? 

The first is that the principles listed above are not established before the exchange of ideas starts. So when there is a gap in the semantics of a conversation, there is no touchstone to go back to. Without this shared understanding of the principles, the conversation becomes self-centered (an echo chamber) and any possible exchange in pursuit of learning is lost.

The second is that there is no shared domain as the basis for applying those principles. I work in the SW Intensive System of Systems world. Others work in website development; others work in internal IT shops. While the principles of developing software for money are likely to be the same, the processes and practices can be dramatically different. What is a good practice for our spaceflight avionics system development is probably of little use to a web developer. And vice versa.

Until we come to agree that the principles are a starting point, and then put a shared domain on top of those, it will be an argument without end.

Same Same but Different

† "Same Same but Different" is a phrase used a lot in Thailand, especially in an attempt to sell something, but it can mean just about anything depending on what the speaker is trying to achieve.

[1] "Same Same but Different: Perspectives on Creativity Workshops by Design and Business," Alexander Brem and Henrik Spoedt, IEEE Engineering Management Review, Vol. 4, No. 1, First Quarter, March 2017, pp. 27-31.

[2] "Motivating Creativity in Organizations: On Doing what you Love and Loving What You Do," T. M. Amabile, California Management Review, 40, pp. 39-58, 1997.

[3] The Design History Reader,  Grace Lees-Maffei (Editor), Rebecca Houze (Editor), Bloomsbury Academic, April 15, 2010.

Related articles The Customer Bought a Capability The Reason We Plan, Schedule, Measure, and Correct Who Builds a House without Drawings? Two Books in the Spectrum of Software Development Complex, Complexity, Complicated
Categories: Project Management

Holacracy: Re-read Week One, Logistics and Introduction

Book Cover

Holacracy

This week we begin our read of Holacracy with a few logistics and a review of the introduction.  We have a short entry this week that will give you time to buy a copy today and read along!  If you have not listened to my interview with Jeff Dalton on Software Process and Measurement Cast 433, I would suggest a quick listen. Jeff has practical experience with using the concepts of holacracy in his company and as a tool in his consultancy.  

Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson was published by Henry Holt and Company in 2015.  The book comprises a foreword, 10 chapters in three parts, notes, acknowledgments, and an index.  My plan is to read and review one chapter per week.  We will move on to a new book in approximately 12 weeks.

Introduction:  

The introduction is written by David Allen, the creator of Getting Things Done (GTD), a hugely popular personal organization and productivity approach. Allen opens his introduction by explaining that when he encountered Brian Robertson and the concept of holacracy, he was in a place where the day-to-day operation of the company was overwhelming his ability to promote and grow GTD.  That conflict of roles made him not want to be the CEO of the company. Holacracy seemed like a chance for him to support the continued growth of his firm, promising to let Allen create a structure in which he did not have to make all of the decisions, freeing time for more creative activities.  Allen and the rest of GTD dove into holacracy.  Allen wrote the introduction three years into the implementation.  In it, he makes the point that holacracy is not a silver bullet; rather, it provides a stable platform for identifying and addressing problems efficiently.

Next week we dive into the meat, so get reading!


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sat, 04/08/2017 - 16:21

"If a profession is to sharpen its skills, to develop new skills and applications, and to gain increasing respect and credibility, then theory and practice must be closely related" - Martin Shub

When practices are suggested without principles, those practices have no way of being tested for applicability to the problem at hand. So when you hear We have a way to fix some problem, ask by what principles can you show your solution will fix the problem? And of course, ask have you found the root cause of the problem, or are you just treating the symptom?

Categories: Project Management


Velocity versus Speed (Update)

Herding Cats - Glen Alleman - Sat, 04/08/2017 - 15:17

Velocity is speed in a specific direction: distance traveled divided by time, in a specific direction. This is a Vector - speed and direction.

Velocity

The direction can be a compass heading - say, an aircraft heading of 270°. The aircraft can have a second dimension of measure as well: the compass heading plus climbing or descending at some number of feet per minute.

CompassImage005
Speed is Distance traveled over Time. When we are driving, we have a speedometer. It says 55 miles per hour. OK, I rarely drive 55 MPH, but pretend I do. In one hour (time) I will cover 55 miles (distance). This measure doesn't include the direction. We're driving on a road for the most part, so the direction is predetermined. With speed, we're measuring the distance over that road over time, no matter what the shape of the road is - straight or curved, flat or mountainous. 

Velocity in Agile Software Development

Velocity is a term used in agile development. In agile, velocity is an arbitrary unit of measure, calculated by counting the number of units of work (Stories, Story Points, or any other arbitrary measure) completed in an interval of time (a Sprint), the length of which is determined at the start of the project (some small number of weeks).

These units can be Story Points, Stories, kumquats, or Corgi dogs, and the interval can be a Sprint of 2 or 3 weeks or any other unit. Usually, they are Story Points or Stories, but this is arbitrary. 

So 30 Story Points over 2 weeks means 15 Story Points per week - on average - or 3 Story Points per working day - on average. This is the average speed; the instantaneous speed differs, just as a car's instantaneous speed changes many times a minute. In the agile example, the speed is the number of Story Points per unit of time, and the direction is toward the end of the work. But for that end, we need a unit of measure as well. The Stories and Tasks from the Product Backlog provide the direction. Those Stories are definitized from Features in the Product Roadmap and Release Plan. That's how we get direction in agile for Velocity.

Velocity = Speed in a Direction. Velocity is a Vector; Speed is a Scalar.
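That vector/scalar distinction is easy to make concrete (illustrative numbers only):

```python
# Velocity as a vector (speed plus direction) vs. speed as a scalar magnitude.
import math

def velocity(dx, dy, dt):
    """Velocity vector: displacement per unit time in each direction."""
    return (dx / dt, dy / dt)

def speed(vx, vy):
    """Speed is the scalar magnitude of the velocity vector."""
    return math.hypot(vx, vy)

vx, vy = velocity(30.0, 40.0, 1.0)  # 30 east, 40 north, in one hour
print(speed(vx, vy))                # 50.0 - the same speed fits many directions
```

Many different velocity vectors share the same scalar speed, which is exactly why speed alone tells you nothing about where you're headed.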

Does Velocity Affect Cycle Time? 

Cycle time is the total time from the beginning to the end of a work process. Cycle time includes process time, where the object under work is acted upon to bring it closer to an output, and delay time, during which a unit of work is spent waiting to take the next action. For software development, the object is the Story and the development of the Code that implements the Story along with the testing and all the other activities of producing the outcome from the work effort.

If we think of cycle time as the time the Story spends being developed (including all the work to produce working software) over some period inside the Sprint, up to and including the entire Sprint, then we can say...

The cycle time for a Story is the total time from beginning to end of the production of Working Software - how long we were working on the Story. This is a unit used in Little's Law as well. The direction this Story is going is toward the end of the Sprint, so the Story has a Velocity. But since the direction is fixed - toward the last day of the Sprint - the passage of time from start to end of the development effort is its Speed.
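The Little's Law relationship mentioned above can be sketched directly (names and numbers are illustrative):

```python
# Little's Law for a stable process: WIP = throughput x cycle time,
# so average cycle time = WIP / throughput (all names illustrative).

def average_cycle_time(wip_stories, throughput_per_day):
    return wip_stories / throughput_per_day

# A team holding 6 Stories in process, finishing 2 Stories per day
print(average_cycle_time(6, 2))   # 3.0 days per Story, on average
```

Raise the throughput (the speed of the work) and, with the same work in process, the average cycle time falls - the arithmetic behind "go fast, finish fast."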

So here's the Question

Does Velocity impact Cycle Time? Answer: YES, it does

Why? Cycle Time and Velocity - when there is a fixed direction (the Sprint) over a fixed duration (the 2 or 3 weeks in which we work) - affect the total amount of time the Story is being worked (as we say in our defense software development domain). This time being worked is the Cycle Time for that Story. How fast the work is being done (Speed - that is, Velocity with a fixed direction) will impact how long the Story is being worked.

The faster we go the faster the Story will be completed. 

So yes,

Velocity does impact the Cycle Time. It impacts that Cycle Time directly. Go fast, finish fast. Finish fast, lower Cycle Time.

After talking this over this morning with a colleague who is responsible for many of the aspects of Agile development in the manned and unmanned space flight systems where I work, here's a summary of the notion of Velocity from her point of view - which, like mine, comes from physics and math.

Screen Shot 2017-04-07 at 3.13.33 PM

  • Speed is the rate of production of Story Points by the team over some time - the Sprint
  • The direction is the Product Roadmap - where are we going. The Stories and the Features they implement need to have some reason for being there.

So velocity and throughput are directly related. The faster outcomes appear, the larger the throughput, measured in a fixed block of time. Since Story Points are a relative (Ordinal) unit of measure expressing an estimate of the effort required to fully implement a product backlog item or any other piece of work, we can assign Story Points to the rate of movement - 20 story points per sprint. This can also be referred to as the capacity for work. Our team can deliver 20 Story Points per Sprint. 

And of course, each of these variables is a random variable impacted by uncertainty, so estimates are needed in the presence of this uncertainty to determine if we are going to deliver what we said we would deliver when the Sprint started.

Lastly, when we hear Story Points are not hours, that is also incorrect. We can map the arbitrary unit of measure - the Story Point - to any Cardinal measure. One program I work on assigns ONE Story Point to be 6 hours (an Ideal Day). This is purely arbitrary, of course, but once we've exited Story Time - where we use Story Points to prioritize the Story - they are converted into Ideal Days to confirm our capacity for work matches the calendar period of performance for that work. 
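Under that (arbitrary) 1 Story Point = 6 hours mapping, the capacity check reads roughly like this - the function names and team numbers are hypothetical, for illustration only:

```python
# Capacity check using the arbitrary mapping from the text:
# 1 Story Point = 6 hours = 1 Ideal Day (all other numbers invented).

HOURS_PER_POINT = 6
IDEAL_DAY_HOURS = 6

def fits_in_sprint(points, team_size, sprint_days, focus_factor=1.0):
    """Does the committed work fit the team's calendar capacity?"""
    work_hours = points * HOURS_PER_POINT
    capacity_hours = team_size * sprint_days * IDEAL_DAY_HOURS * focus_factor
    return work_hours <= capacity_hours

print(fits_in_sprint(20, team_size=2, sprint_days=10))   # True: 120 <= 120 hours
print(fits_in_sprint(100, team_size=2, sprint_days=10))  # False: 600 > 120 hours
```

The conversion itself is trivial; the value is in confronting the prioritized Story Points with the calendar before committing to the Sprint.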

By the Way

There's a poster on a cubicle wall across from my cubicle that says ...

Aardvark

Why is this important? The staff on that side of the passage are contract managers for a Manned Space Flight Vehicle. They are always arguing about the meaning of Affect and Effect and how contractually binding the words are - usually regarding some piece of hardware that was built or procured, how that piece of hardware is integrated into a larger piece of hardware (last week, the connectivity bolts that attach the spacecraft to the adapter ring on top of the launch vehicle), and how its behaviour will AFFECT the performance of the system after the EFFECT of an explosive separation - which, by the way, must work 100% of the time or people die.

Related articles Estimates Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management
Categories: Project Management

Same Same but Different †

Herding Cats - Glen Alleman - Sat, 04/08/2017 - 00:19

Much of the conversation in social media around agile techniques seems to be based on the differences between the variety of choices of a  method, a process, or a practice and definitions of terms for those choices. There seems little in the way of shared principles, when in fact, there is a great deal of sharing of principles.

I work in a domain where systems are engineered for the customer. These systems fall into the Software Intensive System of Systems (SISoS) category. In this domain innovative is also the basis of success. The notion that innovation and engineering - software engineering - are somehow in conflict is common. 

... creativity is simply the production of novel, approprioate ideas, from science, to the arts, to education, to business, to everyday life. The ideas must be movel - different from what's been done before - but they can't be simpmply bizarre; they must be appropriate to the problem or opportuntiy presented. Creativity is the first step in innovation, which is the successful implementaion of those novel, approproate ideas. And innovation, is absolutley vital for long-term corporate success - "Motivating creativity in organmizations: On doing whay you love and loving what you do," [2]

In many conversations about managing in the presence of uncertainty - which is the ubiquitous condition for all non-trivial software development projects - the notion that principles, processes, and practices of Engineered Systems appear to be the antithesis of Agile development in some quarters. 

Both agile advocates and engineered systems advocates practice innovation and creativity. Both fields are supported by a history of creating innovations, and in fact they collaborate on many of the programs I work on. In the software engineering domain, as in the developer domain, design is the basis of this innovation. Design, from a generalized perspective, is purposeful ...

... thinking, problem-solving, drawing, talking, consulting and responding to a range of practical and aesthetic constraints to create - ideally - the most appropriate solution(s) under the given circumstances. - [3]

So why is there a great divide between the traditional engineered software-intensive system of systems and the current agile development paradigm, which appears to reject the basic principles of developing value-based products from capital investments? Why are the core principles of managerial finance and the microeconomics of decision making in the presence of uncertainty rejected by the agile development community?

Why does the engineered system paradigm reject many of the practices of the agile community as undisciplined, ad hoc practices with little or no basis in the principles of management?

Let's start with five core organization principles of agile...

  1. Regular delivery of incremental business value - defined in a Product Roadmap, scheduled in a Cadence or Capability release plan
  2. Iterative development focused on the delivery of Features that enable the needed Capabilities that fulfill the business case or accomplish the mission of the software system
  3. Responding to changing requirements and priorities to assure the continuous release of products that enable the needed Capabilities to be fulfilled.
  4. Engaging in multiple levels of planning with detailed planning occurring at the Sprint level to produce working software for needed Capabilities produced as planned in the Product Roadmap
  5. Open and regular collaboration across teams and stakeholders to assure a shared vision of the desired project outcome

The engineered SISoS domain shares this paradigm in principle, because these principles - and the five implementation principles that follow - are Immutable. The five implementation principles are...

  1. What does Done look like, described in units of measure meaningful to the decision makers?
  2. What is the plan to reach done for the cost and delivery date to produce the needed value to those paying for the work?
  3. What resources are needed to provide the needed capabilities to fulfill the business case or accomplish the mission?
  4. What impediments will be encountered along the way to Done and what handling activities will be applied to remove or reduce these impediments?
  5. How will progress to plan be measured so the decision makers will have confidence their investment will be returned as planned?

So what's the disconnect we hear between Agile and more traditional systems development?

First, the principles listed above are not established before the exchange of ideas starts. So when there is a gap in the semantics of a conversation, there is no touchstone to go back to. Without this shared understanding of the principles, the conversation becomes self-centered (an echo chamber) and any possible exchange in pursuit of learning is lost.

Second, there is no shared domain as the basis for applying those principles. I work in the Software Intensive System of Systems world. Others work in website development; others work in internal IT shops. While the principles of developing software for money are likely to be the same, the processes and practices can be dramatically different. What is a good practice for our spaceflight avionics system development is probably of little use to a web developer. And vice versa.

Until we come to agree that the principles are a starting point, and then put a shared domain on top of those, it will be an argument without end.


† "Same Same but Different" is a phrase used a lot in Thailand, especially in attempts to sell something, but it can mean just about anything depending on what the user is trying to achieve.

[1] "Same Same but Different: Perspectives on Creativity Workshops by Design and Business," Alexander Brem and Henrik Spoedt, IEEE Engineering Management Review, Vol. 4, No. 1, First Quarter, March 2017, pp. 27-31.

[2] "Motivating Creativity in Organizations: On Doing What You Love and Loving What You Do," T. M. Amabile, California Management Review, 40, pp. 39-58, 1997.

[3] The Design History Reader, Grace Lees-Maffei and Rebecca Houze (Editors), Bloomsbury Academic, April 15, 2010.

Related articles: The Customer Bought a Capability | The Reason We Plan, Schedule, Measure, and Correct | Who Builds a House without Drawings? | Two Books in the Spectrum of Software Development | Complex, Complexity, Complicated
Categories: Project Management