Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

When Should You Improve?

Making the Complex Simple - John Sonmez - Mon, 01/25/2016 - 14:00

Resolutions for the New Year seldom have the lasting impact they intend to. Unpacking the reasons for this is difficult. The human psyche is so complex it can be hard to determine all the reasons for why we fall short. The problem with taking the New Year as an opportunity to change our habits is […]

The post When Should You Improve? appeared first on Simple Programmer.

Categories: Programming

Reinventing Testing: What is Integration Testing? (part 2)

James Bach’s Blog - Mon, 01/25/2016 - 10:45

These thoughts have become better because of these specific commenters on part 1: Jeff Nyman, James Huggett, Sean McErlean, Liza Ivinskaia, Jokin Aspiazu, Maxim Mikhailov, Anita Gujarathi, Mike Talks, Amit Wertheimer, Simon Morley, Dimitar Dimitrov, John Stevenson. Additionally, thank you Michael Bolton and thanks to the student whose productive confusion helped me discover a blindspot in my work, Anita Gujarathi.

Integration testing is a term I don’t use much– not because it doesn’t matter, but because it is so fundamental that it is already baked into many of the other working concepts and techniques of testing. Still, in the past week, I decided to upgrade my ability to quickly explain integration, integration risk, and integration testing. This is part of a process I recommend for all serious testers. I call it: reinventing testing. Each of us may reinvent testing concepts for ourselves, and engage in vigorous debates about them (see the comments on part 1, which is now the most commented of any post I have ever done).

For those of you interested in getting to a common language for testing, this is what I believe is the best way we have available to us. As each of us works to clarify his own thinking, a de facto consensus about reasonable testing ontology will form over time, community by community.

So here we go…

There are several kinds of testing that involve, overlap with, or may even be synonymous with integration testing, including: regression testing, system testing, field testing, interoperability testing, compatibility testing, platform testing, and risk-based testing. Most testing, in fact, no matter what it’s called, is also integration testing.

Here is my definition of integration testing, based on my own analysis, conversations with RST instructors (mainly Michael Bolton), and stimulated by the many commenters from part 1. All of my assertions and definitions are true within the Rapid Software Testing methodology namespace, which means that you don’t have to agree with me unless you claim to be using RST.

What is integration testing?

Integration testing is:
1. Testing motivated by potential risk related to integration.
2. Tests designed specifically to assess risk related to integration.

Notes:

1. “Motivated by” and “designed specifically to” overlap but are not the same. For instance, if you know that a dangerous criminal is on the loose in your neighborhood you may behave in a generally cautious or vigilant way even if you don’t know where the criminal is or what he looks like. But if you know what he looks like, what he is wearing, how he behaves or where he is, you can take more specific measures to find him or avoid him. Similarly, a newly integrated product may create a situation where any kind of testing may be worth doing, even if that testing is not specifically aimed at uncovering integration bugs, as such; OR you can perform tests aimed at exposing just the sort of bugs that integration typically causes, such as by performing operations that maximize the interaction of components.

The phrase “integration testing” may therefore represent ANY testing performed specifically in an “integration context”, or applying a specific “integration test technique” in ANY context.

This is a special case of the difference between risk-based test management and risk-based test design. The former assigns resources to places where there is potential risk but does not dictate the testing to be performed; whereas the latter crafts specific tests to examine the product for specific kinds of problems.

2. “Potential risk” is not the same as “risk.” Risk is the danger of something bad happening, and it can be viewed from at least three perspectives: probability of a bad event occurring, the impact of that event if it occurs, and our uncertainty about either of those things. A potential risk is a risk about which there is substantial uncertainty (in other words, you don’t know how likely the bug is to be in the product or you don’t know how bad it could be if it were present). The main point of testing is to eliminate uncertainty about risk, so this often begins with guessing about potential risk (in other words, making wild guesses, educated guesses, or highly informed analyses about where bugs are likely to be).

Example: I am testing something for the first time. I don’t know how it will deal with stressful input, but stress often causes failure, so that’s a potential risk. If I were to perform stress testing, I would learn a lot about how the product really handles stress, and the potential risk would be transformed into a high risk (if I found serious bugs related to stress) or a low risk (if the product handled stress in a consistently graceful way).

What is integration?

General definition from the Oxford English Dictionary: “The making up or composition of a whole by adding together or combining the separate parts or elements; combination into an integral whole: a making whole or entire.”

Based on this, we can make a simple technical definition related to products:

Integration is:
v. the process of constructing a product from parts.
n. a product constructed from parts.

Now, based on General Systems Theory, we make these assertions:

An integration, in some way and to some degree:

  1. Is composed of parts:
  • …that come from differing sources.
  • …that were produced for differing purposes.
  • …that were produced at different times.
  • …that have differing attributes.
  2. Creates or represents an internal environment for its parts:
  • …in which its parts interact among themselves.
  • …in which its parts depend on each other.
  • …in which its parts interact with or depend on an external environment.
  • …in which these things are not visible from the outside.
  3. Possesses attributes relative to its parts:
  • …that depend on them.
  • …that differ from them.

Therefore, you might not be able to discern everything you want to know about an integration just by looking at its parts.

This is why integration risk exists. In complex or important systems, integration testing will be critically important, especially after changes have been made.

It may be possible to gain enough knowledge about an integration to characterize the risk (or to speak more plainly: it may be possible to find all the important integration bugs) without doing integration testing. You might be able to do it with unit testing. However, that process, although possible in some cases, might be impractical. This is the case partly because the parts may have been produced by different people with different assumptions, because it is difficult to simulate the environment of an integration prior to actual integration, or because unit testing tends to focus on what the units CAN do and not on what they ACTUALLY NEED to do. (If you unit test a calculator, that’s a lot of work. But if that calculator will only ever be asked to add numbers under 50, you don’t need to do all that work.)

Integration testing, although in some senses complex, may actually simplify your testing, since some parts mask the behavior of other parts, and maybe all you need to care about is the final outputs.

Notes:

1. “In some way and to some degree” means that these assertions are to be interpreted heuristically. In any specific situation, these assertions are highly likely to apply in some interesting or important way, but might not. An obvious example is where I wrote above that the “parts interact with each other.” The stricter truth is that the parts within an integration probably do not EACH directly interact with ALL the other ones, and probably do not interact to the same degree and in the same ways. To think of it heuristically, interpret it as a gentle warning such as “if you integrate something, make it your business to know how the parts might interact or depend on each other, because that knowledge is probably important.”

By using the phrase “in some way and to some degree” as a blanket qualifier, I can simplify the rest of the text, since I don’t have to embed other qualifiers.

2. “Constructing from parts” does not necessarily mean that the parts pre-existed the product, or have a separate existence outside the product, or are unchanged by the process of integration. It just means that we can think productively about pieces of the product and how they interact with other pieces.

3. A product may possess attributes that none of its parts possess, or that differ from them in unanticipated or unknown ways. A simple example is the stability of a tripod, which is not found in any of its individual legs, but in all the legs working together.

4. Disintegration also creates integration risk. When you take things away, or take things apart, you end up with a new integration, and that is subject to much the same risk as putting them together.

5. The attributes of a product and all its behaviors obviously depend largely on the parts that comprise it, but also on other factors such as the state of those parts, the configurations and states of external and internal environments, and the underlying rules by which those things operate (ultimately, physics, but more immediately, the communication and processing protocols of the computing environment).

6. Environment refers to the outside of some object (an object being a product or a part of a product), comprising factors that may interact with that object. A particular environment might be internal in some respects or external in other respects, at the same time.

  • An internal environment is an environment controlled by the product and accessible only to its parts. It is inside the product, but from the vantage point of some of its parts, it’s outside of them. For instance, to a spark plug the inside of an engine cylinder is an environment, but since it is not outside the car as a whole, it’s an internal environment. Technology often consists of deeply nested environments.
  • An external environment is an environment inhabited but not controlled by the product.
  • Control is not an all-or-nothing thing. There are different levels and types of control. For this reason it is not always possible to strictly identify the exact scope of a product or its various and possibly overlapping environments. This fact is much of what makes testing– and especially security testing– such a challenging problem. A lot of malicious hacking is based on the discovery that something that the developers thought was outside the product is sometimes inside it.

7. An interaction occurs when one thing influences another thing. (A “thing” can be a part, an environment, a whole product, or anything else.)

8. A dependency occurs when one thing requires another thing to perform an action or possess an attribute (or not to) in order for the first thing to behave in a certain way or fulfill a certain requirement. See connascence and coupling.

9. Integration is not all or nothing– there are differing degrees and kinds. A product may be accidentally integrated, in that it works using parts that no one realizes that it has. It may be loosely integrated, such as a gecko that can jettison its tail, or a browser with a plugin. It may be tightly integrated, such as when we take the code from one product and add it to another product in different places, editing as we go. (Or when you digest food.) It may preserve the existing interfaces of its parts or violate them or re-design them or eliminate them. The integration definition and assertions, above, form a heuristic pattern– a sort of lens– by which we can make better sense of the product and how it might fail. Different people may identify different things as parts, environments or products. That’s okay. We are free to move the lens around and try out different perspectives, too.

Example of an Integration Problem

[Diagram: dueling dependencies]

This diagram shows a classic integration bug: dueling dependencies. In the top two panels, two components are happy to work within their own environments. Neither is aware of the other while they work on, let’s say, separate computers.

But when they are installed together on the same machine, it may turn out that each depends on factors that exclude the other, even though the components themselves don’t clash (the blue A box and the blue B boxes don’t overlap). Often such dependencies are poorly documented, and may be entirely unknown to the developer before integration time.

It is possible to discover this through unit testing… but it is so much easier and probably cheaper just to integrate sooner rather than later and test in that context.

 

Categories: Testing & QA


Clojure: First steps with reducers

Mark Needham - Sun, 01/24/2016 - 23:01

I’ve been playing around with Clojure a bit today in preparation for a talk I’m giving next week and found myself writing the following code to apply the same function to three different scores:

(defn log2 [n]
  (/ (Math/log n) (Math/log 2)))
 
(defn score-item [n]
  (if (= n 0) 0 (log2 n)))
 
(+ (score-item 12) (score-item 13) (score-item 5))
;; => 9.60733031374961

I’d forgotten about folding over a collection but quickly remembered that I could achieve the same result with the following code:

(reduce #(+ %1 (score-item %2)) 0 [12 13 5])
;; => 9.60733031374961

The added advantage here is that if I want to add a 4th score to the mix all I need to do is append it to the end of the vector:

(reduce #(+ %1 (score-item %2)) 0 [12 13 5 6])
;; => 12.192292814470767

However, while Googling to remind myself of the order of the arguments to reduce I kept coming across articles and documentation about reducers which I’d heard about but never used.

As I understand it, they’re used to achieve performance gains and easier composition of functions over collections, so I’m not sure how useful they’ll be to me, but I thought I’d give them a try.

Our first step is to bring the namespace into scope:

(require '[clojure.core.reducers :as r])

Now we can compute the same result using the reduce function:

(r/reduce #(+ %1 (score-item %2)) 0 [12 13 5 6])
;; => 12.192292814470767

So far, so identical. If we wanted to calculate individual scores and then filter out those below a certain threshold the code would behave a little differently:

(->> [12 13 5 6]
     (map score-item)
     (filter #(> % 3)))
;; => (3.5849625007211565 3.700439718141092)
 
(->> [12 13 5 6]
     (r/map score-item)
     (r/filter #(> % 3)))
;; => #object[clojure.core.reducers$folder$reify__19192 0x5d0edf21 "clojure.core.reducers$folder$reify__19192@5d0edf21"]

Instead of giving us a vector of scores, the reducers version returns a reducer, which we can pass into reduce or fold if we want an accumulated result, or into if we want to output a collection. In this case we want the latter:

(->> [12 13 5 6]
     (r/map score-item)
     (r/filter #(> % 3))
     (into []))
;; => (3.5849625007211565 3.700439718141092)

With a measly 4-item collection I don’t think the reducers are going to provide much speed improvement here, but we’d need to use the fold function if we want processing of the collection to be done in parallel.
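
As a peek at what that might look like on a larger vector (my own guess at the usage, reusing the score-item function from above; fold splits a vector into chunks and reduces the chunks on a fork/join thread pool):

(require '[clojure.core.reducers :as r])
 
;; With a single function argument, r/fold uses + both to reduce each
;; chunk and to combine the chunk results; (+) called with no arguments
;; returns 0, which provides the identity value for every chunk.
(r/fold + (r/map score-item (vec (range 1 100000))))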

One for next time!

Categories: Programming

SPaMCAST 378 – Evan Leybourn, No More Projects

Software Process and Measurement Cast - Sun, 01/24/2016 - 23:00

We begin year 10 of the Software Process and Measurement Cast with our interview with Evan Leybourn. Evan returns to the Software Process and Measurement Cast to discuss the "end to IT projects." We discussed the idea of #NoProject and continuous delivery, and whether this is just an “IT” thing or something that can encompass the entire business. Evan’s views are informative and a bit provocative. I have not stopped thinking about the concepts we discussed since originally taping the interview.

Evan last appeared on SPaMCAST 284 – Evan Leybourn, Directing The Agile Organization to discuss his book Directing the Agile Organization.

Evan’s Bio
Evan pioneered the field of Agile Business Management; applying the successful concepts and practices from the Lean and Agile movements to corporate management. He keeps busy as a business leader, consultant, non-executive director, conference speaker, internationally published author and father.

Evan has a passion for building effective and productive organizations, filled with actively engaged and committed people. Only through this can organizations flourish. His experience while holding senior leadership and board positions in both private industry and government has driven his work in business agility, and he regularly speaks on these topics at local and international industry conferences.

As well as writing "Directing the Agile Organization", Evan currently works for IBM in Singapore to help them become a leading agile organization. As always, all thoughts, ideas, and comments are his own and do not represent his clients or employer.

All of Evan’s contact information and blog can be accessed on his website.

Remember to help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

We continue the re-read of How to Measure Anything, Finding the Value of “Intangibles in Business”, Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Six, we discussed using risk in quantitative analysis and Monte Carlo analysis.

 

Upcoming Events

I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on February 17 at 11 AM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

 

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on the relationship between done and value. The essay is in response to a question from Anteneh Berhane.  Anteneh called me to ask one of the hardest questions I had ever been asked: why doesn’t the definition of done include value?

We will also have columns from Jeremy Berriault’s QA Corner and Steve Tendon discussing the next chapter in the book  Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

The Man Who Opened the Door

James Bach’s Blog - Sun, 01/24/2016 - 22:09

I just heard that Ed Yourdon died.

In ’93 or early ’94, I got a strange email from him. He had heard about me in Mexico and he wanted to meet. I had never been to Mexico and I had never met or spoken to Ed. I was shocked: One of the most famous people in the software development methodology world wanted to talk to me, a test manager in Silicon Valley who had published almost nothing and had spoken at only one conference! This was the very first time I realized that I had begun to build a reputation in the industry.

Ed was not the first famous guy I had met. I met Boris Beizer at that conference I mentioned, and that did not go well (we yelled at each other… he told me that I was full of shit… that kind of thing). I thought that might be the end of my ambition to rock the testing industry, if the heavy hitters were going to hate me.

Ed was a heavy hitter. I owned many of his books and I had carefully read his work on structured analysis. He was one of my idols.

So we met. We had a nice dinner at the Hyatt in Burlingame, south of San Francisco. He told me I needed to study systems thinking more deeply. He challenged me to write a book and asked me to write articles for American Programmer (later renamed to the Cutter IT Journal).

The thing that got to me was that Ed treated me with respect. He asked me many questions. He encouraged me to debate him. He pushed me to write articles on the CMM and on Good Enough Software– both subjects that got me a lot of favorable attention.

On the day of our meeting, he was 49– the same age I am now. He set me on a path to become a guy like him– because he showed me (as many others would later do, as well) that the great among us are people who help other people aspire to be great, too. I enjoy helping people, but reflecting on how I was helped reminds me that it is not just fun, it’s a moral imperative. If Ed reached out his hand to me, some stranger, how can I not do the same?

Ed saw something in me. Even now I do not want to disappoint him.

Rest in Peace, man.

 

Categories: Testing & QA

How To Measure Anything, Chapter 6: Quantifying Risk Through Modeling

HTMA

How to Measure Anything, Finding the Value of “Intangibles in Business”, Third Edition

Chapter 6 of How to Measure Anything, Finding the Value of “Intangibles in Business”, Third Edition, is titled: Quantifying Risk Through Modeling. Chapter 6 builds on the basics described in Chapter 4 (define the decision and the data that will be needed) and Chapter 5 (determine what is known). Hubbard addresses the process of quantifying risk in two overarching themes. The first theme is the quantification of risk and the second is using Monte Carlo analysis to model outcomes.

Risk is the possibility that a loss or some other bad thing will occur. The possibility that something bad will occur equates to uncertainty. Risk is often expressed in qualitative terms such as low, medium, high, and the ever popular really high, rather than in quantified terms. However, that qualitative approach generates ambiguity. Qualitative designations provide little measurement value outside of a sort of measurement placebo effect. Even though an absolute value for risk might not be knowable, risk can be quantified as a range of values. The quantification of risk is important, both in terms of understanding risk and in terms of usefulness for defining overall outcomes. Defining a range of risks makes understanding the amount of risk in any decision less ambiguous. Quantifying risk is also the foundation of further measurement needed for decision-making. Hubbard hammers home the point that measurement is done to inform some decision that is uncertain and has negative consequences if it turns out wrong.

The goal of Monte Carlo analysis is to provide input for better decision making under uncertainty. When you allow yourself to use ranges and probabilities, you really don’t have to assume anything you don’t know for a fact (Chapter 5 showed us how to estimate based on what we know). All risks can be expressed by the range of uncertainty on the costs and benefits and the probabilities of events that may affect them. Turning a range of estimates into a set of predicted outcomes requires math. Monte Carlo analysis is a mathematical technique that uses the estimated uncertainty in decisions to furnish a range of possible outcomes and the probabilities that they will occur.

Monte Carlo analysis can incorporate a wide range of scenarios and variables. Hubbard points out that it is easy to get carried away with the detail of the model. Models should only be as sophisticated as needed to add value to a decision. Remember, as Gene Hughson of Form Follows Function says, “all models are wrong.” Models are abstractions of real life; there is always detail you leave out, no matter how sophisticated the model becomes. Hubbard suggests that model users always ask whether a new, more complex model is an improvement on any alternative model. Quantification provides a platform for making consistent choices, in order to clearly state how risk-averse or risk-tolerant any organization really is.

Hubbard closes this chapter by stating a risk paradox.

“If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions. The largest, most risky decisions get the least amount of risk analysis.”

The combination of estimation (Chapter 5), quantified risk, and Monte Carlo analysis may seem complex, and that perceived complexity keeps many decision makers from using the technique; this is especially true in software development, hence the paradox. For example, every software development estimation problem, whether Agile, lean or plan based, has a large degree of uncertainty embedded in the process and therefore is a perfect candidate for Monte Carlo analysis. However, very few estimators understand or use the technique. Learning Monte Carlo analysis (and using one of the many tools that do the mathematical heavy lifting), or alternatively hiring someone to perform risk analysis, are both paths to adding quantitative data to decision making. When making decisions under conditions of uncertainty, Monte Carlo analysis is a necessity to do the math needed.
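
To make the technique concrete, here is a minimal Monte Carlo sketch in Clojure (the task cost ranges and the budget threshold are invented for illustration; Hubbard’s own examples use spreadsheet tooling):

;; Each task cost is uncertain, expressed only as a [low high] range.
;; We sample every range many times and inspect the simulated totals.
(defn sample-uniform [[lo hi]]
  (+ lo (* (rand) (- hi lo))))
 
(defn simulate-total [task-ranges]
  (reduce + (map sample-uniform task-ranges)))
 
(let [tasks  [[10 30] [5 15] [20 60]]   ; cost range per task
      trials (repeatedly 10000 #(simulate-total tasks))
      over   (count (filter #(> % 80) trials))]
  ;; estimated probability that the total cost exceeds a budget of 80
  (double (/ over 10000)))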

Previous Installments in Re-read Saturday, How to Measure Anything, Finding the Value of “Intangibles in Business”, Third Edition

How To Measure Anything, Third Edition, Introduction

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?


Categories: Process Management

Project Management is a Closed Loop Control System

Herding Cats - Glen Alleman - Sat, 01/23/2016 - 23:27

Without a desired delivery date, target budget, and expected capabilities, a control system is of little interest to those providing the money at the business level. There is no way to assure those needs – date, budget, capabilities – can be met with the current capacity for work, efficacy of that work process, or budget absorption of that work.

With a need date, target budget, and expected capability outcome, a control system is the basis of increasing the probability of success. These targets are the baseline to steer toward. Without a steering target, the management of the project is Open Loop. There are two types of control systems:

  • Closed Loop Control – where the output signal has a direct impact on the control action.
  • Open Loop Control – where the output signal has no direct impact on the control action.

An Open Loop control system is a non-feedback system, where the output – the desired state – has no influence or effect on the control action of the input signal. In an Open Loop control system the output – the desired state – is neither measured nor “fed back” for comparison with the input. An Open Loop system is expected to faithfully follow its input command or set point regardless of the final result. An Open Loop system has no knowledge of the output condition – the difference between desired state and actual state – so it cannot self-correct any errors it makes when the preset value drifts, even if this results in large deviations from the preset value.

A Closed Loop control system is a feedback control system which uses the concept of an open loop system as its forward path but has one or more feedback loops between its output and its input. In Closed Loop control there is a “feedback” signal, meaning some portion of the output is returned “back” to the input to form part of the system’s excitation.

Closed Loop systems are designed to automatically achieve and maintain the desired output condition by comparing it with the actual condition. Closed Loop control systems do this by generating an error signal, which is the difference between the output and the reference input. A “closed loop system” is a fully automatic control system in which the control action depends on the output in some way.

Key Differences Between Open Loop and Closed Loop control

Open Loop Control

  • Controller has no knowledge of the output condition.
  • The desired condition is not present in the control loop – hence the Open Loop.
  • Any corrective action requires an operator input to change the behavior of the system to achieve a desired output condition.
  • No comparison is made between the actual output condition and the desired output condition.
  • Open Loop control has no regulation or control action over the output condition. Each input condition determines a fixed operating condition for the controller.
  • Changes or disturbances in external conditions do not result in a direct output change unless the controller is manually altered.

Closed Loop Control

  • Controller has some knowledge of the output condition.
  • The desired condition is compared to the actual condition to create an error signal. This signal is the difference between the input signal (the desired dryness) and the output signal (the current dryness).
  • Closed loop means feedback not just for recording the output, but for comparing with the desired state to take corrective action.
  • Output condition errors are adjusted by changes in the controller function, by measuring the difference between the output and the desired condition.
  • Output conditions are stable in the presence of an unstable system.
  • Reliable and repeatable output performance results from corrective actions taken from the error signal.

Using Closed Loop Control for a Project

  • The Setting – we work in an enterprise IT environment, a product development company, or on a mission critical software project.
  • The Protagonist – those providing the money need information to make decisions.
  • The Imbalance – it’s not clear how to make decisions in the absence of information about the cost, schedule, and technical outcomes of those decisions.
  • Restoring the Balance – when a decision is made, it needs to be based on the principles of microeconomics, at least in a governance based organization.
  • Recommended Solution – start with a baseline estimate of the cost, schedule, and technical performance. Execute work and measure the productivity of that work.

Use these measures to calculate the variance between planned and actual. Take management action to adjust the productivity, the end date, the budget – using all the variables to produce a new Estimate To Complete to manage toward.

This is a closed loop control system.
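
As a toy illustration of that loop, here is the standard earned value arithmetic with invented numbers (a sketch, not anything from the programs above): the variance between planned and earned value is the error signal, and the Estimate To Complete is the corrective steering input.

(let [bac 1000.0                ; budget at completion
      pv   450.0                ; planned value at this status date
      ev   400.0                ; earned value (work actually performed)
      ac   500.0                ; actual cost of that work
      cpi  (/ ev ac)            ; cost performance index
      sv   (- ev pv)            ; schedule variance - the error signal
      etc  (/ (- bac ev) cpi)   ; estimate to complete the remaining work
      eac  (+ ac etc)]          ; new estimate at completion to steer toward
  {:cpi cpi :sv sv :etc etc :eac eac})
;; => {:cpi 0.8, :sv -50.0, :etc 750.0, :eac 1250.0}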

The Microeconomics of Decision in Closed Loop Control

Microeconomics is the study of how people make decisions in resource-limited situations on a personal scale. It deals with decisions that individuals and organizations make on such issues as how much insurance to buy, which word processor to buy, what prices to charge for products and services, or which path to take in a project. Throughout the project lifecycle, these decision-making opportunities arise. Each decision impacts the future behavior of the project and is informed by past performance and the probabilistic and statistical processes of the underlying project activities. To make an informed decision about the project, estimates are made using this information.

Microeconomics applied to projects is a well understood and broadly applied discipline in cost accounting and business strategy and execution. Decision making is based on alternatives, their assessed value, and their forecast cost. Both of these values are probabilistic. Microeconomics is the basis of Real Options and other statistical decision making. Without this paradigm, decisions are made without knowing the future impact of those decisions - their cost, schedule, or technical impacts. This is counter to good business practices in any domain.

Let's Look At An Open Loop Control System

[Diagram: an open loop control system]

This is all fine and dandy. But where are we going? What's the probability we will arrive at our desired destination, if we knew what that destination was? Do we have what we need to reach that desired destination, if we knew what it was? In Open Loop Control these questions have no answers.

Let's Look at a Closed Loop Control System

[Diagram: a closed loop control system]

We want to manage our projects with Closed Loop Control Systems.

Related articles:
  • Who's Budget is it Anyway?
  • Your Project Needs a Budget and Other Things
  • The Actual Science in Management Science
  • Control Systems - Their Misuse and Abuse
  • Building a Credible Performance Measurement Baseline
  • Open Loop Thinking v. Close Loop Thinking
Categories: Project Management

Demonstration of the Exactness of Little's Law

Xebia Blog - Sat, 01/23/2016 - 11:00

Day 18

Little's Law is a powerful tool that relates the amount of work a team is doing to the average lead time of each work item. Basically there are two main applications, involving either 1) the input rate of work entering the team, or 2) the throughput of work completed.

In previous posts (Applying Little's Law in agile games, Why Little's law works...always) I already explained that Little's Law is exact and hardly has any assumptions, other than work entering the team (or system).

This post demonstrates this by calculating Little's Law at every project day while playing GetKanban.

The video below clearly shows that Little's Law holds exactly at every project day, for both the input rate and throughput versions. Throughput is based on the subclass of 'completed' items.

E.g. on the yellow post-it the product of lambda and W equals N on every project day.

http://blog.xebia.com/wp-content/uploads/2016/01/LittlesLaw_540p.mp4

 

[Photo: the GetKanban board and sample path]

The set-up is that we run the GetKanban game from day 9 through day 24. The right hand side of the video shows the board and charts, whereas the left hand side shows the so-called 'sample path' and the Little's Law calculation for both input rate (yellow post-it) and throughput (green post-it).

Sample Path. The horizontal axis shows the project day, running from 9 till 24. The vertical axis shows the work items: each row represents an item on the board.

The black boxes mark the days that the work item is on the board. For example, item 8 was in the system on project day 9 and completed at the end of project day 12, when it was deployed.

The collection of all black boxes is called a 'Sample Path'.

 

 

[Photo: the Little's Law calculation]

Little's Law. The average number of items in the system (N) is shown on top. This is an average over the project days. Here W denotes the average lead time of the items. This is an average taken over all work items.

Input rate: on the yellow post-it the Greek lambda indicates the average number of work per day entering the system.

Throughput: the green post-it indicates the average work per day completed. This is indicated by the Greek mu.

Note: the numbers on the green post-it are obtained by considering only the subclass of work that is completed (the red boxes).
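
To see how these quantities lock together, here is a toy recomputation of the input rate version in Clojure (an invented four-item sample path, not the GetKanban data):

;; Each item is an [arrival-day departure-day) pair; we observe days 0-9.
(let [items [[0 4] [1 3] [2 8] [5 9]]
      days  (range 0 10)
      in-system? (fn [[a d] t] (and (<= a t) (< t d)))
      ;; N: number of items present, averaged over the observed days
      n      (/ (reduce + (for [t days]
                            (count (filter #(in-system? % t) items))))
                (count days))
      ;; lambda: arrivals per day; W: average time an item spends in the system
      lambda (/ (count items) (count days))
      w      (/ (reduce + (map (fn [[a d]] (- d a)) items))
                (count items))]
  {:N (double n) :lambda-times-W (double (* lambda w))})
;; => {:N 1.6, :lambda-times-W 1.6} - the law holds exactly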

References

[Rij2014a] http://blog.xebia.com/why-littles-law-works-always/

[Rij2014b] http://blog.xebia.com/applying-littles-law-in-agile-games/

Teach Yourself Deep Learning with TensorFlow and Udacity

Google Code Blog - Fri, 01/22/2016 - 20:09

Originally posted on Google Research Blog

Posted by Vincent Vanhoucke, Principal Research Scientist

Deep learning has become one of the hottest topics in machine learning in recent years. With TensorFlow, the deep learning platform that we recently released as an open-source project, our goal was to bring the capabilities of deep learning to everyone. So far, we are extremely excited by the uptake: more than 4000 users have forked it on GitHub in just a few weeks, and the project has been starred more than 16000 times by enthusiasts around the globe.

To help make deep learning even more accessible to engineers and data scientists at large, we are launching a new Deep Learning Course developed in collaboration with Udacity. This short, intensive course provides you with all the basic tools and vocabulary to get started with deep learning, and walks you through how to use it to address some of the most common machine learning problems. It is also accompanied by interactive TensorFlow notebooks that directly mirror and implement the concepts introduced in the lectures.
The course consists of four lectures which provide a tour of the main building blocks that are used to solve problems ranging from image recognition to text analysis. The first lecture focuses on the basics that will be familiar to those already versed in machine learning: setting up your data and experimental protocol, and training simple classification models. The second lecture builds on these fundamentals to explore how these simple models can be made deeper and more powerful, and explores all the scalability problems that come with that, in particular regularization and hyperparameter tuning. The third lecture is all about convolutional networks and image recognition. The fourth and final lecture explores models for text and sequences in general, with embeddings and recurrent neural networks. By the end of the course, you will have implemented and trained this variety of models on your own machine and will be ready to transfer that knowledge to solve your own problems!

Our overall goal in designing this course was to provide the machine learning enthusiast a rapid and direct path to solving real and interesting problems with deep learning techniques, and we're now very excited to share what we've built! It has been a lot of fun putting it together with the fantastic team of experts in online course design and production at Udacity. For more details, see the Udacity blog post, and register for the course. We hope you enjoy it!

Categories: Programming

Stuff The Internet Says On Scalability For January 22nd, 2016

Hey, it's HighScalability time:


The Imaginary Kingdom of Aurullia. A completely computer generated fractal. Stunning and unnerving.

 

If you like this Stuff then please consider supporting me on Patreon.
  • 42,000: drones from China securing the South China Sea; 1 billion: WhatsApp active users; 2⁻¹²²: odds of two GUIDs with 122 random bits colliding; 25,000 to 70,000: memory chip errors per billion hours per megabit; 81,500: calories in a human body; 62: people as wealthy as half of world's population; 1.66 million: App Economy jobs in the US; 521 years: half-life of DNA; 0.000012%: air passenger fatalities; $1B: Microsoft free cloud resources for nonprofits; 4000-7000+: BBC stats collected per second; $1 billion: Google's cost to taste Apple's pie;

  • Quotable Quotes:
    • @mcclure111: 1995: Every object in your home has a clock & it is blinking 12:00 / 2025: Every object in your home has a IP address & the password is Admin
    • @notch: Coming soon to npm: tirefire.js, an asynchronous framework for implementing helper classes for reinventing the wheel. Based on promises.
    • @ayetempleton: Fun fact: You are MORE likely to win a million or more dollars in the #powerball lottery than to lose an #AWS #S3 object in a given year.
    • @viktorklang: IMO biggest lie in performance work: constant factors don't matter in Big-Oh.
    • Flavien Boucher: We all came to the conclusion that Docker is adding a complexity layer compare to a virtual machine approach, and this complexity will be for the deployment, development and build.
    • @Frances_Coppola: Uber is a cab cartel. And AirBNB is wealthy - though its suppliers aren't. They are simply firms with apps.
    • Susan Sontag: The method especially appeals to people handicapped by a ruthless work ethic – Germans, Japanese and Americans. Using a camera appeases the anxiety which the work driven feel about not working when they are on vacation and supposed to be having fun. They have something to do that is like a friendly imitation of work: they can take pictures.
    • @SachaNauta: "It's never been easier to be a billionaire and never been harder to be a millionaire" @profgalloway #DLD16
    • @Techmeme: Google Play saw 100% more downloads than iOS App Store, but Apple generated 75% more revenue 
    • Ryan Shea: we’ve concluded that 8MB blocks are simply too large to be considered safe for the network at this point in time, considering the current global bandwidth levels.
    • @RichRogersHDS: "In the old world you spent 30% of your time building a great service & 70% shouting about it. In the new world, that inverts." - Jeff Bezos
    • @thetinot: When you have an SDN, yes, networking throughput does grow on trees. Why @googlecloud is faster than #AWS and #Azure 
    • @GOettingerEU: Digital tech has contributed to around 1/3 of EU GDP growth in over the past decade and I believe this number will continue to grow #wef16
    • @COLRICHARDKEMP: More women fly F16s in Israel than drive cars in Saudi Arabia. KA. 
    • @JoshZumbrun: The total collapse in shopping mall construction
    • @jeffjarvis: 44 million people saw NY Fashion Show content on Instagram last year says Instagram's Marne Levine. Attn: Conde & Hearst!  #DLD16
    • @HackerNewsOnion: Developer Accused Of Unreadable Code Refuses To Comment
    • Lloyds online banking: in a 60-second period: 12,900 people visit its website, 400 bills are paid, 1,500 customers log onto the mobile app, 350 transfers are made and 3,000+ logins
    • @bdha: 2013: DevOps 2014: Docker 2015: Containers 2016: Unikernels 2017: Threads 2018: Syscalls 2019: Inodes
    • hacknat: Two things need to happen to make unikernels attractive. A new Hypervisor needs to get made, one that is just as extensible as an OS around the isolated primitives. It should also have something extra too (like the ability to fine tune resource management better than an OS can). Secondly a user friendly mechanism like Docker needs to happen.

  • It's a winner take all world, but not everywhere. Brian Brushwood on Cordkillers with an insightful breakdown of how the new diversified market for TV content has actually become far less of a winner take all system. We have more good content than ever. Gone are the days of M*A*S*H, when everyone watched the same show at the same time. Is it bad that actors are making less? No. We are seeing the destruction of the tournament system. The tournament, as explained in the book Freakonomics, is the idea that those at the very top make all the money, while those at the bottom of the pyramid make next to nothing. And the winners only have to win by a nose to reap all the rewards; they don't even need to win on merit. This is an inefficient system. Now we are reaching an artistically efficient system. If you have a story to tell and no budget you can tell it on YouTube. This is the democratization of talent. It's inconvenient for those who used to be at the top. What we have now is more working actors producing more content than ever. And since a lot of this content does not have to pander to advertisers to get made, the content is more diverse and more interesting than ever as well.

  • The RAMCloud Storage System: RAMCloud combines low latency, large scale, and durability. Using state of the art networking with kernel bypass, RAMCloud expects small reads to complete in less than 10µs on a cluster of 10,000 nodes. This is 50 to 1,000 times faster than storage systems commonly in use.

  • All Change Please. Adrian Colyer makes the case that we are transitioning to a new part of the technology cycle that promises great change. Networking: 40Gbps and 100Gbps ethernet. Memory: battery backed RAM; 3D XPoint, MRAM, MeRAM, etc. Storage: NVRAM and fast PCIe. Processing: GPUs; integrated on processor FPGAs; hardware transactional memory. This is the question: What happens when you combine fast RDMA networks with ample persistent memory, hardware transactions, enhanced cache management support and super-fast storage arrays? It’s a whole new set of design trade-offs that will impact the OS, file systems, data stores, stream processing, graph processing, deep learning and more. And this is before we’ve even introduced integration with on-board FPGAs, and advances in GPUs…

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Forecasting Cost and Schedule Performance in the Presence of Uncertainty

Herding Cats - Glen Alleman - Fri, 01/22/2016 - 17:28

Estimating in the presence of uncertainty is a critical success factor for all project work. Uncertainty prevails on all projects, no matter the domain, development process, or governance method.

Uncertainty from underlying statistical variances. Uncertainty from probabilistic events.

Here's an approach to estimating the cost and schedule impacts from those uncertainties.

Forecasting cost and schedule performance from Glen Alleman
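
To give a flavor of the mechanics (a sketch with invented numbers, not taken from the slides): model the statistical variances as task durations drawn from triangular distributions, model the probabilistic events as weighted coin flips with a schedule impact, and read a confidence level off the simulated distribution.

(defn sample-triangular [[lo mode hi]]
  ;; inverse-CDF sampling of a triangular distribution
  (let [u (rand) f (/ (- mode lo) (- hi lo))]
    (if (< u f)
      (+ lo (Math/sqrt (* u (- hi lo) (- mode lo))))
      (- hi (Math/sqrt (* (- 1 u) (- hi lo) (- hi mode)))))))
 
(defn sample-schedule [tasks risks]
  (+ (reduce + (map sample-triangular tasks))                          ; variances
     (reduce + (map (fn [[p days]] (if (< (rand) p) days 0)) risks)))) ; events
 
(let [tasks  [[5 8 15] [3 5 10] [8 12 20]]  ; [low most-likely high] days
      risks  [[0.3 10] [0.1 25]]            ; [probability impact-days]
      trials (sort (repeatedly 10000 #(sample-schedule tasks risks)))]
  ;; the 80% confidence completion duration, in days
  (nth trials (long (* 0.8 (count trials)))))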

 

Related articles:
  • Intellectual Honesty of Managing in the Presence of Uncertainty
  • Essential Reading List for Managing Other People's Money
Categories: Project Management

Agile Acceptance Testing: Not Just User Acceptance Testing


Acceptance Testing is rarely just one type of testing.

Many practitioners see Agile acceptance testing as focused solely on the business facing functionality. This is a misunderstanding; acceptance testing is more varied. The body of knowledge that supports the International Software Testing Qualifications Board’s testing certifications deconstructs acceptance testing into four categories: 

  1. User Acceptance testing focuses mainly on the functionality requested by the business user(s).
  2. The Operational Acceptance test or Production Acceptance test validates whether the system meets the organization’s requirements for operation. These requirements are often considered technical or non-functional.
  3. Contract Acceptance testing compares results against the criteria documented in the contract. Contract criteria can and often do overlap with criteria from other forms of acceptance testing.
  4. Compliance Acceptance testing validates that the product adheres to relevant regulations and standards. Compliance is often driven by government or industry regulation.

Fitting each of these versions of acceptance testing into an Agile framework requires forethought and planning. For example, most Agile and lean frameworks spread Agile User Acceptance Testing (AUAT) steps throughout the development framework so that AUAT is not a gate at the end of development. Spreading the user acceptance activities into user story creation and grooming, demonstrations, and integration steps for scaled projects reduces the possibility of work being completed that does not meet the needs of the business. The other three types of acceptance testing can leverage many of the same techniques; however, each of the different types might sometimes need a bit of augmentation to the acceptance testing goals.

Operations acceptance can be addressed by considering operations personnel as a user or customer of the product being built. Users have acceptance criteria, participate in demonstrations and in some cases participate directly on or with the team(s) doing the work. DevOps addresses the idea of interweaving development and operational personnel. In addition, non-functional and technical operational requirements should be built directly into the definition of done, so that teams ensure they are covering operational needs as they solve the business problem.

Using Agile acceptance testing methods on project or product work that requires external parties and contracts is often more complicated than just leveraging common Agile practices or adding components into the definition of done. In this scenario, the contract vehicles have to be crafted to recognize and use both the Agile planning cadence (iteration and release timing) and the feedback loops the cadence generates as a contract control mechanism. The level of trust and formality will dictate who has to be involved in contract acceptance. For example, in some formal contract scenarios, purchasing agents, executives, and legal representatives might be required in addition to typical team members. Using the iteration cadence as the contract review point, rather than classic stage gates, may add additional overhead to end-of-iteration ceremonies such as demonstrations; however, expending the effort will reduce the potential for rework by avoiding surprises later in the development cycle.

Agile compliance acceptance testing is typically approached using the same mechanisms as Agile operational acceptance testing, with the exception of the injection of an auditor or compliance officer as a participant.

The four types of acceptance testing have similar characteristics. They can be addressed using many of the basic Agile techniques with a few tweaks, such as involving auditors, compliance officers, purchasing agents and (gasp) lawyers, when needed. In order to make the full range of acceptance testing work, the broader organization needs to understand the nuances of the Agile techniques that are being used. And that, my dear readers, is a culture change, which is sometimes just a wee bit hard.


Categories: Process Management

Why does Unikernel Systems Joining Docker Make A Lot of Sense?

Unikernel Systems Joins Docker. Now this is an interesting match. The themes are security and low overhead, though they do seem to solve the same sort of problem.

So, what's going on?

In FLOSS WEEKLY 302 Open Mirage, starting at about 10 minutes in, there are a series of possible clues. Dr. Anil Madhavapeddy, former CTO of Unikernel Systems, explains their motivation behind the creation of unikernels. And it's a huge and exciting vision...

Categories: Architecture

Helping 1 million students build better apps

Google Code Blog - Thu, 01/21/2016 - 20:48

Posted by Peter Lubbers, Senior Program Manager, Google Developer Training

Almost three years ago we shipped our very first Udacity course about HTML5 Game Development. Today marks a milestone that we proudly want to share with the world. The 1 millionth person has enrolled in our Google Developer Training courses. Was it you?

This milestone is more than just a number. Thanks to our partnership with Udacity, this training gives developers access to skills that empower families, communities, and the future.

One million developers around the world have made a commitment to learn a new language, expand their craft, start a new business, completely shift careers and more. So, here's to the next million people who are excited about using technology to solve the world’s most pressing challenges.

Keep learning!

Categories: Programming

What Is The Best Way To Freelance & Network While Working For A Company?

Making the Complex Simple - John Sonmez - Thu, 01/21/2016 - 17:00

What if you could have an open relationship with your company? In this Simple Programmer episode, I dive deep into this question. Are you currently working for a company but want to search for freelance jobs and network with other companies? If so, you need to do it right so you […]

The post What Is The Best Way To Freelance & Network While Working For A Company? appeared first on Simple Programmer.

Categories: Programming

Risk Management is How Adults Manage Projects

Herding Cats - Glen Alleman - Wed, 01/20/2016 - 19:44

Risk Management is How Adults Manage Projects - Tim Lister

Tim's quote offends some people. Here's a collection of his presentations. But this is not really the point of this blog. I'm working on two programs where Agile (Scrum and SAFe) is integrated into the Earned Value Management System, compliant with EIA-748-C and the guidelines for deploying, using, and reporting progress to plan with that system. This integration is directed by FAR 34.2 and DFARS 234.2 on Software Intensive System of Systems programs.

These programs are similar to enterprise IT programs: they have mission-critical outcomes, they are expensive, they have high risk, they are by definition not simple in structure, and they have deadlines and budget goals set by the enterprise to maintain corporate performance.

Outside of the programs we work on, we encounter much confusion around risk management, the sources of risk, and the separation of effective from ineffective processes for managing risk in the presence of uncertainty. Let's start with a framework for making decisions in the presence of uncertainty. This briefing has been shown here before, many times. But it can't be shown too many times, because there is much misinformation about how to manage in the presence of uncertainty. This briefing is part of a much larger set of charts developed over several years for the Office of the Secretary of Defense for Acquisition, Technology and Logistics. These are the people who buy things for the warfighter and those who support them.

It may not appear to be applicable in all domains, but I'd strongly suggest it is, because uncertainty is present on all projects, and the risk this uncertainty creates is present on all projects. Uncertainty and the resulting risk cannot be avoided; they can only be managed.

The Point

When we hear that Agile is risk management, it's not true. Agile software development in the form of Scrum, XP, DSDM, Crystal, and now even Agile PRINCE2 is just that: a software development method using the principles of agile.

12 Agile Principles

By the way, in some domains a few of those principles are violations of the governance of other people's money. Many business people have day jobs and aren't going to be available on demand to work with the developers. And standards (DODAF, TOGAF, the 1553 bus, ISO 12207, etc.) define the architectural processes.

But setting that aside for now: Agile provides information for risk management through frequent feedback and adaptability. But Agile is NOT risk management, for a simple reason...

Agile does not model the uncertainties that underlie all project work.

Agile has no notion of margin. Agile has an informal notion of risk reduction for reducible risks (epistemic uncertainties, if you read the briefing). But the irreducible (aleatory) uncertainties are not addressed by Agile. Research shows the irreducible uncertainties are the killer problems on all project work. These come from the natural variances of humans, processes, and machines.
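
To make the aleatory side concrete, here is a minimal Monte Carlo sketch of irreducible schedule uncertainty. The three tasks, their triangular duration distributions, and the 80% confidence target are all assumptions invented for the example; they are not numbers from the briefing.

```python
# Minimal Monte Carlo sketch of irreducible (aleatory) schedule risk.
# The task duration distributions and the confidence target are
# assumptions for illustration; real programs calibrate them from
# past-performance reference data.
import random

# Each task: (optimistic, most_likely, pessimistic) duration in days.
TASKS = [(8, 10, 16), (4, 5, 9), (12, 15, 24)]
TRIALS = 10_000

totals = []
for _ in range(TRIALS):
    # Sample each task from a triangular distribution; this natural
    # variance never goes to zero, so it must be handled with margin.
    totals.append(sum(random.triangular(low, high, mode)
                      for low, mode, high in TASKS))

totals.sort()
p50 = totals[int(0.50 * TRIALS)]
p80 = totals[int(0.80 * TRIALS)]
print(f"Median completion:         {p50:.1f} days")
print(f"80% confidence completion: {p80:.1f} days")
print(f"Margin needed for 80% confidence: {p80 - p50:.1f} days")
```

The spread between the median and the 80th percentile is the schedule margin the natural variance demands; no amount of risk "removal" eliminates it.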


Agile, in the form of its methods, has no process model for addressing the reducible and irreducible uncertainties, other than providing data on performance to date. That data informs decisions about the mitigations, margin, and management reserve needed to MANAGE risk in the presence of uncertainty.

And of course the final tag line

To manage risk in the presence of uncertainty we must Estimate not only the probability of occurrence of each event-based risk, but also Estimate the statistical processes of the naturally occurring variances, Estimate the impact of the uncertainties that create risk, Estimate the cost and schedule to mitigate those risks, and finally Estimate the residual uncertainty after mitigation.

Yes, Risk Management is How Adults Manage Projects and of course Estimating is how Adults Manage Projects


Related articles:
Agile Software Development in the DOD
Risk Management is How Adults Manage Projects
What Do We Mean When We Say "Agile Community?"
IT Risk Management
Categories: Project Management

SE-Radio Episode 247: Andrew Phillips on DevOps

Sven Johann talks with Andrew Phillips about DevOps. First, they try to define it. Then, they discuss its roots in agile operations, its relationship to lean development and continuous delivery, its goals, and how to get started. They proceed to systems thinking and what "You build it, you run it" means for a system when […]
Categories: Programming

Building An Infinitely Scaleable Online Recording Campaign For David Guetta

This is a guest repost of an interview posted by Ryan S. Brown that originally appeared on serverlesscode.com. It continues our exploration of building systems on top of Lambda.

Paging David Guetta fans: this week we have an interview with the team that built the site behind his latest ad campaign. On the site, fans can record themselves singing along to his single, “This One’s For You” and build an album cover to go with it.

Under the hood, the site is built on Lambda, API Gateway, and CloudFront. Social campaigns tend to be pretty spiky – when there’s a lot of press a stampede of users can bring infrastructure to a crawl if you’re not ready for it. The team at parall.ax chose Lambda because there are no long-lived servers, and they could offload all the work of scaling their app up and down with demand to Amazon.

James Hall from parall.ax is going to tell us how they built, from nothing and in just six weeks, an internationalized app that can handle any level of demand.
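
To give a feel for the pattern (this is not parall.ax's actual code), here is a minimal sketch of a Lambda handler behind an API Gateway proxy integration that stores a fan submission. The bucket name, event fields, and response shape are assumptions made for the example.

```python
# Hypothetical sketch of an AWS Lambda handler behind API Gateway,
# in the spirit of the architecture described above. The bucket name
# and the request/response shapes are illustrative assumptions.
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "example-fan-recordings"  # hypothetical bucket name

def handler(event, context):
    # API Gateway's proxy integration delivers the request body as a string.
    body = json.loads(event.get("body") or "{}")
    recording_id = str(uuid.uuid4())

    # Persist the fan's submission. Scaling up and down under spiky
    # load is handled by Lambda itself, with no servers to manage.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"recordings/{recording_id}.json",
        Body=json.dumps(body).encode("utf-8"),
        ContentType="application/json",
    )
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": recording_id}),
    }
```

Because each request is handled by an independent invocation, a press-driven stampede simply fans out across more concurrent executions instead of saturating a fixed pool of servers.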

The Interview
Categories: Architecture

Proper Black Box Testing Case Design – Equivalence Partitioning

Making the Complex Simple - John Sonmez - Wed, 01/20/2016 - 14:00

In today's IT world, the lines between developers and QA Engineers are being blurred. With the emergence of Agile, Test Driven Development, Continuous Integration, and many other methodologies, software testing is becoming even more critical. To support daily releases, multiple Operating Systems, and multiple browsers, the Development team (QA and Software Engineers) needs the capability […]
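
The excerpt cuts off here, but as a minimal sketch of the technique the post names: assuming a hypothetical validation rule (ages 18 through 65 are accepted), equivalence partitioning tests one representative value per partition instead of every possible input. The rule and the chosen representatives below are assumptions for illustration.

```python
# Minimal sketch of equivalence partitioning for a hypothetical
# validation rule: ages 18-65 inclusive are accepted.

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# One representative value per equivalence class is enough, because
# every member of a class should behave the same way.
PARTITIONS = [
    (10, False),  # invalid partition: below the valid range
    (40, True),   # valid partition: inside the range
    (70, False),  # invalid partition: above the valid range
]

def test_equivalence_partitions():
    for representative, expected in PARTITIONS:
        assert is_valid_age(representative) == expected
```

Boundary value analysis, a companion technique, would then add the edge values (17, 18, 65, 66) around each partition boundary.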

The post Proper Black Box Testing Case Design – Equivalence Partitioning appeared first on Simple Programmer.

Categories: Programming