
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Testing on the Toilet: Writing Descriptive Test Names

Google Testing Blog - Mon, 10/20/2014 - 19:22
by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

How long does it take you to figure out what behavior is being tested in the following code?

@Test public void isUserLockedOut_invalidLogin() {
authenticator.authenticate(username, invalidPassword);
assertFalse(authenticator.isUserLockedOut(username));

authenticator.authenticate(username, invalidPassword);
assertFalse(authenticator.isUserLockedOut(username));

authenticator.authenticate(username, invalidPassword);
assertTrue(authenticator.isUserLockedOut(username));
}

You probably had to read through every line of code (maybe more than once) and understand what each line is doing. But how long would it take you to figure out what behavior is being tested if the test had this name?

isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts

You should now be able to understand what behavior is being tested by reading just the test name, and you don’t even need to read through the test body. The test name in the above code sample hints at the scenario being tested (“invalidLogin”), but it doesn’t actually say what the expected outcome is supposed to be, so you had to read through the code to figure it out.

Putting both the scenario and the expected outcome in the test name has several other benefits:

- If you want to know all the possible behaviors a class has, all you need to do is read through the test names in its test class, compared to spending minutes or hours digging through the test code or even the class itself trying to figure out its behavior. This can also be useful during code reviews since you can quickly tell if the tests cover all expected cases.

- By giving tests more explicit names, it forces you to split up testing different behaviors into separate tests. Otherwise you may be tempted to dump assertions for different behaviors into one test, which over time can lead to tests that keep growing and become difficult to understand and maintain.

- The exact behavior being tested might not always be clear from the test code. If the test name isn’t explicit about this, sometimes you might have to guess what the test is actually testing.

- You can easily tell if some functionality isn’t being tested. If you don’t see a test name that describes the behavior you’re looking for, then you know the test doesn’t exist.

- When a test fails, you can immediately see what functionality is broken without looking at the test’s source code.

There are several common patterns for structuring the name of a test (one example is to name tests like an English sentence with “should” in the name, e.g., shouldLockOutUserAfterThreeInvalidLoginAttempts). Whichever pattern you use, the same advice still applies: Make sure test names contain both the scenario being tested and the expected outcome.

Sometimes just specifying the name of the method under test may be enough, especially if the method is simple and has only a single behavior that is obvious from its name.
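To see the pattern end to end, here is a sketch in Python's unittest (my own illustration - the Authenticator class below is a minimal stand-in, not the code from the original Java example):

import unittest

class Authenticator:
    # Minimal stand-in for the Java example's authenticator (a sketch).
    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts
        self.failures = {}

    def authenticate(self, username, password):
        # Any password other than "secret" counts as an invalid attempt.
        if password != "secret":
            self.failures[username] = self.failures.get(username, 0) + 1

    def is_user_locked_out(self, username):
        return self.failures.get(username, 0) >= self.max_attempts

class AuthenticatorTest(unittest.TestCase):
    # The test name states both the scenario and the expected outcome.
    def test_locks_out_user_after_three_invalid_login_attempts(self):
        authenticator = Authenticator(max_attempts=3)
        authenticator.authenticate("alice", "wrong")
        self.assertFalse(authenticator.is_user_locked_out("alice"))
        authenticator.authenticate("alice", "wrong")
        self.assertFalse(authenticator.is_user_locked_out("alice"))
        authenticator.authenticate("alice", "wrong")
        self.assertTrue(authenticator.is_user_locked_out("alice"))

if __name__ == "__main__":
    unittest.main()

A failure report then reads as a statement of broken behavior - "test_locks_out_user_after_three_invalid_login_attempts FAILED" - without opening the test body.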

Categories: Testing & QA

Facebook Mobile Drops Pull For Push-based Snapshot + Delta Model

We've learned mobile is different. In If You're Programming A Cell Phone Like A Server You're Doing It Wrong we learned programming for a mobile platform is its own specialty. In How Facebook Makes Mobile Work At Scale For All Phones, On All Screens, On All Networks we learned bandwidth on mobile networks is a precious resource. 

Given all that, how do you design a protocol to sync state (think messages, comments, etc.) between mobile nodes and the global state holding servers located in a datacenter?

Facebook recently wrote about their new solution to this problem in Building Mobile-First Infrastructure for Messenger. They were able to reduce bandwidth usage by 40% and the terror of hitting send on a phone by 20%.

That's a big win...that came from a protocol change.

Facebook Messenger went from a traditional notification-triggered full state pull to a push-based snapshot plus delta model.
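To make the snapshot + delta idea concrete, here is a minimal sketch (my own construction, not Facebook's actual protocol): the server sends a full snapshot once, then pushes small numbered deltas, and the client falls back to a fresh snapshot only when it detects a gap in the sequence.

class SyncServer:
    # Toy snapshot + delta model (illustrative only).
    def __init__(self):
        self.version = 0
        self.state = {}      # e.g. message_id -> message
        self.deltas = []     # (version, key, value) tuples

    def apply_change(self, key, value):
        self.version += 1
        self.state[key] = value
        self.deltas.append((self.version, key, value))

    def snapshot(self):
        return self.version, dict(self.state)

    def deltas_since(self, version):
        return [d for d in self.deltas if d[0] > version]

class SyncClient:
    def __init__(self, server):
        self.server = server
        self.version, self.state = server.snapshot()  # one full pull

    def on_push(self, delta):
        version, key, value = delta
        if version != self.version + 1:
            # Missed a delta (e.g. dropped connection): fall back to snapshot.
            self.version, self.state = self.server.snapshot()
        else:
            self.state[key] = value
            self.version = version

server = SyncServer()
client = SyncClient(server)
server.apply_change("msg1", "hello")
for delta in server.deltas_since(client.version):
    client.on_push(delta)
print(client.state)  # {'msg1': 'hello'}

Pushing deltas instead of re-pulling full state is what saves the bandwidth: the client transfers only what changed.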

Categories: Architecture

Python: Converting a date string to timestamp

Mark Needham - Mon, 10/20/2014 - 16:53

I’ve been playing around with Python over the last few days while cleaning up a data set and one thing I wanted to do was translate date strings into a timestamp.

I started with a date in this format:

date_text = "13SEP2014"

So the first step is to translate that into a Python date – the strftime section of the documentation is useful for figuring out which format code is needed:

import datetime
 
date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
 
print(date)
$ python dates.py
2014-09-13 00:00:00

The next step was to translate that to a UNIX timestamp. I thought there might be a method or property on the Date object that I could access but I couldn’t find one and so ended up using calendar to do the transformation:

import datetime
import calendar
 
date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
 
print(date)
print(calendar.timegm(date.utctimetuple()))
$ python dates.py
2014-09-13 00:00:00
1410566400
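As an aside (my addition, not from the original post), on Python 3.3+ the same conversion can be done without the calendar module by marking the parsed datetime as UTC and asking for its POSIX timestamp:

import datetime

date_text = "13SEP2014"
date = datetime.datetime.strptime(date_text, "%d%b%Y")
# Treat the naive datetime as UTC, then take the POSIX timestamp.
utc_date = date.replace(tzinfo=datetime.timezone.utc)
print(int(utc_date.timestamp()))
# 1410566400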

It’s not too tricky so hopefully I shall remember next time.

Categories: Programming

4 Biggest Reasons Why Software Developers Suck at Estimation

Making the Complex Simple - John Sonmez - Mon, 10/20/2014 - 15:00

Estimation is difficult. Most people aren’t good at it–even in mundane situations. For example, when my wife asks me how much longer it will take me to fix some issue I’m working on or to head home, I almost invariably reply “five minutes.” I almost always honestly believe it will only take five minutes, […]

The post 4 Biggest Reasons Why Software Developers Suck at Estimation appeared first on Simple Programmer.

Categories: Programming

Mix Mashup

NOOP.NL - Jurgen Appelo - Mon, 10/20/2014 - 10:08

<promotion>

The MIX Mashup is a gathering of the vanguard of management innovators—pioneering leaders, courageous hackers, and agenda-setting thinkers from every realm of endeavor.

The post Mix Mashup appeared first on NOOP.NL.

Categories: Project Management

Decision Making Without Estimates?

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 02:56

In a recent post there are 5 suggestions of how decisions about software development can be made in the absence of estimates of the cost, duration, and impact of those decisions. Before looking at each in more detail, let's see the basis of these suggestions from the post.

A decision-making strategy is a model, or an approach, that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Decision making in the presence of the allocation of limited resources is called Microeconomics. These decisions - in the presence of limited resources - involve opportunity costs: that is, what is the cost of NOT choosing one of the alternatives - the allocations? To know this we need to know something about the outcome of NOT choosing. We can't wait to do the work; we need to know what happens - to some level of confidence - if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose one of the possible allocations, since all the outcomes are in the future.
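To make the opportunity cost point concrete, here is a toy worked example in Python (my own construction, with invented numbers - note that every figure in it is an estimate of a future outcome, which is exactly the point):

# Illustrative numbers only: each option's value lies in the future,
# so the inputs below are necessarily estimates.
option_a = {"cost": 100_000, "p_success": 0.8, "value_if_success": 250_000}
option_b = {"cost": 100_000, "p_success": 0.5, "value_if_success": 500_000}

def expected_net(option):
    # Expected net value = probability-weighted value minus cost.
    return option["p_success"] * option["value_if_success"] - option["cost"]

ev_a, ev_b = expected_net(option_a), expected_net(option_b)
print(f"EV(A) = {ev_a:,.0f}; EV(B) = {ev_b:,.0f}")
# The opportunity cost of choosing A is the expected value of the best
# alternative forgone - here, EV(B).
print(f"Opportunity cost of choosing A = {ev_b:,.0f}")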

But first, the post started with suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.

What is Strategy?

Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.

Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well – not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.

Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves the continual improvement of business processes that have no trade–offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.

In contrast, the strategic agenda is the place for making clear trade-offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company’s position in the marketplace.

“What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.

Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.

I'm going to suggest that each of the five decision processes described in the post is a proper one - one with many possible approaches - but each has ignored the underlying principle of Microeconomics: decisions about future outcomes are informed by opportunity cost, and that cost - since it lies in the future - mandates an estimate. This is the basis of Real Options, forecasting, and the very core of business decision making in the presence of uncertainty.

The post then asks

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

 
The 1st question needs another question answered first: what are our business goals, and what are the units of measure for those goals? In order to answer the 1st question we need a steering target, so we can know how we are proceeding toward that goal.

The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:

Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge; epistemology is the study of the acquisition of knowledge. We can pay money to buy down this lack of knowledge. That is, epistemic uncertainty can be reduced with work - risk reduction work. But this leaves open how much time, budget, and performance margin is needed.

ANSWER: We need an Estimate of the Probability of the Risk Coming True. Estimating the epistemic risk's probability of occurrence, the cost and schedule of the reduction efforts, and the probability of the residual risk is done with a probabilistic model; there are several models and many tools. But estimating all the components - occurrence, impact, effort to mitigate, and residual risk - is required.

Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from aleatory uncertainty is with margin: Cost Margin, Schedule Margin, Performance Margin. But this leaves open the question of how much margin is needed.

ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the Underlying Statistical Process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or Method of Moments.
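As a sketch of how such an estimate might be produced (my own construction, with invented numbers - not from the post), a few lines of Monte Carlo simulation turn an assumed duration distribution into a schedule margin at a chosen confidence level:

import random

# Invented example: 10 tasks whose durations vary naturally (aleatory
# uncertainty), modeled with triangular distributions in days.
tasks = [(4, 5, 9)] * 10          # (optimistic, most likely, pessimistic)
TRIALS = 100_000

totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    for _ in range(TRIALS)
)

p50 = totals[int(0.50 * TRIALS)]
p80 = totals[int(0.80 * TRIALS)]
print(f"50% confidence: {p50:.1f} days; 80% confidence: {p80:.1f} days")
# Margin = what we must add to the median plan to reach 80% confidence.
print(f"Schedule margin: {p80 - p50:.1f} days")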


So one more look at the suggestions before examining further the 5 ways of making decisions in the absence of estimating their impacts and the cost to achieve those impacts.

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

All risk is probabilistic, based on underlying statistical processes: either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In the consideration of risk we must incorporate these probabilistic and statistical behaviours in our decision making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model, based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behaviour.

Let's Look At The Five Decision Making Processes

1. Do the most important work first - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine-tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.

    • Important work first is good strategy. But importance needs a unit of measure. That unit of measure should be found in the strategy. This is the purpose of the strategy. But the strategy needs units of measure as well. Simply saying do important work first doesn't provide a way to make that decision.
    • The notion of validating versus implementing the strategy is artificial. A read of the Strategy Making literature will clear this up. Strategy for business and especially strategy in IT is a very mature domain, with a long history.
    • One approach to generating the units of measure from the strategy is the Balanced Scorecard, where strategic objectives are mapped to Performance Goals, then to Critical Success Factors, then to the Key Performance Indicators. The way to do that is with a Strategy Map, shown below.
    • This is the use of strategy as Porter defines it. 

(Figure: Strategy Map)

2. Do the Highest Technical Risk First - When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

    • This is likely dependent on the technical and programmatic architecture of the project or product. 
    • We may want to establish a platform on which to build riskier components - a platform that is known and trusted, stable, and bug free - before embarking on any high risk development.
    • High risk may mean high cost. So doing risky things first has consequences. What are those consequences? One is risking the budget before it's clear we have a stable platform on which to build follow-on capabilities. Knowing something is high risk may mean high cost, and this requires estimating something that will occur in the future - the cost to achieve and the cost of the consequences.
    • So doing the highest technical risk first is itself a risk that needs to be assessed. Without this assessment, this suggestion has no way of being tested in practice.

3. Do the Easiest Work First - Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other and establish the processes they need to be effective, while still delivering concrete, valuable working software in a safe way.

    • This is also dependent on the technical and programmatic architecture of the project or product.
    • It's also counterintuitive given #2, since high risk work is not likely to be the easiest to do.
    • These assessments between risk and work sequence require some sort of trade space analysis, and since the outcomes and their impacts are in the future, estimating them is part of the Analysis of Alternatives approach for any non-trivial project where Systems Engineering guides the work processes.

4. Do the Legal Requirements First - In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to significantly improve the time-to-market for your product. A medical organization that successfully adopted Agile used this project decision-making strategy to considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

    • Medical devices are regulated under 21 CFR Parts 800-1299. The suggestion doesn't reference any regulations for medical software, which ranges from patient check-in at the front desk to surgical devices controlled by software.
    • Developing 21 CFR software components first may not be possible until the foundation on which they are built is established, tested, and verified.
    • This means - Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes. 
    • Once the plan - a required plan for validation - is in place, the order of the development will be more visible. 
    • Deciding which components to develop just because they are impacted by legal requirements usually means ALL the components. So this approach - Do The Legal Requirements First - usually means do them all.
    • The notion of “rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market” fails to describe how they knew when they would be ready to test out these ideas - and, most importantly, how they were able to go to market in the absence of the certification.
    • As well, what type of testing - early trials, full 21 CFR release, human applications, animal testing, etc. - is not stated; I say this with some experience in the medical device domain.
    • A colleague is the CEO of http://acumyn.com/. I'll confirm the processes of validating software with him.

5. Liability Driven Investment - This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.

    • It's not clear why this is called liability. A liability on the balance sheet is an obligation to pay. Deciding what work to do now to generate needed revenue is certainly a strategy - Value Stream Mapping or Impact Mapping is a way to define that - but liability seems to be the wrong term.
    • Nor is it clear how this connects with a securities exchange, and what problem is being solved using the term liabilities. Shorts are obligations to pay in the future when the short is called. Puts and calls are terms used in stock trading, but developing software products is not trading. The Real Options used by the poster in the past don't exercise the option, so the liability to pay doesn't seem to connect here.
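For concreteness, here is a toy sketch of what "choose work to fund future obligations" might look like (my own construction, with invented numbers; note that every revenue figure is itself an estimate of a future outcome, which reinforces the point above):

# Toy model: pick work items whose expected revenue covers upcoming
# obligations. All figures are invented estimates.
liabilities = [("payroll, Nov", 50_000), ("hosting, Dec", 10_000)]
work_items = [
    {"name": "Feature A", "expected_revenue": 30_000, "weeks": 2},
    {"name": "Feature B", "expected_revenue": 60_000, "weeks": 3},
    {"name": "Feature C", "expected_revenue": 8_000,  "weeks": 1},
]

needed = sum(amount for _, amount in liabilities)
# Greedy heuristic: highest expected revenue per week first.
work_items.sort(key=lambda w: w["expected_revenue"] / w["weeks"], reverse=True)

selected, funded = [], 0
for item in work_items:
    if funded >= needed:
        break
    selected.append(item["name"])
    funded += item["expected_revenue"]

print(f"Need {needed:,}; selected {selected}, covering {funded:,}")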

References

  1. Risk Informed Decision Handbook, NASA/SP-2010-576, Version 1.0, April 2010.
  2. General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.
  3. Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Robert Kaplan and David Norton, Harvard Business Press.
  4. Estimating Optimal Decision Rules in Presence of Model Parameter Uncertainty, Christopher Joseph Bennett, Vanderbilt University, June 6, 2012.
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 01:08

To achieve great things, two things are needed; a plan, and not quite enough time.
− Leonard Bernstein

Plan of the Day (CV-41)

The notion that planning is a waste is common in domains where mission critical, high risk / high reward, must-work projects do not exist.

Notice the Plan and the Planned delivery date. The notion that deadlines are somehow evil goes along with a lack of understanding that the business needs a set of capabilities to be in place on a date in order to start booking the value in the general ledger.

Plans are strategies. Strategies are hypotheses. Hypotheses are tested with experiments. Experiments show, from actual data, what the outcome of the work is. These outcomes are used as feedback to take corrective actions at the strategic and tactical levels of the project.

This is called Closed Loop Control. Set the strategy; define the units of measure for the desired outcome - Measures of Effectiveness and Measures of Performance. Perform work and assess these measures. Determine the variance between the planned outcomes and the needed outcomes. Take corrective action by adjusting the plan to keep the project moving toward the strategic goals. For Closed Loop Control we need the following (a toy sketch follows the list):

  • A steering target for some future state.
  • A measure of the current state.
  • The variance between current and future.
  • Corrective actions to put the project back on track toward the desired state.
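Here is a minimal sketch of that loop (my own construction, with invented numbers - not from the original post):

# Toy closed-loop control: steer toward a target by measuring variance
# and applying corrective action each period. All numbers invented.
target = 100.0              # steering target: planned cumulative value
current = 0.0               # measure of the current state
capacity_per_period = 10.0

for period in range(1, 13):
    planned_to_date = target * period / 12
    variance = planned_to_date - current      # planned vs. actual
    # Corrective action: add effort in proportion to the shortfall.
    effort = capacity_per_period + 0.5 * variance
    current += effort * 0.9                   # 90% of effort becomes progress
    print(f"period {period:2d}: actual {current:6.1f}, variance {variance:+6.1f}")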

Categories: Project Management

SPaMCAST 312 – Alex Neginsky, A Leader and Practitioner’s View of Agile

http://www.spamcast.net

Listen to the SPaMCAST 312 now!

SPaMCAST 312 features our interview with Alex Neginsky. Alex is a real leader and practitioner in a real company that has really applied Agile. Alex shares pragmatic advice about how to practice Agile in the real world!

Alex’s bio:

Alex Neginsky began his career in the software industry at the age of 16 as a Software Engineer for Ultimate Software. He earned his Bachelor’s degree in Computer Science at Florida Atlantic University in 2006. By age 27, Alex obtained his first software patent.

Alex has been at MTech, a division of Newmarket International, since 2011. As the Director of Development he brings 15 years of experience, technical skills, and management capabilities. Alex manages highly skilled software professionals across several teams stationed all over Eastern Europe and the United States. He serves as the liaison between MTech Development and the industry. During his tenure with the MTech division of Newmarket, Alex has been pivotal in the adoption of the complete software development lifecycle and has spearheaded the adoption of leading Agile Development Methodologies such as Scrum and Kanban. This has yielded higher velocity and better efficiencies throughout the organization.

Contact Alex at aneginsky@newmarketinc.com

LinkedIn

If you have the right stuff and are interested in joining Newmarket then check out:

http://www.newmarketinc.com/careers/join-newmarket

Call to action!

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com. What will we do with this list? We have two ideas. First, we will compile a list and publish it on the blog. Second, we will use the list to drive “Re-read” Saturday. Re-read Saturday is an exciting new feature we will begin in November. More on this new feature next week. So feel free to choose your platform and send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 313 features our essay on developing an initial backlog.  Developing an initial backlog is an important step to get projects going and moving in the right direction. If a project does not start well, it is hard for it to end well.  We will provide techniques to help you begin well!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

Android 5.0 Lollipop SDK and Nexus Preview Images

Android Developers Blog - Sun, 10/19/2014 - 07:12

Two more weeks!

By Jamal Eason, Product Manager, Android

At Google I/O last June, we gave you an early version of Android 5.0 with the L Developer Preview, running on Nexus 5, Nexus 7 and Android TV. Over the course of the L Developer Preview program, you’ve given us great feedback and we appreciate the engagement from you, our developer community. Thanks!

This week, we announced Android 5.0 Lollipop. Starting today, you can download the full release of the Android 5.0 SDK, along with updated developer images for Nexus 5, Nexus 7 (2013), ADT-1, and the Android emulator.

The first set of devices to run this new version of Android -- Nexus 6, Nexus 9, and Nexus Player -- will be available in early November. In the same timeframe, we'll also roll out the Android 5.0 update worldwide to Nexus 4, 5, 7 (2012 & 2013), and 10 devices, as well as to Google Play edition devices.

Therefore, now is the time to test your apps on the new platform. You have two more weeks to get ready!

What’s in Lollipop?

Android 5.0 Lollipop introduces a host of new APIs and features including:

There's much more, so check out the Android 5.0 platform highlights for a complete overview.

What’s in the Android 5.0 SDK?

The Android 5.0 SDK includes updated tools and new developer system images for testing. You can develop against the latest Android platform using API level 21 and take advantage of the updated support library to implement Material Design as well as the leanback user interface for TV apps.

You can download these components through the Android SDK Manager and develop your app in Android Studio:

  • Android 5.0 SDK Platform & Tools
  • Android 5.0 Emulator System Image - 32-bit & 64-bit (x86)
  • Android 5.0 Emulator System Image for Android TV (32-bit)
  • Android v7 appcompat Support Library for Material Design theme backwards compatibility
  • Android v17 leanback library for Android TV app support

For developers using the Android NDK for native C/C++ Android apps we have:

For developers on Android TV devices we have:

  • Android 5.0 system image over the air (OTA) update for ADT-1 Developer Kit. OTA updates will appear over the next few days.

Similar to our previous release of the preview, we are also providing updated system image downloads for Nexus 5 & Nexus 7 (2013) devices to help with your testing as well. These images support the Android 5.0 SDK, but only have the minimal apps pre-installed in order to enable developer testing:

  • Nexus 5 (GSM/LTE) “hammerhead” Device System Image
  • Nexus 7 (2013) - (Wifi) “razor” Device System Image

For the developer preview versions, there will not be an over the air (OTA) update. You will need to wipe and reflash your developer device to use the latest developer preview versions. If you want to receive the official consumer OTA update in November and any other official updates, you will have to have a factory image on your Nexus device.

Validate your apps with the Android 5.0 SDK

With the consumer availability of Android 5.0 and the Nexus 6, Nexus 9, and Nexus Player right around the corner, here are a few things you should do to prepare:

  1. Get the emulator system images through the SDK Manager or download the Nexus device system images.
  2. Recompile your apps against Android 5.0 SDK, especially if you used any preview APIs. Note: APIs have changed between the preview SDK and the final SDK.
  3. Validate that your current Android apps run on the new API 21 level with ART enabled. And if you use the NDK for your C/C++ Android apps, validate against the 64-bit emulator. ART is enabled by default on API 21 & new Android devices with Android 5.0.

Once you validate your current app, explore the new APIs and features for Android 5.0.

Migrate Your Existing App to Material Design

Android 5.0 Lollipop introduces Material Design, which enables your apps to adopt a bold, colorful, and flexible design, while remaining true to a small set of key principles that guide user interaction across multiple screens and devices.

After making sure your current apps work with Android 5.0, now is the time to enable the Material theme in your app with the AppCompat support library. For quick tips & recommendations for making your app shine with Material Design, check out our Material Design guidelines and tablet optimization tips. For those of you new to Material Design, check out our Getting Started guide.

Get your apps ready for Google Play!

Starting today, you can publish your apps that are targeting Android 5.0 Lollipop to Google Play. In your app manifest, update android:targetSdkVersion to "21", test your app, and upload it to the Google Play Developer Console.

Starting November 3rd, Nexus 9 will be the first device available to consumers that will run Android 5.0. Therefore, it is a great time to publish on Google Play, once you've updated and tested your app. Even if your apps target earlier versions of Android, take a few moments to test them on the Android 5.0 system images, and publish any updates needed in advance of the Android 5.0 rollout.

Stay tuned for more details on the Nexus 6 and Nexus 9 devices, and how to make sure your apps look their best on them.

Next up, Android TV!

We also announced the first consumer Android TV device, Nexus Player. It’s a streaming media player for movies, music and videos, and also a first-of-its-kind Android gaming device. Users can play games on their HDTVs with a gamepad, then keep playing on their phones while they’re on the road. The device is also Google Cast-enabled, so users can cast your app from their phones or tablets to their TV.

If you’re developing for Android TV, watch for more information on November 3rd about how to distribute your apps to Android TV users through the Google Play Developer Console. You can start getting your app ready by making sure it meets all of the TV Quality Guidelines.

Get started with Android 5.0 Lollipop platform

If you haven’t had a chance to take a look at this new version of Android yet, download the SDK and get started today. You can learn more about what’s new in the Android 5.0 platform highlights and get all the details on new APIs and changed behaviors in the API Overview. You can also check out the latest DevBytes videos to learn more about Android 5.0 features.

Enjoy this new release of Android!

Categories: Programming

Agile Risk Management: The Need Still Exists


 

Hand Drawn Chart Saturday

I once asked the question, “Has the adoption of Agile techniques magically erased risk from software projects?” The obvious answer is no; however, Agile has provided a framework and tools to reduce the overall level of performance variance. Even if we can reduce risk by damping the variance driven by complexity, the size of the work, process discipline and people, we still need to “ROAM” the remaining risks. ROAM is a model that helps teams identify risks. Applying the model, a team reviews each risk and classifies it as:

  • Resolved, the risk has been answered and avoided or eliminated.
  • Owned, someone has accepted the responsibility for doing something about the risk.
  • Accepted, the risk has been understood and the team has agreed that nothing will be done about it.
  • Mitigated, something has been done so that the probability or potential impact is reduced.

When we consider any risk we need to recognize the two attributes: impact and probability.  Impact is what will happen if the risk becomes something tangible. Impacts are typically monetized or stated as the amount of effort needed to correct the problem if it occurs. The size of the impact can vary depending on when the risk occurs. For example, if we suddenly decide that the system architecture will not scale to the level required during sprint 19, the cost in rework would be higher than if that fact were discovered in sprint 2. Probability is the likelihood a risk will become an issue. In a similar manner to impact, probability varies over time.
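As a small illustration of those two attributes in practice (my own sketch, with invented numbers - not part of the ROAM model itself), a team might track exposure as probability times impact alongside each risk's ROAM status:

# Risk exposure = probability x impact (all numbers invented).
risks = [
    {"name": "Architecture won't scale", "p": 0.3, "impact_days": 40, "roam": "Owned"},
    {"name": "Key dependency slips",     "p": 0.5, "impact_days": 10, "roam": "Mitigated"},
    {"name": "New-hire ramp-up",         "p": 0.9, "impact_days": 3,  "roam": "Accepted"},
]

# Review the largest exposures first.
for r in sorted(risks, key=lambda r: r["p"] * r["impact_days"], reverse=True):
    exposure = r["p"] * r["impact_days"]
    print(f"{r['name']:<26} exposure {exposure:5.1f} days  [{r['roam']}]")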

We defined risk as “any uncertain event that can have an impact on the success of a project.” Does using Agile change our need to recognize and mitigate risk? No, but instead of a classic risk management plan and a risk log, a more Agile approach to risk management might be to generate additional user stories. While Agile techniques reduce some forms of risk, we still need to be vigilant. Adding risks to the project or program backlog will help ensure there is less chance of variability and surprises.


Categories: Process Management

What Is Systems Architecture And Why Should We Care?

Herding Cats - Glen Alleman - Sat, 10/18/2014 - 20:10

If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, robustness of the resulting entities are all attributes of well designed buildings and well designed computer systems. [1]

In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher–level architecture serving to integrate and unify them into a higher level structure.

The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details on any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts. [2] This is the case where Data Management is the initial deployment activity followed by more complex system components.

By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:

  • Business processes are streamlined – a fundamental benefit of building an enterprise information architecture is the discovery and elimination of redundancy in the business processes themselves. In effect, it can drive the reengineering of the business processes it is designed to support. This occurs during the construction of the information architecture. By revealing the different organizational views of the same processes and data, any redundancies can be documented and dealt with. The fundamental approach to building the information architecture is to focus on data, process and their interaction.
  • Systems information complexity is reduced – the architectural framework reduces information system complexity by identifying and eliminating redundancy in data and software. The resulting enterprise information architecture will have significantly fewer applications and databases as well as a resulting reduction in intersystem links. This simplification also leads to significantly reduced costs. Some of those recovered costs can and should be reinvested into further information system improvements. This reinvestment activity becomes the raison d’être for the enterprise–wide system deployment.
  • Enterprise–wide integration is enabled through data sharing and consolidation – the information architecture identifies the points to deploy standards for shared data. For example, most Kimball business units hold a wealth of data about products, customers, and manufacturing processes. However, this information is locked within the confines of the business unit specific applications. The information architecture forces compatibility for shared enterprise data. This compatible information can be stripped out of operational systems, merged to provide an enterprise view, and stored in data repositories. In addition, data standards streamline the operational architecture by eliminating the need to translate or move data between systems. A well–designed architecture not only streamlines the internal information value chain, but it can provide the infrastructure necessary to link information value chains between business units or allow effortless substitution of information value chains.
  • Rapid evolution to new technologies is enabled – client / server and object–oriented technology revolves around the understanding of data and the processes that create and access this data. Since the enterprise information architecture is structured around data and process and not redundant organizational views of the same thing, the application of client / server and object–oriented technologies is much cleaner. Attempting to move to these new technologies without an enterprise information architecture will result in the eventual rework of the newly deployed system.

[1] The Timeless Way of Building, C. Alexander, Oxford University Press, 1979.

[2] “How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.

Categories: Project Management

Neo4j: LOAD CSV – The sneaky null character

Mark Needham - Sat, 10/18/2014 - 11:49

I spent some time earlier in the week trying to import a CSV file extracted from Hadoop into Neo4j using Cypher’s LOAD CSV command and initially struggled due to some rogue characters.

The CSV file looked like this:

$ cat foo.csv
foo,bar,baz
1,2,3

I wrote the following LOAD CSV query to extract some of the fields and compare others:

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, line.bar, line.bar = "2"
==> +--------------------------------------+
==> | line.foo | line.bar | line.bar = "2" |
==> +--------------------------------------+
==> | <null>   | "2"      | false          |
==> +--------------------------------------+
==> 1 row

I had expected to see a “1” in the first column and a ‘true’ in the third column, neither of which happened.

I initially didn’t have a text editor with hexcode mode available so I tried checking the length of the entry in the ‘bar’ field:

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, line.bar, line.bar = "2", length(line.bar)
==> +---------------------------------------------------------+
==> | line.foo | line.bar | line.bar = "2" | length(line.bar) |
==> +---------------------------------------------------------+
==> | <null>   | "2"      | false          | 2                |
==> +---------------------------------------------------------+
==> 1 row

The length of that value is 2 when we’d expect it to be 1 given it’s a single character.

I tried trimming the field to see if that made any difference…

load csv with headers from "file:/Users/markneedham/Downloads/foo.csv" AS line
RETURN line.foo, trim(line.bar), trim(line.bar) = "2", length(line.bar)
==> +---------------------------------------------------------------------+
==> | line.foo | trim(line.bar) | trim(line.bar) = "2" | length(line.bar) |
==> +---------------------------------------------------------------------+
==> | <null>   | "2"            | true                 | 2                |
==> +---------------------------------------------------------------------+
==> 1 row

…and it did! I thought there was probably a trailing whitespace character after the “2” which trim had removed, and that the ‘foo’ column in the header row had the same issue.

I was able to see that this was the case by extracting the JSON dump of the query via the Neo4j browser:

{  
   "table":{  
      "_response":{  
         "columns":[  
            "line"
         ],
         "data":[  
            {  
               "row":[  
                  {  
                     "foo\u0000":"1\u0000",
                     "bar":"2\u0000",
                     "baz":"3"
                  }
               ],
               "graph":{  
                  "nodes":[  
 
                  ],
                  "relationships":[  
 
                  ]
               }
            }
         ],
      ...
}

It turns out there were null characters scattered around the file, so I needed to pre-process the file to get rid of them:

$ tr < foo.csv -d '\000' > bar.csv

Now if we process bar.csv it’s a much smoother process:

load csv with headers from "file:/Users/markneedham/Downloads/bar.csv" AS line
RETURN line.foo, line.bar, line.bar = "2", length(line.bar)
==> +---------------------------------------------------------+
==> | line.foo | line.bar | line.bar = "2" | length(line.bar) |
==> +---------------------------------------------------------+
==> | "1"      | "2"      | true           | 1                |
==> +---------------------------------------------------------+
==> 1 row

Note to self: don’t expect data to be clean, inspect it first!
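One way to do that inspection up front (my addition, not from the post) is a quick scan for NUL bytes before handing the file to LOAD CSV:

# Pre-flight check: report lines containing null bytes.
NUL = b"\x00"
with open("foo.csv", "rb") as f:
    for line_number, line in enumerate(f, start=1):
        if NUL in line:
            print(f"line {line_number}: {line.count(NUL)} null byte(s)")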

Categories: Programming

R: Linear models with the lm function, NA values and Collinearity

Mark Needham - Sat, 10/18/2014 - 07:35

In my continued playing around with R I’ve sometimes noticed ‘NA’ values in the linear regression models I created but hadn’t really thought about what that meant.

On the advice of Peter Huber I recently started working my way through Coursera’s Regression Models which has a whole slide explaining its meaning:

(Slide from the Regression Models course explaining the meaning of NA coefficients.)

So in this case ‘z’ doesn’t help us in predicting Fertility since it doesn’t give us any more information than we can already get from ‘Agriculture’ and ‘Education’.

Although in this case we know why ‘z’ doesn’t have a coefficient, sometimes it may not be clear which other variable the NA one is highly correlated with.

Multicollinearity (also collinearity) is a statistical phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a non-trivial degree of accuracy.
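In matrix terms (a standard derivation, my addition rather than the course's): because z is an exact linear combination of two other predictors, the design matrix loses full column rank and the least-squares coefficients are no longer unique, which is why lm reports NA for the redundant column:

$$
z_i = x_{i,\mathrm{Agriculture}} + x_{i,\mathrm{Education}}
\;\Longrightarrow\;
\operatorname{rank}(X) < p
\;\Longrightarrow\;
X^{\top}X \text{ is singular}
\;\Longrightarrow\;
\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y \text{ has no unique solution.}
$$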

In that situation we can make use of the alias function to explain the collinearity as suggested in this StackOverflow post:

library(datasets); data(swiss); require(stats); require(graphics)
z <- swiss$Agriculture + swiss$Education
fit = lm(Fertility ~ . + z, data = swiss)
> alias(fit)
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + 
    Infant.Mortality + z
 
Complete :
  (Intercept) Agriculture Examination Education Catholic Infant.Mortality
z 0           1           0           1         0        0

In this case we can see that ‘z’ is highly correlated with both Agriculture and Education, which makes sense given it’s the sum of those two variables.

When we notice that there’s an NA coefficient in our model we can choose to exclude that variable and the model will still have the same coefficients as before:

> require(dplyr)
> summary(lm(Fertility ~ . + z, data = swiss))$coefficients
                   Estimate  Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817 10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140  0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082  0.25387820 -1.016268 3.154617e-01
Education        -0.8709401  0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153  0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481  0.38171965  2.821568 7.335715e-03
> summary(lm(Fertility ~ ., data = swiss))$coefficients
                   Estimate  Std. Error   t value     Pr(>|t|)
(Intercept)      66.9151817 10.70603759  6.250229 1.906051e-07
Agriculture      -0.1721140  0.07030392 -2.448142 1.872715e-02
Examination      -0.2580082  0.25387820 -1.016268 3.154617e-01
Education        -0.8709401  0.18302860 -4.758492 2.430605e-05
Catholic          0.1041153  0.03525785  2.952969 5.190079e-03
Infant.Mortality  1.0770481  0.38171965  2.821568 7.335715e-03

If we call alias now we won’t see any output:

> alias(lm(Fertility ~ ., data = swiss))
Model :
Fertility ~ Agriculture + Examination + Education + Catholic + 
    Infant.Mortality
Categories: Programming

Agile Risk Management: People

People are chaotic.

Have you ever heard the saying, “software development would be easy if it weren’t for the people”? People are one of the factors that cause variability in the performance of projects and releases (other factors include complexity, the size of the work, and process discipline). There are three mechanisms built into most Agile frameworks that accept that people can be chaotic by nature, and that dampen the resulting variability.

  1. Team size, constitution and consistency are attributes that most Agile frameworks have used to enhance productivity and effectiveness that also reduce the natural variability generated when people work together.
    1. The common Agile team size of 7 ± 2 is small enough that team members can establish and nurture personal relationships to ensure effective communication.
    2. Agile teams are typically cross-functional and include a Scrum master/coach and the product owner. The composition of the team fosters self-reliance and the ability to self-organize, again reducing variability.
    3. Long lived teams tend to establish strong bonds that foster good communication and behaviors such as swarming. Swarming is a behavior in which team members rally to a task that is in trouble so that the team as a whole can meet its goal, which reduces overall variability in performance.
  2. Peer reviews of all types have been a standard tool to improve the quality and consistency of work products for decades. Peer reviews are a mechanism to remove defects from code or another work product before it is integrated into larger work products. The problem is that having someone else look at something you created and criticize it is grating. Extreme programming took classic peer reviews a step further and put two people together at one keyboard, one typing and the other providing running commentary (a colloquial description of pair programming). Demonstrations are a variant of peer reviews. Removing defects earlier in the development process through observation and discussion reduces variability and therefore the risk of not delivering value.
  3. Daily stand-ups and other rituals are the outward markers of Agile techniques. Iteration/sprint planning keeps teams focused on what they need to do in the short-term future and then re-plans when that time frame is over. Daily stand-ups provide a platform for the team to sync up on a daily basis to reduce the variance that can creep in when plans diverge. Demonstrations show project stakeholders how the team is solving their business problems and solicit feedback to keep the team on track. All of these rituals reduce the potential variability that can be introduced by people acting alone rather than as a team with a common goal.

In information technology projects of all types, people transform ideas and concepts into business value. In software development and maintenance, the tools and techniques might vary but, at their core, software-centric projects are social enterprises. Getting any group of people together to achieve a goal is a somewhat chaotic process. Agile techniques and frameworks are structured to help individuals increase alignment and act together as a team to deliver business value.


Categories: Process Management

Zumero Professional Services

Eric.Weblog() - Eric Sink - Fri, 10/17/2014 - 19:00
Building mobile apps is really hard.
  • Why does my app work in the simulator but not on the device?
  • How do I get 60 frames-per-second UI performance on Android?
  • How do I manage concurrent access to my mobile database?
  • How do I deal with limited memory?
  • How do I keep something running in the background?
  • I have to compile for how many different CPU architectures?!?

Building apps for mobile devices involves a dizzying array of technologies and tooling. The explosion of mobile offers more ways for software projects to fail than ever before.

Zumero would like to help.

We can build your app for you. Or we can assist or advise as you build it yourself.

Both New and Old

Zumero for SQL Server is the best solution for syncing SQL data with mobile devices. Now we're looking ahead to providing more ways of helping developers build great business apps for mobile devices. We have additional products in the pipeline, but it is already clear that Zumero Professional Services is going to be an important piece of our story.

To some degree, we are formalizing what we are already doing for our customers. They're building apps to solve business problems, and they're constantly experiencing the reality that mobile app development is really, really hard. We often get involved in ways that go beyond the use of our product, helping with issues like concurrency and integration with native code.

So yes, Zumero Professional Services is a new thing for us. We see this as an opportunity because of the explosion of business apps being built. But it is also an old thing. We've done this before.

A number of years ago we did a lot of custom software development on mobile devices for companies like Motorola. This was back when smartphones weren't actually very smart. We built stuff that was burned into the phone's ROM, with CPU and memory constraints that make an iPhone look like a supercomputer. We had things like massive test suites with high code coverage.

We got out of that market, but now we're back, and even though the mobile industry has changed dramatically, many of the core technology challenges are still the same. This feels like a return to our roots.

Our place in the world

Specialties (we are especially excited about stuff in this column):
  • Apps that change your business for the better
  • Tough technical issues
  • Offline-capable apps with sync
  • Xamarin
  • Azure, Amazon Web Services
  • iOS, Android, Windows Phone
  • SQLite, Microsoft SQL Server
  • Helping companies use mobile apps to make their business better

Solid competencies (we can provide reliable execution and high-quality results using the technologies and practices in this column):
  • Apps that solve business problems
  • Apps that integrate with existing systems
  • Networking, REST
  • Objective-C, Java, PhoneGap
  • Integration with on-prem backends
  • Windows, Mac OS
  • NoSQL
  • Building great apps that people like to use

Brussels sprouts (we actively avoid stuff in this column):
  • Games (unless we're playing them)
  • Photoshop
  • Data exchange via CDROM
  • Adobe Flash
  • Apps that don't let people interact
  • BlackBerry, PalmOS
  • FoxPro
  • Guessing what the next viral app will be

Interested?

Email me or sales@zumero.com to explore possibilities.

 

Updated Material Design Guidelines and Resources

Google Code Blog - Fri, 10/17/2014 - 18:00

When we first published the Material Design guidelines back in June, we set out to create a living document that would grow with feedback from the community. In that time, we’ve seen some great work from the community in augmenting the guidelines with things like Sketch templates, icon downloads and screens upon screens of inspiring visual and motion design ideas. We’ve also received a lot of feedback around what resources we can provide to make using Material Design in your projects easier.

So today, in addition to publishing the final Android 5.0 SDK for developers, we’re publishing our first significant update to the Material Design guidelines, with additional resources including:

  • Updated sticker sheets in PSD, AI and Sketch formats
  • A new icon library ZIP download
  • Updated color swatch downloads
  • Updated whiteframe downloads, including better baseline grid text alignment and other miscellaneous fixes

The sticker sheets have been updated to reflect the latest refinements to the components and integrated into a single, comprehensive sticker sheet that should be easier to use. An aggregated sticker sheet is also newly available for Adobe Photoshop and Sketch—two hugely popular requests. In the sticker sheet, you can find various elements that make up layouts, including light and dark symbols for the status bar, app bar, bottom toolbar, cards, dropdowns, search fields, dividers, navigation drawers, dialogs, the floating action button, and other components. The sticker sheets now also include explanatory text for elements.

Note that the images in the Components section of the guidelines haven't yet been updated (that’s coming soon!), so you can consider the sticker sheets to be the most up-to-date version of the components.

Also, the new system icons sticker sheet contains icons commonly used in Android across different apps, such as icons used for media playback, communication, content editing, connectivity, and so on.

Stay tuned for more enhancements as we incorporate more of your feedback—remember to share your suggestions on Google+! We’re excited to continue evolving this living document with you!

For more on Material Design, check out these videos and the new getting started guide for Android developers.

Posted by Roman Nurik, Design Advocate
Categories: Programming

How to deploy a Docker application into production on Amazon AWS

Xebia Blog - Fri, 10/17/2014 - 17:00

Docker reached production status a few months ago. But having the container technology alone is not enough. You need a complete platform infrastructure before you can deploy your Docker application in production. Amazon AWS offers exactly that: a production-quality platform that offers capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

For demonstration purposes, you are going to use the Node.js application that was built for CloudFoundry and used to demonstrate Deis in a previous post: a truly useful app whose sources are available on GitHub.

1. Create a Dockerfile

The first thing you need to do is create a Dockerfile to build an image. This is quite simple: you install the Node.js and npm packages, copy the source files, and install the JavaScript modules.

# DOCKER-VERSION 1.0
FROM    ubuntu:latest
#
# Install nodejs npm
#
RUN apt-get update
RUN apt-get install -y nodejs npm
#
# add application sources
#
COPY . /app
RUN cd /app; npm install
#
# Expose the default port
#
EXPOSE  5000
#
# Start command
#
CMD ["nodejs", "/app/web.js"]
2. Test your Docker application

Now you can create the Docker image and test it.

$ docker build -t sample-nodejs-cf .
$ docker run -d -p 5000:5000 sample-nodejs-cf

Point your browser at http://localhost:5000, click the 'start' button and Presto!
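
If the page does not come up, you can inspect the running container before moving on. This is only a sketch using standard Docker commands; the container id is a placeholder you would read from the docker ps output:

$ docker ps                   # find the id of the running container
$ docker logs <container-id>  # check the application's log output
$ docker stop <container-id>  # stop the container when you are done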

3. Zip the sources

Now that you know the instance works, you zip the source files. The image will be built on Amazon AWS based on your Dockerfile.

$ zip -r /tmp/sample-nodejs-cf-srcs.zip .
4. Deploy Docker application to Amazon AWS

Now you install and configure the Amazon AWS command line interface (CLI) and deploy the Docker source files to Elastic Beanstalk. You can do this all manually (a sketch of the equivalent CLI calls appears at the end of this step), but here you use the script deploy-to-aws.sh that I created.

$ deploy-to-aws.sh \
         sample-nodejs-cf \
         /tmp/sample-nodejs-cf-srcs.zip \
         demo-env

After about 8-10 minutes your application is running. The output should look like this:

INFO: creating application sample-nodejs-cf
INFO: Creating environment demo-env for sample-nodejs-cf
INFO: Uploading sample-nodejs-cf-srcs.zip for sample-nodejs-cf, version 1412948762.
upload: ./sample-nodejs-cf-srcs.zip to s3://elasticbeanstalk-us-east-1-233211978703/1412948762-sample-nodejs-cf-srcs.zip
INFO: Creating version 1412948762 of application sample-nodejs-cf
INFO: demo-env in status Launching, waiting to get to Ready..
...
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: Updating environment demo-env with version 1412948762 of sample-nodejs-cf
INFO: demo-env in status Updating, waiting to get to Ready..
...
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: Version 1412948762 of sample-nodejs-cf deployed in environment
INFO: current status is Ready, goto http://demo-env-vm2tqi3qk4.elasticbeanstalk.com
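
If you would rather not use the script, its steps map onto standard AWS CLI calls. The sketch below is not the script itself; the bucket name, version label, and solution stack name are assumptions you would adapt to your own account:

$ # a sketch of the manual equivalent; names below are placeholders
$ aws elasticbeanstalk create-application --application-name sample-nodejs-cf
$ aws s3 cp /tmp/sample-nodejs-cf-srcs.zip s3://my-eb-bucket/sample-nodejs-cf-srcs.zip
$ aws elasticbeanstalk create-application-version \
    --application-name sample-nodejs-cf \
    --version-label v1 \
    --source-bundle S3Bucket=my-eb-bucket,S3Key=sample-nodejs-cf-srcs.zip
$ aws elasticbeanstalk create-environment \
    --application-name sample-nodejs-cf \
    --environment-name demo-env \
    --version-label v1 \
    --solution-stack-name "64bit Amazon Linux running Docker 1.2.0"
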
5. Test your Docker application on the internet!

Your application is now available on the Internet. Browse to the designated URL and click on start. When you increase the number of instances at Amazon, they will appear in the application. When you deploy a new version of the application, you can observe the new version appear without any errors in the client application.

For more information, go to Amazon Elastic Beanstalk adds Docker support and Dockerizing a Node.js Web App.

Stuff The Internet Says On Scalability For October 17th, 2014

Hey, it's HighScalability time:


What could this be? Swarms of drones painting 3D light sculptures against the night sky!
  • Quotable Quotes:
    • Visnja Zeljeznjak: Steve Jobs' product pricing formula: cost of materials x 3 + 33%
    • Benedict Evans: We now have over 2bn iOS and Android devices on earth, and this will grow in the next few years to well over 3bn.
    • @ClearStoryData: It's true! Avg beer drinker attracts 4.4% more Mosquitos than water drinker #Strataconf
    • Leslie Lamport: The core idea of the problem of that notion of causality came about because of my familiarity with special relativity...where one event could causally affect another depended on whether or not information from one could physically reach the other.
    • @laurelatoreilly: Fascinating session about cargo ships going dark to shift market prices #IoT #strataconf "your decisions are only as good as your data"
    • @muratdemirbas: Distributed/decentralized coordination is expensive & hard to scale. Centralized coordination is cheap & scales easily using hierarchies.
    • @froidianslip: ”Kafka is awesome. We heard it cures cancer." -- @gwenshap #Strataconf
    • @timoreilly: RT @grapealope: The self-driving car has 6000 sensors, and takes readings at 4Hz. That's a lot of data. @MCSrivas #strataconf #MapR
    • @froidianslip: Love the paraphrase borrowed from Ray Bradbury, "Any sufficiently complex configuration is indistinguishable from code." #Strataconf
    • @matei_zaharia: Spark shatters MapReduce's 100 TB and 1 PB sort records... with 10x fewer nodes
    • @msallstr: “Synchronous calls in this environment are the crystal meth of programming” @mjpt777 on the new reactive manifesto
    • @postwait: “If you put them under enough stress, perfectly rational people will panic and start believing in science” #priceless
    • Ilya Grigorik: It's great to see access from mobile is around 30% faster compared to last year.
    • @ryandotsmith: Recently migrated an async system to SQS. Much simple. Tiny latency. Here is the code (maybe a gem?)

  • People just don't appreciate the power of messy. The problematic culture of "Worse is Better". There's an implied notion here that people can't recognize better when they see it. Better is not a platonic ideal. It can't be proved by argument. Better, like evolution, is something that works itself out in practice. Like evolution, Worse is Better is an algorithm for stepping through a possibility space by jumping from one working phenotype to the next more adapted working phenotype. And for many, that's better. Not Ideal, but Better.

  • The Times They Are a-Changin'. Docker and Microsoft partner to drive adoption of distributed applications. What's the goal? nickstinemates: Package your Windows app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Windows host. Package your Linux app in a docker container, use same tooling you would otherwise use to deploy to a docker engine running on a Linux host.

  • Leandro Pereira writes a fine autobiography in Life of a HTTP request, as seen by my toy web server. All the stages of life are there. Socket creation. Acceptance. Scheduling. Coroutines. Reading requests. Parsing requests. All the way to the reply and the death of the connection. A lot to learn if you want to look at the simplified internals of a service.

  • Wonderful talk: Call Me Maybe: Carly Rae Jepsen and the Perils of Network Partitions. Kyle Kingsbury takes a detailed look at different partition problems in different databases. There are split brains. Masters dying. Lost data. General network mayhem. It's great. The lesson: what's written down in the marketing documentation is not always what you get. Test your application and see what really happens. The world is not simple. A dumb solution where you understand the failure modes can be a good choice.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Then When Given

Xebia Blog - Fri, 10/17/2014 - 14:50

People who practice ATDD all know how frustrating it can be to write automated examples, especially when you get stuck overthinking their preconditions.

This post describes an alternative approach to writing acceptance tests: write them backwards!

Imagine that you are building the very first online phone book. We need to define an acceptance test for viewing the location of a florist. Using the Given-When-Then formula you would probably describe the behaviour like this.


Given I am on the online phone book homepage
When I type “Florist” in the business type field
And I click 

...

Most of the time you will be discussing and describing details that have nothing to do with viewing the location of a florist. To avoid this, write down the Then clause of the formula first.
Make sure the Then clause contains an observable result.


Then I see the location “Floriststreet 123”

Next, we will try to answer the following question: What caused the Then clause?
Make sure the When clause contains an actor and an action.


When I click “View map” of the search result
Then I see the location “Floriststreet 123”

The last thing we will need to do is answer the following question: Why can I perform that action?
Make sure the Given clause contains a simple precondition.


Given I see a search result for florist “Floral Designs”
When I click “View map” of the search result
Then I see the location “Floriststreet 123”

You might have noticed that I left out the parts where the user goes to the homepage and selects UI objects in the search area. They were not worth mentioning in the Given-When-Then formula. Too much detail makes us lose focus on what we really want to check. The essence of this acceptance test is clicking the “View map” link and exposing the location to the user; the sketch below shows how the finished scenario could sit in a feature file.
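
For reference, here is a minimal sketch of the finished scenario in a complete feature file; the feature name is an assumption:

Feature: View business location

  Scenario: View map of a search result
    Given I see a search result for florist “Floral Designs”
    When I click “View map” of the search result
    Then I see the location “Floriststreet 123”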

Try it a couple of times and let me know how it went.