Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Learning Styles and Teams

Proceed with caution!


A team is a collection of individuals. This fact is important to remember because how each individual consumes and synthesizes information is as varied as the number of team members. However, there is a finite set of learning styles to take into account. Learning styles not only impact how individuals absorb and remember information, but also how they share information with others. While there are several models of learning styles, I have found the Seven Learning Styles to be useful in multicultural IT teams. Here is my interpretation of the Seven Learning Styles:

Visual – The Diagramer absorbs information from pictures. This is the person who builds diagrams or draws pictures to understand a concept. Adherents of mind maps tend to fall into the visual category. Walk around your department and look at how whiteboards are being used. In a meeting, the person who jumps up and starts drawing when they begin to explain a concept is generally a visual learner.

Aural – The Musician needs to hear the information they are processing. Pitch, pace and rhythm tend to be important components in how this type of learner processes information. When the aural learner talks about concepts, they often weave sound references into the descriptions. For example, when attorney Johnnie Cochran famously intoned “If it doesn’t fit, you must acquit” he was evoking aural techniques that helped make the point sticky.

Verbal – The Talker needs to talk through the content they are trying to absorb. In many cases the dialog can occur internally. For example, I tend to game-plan certain meeting scenarios beforehand by running sample conversations through my mind so I can anticipate how they will sound.

Physical – The Builder constructs models as a means of developing an understanding of a concept. Experimentation is a form of physical learning you often find in an IT department. Physical learners build something tangible so they can develop knowledge. If the learners we were discussing were rocket scientists, they might build model rockets rather than drawing pictures of rockets. If we were talking about programmers, we would expect them to create executable code rather than models or diagrams. True prototypes (throwaway proofs of concept) are a means of hands-on learning. Physical learners in non-physical situations will use tactile words to describe concepts. I recently talked to a database modeler who described the symmetry of the model he was working on.

Logical – The Lawyer builds knowledge by assembling facts and assertions into logical arguments that can be evaluated. The process the Lawyer follows tends to build very solid bases of knowledge that are hard to challenge and disrupt. Because logical learners tend to move from point to point, it is more difficult for them to make large jumps that do not follow step by step. To paraphrase the classic syllogism: all programmers are human, Joe is a programmer, therefore Joe is human.

Social – The Grouper prefers learning in group settings. The critical component for the social learner is other people. The interaction with others is an important part of processing, and interaction in groups includes verbal and non-verbal communication and emotional support. Do you remember the person at university who always organized the group study sessions? They probably fell into this category.

Solitary – The Introvert learns best by themselves.  This is the type of person that takes the book home over the weekend and just figures it out.

Learning styles are not mutually exclusive. Each person usually has a predominant style and one or more secondary styles. I tend to the visual, but often augment pictures with physical experiments (whether writing code or brewing beer). The individuals that make up a team will have a mixture of learning styles. Each person’s learning style influences not only how they acquire knowledge, but also how they store it and how they retrieve it. For example, music or sounds are a tool for aural learners (the Musician) to gather information and then retrieve it. Many of us have used mnemonics to memorize facts. When I was young I learned to play the piano, and when I was learning to read music, my teacher taught me the mnemonic “every good boy does fine” for the notes on the lines of the treble clef. Teams need to work together to accommodate and validate different learning styles. When team members are not aware of how others on the team learn, they can often talk past one another, which reduces knowledge-transfer effectiveness.


Categories: Process Management

Slicing Work Into Small Pieces

Herding Cats - Glen Alleman - Mon, 03/31/2014 - 19:32

One of the suggestions in #NoEstimates is the slicing of work - Stories, or whatever word is used for an agile project's chunking of the work - into small pieces. This of course doesn't actually address the issue of producing an Estimate at Completion for the project - an estimate needed by those funding or authorizing the spending of funds to know how much and when.

But slicing is a process of reducing the exposure to uncertainty to a manageable size. It's the next level down's answer to the question what's the value at risk? Make it small and reduce the value at risk of not showing up on time and on budget. Slicing answers a question that has been around for some time:

How long are you willing to wait till you find out you are late (or over budget, or it doesn't work as planned)?

The answer to this how long question varies according to the domain, the value at risk, and other factors usually associated with risk tolerance. But it is a question that must be answered periodically (monthly for us). Recently this notion of slicing has been put forth as part of the solution to the estimating problem, which of course it is not, since the size of the work chunks only reduces the uncertainty of the variance. Both the aleatory (irreducible) uncertainty and the epistemic (reducible) uncertainty will be less when the exposure to the uncertainty is smaller. Beneficial to the project for sure. But the total all-in cost and schedule are related to the slice size only through the cumulative variance of the parts.

It may be interesting to know that slicing is part of the ANSI-748 Earned Value Management assessment performed by the Defense Contract Management Agency (DCMA). DCMA is the DOD agency that validates an Earned Value Management System against its 32 guidelines. DCMA performs a 14-point assessment of the Integrated Master Schedule (integrated because it is connected to the cost baseline) and the Performance Measurement Baseline (PMB), the time-phased planned budget for all the work.

DCMA Check 8 looks for high duration activities. These are known to cause issues with the exposure to programmatic risk for the program. The 44 day threshold represents 2 working months, so the work passes through at least one accounting period (the monthly submission of the Integrated Program Management Report). At the end of each accounting period an assessment of Percent Complete is used to calculate the Budgeted Cost of Work Performed (BCWP) - the earned value for the tasks, work packages, and control accounts (funding buckets) for the program. 44 days may sound long for an agile software development project, but 44 days is short on multi-year defense work worth many millions and likely billions of dollars.

[Screenshot: DCMA 14-point schedule assessment guide - Check 8, high duration activities]

Since the agile community is fond of saying there is nothing new here, while suggesting their ideas are new and unique, the above clip is from the DCMA guide, long used in our defense program management paradigm.

So it is worth repeating the principle of asking how long you are willing to wait before you find out something. The rule of thumb is to sample the status of the thing you are controlling at a rate twice that needed to control or determine a value. This is called the Nyquist rate, from signal processing. Signal processing is where I grew up writing software for Fast Fourier Transforms, Finite Impulse Response filters, and Kalman filters for particle physics data streaming off the particle accelerator. When I didn't have an original idea to finish my PhD studies, I switched to writing the same software for radar signals intelligence and electronic warfare systems. The same principles work for any control system, including a management control system.
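To see why the sampling rate matters, here is a toy sketch in Python (all numbers invented for illustration, not from any program data): a status metric that swings over an 8-week cycle reads as perfectly flat when sampled only once per cycle, while sampling well above that rate reveals the variance you need to steer against.

import math

def status_metric(week):
    # Hypothetical metric with an 8-week cycle of variance around 50.
    return 50 + 10 * math.sin(2 * math.pi * week / 8)

# Sampling once per cycle (every 8 weeks) aliases - every reading is 50.0:
print([round(status_metric(w), 1) for w in range(0, 32, 8)])
# Sampling four times per cycle (every 2 weeks) reveals the swings:
print([round(status_metric(w), 1) for w in range(0, 32, 2)])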

Just as an aside, in the control systems paradigm there is a distinction between monitoring and making decisions from the information gathered by the monitoring. Monitoring alone is an open loop control system. Without a planned value to seek - the SET POINT, if you're using the room thermostat analogy - monitoring the value provides no value, since you don't know what you are asking the system to do. Just monitoring is open loop: you've got numbers from the system, the room temperature or the number of stories produced, but no target to control against.

To have a closed loop system, you need a SET POINT - a steering target, a goal, a desired outcome. Then the monitoring - sampling - can produce a variance, a difference between goal and actual, by which you can take action: raise or lower the temperature, speed up or slow down the car, speed up or slow down the production of software outputs. Yes, you can go too fast - the downstream user can't take the results, and by the time they can, the requirements may have changed. This is closed loop control.
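A minimal closed-loop sketch using the thermostat analogy (the gain and temperatures are made-up values): each sample produces a variance against the set point, and the controller acts on that variance.

def corrective_action(set_point, measured, gain=0.5):
    # Variance is goal minus actual; act proportionally to it.
    return gain * (set_point - measured)

SET_POINT = 20.0      # the steering target
temperature = 16.0    # the monitored value
for _ in range(10):   # each loop iteration is one sample of the system
    temperature += corrective_action(SET_POINT, temperature)
print(round(temperature, 2))   # converges toward the 20.0 set point

Drop the SET_POINT and all that remains is monitoring without a target, which is exactly the open loop case described above.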

[Diagram: open versus closed loop control]

Categories: Project Management

Updates to Google BigQuery following Cloud Platform Live

Google Code Blog - Mon, 03/31/2014 - 17:00
By Felipe Hoffa, Cloud Platform team

Cross-posted from the Google Cloud Platform Blog
Editor's note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.

Last Tuesday we announced an exciting set of changes to Google BigQuery making your experience easier, faster and more powerful. In addition to new features and improvements like table wildcard functions, views, and parallel exports, BigQuery now features increased streaming capacity, lower pricing, and more.


1000x increase in streaming capacity

Last September we announced the ability to stream data into BigQuery for instant analysis, with an ingestion limit of 100 rows per second. While developers have enjoyed and exploited this capability, they've asked for more capacity. You now can stream up to 100,000 rows per second, per table into BigQuery - 1,000 times more than before.

For a great demonstration of the power of streaming data into BigQuery, check out the live demo from the keynote at Cloud Platform Live.

Table wildcard functions

Users often partition their big tables into smaller units for data lifecycle and optimization purposes. For example, instead of having yearly tables, they could be split into monthly or even daily sets. BigQuery now offers table wildcard functions to help easily query tables that match common parameters.

The downside of partitioning tables is writing queries that need to access multiple tables. This would be easier if there were a way to tell BigQuery "process all the tables between March 3rd and March 25th" or "read every table whose name starts with an 'a'". With this release, you can.

TABLE_DATE_RANGE() queries all tables that overlap with a time range (based on the table names), while TABLE_QUERY() accepts regular expressions to select the tables to analyze.
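As a sketch of both functions (the dataset and table names are hypothetical, the client call assumes the google-cloud-bigquery Python library, and the wildcard functions themselves require legacy SQL):

from google.cloud import bigquery

client = bigquery.Client()
legacy = bigquery.QueryJobConfig(use_legacy_sql=True)  # wildcard functions are legacy SQL

# All daily tables whose date suffix falls between March 3rd and March 25th.
client.query("""
    SELECT COUNT(*) FROM TABLE_DATE_RANGE(mydataset.events_,
        TIMESTAMP('2014-03-03'), TIMESTAMP('2014-03-25'))
""", job_config=legacy).result()

# Every table in the dataset whose name starts with 'a'.
client.query("""
    SELECT COUNT(*) FROM TABLE_QUERY(mydataset, 'LEFT(table_id, 1) = "a"')
""", job_config=legacy).result()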

For more information, see the documentation and syntax for table wildcard functions.

Improved SQL support and table views

BigQuery has adopted SQL as its query language because it's one of the most well known, simple and powerful ways to analyze data. Nevertheless, BigQuery used to impose some restrictions on traditional SQL-92, like having to write multiple sub-queries instead of simpler multi-joins. Not anymore: BigQuery now supports multi-join and CROSS JOIN, and improves its SQL capabilities with more flexible alias support, fewer ORDER BY restrictions, more window functions, smarter PARTITION BY, and more.

A notable new feature is the ability to save queries as views, and use them as building blocks for more complex queries. To define a view, you can use the browser tool to save a query, the API, or the newest version of the BigQuery command-line tool (by downloading the Google Cloud SDK).

User-defined metadata

Now you can annotate each dataset, table, and field with descriptions that are displayed within BigQuery. This way people you share your datasets with will have an easier time identifying them.

JSON parsing functions

BigQuery is optimized for structured data: before loading data into BigQuery, you should first define a table with the right columns. This is not always easy, as JSON schemas might be flexible and in constant flux. BigQuery now lets you store JSON encoded objects into string fields, and you can use the JSON_EXTRACT and JSON_EXTRACT_SCALAR functions to easily parse them later using JSONPath-like expressions.

For example:
SELECT json_extract_scalar(
  "{'book': {
    'category': 'fiction',
    'title': 'Harry Potter'}}",
  "$.book.category");


Fast parallel exports

BigQuery is a great place to store all your data and have it ready for instant analysis using SQL queries. But sometimes SQL is not enough, and you might want to analyze your data with external tools. That's why we developed the new fast parallel exports: With this feature, you can define how many workers will be consuming the data, and BigQuery exports the data to multiple files optimized for the available number of workers.
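Here is a minimal sketch (the bucket and table names are hypothetical, using the google-cloud-bigquery Python client): a wildcard in the destination URI is what lets BigQuery shard the export across multiple files that your workers can then consume in parallel.

from google.cloud import bigquery

client = bigquery.Client()
# The '*' makes BigQuery write numbered shards: shard-000000000000.csv, ...
job = client.extract_table(
    "myproject.mydataset.big_table",
    "gs://my-bucket/exports/shard-*.csv",
)
job.result()  # block until the export job completes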

Check the exporting data documentation, or stay tuned for the upcoming Hadoop connector to BigQuery documentation.

Massive price reductions

At Cloud Platform Live, we announced a massive price reduction: storage costs are going down 68%, from 8 cents per gigabyte per month to only 2.6 cents, while querying costs are going down 85%, from 3.5 cents per gigabyte to only 0.5 cents. Previously announced streaming costs are now reduced by 90%. And finally, we announced the ability to purchase reserved processing capacity, for even cheaper prices and the ability to precisely predict costs. And you always have the option to burst using on-demand capacity.

I want to take this space to celebrate the latest open source community contributions to the BigQuery ecosystem. R has its own connector to BigQuery (and a tutorial), as does Python's pandas (check out the video we made with Pearson). Ruby developers are now able to use BigQuery with an ActiveRecord connector, and send all their logs with fluentd. Thanks all, and keep surprising us!

Felipe Hoffa is part of the Cloud Platform Team. He'd love to see the world's data accessible for everyone in BigQuery.

Posted by Louis Gray, Googler
Categories: Programming

How WhatsApp Grew to Nearly 500 Million Users, 11,000 cores, and 70 Million Messages a Second

When we last visited WhatsApp they’d just been acquired by Facebook for $19 billion. We learned about their early architecture, which centered around a maniacal focus on optimizing Erlang into handling 2 million connections a server, working on All The Phones, and making users happy through simplicity.

Two years later traffic has grown 10x. How did WhatsApp make that jump to the next level of scalability?

Rick Reed tells us in a talk he gave at the Erlang Factory: That's 'Billion' with a 'B': Scaling to the next level at WhatsApp (slides), which revealed some eye-popping WhatsApp stats:

What has hundreds of nodes, thousands of cores, hundreds of terabytes of RAM, and hopes to serve the billions of smartphones that will soon be a reality around the globe? The Erlang/FreeBSD-based server infrastructure at WhatsApp. We've faced many challenges in meeting the ever-growing demand for our messaging services, but as we continue to push the envelope on size (>8000 cores) and speed (>70M Erlang messages per second) of our serving system.

What are some of the most notable changes from two years ago?

  • Obviously much bigger in every dimension, except the number of engineers. More boxes, more datacenters, more memory, more users, and more scale problems. Handling this level of growth with so few engineers is what Rick is most proud of: 40 million users per engineer. This is part of the win of the cloud. Their engineers work on their software. The network, hardware, and datacenter are handled by someone else.

  • They’ve gone away from trying to support as many connections per box as possible because of the need to have enough head room to handle the overall increased load on each box. Their general strategy of keeping down management overhead by getting really big boxes and running efficiently on SMP machines, though, remains the same.

  • Transience is nice. With multimedia, pictures, text, voice, video all being part of their architecture now, not having to store all these assets for the long term simplifies the system greatly. The architecture can revolve around throughput, caching, and partitioning.

  • Erlang is its own world. Listening to the talk it became clear how much of everything you do is in the world view of Erlang, which can be quite disorienting. Though in the end it’s a distributed system and all the issues are the same as in any other distributed system.

  • Mnesia, the Erlang database, seemed to be a big source of problems at their scale. It made me wonder if some other database might be more appropriate and if the need to stay within the Erlang family of solutions can be a bit blinding?

  • Lots of problems related to scale as you might imagine. Problems with flapping connections, queues getting so long they delay high priority operations, flapping of timers, code that worked just fine at one traffic level breaking badly at higher traffic levels, high priority messages not getting serviced under high load, operations blocking other operations in unexpected ways, failures causing resources issues, and so on. These things just happen and have to be worked through no matter what system you are using.

  • I remain stunned and amazed at Rick’s ability to track down and fix problems. Quite impressive.

Rick always gives a good talk. He’s very generous with specific details that obviously derive directly from issues experienced in production. Here’s my gloss on his talk…

Stats
Categories: Architecture

What to Do When You Don’t Feel Like Writing and You Have Nothing to Say

Making the Complex Simple - John Sonmez - Mon, 03/31/2014 - 16:00

I spend a lot of time doing two things: blogging and telling other developers the benefits of doing things like starting their own blog. (Occasionally I squeeze in a little bit of time to code as well. And my wife says I spend too much time answering emails and checking my phone—she wanted me to […]

The post What to Do When You Don’t Feel Like Writing and You Have Nothing to Say appeared first on Simple Programmer.

Categories: Programming

Personal Branding

NOOP.NL - Jurgen Appelo - Mon, 03/31/2014 - 10:27

I call myself a Creative Networker. It is the most accurate and succinct description I could think of that wraps all the projects I am involved in. My business cards and personal website explain that I’m a writer, speaker, trainer, entrepreneur, illustrator, manager, blogger, reader, dreamer, leader, and freethinker. (I seem to have forgotten bragger in that list.) If I had to summarize my personal brand with three words I would choose creative, smart, and funny. But I would easily agree that brands are better defined by their observers, not by their owners. My readers might prefer to describe me as weird, blunt, and smug. It is all in the eye of the beholder.

The post Personal Branding appeared first on NOOP.NL.

Categories: Project Management

SPaMCAST 283 – User Stories Pure and Simple

Software Process and Measurement Cast - Sun, 03/30/2014 - 22:00

Listen to the Software Process and Measurement Cast 283. The SPaMCAST 283 features our essay on user stories. A user story is a brief, simple requirement statement from the user's perspective. User stories are narratives describing who is interacting with the application, how they are interacting with the application, and the benefit they derive from that interaction.

If you would like to read the original blog entries that formed the basis of this essay they can be found at:

User Stories
Cards
Epics, Themes and Issues
Problems
Same as Use Cases and Traditional Requirements?

Get in touch with us anytime or leave a comment here on the blog. Help support the SPaMCAST by reviewing and rating it on iTunes. It helps people find the cast. Like us on Facebook while you’re at it.

Next week we will feature our interview with Evan Leybourn author of Directing the Agile Organization. We discussed agile business management. Agile is not just for IT anymore!

Upcoming Events

QAIQuest 2014

I will be facilitating a ½ Day tutorial titled Make Integration and Acceptance Testing Truly Agile. The tutorial will wrestle with the flow of testing in Agile projects and will include lots of practical advice and exercises. Remember that Agile testing is not waterfall done quickly. I will also be around for the conference and look forward to meeting and talking with SPaMCAST readers and listeners. More conference information is available. ALSO I HAVE A DISCOUNT CODE…. Email me at spamcastinfo@gmail.com or call 440.668.5717 for the code.

StarEast

I will be speaking at the StarEast Conference May 4th – 9th in Orlando, Florida.  I will be presenting a talk titled, The Impact of Cognitive Biases on Test and Project Teams. Follow the link for more information on StarEast. ALSO I HAVE A DISCOUNT CODE…. Email me at spamcastinfo@gmail.com or call 440.668.5717 for the code.

I look forward to seeing all SPaMCAST readers and listeners at all of these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting, and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management


Soulver: For all your random calculations

Mark Needham - Sun, 03/30/2014 - 15:48

I often find myself doing random calculations and I used to do so part manually and part using Alfred‘s calculator until Alistair pointed me at Soulver, a desktop/iPhone/iPad app, which is even better.

I thought I’d write some examples of calculations I use it for, partly so I’ll remember the syntax in future!

Calculating how much memory Neo4j memory mapping will take up

800 mb + 2660mb + 6600mb + 9500mb + 40mb in GB = 19.6 GB

How long would it take to cover 20,000 km at 100 km / day?

20,000 km / 100 km/day in months = 6.57097681677241832481 months

How long did an import of some data using the Neo4j shell take?

4550855 ms in minutes = 75.84758333333333333333 minutes

Bit shift 1 by 32 places

1 << 32 = 4,294,967,296

Translating into easier to digest units

32381KB / second in MB per minute = 1,942.86 MB/minute
500,000 / 3 years in per hour = 19.01324310408685857874 per hour^2

How long would it take to process a chunk of data?

100 GB / (32381KB / second in MB per minute)  = 51.47051254336390681778 minutes

Hexadecimal to base 10

0x1111 = 4,369
1 + 16 + 16^2 + 16^3 = 4,369

I’m sure there’s much more that you can do that I haven’t figured out yet but even for these simple examples it saves me a bunch of time.

Categories: Programming

Inca Trail and Process Improvement: A Ritual Purification

Several years ago, I hiked the Inca Trail in Peru. The trek included three-plus days of hiking, climbing, crawling, sweating and an occasional exasperated utterance. Each day presented me with a new set of challenges and tasks to confront and to overcome. In some cases failure was not an option without endangering myself, fellow trekkers, guides or porters (or all of them at once). Each accomplishment brought its own value while moving us closer to the ultimate goal: Machu Picchu. Metaphysically, the process could be viewed as a form of ritual purification. The process presents the supplicant with a series of tasks and hurdles to overcome to bring him or her to a point where the larger goal can be grasped. The journey is an important part of reaching the ultimate goal. Taking the train to Machu Picchu would not deliver the same enlightenment as toiling for days to attain that goal. Both have a value, and one or the other might not be an available option in every circumstance; however, under no circumstances should the value each delivers be confused with being the same.

The parable of the trek can be used in the process improvement arena. Organizations whose sole rationale for the journey is the goal, the approval or certification, can easily be tempted to look for methods to “take the train,” in other words, to jump to the end without the effort between the beginning and the end. “Taking the train” has many faces: buying a set of processes and implementing them blindly, hiring consultants to define, develop and implement process improvement without your organization's intimate participation, or outsourcing to an organization that already has the title or certification you're seeking. These scenarios are not valueless; rather, they provide significantly less value than is derived from the ritual purification of the trek. I'm not suggesting organizations blindly blunder down the SPI path as a learning process: guides, porters and tools can help you make the journey in your own way. Making the journey in your own way provides the greatest possibility of learning, growing and transforming. The guides can provide best practices, templates, knowledge and guidance; however, without making the journey these pieces of knowledge capital will not be easily internalized. It will be easy to view the requirements of change as external pressures to be resisted rather than embraced. Humans and organizations resist change primarily due to fear or complacency. The ritual purification built into the journey is a tool to break the complacency and transform fear into action; to incorporate the journey with all of its trials and tribulations into part of the solution rather than just an obstacle in the way of attaining your process improvement goal.


Categories: Process Management

Get Up And Code 047: Miguel Castro Knows How To Stay Fit

Making the Complex Simple - John Sonmez - Sat, 03/29/2014 - 15:00

I finally got a chance to talk with Miguel Castro about fitness and diet and how he incorporates it in his busy life. Miguel has an awesome story about how he lost quite a bit of fat and replaced it with muscle. Full transcript below: John: Hey everyone, welcome back to the Get Up and CODE […]

The post Get Up And Code 047: Miguel Castro Knows How To Stay Fit appeared first on Simple Programmer.

Categories: Programming

Optimism: The Good and The Bad

It is good to be optimistic, but we still want airbags...


Is optimism a good thing?  Optimistic people live longer and are typically happier than less optimistic people. Michael Scheier and Charles Carver report in Effects of Optimism on Psychological and Physical Well-Being that optimism may lead a person to cope more adaptively with stress. Unfortunately, the personal benefits of optimism do not tend to extend to projects and teams. So why is optimism good for most people while it is a problem for teams?

  • Optimism causes risks to be overlooked.
  • Optimism causes estimates to be missed.
  • Optimism causes plans to be slipped.
  • Optimism causes the need for heroism.

In people, optimism is a great thing, but for project managers optimistic realism (defined as optimism balanced with “REAL” data) is far healthier. Regardless of my plea, most teams or leaders eschew realism and plan optimistically. We are trained to be problem solvers, trained that we can surmount any problem by sheer force of will; therefore we plan optimistically. Planning for optimism only counts when you define it as a plan based on measured performance plus planned innovation. Innovation must be planned because projects cannot count on serendipity to achieve their goals, and neither teams nor organizations can count on lightning striking twice without a plan and process to attract it.

Agile teams of all shapes and flavors need to separate how they view themselves from how they view their role. Personal optimism is great, but professional optimism should be replaced with optimistic realism. Realism is seeing the forest for the trees and planning to make sure you achieve your goals. A self-managing team must see risk, predict the future, and plan for innovation and change to deliver in a timely, accurate and efficient manner. Teams must focus on performance because they are charged with delivering functionality while making a customer happy. It takes optimistic realism to deliver. In short:

  • Optimistic realism causes risks to be identified.
  • Optimistic realism recognizes capacity.
  • Optimistic realism supports making flexible plans.
  • Optimistic realism fosters teams.

Optimism without measurement and data as a balance is unrealistic.


Categories: Process Management

Stuff The Internet Says On Scalability For March 28th, 2014

Hey, it's HighScalability time:


Looks like a multiverse, if you can keep it.
  • Quotable Quotes:
    • @abt_programming: "I am a Unix Creationist. I believe the world was created on January 1, 1970 and as prophesized, will end on January 19, 2038" - @teropa
    • @demisbellot: Cloud prices are hitting attractive price points, one more 40-50% drop and there'd be little reason to go it alone.
    • @scott4arrows: Dentist "Do you floss regularly?" Me "Do you back up your computer data regularly?"
    • @avestal: "I Kickstarted the Oculus Rift, what do I get?" You get a lesson in how capitalism works.
    • @mtabini: “$20 charger that cost $2 to make.” Not pictured here: the $14 you pay for the 10,000 charger iterations that never made it to production.
    • @strlen: "I built the original assembler in JS, because it's what I prefer to use when I need to get down to bare metal." - Adm. Grace Hopper
    • tedchs: I'd like to propose a new rule for Hacker News: only if you have built the thing you're saying someone should save money by building themselves, may you say the person should build that thing.
    • lamby: Bezos predicted they would be good over the long term but said that he didn’t want to repeat “Steve Jobs’s mistake” of pricing the iPhone in a way that was so fantastically profitable that the smartphone market became a magnet for competition.
    • @PariseauTT: I feel a Netflix case study coming...everybody get your drinks ready...#AWSSummit
    • seanmccann: That's no different than startups of the past having to pay thousands to millions of dollars to setup servers. These days those same servers can be setup in minutes for a fraction of the cost. Timing is everything. Sometimes the current market pricing for a commodity is too expensive to make your business viable today but in the future that will not be the case. Just depends how long that will take. 
    • Petyr 'Littlefinger' Baelish: Chaos isn't a pit. Chaos is a ladder. Many who try to climb it fail and never get to try again. The fall breaks them. And some, are given a chance to climb. They refuse, they cling to the realm or the gods or love. Illusions. Only the ladder is real. The climb is all there is.
  • We turn those who first climb the peaks of tall mountains into heroes. But what of the mountain? Mariana Mazzucato, The Entrepreneurial State: Debunking Private vs. Public Sector Myths: Every feature of the iPhone was created, originally, by multi-decade government-funded research. From DARPA came the microchip, the Internet, the micro hard drive, the DRAM cache, and Siri. From the Department of Defense came GPS, cellular technology, signal compression, and parts of the liquid crystal display and multi-touch screen (joining funding from the CIA, the National Science Foundation, and the Department of Energy, which, by the way, developed the lithium-ion battery.) CERN in Europe created the Web. Steve Jobs’ contribution was to integrate all of them beautifully.

  • This is not an April Fools' joke, as the name might make a certain sort of mind consider. WebScaleSQL: MySQL goes web scale with contributions from MySQL engineering teams at Facebook, Google, LinkedIn, and Twitter. It includes lots of good work on the test suite, performance enhancements, and features to make scaling easier. So many forks at the table (MariaDB, Percona, webscale).

  • This is why we can't have nice code. Martin Sústrik argues In the Defense of Spaghetti Code, quite successfully I think, that it's often clearer to have one large 1500 line function than a refactored great big ball of mud. Commenters argue from a perfect-world stance that if you do X from the start and then continue the same practices over the years and across hundreds of programmers, then code can be perfected. Not so. The nature of many domains is that they are just messy. At a certain level of abstraction messiness can be hidden, but when you are in the guts of a thing, messiness is preserved.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

No Discount for Charity

NOOP.NL - Jurgen Appelo - Fri, 03/28/2014 - 16:14

This week I was asked if I offer a discount for charity organizations for my new one-day workshops.

I said, “No”.

I don’t believe in discounts for charity for various reasons. But I’m also sure some people will hate me for saying “No” when I don’t offer a good explanation. “Jurgen is such a bad selfish person! Boo!” Well, yes that’s true. But let me offer you my thoughts.

The post No Discount for Charity appeared first on NOOP.NL.

Categories: Project Management

Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices

Herding Cats - Glen Alleman - Fri, 03/28/2014 - 04:28

There is a popular myth that the estimating problem in software development starts with comparing software development to bridge design and development. This lays the groundwork for the claim that software development is not the same as any other engineering discipline and that estimating can't be done for software projects.

That somehow the software project paradigm is exempt from the Five Immutable Principles that apply in every other project domain. These immutable principles are:

  1. Do we know what DONE looks like in units of measure meaningful to the decision makers? These measures are  effectiveness and performance. The Measures of Effectiveness are operational measures closely related to the achievement of the mission or operational objectives evaluated in an operational environment, under a specific set of conditions. The Measures of Performance characterize the physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
  2. Do we have a PLAN - not a schedule, but a strategy - to reach this done state in a logical sequence of work activities? This sequence  delivers the needed capabilities to support the mission or business case to maximize the ROI for each capability.
  3. Do we have some understanding of the RESOURCES needed to execute this plan? These resources include staff, facilities, money, tools, environments all needed for success.
  4. Have we identified all the uncertainties - reducible and non-reducible - that will create risks to our success?
  5. Have we established some form of measuring progress toward our goal in units of physical percent complete?

If we are building bridges, the answers to these questions are pretty clear. We are bending metal into money in ways well established from the past. For software projects, these questions still have clear and concise answers, although the answers are developed not from blueprints, but from other means.

Economics of writing software for money

If we accept these immutable principles of project success, there are a few other immutable principles for the management of a business that writes software for money - either internal projects, where funding comes from the company, or external projects, where the money comes from a customer.

  1. All value is exchanged for the consumption of time and money.
  2. This value cannot be assessed in the absence of knowing the cost to produce that value.

First let's look at Barry Boehm's seminal work, Software Engineering Economics. Some have suggested this idea is outdated. They would be wrong, especially since that suggestion - being wrong - is not backed by any experience or reference of managing the balance sheet of a for-profit software project. Internal projects, where cost and cost performance are outside the person's management responsibility, don't count.

Show me the money (that you have been held accountable for)

And by accountable I mean you lose your job when things go bad financially.

Now with the advent of agile software development, Barry's concepts from long ago need to be updated for iterative and incremental development. Barry, by the way, instituted the Spiral method in the DOD, replacing the Waterfall method, and recently instituted the evolutionary method, replacing the Spiral method, starting with Section 803 of the National Defense Authorization Act.

Core business processes are still in place, no matter the software development method. Those processes haven't been overturned by agile or any other process. We still exchange money for value, and the rate of that exchange must be a positive ratio:

Value / Cost > 1.0

This ratio must hold if we expect to stay in business for long. Monetizing value is many times difficult up front. Monetizing the work effort may appear difficult to some, but in fact it is not as difficult as it appears. There are many tools, processes, books, training courses, and professional organizations with field-proven solutions to estimating and managing cost in the software development domain.

Back to the Future on Estimating Software Development Cost (and Schedule)

  • The estimates aren't for the developers, although development management is a critical user of the estimates, and the developers are critical contributors to them. The estimates are for the business and the decision processes of the business. Business takes funds and turns them into products and services. Bending metal into money is a common phrase in the hard goods business. Bending software into money works for the software development (for money) business (internal or external).
  • Developers are closest to the work, but they are not likely the best at estimating the work - not because they don't know what needs to be done. If they don't know what needs to be done, they'll produce bad estimates.
  • Decisions on how to spend the company's money start and end - in any profit-focused company - with when will we earn back our investment?
    • What's our profit margin?
    • What's our ROI, IRR, Breakeven day, Analysis of Alternatives?
    • What staffing impacts will result from starting this project?
    • When will this project complete, so I can know when the spend profile and staffing burden will change?
    • What's the expected completion date, so we can transition to the business and then to operations, and know when to change what we are doing now to the new way of operating?
  • Knowing the costs of products or services provided in exchange for money is the basis of any business strategy.
    • People give you money
    • You expend effort (cost)
    • The difference you keep, minus your overhead, fringe, and benefits.
    • If you wait till the end to do this calculation, it's too late to change the course of your efforts.
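As a toy sketch of these calculations (every number here is invented for illustration, not from any actual project), the breakeven and ROI questions reduce to simple arithmetic once an estimated cost and a monetized value exist:

# Hypothetical figures for a project estimate.
monthly_cost = 100_000       # estimated burn rate for the team
monthly_value = 150_000      # estimated value delivered once in production
months_to_release = 6        # estimated duration before value starts flowing

investment = monthly_cost * months_to_release     # 600,000 spent before release
net_per_month = monthly_value - monthly_cost      # 50,000 net per month after release
breakeven = investment / net_per_month            # 12 months after release

horizon = 24                                      # months
roi = (monthly_value * (horizon - months_to_release)
       - monthly_cost * horizon) / (monthly_cost * horizon)
print(f"breakeven {breakeven:.0f} months after release, ROI {roi:.0%} at month {horizon}")

Without the cost estimate, none of these numbers - and none of the business decisions that depend on them - can be produced.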

This approach comes from early in my career, a time when Barry Boehm worked at TRW, along with me and thousands of others. My boss took us young guns aside one day and gave us his standard speech on a payday Friday.

Everyone take out your paycheck. Look in the upper left corner. It says Bank of America and TRW and the branch address. That's NOT where the money comes from to pay you. The money comes from the US Air Force (TDRSS was our program). They pay us to deliver the Statement of Work items for the amount of money we said we would on the date (more or less) we said we would. Keep doing that - within the allowable variances - and you'll keep getting those checks you can take across the street to deposit in your account.

The customer provides money to do the work needed to provide value - in the TDRSS case, providing the internet in space. When the cost of providing the value exceeds the budget for providing the value, we lose money and likely go out of business.

Not knowing when you're headed for trouble on cost, schedule, or technical performance is simply bad business. So having an estimated performance target to steer against is mandatory for business success. It can't be any clearer than that. Without that steering target, your project is open loop management, and you'll be in the ditch before you can steer away from the ditch.

Related articles Facts and Fallacies of Estimating Software Cost and Schedule
Categories: Project Management

A Really Simple Checklist For Change Readiness Assessment

A wrapping paper change barbarian


When you begin a change process it is sometimes important to have a reminder of the critical points required to start the journey. If you were getting ready for vacation, the checklist might include identifying who is going and who is in charge, deciding on a destination, a map, hotel reservations and a list of people to tell that you are going. Beginning a process improvement project is not very different. I have constructed a simple checklist with five of the most critical requirements for preparing to embrace a journey of change. My critical five are:

1. An Identified Goal

2. Proper Sponsorship

3. Sufficient Budget

4. A Communication Strategy and Plan

5. A Tactical Plan

The first item on the checklist is an identified goal. The goal is the definition of where you want to go, the destination in the vacation analogy. A goal provides direction and a filter to understand the potential impact of constraints. Examples of goals can range from something as simple as “reduce the cost of projects” to something as complex as “attain CMMI Maturity Level 5”. The goal also sets the table for discussing all of the other items on the checklist, such as the required budget. One piece of advice: make sure your goal can be concisely and simply stated. In my opinion, simplicity increases the chance the goal will be broadly remembered, which reduces the number of times you will need to explain it and increases the amount of time available for progress.

Proper sponsorship is next on the list. Sponsorship is important because it provides the basis for the authority needed to propel change. There are many different types and levels of sponsorship. The word “proper” is used in this line item to remind you that there is no one type of sponsorship that fits all events and organizational cultures. One flavor of sponsor is “the barbarian.” The barbarian is the type that will lead the charge, but typically is less collaborative and has more of a command-driven personality. Barbarians tend to be viewed as zealots who harness their belief structure to provide single-minded energy toward the goal they are focused on. Having a barbarian as a sponsor can infuse change projects with an enormous amount of power. The bookend to the barbarian type of sponsorship is “the bureaucrat.” Sponsorship from a bureaucrat is very different. Instead of leading the charge, bureaucrats tend to organize and control the charge. They may provide guidance, but they rarely get directly involved in the fray. These two varieties of sponsorship each fit different organizations. In a life-or-death situation I would like to have a barbarian for a sponsor; however, if I were effecting incremental changes in a command-and-control organization, the bureaucrat would make more sense. Remember sponsorship is important because sponsorship gives you access to power.

Budget is next on the checklist. The term budget can cover a wide range of ground, from money to availability of human resources (effort). The budget will answer the question “how much of the organization's formal resources can you apply?” The budget that ends up being identified to support change is always less than what seems to be needed. Use this constraint as a tool to motivate your team to find innovations on the way to attaining the goal rather than a reason to rein in your goal.

The first plan I recommend building is an organizational change management plan (OCMP). The OCMP is a means to frame how your project is going to transform the future state of the organization. An organizational change management plan will integrate the project roles and responsibilities with the requirements for communication, training, oversight, reporting, and the strategies to address resistance and reinforcement activities. We will address concepts of organizational change management in an essay later in 2009. The OCMP is a mixture of a high-level map and a how-to document, and it is critical to ensure you are as focused on how you are changing the organization as on the tasks required to define and implement specific processes.

Finally you will need a tactical plan that lays out the tasks you need to accomplish and the order the tasks need to be done in. The focus and breadth of the tactical plan will differ depending on the project management technique that you use. For example, if you use a time-boxed technique like Scrum, your tactical plan will focus on identifying tasks for the current sprint based on the backlog of items required to reach your goal. Regardless of the planning technique used (a sprint backlog or a detailed project schedule), you must have a tactical plan or risk falling into random activity. Use the technique that conforms to your project's needs and your organization's culture. The bottom line is that you need to understand the activities and the order they occur in to get to your goal.

Change is difficult to accomplish in the best of times and almost impossible if you fail to start properly. The simple checklist for change readiness described here was developed and compiled to help you focus on a set of topics that need to be considered when beginning any process improvement project. Are there other areas that should be on the list? Can each topic area be decomposed into finer levels of granularity? I believe the answer is certainly yes, and I would urge you to augment and decompose the list, and further, to share your results. In any case, a checklist that focuses you on getting your sponsorship, goals, budget and plans in order can only help you start well.


Categories: Process Management

"I'll Send You the Deck"

Software Architecture Zen - Pete Cripp - Fri, 03/28/2014 - 00:02
Warning, this is a rant!

I’m sure we’ve all been here. You’re in a meeting or on a conference call or just having a conversation with a colleague discussing some interesting idea or proposal which he or she has previous experience of and at some point they issue the immortal words “I’ll send you the deck”.

Read more...
Categories: Architecture

Two New Videos About Testing at Google

Google Testing Blog - Thu, 03/27/2014 - 22:41
by Anthony Vallone

We have two excellent, new videos to share about testing at Google. If you are curious about the work that our Test Engineers (TEs) and Software Engineers in Test (SETs) do, you’ll find both of these videos very interesting.

The Life at Google team produced a video series called Do Cool Things That Matter. This series includes a video from an SET and TE on the Maps team (Sean Jordan and Yvette Nameth) discussing their work on the Google Maps team.

Meet Yvette and Sean from the Google Maps Test Team



The Google Students team hosted a Hangouts On Air event with several Google SETs (Diego Salas, Karin Lundberg, Jonathan Velasquez, Chaitali Narla, and Dave Chen) discussing the SET role.

Software Engineers in Test at Google - Covering your (Code)Bases



Interested in joining the ranks of TEs or SETs at Google? Search for Google test jobs.

Categories: Testing & QA

Optimal Logging

Google Testing Blog - Thu, 03/27/2014 - 22:41
by Anthony Vallone

How long does it take to find the root cause of a failure in your system? Five minutes? Five days? If you answered close to five minutes, it’s very likely that your production system and tests have great logging. All too often, seemingly unessential features like logging, exception handling, and (dare I say it) testing are an implementation afterthought. Like exception handling and testing, you really need to have a strategy for logging in both your systems and your tests. Never underestimate the power of logging. With optimal logging, you can even eliminate the necessity for debuggers. Below are some guidelines that have been useful to me over the years.


Channeling Goldilocks

Never log too much. Massive, disk-quota burning logs are a clear indicator that little thought was put into logging. If you log too much, you’ll need to devise complex approaches to minimize disk access, maintain log history, archive large quantities of data, and query these large sets of data. More importantly, you’ll make it very difficult to find valuable information in all the chatter.

The only thing worse than logging too much is logging too little. There are normally two main goals of logging: help with bug investigation and event confirmation. If your log can’t explain the cause of a bug or whether a certain transaction took place, you are logging too little.

Good things to log:
  • Important startup configuration
  • Errors
  • Warnings
  • Changes to persistent data
  • Requests and responses between major system components
  • Significant state changes
  • User interactions
  • Calls with a known risk of failure
  • Waits on conditions that could take measurable time to satisfy
  • Periodic progress during long-running tasks
  • Significant branch points of logic and conditions that led to the branch
  • Summaries of processing steps or events from high level functions - Avoid logging every step of a complex process in low-level functions.

Bad things to log:
  • Function entry - Don’t log a function entry unless it is significant or logged at the debug level.
  • Data within a loop - Avoid logging from many iterations of a loop. It is OK to log from iterations of small loops or to log periodically from large loops.
  • Content of large messages or files - Truncate or summarize the data in some way that will be useful to debugging.
  • Benign errors - Errors that are not really errors can confuse the log reader. This sometimes happens when exception handling is part of successful execution flow.
  • Repetitive errors - Do not repetitively log the same or similar error. This can quickly fill a log and hide the actual cause. Frequency of error types is best handled by monitoring. Logs only need to capture detail for some of those errors.


There is More Than One Level

Don't log everything at the same log level. Most logging libraries offer several log levels, and you can enable certain levels at system startup. This provides a convenient control for log verbosity.

The classic levels are:
  • Debug - verbose and only useful while developing and/or debugging.
  • Info - the most popular level.
  • Warning - strange or unexpected states that are acceptable.
  • Error - something went wrong, but the process can recover.
  • Critical - the process cannot recover, and it will shut down or restart.

Practically speaking, only two log configurations are needed (a sketch follows the list):
  • Production - Every level is enabled except debug. If something goes wrong in production, the logs should reveal the cause.
  • Development & Debug - While developing new code or trying to reproduce a production issue, enable all levels.
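
With Python's standard logging module, the two configurations reduce to choosing the level at startup. A minimal sketch, assuming a DEBUG_MODE environment flag of my own invention:

```python
import logging
import os

# Development & debug: everything. Production: everything except debug.
level = logging.DEBUG if os.environ.get("DEBUG_MODE") else logging.INFO
logging.basicConfig(
    level=level,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger("server")
logger.debug("request parsed: path=/health")   # visible only in development
logger.info("server listening on port 8080")
logger.warning("config file missing; using defaults")
logger.error("upstream call failed; will retry")
logger.critical("out of disk space; shutting down")
```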


Test Logs Are Important Too

Log quality is equally important in test and production code. When a test fails, the log should clearly show whether the failure was a problem with the test or production system. If it doesn't, then test logging is broken.

Test logs should always contain (see the example after this list):
  • Test execution environment
  • Initial state
  • Setup steps
  • Test case steps
  • Interactions with the system
  • Expected results
  • Actual results
  • Teardown steps
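
For example, a test can log each of these phases explicitly. A minimal unittest sketch; the system under test (apply_discount) is a stand-in:

```python
import logging
import unittest

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("test.checkout")

def apply_discount(total, percent):
    # Stand-in for the production system under test.
    return round(total * (1 - percent / 100.0), 2)

class DiscountTest(unittest.TestCase):
    def setUp(self):
        logger.info("setup: seeding cart with total=100.00")
        self.total = 100.00

    def test_ten_percent_discount(self):
        logger.info("step: applying a 10%% discount to %.2f", self.total)
        actual = apply_discount(self.total, 10)
        expected = 90.00
        logger.info("expected=%.2f actual=%.2f", expected, actual)
        self.assertEqual(expected, actual)

    def tearDown(self):
        logger.info("teardown: clearing cart")

if __name__ == "__main__":
    unittest.main()
```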


Conditional Verbosity With Temporary Log Queues

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.
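
Here is a minimal Python sketch of the technique; the LogQueue class and its methods are my own, not a library API:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("txn")

class LogQueue:
    """Buffer verbose detail in memory; emit it only on failure."""
    def __init__(self):
        self.records = []

    def add(self, msg, *args):
        self.records.append(msg % args)

def process_transaction(txn_id, steps):
    queue = LogQueue()
    try:
        for step in steps:
            queue.add("txn %s: %s starting", txn_id, step.__name__)
            step()  # may raise
            queue.add("txn %s: %s done", txn_id, step.__name__)
    except Exception:
        # Failure: dump the buffered detail, then the error itself.
        for line in queue.records:
            logger.error(line)
        logger.exception("txn %s failed", txn_id)
        raise
    # Success: discard the detail and log only a summary.
    logger.info("txn %s completed (%d steps)", txn_id, len(steps))
```

On success only the summary line appears; on failure the full step-by-step detail is replayed alongside the error.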


Failures and Flakiness Are Opportunities

When production problems occur, you’ll obviously be focused on finding and correcting the problem, but you should also think about the logs. If you have a hard time determining the cause of an error, it's a great opportunity to improve your logging. Before fixing the problem, fix your logging so that the logs clearly show the cause. If this problem ever happens again, it’ll be much easier to identify.

If you cannot reproduce the problem, or you have a flaky test, enhance the logs so that the problem can be tracked down when it happens again.

Use failures as opportunities to improve logging throughout the development process. While writing new code, try to refrain from using debuggers and rely only on the logs. Do the logs describe what is going on? If not, your logging is insufficient.


Might As Well Log Performance Data

Logged timing data can help debug performance issues. For example, it can be very difficult to determine the cause of a timeout in a large system unless you can trace the time spent on every significant processing step. This can be easily accomplished by logging the start and finish times of calls that can take measurable time (a timing sketch follows the list):
  • Significant system calls
  • Network requests
  • CPU intensive operations
  • Connected device interactions
  • Transactions
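
One lightweight way to capture this in Python is a context manager that logs start and finish times around any such call. A sketch; the name timed is mine:

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("perf")

@contextmanager
def timed(label):
    start = time.monotonic()
    logger.info("%s: started", label)
    try:
        yield
    finally:
        elapsed = time.monotonic() - start
        logger.info("%s: finished in %.3fs", label, elapsed)

# Usage around anything that can take measurable time:
with timed("fetch user profile"):
    time.sleep(0.1)  # stand-in for a network request
```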


Following the Trail Through Many Threads and Processes

You should create unique identifiers for transactions that involve processing across many threads and/or processes. The initiator of the transaction should create the ID, and it should be passed to every component that performs work for the transaction. This ID should be logged by each component when logging information about the transaction. This makes it much easier to trace a specific transaction when many transactions are being processed concurrently.
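
A minimal Python sketch of the idea, using a LoggerAdapter to stamp every record with the ID; the component names are illustrative:

```python
import logging
import uuid

logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s [txn %(txn)s] %(name)s: %(message)s",
)

def txn_logger(component, txn_id):
    # Every record from this component carries the transaction ID.
    return logging.LoggerAdapter(logging.getLogger(component), {"txn": txn_id})

# The initiator mints the ID once and passes it along with the work.
txn_id = uuid.uuid4().hex[:8]

frontend = txn_logger("frontend", txn_id)
backend = txn_logger("backend", txn_id)

frontend.info("received request")
backend.info("wrote 3 rows for the request")
```

Grepping the logs for one transaction ID then yields the full cross-component trail, even under heavy concurrency.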


Monitoring and Logging Complement Each Other

A production service should have both logging and monitoring. Monitoring provides a real-time statistical summary of the system state. It can alert you when a percentage of certain request types fails, traffic patterns become unusual, performance degrades, or other anomalies occur. In some cases, this information alone will clue you in to the cause of a problem. However, in most cases, a monitoring alert is simply a trigger for you to start an investigation. Monitoring shows the symptoms of problems; logs provide details and state on individual transactions, so you can fully understand the cause.

Categories: Testing & QA

The Google Test and Development Environment - Pt. 1: Office and Equipment

Google Testing Blog - Thu, 03/27/2014 - 22:40
by Anthony Vallone

When conducting interviews, I often get questions about our workspace and engineering environment. What IDEs do you use? What programming languages are most common? What kind of tools do you have for testing? What does the workspace look like?

Google is a company that is constantly pushing to improve itself. Just like software development itself, most environment improvements happen via a bottom-up approach. All engineers are responsible for fine-tuning, experimenting with, and improving our process, with a goal of eliminating barriers to creating products that amaze.

Office space and engineering equipment can have a considerable impact on productivity. I’ll focus on these areas of our work environment in this first article of a series on the topic.

Office layout

Google is a highly collaborative workplace, so the open floor plan suits our engineering process. Project teams composed of Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs) all sit near each other or in large rooms together. The test-focused engineers are involved in every step of the development process, so it’s critical for them to sit with the product developers. This keeps the lines of communication open.

[Photo: Google Munich office]
The office space is far from rigid, and teams often rearrange desks to suit their preferences. The facilities team recently finished renovating a new floor in the New York City office, and after a day of engineering debates on optimal arrangements and whiteboard diagrams, the floor was completely transformed.

Besides the main office areas, there are lounge areas to which Googlers go for a change of scenery or a little peace and quiet. If you are trying to avoid becoming a casualty of The Great Foam Dart War, lounges are a great place to hide.

[Photo: Google Dublin office]
Working with remote teams

Google’s worldwide headquarters is in Mountain View, CA, but it’s a very global company, and our project teams are often distributed across multiple sites. To help keep teams well connected, most of our conference rooms have video conferencing equipment. We make frequent use of this equipment for team meetings, presentations, and quick chats.

[Photo: Google Boston office]
What’s at your desk?

All engineers get high-end machines and have easy access to data center machines for running large tasks. A new member on my team recently mentioned that his Google machine has 16 times the memory of the machine at his previous company.

Most Google code runs on Linux, so the majority of development is done on Linux workstations. However, those who work on client code for Windows, OS X, or mobile develop on the relevant OSes. For displays, each engineer has a choice of either two 24-inch monitors or one 30-inch monitor. We also get our choice of laptop, picking from various models of Chromebook, MacBook, or Linux laptop. These come in handy when going to meetings, lounges, or working remotely.

[Photo: Google Zurich office]
Thoughts?

We are interested to hear your thoughts on this topic. Do you prefer an open-office layout, cubicles, or private offices? Should test teams be embedded with development teams, or should they operate separately? Do the benefits of offering engineers high-end equipment outweigh the costs?

(Continue to part 2)
Categories: Testing & QA