Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Feed aggregator

Former Softie Patricia Walsh Sets a World Record for Blind Triathletes

I’m always a fan of hearing about how Softies change the world, inside and outside of Microsoft.

I was reading Blind Ambition: How to Envision Your Limitless Potential and Achieve the Success You Want by Patricia Walsh.  It’s an inspirational story, as well as an insightful read if you are looking for ways to up your game or get the edge in work and life.

I wrote 10 Big Ideas from Blind Ambition to share some of the highlights from the book.

Walsh is a former Softie.  More than that, she has raced in marathons, ultra-marathons and IRONMAN triathlons.  In 2011, Walsh set a new world record for blind triathletes, shattering the previous male and female records by over 50 minutes.

Pretty impressive.

She left Microsoft to start her own business, pursue her speaking career, and train as a world-class athlete.

She set a high bar.

But she also set a great example.  Walsh wanted to help light the way for others to show them that they can be limitless if they set goals, put in the work, and don’t let fear or failures hold them back. 

And most importantly, don’t put limits on yourself, and don’t fall into the trap of the limits that others put on you.

Categories: Architecture, Programming

All The Books I Read This Year

Making the Complex Simple - John Sonmez - Mon, 12/08/2014 - 16:00

I read or listen to the audio version of a lot of books each year. It’s really important to always be learning and trying to expand your mind with new ideas. Since we are getting close to the end of the year, I thought I’d do a post listing all the books I read this year and give a few thoughts ... Read More

The post All The Books I Read This Year appeared first on Simple Programmer.

Categories: Programming

The End of Common-off-the-Shelf Software

Xebia Blog - Mon, 12/08/2014 - 08:12

Large Common-off-the-Shelf Software (COTS for short) packages are difficult to implement and integrate. Buying a large software package is not a good idea. Below I will explain how Agile methods and services on lightweight containers help implement minimal, focused solutions.

Given the standard [hardware | OS | app server | business logic | user interface] software stack, COTS packages include some of the app server, all of the business logic and the full user interface. Examples are packages for sales support, financial management or marketing. Large and unwieldy beasts that thrash around on your IT infrastructure, needing herds of specialists to keep them going, and insisting that you install Java 1.6 and Oracle 10 on Red Hat 4.2 with IE 8.0, on the biggest, meanest server money can buy.

It probably all started with honorable intentions: buy over reuse over build appears to make perfect sense if you don’t look too closely. I even agree, though we might disagree on one important aspect, and that would be scale.
In the old waterfall days we were used to writing an architecture document and making an inventory of business needs. Because people quickly learned that they rarely got more than one opportunity to ask for what they needed, they tended to ask for a lot, cramming in as many features as they could think of. At some point in the decision process everyone realized they might as well buy something really versatile: a large software package that matches all requirements now and in the future.

All is well.

Until the next business need pops up and the same reasoning (fix all specs up front, one shot to get it right, might as well ask a little extra, won’t hurt) leads to another package that has some overlap with the first, but not too much, so that’s OK. Then the need arises to synchronize data (because of the slight overlap between the packages) and an ESB is implemented (because you might as well buy a software package, right?).

Now there are two stovepipes in your landscape glued together with a SPOF, and things are not well any more. Changing anything means coordinating the effort of multiple teams. Testing and integration become the task of a large team; no team has ‘works in production’ in its definition of done. ‘Works on my machine’ is the best you may hope for, and somebody else will fix all the integration problems. Oh, and the people who use this software switch between badly designed screens running in a bunch of yesteryear’s browsers.

How can modern software development wisdom and architecture help?

Two trends allow us to avoid stovepipes connected by super glue: microservices hosted on lightweight containers, and Agile methods.

Microservices on lightweight containers like Docker, or frameworks like Dropwizard or Spring Boot, are the end of the application server that served us so well last decade. If you can scale your application by starting a new process on a fresh VM, you don’t need complex software to share resources. That means you don’t really need a lot of infrastructure, and you can deploy small components with negligible overhead. Key-value data stores allow you to relax constraints on data that were imposed by relational databases. A service might support two versions of an interface at the same time. Combined with REST, DNS and a load balancer, this is the end of ESBs.

Agile promotes stable teams and budgets that are allocated to a team instead of a project. This means we don’t really have to do budget calculations anymore. Because we can change direction every sprint, there is no need to ask for the world like we did in the waterfall days. That implies that we should create the smallest thing that could possibly solve the problem, instead of buying the biggest beast that will solve all our problems and some we don’t even have.

This doesn’t mean we shouldn’t buy software anymore. What I would love to see is vendors focusing on small specialized components: a highly specialized service using state-of-the-art algorithms to assess credit risk, or a component that knows all about registering and monitoring customer service calls. That would be awesome. But no user interface, thanks; we’ll be happy to create that ourselves, grabbing the data with an HTTP call and presenting it exactly as it’s needed.
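
To make that last idea concrete, here is a minimal sketch of what consuming such a focused service could look like, in R since that is the language used elsewhere in this digest. The endpoint, the riskCategory field and the service itself are hypothetical illustrations, not a real vendor API:

library(httr)     # HTTP client
library(jsonlite) # JSON parsing
 
# Fetch an assessment from a hypothetical specialized credit-risk service
response = GET("http://risk.example.com/customers/42/credit-score")
assessment = fromJSON(content(response, as = "text"))
 
# Present the data exactly as needed, in a UI we build ourselves
assessment$riskCategory

No vendor UI and no ESB: just data over HTTP, rendered however the business needs it.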

SPaMCAST 319 – Requirements, Communications, Fixing IT

Software Process and Measurement Cast - Mon, 12/08/2014 - 00:34

SPaMCAST 319 includes three segments! The first segment is our essay, Why Are Requirements So Hard To Get Right?  Many of the problems with requirements boil down to people, and while people are not the only factor driving the quality of requirements, they are a critical factor.  Pay attention to how people are being deployed, provide support and instruction, and make darn sure the right people are in the right place at the right time.

The second segment marks the debut of Jo Ann Sweeney’s new column, Explaining Change.  Jo Ann’s first installment tackles the need to define the impact you expect communication activities to make – knowledge, attitudes, action.  Visit Jo Ann’s website at http://www.sweeneycomms.com/ and let her know what you think of her new column.

The third segment features a new entry of Gene Hughson’s column: Form Follows Function.  In this installment, Gene talks about his blog entry, Fixing IT – Credible or Cassandra? Gene points out that credibility is a precious commodity that, if squandered, is difficult to recover even when you are correct!

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next.  We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast.  Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog.  Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th.  Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

In the next Software Process and Measurement Cast we will feature our interview with Alfonso Bucero. We discussed his book, Today Is A Good Day. Attitude is an important tool for a project manager, team member or executive.  In his book Alfonso provides a plan for honing your attitude.

Upcoming Events

DCG Webinars:

Agile Risk Management - It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

R: String to Date or NA

Mark Needham - Sun, 12/07/2014 - 20:29

I’ve been trying to clean up a CSV file which contains some rows with dates and some without – I only want to keep the cells which do have dates, so I’ve been trying to work out how to do that.

My first thought was that I’d try and find a function which would convert the contents of the cell into a date if it was in date format and NA if not. I could then filter out the NA values using the is.na function.

I started out with the as.Date function…

> as.Date("2014-01-01")
[1] "2014-01-01"
 
> as.Date("foo")
Error in charToDate(x) : 
  character string is not in a standard unambiguous format

…but that throws an error if we have a non-date value, so it’s not so useful in this case.
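
As an aside, one workaround – a small sketch, and the helper name is mine rather than something from base R – is to wrap as.Date in tryCatch so the error becomes an NA:

> parseDateOrNA = function(x) tryCatch(as.Date(x), error = function(e) as.Date(NA))
 
> parseDateOrNA("2014-01-01")
[1] "2014-01-01"
 
> parseDateOrNA("foo")
[1] NA

That works, but it is more ceremony than we need.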

Instead we can make use of the strptime function which does exactly what we want:

> strptime("2014-01-01", "%Y-%m-%d")
[1] "2014-01-01 GMT"
 
> strptime("foo", "%Y-%m-%d")
[1] NA

We can then feed those values into is.na…

> strptime("2014-01-01", "%Y-%m-%d") %>% is.na()
[1] FALSE
 
> strptime("foo", "%Y-%m-%d") %>% is.na()
[1] TRUE

…and we have exactly the behaviour we were looking for.
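
Putting it all together – a minimal sketch, assuming a hypothetical data frame df with a character date column – the clean-up then becomes a single filter:

library(dplyr)
 
df = data.frame(date = c("2014-01-01", "foo", "2014-02-03"), stringsAsFactors = FALSE)
df %>% filter(!is.na(strptime(date, "%Y-%m-%d")))
##         date
## 1 2014-01-01
## 2 2014-02-03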

Categories: Programming

Re-read Saturday: Creating the Guiding Coalition, Leading Change, John P. Kotter Chapter Four

John P. Kotter’s Leading Change established, in its first two chapters, why change in organizations can fail and the forces that shape successful change. The two sets of opposing forces he identifies are used to define his famous eight-stage model for change. The first step of the model is establishing a sense of urgency. A sense of urgency provides the energy and rationale for any large, long-term change program. Once a sense of urgency has been established, the second step is the establishment of a guiding coalition. If a sense of urgency provides the energy to drive change, a guiding coalition provides the power to make change happen.

Kotter defines a guiding coalition as a team built on trust and a common goal. The team must reflect the proper balance of four key attributes in order to be effective:

  1. Position Power – Are the members of the coalition in the right organizational positions to support and foster progress? Are they in the right place in the organization to overcome those outside the coalition that could disrupt progress? For example, not including line management in the guiding coalition for a change that affects them is typically dangerous. Not involving the leadership of the organization most directly affected will generate active resistance or passive-aggressive behavior. In a software development organization, not including development managers in the guiding coalition for planning and implementing changes that would impact how they staff a project would generate resistance; it would be easy for them to view the change they are being asked to make as being forced on them from the outside.
  2. Expertise – Do the members of the team have the relevant skills and knowledge needed to make the decisions required to make the right change happen? When considering expertise, diversity must also be considered: relevant diversity helps a team take a broader perspective when making decisions.
  3. Credibility – Does the broader organization (especially the areas impacted by the change) trust and believe in the reputations of the members, so that decisions will be accepted? Without credibility, the coalition’s decisions and messaging around the change will not be taken seriously.
  4. Leadership – The guiding coalition has to have enough leadership to drive the change. While there is no single precise definition of leadership, the core attribute of all definitions is the ability to influence a group of people to achieve a goal.

The guiding coalition needs to reflect a combination of all of these qualities. For example, a guiding coalition without leadership will either tend to wander aimlessly or not have enough influence to get the organization to follow it. Kotter drives home the point that a well-balanced team is needed by comparing changes driven by the lone, powerful champion and the underpowered committee. The lone powerful champion can work in scenarios where diversity of thought is not critical or the required rate of change is slow. Given the pace of change and level of complexity most development organizations face on a day-to-day basis, these scenarios are outside the norm today. Many organizations appoint committees to lead and champion change that are merely groups of managers who meet to facilitate status sharing, or that don’t have the power needed to generate change and influence the organization.

A few years ago I was asked to observe a project steering committee for a CMMI implementation. The implementation touched every aspect of the development group in the organization (including support and enhancements). The steering committee was comprised of proxies for the leaders of all impacted departments. Each person on the committee had their own agenda, ensuring that the committee was a group rather than a team. Also, because they were proxies for other leaders, there were very few decisions they were empowered to make, because they had very little positional authority. The steering committee had a hard time steering anything. A guiding coalition needs to be a team that is focused on a single goal in order to effectively provide the structure needed to harness and guide the energy unleashed by a common, well-understood sense of urgency.


Categories: Process Management

Agile Risk Management: Untangling The Wires

Risk management is like sorting out the cords.

As we have noted, the difference between the classic and Agile approaches to risk management boils down to a few serious dichotomies. The first is that classic methods tend to be project-manager driven, while Agile processes involve and hold the entire team responsible. Second, Agile risk identification and management is built on the continuous re-planning process that is intrinsic to Agile, rather than being event/review driven (e.g. phase gates or defined review cycles), which is more the norm in classic project management. All projects need to expend time and effort on dealing with risk, regardless of the processes used to identify, monitor and manage risks. That time and effort means there will be less time to address the functionality requested by the product owner and product management. This leads project personnel to try to balance the effort they spend on the risk management process and to focus only on the risks that really matter.

Lean risk management processes, such as the one we have described, focus on minimizing the effort needed to identify and manage risks by integrating risk management into other processes. Examples include using the definition of done to mitigate risk and the product backlog to document risks as user stories. Classic risk management requires specialized meetings and deliverables that increase the perceived overhead; building risk management into day-to-day activities instead has the secondary benefit of creating team-level involvement, which is useful for reinforcing the risk management process.

One of the common features of mature risk management processes (whether they reflect waterfall or Agile methods) is risk prioritization. We have explored several mechanisms to evaluate the probability that a risk will become an issue and the potential impact of that issue (Agile and Risk Management: Prioritization Techniques, Part 1 and Agile and Risk Management: Prioritization and Measurement Techniques, Part 2). In all cases the goal of the process is to consistently prioritize risks so that teams and managers spend their time on the risks that really matter. I recently chatted with a project “risk manager” while waiting for a table at a restaurant. He suggested that he often sees projects without formal prioritization techniques spending precious time and effort worrying about risks that they can’t influence, or that have a nearly zero chance of happening but sound scary. Every erg of energy and every minute spent on risks that are not relevant is waste, and provides fodder for those that see risk management as a waste of space.

One common complaint about risk management is that we can never anticipate everything; there are unknown unknowns. The conclusion some practitioners draw is to abandon planning and just stay vigilant: the argument is that the effort for risk management is not worth the return. This approach might work; however, I have not seen it work on any sizable project. Coupling a lean risk management process with Agile methods maximizes the value from risk management. This morning while listening to The Gist podcast, Mike Pesca (the host) stated that worrying about the future is an important survival mechanism. True, but that survival mechanism doesn’t require a 100-page risk register that no one will ever look at to be effective.


Categories: Process Management

Stuff The Internet Says On Scalability For December 5th, 2014

Hey, it's HighScalability time:


InfoSec Taylor Swift is wise...haters gonna hate.

 

  • 6 billion+: Foursquare checkins; 25000: allocs for every keystroke in Chrome's Omnibox
  • Quotable Quotes:
    • @wattersjames: Pretty convinced that more value is created by networks of products in today's world than 'stacks'--'stack' model seems outdated.
    • @ChrisLove: WOW 70% of http://WalMart.com  traffic over the holidays was from a 'mobile' device! #webperf #webdevelopment #html5
    • @Nick_Craver: No compelling reason - we can run all of #stackexchange on one R610 server (4yr old) @ 5% CPU. #redis is incredibly efficient.
    • @jehiah: The ticker on http://bitly.com  rolled past 20 BILLION Bitlinks today. Made possible by reliable mysql clusters + NSQ.
    • @tonydenyer: “micro services how to turn your ball of mud into a distributed ball of mud” #xpdaylon
    • @moonpolysoft: containers are the new nosql b/c all are dimly aware of a payoff somewhere, and are more than willing to slice each other up to get there.
    • @shipilev: Shipilev's First Law of Performance Issues: "It is almost always something very simple and embarrassing to admit once you found it"
    • Gérard Berry: We discovered that this whole hypothesis of being infinitely fast both simplified programming and helped us to be faster than others, which was fun.
    • @randybias: OH: “The religion of technology is featurism.”  [ brilliant observation ]
    • @rolandkuhn: ACID 2.0: associative commutative idempotent distributed #reactconf @PatHelland
    • @techmilind: @adrianco @wattersjames @cloudpundit Intra-region latency of <2ms is the killer feature. That's what makes Aurora possible.
    • @timreid: async involves a higher level of initial essential complexity, but a greatly reduced level of continual accidental complexity #reactconf
    • @capotribu: Docker Machine + Swarm + (proposed) Compose = multi-containers apps on clusters in 1 command #DockerCon
    • @dthume: "Some people say playing with thread affinity is for wimps. Those people don't make money" - @mjpt777 at #reactconf
    • @jamesurquhart: Reading a bunch of apparently smart people remain blind to the lessons of complexity. #rootcauseismerelyaclue
    • Facebook: the rate of our machine-to-machine traffic growth remains exponential, and the volume has been doubling at an interval of less than a year.

  • In the US we tend to be practical mobile users instead of personal and social fun users. Innovation is happening elsewhere as is clearly shown in Dan Grover's epic exploration of Chinese Mobile App UI Trends: using voice instead of text for messaging; QR codes for everything; indeterminate badges to indicate there's something interesting to look at; a discover menu item that contains "changing menagerie of fun"; lots of app stores; using phone numbers for login, even on websites; QR code logins; chat as the universal UI; more damn stickers; each app has a wallet; use of location in ways those in the US might find creepy; tight integration with offline consumption; common usage of the assistive touch feature on the iPhone; cutesy mascots in loading and error screens; pollution widgets; full ad splash screen when an app starts; theming of apps. 

  • Awesome analysis. A really deep dive with great graphics on Facebook's new network architecture. Facebook Fabric Networking Deconstructed: the new Facebook Network Fabric is in fact a Fat-Tree with 3-levels.

  • Just a tiny thing. AMD, Numascale, and Supermicro Deploy Large Shared Memory System: The Numascale system, installed over the past two weeks, consists of 5184 CPU cores and 20.7 TBytes of shared memory, housed in 108 Supermicro 1U servers connected in a 3D torus with NumaConnect, using three cabinets with 36 servers apiece in a 6x6x3 topology. Each server has 48 cores across three AMD Opteron 6386 CPUs and 192 GB of memory, providing a single system image and 20.7 TB to all 5184 cores.

  • Quit asking why something happened. The question that must be answered is how. The Infinite Hows (or, the Dangers Of The Five Whys)

  • What do customers want? Answers. Greg Ferro talks about how a company that hired engineers to answer sales inquiries doubled its sales. All people wanted were their questions answered; once answered, they would place an order. No complex, time-wasting sales cycle required. Technology has replaced the information-gathering part of the sales cycle. Customers already know everything about a product before making contact. Now what they need are answers.

  • Docker Networking is as simple as a reverse 4 and a half somersault piked from a 3 metre board into a black hole.

  • How is that Google Cloud Platform thingy working out? Pretty well says Aerospike. 1M Aerospike reads/sec for just $11.44/hour, on 50 nodes, with  linear scalability for 100% reads and 100% writes.

  • Your bright human mind can solve a maze. So what? Fatty acid chemistry can too: This study demonstrates that the Marangoni flow in a channel network can solve maze problems such as exploring and visualizing the shortest path and finding all possible solutions in a parallel fashion. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Building Software Before Requirements Are Stable??

Herding Cats - Glen Alleman - Fri, 12/05/2014 - 14:47

There is a popular notion that requirements stability is difficult to achieve because customers don't know what they want. Ignoring for the moment that requirements instability is the root cause of Death March projects, let's pretend requirements stability is in fact hard to come by.

If you have a project without requirements stability, the architecture of the underlying system becomes paramount, since the components of the system are guaranteed to change.

What is software architecture?

Software application architecture is the process of defining a structured solution that meets all of the technical and operational requirements, while optimizing common quality attributes such as performance, security, and manageability. It involves a series of decisions based on a wide range of factors, and each of these decisions can have considerable impact on the quality, performance, maintainability, and overall success of the application.

This means that the underlying architecture must

  • Be based on a reference design, so the structure has already been shown to work
  • The transaction model must be separated from the data model and the display model
  • There must be strict, some would say ruthless, separation of concerns, for example:

  • Data warehouse paradigm, where the data for the transactions and the resulting display and interaction is completely isolated
  • A federated architecture, where systems are integrated through a data bus or some other form of isolation
  • The referential integrity of the data must be ruthlessly maintained and enforced

Without these, the result of low requirements fidelity is the ball of mud or shanty town architecture. The Big Ball of Mud paradigm is:

A BIG BALL OF MUD is a casually, even haphazardly, structured system. Its organization, if one can call it that, is dictated more by expediency than design. Yet, its enduring popularity cannot merely be indicative of a general disregard for architecture.

This means there is 

  • Throwaway code – we'll fix this now and worry about the outcomes later
  • Piecemeal code that is installed to fix the current problems, only to create future problems.
  • Keep it working - we're on a deadline, we need to get this fixed and move on.

And of course the biggest lie of all for any non-trivial system

Our architecture emerges as the requirements become known

This approach is the basis of the Shanty Town or Ball of Mud architecture of the modern hacked-together systems we've all come in contact with.

Related articles: Software Estimating for Non-Trivial Projects
Categories: Project Management

GTAC 2014 Wrap-up

Google Testing Blog - Fri, 12/05/2014 - 01:26
by Anthony Vallone on behalf of the GTAC Committee

On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.


Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.

All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.


This was our most popular GTAC to date, with over 1,500 applicants and almost 200 of those for speaking. About 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers with 4,700 playbacks during the event. And, there was plenty of interesting Twitter and Google+ activity during the event.


Our goal in hosting GTAC is to make the conference highly relevant and useful not only for attendees, but for the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal:



If you have any suggestions on how we can improve, please comment on this post.

Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.

Categories: Testing & QA

Agile Risk Management: Recognizing Risks

You can’t capture risk with your camera. You need to have a conversation with a diverse group of stakeholders.

At a recent Q&A session held after a presentation, I was asked: where could a person get their project risks? I stifled a smart-alecky answer that would have included driving to the grocery store, and decided that the question really being asked was: how do I go about recognizing and capturing risks? Perhaps a more boring question, but far more important. If I answered the first question, the answer would have been that risks are generated by the interaction of the project with other projects, applications, the business, technology and the world (the risk categories) – pretty much the existence of a project could be considered a risk magnet. The answer to the second question is that once you have a risk magnet (a project), you will need to ask as many different people as is feasible to recognize the possible risks. The discussion of risk is always appropriate; however, the typical meetings/events and the types of people to include in the conversation need to be planned. The discovery process typically follows the requirements/user story discovery process outlined below.

  1. Carve out time when you are developing the backlog and ask as diverse a group as possible to identify the potential problems that could get in the way of delivering the value promised by the project. Prompt the group to consider business, technical, operational and organizational factors. Diversity is incredibly important to inject different perspectives, so that the team does not fall prey to only seeing the risks they expect (a form of cognitive bias).
  2. Form a small team (consider the Three Amigos) to interview stakeholders that were not part of the planning exercise. Explain the project and use the same category prompts to generate a risk discussion.
  3. Gather risk data through surveys when the program stakeholders are geographically diverse. (Note: I have only seen this used well in very large programs with professional market research staffs.)
  4. Interview customers or potential customers. Customer interviews are not generally used as a standalone risk discovery tool, but primarily as a tool to gather requirements/user stories; however, piggybacking a few questions to solicit potential risks is a useful way to add diversity of thought to risk identification.
  5. Periodically ask about risks, either as an agenda item or as a follow-on to standard meetings. For example, I have seen teams successfully add a five-minute follow-on to the last daily stand-up of the week in order to consider risks. A quick risk recognition session can easily be added to the other standard meetings many projects have. Other standard Scrum meetings that can be used to identify risks include demonstrations, retrospectives and sprint planning. Each of these meetings provides a different perspective on the project and the team, and therefore could expose other potential risks.

The baseline answer to the question of how to recognize and capture risks is: by involving all of the project's stakeholders in a discussion of potential risks. The process of collaborative discussion will help increase diversity of thought, reducing (but NOT eliminating) the number of potential unknowns – unknowns that could impact the project's ability to deliver value.


Categories: Process Management

Defining Proper Success Metrics on Business Objectives Models

Software Requirements Blog - Seilevel.com - Thu, 12/04/2014 - 17:00
The Business Objective Model (BOM) is one of the foundational models we use as part of the Seilevel requirements methodology. The BOM defines the rationale for doing a project. Every BOM has the following key component parts. 1. Problems – the business problems to be solved or addressed 2. Objectives – the targeted objectives or […]
Categories: Requirements

Is Programming the Same as Software Engineering?

Herding Cats - Glen Alleman - Thu, 12/04/2014 - 16:50

When we hear “I'm a developer,” is that the same as “I'm a software engineer”? What does it mean to be a software engineer versus a developer of software? Peter Denning's review of the book A Whole New Engineer in the December edition of Communications of the ACM speaks to this question.

There are several important ideas here. One example in the review was from a 1980s study at the research institute at NASA Ames Research Center, where computer scientists were brought together with NASA scientists on big problems in space and aeronautics. Our scientists pioneered in applying supercomputers instead of wind tunnels to the design of full aircraft, conducting science operations from great distances over a network, and studying neural networks that could automate tasks that depend on human memory and experience. But there was a breakdown: our NASA customers frequently complained that our engineers and scientists failed to make their deliverables. 

This was a major issue, since the research funding for the institute came mainly from our individual contracts with NASA managers. Failure to make deliverables was a recipe for non-renewal and loss of jobs. NASA managers said, “your work is of no value to us without the deliverables we agreed to,” and our scientists responded, “sorry, we can’t schedule breakthroughs.” This disconnect seemed to be rooted in the gap between what engineering and science schools teach and the realities of customer expectations in the workplace.

This extract from the review brings up a smaller issue: the notion that we can't possibly estimate when we'll be done or what it will cost. And the popular notion that...

Estimates are difficult. When requirements are vague — and it seems that they always are — then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.

The rest of the book review speaks to the new engineer gap and how it is being closed, with these principles (I've included the ones most interesting to me personally):

  • Become competent at engineering practices and technologies.
  • Learn to be a designer—someone who can propose combinations of existing components and technologies to take care of real concerns.
Related articles: Assessing Value Produced By Investments; Software Estimating for Non-Trivial Projects
Categories: Project Management

How Do I Learn C++?

Making the Complex Simple - John Sonmez - Thu, 12/04/2014 - 16:00

In this video, I give some tips on how to learn C++.

The post How Do I Learn C++? appeared first on Simple Programmer.

Categories: Programming

Sky Force 2014 Reimagined for Android TV

Android Developers Blog - Thu, 12/04/2014 - 09:33

By Jamil Moledina, Games Strategic Partnerships Lead, Google Play

In the coming months, we’ll be seeing more media players, like the recently released Nexus Player, and TVs from partners with Android TV built-in hit the market. While there’s plenty of information available about the technical aspects of adapting your app or game to Android TV, it’s also useful to consider design changes to optimize for the living room. That way you can provide lasting engagement for existing fans as well as new players discovering your game in this new setting. Here are three things one developer did, and how you can do them too.

Infinite Dreams is an indie studio out of Poland, co-founded by hardcore game fans Tomasz Kostrzewski and Marek Wyszyński. With Sky Force 2014 TV, they brought their hit arcade-style game to Android TV in a particularly clever way. The mobile-based version of Sky Force 2014 reimagined the 2004 classic by introducing stunning 3D visuals, and a free-to-download business model using in-app purchasing and competitive tournaments to increase engagement. In bringing Sky Force 2014 to TV, they found ways to factor in the play style, play sessions, and real-world social context of the living room, while paying homage to the title’s classic arcade heritage. As Wyszyński puts it, “We decided not to take any shortcuts, we wanted to make the game feel like it was designed to be played on TV.”

Orientation

For starters, Sky Force 2014 is played vertically on a smartphone or tablet, also known as portrait mode. In the game, you’re piloting a powerful fighter plane flying up the screen over a scrolling landscape, targeting waves of steampunk enemies coming down at you. You can see far enough up the screen, enabling you to plan your attacks and dodge enemies in advance.

Vertical play on the mobile version

When bringing the game to TV, the quickest approach would have been to preserve that vertical orientation of the gameplay, by pillarboxing the field of play.

With Sky Force 2014, Infinite Dreams considered their options, and decided to scale the gameplay horizontally, in landscape mode, and recompose the view and combat elements. You’re still aiming up the screen, but the world below and the enemies coming at you are filling out a much wider field of view. They also completely reworked the UI to be comfortably operated with a gamepad or simple remote. From Wyszyński’s point of view, “We really didn't want to just add support for remote and gamepad on top of what we had because we felt it would not work very well.” This approach gives the play experience a much more immersive field of view, putting you right there in the middle of the action. More information on designing for landscape orientation can be found here.

Multiplayer

Like all mobile game developers building for the TV, Infinite Dreams had to figure out how to adapt touch input onto a controller. Sky Force 2014 TV accepts both remote control and gamepad controller input. Both are well-tuned, and fighter handling is natural and responsive, but Infinite Dreams didn’t stop there. They took the opportunity to add cooperative multiplayer functionality to take advantage of the wider field of view from a TV. In this way, they not only scaled the visuals of the game to the living room, but also factored in that it’s a living room where people play together. Given the extended lateral patterns of advancing enemies, multiplayer strategies emerge, like “divide and conquer,” or “I got your back” for players of different skill levels. More information about adding controller support to your Android game can be found here, handling controller actions here, and mapping each player’s paired controllers here.

Players battle side by side in the Android TV version

Business Model

Infinite Dreams is also experimenting with monetization and extending play session length. The TV version replaces several $1.99 in-app purchases and timers with a try-before-you-buy model which charges $4.99 after playing the first 2 levels for free. We’ve seen this single purchase model prove successful with other arcade action games like Mediocre’s Smash Hit for smartphones and tablets, in which the purchase unlocks checkpoint saves. We’re also seeing strong arcade action games like Vector Unit’s Beach Buggy Racing and Ubisoft’s Hungry Shark Evolution retain their existing in-app purchase models for Android TV. More information on setting up your games for these varied business models can be found here. We’ll be tracking and sharing these variations in business models on Android TV, including variations in premium, as the Android TV platform grows.

Reflecting on the work involved in making these changes, Wyszyński says, “From a technical point of view the process was not really so difficult – it took us about a month of work to incorporate all of the features and we are very happy with the results.” Take a moment to check out Sky Force 2014 TV on a Nexus Player and the other games in the Android TV collection on Google Play, most of which made no design changes and still play well on a TV. Consider your own starting point, take a look at the Android TV section on our developer blog, and build the version of your game that would be most satisfying to players on the couch.

Categories: Programming

R: Applying a function to every row of a data frame

Mark Needham - Thu, 12/04/2014 - 07:31

In my continued exploration of London’s meetups I wanted to calculate the distance from meetup venues to a centre point in London.

I’ve created a gist containing the coordinates of some of the venues that host NoSQL meetups in London town if you want to follow along:

library(dplyr)
 
# https://gist.github.com/mneedham/7e926a213bf76febf5ed
venues = read.csv("/tmp/venues.csv")
 
venues %>% head()
##                        venue      lat       lon
## 1              Skills Matter 51.52482 -0.099109
## 2                   Skinkers 51.50492 -0.083870
## 3          Theodore Bullfrog 51.50878 -0.123749
## 4 The Skills Matter eXchange 51.52452 -0.099231
## 5               The Guardian 51.53373 -0.122340
## 6            White Bear Yard 51.52227 -0.109804

Now to do the calculation. I’ve chosen the Centre Point building in Tottenham Court Road as our centre point. The distHaversine function in the geosphere library allows us to do the calculation:

options("scipen"=100, "digits"=4)
library(geosphere)
 
centre = c(-0.129581, 51.516578)
aVenue = venues %>% slice(1)
aVenue
##           venue   lat      lon
## 1 Skills Matter 51.52 -0.09911

Now we can calculate the distance from Skills Matter to our centre point:

distHaversine(c(aVenue$lon, aVenue$lat), centre)
## [1] 2302
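
For the curious, distHaversine implements the standard haversine great-circle formula. With φ for latitude, λ for longitude, Δφ and Δλ the differences between the two points, and r the Earth’s radius:

d = 2r · asin( √( sin²(Δφ/2) + cos(φ1) · cos(φ2) · sin²(Δλ/2) ) )

If I read the geosphere documentation correctly, r defaults to 6378137 metres, which is why the result above comes back in metres.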

That works pretty well so now we want to apply it to every row in the venues data frame and add an extra column containing that value.

This was my first attempt…

venues %>% mutate(distHaversine(c(lon,lat),centre))
## Error in .pointsToMatrix(p1): Wrong length for a vector, should be 2

…which didn’t work quite as I’d imagined!

I eventually found my way to the by function which allows you to ‘apply a function to a data frame split by factors’. In this case I wouldn’t be grouping rows by a factor – I’d apply the function to each row separately.

I wired everything up like so:

distanceFromCentre = by(venues, 1:nrow(venues), function(row) { distHaversine(c(row$lon, row$lat), centre)  })
 
distanceFromCentre %>% head()
## 1:nrow(venues)
##      1      2      3      4      5      6 
## 2301.6 3422.6  957.5 2280.6 1974.1 1509.5

We can now add the distances to our venues data frame:

venuesWithCentre = venues %>% 
  mutate(distanceFromCentre = by(venues, 1:nrow(venues), function(row) { distHaversine(c(row$lon, row$lat), centre)  }))
 
venuesWithCentre %>% head()
##                        venue   lat      lon distanceFromCentre
## 1              Skills Matter 51.52 -0.09911             2301.6
## 2                   Skinkers 51.50 -0.08387             3422.6
## 3          Theodore Bullfrog 51.51 -0.12375              957.5
## 4 The Skills Matter eXchange 51.52 -0.09923             2280.6
## 5               The Guardian 51.53 -0.12234             1974.1
## 6            White Bear Yard 51.52 -0.10980             1509.5

Et voila!
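
As a footnote, base R’s mapply can do the same row-wise application without by – a sketch, not benchmarked against the approach above:

venuesWithCentre = venues %>% 
  mutate(distanceFromCentre = mapply(
    function(lon, lat) distHaversine(c(lon, lat), centre),  # one venue at a time
    lon, lat))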

Categories: Programming

Agile Risk Management: Why Isn’t Risk Management Always Practiced?

Lava is not a risk category that is commonly encountered in software development.

I have recently been asking software development practitioners (including those doing enhancements and maintenance) why they think risk management is not consistently practiced, or not practiced well. The group includes methodologists, developers, project and program managers, testers and test managers, operations personnel and executives. Very few indicated that they thought risk management was top of mind for any project, other than on an occasional event basis, or as one executive stated, “when a project is asked publicly about risks.” The most common reasons why risk management is a second-class citizen include:

  1. There are too many other things to do directly related to delivering functionality. When asked why some project managers are perceived not to see the value in risk management, one respondent answered, “They can’t when they’re working 12-hour days to keep the critical path moving.” This is a fairly common point of view. First, when you’re trying to do a twelve-hour job in an eight-hour day you will cut corners (add in multitasking and a disaster looms). Building risk management into the flow of work, so that it isn’t a separate process with a separate risk plan and risk register, should reduce the overhead generated when risk is an add-on to day-to-day project activities. A lean risk management process will incorporate risk activities into team activities, but what really needs to be addressed to mitigate the problem is the growing insistence on 50–60 hour workweeks.
  2. Risk management processes are driven by a need for an external certification. Many if not most of the people I talked with had a risk process, and if pressed could find a list of risks. However, those processes and deliverables were developed for auditors and appraisers. In many cases external frameworks and an organization’s expectations and policies (specifically for risk) have not been clearly linked to project outcomes. This is a common problem when teams are early in the adoption of Agile. Coaching is generally required to help teams develop the understanding that they will be able to deliver more value when they spend some time considering risks that could impact their ability to deliver. Coaches need to spend the time needed to help the team, organization or project manager see the linkage between avoiding real issues and delivering value.
  3. Common risks are continually identified and nothing is done about them. Continually identifying environmental or long-term organizational cultural issues that can’t be solved by the team or the IT department is debilitating. In many cases what is being identified are issues that need to be mitigated or planned around. The example used above of project managers working 12-hour days to keep projects on track is not a risk, but an issue. Everyone involved needs to help address the problems the behavior will cause, unless the organizational culture can be changed – and that is not something totally up to a team. Techniques such as sharing roles (team-level self-organization) can be helpful to mitigate these types of issues. This is similar to retrospectives that continually focus on issues that are not solvable (for example, a team that wants to work from home, but corporate policy forbids it for security reasons). Over time a team will begin to feel helpless and will reject the process. This, very simply, is a training and coaching problem.

One person noted that since risk management was not directly mentioned in the original book on Scrum, it must not be very important. I believe the comment was meant sarcastically; however, it does reflect the perception that risk management is only an issue in large projects or when project or program managers are involved. Risks and issues plague every human endeavor, large or small, and must be addressed for effective and efficient delivery. Overburdened, multitasking teams need to be addressed as a rule. Incorporate a lean risk process into standard Agile or waterfall processes. Risks are just a different kind of user story or requirement that can be addressed in the common flow of work.


Categories: Process Management