
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Bad Management Behaviours

Herding Cats - Glen Alleman - Sun, 04/23/2017 - 23:19

I belong to an invitation-only blog for military people. A recent post there, by Aleksandr Noudelman of the York Region District School Board and D'Youville College, was about leadership.

This post resonated with me because I often hear simple and simple-minded solutions proposed for symptoms of bad management. Of course, these solutions don't address the root cause, so the problems never get fixed.

Here are seven key behaviors that can be found in a weak leader:

  1. Their team routinely suffers from burnout – Being driven and ambitious are important traits for successful leaders. However, if you are excessively working your people or churning through staff, then you aren't effectively using your resources. You may take pride in your productivity, in doing more with less. However, today's success may undermine long-term health. Crisis management can become a way of life that reduces morale and drives away or diminishes the effectiveness of dedicated people. In any business, there are times when you have to burn the midnight oil, but those times should be accompanied by time for your team to recharge and refuel.
  2. They lack emotional intelligence – Weak leaders are always envious of other people's successes and are happy when other people fail. They see themselves in fundamental competition with other executives and even with their subordinates. Such envy is a root cause of the turf wars, backbiting, and dirty politics that can make any workplace an unhealthy one.
  3. They don't provide adequate direction – Failing to provide adequate direction frustrates employees and hinders their chances of completing tasks correctly and succeeding. Poor leaders might not tell employees when a project is due, or might suddenly move the deadline up without regard for the employee doing the work. Project details can also be vague, making it difficult for staff to guess which factors the leader considers important. If a project involves participation from more than one employee, a poor leader may choose not to explain who is responsible for which part. Good leaders provide adequate direction and are always there to provide descriptive feedback when it is needed.
  4. They find blame in everyone but themselves – Weak leaders blame everyone else for their mistakes and for any mishaps that happen to them and their division or company. Every time they suffer a defeat or a setback, a subordinate is given a talking-to or, worse, the ax. Great leaders don't do this; they stay positive no matter what the circumstances are. They are accountable for the results and accept full responsibility for the outcomes.
  5. They don't provide honest feedback – It is very difficult for weak leaders to deliver honest messages or constructive feedback to their subordinates. When they have to say something negative to someone, it's always someone else, usually a superior, who has told them to do it. By that time it is too late; the leader hasn't really identified the problem before it reached a climax. They also make it a point to let the individual know that it's not their idea. Good leaders speak from the heart and provide honest feedback that is backed up by facts. They never wait for superiors to identify problems for them.
  6. They're blind to the current situation – Because weak leaders are egocentric and believe that their way is the only way, their followers are afraid to suggest anything new. Those who follow such leaders offer only praise or good news. Such appreciation merely boosts the leader's status and ego, and the leader is left clueless about the current situation and the changing trends in the marketplace.
  7. They're self-serving – If a leader doesn't understand the concept of "service above self," they will not retain the trust, confidence, and loyalty of their subordinates. Any leader is only as good as their team's hope to be led by them. Too much ego, pride, and arrogance are not signs of good leadership. Long story short: if a leader receives a vote of non-confidence from their subordinates, the leader is a weak one.

The leaders mentioned in the original post are military leaders, but the same behaviours can be found in our commercial leaders.

Related articles: Root Cause Analysis; Architecture-Centered ERP Systems in the Manufacturing Domain; IT Risk Management
Categories: Project Management

SPaMCAST 439 – Alex Yakyma, It's Time to Think


http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 439 features Alex Yakyma. Our discussion focused on the industry's broken mindset that prevents it from being Lean and Agile. A powerful and possibly controversial interview.

Alex’s Bio

Alex Yakyma brings unique, extensive, and field-based experience to the topic of implementing Lean and Agile at scale. Throughout his career, he has served as an engineering and program manager in multi-cultural, highly-distributed environments. As a methodologist, trainer and consultant, he has led numerous rollouts of Lean and Agile at scale, involving teams in North America, Europe and Asia, and has trained over a thousand coaches and change agents whose key role is to help their organizations achieve higher productivity and quality through the adoption of scalable, agile methods.

Alex is a founder of Org Mindset (http://orgmindset.com), a company whose mission is to help enterprises grow Lean-Agile mentality and build organizational habits in support of exploration and fast delivery of customer value.

Re-Read Saturday News

Chapter 2 of Holacracy tackles, first, why the consolidation of authority is harmful to an organization's ability to be nimble, agile (small a), and productive, and second, why the distribution of authority supports an organization's ability to scale. The argument in Chapter 2 is a central tenet of Holacracy.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

A Call To Action

I need your help. I have observed that most podcasts and conference speakers over-represent people from Europe and North America. I would like to work on changing that. I would like to develop a feature highlighting alternate software development voices, beginning with Africa and Southeast Asia. If this feature works, we will extend it to other areas. If you can introduce me to practitioners who would be willing to share their observations in short interviews, I would be appreciative!

Next SPaMCAST

The next Software Process and Measurement Cast will be a big show!  SPaMCAST 440 will feature our essay on two storytelling techniques: premortems and business obituaries.  We will also have columns from Jeremy Berriault, Jon Quigley, and Steve Tendon.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing, has received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.

 


Categories: Process Management


Eight Characteristics of Successful Software Projects

Xebia Blog - Sun, 04/23/2017 - 09:21

We do a lot of software projects at Xebia Software Development. We work most of the time at our client's location, in their teams. Together we improve the quality of their software, their process, and engineering culture. As such, we've seen a lot of projects play out. Most of these efforts succeeded but some failed. […]

The post Eight Characteristics of Successful Software Projects appeared first on Xebia Blog.

Holacracy: Re-read Week 3, Chapter 2 Distributing Authority


Holacracy

This week, we tackle Chapter 2 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson, published by Henry Holt and Company in 2015. Chapter 2 tackles, first, why the consolidation of authority is harmful to an organization's ability to be nimble, agile (small a), and productive, and second, why the distribution of authority supports an organization's ability to scale. The argument in Chapter 2 is a central tenet of Holacracy.

 

Chapter 2: Distributing Authority

Concentrating decision making in any single spot isn't agile (small a) or scalable. Holacracy distributes decision-making as close to where the work is done as possible. Robertson uses two metaphors from outside the typical workaday world to illustrate these points early in Chapter 2. The first analogy centers on why cities become more productive and innovative as they grow, while organizations become less productive and innovative as they grow. The conclusion is that work needs to emulate some of the attributes of cities, in particular how people are given a set of laws but, within those constraints, don't have to wait for bosses or bureaucracy to make decisions and get things done. The second analogy provides a hint on how to make complex systems work. The human body is a distributed system. The system has overall rules, but each part runs separately, with minimal input, within those boundaries. No single top-down decision-making process can keep a system as complex as the human body running.

The metaphors illustrate that organic systems avoid top-down decision-making processes in practice. There is significant evidence that even when strict top-down processes exist on paper, the real process is far different. This unwritten, unidentified decision-making process has its own problems: it makes it difficult to know who is accountable, and it generates friction. In other scenarios, consensus decision making is used as a substitute. Consensus decision making requires everyone's input and is often slow, inefficient, and unscalable.

Holacracy addresses the problems in the typical organizational design by distributing the power to make decisions to the process defined by the written constitution introduced in Chapter 1 (we will tie roles to the process later). Using the city metaphor, the constitution represents the laws and ordinances (or, if you want a sporting metaphor, consider the constitution the rulebook). The constitution is the primary tool that distributes authority to the process, and it trumps everyone, including the CEO and those who wrote the constitution. Power is ceded to the constitution. The constitution breaks the inferred parent-child relationships hierarchies are built upon. Breaking the parent-child relationship creates an environment in which a team can self-organize to address the work they are presented with. Determining how to change the organization to foster self-organization has been the single issue haunting the Agile movement since the term was coined. By distributing decision-making into the process, Holacracy provides people in the organization with the power to respond to issues locally, within their domain, without having to get everyone else to buy in. Empowerment!

Static decision-making structures, whether distributed or not, rarely work well for long. There needs to be a governance structure to provide the basis from which we can learn and improve the process. Governance is an overused term with many definitions. For this book, governance is defined as both who makes decisions and the limits under which those decisions are made. Robertson refines that definition by making a distinction between governance and operations. Governance is about how we work, while operations are about getting work done. A governance structure provides a decision-making process that allows the organization to change how it will work and make decisions based on the needs of the actual work. Governance that is concentrated at the top of the hierarchy requires one or a select few people to make all of the decisions. I have a number of friends who run "small" businesses (including my spouse); to a person, they all had to learn that they need to spend less time working "in" the business so they can work "on" the business. When that lesson hits home, they learn how to distribute decision making.

Putting the concepts in Chapter 2 together: governance and the rules generated to implement the governance model define how the organization is structured. That structure has only one goal: to bring about the organization's purpose. Operations delivers on the purpose.

Transformation Thoughts: Changing an organization to be more able to react quickly to change, or to speed knowledge work, requires pushing decision making down. Distributing decision-making authority often means changing the organization's structure. This can rarely be done piecemeal. Implementing Agile techniques in an effort to increase the flow of value or increase speed to market, without addressing distributed decision-making, reduces the transformation's effectiveness.

Team Coaching Thought:  All teams have a decision-making structure. Deciding on a governance model for the team that allows decision making to be distributed is as important as it is in larger organizations.   

Remember to buy a copy of Holacracy (use the link in the show notes to help support and defray the costs of the Software Process and Measurement Cast blog and podcast).

Previous Entries in the re-read:

Week 1:  Logistics and Introduction

Week 2: Evolving Organization


Categories: Process Management

Being an Agile Security Officer: user stories

Xebia Blog - Sat, 04/22/2017 - 14:28

This is the fourth part of my 'Being an Agile Security Officer' series. In this blog post I will go deeper into the details of how user stories are created and what role security stakeholders should play in that. The Epic: Within Agile, work is usually defined in user stories. These are minimal and defined […]

The post Being an Agile Security Officer: user stories appeared first on Xebia Blog.

Stuff The Internet Says On Scalability For April 21st, 2017

Hey, it's HighScalability time:

 

Which do you see: Machines freeing people? Lost jobs? Slavery? Hyperactive Skittles?
If you like this sort of Stuff then please support me on Patreon.
  • year 1899: “Nobody has to use the Internet”; 12 MPH: speed news of Lincoln's assassination traveled the US; $200 million: Lyft tips; 500: data structures and algorithms interview questions; 0.00244140625%: odds of 13 straight male Dr. Who regens; 100: gigafactories could power the world; 100K: bots on Messenger; 1 million: containers Netflix launched in one week; 5.2 trillion: 2014 US revenue; 52,129: iterations to converge on NFL schedule; 36 Gbps: Facebook's network in the sky;

  • Quotable Quotes:
    • @mipsytipsy: "That doesn't sound hard. I could build that in a weekend."
    • @Noahpinion: The Elon Musk Future is the good future. The Peter Thiel Future is the bad future. But honestly you'll probably get the Jeff Bezos Future.
    • @BenedictEvans: In 2007 Google, Apple, Facebook & Amazon had maybe 50k staff between them. Today it's more like 400k.
    • @AWSonAir: @Expedia inserting 70,000 rows per second of hotel data with Amazon Aurora.
    • @swardley: STOP! If you're thinking of moving to cloud today (as in IaaS), you are so late that you need to consider moving to serverless ->
    • David Rosenthal: Silicon Valley would not exist but for Ph.D.s leaving research to create products in industry.
    • @cmeik: Distributed applications today treat the database like shared memory, and that's why we love things like Spanner.  This is a flawed design.
    • @Jason: Apple's cash hoard swells to five Teslas / four Ubers / 25 Twitters
Categories: Architecture

Cheating and building secure iOS games

Xebia Blog - Fri, 04/21/2017 - 07:53

You probably have one of the million games where you earn achievements and unlock specials on your iPad or iPhone. If you develop games, you've probably wondered about people cheating at your games. In this blog we're going to show you how to try out cheating yourself and how to build secure iOS games. The actual question […]

The post Cheating and building secure iOS games appeared first on Xebia Blog.

Using field masks with update requests to Google APIs

Google Code Blog - Fri, 04/21/2017 - 04:00
Originally posted on the G Suite Developers Blog
Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

We recently demonstrated how to use field masks to limit the amount of data that comes back via response payloads from read (GET) calls to Google APIs. Today, we'll focus on a different use case for field masks: update requests.
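For reference, here is a minimal sketch of that read-side use case with the google-api-python-client library. The credential setup and SPREADSHEET_ID are placeholder assumptions, not part of the original post:

    from googleapiclient.discovery import build

    # Assumes `creds` holds valid OAuth2 credentials with a Sheets scope
    # (setup omitted); SPREADSHEET_ID stands in for a real spreadsheet ID.
    service = build('sheets', 'v4', credentials=creds)

    # Partial response on a read call: return only each sheet's title
    # rather than the full spreadsheet resource.
    result = service.spreadsheets().get(
        spreadsheetId=SPREADSHEET_ID,
        fields='sheets/properties/title',
    ).execute()

    for sheet in result.get('sheets', []):
        print(sheet['properties']['title'])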

In this scenario, field masks serve a different, but similar, purpose: they still filter, but they function more like bitmasks by controlling which API fields to update. The following video walks through several examples of update field mask usage with both the Google Sheets and Slides APIs. Check it out.


In the sample JSON payload below, note the request to set the cells’ bold attribute to true (per the cell directive below), then notice that the field mask (fields) practically mirrors the request:

{
  "repeatCell": {
    "range": {
      "endRowIndex": 1
    },
    "cell": {
      "userEnteredFormat": {
        "textFormat": {
          "bold": true
        }
      }
    },
    "fields": "userEnteredFormat/textFormat/bold"
  }
}

Now, you might think, "is that redundant?" Above, we highlighted that it takes two parts: 1) the request provides the data for the desired changes, and 2) the field mask states what should be updated, such as the userEnteredFormat/textFormat/bold attribute for all the cells in the first row. To more clearly illustrate this, let's add something else to the mask like italics so that it has both bold and italic fields:

        "fields": "userEnteredFormat/textFormat(bold,italic)"
However, while both elements are in the field mask, we've only provided update data for bold; no value for the italic setting is specified in the request body. In this case, italics will be reset for all cells in the range: if the cells were originally italicized, the italics will be removed after this API request completes, while cells that were not italicized to begin with will stay that way. This feature gives developers the ability to undo or reset prior settings on the affected range of cells. Check out the video for more examples and tips for using field masks in update requests.
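To send the repeatCell request above, the payload goes in a requests list passed to spreadsheets.batchUpdate. A rough sketch, reusing the service object and placeholder spreadsheet ID from the earlier snippet:

    body = {
        'requests': [{
            'repeatCell': {
                'range': {'endRowIndex': 1},  # all cells in the first row
                'cell': {
                    'userEnteredFormat': {
                        'textFormat': {'bold': True}
                    }
                },
                # The field mask: only bold is updated; sibling fields
                # not named in the mask are left untouched.
                'fields': 'userEnteredFormat/textFormat/bold',
            }
        }]
    }

    service.spreadsheets().batchUpdate(
        spreadsheetId=SPREADSHEET_ID, body=body).execute()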

To learn more about using field masks for partial response in API payloads, check out this video and the first post in this two-part series. For one of the most comprehensive write-ups on both (read and update) use cases, see the guide in the Google Slides API documentation. Happy field-masking!
Categories: Programming

Capability Teams – One Solution for Dynamic Teams

Not Exactly A Capability Team But Close!

One of the holy grails of Agile in software development and other business scenarios is how to organize so that stable teams are efficient, effective, and safe. The great preponderance of organizations use some variant of an organizational model that groups people by specialty and then allocates them to project teams. This creates a matrix in which any practitioner will be part of two or more teams, which, in turn, means they have two or more managers and serve two or more masters. People, like desks, chairs, and laptops, flow to the area of need, disband, and then return to a waiting pool until needed again. One of the basic assumptions is that, within some limits, people are fungible and can be exchanged with relative impunity. This approach has problems. Ariana Racz, Senior Quality Assurance Analyst, provided a great summary of what is wrong with the idea that people are fungible in her response to Get Rid of Dynamic Teams: The Teams. Ariana stated, "A resource on paper is not a resource in application." In most circumstances, dynamic/matrixed teams reduce the productivity of knowledge workers.

An operational solution to using a dynamic/matrixed approach to build and manage a team (we will return to where dynamic teams make sense later) is to adopt the idea of a capability (also known as a functionality) team. The version of capability teams I espouse was influenced by an article written by Karl Scotland (What is Capability; Karl also appeared on SPaMCAST 174 to discuss Kanban Thinking).

A capability team is the group of roles needed to address and deliver a set of technical or business outcomes. For example, to script, record, edit, post, and promote the weekly Software Process and Measurement Cast podcast there are five primary roles. These roles are played by a fixed team of three people. This team has been together on this project for approximately 11 years with only a few minor tweaks in membership. Each person in this small team has a specialty but can perform or support the roles of other members. The podcast capability team can take the podcast from idea to delivery.

There are several important attributes of a capability team:

  1. The team needs to include people who can perform all of the roles needed to deliver functionality from idea to production. A SaaS application capability team would include the roles of architect, business analysis, configuration, coding, testing, security, and release management. Many roles might be played by a single person, or several people might play a single role, depending on the work being addressed by the group.
  2. The team reports to a single manager. The capability team represents the most important work team each member belongs to.
  3. Membership in the team changes slowly, based on the evolving context of the business (barring externalities such as people leaving the team). Every team will change over time; however, in capability teams, change happens through evolution. When the team needs new knowledge, the team members find a way to acquire that knowledge and increase the capability of the team.
  4. A backlog of related, valuable work is available for the team to draw work from. This generally requires an organization to take a product view rather than a project view of work. A product view recognizes that there is a need for enhancements, changes, and support continuously across the entire lifecycle of a product. This flow does not easily conform to the arbitrary start and end dates that are part of the definition of projects.
  5. Capability teams dissolve or are repurposed when they are no longer needed. No team is forever. When the value of the backlog of work they are serving is less than the cost of the team, they need to dissolve or find something else to do.

Capability teams are not a new invention; they are the norm in some industries. My brother Nick builds custom homes in Baton Rouge, Louisiana. Nick employs several capability teams. For example, he has two crews that roof houses. Each crew has people to perform all of the roles needed to successfully roof the houses Nick designs and builds. Team members rarely change, and Nick says that it is impressive to watch them work, as they seem to be able to anticipate the needs of other team members. Just as impressive is the safety record of long-lived teams. Capability teams represent an approach to one of the holy grails of increased efficiency and effectiveness promised by Agile: long-lived stable teams.

Next: Implementing Capability Teams


Categories: Process Management

App onboarding for kids: how Budge Studios creates a more engaging experience for families

Android Developers Blog - Thu, 04/20/2017 - 17:26
Posted by Josh Solt (Partner Developer Manager, Kids Apps at Google Play) and Noemie Dupuy (Founder & Co-CEO at Budge Studios)

Developers spend a considerable amount of resources driving users to download their apps, but what happens next is often the most critical part of the user journey. User onboarding is especially nuanced in the kids space since developers must consider two audiences: parents and children. When done correctly, a compelling onboarding experience will meet the needs of both parents and kids while also accounting for unique considerations, such as a child's attention span.

Budge Studios has successfully grown their catalog of children's titles by making onboarding a focal point of their business. Their target demographic is three- to eight-year-olds, and their portfolio of games includes top titles featuring Strawberry Shortcake, Hello Kitty, Crayola, Caillou, and The Smurfs.

"First impressions matter, as do users' first experience with your app. In fact, 70%1 of users who delete an app will do so within a day of having downloaded it, leaving little time for second chances. As an expert in kids' content, Budge tapped into our knowledge of kids to improve and optimize the onboarding experience, leading to increased initial game-loop completion and retention." - Noemie, Founder & Co-CEO at Budge Studios
Three key ways Budge Studios designs better onboarding experiences:
1. Make sure your game is tailor-made for kids

When Budge released their app Crayola Colorful Creatures, they looked at data to identify opportunities to create a smoother onboarding flow for kids. At launch, only 25% of first-time users were completing the initial game loop. Budge analyzed data against gameplay and realized the last activity was causing a drastic drop-off. It required kids to use the device's microphone, and that proved too challenging for very young kids. Budge was able to adjust the initial game loop so that all the activities were accessible to the youngest players. These adjustments almost tripled the initial loop completion, resulting in 74% of first-time users progressing to see additional activities.

2. Earn parents' trust by providing real value upfront

Budge has a large portfolio of apps. Earning parents' trust by providing valuable and engaging experiences for kids is important for retaining users in their ecosystem and achieving long-term success.

With every new app, Budge identifies what content is playable for free, and what content must be purchased. Early on, Budge greatly limited the amount of free content they offered, but over time has realized providing high quality free content enhances the first-time user experience. Parents are more willing to spend on an app if their child has shown a real interest in a title.

Working with top kids' brands means that Budge can tap into the brand loyalty of popular kids' characters to provide value. To launch Strawberry Shortcake Dress Up Dreams, Budge decided to offer Strawberry Shortcake, the most popular character in the series, as a free character. Dress Up Dreams is among the highest-converting apps in the Budge portfolio, indicating that giving away the most popular character for free helped conversions rather than hurting them.

3. Test with real users

Budge knows there is no substitute for direct feedback from its end-users, so Budge involves kids every step of the way. Budge Playgroup is a playtesting program that invites families to try out apps at the alpha, beta and first-playable development stages.

The benefits from early testing can be as basic as understanding how the size and coordination of kids' hands affect their ability to complete certain actions or even hold the device, and as specific as pinpointing a less-than-effective button.

In the testing stages of Strawberry Shortcake Holiday Hair, Budge caught an issue with the main menu of the app, which would not have been evident without observing kids using the app.

Prior to Playtesting:
After Playtesting:
In the original design, users were prompted to start gameplay by audio cues. During testing, it was clear that the voiceover was not sufficient in guiding kids to initiate play, and that additional visual clues would significantly improve the experience. A simple design change resulted in a greatly enhanced user experience.

The onboarding experience is just one component of an app, but just like a first impression, it has a disproportionate impact on your users' perception of your app. As Budge has experienced, involving users in testing your app, using data to flag issues, and providing real value to your users upfront creates a smoother, more accessible onboarding experience and leads to better results.

For more best practices on developing family apps and games, please check out The Family Playbook for developers. And visit the Android Developers website to stay up-to-date with features and best practices that will help you grow a successful business on Google Play.

1. http://www.cmswire.com/customer-experience/mobile-app-retention-5-key-strategies-to-keep-your-customers/


Categories: Programming

Engineering the Software Engineer

From the Editor of Methods & Tools - Wed, 04/19/2017 - 14:54
What are the characteristics of a good software engineer? It's a topic many people would argue endlessly about. This is not surprising given we are effectively living in the era of software alchemy. Some of the best programmers draw on a strong scientific and engineering background. They combine this with craft-like coding skills in […]

Software Development Linkopedia April 2017

From the Editor of Methods & Tools - Wed, 04/19/2017 - 08:04
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about team complexity, developer burnout, blaming, tester skills, code coverage, integration testing, front end architectures and agile antipatterns. Text: Team of Teams & Complexity: An Approach for breaking down Silos Text: […]

Get Rid Of Dynamic Teams: The Premise

Five Dysfunctions of a Team

The use of teams to deliver business value is at the core of most business models. In matrix organizations, teams are generally viewed as mutable, formed and reformed from specialty labor pools to meet specific contexts. Teams can be customized to address emerging needs or critical problems and then fade away gracefully. Examples of these kinds of teams include red teams and tiger teams. This approach is viewed as maximizing organizational flexibility in a crisis. The crisis generates the energy needed to focus the team on a specific problem. However, as a general approach, dynamic teams have several problems because of how organizations are structured and how people interact and become teams.

The Five Dysfunctions of a Team by Patrick Lencioni identified two principles that directly highlight issues with dynamic teams.

  1. Team members are not easily replaceable parts.
  2. You can only have primary loyalty to one team.

Each person on a team is a set of behaviors, capabilities, opinions, and biases. For a team to be effective, the team members need to figure out how to fit all those different components together. This requires a combination of self-knowledge and knowledge of your colleagues on the team. When teams are established, members begin the process of learning each other's biases and capabilities. This is often called team building. From a practical perspective, this knowledge is important so the team knows how to allocate work effectively and can identify warning signs when problems arise. Almost all team change models, such as the Tuckman model (forming, storming, norming, and performing), recognize that teams become more effective when they build this type of knowledge. Continually disrupting teams stops teams from becoming more effective.

The second principle that Lencioni raises that directly impacts dynamic teams is loyalty. In a dynamic team, each team member needs to answer which team they have primary loyalty towards. Team members will trust members of their primary team more than others (boundary biases) and will try not to expose attributes of that team that could be perceived negatively. When the needs of their primary team conflict with the needs of their other team, one team will suffer. It is not hard to guess which will get the short end of the stick. The effectiveness of the team on the short end will suffer for two reasons. The first is obvious: their work will either not get done or will require others to step in to complete it. Second, team members that are not directly committed to the team will always be suspect. Their peers will always be looking over their shoulders waiting for them to drop the ball.

Even more basic are the cognitive biases which drive human behaviors. Cognitive biases are patterns of behavior that reflect a deviation in judgment occurring in particular situations. Teams that are always learning the nature and behavior of new team members often fall victim to social and attribution biases. These types of biases reflect errors we make when evaluating the rationale for both our own behavior and the behavior of others. Team members that misinterpret the behavior of other team members make mistakes. Mistakes reduce team effectiveness and deliver defects, which reduce the team's perceived value, setting off a negative spiral.

Dynamic teams driven by matrix management are an anchor that reduces the effectiveness of teams. Creating static teams that can meet organizational needs efficiently and effectively is not as simple as declaring that all teams are fixed.

Next:  Teams need to be constructed to match capabilities to the flow of work.  

 


Categories: Process Management

SE-Radio Episode 288: Francois Raynaud on DevSecOps

Francois Raynaud and Kim Carter discuss what’s wrong with the traditional delivery approach and why we need to change. They explore the dangers of retrofitting security to the end of projects, how to combine development, operations, and security people into the same development teams and why, along with cost-benefit analysis. Francois and Kim discuss the […]
Categories: Programming

Where do our flaky tests come from?

Google Testing Blog - Mon, 04/17/2017 - 22:45
author: Jeff Listfield

When tests fail on code that was previously tested, this is a strong signal that something is newly wrong with the code. Before, the tests passed and the code was correct; now the tests fail and the code is not working right. The goal of a good test suite is to make this signal as clear and directed as possible.

Flaky (nondeterministic) tests, however, are different. Flaky tests are tests that exhibit both a passing and a failing result with the same code. Given this, a test failure may or may not mean that there's a new problem. And trying to recreate the failure, by rerunning the test with the same version of code, may or may not result in a passing test. We start viewing these tests as unreliable and eventually they lose their value. If the root cause is nondeterminism in the production code, ignoring the test means ignoring a production bug.
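To make that definition concrete, here is a tiny hypothetical sketch (not Google's tooling): a test is flaky if repeated runs against the same code produce both outcomes.

    def is_flaky(run_test, attempts=10):
        """run_test is a hypothetical zero-argument callable that executes
        one run of a test and returns True (pass) or False (fail)."""
        outcomes = {run_test() for _ in range(attempts)}
        return len(outcomes) > 1  # both a pass and a fail were observed

Note that this only detects flakiness; it cannot prove its absence, since a flaky test may pass every one of a finite number of reruns.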
Flaky Tests at Google

Google has around 4.2 million tests that run on our continuous integration system. Of these, around 63 thousand have a flaky run over the course of a week. While this represents less than 2% of our tests, it still causes significant drag on our engineers.
If we want to fix our flaky tests (and avoid writing new ones) we need to understand them. At Google, we collect lots of data on our tests: execution times, test types, run flags, and consumed resources. I've studied how some of this data correlates with flaky tests and believe this research can lead us to better, more stable testing practices. Overwhelmingly, the larger the test (as measured by binary size, RAM use, or number of libraries built), the more likely it is to be flaky. The rest of this post will discuss some of my findings.
For a previous discussion of our flaky tests, see John Micco's post from May 2016.
Test size - Large tests are more likely to be flaky

We categorize our tests into three general sizes: small, medium and large. Every test has a size, but the choice of label is subjective. The engineer chooses the size when they initially write the test, and the size is not always updated as the test changes. For some tests it doesn't reflect the nature of the test anymore. Nonetheless, it has some predictive value. Over the course of a week, 0.5% of our small tests were flaky, 1.6% of our medium tests were flaky, and 14% of our large tests were flaky [1]. There's a clear increase in flakiness from small to medium and from medium to large. But this still leaves open a lot of questions. There's only so much we can learn looking at three sizes.
The larger the test, the more likely it will be flaky

There are some objective measures of size we collect: test binary size and RAM used when running the test [2]. For these two metrics, I grouped tests into equal-sized buckets [3] and calculated the percentage of tests in each bucket that were flaky. The numbers below are the r2 values of the linear best fit [4].

Correlation between metric and likelihood of test being flaky:

  Metric        r2
  Binary size   0.82
  RAM used      0.76

The tests that I'm looking at are (for the most part) hermetic tests that provide a pass/fail signal. Binary size and RAM use correlated quite well when looking across our tests and there's not much difference between them. So it's not just that large tests are likely to be flaky, it's that the larger the tests get, the more likely they are to be flaky.
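The bucket-and-fit procedure is easy to reproduce in outline. Below is a rough NumPy sketch under two assumptions the post leaves open ("equal-sized" buckets taken as equal test counts, and r2 computed for the line of best fit over the bucket means); the input arrays are illustrative stand-ins, not Google's data:

    import numpy as np

    def flakiness_vs_metric(metric, is_flaky, n_buckets=100):
        """metric: array of per-test sizes (e.g. binary size or RAM use);
        is_flaky: boolean array, one entry per test. Buckets tests by the
        metric, computes the flaky fraction per bucket, fits a line, and
        returns the fit plus its r^2."""
        order = np.argsort(metric)
        buckets = np.array_split(order, n_buckets)           # ~equal test counts
        x = np.array([metric[b].mean() for b in buckets])    # mean size per bucket
        y = np.array([is_flaky[b].mean() for b in buckets])  # fraction flaky

        slope, intercept = np.polyfit(x, y, 1)               # line of best fit
        residuals = y - (slope * x + intercept)
        r2 = 1 - (residuals ** 2).sum() / ((y - y.mean()) ** 2).sum()
        return slope, intercept, r2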

I have charted the full set of tests below for those two metrics. Flakiness increases with increases in binary size [5], but we also see increasing linear fit residuals [6] at larger sizes.


The RAM use chart below has a clearer progression and only starts showing large residuals between the first and second vertical lines.



While the bucket sizes are constant, the number of tests in each bucket differs. The points on the right, with larger residuals, include far fewer tests than those on the left. If I take the smallest 96% of our tests (which ends just past the first vertical line) and then shrink the bucket size, I get a much stronger correlation (r2 is 0.94). This perhaps indicates that RAM and binary size are much better predictors than the overall charts show.



Certain tools correlate with a higher rate of flaky tests
Some tools get blamed for being the cause of flaky tests. For example, WebDriver tests (whether written in Java, Python, or JavaScript) have a reputation for being flaky [7]. For a few of our common testing tools, I determined the percentage of all the tests written with that tool that were flaky. Of note, all of these tools tend to be used with our larger tests. This is not an exhaustive list of all our testing tools, and represents around a third of our overall tests. The remainder of the tests use less common tools or have no readily identifiable tool.
Flakiness of tests using some of our common testing tools:

  Category                       % of tests that are flaky   % of all flaky tests
  All tests                      1.65%                       100%
  Java WebDriver                 10.45%                      20.3%
  Python WebDriver               18.72%                      4.0%
  An internal integration tool   14.94%                      10.6%
  Android emulator               25.46%                      11.9%

All of these tools have higher than average flakiness. And given that 1 in 5 of our flaky tests are Java WebDriver tests, I can understand why people complain about them. But correlation is not causation, and given our results from the previous section, there might be something other than the tool causing the increased rate of flakiness.
Size is more predictive than tool

We can combine tool choice and test size to see which is more important. For each tool above, I isolated tests that use the tool and bucketed those based on memory usage (RAM) and binary size, similar to my previous approach. I calculated the line of best fit and how well it correlated with the data (r2). I then computed the predicted likelihood a test would be flaky at the smallest bucket [8] (which is already the 48th percentile of all our tests) as well as the 90th and 95th percentile of RAM used.
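The percentile predictions in the next two tables follow directly from that fit; a short continuation of the earlier sketch, under the same assumptions:

    # Predict flakiness at the 90th and 95th percentile of the metric,
    # using the slope/intercept returned by flakiness_vs_metric above.
    for p in (90, 95):
        x_p = np.percentile(metric, p)
        print(p, slope * x_p + intercept)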
Predicted flaky likelihood by RAM and tool:

  Category                       r2     Smallest bucket (48th percentile)   90th percentile   95th percentile
  All tests                      0.76   1.5%                                5.3%              9.2%
  Java WebDriver                 0.70   2.6%                                6.8%              11%
  Python WebDriver               0.65   -2.0%                               2.4%              6.8%
  An internal integration tool   0.80   -1.9%                               3.1%              8.1%
  Android emulator               0.45   7.1%                                12%               17%

This table shows the results of these calculations for RAM. The correlation is stronger for the tools other than Android emulator. If we ignore that tool, the difference in correlations between tools for similar RAM use are around 4-5%. The differences from the smallest test to the 95th percentile for the tests are 8-10%. This is one of the most useful outcomes from this research: tools have some impact, but RAM use accounts for larger deviations in flakiness.
Predicted flaky likelihood by binary size and tool:

  Category                       r2     Smallest bucket (33rd percentile)   90th percentile   95th percentile
  All tests                      0.82   -4.4%                               4.5%              9.0%
  Java WebDriver                 0.81   -0.7%                               14%               21%
  Python WebDriver               0.61   -0.9%                               11%               17%
  An internal integration tool   0.80   -1.8%                               10%               17%
  Android emulator               0.05   18%                                 23%               25%

There's virtually no correlation between binary size and flakiness for Android emulator tests. For the other tools, you see greater variation in predicted flakiness between the small tests and large tests compared to RAM: up to 12 percentage points. But you also see wider differences from the smallest size to the largest: 22% at the max. This is similar to what we saw with RAM use, and it is another of the most useful outcomes of this research: binary size accounts for larger deviations in flakiness than the tool you use.
Conclusions

Engineer-selected test size correlates with flakiness, but within Google there are not enough test size options to be particularly useful.
Objectively measured test binary size and RAM have strong correlations with whether a test is flaky. This is a continuous function rather than a step function. A step function would have sudden jumps and could indicate that we're transitioning from one type of test to another at those points (e.g. unit tests to system tests or system tests to integration tests).
Tests written with certain tools exhibit a higher rate of flakiness. But much of that can be explained by the generally larger size of these tests. The tool itself seems to contribute only a small amount to this difference.
We need to be more careful before we decide to write large tests. Think about what code you are testing and what a minimal test would look like. And we need to be careful as we write large tests. Without additional effort aimed at preventing flakiness, there is a strong likelihood you will have flaky tests that require maintenance.
Footnotes
  1. A test was flaky if it had at least one flaky run during the week.
  2. I also considered number of libraries built to create the test. In a 1% sample of tests, binary size (0.39) and RAM use (0.34) had stronger correlations than number of libraries (0.27). I only studied binary size and RAM use moving forward.
  3. I aimed for around 100 buckets for each metric.
  4. r2 measures how closely the line of best fit matches the data. A value of 1 means the line matches the data exactly.
  5. There are two interesting areas where the points actually reverse their upward slope. The first starts about halfway to the first vertical line and lasts for a few data points and the second goes from right before the first vertical line to right after. The sample size is large enough here that it's unlikely to just be random noise. There are clumps of tests around these points that are more or less flaky than I'd expect only considering binary size. This is an opportunity for further study.
  6. Distance from the observed point and the line of best fit.
  7. Other web testing tools get blamed as well, but WebDriver is our most commonly used one.
  8. Some of the predicted flakiness percents for the smallest buckets end up being negative. While we can't have a negative percent of tests be flaky, it is a possible outcome using this type of prediction.
Categories: Testing & QA

Android Developer Story: Robinhood uses Android Studio to quickly build and test new features

Android Developers Blog - Mon, 04/17/2017 - 17:45
Posted by Christopher Katsaros, Developer Marketing, Android

Robinhood allows users to buy and sell stocks commission-free* in the US. It is designed to make financial investment easy for all users, even if you've never traded before.

With a team of two Android developers, the company has relied on fast tools like Android Studio to build rich new features, which have helped make Robinhood the highest-rated stock brokerage app on Google Play.

Watch Robinhood's Joe Binney, VP of Product Engineering, and Dan Hill, Android Developer, talk about how Android Studio is helping them achieve strong growth on Android.


The top Android developers use Android Studio to build powerful and successful apps on Google Play; learn more about the official IDE for Android app development and get started for yourself.

Get more tips and watch other success stories in the Playbook for Developers app.

*Free trading refers to $0 commissions for Robinhood Financial self-directed individual cash or margin brokerage accounts that trade U.S. listed securities via mobile devices. SEC & FINRA regulatory fees may apply.


Categories: Programming

5 tips for indie game success, from indie game developers

Android Developers Blog - Mon, 04/17/2017 - 17:37

Posted by Sarah Thomson, BD Partnerships Lead, Indies, Google Play Games

Mobile gaming is a fun place to be right now. It's a landscape seeing tremendous success year after year with great potential for additional growth and innovation. It's also a space where developers can express themselves with creative game styles, mechanics, design and more. This is what the indie community does best.

Here are 5 tips for indies by indies, shared by our gaming partners at 505 Games, About Fun, Disruptor Beam, Klei Entertainment, and Schell Games.


1. Embrace being indie
Indies are inherently smaller operations and should embrace their agility and ability to take risks. Petr Vodak, CEO at About Fun, recommends getting your product out there so you can start taking feedback and applying your learnings to future projects. Don't be afraid to fail! Remaining flexible and building in modularity so you can evolve with the business needs is a strategy embraced by Pete Arden, CMO at Disruptor Beam. For instance, with their game Star Trek Timelines, the initial user experience was tailored to avid Star Trek fans. Over time, as user acquisition costs increased, they've changed the new-player experience to appeal to their evolving user base of gamers looking for a fun entertainment experience, and less to the specific Star Trek IP.

2. Find a way to stand out
To help stand out in the ultra-competitive mobile space, Jesse Schell, CEO of Schell Games, recommends doing something clever or very different. This strategy has led them to explore the growth areas of new platforms such as AR and VR. While new platforms present a field for opportunity and creativity, they're best approached with the long term in mind, allowing you to sustain the business until critical mass is reached.

3. Build a community
There are many ways to build communities. If you have an existing fan base on other platforms, cross-promote to drive awareness of your mobile offerings. You can also look at porting titles over, but be aware of the differences in mobile gaming habits and ensure you adapt your game accordingly.

4. Engage after install
Both 505 Games and Klei Entertainment recommend running your premium titles as a service. By monitoring user reviews you can gain invaluable feedback and spot trends, helping you better understand user pain points and desires. In addition, by releasing regular content updates and in-game events you create a reason for users to get back into the game. This not only drives re-engagement, but 505 Games also sees strong spikes in new installs aligned with major game updates.

5. Monetize in different ways
Similar strategy to above, dropping regular content refreshes and game updates while offering a variety of monetization options gives users more ways to engage with your game. Keeping your games fresh gives users reason to come back and builds loyalty so you can cross-promote to your users with future game launches.

If you're looking for a fun new game to play, check out the great selection on Indie Corner on Google Play. And if you're working on a new indie game of your own, nominate your title for inclusion.

Watch more sessions from Google Developer Day at GDC17 on the Android Developers YouTube channel to learn tips for success. Visit the Android Developers website to stay up-to-date with features and best practices that will help you grow a successful business on Google Play.



Categories: Programming

A New Issue Tracker for our AOSP Developers

Android Developers Blog - Mon, 04/17/2017 - 06:33
Posted by Sandie Gong, Developer Relations Program Manager & Chris Iremonger, Android Technical Program Manager

Like many other issue trackers at Google, our Android Open Source Project (AOSP) issue tracking system is being upgraded to Issue Tracker. We hope to facilitate better collaboration between our developers and our Android product teams by using a tool we use internally at Google to track bugs and feature requests during product development.

Starting today, all issues formerly at code.google.com/p/android/issues will migrate to Issue Tracker under the Android Public Tracker component. You may have noticed that we are already using the new tool to collect feedback on the O Developer Preview!

What has been migrated

Getting started with Issue Tracker
You can learn more about navigating our Issue Tracker from our developer documentation. By default, Issue Tracker displays only the issues assigned to you. You can easily change that to show a hotlist of your choice, a bookmark group, or a saved search. You can also adjust notification settings by clicking the gear icon in the top right corner and selecting Settings.

The mappings in Issue Tracker are also slightly different from those in code.google.com, so make sure to check out Life of a Bug to learn more about what the various statuses mean.



Searching for component specific issues
Opening a code.google.com issue link will automatically redirect you to the new system. We've cleaned up some of the spam, but you'll be able to find all of the other issues from code.google.com in Issue Tracker, including any issue you've reported, commented on, or starred.

You can view all reported Android issues in the Android Public Tracker component and drill down to see reported issues for specific categories of issues, such as Tools and Support Libraries, by searching for specific components.
Filing a bug or feature request
Before filing a new issue, please check if it is already reported in the issues list. Let us know what issues are important to you by starring an existing issue.

Submitting a new issue is easy. Once you click "Create Issue", search for the appropriate component for your issue. Alternatively, you can just follow the correct issue creation link for each component listed in Report Bugs.

Here are some helpful links to get you started:

  • Navigating and creating issues in the Android component: Navigating Google Issue Tracker
  • Google Issue Tracker announcements for other products
Categories: Programming