Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Agile software architecture documentation

Coding the Architecture - Simon Brown - Tue, 05/31/2016 - 22:43

"We value working software over comprehensive documentation" is what the manifesto for agile software development says. I know it's now a cliché, but the typical misinterpretation of these few words is "don't write documentation". Of course, that's not actually what the manifesto says, and "no documentation" certainly wasn't the intent. To be honest, I think many software teams never produced, or liked producing, any documentation anyway, and they're now simply using the manifesto to justify that approach. What's done is done, and we must move on.

One of the most common questions I get asked is how to produce "agile documentation", specifically with regards to documenting how a software system works. I've met many people who have tried the traditional "software architecture document" approach and struggled with it for a number of reasons, irrespective of whether the implementation was a Microsoft Word document or a wiki like Atlassian Confluence. My simple advice is to think of such documentation as being supplementary to the code, describing what you can't get from the code alone.

Readers of my Software Architecture for Developers ebook will know that I propose something akin to a travel guidebook. Imagine you arrive in a new city. Without any maps or a sense of direction, you'll end up just walking up and down every street trying to find something you recognise or something of interest. You can certainly have conversations with the people you meet, but that will get tiring really quickly. If I were a new joiner on an existing software development team, what I'd personally like is something I can sit down and read over a coffee, perhaps for an hour or so, that gives me a really good starting point from which to jump in and start exploring the code.

The software guidebook

Although the content of this document will vary from team to team (after all, that's the whole point of being agile), I propose the following section headings as a starting point.

  1. Context
  2. Functional Overview
  3. Quality Attributes
  4. Constraints
  5. Principles
  6. Software Architecture
  7. Code
  8. Data
  9. Infrastructure Architecture
  10. Deployment
  11. Development Environment
  12. Operation and Support
  13. Decision Log

The definitions of these sections are included in my ebook and they're now available to read for free on the Structurizr website (see the hyperlinks above). This is because the next big feature that I'm rolling out on Structurizr is the ability to add lightweight supplementary documentation into the existing software architecture model. The teams I work with seem to really like the guidebook approach, and some even restructure the content on their wiki to match the section headings above. Others don't have a wiki though, and are stuck using tools like Microsoft Word. There's nothing inherently wrong with using Microsoft Word, of course, in the same way that using Microsoft Visio to create software architecture diagrams is okay. But it's 2016 and we should be able to do better.

Documentation in Structurizr

The basic premise of the documentation support in Structurizr is to create one Markdown file per guidebook section, link it to an appropriate element in the software architecture model, and embed software architecture diagrams where necessary. If you're interested in seeing what this looks like, I've pushed an initial release, and there is some documentation for techtribes.je and the Financial Risk System that I use in my workshops. The Java code and Markdown look like this.

Even if you're not using Structurizr, I hope that this blog post and publishing the definitions of the sections I typically include in my software architecture documentation will help you create better documentation to complement your code. Remember, this is all about lightweight documentation that describes what you can't get from the code and only documenting something if it adds value.

Categories: Architecture

Search at I/O 16 Recap: Eight things you don't want to miss

Google Code Blog - Tue, 05/31/2016 - 19:54

Posted by Fabian Schlup, Software Engineer

Two weeks ago, over 7,000 developers descended upon Mountain View for this year’s Google I/O, with a takeaway that it’s truly an exciting time for Search. People come to Google billions of times per day to fulfill their daily information needs. We’re focused on creating features and tools that we believe will help users and publishers make the most of Search in today’s world. As Google continues to evolve and expand to new interfaces, such as the Google assistant and Google Home, we want to make it easy for publishers to integrate and grow with Google.

In case you didn’t have a chance to attend all our sessions, we put together a recap of all the Search happenings at I/O.

1: Introducing rich cards

We announced rich cards, a new Search result format that builds on rich snippets and uses schema.org markup to display content in an even more engaging and visual format. Rich cards are available in English for recipes and movies, and we're excited to roll them out for more content categories soon. To learn more, browse the new gallery with screenshots and code samples of each markup type, or watch our rich cards devByte.
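To make the schema.org markup concrete, here is a minimal, illustrative sketch of a Recipe object of the kind rich cards are built from. All field values are invented placeholders, not Google's official samples:

```python
import json

# A minimal schema.org Recipe object (all values are invented placeholders).
recipe = {
    "@context": "http://schema.org",
    "@type": "Recipe",
    "name": "Simple Banana Bread",
    "image": "https://example.com/banana-bread.jpg",
    "author": {"@type": "Person", "name": "Example Author"},
    "prepTime": "PT15M",   # ISO 8601 duration: 15 minutes
    "cookTime": "PT1H",
    "recipeIngredient": ["3 ripe bananas", "2 cups flour"],
}

# Structured data is typically embedded in the page as a JSON-LD script block.
markup = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(recipe, indent=2)
print(markup)
```

The testing tool mentioned below can then validate markup like this against the expected properties for each content category.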

2: New Search Console reports

We want to make it easy for webmasters and developers to track and measure their performance in search results. We launched a new report in Search Console to help developers confirm that their rich card markup is valid. In the report we highlight “enhanceable cards,” which are cards that can benefit from marking up more fields. The new Search Appearance filter also makes it easy for webmasters to filter their traffic by AMP and rich cards.

3: Real-time indexing

Users are searching for more than recipes and movies: they’re often coming to Search to find fresh information about what’s happening right now. This insight kickstarted our efforts to use real-time indexing to connect users searching for real-time events with fresh content. Instead of waiting for content to be crawled and indexed, publishers will be able to use the Google Indexing API to trigger the indexing of their content in real time. It’s still in its early days, but we’re excited to launch a pilot later this summer.

4: Getting up to speed with Accelerated Mobile Pages

We provided an update on our use of AMP, an open source effort to speed up the mobile web. Google Search uses AMP to enable instant-loading content. Speed is important: over 40% of users abandon a page that takes more than three seconds to load. We announced that we're bringing AMPed news carousels to the iOS and Android Google apps, as well as experimenting with combining AMP and rich cards. Stay tuned for more via our blog and GitHub page.

In addition to the sessions, attendees could talk directly with Googlers at the Search & AMP sandbox.

5: A new and improved Structured Data Testing Tool

We updated the popular Structured Data Testing tool. The tool is now tightly integrated with the DevSite Search Gallery and the new Search Preview service, which lets you preview how your rich cards will look on the search results page.

6: App Indexing got a new home (and new features)

We announced App Indexing’s migration to Firebase, Google’s unified developer platform. Watch the session to learn how to grow your app with Firebase App Indexing.

7: App streaming

App streaming is a new way for Android users to try out games without having to download and install the app -- and it’s already available in Google Search. Check out the session to learn more.

8: Revamped documentation

We also revamped our developer documentation, organizing our docs around topical guides to make it easier to follow.

Thanks to all who came to I/O -- it’s always great to talk directly with developers and hear about experiences first-hand. And whether you came in person or tuned in from afar, let’s continue the conversation on the webmaster forum or during our office hours, hosted weekly via hangouts-on-air.

Categories: Programming

Don’t Be Silly

NOOP.NL - Jurgen Appelo - Tue, 05/31/2016 - 17:23
Don't Be Silly

I’m giving away exclusive keynotes and webinars at the price of just a few books. Don’t be stupid. Grab that chance before June 27!

I get many requests for free keynotes and frequent inquiries for online webinars. I decline them all. I don’t believe in working for free (unless there is a non-monetary value attached to the work). And I have bad experiences with webinars.

However… I will make some exceptions.

After the release of my new book Managing for Happiness on June 27, I will pick FIVE lucky communities and companies that will each get a free keynote from me. They will not pay me anything, not even travel or accommodation!

I will also pick FIVE lucky winners who will get from me a free online webinar. Again, I will require nothing from them except an appreciative audience and the handling of all logistics.

You could be one of those winners!

All you need to do is take part in the Preorder Party: order some copies of Managing for Happiness, send me the proof of purchase, and nominate a community or company. The communities and companies with the most votes have the best chance of winning one of my free keynotes or webinars.

Tip: Nearly all communities and companies in this pre-order party have been nominated only once! But my free talks go to those with the most votes. It should be very, very easy to become a winner in this contest. Be the one with more nominations than the others!

Seriously, don’t be silly. Don’t email me asking for a free keynote or webinar. I will just say no. Simply pre-order some books, nominate your community or company, and make sure you get more than one vote.

I am the silliest person here, giving away so much value at such a small price.

Categories: Project Management

The Best Way to Establish a Baseline When Playing Planning Poker

Mike Cohn's Blog - Tue, 05/31/2016 - 15:00

Planning Poker relies on relative estimating, in which the item being estimated is compared to one or more previously estimated items. It is the ratio between items that is important. An item estimated as 10 units of work (generally, story points) is estimated to take twice as long to complete as an item estimated as five units of work.

An advantage to relative estimating is that it becomes easier to do as a team estimates more items.

Estimating a new item becomes a matter of looking at the previously estimated items and finding something requiring a similar amount of work. This is easier to do when the team has already estimated 100 items than when they’ve only estimated 10.

But relative estimating, as used in Planning Poker, suffers from a bootstrapping problem: how does a team select the initial estimates to which they'll compare?

My recommendation is that when a team first starts playing Planning Poker, team members identify two values that will establish their baseline. They do this without playing Planning Poker. They do it just through discussion. After the baseline is established, team members can use Planning Poker to estimate additional items.

Ideally, the team is able to identify both a two-point story and a five-point story. There is evidence that humans estimate most reliably when sticking within one order of magnitude.

Identifying a two-point product backlog item and a five-point item does a good job of spanning this order of magnitude. Many other items can then be more reliably compared against the two and the five.

If finding a two and a five proves difficult, look instead for a two and an eight, or a three and an eight. Anything that spans the one to 10 range where we’re good estimators will work.
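The baseline comparison described above can be sketched in a few lines; the story names and point values below are invented for illustration:

```python
# Triangulating new estimates against a two-point and a five-point
# baseline story (names and values are invented examples).
baseline = {"login form": 2, "search results page": 5}  # story points

def plausible(candidate_points, smaller=2, larger=5):
    """An estimate validated by comparison in both directions: bigger
    than the small baseline story, smaller than the large one."""
    return smaller <= candidate_points <= larger

assert plausible(3)        # sits between the two and the five
assert not plausible(13)   # beyond the order of magnitude we estimate well

# Ratios are what matter: a 10-point item is expected to take twice as
# long as the 5-point baseline story.
assert 10 / baseline["search results page"] == 2
```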

Avoid Starting with a One-Point Story

I like to avoid starting with a one-point story. It doesn’t leave room for anything smaller without resorting to fractions, and those are harder to work with later.

Additionally, comparing all subsequent stories to a one-point story is difficult. Saying one product backlog item will take two or three times longer than another seems intuitively easier and more accurate than saying something will take 10 times longer.

I made this point in my 2005 “Agile Estimating and Planning” book (now also a video course). In 2013, it was confirmed by Magne Jørgensen of the Simula Research Lab. Jørgensen, a highly respected researcher, conducted experiments involving 62 developers. He found that “using a small user story as the reference tends to make the stories to be estimated too small due to an assimilation effect.”

Why Use Two Values for a Baseline?

Establishing a baseline of two values allows for even the first stories being estimated to be compared to two other items. This is known as triangulating and helps achieve more consistent estimates.

If a team has established a baseline with two- and five-point stories, team members can validate a three-point estimate by thinking whether it will take longer than the two and less time than the five.

Citing again the research of Jørgensen, there is evidence that the direction of comparison matters. Comparing the item being estimated to one story that will take less time to develop and another that will take longer is likely to improve the estimate.

Don’t Establish a New Baseline Every Project

Some teams establish a new baseline at the start of each project. Because this results in losing all historical velocity data, I don’t recommend doing this as long as two things are true:

  • The team members developing the new system will be largely those involved in the prior system. The team doesn’t need to stay entirely the same, but as long as about half the team remains the same, you’re better off using the same baseline.
  • The team will be building a somewhat similar system. If a team is switching from developing a website to embedded firmware, for example, they should establish a new baseline. But if the systems being built are somewhat similar in either the domain or technologies used, don’t establish a new baseline.

Whenever possible, retain the value of historical data by keeping a team’s baseline consistent from sprint to sprint.

How Do You Establish Your Baseline?

How do you estimate your baseline and initial estimates for a new team? Please share your thoughts in the comments below.

Memorial Day 2016

Herding Cats - Glen Alleman - Mon, 05/30/2016 - 15:49

Memorialday

If we don't remember those who have died, they died for nothing

From 1775 to Present - 2,852,901

Categories: Project Management

Software Development Conferences Forecast May 2016

From the Editor of Methods & Tools - Mon, 05/30/2016 - 13:29
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

SPaMCAST 396 – Mike Burrows, Agendashift

SPaMCAST Logo

http://www.spamcast.net

Listen Now

Subscribe on iTunes or check out the podcast on Google Play Music.

The Software Process and Measurement Cast 396 begins our run up to Episode 400 with our interview with Mike Burrows. Mike and I talked about his game-changing idea of Agendashift. Agendashift identifies opportunities for positive change by exploring an organization's alignment to the values of transparency, balance, collaboration, customer focus, flow, and leadership. Along the way, we also revisited parts of our previous interview on the podcast covering Mike's book, Kanban from the Inside (Kindle).

Mike’s Bio

Mike is the founder of Agendashift, author of the book Kanban from the Inside, consultant, coach, and trainer. In recent months, he has been the interim delivery manager for two UK government digital “exemplar” projects and consultant to public and private sector organisations at home and abroad. Prior to his consulting career, he was global development manager and Executive Director at a top tier investment bank, and IT Director for an energy risk management startup.

Agendashift Blog: https://www.agendashift.com/
Twitter: @asplake and  @KanbanInside

 Re-Read Saturday News

We continue the read of Commitment – Novel About Managing Project Risk by Maassen, Matts, and Geary. Buy your copy today and read along (use the link to support the podcast). This week we tackle Chapter 6, which layers ideas from game theory to explain why real options works. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Next SPaMCAST

The next Software Process and Measurement Cast includes three columns.  The first is our essay on cumulative flow diagrams.  Cumulative flow diagrams are extremely versatile tools for managing work.  I am becoming more and more convinced that they should be used universally.

We will also have a visit to the QA Corner with Jeremy Berriault.  Jeremy brings us his unique wisdom to testing topics.  Our conversations are always illuminating!

Jon M. Quigley brings a new column to the SPaMCAST next week. Jon is a serial author and consultant who first appeared on SPaMCAST 346. In the first installment of the column, we discuss project risk, scope and strategy selection. Jon has not named his column and is looking for your suggestions!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Re-Read Saturday: Commitment – Novel about Managing Project Risk, Part 5

Picture of the book cover

Commitment

If you have ever had to defend a project, team or concept, the plot in Chapter 6 will resonate. I would title this week's installment 'explanations and office politics', as we close in on the peak of our read of Commitment – Novel about Managing Project Risk by Olav Maassen, Chris Matts and Chris Geary (2nd edition, 2016). In Chapter 6 we meet the corporate inquisitor. Management sends Duncan to find out what's going on in the project. The briefing ends on the note, "We just need to make sure we do the right thing."

The first major exposition in the chapter is on game theory, the study of (often mathematical) models of conflict and cooperation in decision making. Game theory is useful for both describing and anticipating how team members interact; much of the theory behind real options is built on predicting that interaction. The exposition covers the prisoner's dilemma and the strategy of conflict. In the prisoner's dilemma, the police arrest two prisoners for a crime they committed together and isolate them so they cannot talk to each other. There is not sufficient evidence to convict the pair, so the prosecutors hope to convict each on a lesser charge (typically stated as one year). In tandem, they offer each prisoner a bargain: betray the other and avoid jail while the other prisoner serves several years (typically three). The first person to "roll over" gets the deal; if neither testifies, both receive the same one-year sentence. A purely rational player immediately maximizes their own utility by betraying the other prisoner, even though cooperating would mean less total jail time served between the two.

The second aspect of game theory discussed in the callout is the strategy of conflict. Its main features are withholding information between participants and not allowing participants to negotiate directly with the decision maker. Under these constraints, a system works up to a point and then fails because information is not shared and collaboration breaks down. At that point, participants see the need to work together to survive and begin to collaborate. Forcing teams to make decisions about options will at some point precipitate this failure scenario. As soon as the team senses the potential for failure, there is a natural tendency to tip into a more collaborative process. Learning to resolve conflict improves the team's ability to make decisions; suppressing conflict instead teaches the team conflict-avoidance skills, which reduce the effectiveness of decision making.
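The prisoner's dilemma above can be sketched as a small payoff table. The sentences follow the text, except the both-betray case, which the text leaves unstated; two years each is a common assumption:

```python
# Prisoner's dilemma payoff table. Sentences follow the text: one year
# each for mutual silence, freedom for a lone betrayer, three years for
# the one betrayed. Two years each for mutual betrayal is an assumption.
payoff = {  # (my move, other's move) -> my years in jail (lower is better)
    ("silent", "silent"): 1,
    ("silent", "betray"): 3,
    ("betray", "silent"): 0,
    ("betray", "betray"): 2,
}

def best_response(others_move):
    """A purely rational player minimises their own sentence."""
    return min(("silent", "betray"), key=lambda my: payoff[(my, others_move)])

# Betrayal dominates whatever the other prisoner does...
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"
# ...even though mutual silence minimises total jail time (2 vs 4 years).
```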

Duncan, the hatchet man, begins his inquisition by telling Rose he is there to observe, but withholds the impact of a bad review.  Duncan observes and interviews the team in action before coming back to talk with Rose.

The authors use Duncan's observations and his discussion with Rose in Chapter 6 to tie real options to game theory. Rose points out that most people, when given the choice between being uncertain and being certain, will choose certainty. Given the option of waiting for more information, people therefore tend to make the decision now, even at the risk of being wrong. Real options works when teams get to the point where they collaborate rather than simply avoiding uncertainty, and that point is predictable based on the strategy of conflict. Rose walks Duncan through how the team worked through the perception of failure and arrived at collaboration.

In a blog entry, Rose points out that since learning about real options, she has begun to see options everywhere. Balancing options becomes a matter of the price you are willing to pay for a choice to remain an option rather than a commitment. In real life, too much choice is a bad thing, because people would rather not make a choice at all; the "too many options" issue is one reason many sales managers present a single clear choice rather than a set of options. Rose's advice for adjusting to an options-rich world is:

  • Be deliberate about what you treat as an option; not everything needs to be an option even if it appears to be.
  • Be deliberate about making commitments, as commitments are irreversible.
  • Don’t expect too much; allow yourself to get accustomed to options thinking.

Options have value; therefore, a natural question is how to value real options. A blog entry notes that real options thinking began in the financial markets, which use the Black-Scholes equation to value options (in simplified form, predicted result × probability equals weighted value). While the equation is relatively simple, the need for an estimated probability complicates its use. The blog suggests that while the equation works in financial markets, the absence of financial-market liquidity in the real world causes this type of valuation not to work for software development projects. It should be noted that Douglas W. Hubbard's How to Measure Anything provides a mechanism to address this problem. Valuation, while a touchy subject for some, is a topic that needs to be understood. Practice has taught me that teams and organizations need to understand which options require extra levels of evaluation, and therefore valuation, and which do not.
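The simplified weighted-value calculation can be sketched as follows; all payoffs and probabilities below are invented for illustration:

```python
# The simplified valuation from the text: predicted result * probability.
# All numbers are invented for illustration.
def weighted_value(predicted_result, probability):
    return predicted_result * probability

# Committing now, with less information, versus paying 10k to keep the
# option open until better information raises the odds of success:
commit_now = weighted_value(100_000, 0.6)
keep_option_open = weighted_value(100_000, 0.9) - 10_000

# Here the option is worth its price; in practice the estimated
# probabilities are exactly what makes this hard to apply.
assert keep_option_open > commit_now
```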

Jumping over the kickboxing and dancing, at the end of the chapter Duncan gives the project Rose is leading a clean bill of health. However, due to the perceived risk of implementation, the client has canceled the project, which means that everyone will lose their job at the end of the month. The chapter ends with Duncan uttering a great line, "All their options have expired."

Previous Installments:

Part 1 (Chapters 1 and 2)

Part 2 (Chapter 3)

Part 3 (Chapter 4)

Part 4 (Chapter 5)


Categories: Process Management

Flaky Tests at Google and How We Mitigate Them

Google Testing Blog - Sat, 05/28/2016 - 01:34
by John Micco

At Google, we run a very large corpus of tests continuously to validate our code submissions. Everyone from developers to project managers relies on the results of these tests to make decisions about whether the system is ready for deployment or whether code changes are OK to submit. Developer productivity at Google relies on the ability of the tests to find real problems with the code being changed or developed in a timely and reliable fashion.

Tests are run before submission (pre-submit testing) which gates submission and verifies that changes are acceptable, and again after submission (post-submit testing) to decide whether the project is ready to be released. In both cases, all of the tests for a particular project must report a passing result before submitting code or releasing a project.

Unfortunately, across our entire corpus of tests, we see a continual rate of about 1.5% of all test runs reporting a "flaky" result. We define a "flaky" test result as a test that exhibits both a passing and a failing result with the same code. There are many root causes of flaky results, including concurrency, reliance on non-deterministic or undefined behaviors, flaky third-party code, infrastructure problems, and so on. We have invested a lot of effort in removing flakiness from tests, but overall the insertion rate is about the same as the fix rate, meaning we are stuck with a certain rate of tests that provide value but occasionally produce a flaky result. Almost 16% of our tests have some level of flakiness associated with them! This is a staggering number; it means that more than 1 in 7 of the tests written by our world-class engineers occasionally fail in a way not caused by changes to the code or tests.

When doing post-submit testing, our Continuous Integration (CI) system identifies when a passing test transitions to failing, so that we can investigate the code submission that caused the failure. What we find in practice is that about 84% of the transitions we observe from pass to fail involve a flaky test! This causes extra repetitive work to determine whether a new failure is a flaky result or a legitimate failure. It is quite common to ignore legitimate failures in flaky tests due to the high number of false positives. At the very least, build monitors typically wait for additional CI cycles to run the test again to determine whether it has actually been broken by a submission, which delays the identification of real problems and increases the pool of changes that could have contributed.

In addition to the cost of build monitoring, consider that the average project contains 1000 or so individual tests. To release a project, we require that all these tests pass with the latest code changes. If 1.5% of test results are flaky, 15 tests will likely fail, requiring expensive investigation by a build cop or developer. In some cases, developers dismiss a failing result as flaky only to later realize that it was a legitimate failure caused by the code. It is human nature to ignore alarms when there is a history of false signals coming from a system. For example, see this article about airline pilots ignoring an alarm on 737s. The same phenomenon occurs with pre-submit testing. The same 15 or so failing tests block submission and introduce costly delays into the core development process. Ignoring legitimate failures at this stage results in the submission of broken code.
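The arithmetic above can be checked with a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope math from the numbers above: a 1.5% flaky-result
# rate across ~1000 tests per project.
flake_rate = 0.015
num_tests = 1000

# Expected number of flaky failures in one full run of the suite (~15).
expected_flaky_failures = flake_rate * num_tests

# Probability that every test avoids flaking in a single run,
# assuming independence between tests.
prob_clean_run = (1 - flake_rate) ** num_tests

print(expected_flaky_failures)   # ~15 tests per run
print("%.2e" % prob_clean_run)   # a fully green run is vanishingly rare
```

The independence assumption is a simplification, but it shows why a release gate requiring every test to pass is effectively never green without re-runs at this scale.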

We have several mitigation strategies for flaky tests during pre-submit testing, including the ability to re-run only failing tests, and an option to re-run tests automatically when they fail. We even have a way to denote a test as flaky, causing it to report a failure only if it fails three times in a row. This reduces false positives, but encourages developers to ignore flakiness in their own tests unless those tests start failing three times in a row, which is hardly a perfect solution.
Imagine a 15-minute integration test marked as flaky that is broken by my code submission. The breakage will not be discovered until three executions of the test complete - 45 minutes later - after which it must still be determined whether the test is actually broken (and needs to be fixed) or whether it just flaked three times in a row.
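The "report a failure only if it fails three times in a row" rule can be sketched as a small wrapper; this is an illustrative stand-in, not Google's actual tooling, and `run_flaky_test` is a hypothetical name:

```python
def run_flaky_test(test_fn, attempts=3):
    """Re-run a test denoted as flaky; report failure only if every
    attempt fails (the 'three times in a row' rule described above)."""
    for _ in range(attempts):
        if test_fn():      # True = pass, False = fail
            return True    # any single pass masks the earlier failures
    return False           # failed on all attempts -> reported failure

# A deterministic stand-in for a flaky test: fails twice, then passes.
results = iter([False, False, True])
print(run_flaky_test(lambda: next(results)))  # True - flake is hidden
```

The example makes the downside concrete: a test that fails two times out of three is still reported as passing, which is exactly the masking effect the text warns about.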

Other mitigation strategies include:
  • A tool that monitors the flakiness of tests and if the flakiness is too high, it automatically quarantines the test. Quarantining removes the test from the critical path and files a bug for developers to reduce the flakiness. This prevents it from becoming a problem for developers, but could easily mask a real race condition or some other bug in the code being tested.
  • Another tool detects changes in the flakiness level of tests and works to identify the change that caused the test to change the level of flakiness.

In summary, test flakiness is an important problem, and Google is continuing to invest in detecting, mitigating, tracking, and fixing test flakiness throughout our code base. For example:
  • We have a new team dedicated to providing accurate and timely information about test flakiness to help developers and build monitors so that they know whether they are being harmed by test flakiness.
  • As we analyze the data from flaky test executions, we are seeing promising correlations with features that should enable us to identify a flaky result accurately without re-running the test.


By continually advancing the state of the art for teams at Google, we aim to remove the friction caused by test flakiness from the core developer workflows.

Categories: Testing & QA

The Fallacy of Beneficial Ignorance

Herding Cats - Glen Alleman - Sat, 05/28/2016 - 01:01

The basis of #NoEstimates is that decisions can be made in the presence of uncertainty without having to estimate the impact of those decisions.

No Estimates

Here's a research paper that hopefully will put an end to the nonsensical idea of #NoEstimates.

All project work is uncertain. As has been stated here endless times, uncertainty comes in two forms - Reducible (Epistemic) and Irreducible (Aleatory).

Add to that the biases found on all projects - confirmation and optimism are two we encounter all the time. The conjecture - and it is pure conjecture - that decisions can be made when spending other people's money in the presence of uncertainty without estimating the consequences of those decisions is not only conjecture, it's factually false - a Fallacy.

Here's the paper at SSRN. You'll need to create a free account. This version is the pre-publication version, but it's the same paper I downloaded from my account. Read the paper and discover how to reject the notion of #NoEstimates, not only for its ignorance of managerial finance, probabilistic decision making, and business governance, but of foundational mathematics as well.

So it's time to learn why estimates are needed to make decisions in the presence of uncertainty, how to make those estimates, and how to start behaving as adults in the presence of the risk created by this uncertainty. As Tim Lister tells us, risk management is how adults manage projects.


Related articles:
  • What's the Smell of Dysfunction?
  • Making Decisions in the Presence of Uncertainty
  • Making Choices in the Absence of Information
  • Making Conjectures Without Testable Outcomes
  • Estimating and Making Decisions in Presence of Uncertainty
  • Making Decisions In The Presence of Uncertainty
  • Intellectual Honesty of Managing in the Presence of Uncertainty
  • Some More Background on Probability, Needed for Estimating
  • Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Bringing virtual cats to your world with Project Tango

Google Code Blog - Fri, 05/27/2016 - 23:44

Posted by Jason Guo, Developer Programs Engineer, Project Tango

Project Tango brings augmented reality (AR) experiences to life. From the practical to the whimsical, Project Tango apps help place virtual objects -- anything from new living room furniture to a full-sized dinosaur -- into your physical world.

Last month we showed you how to quickly and easily make a simple solar system in AR. But if you are ready for something more advanced, the tutorial below describes how to use Project Tango’s depth APIs to associate virtual objects with real world objects. It also shows you how to use a Tango Support Library function to find the planar surface in an environment.

So what’s our new tutorial project? We figured that since cats rule the Internet, we’d place a virtual cat in AR! The developer experience is designed to be simple -- when you tap on the screen, the app creates a virtual cat based on real-world geometry. You then use the depth camera to locate the surface you tapped on, and register (place) the cat in the right 3D position.

Bring on the cats!

Before you start, you’ll need to download the Project Tango Unity SDK. Then you can follow the steps below to create your own cats.

Step 1: Create a new Unity project and import the Tango SDK package into the project.

Step 2: Create a new scene. If you don’t know how to do this, look back at the solar system tutorial. Just like the solar system project, you’ll use the Tango Manager and Tango AR Camera in the scene and remove the default Main Camera gameobject. After doing this, your scene hierarchy should look like this:

Step 3: Build and run once, making sure the application shows the video feed from Tango’s camera.

Step 4: Enable the Depth checkbox on the Tango Manager gameobject.

Step 5: Drag and drop the Tango Point Cloud prefab to the scene from the TangoPrefab folder.

Tango Point Cloud includes a number of useful functions related to the point cloud, including finding the floor, transforming the point cloud to Unity global space, and rendering debug points. In this case, you’ll use the FindPlane function to find a plane based on the touch event.

Step 6: Create a UI Controller gameobject in the scene. To do this, click the "Create" button under the Hierarchy tab, then click "Create Empty." The UI Controller will be the hosting gameobject for the KittyUIController.cs script (which you'll create in the next step).

Step 7: Click on the UIController gameobject, then in the Inspector window click "Add Component" to add a C# script named KittyUIController.cs. KittyUIController.cs will handle the touch event, call the FindPlane function, and place your kitty into the scene.

Step 8: Double click on the KittyUIController.cs file and replace the script with the following code

using UnityEngine;
using System.Collections;

public class KittyUIController : MonoBehaviour
{
    public GameObject m_kitten;
    private TangoPointCloud m_pointCloud;

    void Start()
    {
        m_pointCloud = FindObjectOfType<TangoPointCloud>();
    }

    void Update()
    {
        if (Input.touchCount == 1)
        {
            // Trigger the place-kitten function when a single touch ends.
            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Ended)
            {
                PlaceKitten(t.position);
            }
        }
    }

    void PlaceKitten(Vector2 touchPosition)
    {
        // Find the plane.
        Camera cam = Camera.main;
        Vector3 planeCenter;
        Plane plane;
        if (!m_pointCloud.FindPlane(cam, touchPosition, out planeCenter, out plane))
        {
            Debug.Log("cannot find plane.");
            return;
        }

        // Place kitten on the surface, and make it always face the camera.
        if (Vector3.Angle(plane.normal, Vector3.up) < 30.0f)
        {
            Vector3 up = plane.normal;
            Vector3 right = Vector3.Cross(plane.normal, cam.transform.forward).normalized;
            Vector3 forward = Vector3.Cross(right, plane.normal).normalized;
            Instantiate(m_kitten, planeCenter, Quaternion.LookRotation(forward, up));
        }
        else
        {
            Debug.Log("surface is too steep for kitten to stand on.");
        }
    }
}
Notes on the code
Here are some notes on the code above:
  • m_kitten is a reference to the Kitten gameobject (we'll add the model in the following steps)
  • m_pointCloud is a reference to the TangoPointCloud script on the Tango Point Cloud gameobject. We need this reference to call the FindPlane method on it
  • We assign the m_pointCloud reference in the Start() function
  • We check the touch count and its state in the Update() function, acting when a single touch has ended
  • We invoke the PlaceKitten(Vector2 touchPosition) function to place the cat into 3D space. It queries the main camera's location (in this case, the AR camera), then calls the FindPlane function based on the camera's position and the touch position. FindPlane returns an estimated plane from the touch point, and we then place the cat on the plane if it's not too steep. As a note, the FindPlane function is provided in the Tango Support Library. You can visit TangoSDK/TangoSupport/Scripts/TangoSupport.cs to see all of its functionality.
Step 9: Put everything together by downloading the kitty.unitypackage, which includes a cat model with some simple animations. Double click on the package to import it into your project. In the project folder you will find a Kitty prefab, which you can drag and drop to the Kitten field on the KittyUIController.
Step 10:
Compile and run the application again. You should be able to tap the screen and place kittens everywhere! We hope you enjoyed this tutorial combining the joy of cats with the magic of AR. Stay tuned to this blog for more AR updates and tutorials!

A final note on this tutorial
So you’ve just created virtual cats that live in AR. That’s great, but from a coding perspective, you’ll need to follow some additional steps to make a truly performant AR application. Check out our Unity example code on Github (especially the Augmented Reality example) to learn more about building a good AR application. Also, if you need a refresher, check out this talk from I/O around building 6DOF games with Project Tango.
Categories: Programming

Learn about building for Google Maps over Coffee with Ankur Kotwal

Google Code Blog - Fri, 05/27/2016 - 21:59

Posted by Laurence Moroney, Developer Advocate

If you’ve ever used any of the Google Maps or Geo APIs, you’ll likely have watched a video, read a doc, or explored some code written by Ankur Kotwal. We sat down with him to discuss his experience with these APIs, from the past, to the present and through to the future. We discuss how to get started in building mapping apps, and how to consume many of Google’s web services that support them.

We also discuss the Santa Tracker application that Ankur was instrumental in delivering, including some fun behind-the-scenes stories about the hardest project manager he’s ever worked with!

Categories: Programming

Estimating on Agile Projects

Herding Cats - Glen Alleman - Fri, 05/27/2016 - 03:39

Here's a straightforward approach to estimating on agile projects. This is an example of the estimating profession found in many domains.

Categories: Project Management

3 Requirements For Using Storytelling To Generate The Big Picture

A story helps you see the big picture

A story helps you see the big picture

At one point in my career, I gathered requirements on an almost daily basis. I got good at interviewing people to help them discover what they wanted a project to deliver. In most cases I collected all sorts of "shalls," "musts," "wills" and an occasional "should." The organization I worked for detailed the outcome of the process in a requirements document that included technical, non-functional and functional requirements. All of the requirements toed the line defined in IEEE standards. Once we had requirements, my teams would leap into action, writing, coding and testing to their hearts' content.

Looking back, the problem was that the cocktail napkin or cost-benefit analysis that spawned this orgy of action often did not capture the nuances of the business outcome. The failure to anchor those nuances in everyone's mind meant that despite carefully crafted charters, projects were apt to wander off track. This caused all sorts of stress when they were winding down to done. One solution to this problem is to have the sponsors, stakeholders and team capture an outcome-based big picture.

Storytelling as a tool to anchor an idea is not new. If you need proof that storytelling is part of human nature, consider that some of the oldest human artifacts, the Lascaux Cave paintings, reflect the history (story) of people from approximately 15,000 B.C. Stories help us remember and they help us connect. In the workplace, the big picture acts both as an anchor for the team and as a container to shape or guide the outcome. Effective storytelling to guide work requires the right participation, proper timing, and a process.

People are an important component. Building a narrative for the outcome that a piece of work is meant to deliver is not a solitary endeavor. Identify a cross-functional team of impacted subject matter experts, product owners, product managers and team members. Keep the group to 5 - 9 people. In scenarios that involve larger groups, I typically suggest developing the story using a cascading hierarchy of smaller groups, beginning with the most visionary group and then spreading out to more tactical groups (from the senior executive level down to the department level). As with any endeavor, size and degree of difficulty are positively correlated.

Timing matters!  In order to get the most benefit out of constructing a big picture narrative for a project or piece of work, construct the story before teams begin building functionality.  For example, in Scrum, "before the development of functionality" means before sprint one.  In SAFe, the big picture narrative needs to exist before planning for a PI occurs. It should be in place before building, or at least before prioritizing, the backlog that will drive the work.  That said, developing a big picture narrative for a piece of work is a very powerful tool for generating alignment whenever it happens.

The third requirement for using storytelling in the business environment is a process.  We will explore the process in detail in the next installment however the process outline is: 

  • Identify participants and set a workshop date.  The story session is not an ad hoc meeting between a random group of participants.
  • Assign pre-work to set the context and to gather information, so that the session is not spent educating participants. When introducing a new group to generating a big picture narrative, I always have them review story structures and story uses before the session.
  • Hold the story meeting (until the next installment, visualize a cloud that says "a miracle happens here"). The story session typically includes an iterative process of information sharing, story development, and consensus generation.
  • Communicate the consensus narrative to the organization and the whole team.  Use the narrative as a tool to identify features, epics, user stories, and the effort's minimum viable product.

Storytelling in the business environment is useful.  While the concept of storytelling is not new, it is definitely chic.  It can be tempting to wing it when it comes to generating a big picture narrative, but you will end up with a better result if you spend the time and effort to identify the right people, get them to prepare and then use a standard process.


Categories: Process Management

Setting up AWS CloudFront for Magento

Agile Testing - Grig Gheorghiu - Thu, 05/26/2016 - 19:00
Here are some steps I jotted down for setting up AWS CloudFront as a CDN for the 3 asset directories that are used by Magento installations. I am assuming your Magento application servers are behind an ELB.


SSL certificate upload to AWS
Install aws command line utilities.
$ pip install awscli
Configure AWS credentials
Create an IAM user and associate it with the IAMFullAccess policy. Run 'aws configure' and specify the user's keys and the region.

Bring the SSL key, certificate and intermediate certificate into the current directory:
-rw-r--r-- 1 root root 4795 Apr 11 20:34 gd_bundle-g2-g1.crt
-rw-r--r-- 1 root root 1830 Apr 11 20:34 wildcard.mydomain.com.crt
-rw------- 1 root root 1675 Apr 11 20:34 wildcard.mydomain.com.key
Run the following script to upload the wildcard SSL certificate to be used in the production CloudFront setup:
$ cat add_ssl_cert_to_iam_for_prod_cloudfront.sh
#!/bin/bash
aws iam upload-server-certificate --server-certificate-name WILDCARD_MYDOMAIN_COM_FOR_PROD_CF --certificate-body file://wildcard.mydomain.com.crt --private-key file://wildcard.mydomain.com.key --certificate-chain file://gd_bundle-g2-g1.crt --path /cloudfront/prod/

After you upload the SSL certificate, it will be available in the drop-downs when configuring CloudFront for SSL.
Apache Cache-Control headers setup
  • Add these directives (modifying max-age accordingly) to all Apache vhosts, both for port 80 and for port 443:
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
    Header set Cache-Control "max-age=604800, public"
</FilesMatch>
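As a quick sanity check, the extension pattern in the FilesMatch directive above can be exercised in a few lines (illustrative only; Apache applies it to file names at serve time, and the sample file names are made up):

```python
import re

# Same extension pattern as in the Apache <FilesMatch> directive above.
cacheable = re.compile(r"\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$")

# Hypothetical asset paths from the three Magento directories, plus one
# path that must NOT receive a Cache-Control header (dynamic PHP).
for name in ["skin/frontend/logo.png", "js/prototype.js", "index.php"]:
    print(name, bool(cacheable.search(name)))
# skin/frontend/logo.png True
# js/prototype.js True
# index.php False
```

Only static assets match, so dynamic responses from Magento are left uncached.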
CloudFront setup
  • Origin: prod ELB (mydomain-production-lb-9321962155.us-west-2.elb.amazonaws.com)
  • Alternate domain name: cdn.mydomain.com
  • SSL certificate: ID_OF_CERTIFICATE_UPLOADED_ABOVE
  • Custom SSL client support: Only Clients that Support Server Name Indication (SNI)
  • Domain name: eg7ac9k0fa3qwc.cloudfront.net
  • Behaviors
    • /media/* /skin/* /js/*
    • Viewer protocol policy: HTTP and HTTPS
    • Allowed HTTP methods: GET, HEAD
    • Forward headers: None
    • Object caching: Use origin cache headers
    • Forward cookies: None
    • Forward query strings: Yes
    • Smooth streaming: No
    • Restrict viewer access: No
    • Compress objects automatically: No

DNS setup
  • cdn.mydomain.com is a CNAME pointing to the CloudFront domain name above (eg7ac9k0fa3qwc.cloudfront.net)

Magento setup
This depends on the version of Magento you are running (1.x or 2.x), but you want to look for the settings for the Base Skin URL, Base Media URL and Base Javascript URL, which are usually under System > Configuration > General > Web. You need to set them to point to the domain name you set up as a CNAME to CloudFront.

Base Skin URL: http://cdn.mydomain.com/skin
Base Media URL: http://cdn.mydomain.com/media
Base Javascript URL: http://cdn.mydomain.com/js
More in-depth Magento-specific instructions for integrating with CloudFront are available here.

7 Agile Practices You Can Apply in a Controlled Environment

Xebia Blog - Thu, 05/26/2016 - 11:25
So your teams want to do Agile, perhaps have even started doing so. Now your project managers run around wondering what story points are and why any number of people seem to be attributing hours to their project code. So the question is: what can you adopt easily without turning the Governance of your organisation

Android Studio 2.2 Preview - New UI Designer & Constraint Layout

Android Developers Blog - Thu, 05/26/2016 - 00:50
By Jamal Eason, Product Manager, Android

This week at Google I/O 2016 we launched the Android Studio 2.2 Preview. This release is a large update that builds on our focus to create a fast and productive integrated development environment (IDE) for Android. Developed in sync with the Android platform, Android Studio allows you to develop with the latest Android APIs and features. Since launching Android Studio at Google I/O just 3 years ago, we have received great feedback from you on the features you want most. Today, 92% of the top 125 apps & game developers on Google Play, plus millions of developers worldwide, use Android Studio. We want to keep building features that make you more efficient and productive when developing for Android.
Android Studio 2.2 Preview includes a portfolio of new features across the development spectrum, ranging from designing user interfaces to building and debugging your app in new ways. This preview includes the following new categories of features:
Design 
  • Layout Editor: A new user interface designer that helps you visually design the layouts in your app. Features like blueprint mode and the new properties panel allow you to quickly edit layouts and widgets faster.
  • Constraint Layout: A new powerful and flexible Android layout that allows you to express complex UIs without nesting multiple layouts. 
  • Layout Inspector: Debug a snapshot of your app layout running on the Android Emulator or device. Inspect the view hierarchy and corresponding attributes.

Develop
  • Firebase Plugin: Explore and integrate the suite of services offered by Firebase inside of Android Studio. Adding services like Analytics, Authentication, Notifications, and AdMob are just a few clicks away.
  • Enhanced Code Analysis: Android Studio checks the quality of your Android app code. In addition to 260 Android lint and code inspections, this release includes new code quality checks for Java 8 language usage and a new inspection infrastructure for more cross-file analysis.
  • Samples Browser: Referencing Android sample code is now even easier. Within the code editor window, find occurrences of your app code snippets in Google Android sample code to help jump start your app development.
  • Improved C++ Support: Android Studio 2.2 improves C++ development with the ability to edit, build, and debug pre-existing Android projects that use ndk-build or CMake rather than Gradle. Additionally, the existing lldb C++ debugger is now even better with project type auto-detection and a Java language aware C++ mode that lets you use a single debugger process to inspect both Java language and C++ runtimes.
  • IntelliJ 2016.1: Android Studio 2.2 includes all the latest updates from the underlying JetBrains product platforms IntelliJ.

Build
  • Jack Compiler Improvements: For those using the new Jack compiler, Android Studio 2.2 adds support for annotation processing, as well as incremental builds for reduced build times.
  • Merged Manifest Viewer: Diagnose how your AndroidManifest.xml merges with your app dependencies across your project build variants. 

Test
  • Espresso Test Recorder: Record Espresso UI tests simply by using your app as a normal user. As you click through your app UI, reusable and editable test code is then generated for you. You can run the generated tests locally, in your Continuous Integration environment, or in Firebase Test Lab.
  • APK Analyzer: Drill into your APK to help you reduce your APK size, debug 64K method limit issues, view contents of Dex files and more.


Google I/O '16: What's New in Android Development Tools

Deeper Dive into the New Features  Design
  • Layout Editor: Android Studio 2.2 features a new user interface designer. There are many enhancements but some of the highlights include: 
    • Drag-and-drop widgets from the palette to the design surface or the component tree view of your app.
    • Design surface has a blueprint mode to inspect the spacing and arrangement of your layout. 
    • Properties panel now shows a curated set of properties for quick widget edits with a full sheet of advanced properties one click away.
    • UI builder can edit menu and system preference files. 
The new Layout Editor in Android Studio 2.2 Preview
Edit Menus in the new Layout Editor
  • Constraint Layout: This new layout is a flexible layout manager for your app that allows you to create dynamic user interfaces without nesting multiple layouts. It is distributed as a support library that is tightly coupled with Android Studio and backwards compatible to API Level 9. 
At first glance, Constraint Layout is similar to RelativeLayout. However, Constraint Layout was designed to be used in Studio, and it can efficiently express your app design so that you rely on fewer layouts like LinearLayout, FrameLayout, TableLayout, or GridLayout. Lastly, with the built-in automatic constraint inference engine, you can freely design your UI to your liking and let Android Studio do the hard work.

To help you get started, the built-in templates in the New Project Wizard in Android Studio 2.2 Preview now generate a Constraint Layout. Alternatively, you can right-click on any layout in the new Layout Editor and select the Convert to ConstraintLayout option.

This is an early preview of the UI designer and Constraint Layout, and we will rapidly add enhancements in upcoming releases. Learn more on the Android Studio tools site.


    Constraint Layout

    Start Layout Inspector
    • Layout Inspector: For new and existing layouts, many times you may want to debug your app UI to determine if your layout is rendering as expected. With the new Layout Inspector, you can drill into the view hierarchy of your app and analyze the attributes of each component of UI on the screen. 
    To use the tool, just click on Layout Inspector Icon in the Android Monitor Window, and then Android Studio creates a snapshot of the current view hierarchy of your app for you to inspect.
    Layout Inspector
    Develop
    • Firebase Plugin: Firebase is the new suite of developer services that can help you develop high-quality apps, grow your user base, and earn more money. Inside Android Studio, you can add Firebase to a new or existing Android app with the new Assistant window. To access the Firebase features, click on the Tools menu and then select Firebase. You will want to first set up the brand new Firebase Analytics as the foundation as you explore other Firebase services, like Firebase Cloud Messaging or Firebase Crash Reporting, to add to your application. Learn more about the Firebase integration inside Android Studio here.


    Firebase Plugin for Android Studio
    • Code Sample Browser: In addition to importing Android Samples, the Code Sample Browser is a menu option inside Android Studio 2.2 Preview that allows you to find high-quality, Google-provided Android code samples based on the currently highlighted symbol in your project. To use the feature, highlight a variable, type, or method in your code, then right-click to show a context menu and select Find Sample Code. The results are displayed in an output box at the bottom.
    Code Sample Browser

    Build
    • CMake and NDK-Build: For those of you using the Android NDK, Android Studio now supports building CMake and NDK-Build Android app projects by pointing Gradle at your existing build files. Once you've added your cmake or ndk-build project to Gradle, Android Studio will automatically open your relevant Android code files for editing and debugging in Studio.

    For CMake users, just add the path to your CMakeLists.txt file in the externalNativeBuild section of your Gradle file: CMake Build in Android Studio
    For NDK-Build users, just add the path to your *.mk file in the externalNativeBuild section of your Gradle file: NDK-Build in Android Studio
    • Improved Jack Tools: The new Jack Toolchain compiles your Java language source into Android dex bytecode. The Jack compiler allows some Java 8 language features, like lambdas, to be used on all versions of Android. This release adds incremental build and full support for annotation processing, so you can explore using Java 8 language features in your existing projects.
    To use incremental build with Jack add the following to your build.gradle file:

    Enable Jack Incremental Compile Option
    Jack will automatically apply annotations processors in your classpath. To use an annotation processor at compile-time without bundling it in your apk, use the new annotationProcessor dependency scope:
    Enable Jack Annotation Processing
    • Merged Manifest Viewer: Figuring out how your AndroidManifest merges with your project dependencies based on build types, flavors and variants is now easier with Android Studio. Navigate to your AndroidManifest.xml and click on the new Merged Manifest bottom tab. Explore how each node of your AndroidManifest resolves with various project dependencies.  
    Merged Manifest Viewer

    Test

    • Espresso Test Recorder: Sometimes writing UI tests can be tedious. With the Record Espresso UI tests feature, creating tests is now as easy as just using your app. Android Studio will capture all your UI interactions and convert them into a fully reusable Espresso test that you can run locally or even on Firebase Test Lab. To use the recorder, go to the Run menu and select Record Espresso Test.

    Espresso Test Recorder
    • APK Analyzer: The new APK Analyzer helps you understand the contents and the sizes of different components in your APK. You can also use it to avoid 64K referenced method limit issues with your Dex files, diagnose ProGuard configuration issues, view merged AndroidManifest.xml file, and inspect the compiled resources file (resources.arsc). This can help you reduce your APK size and ensure your APK contains exactly the things you expect.
    The APK Analyzer shows you both the raw file size as well as the download size of various components in your APK. The download size is the estimated size users need to download when the APK is served from Google Play. This information should help you prioritize where to focus in your size reduction efforts.
    To use this new feature, click on the Build menu and select Analyze APK… Then, select any APK that you want to analyze.

    APK Analyzer
    • Java-aware C++ Debugger:  When debugging C++ code on targets running N and above, you can now use a single, Java language aware lldb instance. This debugger continues to support great lldb features like fast steps and memory watchpoints while also allowing you to stop on Java language breakpoints and view your Java language memory contents.

    • Auto Debugger Selection: Android Studio apps can now use the debugger type "Auto." This will automatically enable the appropriate debugger -- the Java language aware C++ debugger if enabled, and otherwise the hybrid debugger for C++ projects.  Projects exclusively using the Java language will continue to use the Java language debugger.

    Enable Auto Debugger for C++

    What's Next

    Download

    If you are using a previous version of Android Studio, you can check for updates on the Canary channel from the navigation menu (Help → Check for Update [Windows/Linux], Android Studio → Check for Updates [OS X]). This update will download a new version, and not patch your existing copy of Android Studio. You can also download the Android Studio 2.2 Preview from the canary release site.

    For the Android Studio 2.2 Preview, we recommend you run a stable version alongside the new canary. Check out the tools site on how to run two versions at the same time.

    We appreciate any feedback on things you like, issues, or features you would like to see. Connect with us -- the Android Studio development team -- on our Google+ page or on Twitter.
    Categories: Programming

    How to Make a Decision in the Presence of Uncertainty

    Herding Cats - Glen Alleman - Wed, 05/25/2016 - 23:28

    Uncertainty is all around us. In the project world, uncertainty comes in two forms:

    1. Aleatory Uncertainty - the naturally occurring variances due to the underlying statistical processes of the project. These can be schedule variances, cost variances, and technical variances - all driven by a stochastic process with a known or unknown statistical distribution. If you don't know what the distribution is, the Triangle Distribution is a good place to start. For example: the statistical process of testing our code ranges from 2 to 4 days for a full cyber security scan. Planning on a specific duration has to consider this range and provide the needed margin. Aleatory uncertainty is irreducible. Only margin can protect the project from this uncertainty.
    2. Epistemic Uncertainty - the probability that something will happen in the future. The something we're interested in is usually unfavorable. For example: the probability that the server capacity we have selected will not meet the demands of the users when we go live. Epistemic uncertainty, being probabilistic, can be addressed with redundancy, extra capacity, experiments, surge capacity, and other direct actions to buy down the risk that results from this uncertainty before the risk turns into an issue.
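    The aleatory case above can be sketched with a small Monte Carlo simulation. This is an illustrative sketch, not part of the original post: it assumes the 2-to-4-day scan duration follows a Triangle Distribution with a (hypothetical) mode of 3 days, and sizes the schedule margin at an arbitrary 85th-percentile confidence level.

    ```python
    import random

    # Aleatory uncertainty sketch: scan duration ranges from 2 to 4 days.
    # Assume a triangular distribution with mode 3 (hypothetical choice).
    random.seed(42)
    durations = sorted(random.triangular(2.0, 4.0, 3.0) for _ in range(10_000))

    # Margin = planned (85th percentile) duration minus the mean duration.
    mean = sum(durations) / len(durations)
    p85 = durations[int(0.85 * len(durations))]
    margin = p85 - mean

    print(f"mean duration:        {mean:.2f} days")
    print(f"85th percentile:      {p85:.2f} days")
    print(f"margin above mean:    {margin:.2f} days")
    ```

    The point is the post's: you cannot remove the variance, you can only plan enough margin to absorb it at a chosen confidence level.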

    When we hear that you can make decisions without estimates, this is simply not possible if you accept the fundamental principle that uncertainty is present on all projects. If there were no uncertainty - no aleatory or epistemic uncertainties - then no probabilistic or statistical processes would drive the project's outcomes. In that case, decisions have no probabilistic or statistical impact, and whatever decision you make with the information you have is deterministic.
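    A minimal sketch of why an estimate is needed for an epistemic decision, using the server-capacity example from above. All numbers here are invented for illustration; the key point is that choosing between options requires an estimate of the probability of overload and of its cost.

    ```python
    # Hypothetical decision under epistemic uncertainty: pick a server size.
    # Every number below is an assumption made up for this sketch.

    def expected_cost(base_cost, p_overload, overload_cost):
        """Expected total cost = purchase cost + probability-weighted overload cost."""
        return base_cost + p_overload * overload_cost

    small = expected_cost(base_cost=10_000, p_overload=0.30, overload_cost=50_000)
    large = expected_cost(base_cost=18_000, p_overload=0.05, overload_cost=50_000)

    best = "large" if large < small else "small"
    print(f"small server expected cost: ${small:,.0f}")  # $25,000
    print(f"large server expected cost: ${large:,.0f}")  # $20,500
    print(f"choose the {best} server")
    ```

    Without the estimates of `p_overload` and `overload_cost` there is nothing to compare, which is the post's argument in miniature.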

    So if you want to learn how and why estimating is needed to make decisions in the presence of uncertainty, start here:

    And then, when you hear a conjecture that decisions can be made without estimating, you'll know better - and you can consider anyone making that conjecture uninformed about how probabilistic and stochastic processes in the project world actually work, especially when spending other people's money.


    Categories: Project Management

    Successes and Failures are Illusions

    NOOP.NL - Jurgen Appelo - Wed, 05/25/2016 - 09:19
    Successes and Failures.jpg

    In my talks around the world, I emphasize the need to run management experiments and I offer examples of interesting ideas that worked well for my team. Of course, with so many events per year, it was inevitable that someone would ask me, “What is your least successful experiment?”

    I had to think about that for a moment and I had difficulty coming up with examples. That was strange, I thought. According to information theory, we learn most when roughly half of our experiments fail. When I’m able to name a good number of ideas that work, and I’m not able to list ideas that don’t work, does that mean that my learning process is suboptimal? That would be a reason for concern!
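    The information-theory claim above - that we learn most when roughly half of our experiments fail - can be checked with a one-liner. This sketch (not from the original post) computes the Shannon entropy of a binary experiment, which peaks at exactly 1 bit when success and failure are equally likely.

    ```python
    import math

    def entropy(p):
        """Shannon entropy of a Bernoulli(p) outcome, in bits."""
        if p in (0.0, 1.0):
            return 0.0  # a certain outcome carries no information
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Expected information per experiment, by probability of success:
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"P(success) = {p:.1f} -> {entropy(p):.3f} bits per experiment")
    ```

    Experiments that almost always succeed (or almost always fail) carry little information, which is exactly why an all-successes track record would be a reason for concern.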

    When I thought about it, I realized that, at least for me, success and failure are temporary statuses and I perceive them both with an optimistic mind. I have plenty of ideas that work for now, and I have a lot of ideas that don’t work yet. This means that, when you ask me about a successful experiment, I will happily share with you something that is successful now, knowing quite well that it may turn into a failure later. Likewise, when someone asks me about a failure, I have difficulty producing examples because I’m not considering the ideas that aren’t working yet as failures. They often just need more time, adaptation, and customization to make them work.

    In other words, for my long-term optimistic brain, half of the experiments succeed and the other half will succeed later. I’m sure there are people with a negative mindset who would turn it all around: Half of the experiments fail and the other half will fail tomorrow. (My short-term pessimistic brain often works like that: “Nothing works, and even if something works, it breaks when I start using it.”)

    Successes and failures are convenient illusions. They are judgement calls of the human mind, subjective evaluations of the consequences of our actions. Outcomes can be observed by anyone but successes and failures exist only in the eyes of the beholder.

    Photo: (C) 2010 Paul Keller, Creative Commons 2.0

    My new book Managing for Happiness is available from June 2016. PRE-ORDER NOW!

    Managing for Happiness cover (front)

    The post Successes and Failures are Illusions appeared first on NOOP.NL.

    Categories: Project Management