Software Development Blogs: Programming, Software Testing, Agile, Project Management


Feed aggregator

Experience virtual reality art in your browser

Google Code Blog - Tue, 04/19/2016 - 19:55

Posted by Jeff Nusz, Data Arts Team, Pixel Painter

Two weeks ago, we introduced Tilt Brush, a new app that enables artists to use virtual reality to paint the 3D space around them. Part virtual reality, part physical reality, it can be difficult to describe how it feels without trying it firsthand. Today, we bring you a little closer to the experience of painting with Tilt Brush using the powers of the web in a new Chrome Experiment titled Virtual Art Sessions.

Virtual Art Sessions lets you observe six world-renowned artists as they develop blank canvases into beautiful works of art using Tilt Brush. Each session can be explored from start to finish from any angle, including the artist’s perspective – all viewable right from the browser.

Participating artists include illustrator Christoph Niemann, fashion illustrator Katie Rodgers, sculptor Andrea Blasich, installation artist Seung Yul Oh, automotive concept designer Harald Belker, and street artist duo Sheryo & Yok. The artists’ unique approaches to this new medium become apparent when seeing them work inside their Tilt Brush creations. Watch this behind-the-scenes video to hear what the artists had to say about their experience:


Virtual Art Sessions makes use of Google Chrome’s V8 JavaScript engine for high-performance processing power to render large volumes of data in real time. This includes point cloud data of the artist’s physical form, 3D geometry data of the artwork, and position data of the VR controllers. It also relies on Chrome’s support of WebM video and WebGL to produce the 360° representations of the artists and artwork – the artist portrayals alone require the browser to draw over 200,000 points at 30 times a second. For a deeper look, read the technical case study or browse the project code, which is available as open source from the site’s tech page.

We hope this experiment provides a window into the world of painting in virtual reality using Tilt Brush. We are excited by this new medium and hope the experience leaves you feeling the same. Visit g.co/VirtualArtSessions to start exploring.

Categories: Programming

Sprint Planning for Agile Teams That Have Lots of Interruptions

Mike Cohn's Blog - Tue, 04/19/2016 - 15:00

Many teams have at least a moderate ability to plan and control their time. They're able to say, "We will work on these things over the coming sprint," and have a somewhat reasonable expectation of that being the case.

And that's the type of team we encounter in much of the Scrum literature--the literature that says to plan a sprint and keep change out.

But what should teams do when change cannot be kept out of a sprint?

In this post, I want to address this topic for two different types of teams:

  • A team that has occasional, but not excessive, interruptions
  • A team that is highly interrupt-driven
Planning with a Moderate Margin of Safety

Many teams will benefit from including a moderate amount of safety into each sprint. Basically, these teams should not assume they can keep all changes out of the sprint. For example, a team might want to leave room when planning a sprint for things like:

  • Fixing critical operational issues, such as a server going down
  • Fixing high-severity bugs
  • Doing first- or second-level tech support

There could be many other similar examples. Consider your own environment. You want to try to set a high threshold for what constitutes a worthy interruption to a sprint. Teams really do best when they have large blocks of dedicated time that will not be interrupted.

To accommodate work like this, all a team needs to do is leave a bit of buffer when planning the sprint. Let’s see how that works.

The Three Things That Must Go into Each Sprint

I think of a sprint as containing three things: corporate overhead, plannable time, and unplanned time. I think of this graphically as shown in Figure 1.

Figure 1: The three components of a sprint (corporate overhead, plannable time, and unplanned time).

Corporate overhead is that time that goes towards things like all-company meetings, answering emails from your past project, attending the HR sensitivity training and so on. Some of these activities may be necessary, but in many organizations, they consume a lot of time.

I put Scrum meetings (planning, daily scrum, etc.) in the corporate overhead category as well.

Plannable time is the second thing that goes into a sprint. This is the time that belongs collectively to the team.

But the team does not want to fill the entire remainder of the sprint with plannable time. The team needs to acknowledge the need to leave room for some amount of unplanned time.

Unplanned time goes towards three things:

  • Emergencies
  • Tasks that turn out to be bigger than the team thought
  • Tasks that no one thinks of during the sprint planning meeting
The Appropriate Percentages

I’m frequently asked what percentages teams should use for each of the three categories. I can’t answer that. But I can tell you how to figure it out:

After each sprint, consider how well the unplanned time the team allocated matched the unplanned time the team needed for the sprint. And then adjust up or down a bit for the next sprint. This is not something a team can ever get perfect.

Instead, it’s a game of averages. The team needs to save the right amount of time for unplanned tasks on average. Then some sprints will have more unplanned tasks occur and some sprints will have fewer.

When fewer occur, the team should get ahead on their work, so they’re prepared for when more unplanned tasks occur.
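As a back-of-the-envelope sketch of that bookkeeping (the hours, function names, and adjustment step below are illustrative assumptions, not figures from this post):

```python
# Sketch of the sprint-composition bookkeeping described above.
# All numbers are illustrative assumptions, not figures from the post.

def plannable_hours(capacity, overhead, unplanned_buffer):
    """Split sprint capacity into the three buckets: what remains after
    corporate overhead and the unplanned-time buffer is plannable."""
    return capacity - overhead - unplanned_buffer

def adjust_buffer(current_buffer, unplanned_used_history, step=2):
    """Nudge next sprint's unplanned-time buffer toward the average amount
    of unplanned work actually absorbed in recent sprints."""
    avg_used = sum(unplanned_used_history) / len(unplanned_used_history)
    if avg_used > current_buffer:
        return current_buffer + step
    if avg_used < current_buffer:
        return current_buffer - step
    return current_buffer

# A 2-week sprint for one developer: 80 h capacity, 12 h overhead
# (meetings, email, Scrum ceremonies), 16 h reserved for the unexpected.
print(plannable_hours(80, 12, 16))      # 52 hours of plannable time
print(adjust_buffer(16, [20, 22, 18]))  # buffer nudged up to 18
```

The point is the feedback loop, not the numbers: compare the buffer you reserved against the unplanned work the team actually absorbed, and nudge next sprint’s buffer toward that average.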

What to Do When a Team Is Highly Interrupt-Driven

The preceding advice works well for the majority of agile teams--those that are only interrupted a moderate amount. Some teams, however, are highly interrupt-driven.

Again, I want to resist putting actual percentages on the regions in Figure 1, but I’m describing a situation in which the area of “unplanned time” becomes much larger than shown.

I actually want to talk about the cases in which unplanned time becomes the largest of the three areas. Such teams are highly interrupt-driven.

These teams still want to include space in their sprints for unplanned time. But there are usually a few other things you may want to consider if you are on a highly interrupt-driven team.

First, you may want to adjust your sprint length. One option is to go with a long sprint length. Increasing sprint length has the benefit of making the rate of interruption more predictable because the variance will not be so great from sprint to sprint.

To see how that works, imagine you chose a one-year sprint. (Don’t do that!) It’s easy to imagine that, with such a long sprint, the fluctuations a team faces with short sprints would wash out. Sure, this year (this sprint) might have more interruptions than last year (last sprint), but it’s such a long period that the team has time to recover from any excessive fluctuations.

The other option is to go with short, one-week sprints and just live with the unpredictability. The team will be less able to assure bosses “we will be done with this” by a given period, but I find that to be a worthwhile tradeoff.
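To make the variance argument concrete, here is a small simulation sketch (the weekly interruption model, counts, and seed are illustrative assumptions, not data from the post). Summing interruptions over more weeks shrinks the relative spread from sprint to sprint:

```python
# Sketch: why longer sprints make the interruption rate more predictable.
# A sprint's interruption load is the sum of one random weekly draw per week;
# longer sprints average the week-to-week noise out, so the relative spread
# (coefficient of variation) shrinks.
import random
import statistics

def interruption_cv(weeks_per_sprint, n_sprints=500, mean_per_week=5, seed=42):
    rng = random.Random(seed)
    loads = []
    for _ in range(n_sprints):
        # Crude weekly interruption counts: uniform draw around the mean.
        load = sum(rng.randint(0, 2 * mean_per_week) for _ in range(weeks_per_sprint))
        loads.append(load)
    return statistics.stdev(loads) / statistics.mean(loads)

cv_one_week = interruption_cv(weeks_per_sprint=1)
cv_four_weeks = interruption_cv(weeks_per_sprint=4)
print(cv_one_week > cv_four_weeks)  # True: longer sprints vary less, relatively
```

This is the same tradeoff described above: the one-year "sprint" is just the extreme point on this curve, and one-week sprints are the other.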

Second, a highly interrupt-driven team should make sprint planning a very lightweight activity.

Sprint planning should be a quick effort to grab a few things the team thinks it can do in the coming week, and that’s that. It should be a very minimal effort--15 or 30 minutes for many teams.

To illustrate this, think about planning a party, and imagine a spectrum with planning a wedding reception on one end. That’s some serious party planning. At the other end of the spectrum is inviting some friends over tonight to watch the big game on TV. To plan that, I’m going to check the fridge for beer and order a pizza. That’s a different level of party planning.

Sprint planning for a highly interrupt-driven team should be much more like the latter--quick, easy and just enough to be successful.

What Do You Do?

How do you handle interruptions on your agile team? Please share your thoughts in the comments below.

R: substr – Getting a vector of positions

Mark Needham - Mon, 04/18/2016 - 20:49

I recently found myself writing an R script to extract parts of a string based on a beginning and end index which is reasonably easy using the substr function:

> substr("mark loves graphs", 0, 4)
[1] "mark"

But what if we have a vector of start and end positions?

> substr("mark loves graphs", c(0, 6), c(4, 10))
[1] "mark"

Hmmm that didn’t work as I expected! It turns out we actually need to use the substring function instead which wasn’t initially obvious to me on reading the documentation:

> substring("mark loves graphs", c(0, 6, 12), c(4, 10, 17))
[1] "mark"   "loves"  "graphs"

Easy when you know how!

Categories: Programming


You Can Learn to Experiment

Xebia Blog - Mon, 04/18/2016 - 17:04

Validated learning: learning by carrying out an initial idea and then measuring the results. This way of experimenting is the primary philosophy behind Lean Startup and much of the Agile mindset as it is applied today.

In agile organizations you have to experiment in order to keep up with changing market needs. A good experiment can be incredibly valuable, provided it is executed well. And here lies a common problem: the experiment is not properly completed. A trial is run, but often without a good hypothesis behind it, and the lessons are barely, if at all, carried forward. I have noticed that, to raise the learning capacity of an organization, it helps to stick to a fixed structure for experiments.

There are many structures that work well. Personally I am very fond of Toyota (or Kanban) Kata, but the “plain” Plan-Do-Check-Act also works very well. That structure is explained below with a simple example:

  1. Hypothesis

Which problem are you going to solve? And how do you intend to do that?

If the whole team dials in from home for the stand-up, we become no less effective than when everyone is present, and we can handle work-from-home days better.

  2. Prediction of the outcomes

What do you expect the outcomes to be? What will you see?

No lower productivity, and a higher score on team happiness because people can work from home.

  3. Experiment

How will you test whether you can solve the problem? Is this experiment also safe to fail?

For the next six weeks everyone dials in from home for the stand-up. We score productivity and happiness in the retrospective. After that, we do the stand-up together at the office for six weeks.

  4. Observations

Collect as much data as possible during your experiment. What do you see happening?

Setting up the call takes a long time (10-15 minutes). It is hard to let everyone have their say. When dialing in we cannot use the regular board, because nobody can move the post-its.

A well designed experiment is as likely to fail as it is to succeed – freely adapted from Don Reinertsen

This is surely not the best experiment that could be formulated. But that is not the point. What matters is that the learning process arises from the differences between the prediction and the observations. It is therefore important to do both of these steps, and to consciously reflect on the learning that follows. Based on your observations you can formulate a new experiment for further improvements.

How do you run your experiments? I am curious to hear what works well in your organization.
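As an illustration only (this sketch and its field names are hypothetical, not part of the original post), the four-step structure above could be captured as a small record, so that the prediction and the observations are always written down side by side:

```python
# A minimal record for the experiment structure described above
# (hypothesis, prediction, experiment design, observations).
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str   # which problem, and how you intend to solve it
    prediction: str   # what outcomes you expect to see
    design: str       # how you will test it, and whether it is safe to fail
    observations: list = field(default_factory=list)  # data gathered while running

    def learnings(self):
        """Learning lives in the gap between prediction and observations."""
        return {"predicted": self.prediction, "observed": self.observations}

remote_standup = Experiment(
    hypothesis="Dialing in from home does not make the stand-up less effective",
    prediction="No lower productivity, higher team happiness",
    design="Six weeks remote stand-ups, then six weeks at the office; score both",
)
remote_standup.observations.append("Call setup takes 10-15 minutes")
print(remote_standup.learnings()["observed"])  # ['Call setup takes 10-15 minutes']
```

The template forces the two steps that are most often skipped: writing the prediction down before the experiment, and comparing it with the observations afterwards.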

 

Hadoop and Salesforce Integration: the Ultimate Successful Database Merger

How can we transfer Salesforce data to Hadoop? It is a big challenge for everyday users. What are the different features of the available data transfer tools?

Categories: Architecture

30 Problems That Affect Software Projects

Herding Cats - Glen Alleman - Mon, 04/18/2016 - 15:10

From Estimating Software Costs: Bringing Realism To Estimating, 2nd Edition.

  1. Initial requirements are seldom more than 50 percent complete
  2. Requirements grow at about 2 percent per calendar month during development
  3. About 20 percent of initial requirements are delayed until a second release
  4. Finding and fixing bugs is the most expensive software activity
  5. Creating paper documents is the second most expensive software activity
  6. Coding is the third most expensive software activity
  7. Meetings and discussion are the fourth most expensive activity
  8. Most forms of testing are less than 30 percent efficient in finding bugs
  9. Most forms of testing touch less than 50 percent of the code being tested
  10. There are more defects in requirements and design than in source code
  11. There are more defects in test cases than in the software itself
  12. Defects in requirements, design and code average 5.0 per function point
  13. Total defect-removal efficiency before release averages only about 80 percent
  14. About 15 percent of software defects are delivered to customers
  15. Delivered defects are expensive and cause customer dissatisfaction
  16. About 5 percent of modules in applications will contain 50 percent of all defects
  17. About 7 percent of all defect repairs will accidentally inject new defects
  18. Software reuse is only effective for materials that approach zero defects
  19. About 5 percent of software outsource contracts end up in litigation
  20. About 35 percent of projects greater than 10,000 function points will be canceled
  21. About 50 percent of projects greater than 110,000 function points will be one year late
  22. The failure mode for most cost estimates is to be excessively optimistic
  23. Productivity rates in the U.S. are about 10 function points per staff month
  24. Assignment scopes for development are about 150 function points
  25. Assignment scopes for maintenance are about 750 function points
  26. Development costs about $1,200 per function point in the U.S.
  27. Maintenance costs about $150 per function point per calendar year
  28. After delivery applications grow at about 7 percent per calendar year during use
  29. Average defect repair rates are about ten bugs or defects per month
  30. Programmers need about ten days of annual training to stay current

Agile addresses 1, 2, 3, 4, 5, and 6 well. 

So if these are the causes of project difficulties (and there may be others since this publication), what are the fixes?

 

Categories: Project Management

6 Red Flags That You Need To Start Cutting Your Losses

Making the Complex Simple - John Sonmez - Mon, 04/18/2016 - 15:00

There’s a stigma in our society about quitting that causes us to cling on to projects long after they should be let go. Quitters never win. Quitting lasts forever. Champions never quit. You’re never a loser till you quit trying. No one wants to lose. No one wants to be a loser. Well, at least […]

The post 6 Red Flags That You Need To Start Cutting Your Losses appeared first on Simple Programmer.

Categories: Programming

What Is Management in the Context of Agile

Herding Cats - Glen Alleman - Sun, 04/17/2016 - 23:07

There's a meme going around in some parts of agile that management is inhumane. This is an extreme view of course, likely informed by anecdotal experience with Bad Management or, worse, a lack of actual management experience.

Managing in the presence of Agile is not the same as managing in traditional domains. The platitude is Stewardship, but that offers few actionable outcomes needed to move the work efforts along toward getting value delivered to customers in exchange for money and time.

One view of management in Agile can be informed by Governance of the work efforts. Here's a version of Governance, from "Agile Governance Theory: conceptual development," Alexandre J. H. de O. Luna, Philippe Kruchten, and Hermano Moura.


 

Related articles:
  • Agile Software Development in the DOD
  • Deadlines Always Matter
  • Risk Management is How Adults Manage Projects
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

SPaMCAST 390 – Vinay Patankar, Agile Value and Lean Start-ups

Software Process and Measurement Cast - Sun, 04/17/2016 - 22:00

http://www.spamcast.net

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 390 features our interview with Vinay Patankar.  We discussed his start up Process Street and the path Vinay and his partner took in order to embrace agile because it delivered value, not just because it was cool.  We also discussed how Agile fits or helps in a lean start-up and the lessons Vinay wants to pass on to others.

Vinay’s Bio:

Vinay Patankar is the co-founder and CEO of Process Street, the simplest way to manage your team’s recurring processes and workflows. Easily set up new clients, onboard employees and manage content publishing with Process Street.

Process Street is a venture-backed SaaS company and AngelPad alum with numerous Fortune 500 clients.

When not running Process Street, Vinay loves to travel and spent 4 years as a digital nomad roaming the globe running different internet businesses. He enjoys food, fitness and talking shop.

Twitter: @vinayp10

Re-Read Saturday News

We continue the re-read of Commitment – Novel About Managing Project Risk by Maassen, Matts, and Geary.  Buy your copy today and read along (use the link to support the podcast). This week we tackle Chapter Three, which explores visualization, knowledge options and focusing on outcomes. Visit the Software Process and Measurement Blog to catch up on past installments of Re-Read Saturday.

Upcoming Events

I will be at the QAI Quest 2016 in Chicago beginning April 18th through April 22nd.  I will be teaching a full day class on Agile Estimation on April 18 and presenting Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing! on Wednesday, April 20th.  Register now!

I will be speaking at the CMMI Institute’s Capability Counts 2016 Conference in Annapolis, Maryland, May 10th and 11th. Register Now!

Next SPaMCAST

The next three weeks will feature mix tapes with the “if you could fix two things” questions from the top downloads of 2007/08, 2009 and 2010.  I will be doing a bit of vacationing and all the while researching, writing content and editing new interviews for the sprint to episode 400 and beyond.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Re-Read Saturday: Commitment – Novel about Managing Project Risk, Part 2



This week we continue the re-read of Commitment – Novel about Managing Project Risk by Olav Maassen, Chris Matts and Chris Geary (2nd edition, 2016) with chapter 3.  Chapter 3 introduces a number of Agile and lean techniques.  Rose follows the path of the hero as she crosses the threshold into a new world, signifying her commitment to the journey, and she begins to identify allies.

As a reminder, Chapter 2 ended with the management team telling Rose that the plan she presented was no different from the one David had presented.  Management fired David after he presented his plan to complete the project. The management committee is ready to shut down the project and terminate the lot.  Chapter 3 begins with Rose’s cry for help to Lily. Lily listens and convinces Rose to ask for help from the mentors. She finally calls the number on the business card Lily gave her in chapter 1 to set up a meeting with a mentor. As Rose is leaving the office (with everyone laboring into the evening) to go to the meeting with a mentor at the Cantina of Learning, no one even says goodbye. Rose is a pariah even to her own team.

There’s a call out in one of Lily’s blogs that describes knowledge options and relationships during the transition from Rose leaving work to arriving at the Cantina of Learning. Knowledge options are where you know just enough about a subject to know how to apply it and how long it will take to get up to speed on the topic when needed. With this knowledge, a practitioner can assess a situation, assess his or her options and then commit based on context. There’s an old saying: “if the only tool you own is a hammer, then everything looks like a nail.” That is an example of a practitioner not having (or not understanding that there are) knowledge options. Having knowledge options allows individuals and teams to pivot at the last possible moment, when the option has the highest value. Knowledge options help you make decisions with the best possible knowledge, which will improve the outcome of the decision. Knowledge options are an acknowledgment that being consciously incompetent has value if you know how long it takes to move from consciously incompetent to consciously competent. For example, if a new piece of functionality required learning Ruby, and it would take two months of study to become proficient, you would be able to decide when the piece of work could be done based on your capability in Ruby.

The blog entry goes on further to provide advice on finding the right mentor. The advice is to find an author that has published a good book on the subject and then identify the community that generally exists around the author’s work. The practitioners that are active in the community often have the time and experience to act as mentors. I use a variant of this process to find and engage people for Software Process and Measurement Cast interviews.

Once Rose is ensconced in the Cantina of Learning, the journey to redemption begins. Rose meets Jon, who tells Rose about work visualization and stand-ups.  Visualization helps the team know where they are and how they can coordinate their own activities. The process of visualizing the flow of work identifies all the work in process (WIP).  A side benefit of identifying WIP is that you can see when a person is working on more than one piece of work at a time.  Working on more than one thing at a time is multitasking. Multitasking actually hides work sitting in queue (waiting).  Work that is sitting slows the process of delivery. Rose’s role is to help unblock tasks and to find and break dependencies between tasks.  Dependencies are blockages waiting to happen. Visualization is perhaps one of the most basic techniques in Agile and lean.  When work is visualized, teams can self-organize and self-manage.  The value of visualizing work is huge; however, the urgency around the concept has waned as tools have replaced card walls and paper Kanban boards, which reduces the effectiveness of the technique.

The chapter’s action culminates with Jon introducing Rose to Liz.  Liz builds on visualization by reinforcing the ideas behind real options.  Liz points out that each option has three possible solutions.  Using options as the decision process is a tool to manage uncertainty. The three possible solutions are:

  1. Postpone the decision and collect more information.
  2. Choose the option that’s easiest to change.
  3. Invest in a different approach that allows change to be easier.

There is no single best solution.  Projects use all three solutions to maximize value based on the context.  Visualization provides a platform for the team to know where work is in the development life cycle.  The use of options to generate and evaluate alternate paths for a decision reduces uncertainty. The combination of these tools gives the project the power to change direction quickly to maximize the value it can deliver. It is important that decisions are made consciously, which means examining each decision with an eye to which option you intend to exercise.

In order to use the new tools, Rose realizes she needs a coach.  Liz points to a person she knows, who happens to be walking by the table, as the type of person Rose should hire.  It turns out to be Gary, one of the leads currently on her team. Gary agrees to act as Rose’s real options coach.

Chapter 3 ends with Rose writing to Susan and a blog entry from Liz.  The letter introduces the concept of technical debt using the metaphor of a group of roommates who let dishes build up in a dirty kitchen.  After a time, it becomes so hard to clean the kitchen that no one even makes the effort.  Paying down technical debt (refactoring) is important to increase efficiency and make it easier to respond quickly to change.  Liz’s blog discusses the value of change and reprioritization of the backlog by the customer (or product owner).  The ability to periodically change and reprioritize the backlog is a form of an option. Agile and lean techniques provide a set of tools so the team works on the top priorities, delivers functionality that generates feedback, and allows the customer to exercise his/her options by reprioritizing based on functional code.


Categories: Process Management

Re-Read Saturday: Commitment – Novel about Managing Project Risk, Part 2

Picture of the book cover


This week we continue the read of Commitment – Novel about Managing Project Risk by Olav Maassen, Chris Matts and Chris Geary (2nd edition, 2016) with chapter 3.  Chapter 3 introduces a number of Agile and lean techniques.  Susan follows the path of the hero as she crosses the threshold into a new world, signifying her commitment to the journey, and she begins to identify allies.

As a reminder, Chapter 2 ended with the management team telling Rose that the plan she presented was no different from the one David had presented.  Management fired David after presenting his plan to complete the project. The management committee is ready to shut down the project and terminate the lot.  Chapter 3 begins with Rose’s cry for help to Lily. Lily listens and convinces Rose to ask for help from the mentors. She finally calls the number on the business card Lily gave her in chapter 1 to set up a meeting with a mentor. As Rose is leaving the office (with everyone laboring into the evening) to go to the meeting with a mentor at the Cantina of Learning, no one even says goodbye. Rose is a pariah even to her own team.

There’s a call out in one of Lily’s blogs that describes the knowledge options and relationships as the transition from Rose’s leaving work and getting to the Cantina of Learning. Knowledge options are where you know just enough about a subject to know how to apply the subject and how long it will take to get up to speed on the topic when needed. With this knowledge, a practitioner can assess a situation, assess his or her options and then commit based on context. There’s an old saying “if the only tool you own is a hammer, then everything looks like a nail.” That is an example of a practitioner not having (or understanding that there are) knowledge options. Having knowledge options allows individuals and teams to pivot at the last possible moment when the option has the highest value. Knowledge options help you make decisions with the best possible knowledge, which will improve the outcome of the decision. Knowledge options are an acknowledgment that being consciously incompetent has value if you know how long it takes to move from consciously incompetent to consciously competent. For example, if a new piece of functionality required learning Ruby and that it would take two months of study to become proficient you would be able to decide when the piece of work could be done based on your capability in Ruby.   

The blog entry goes on to provide advice on finding the right mentor. The advice is to find an author who has published a good book on the subject and then identify the community that generally exists around the author’s work. The practitioners who are active in that community often have the time and experience to act as mentors. I use a variant of this process to find and engage people for Software Process and Measurement Cast interviews.

Once Rose is ensconced in the Cantina of Learning, the journey to redemption begins. Rose meets Jon, who tells her about work visualization and stand-ups. Visualization helps the team know where they are and how they can coordinate their own activities. The process of visualizing the flow of work identifies all the work in process (WIP). A side benefit of identifying WIP is that you can see when a person is working on more than one piece of work at a time. Working on more than one thing at a time is multitasking, and multitasking actually hides work sitting in queue (waiting). Work that sits waiting slows the process of delivery. Rose’s role is to help unblock tasks and to find and break dependencies between tasks. Dependencies are blockages waiting to happen. Visualization is perhaps one of the most basic techniques in Agile and lean. When work is visualized, teams can self-organize and self-manage. The value of visualizing work is huge; however, the urgency around the concept has waned as tools have replaced card walls and paper Kanban boards, which reduces the effectiveness of the technique.
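The mechanics of spotting those hidden queues are simple enough to sketch in a few lines. Assuming a hypothetical board captured as (item, owner, state) tuples, anyone holding more than one in-progress item is multitasking, which means some of their work is really just waiting:

```python
from collections import Counter

# Each tuple is (work item, owner, state) on the visualized board.
# The names and items here are invented for illustration.
board = [
    ("story-101", "rose", "in progress"),
    ("story-102", "gary", "in progress"),
    ("story-103", "gary", "in progress"),  # Gary holds two items at once
    ("story-104", "lily", "done"),
]

# Count in-progress items per owner; more than one means multitasking.
wip = Counter(owner for _, owner, state in board if state == "in progress")
multitaskers = [owner for owner, n in wip.items() if n > 1]
print(multitaskers)  # ['gary'] -- one of Gary's stories is sitting in queue
```

A physical card wall gives the team this check for free; the point of the sketch is only that the signal is purely a function of the visualized state.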

The chapter’s action culminates with Jon introducing Rose to Liz. Liz builds on visualization by reinforcing the ideas behind real options. Liz points out that each decision has three possible solutions. Using options as the decision process is a tool for managing uncertainty. The three possible solutions are:

  1. Postpone the decision and collect more information.
  2. Choose the option that’s easiest to change.
  3. Invest in a different approach that allows change to be easier.

There is no one best solution. Projects use all three solutions to maximize value based on the context. Visualization provides a platform for the team to know where work is in the development life cycle. The use of options to generate and evaluate alternate paths for a decision reduces uncertainty. The combination of these tools gives the project the power to change direction quickly to maximize the value it can deliver. It is important that decisions are made consciously, which means examining each decision with an eye to which option you intend to exercise.

In order to use the new tools, Rose realizes she needs a coach. Liz points to a person she knows who happens to be walking by their table as the type of person Rose should hire. It turns out to be Gary, one of the leads currently on her team. Gary agrees to act as Rose’s real options coach.

Chapter 3 ends with Rose writing to Susan and a blog entry from Liz. The letter introduces the concept of technical debt using the metaphor of a group of roommates who let dishes build up in a dirty kitchen. After a time, it becomes so hard to clean the kitchen that no one even makes the effort. Paying down technical debt (refactoring) is important because it increases efficiency and makes it easier to respond quickly to change. Liz’s blog discusses the value of change and reprioritization of the backlog by the customer (or product owner). The ability to periodically change and reprioritize the backlog is a form of an option. Agile and lean techniques provide a set of tools so the team works on the top priorities, delivers functionality that generates feedback, and allows the customer to exercise his/her options by reprioritizing based on functional code.


Categories: Process Management

Learn about Android Development Patterns over Coffee with Joanna Smith

Google Code Blog - Fri, 04/15/2016 - 19:25

Posted by Laurence Moroney, developer advocate

One of the great benefits of Android development is in the flexibility provided by the sheer number of APIs available in the framework, support libraries, Google Play services and elsewhere. While variety is the spice of life, it can lead to some tough decisions when developing -- and good guidance about repeatable patterns for development tasks is always welcome!

With that in mind, Joanna Smith and Ian Lake started Android Development Patterns to help developers not just know how to use an API but also which APIs to choose to begin with.

You can learn more about Android Development Patterns by watching the videos on YouTube, reading this blog post, or checking out the Google Developers page on Medium.

Categories: Programming

LDAP server setup and client authentication

Agile Testing - Grig Gheorghiu - Fri, 04/15/2016 - 19:24
We recently bought a CloudBees Jenkins Enterprise license at work, and I wanted to tie the user accounts to a directory service. I first tried to set up Jenkins authentication via the AWS Directory Service, hoping it would be pretty much like talking to an Active Directory server. That proved impossible to set up, at least for me. I also tried to have an LDAP proxy server talking to the AWS Directory Service and have Jenkins authenticate against the LDAP proxy. No dice. I ended up setting up a good old-fashioned LDAP server and managed to get Jenkins working with it. Here are some of my notes.

OpenLDAP server setup
I followed this excellent guide from Digital Ocean. The server was an Ubuntu 14.04 EC2 instance in my case. What follows in terms of the server setup is taken almost verbatim from the DO guide.

Set the hostname

# hostnamectl set-hostname my-ldap-server

Edit /etc/hosts and make sure this entry exists:
LOCAL_IP_ADDRESS my-ldap-server.mycompany.com my-ldap-server
(it makes a difference that the FQDN is the first entry in the line above!)

Make sure the following types of names are returned when you run hostname with different options:


# hostname
my-ldap-server
# hostname -f
my-ldap-server.mycompany.com
# hostname -d
mycompany.com

Install slapd

# apt-get install slapd ldap-utils
# dpkg-reconfigure slapd
(here you specify the LDAP admin password)

Install the SSL Components

# apt-get install gnutls-bin ssl-cert

Create the CA Template
# mkdir /etc/ssl/templates
# vi /etc/ssl/templates/ca_server.conf
# cat /etc/ssl/templates/ca_server.conf
cn = LDAP Server CA
ca
cert_signing_key

Create the LDAP Service Template

# vi /etc/ssl/templates/ldap_server.conf
# cat /etc/ssl/templates/ldap_server.conf
organization = "My Company"
cn = my-ldap-server.mycompany.com
tls_www_server
encryption_key
signing_key
expiration_days = 3650

Create the CA Key and Certificate

# certtool -p --outfile /etc/ssl/private/ca_server.key
# certtool -s --load-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ca_server.conf --outfile /etc/ssl/certs/ca_server.pem
Create the LDAP Service Key and Certificate

# certtool -p --sec-param high --outfile /etc/ssl/private/ldap_server.key
# certtool -c --load-privkey /etc/ssl/private/ldap_server.key --load-ca-certificate /etc/ssl/certs/ca_server.pem --load-ca-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ldap_server.conf --outfile /etc/ssl/certs/ldap_server.pem

Give OpenLDAP Access to the LDAP Server Key

# usermod -aG ssl-cert openldap
# chown :ssl-cert /etc/ssl/private/ldap_server.key
# chmod 640 /etc/ssl/private/ldap_server.key

Configure OpenLDAP to Use the Certificate and Keys
IMPORTANT NOTE: in modern versions of slapd, configuring the server is not done via slapd.conf anymore. Instead, you put together ldif files and run LDAP client utilities such as ldapmodify against the local server. The Distinguished Name of the entity you want to modify in terms of configuration is generally dn: cn=config but it can also be the LDAP database dn: olcDatabase={1}hdb,cn=config.
# vi addcerts.ldif
# cat addcerts.ldif
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/ca_server.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap_server.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap_server.key

# ldapmodify -H ldapi:// -Y EXTERNAL -f addcerts.ldif
# service slapd force-reload
# cp /etc/ssl/certs/ca_server.pem /etc/ldap/ca_certs.pem
# vi /etc/ldap/ldap.conf

Set TLS_CACERT to the following:

TLS_CACERT /etc/ldap/ca_certs.pem

# ldapwhoami -H ldap:// -x -ZZ
anonymous

Force Connections to Use TLS
Change olcSecurity attribute to include 'tls=1':

# vi forcetls.ldif
# cat forcetls.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcSecurity
olcSecurity: tls=1

# ldapmodify -H ldapi:// -Y EXTERNAL -f forcetls.ldif
# service slapd force-reload
# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL dn
(shouldn’t work)
# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL -Z dn
(should work)

Disallow anonymous bind
Create user binduser to be used for LDAP searches:


# vi binduser.ldif
# cat binduser.ldif
dn: cn=binduser,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: binduser
uid: binduser
uidNumber: 2000
gidNumber: 200
homeDirectory: /home/binduser
loginShell: /bin/bash
gecos: suser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1

# ldapadd -x -W -D "cn=admin,dc=mycompany,dc=com" -Z -f binduser.ldif
Enter LDAP Password:
adding new entry "cn=binduser,dc=mycompany,dc=com"

Change the olcDisallows attribute to include bind_anon:


# vi disallow_anon_bind.ldif
# cat disallow_anon_bind.ldif
dn: cn=config
changetype: modify
add: olcDisallows
olcDisallows: bind_anon

# ldapmodify -H ldapi:// -Y EXTERNAL -ZZ -f disallow_anon_bind.ldif
# service slapd force-reload
Also disable anonymous access to frontend:

# vi disable_anon_frontend.ldif
# cat disable_anon_frontend.ldif
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
add: olcRequires
olcRequires: authc

# ldapmodify -H ldapi:// -Y EXTERNAL -f disable_anon_frontend.ldif
# service slapd force-reload

Create organizational units and users
Create helper scripts:

# cat add_ldap_ldif.sh
#!/bin/bash

LDIF=$1

ldapadd -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF

# cat modify_ldap_ldif.sh
#!/bin/bash

LDIF=$1

ldapmodify -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF

# cat set_ldap_pass.sh
#!/bin/bash

USER=$1
PASS=$2

ldappasswd -s $PASS -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -x "uid=$USER,ou=users,dc=mycompany,dc=com" -Z
Create ‘mypeople’ organizational unit:


# cat add_ou_mypeople.ldif
dn: ou=mypeople,dc=mycompany,dc=com
objectclass: organizationalunit
ou: mypeople
description: all users
# ./add_ldap_ldif.sh add_ou_mypeople.ldif
Create 'groups' organizational unit:


# cat add_ou_groups.ldif
dn: ou=groups,dc=mycompany,dc=com
objectclass: organizationalunit
ou: groups
description: all groups

# ./add_ldap_ldif.sh add_ou_groups.ldif
Create users (note the shadow attributes set to -1, which means they will be ignored):


# cat add_user_myuser.ldif
dn: uid=myuser,ou=mypeople,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: myuser
uid: myuser
uidNumber: 2001
gidNumber: 201
homeDirectory: /home/myuser
loginShell: /bin/bash
gecos: myuser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1

# ./add_ldap_ldif.sh add_user_myuser.ldif
# ./set_ldap_pass.sh myuser MYPASS
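Since every user entry follows the same template, once you have more than a handful of users it can be less error-prone to generate the LDIF than to hand-edit it. A small generator sketch (my own addition, not part of the original setup; adjust the base DN and uid/gid numbering as needed):

```python
# Template mirrors the add_user_myuser.ldif entry above; double braces
# escape the literal {crypt} so str.format leaves it alone.
TEMPLATE = """\
dn: uid={user},ou=mypeople,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: {user}
uid: {user}
uidNumber: {uid}
gidNumber: {gid}
homeDirectory: /home/{user}
loginShell: /bin/bash
gecos: {user}
userPassword: {{crypt}}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1
"""

def user_ldif(user: str, uid: int, gid: int) -> str:
    """Render one posixAccount LDIF entry for the given user."""
    return TEMPLATE.format(user=user, uid=uid, gid=gid)

print(user_ldif("myuser", 2001, 201))
```

The output can be piped to a file and fed to add_ldap_ldif.sh exactly like the hand-written version.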

Enable LDAPS
In /etc/default/slapd set:

SLAPD_SERVICES="ldap:/// ldaps:/// ldapi:///"

Enable debugging
This was a life saver when it came to troubleshooting connection issues from clients such as Jenkins or other Linux boxes. To enable full debug output, set olcLogLevel to -1:

# cat enable_debugging.ldif
dn: cn=config
changetype: modify
add: olcLogLevel
olcLogLevel: -1

# ldapadd -H ldapi:// -Y EXTERNAL -f enable_debugging.ldif
# service slapd force-reload

Configuring Jenkins LDAP authentication
Verify LDAPS connectivity from Jenkins to LDAP server
In my case, the Jenkins server is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the Jenkins box pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com
I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the Jenkins server:
# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.
Set up LDAPS client on Jenkins server (StartTLS does not work with the Jenkins LDAP plugin!)
# apt-get install ldap-utils
IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from LDAP server as /etc/ldap/ca_certs.pem on Jenkins server and then:
# vi /etc/ldap/ldap.conf

Set:

TLS_CACERT /etc/ldap/ca_certs.pem
Add LDAP certificates to Java keystore used by Jenkins
As user jenkins:
$ mkdir .keystore
$ cp /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/security/cacerts .keystore/

(you may need to customize the above line in terms of the path to the cacerts file -- it is the one under your JAVA_HOME)

$ keytool --keystore /var/lib/jenkins/.keystore/cacerts --import --alias my-ldap-server.mycompany.com:636 --file /etc/ldap/ca_certs.pem
Enter keystore password: changeit
Owner: CN=LDAP Server CA
Issuer: CN=LDAP Server CA
Serial number: 570bddb0
Valid from: Mon Apr 11 17:24:00 UTC 2016 until: Tue Apr 11 17:24:00 UTC 2017
Certificate fingerprints:
....
Extensions:
....
Trust this certificate? [no]:  yes
Certificate was added to keystore
In /etc/default/jenkins, set JAVA_ARGS to:
JAVA_ARGS="-Djava.awt.headless=true -Djavax.net.ssl.trustStore=/var/lib/jenkins/.keystore/cacerts -Djavax.net.ssl.trustStorePassword=changeit"  
As root, restart jenkins:

# service jenkins restart
Jenkins settings for LDAP plugin
This took me a while to get right. The trick was to set the rootDN to dc=mycompany, dc=com and the userSearchBase to ou=mypeople (or to whatever name you gave to your users' organizational unit). I also tried to get LDAP groups to work but wasn't very successful.
Here is the LDAP section in /var/lib/jenkins/config.xml:
 <securityRealm class="hudson.security.LDAPSecurityRealm" plugin="ldap@1.11">
    <server>ldaps://my-ldap-server.mycompany.com:636</server>
    <rootDN>dc=mycompany,dc=com</rootDN>
    <inhibitInferRootDN>true</inhibitInferRootDN>
    <userSearchBase>ou=mypeople</userSearchBase>
    <userSearch>uid={0}</userSearch>
    <groupSearchBase>ou=groups</groupSearchBase>
    <groupMembershipStrategy class="jenkins.security.plugins.ldap.FromGroupSearchLDAPGroupMembershipStrategy">
      <filter>member={0}</filter>
    </groupMembershipStrategy>
    <managerDN>cn=binduser,dc=mycompany,dc=com</managerDN>
    <managerPasswordSecret>JGeIGFZwjipl6hJNefTzCwClRcLqYWEUNmnXlC3AOXI=</managerPasswordSecret>
    <disableMailAddressResolver>false</disableMailAddressResolver>
    <displayNameAttributeName>displayname</displayNameAttributeName>
    <mailAddressAttributeName>mail</mailAddressAttributeName>
    <userIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>
    <groupIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>
 </securityRealm>

At this point, I was able to create users on the LDAP server and have them log in to Jenkins. With CloudBees Jenkins Enterprise, I was also able to use the Role-Based Access Control and Folder plugins in order to create project-specific folders and folder-specific groups specifying various roles. For example, a folder MyProjectNumber1 would have a Developers group defined inside it, as well as an Administrators group and a Readers group. These groups would be associated with fine-grained roles that only allow certain Jenkins operations for each group.
I tried to have these groups read by Jenkins from the LDAP server, but was unsuccessful. Instead, I had to populate the folder-specific groups in Jenkins with user names that were at least defined in LDAP. So that was half a win. I am still waiting to see if I can define the groups in LDAP, but for now this is a workaround that works for me.
Allowing users to change their LDAP password
This was again a seemingly easy task but turned out to be pretty complicated. I set up another small EC2 instance to act as a jumpbox for users who want to change their LDAP password.
The jumpbox is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the jumpbox pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com
I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the jumpbox:
# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.
# apt-get install ldap-utils
IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from LDAP server as /etc/ldap/ca_certs.pem on the jumpbox and then:
# vi /etc/ldap/ldap.conf

Set:

TLS_CACERT /etc/ldap/ca_certs.pem
Next, I followed this LDAP Client Authentication guide from the Ubuntu documentation.
# apt-get install ldap-auth-client nscd
Here I had to answer the setup questions on LDAP server FQDN, admin DN and password, and bind user DN and password. 
# auth-client-config -t nss -p lac_ldap
I edited /etc/auth-client-config/profile.d/ldap-auth-config and set:
[lac_ldap]
nss_passwd=passwd: ldap files
nss_group=group: ldap files
nss_shadow=shadow: ldap files
nss_netgroup=netgroup: nis
I edited /etc/ldap.conf and made sure the following entries were there:
base dc=mycompany,dc=com
uri ldaps://my-ldap-server.mycompany.com
binddn cn=binduser,dc=mycompany,dc=com
bindpw BINDUSERPASS
rootbinddn cn=admin,dc=mycompany,dc=com
port 636
ssl on
tls_cacertfile /etc/ldap/ca_certs.pem
tls_cacertdir /etc/ssl/certs
I allowed password-based ssh logins to the jumpbox by editing /etc/ssh/sshd_config and setting:
PasswordAuthentication yes
# service ssh restart

IMPORTANT: On the LDAP server, I had to allow users to change their own password by adding this ACL:
# cat set_userpassword_acl.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userpassword by dn="cn=admin,dc=mycompany,dc=com" write by self write by anonymous auth by users none
Then:
# ldapmodify -H ldapi:// -Y EXTERNAL -f set_userpassword_acl.ldif

At this point, users were able to log in via ssh to the jumpbox using a pre-set LDAP password, and change their LDAP password by using the regular Unix 'passwd' command.
I am still fine-tuning the LDAP setup on all fronts: the LDAP server, the LDAP client jumpbox and the Jenkins server. The setup I have so far allows me to have a single sign-on account for users to log in to Jenkins. Some of my next steps are to use the same LDAP user accounts for authentication and access control in MySQL and other services.

Stuff The Internet Says On Scalability For April 15th, 2016

Hey, it's HighScalability time:


What happens when Beyoncé meets eCommerce? Ring the alarm.

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • $14 billion: one day of purchases on Alibaba; 47 megawatts: Microsoft's new data center space for its MegaCloud; 50%: do not speak English on Facebook; 70-80%: of all Intel servers shipped will be deployed in large scale datacenters by 2025; 1024 TB: of storage for 3D imagery currently in Google Earth; $7: WeChat average revenue per user; 1 trillion: new trees; 

  • Quotable Quotes:
    • @PicardTips: Picard management tip: Know your audience. Display strength to Klingons, logic to Vulcans, and opportunity to Ferengi.
    • Mark Burgess: Microservices cannot be a panacea. What we see clearly from cities is that they can be semantically valuable, but they can be economically expensive, scaling with superlinear cost. 
    • ethanpil: I'm crying. Remember when messaging was built on open platforms and standards like XMPP and IRC? The golden year(s?) when Google Talk worked with AIM and anyone could choose whatever client they preferred?
    • @acmurthy: @raghurwi from @Microsoft talking about scaling Hadoop YARN to 100K+ clusters. Yes, 100,000 
    • @ryanbigg: Took a Rails view rendering time from ~300ms to 50ms. Rewrote it in Elixir: it’s now 6-7ms. #MyElixirStatus
    • Dmitriy Samovskiy: In the past, our [Operations] primary purpose in life was to build and babysit production. Today operations teams focus on scale.
    • @Agandrau: Sir Tim Berners-Lee thinks that if we can predict what the internet will look like in 20 years, than we are not creative enough. #www2016
    • @EconBizFin: Apple and Tesla are today’s most talked-about companies, and the most vertically integrated
    • Kevin Fishner: Nomad was able to schedule one million containers across 5,000 hosts in Google Cloud in under five minutes.
    • David Rosenthal: The Web we have is a huge success disaster. Whatever replaces it will be at least as big a success disaster. Lets not have the causes of the disaster be things we knew about all along.
    • Kurt Marko: The days of homogeneous server farms with racks and racks of largely identical systems are over.
    • Jonathan Eisen: This is humbling, we know virtually nothing right now about the biology of most of the tree of life.
    • @adrianco: Google has a global network IP model (more convenient), AWS regional (more resilient). Choices...
    • @jason_kint: Stupid scary stats in this. Ad tech responsible for 70% of server calls and 50% of your mobile data plan.
    • apy: I found myself agreeing with many of Pike’s statements but then not understanding how he wound up at Go. 
    • @TomBirtwhistle: The head of Apple Music claims YouTube accounts for 40% of music consumption yet only 4% of online industry revenue 
    • @0x604: Immutable Laws of Software: Anyone slower than you is over-engineering, anyone faster is introducing technical debt
    • surrealvortex: I'm currently using flame graphs at work. If your application hasn't been profiled recently, you'll usually get lots of improvement for very little effort. Some 15 minutes of work improved CPU usage of my team's biggest fleet by ~40%. Considering we scaled up to 1500 c3.4xlarge hosts at peak in NA alone on that fleet, those 15 minutes kinda made my month :)
    • @cleverdevil: Did you know that Virtual Machines spin up in the opposite direction in the southern hemisphere? Little known fact.
    • ksec: Yes, and I think Intel is not certain to win, just much more likely. The Power9 is here is targeting 2H 2017 release. Which is actually up against Intel Skylake/Kabylake Xeon Purley Platform in similar timeframe.
    • @jon_moore: Platforms make promises; constraints are the contracts that allow platforms to do their jobs. #oreillysacon
    • @CBILIEN: Scaling data platforms:compute and storage have to be scaled independently #HS16Dublin

  • A morning reverie. Chopped for programmers. Call it Crashed. You have three rounds with four competitors. Each round is an hour. The competitors must create a certain kind of program, say a game, or a productivity tool, anything really, using a basket of three selected technologies, say Google Cloud, wit.ai, and Twilio. Plus the programmer can choose to use any other technologies from the pantry that is the Internet. The program can take any form the programmer chooses. It could be a web app, iOS or Android app, an Alexa skill, a Slack bot, anything, it's up to the creativity of the programmer. The program is judged by an esteemed panel based on creativity, quality, and how well the basket technologies are highlighted. When a programmer loses a round they have been Crashed. The winner becomes the Crashed Champion. Sound fun?

Jeff Dean, when talking about deep learning at Google, makes it clear a big part of their secret sauce is being able to train neural nets at scale using their bespoke distributed infrastructure. Now Google has released TensorFlow with distributed computing support. It's not clear if this is the same infrastructure Google uses internally, but it seems to work: using the distributed trainer, we trained the Inception network to 78% accuracy in less than 65 hours using 100 GPUs. Also, the tensorflow playground is a cool way to visualize what's going on inside.

  • Christopher Meiklejohn with an interesting history of the Remote Procedure Call. It started way back in 1974: RFC 674, “Procedure Call Protocol Documents, Version 2”. RFC 674 attempts to define a general way to share resources across all 70 nodes of the Internet

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Here's The Programming Game You Never Asked For

Coding Horror - Jeff Atwood - Fri, 04/15/2016 - 10:48

You know what's universally regarded as un-fun by most programmers? Writing assembly language code.

As Steve McConnell said back in 1994:

Programmers working with high-level languages achieve better productivity and quality than those working with lower-level languages. Languages such as C++, Java, Smalltalk, and Visual Basic have been credited with improving productivity, reliability, simplicity, and comprehensibility by factors of 5 to 15 over low-level languages such as assembly and C. You save time when you don't need to have an awards ceremony every time a C statement does what it's supposed to.

Assembly is a language where, for performance reasons, every individual command is communicated in excruciating low level detail directly to the CPU. As we've gone from fast CPUs, to faster CPUs, to multiple absurdly fast CPU cores on the same die, to "gee, we kinda stopped caring about CPU performance altogether five years ago", there hasn't been much need for the kind of hand-tuned performance you get from assembly. Sure, there are the occasional heroics, and they are amazing, but in terms of Getting Stuff Done, assembly has been well off the radar of mainstream programming for probably twenty years now, and for good reason.

So who in their right mind would take up tedious assembly programming today? Yeah, nobody. But wait! What if I told you your Uncle Randy had just died and left behind this mysterious old computer, the TIS-100?

And what if I also told you the only way to figure out what that TIS-100 computer was used for – and what good old Uncle Randy was up to – was to read a (blessedly short 14 page) photocopied reference manual and fix its corrupted boot sequence … using assembly language?

Well now, by God, it's time to learn us some assembly and get to the bottom of this mystery, isn't it? As its creator notes, this is the assembly language programming game you never asked for!

I was surprised to discover my co-founder Robin Ward liked TIS-100 so much that he not only played the game (presumably to completion) but wrote a TIS-100 emulator in C. This is apparently the kind of thing he does for fun, in his free time, when he's not already working full time with us programming Discourse. Programmers gotta … program.

Of course there's a long history of programming games. What makes TIS-100 unique is the way it fetishizes assembly programming, while most programming games take it a bit easier on you by easing you in with general concepts and simpler abstractions. But even "simple" programming games can be quite difficult. Consider one of my favorites on the Apple II, Rocky's Boots, and its sequel, Robot Odyssey. I loved this game, but in true programming fashion it was so difficult that finishing it in any meaningful sense was basically impossible:

Let me say: Any kid who completes this game while still a kid (I know only one, who also is one of the smartest programmers I’ve ever met) is guaranteed a career as a software engineer. Hell, any adult who can complete this game should go into engineering. Robot Odyssey is the hardest damn “educational” game ever made. It is also a stunning technical achievement, and one of the most innovative games of the Apple IIe era.

Visionary, absurdly difficult games such as this gain cult followings. It is the game I remember most from my childhood. It is the game I love (and despise) the most, because it was the hardest, the most complex, the most challenging. The world it presented was like being exposed to Plato’s forms, a secret, nonphysical realm of pure ideas and logic. The challenge of the game—and it was one serious challenge—was to understand that other world. Programmer Thomas Foote had just started college when he picked up the game: “I swore to myself,” he told me, “that as God is my witness, I would finish this game before I finished college. I managed to do it, but just barely.”

I was happy dinking around with a few robots that did a few things, got stuck, and moved on to other games. I got a little turned off by the way it treated programming as electrical engineering; messing around with a ton of AND OR and NOT gates was just not my jam. I was already cutting my teeth on BASIC by that point and I sensed a level of mastery was necessary here that I probably didn't have and I wasn't sure I even wanted.

I'll take a COBOL code listing over that monstrosity any day of the week. Perhaps Robot Odyssey was so hard because, in the end, it was a bare metal CPU programming simulation, like TIS-100.

A more gentle example of a modern programming game is Tomorrow Corporation's excellent Human Resource Machine.

It has exactly the irreverent sense of humor you'd expect from the studio that built World of Goo and Little Inferno, both excellent and highly recommendable games in their own right. If you've ever wanted to find out if someone is truly interested in programming, recommend this game to them and see. It starts with only 2 instructions and slowly widens to include 11. Corporate drudgery has never been so … er, fun?
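Part of Human Resource Machine's charm is how small that vocabulary is. To get a feel for the genre, here is a toy interpreter in the same spirit -- an inbox, an outbox, one item in hand, and just four made-up instructions. (This is my own sketch, not the game's actual instruction set.)

```python
def run(program, inbox):
    """Tiny Human-Resource-Machine-flavored VM: one hand (register),
    an inbox, an outbox, and four instructions."""
    inbox = list(inbox)
    outbox, hand, pc = [], None, 0
    while pc < len(program):
        op, *arg = program[pc]
        if op == "INBOX":
            if not inbox:
                break            # out of input: the worker goes home
            hand = inbox.pop(0)
        elif op == "OUTBOX":
            outbox.append(hand)
        elif op == "ADD":
            hand += arg[0]
        elif op == "JUMP":
            pc = arg[0]
            continue
        pc += 1
    return outbox

# "Level": take each inbox item, add 1, put it in the outbox.
program = [("INBOX",), ("ADD", 1), ("OUTBOX",), ("JUMP", 0)]
print(run(program, [3, 9, 0]))  # [4, 10, 1]
```

Each level is then just a program plus an inbox, and the puzzle is writing the shortest program that produces the expected outbox -- which is the whole game, and arguably the whole job.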

I'm thinking about this because I believe there's a strong connection between programming games and being a talented software engineer. It's that essential sense of play, the idea that you're experimenting with this stuff because you enjoy it, and you bend it to your will out of the sheer joy of creation more than anything else. As I once said:

Joel implied that good programmers love programming so much they'd do it for no pay at all. I won't go quite that far, but I will note that the best programmers I've known have all had a lifelong passion for what they do. There's no way a minor economic blip would ever convince them they should do anything else. No way. No how.

I'd rather sit a potential hire in front of Human Resource Machine and time how long it takes them to work through a few levels than have them solve FizzBuzz for me on a whiteboard. Is this interview about demonstrating competency in a certain technical skill that's worth a certain amount of money, or showing me how you can improvise and have fun?

That's why I was so excited when Patrick, Thomas, and Erin founded Starfighter.

If you want to know how competent a programmer is, give them a real-ish simulation of a real-ish system to hack against and experiment with – and see how far they get. In security parlance, this is known as a CTF, as popularized by Defcon. But it's rarely extended to programming, until now. Their first simulation is StockFighter.

Participants are given:

  • An interactive trading blotter interface
  • A real, functioning set of limit-order-book venues
  • A carefully documented JSON HTTP API, with an API explorer
  • A series of programming missions.

Participants are asked to:

  • Implement programmatic trading against a real exchange in a thickly traded market.
  • Execute block-shopping trading strategies.
  • Implement electronic market makers.
  • Pull off an elaborate HFT trading heist.

This is a seriously next level hiring strategy, far beyond anything else I've seen out there. It's so next level that to be honest, I got really jealous reading about it, because I've felt for a long time that Stack Overflow should be doing yearly programming game events exactly like this, with special one-time badges obtainable only by completing certain levels on that particular year. Stack Overflow is already a sort of game, but people would go nuts for a yearly programming game event. Absolutely bonkers.

I know we've talked about giving lip service to the idea of hiring the best, but if that's really what you want to do, the best programmers I've ever known have excelled at exactly the situation that Starfighter simulates — live troubleshooting and reverse engineering of an existing system, even to the point of finding rare exploits.

Consider the dedication of this participant who built a complete wireless trading device for StockFighter. Was it necessary? Was it practical? No. It's the programming game we never asked for. But here we are, regardless.

An arbitrary programming game, particularly one that goes to great lengths to simulate a fictional system, is a wonderful expression of the inherent joy in playing and experimenting with code. If I could find them, I'd gladly hire a dozen people just like that any day, and set them loose on our very real programming project.

Categories: Programming

Optimize, Develop, and Debug with Vulkan Developer Tools

Android Developers Blog - Fri, 04/15/2016 - 10:30

Posted by Shannon Woods, Technical Program Manager

Today we're pleased to bring you a preview of Android development tools for Vulkan™. Vulkan is a new 3D rendering API which we’ve helped to develop as a member of Khronos, geared at providing explicit, low-overhead GPU (Graphics Processor Unit) control to developers. Vulkan’s reduction of CPU overhead allows some synthetic benchmarks to see as much as 10 times the draw call throughput on a single core as compared to OpenGL ES. Combined with a threading-friendly API design which allows multiple cores to be used in parallel with high efficiency, this offers a significant boost in performance for draw-call heavy applications.

Vulkan support is available now via the Android N Preview on devices which support it, including Nexus 5X and Nexus 6P. (Of course, you will still be able to use OpenGL ES as well!)

To help developers start coding quickly, we’ve put together a set of samples and guides that illustrate how to use Vulkan effectively.

You can see Vulkan in action running on an Android device with Robert Hodgin’s Fish Tornado demo, ported by Google’s Art, Copy, and Code team:


Optimization: The Vulkan API

There are many similarities between OpenGL ES and Vulkan, but Vulkan offers new features for developers who need to make every millisecond count.

  • Application control of memory allocation. Vulkan provides mechanisms for fine-grained control of how and when memory is allocated on the GPU. This allows developers to use their own allocation and recycling policies to fit their application, ultimately reducing execution and memory overhead and allowing applications to control when expensive allocations occur.
  • Asynchronous command generation. In OpenGL ES, draw calls are issued to the GPU as soon as the application calls them. In Vulkan, the application instead submits draw calls to command buffers, which allows the work of forming and recording the draw call to be separated from the act of issuing it to the GPU. By spreading command generation across several threads, applications can more effectively make use of multiple CPU cores. These command buffers can also be reused, reducing the overhead involved in command creation and issuance.
  • No hidden work. One OpenGL ES pitfall is that some commands may trigger work at points which are not explicitly spelled out in the API specification or made obvious to the developer. Vulkan makes performance more predictable and consistent by specifying which commands will explicitly trigger work and which will not.
  • Multithreaded design, from the ground up. All OpenGL ES applications must issue commands for a context only from a single thread in order to render predictably and correctly. By contrast, Vulkan doesn’t have this requirement, allowing applications to do work like command buffer generation in parallel— but at the same time, it doesn’t make implicit guarantees about the safety of modifying and reading data from multiple threads at the same time. The power and responsibility of managing thread synchronization is in the hands of the application.
  • Mobile-friendly features. Vulkan includes features particularly helpful for achieving high performance on tiling GPUs, used by many mobile devices. Applications can provide information about the interaction between separate rendering passes, allowing tiling GPUs to make effective use of limited memory bandwidth, and avoid performing off-chip reads.
  • Offline shader compilation. Vulkan mandates support for SPIR-V, an intermediate language for shaders. This allows developers to compile shaders ahead of time, and ship SPIR-V binaries with their applications. These binaries are simpler to parse than high-level languages like GLSL, which means less variance in how drivers perform this parsing. SPIR-V also opens the door for third parties to provide compilers for specialized or cross-platform shading languages.
  • Optional validation. OpenGL ES validates every command you call, checking that arguments are within expected ranges, and objects are in the correct state to be operated upon. Vulkan doesn’t perform any of this validation itself. Instead, developers can use optional debug tools to ensure their calls are correct, incurring no run-time overhead in the final product.
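The asynchronous command generation model above is easiest to see in miniature. The real API is C, so this Python sketch is only a conceptual analogue: "command buffers" are recorded on worker threads (cheap CPU work, nothing touches the GPU), then submitted together from a single thread, as vkQueueSubmit does with an array of command buffers:

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(draw_calls):
    """Record draw calls into a command buffer (a plain list here).

    Mimics Vulkan's model: recording can happen on any thread,
    and nothing reaches the GPU until submission.
    """
    buffer = []
    for call in draw_calls:
        buffer.append(("draw", call))  # stand-in for vkCmdDraw* recording
    return buffer

def submit(queue, buffers):
    """Submit pre-recorded buffers to the 'GPU queue' in one step."""
    for buf in buffers:
        queue.extend(buf)

# Record four command buffers in parallel, then submit from one thread.
work = [range(0, 25), range(25, 50), range(50, 75), range(75, 100)]
with ThreadPoolExecutor(max_workers=4) as pool:
    buffers = list(pool.map(record_commands, work))

gpu_queue = []
submit(gpu_queue, buffers)
```

Note the trade-off called out in the list: nothing here is implicitly synchronized, so keeping the recorded buffers and shared state consistent across threads is entirely the application's job.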

Debugging: Validation Layers

As noted above, Vulkan’s lack of implicit validation requires developers to make use of tools outside the API in order to validate their code. Vulkan’s layer mechanism allows validation code and other developer tools to inspect every API call during development, without incurring any overhead in the shipping version. Our guides show you how to build the validation layers for use with the Android NDK, giving you the tools necessary to build bug-free Vulkan code from start to finish.


Develop: Shader toolchain
The Shaderc collection of tools provides developers with build-time and run-time tools for compiling GLSL into SPIR-V. Shaders can be compiled at build time using glslc, a command-line compiler, for easy integration into existing build systems. Or, for shaders which are generated or edited during execution, developers can use the Shaderc library to compile GLSL shaders to SPIR-V via a C interface. Both tools are built on top of Khronos’s reference compiler.
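As a sketch of the build-time path, here is how a build script might shell out to glslc; the output-directory convention is just an example, and running the command of course requires glslc on your PATH:

```python
import subprocess
from pathlib import Path

def glslc_command(source, out_dir="build/spirv"):
    """Construct a glslc invocation compiling one GLSL stage to SPIR-V.

    glslc infers the shader stage from the file extension
    (.vert, .frag, .comp, ...), so no extra flag is needed here.
    """
    src = Path(source)
    out = Path(out_dir) / (src.name + ".spv")
    return ["glslc", str(src), "-o", str(out)]

cmd = glslc_command("shaders/tri.vert")
# In a real build step you would run it:
# subprocess.run(cmd, check=True)
```

The resulting .spv binaries are what you ship with your APK and hand to Vulkan at pipeline creation time.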


Additional Resources

The Vulkan ecosystem is a broad one, and the resources to get you started don’t end here. There is a wealth of material to explore, including:

Categories: Programming

Android Studio 2.0

Android Developers Blog - Fri, 04/15/2016 - 04:54

Posted by Jamal Eason, Product Manager, Android

Android Studio 2.0 is the fastest way to build high quality, performant apps for the Android platform, including phones and tablets, Android Auto, Android Wear, and Android TV. As the official IDE from Google, Android Studio includes everything you need to build an app, including a code editor, code analysis tools, emulators and more. This new and stable version of Android Studio has fast build speeds and a fast emulator with support for the latest Android version and Google Play Services.

Android Studio is built in coordination with the Android platform and supports all of the latest and greatest APIs. If you are developing for Android, you should be using Android Studio 2.0. It is available today as an easy download or update on the stable release channel.

Android Studio 2.0 includes the following new features that Android developers can use in their workflow:

  • Instant Run - For every developer who loves faster build speeds. Make changes and see them appear live in your running app. With many build/run accelerations ranging from VM hot swapping to warm swapping app resources, Instant Run will save you time every day.
  • Android Emulator - The new emulator runs ~3x faster than Android’s previous emulator, and with ADB enhancements you can now push apps and data 10x faster to the emulator than to a physical device. Like a physical device, the official Android emulator also includes Google Play Services built-in, so you can test out more API functionality. Finally, the new emulator has rich new features to manage calls, battery, network, GPS, and more.
  • Cloud Test Lab Integration - Write once, run anywhere. Improve the quality of your apps by quickly and easily testing on a wide range of physical Android devices in the Cloud Test Lab right from within Android Studio.
  • App Indexing Code Generation & Test - Help promote the visibility of your app in Google Search by adding auto-generated URLs with the App Indexing feature in Android Studio. With a few clicks you can add indexable URL links and test them all within the IDE.
  • GPU Debugger Preview - For those of you developing OpenGL ES based games or apps, you can now see each frame and the GL state with the new GPU debugger. Uncover and diagnose GL rendering issues by capturing and analyzing the GPU stream from your Android device.
  • IntelliJ 15 Update - Android Studio is built on the world-class IntelliJ coding platform. Check out the latest IntelliJ features here.

Deeper Dive into the New Features

Instant Run

Today, mobile platforms are centered around speed and agility. And yet, building for mobile can sometimes feel clunky and slow. Instant Run in Android Studio is our solution to keep you in a fast and fluid development flow. The feature increases your developer productivity by accelerating your edit, build, and run cycles. When you click the Instant Run button, Instant Run will analyze the changes you have made and determine how it can deploy your new code in the fastest way.

New Instant Run Buttons

Whenever possible, it will inject your code changes into your running app process, avoiding re-deployment and re-installation of your APK. For some types of changes, an activity or app restart is required, but your edit, build, and run cycles should still be generally much faster than before. Instant Run works with any Android device or emulator running API 14 (Ice Cream Sandwich) or higher.

Since previewing Instant Run at the end of last year, we’ve spent countless hours incorporating your feedback and refining for the stable release. Look for even more acceleration in future releases because build speeds can never be too fast. To learn how you can make the most out of Instant Run in your app development today, please check out our Instant Run documentation.

Android Emulator

The new Android Emulator is up to 3x faster in CPU, RAM, & I/O in comparison to the previous Android emulator. And when you're ready to build, ADB push speeds are a whopping 10x faster! In most situations, developing on the official Android Emulator is faster than a real device, and new features like Instant Run will work best with the new Android emulator.

In addition to speed and performance, the Android Emulator has a brand new user interface and sensor controls. Enhanced since the initial release, the emulator lets you drag and drop APKs for quick installation, resize and rescale the window, use multi-touch actions (pinch & zoom, pan, rotate, tilt), and much more.

Android Emulator User Interface: Toolbar & Extend Controls Panel

Trying out the new emulator is easy: update your SDK Tools to 25.1.1 or higher, create a fresh Android Virtual Device using one of the recommended x86 system images, and you are ready to go. Learn more about the Android Emulator by checking out the documentation.

Cloud Test Lab

Cloud Test Lab is a new service that allows you to test your app across a wide range of devices and device configurations at scale in the cloud. Once you complete your initial testing on the Android Emulator or an Android device, Cloud Test Lab is a great extension to your testing process that lets you run a collection of tests against a portfolio of physical devices hosted in Google’s data centers. Even if you do not have tests explicitly written, Cloud Test Lab can perform a basic set of tests to ensure that your app does not crash.

The new interface in Android Studio allows you to configure the portfolio of tests you want to run on Cloud Test Lab, and allows you to also see the results of your tests. To learn more about the service go here.

Setup for Cloud Test Lab

App Indexing

It is now easier for your users to find your app in Google Search with the App Indexing API. Android Studio 2.0 helps you to create the correct URL structure in your app code and add attributes to your AndroidManifest.xml file that work with the Google App Indexing service. After you add the URLs to your app, you can test and validate your app indexing code as shown here:

Google App Indexing Testing

Check out this link for more details about app indexing support in Android Studio.

GPU Debugger Preview

If you are developing OpenGL ES games or graphics-intensive apps, you have a new GPU debugger with Android Studio 2.0. Although the GPU debugger is a preview, you can step through your app frame by frame to identify and debug graphics rendering issues with rich information about the GL state. For more details on how to set up your Android device and app to work with the tool, check out the technical documentation here.

GPU Debugger Preview

What's Next

Update

If you are using a previous version of Android Studio, you can check for updates on the stable channel from the navigation menu (Help → Check for Update [Windows/Linux], Android Studio → Check for Updates [OS X]). If you need a new copy of Android Studio, you can download it here. If you are developing for the N Developer Preview, check out these additional setup instructions.

Set Up Instant Run & Android Emulator

After you update to or download Android Studio 2.0, upgrade your projects to use Instant Run and create a fresh Android Virtual Device (AVD) for the new Android emulator, and you are on your way to a fast Android development experience.

Using Instant Run is easy. For each of your existing projects you will see a quick prompt to update your project to the new gradle plugin version (com.android.tools.build:gradle:2.0.0).

Prompt to update your gradle version in your project

For all new app projects in Android Studio 2.0, Instant Run is on by default. Check out the documentation for more details.

We are already hard at work developing the next release of Android Studio. We appreciate any feedback on things you like, issues or features you would like to see. Connect with us -- the Android Studio development team -- on our new Google+ page or on Twitter.

Categories: Programming

Android N Developer Preview 2, out today!

Android Developers Blog - Fri, 04/15/2016 - 04:49

Posted by Dave Burke, VP of Engineering

Last month, we released the first Developer Preview of Android N, to give you a sneak peek at our next platform. The feedback you’ve shared to-date has helped us catch bugs and improve features. Today, the second release in our series of Developer Previews is ready for you to continue testing against your apps.

This latest preview of Android N fixes a few bugs you helped us identify, such as not being able to connect to hidden Wi-Fi networks (AOSP 203116), Multiwindow pauses (AOSP 203424), and Direct Reply closing an open activity (AOSP 204411), to name just a few. We’re still on the hunt for more; please continue to share feedback, either in the N Developer Preview issue tracker or in the N preview community.


What’s new:

Last month’s Developer Preview introduced a host of new features, like Multi-window, bundled notifications and more. This preview builds on those and includes a few new ones:

  • Vulkan: Vulkan is a new 3D rendering API which we’ve helped to develop as a member of Khronos, geared at providing explicit, low-overhead GPU (Graphics Processor Unit) control to developers. Vulkan’s reduction of CPU overhead allows some synthetic benchmarks to see as much as 10 times the draw-call throughput on a single core as compared to OpenGL ES. Combined with a threading-friendly API design which allows multiple cores to be used in parallel with high efficiency, this offers a significant boost in performance for draw-call heavy applications. With Android N, we’ve made Vulkan a part of the platform; you can try it out on supported devices running Developer Preview 2. Read more here. Vulkan Developer Tools blog here.
  • Launcher shortcuts: Now, apps can define shortcuts which users can expose in the launcher to help them perform actions quicker. These shortcuts contain an Intent into specific points within your app (like sending a message to your best friend, navigating home in a mapping app, or playing the next episode of a TV show in a media app).

    An application can publish shortcuts with ShortcutManager.setDynamicShortcuts(List) and ShortcutManager.addDynamicShortcut(ShortcutInfo), and launchers can be expected to show 3-5 shortcuts for a given app.

  • Emoji Unicode 9 support: We are introducing a new design for people emoji that moves away from our generic look in favor of a more human-looking design. If you’re a keyboard or messaging app developer, you should start incorporating these emoji into your apps. The update also introduces support for skin tone variations and Unicode 9 glyphs like bacon, selfie, and face palm. You can dynamically check for the new emoji characters using Paint.hasGlyph().

New human emoji

New activity emoji

  • API changes: This update includes API changes as we continue to refine features such as multi-window support (you can now specify a separate minimum height and minimum width for an activity), notifications, and others. For details, take a look at the diff reports available in the downloadable API reference package.

  • Bug fixes: We’ve resolved a number of issues throughout the system, including these fixes for issues that you’ve reported through the public issue tracker. Please continue to let us know what you find and follow along with the known issues here.
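The Paint.hasGlyph() check mentioned in the emoji item above generalizes to a simple fallback pattern. Android's real API is Java; in this Python analogue the has_glyph predicate stands in for Paint.hasGlyph, and the codepoints are the Unicode 9 additions the post names:

```python
# Unicode 9 additions mentioned in the post: bacon, selfie, face palm.
UNICODE_9_SAMPLES = {
    "bacon": "\U0001F953",
    "selfie": "\U0001F933",
    "face palm": "\U0001F926",
}

def render_or_fallback(text, has_glyph, fallback="\uFFFD"):
    """Replace characters the current font can't draw.

    `has_glyph` plays the role of Android's Paint.hasGlyph(String):
    a predicate reporting whether the font has a glyph for the char.
    """
    return "".join(ch if has_glyph(ch) else fallback for ch in text)

# A font predating Unicode 9 would report no glyph for the new emoji:
old_font = lambda ch: ch not in UNICODE_9_SAMPLES.values()
```

A keyboard or messaging app would use the same check before offering the new emoji, so users on older fonts never see tofu boxes.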

How to get the update:

The easiest way to get this and later preview updates is by enrolling your devices in the Android Beta Program. Just visit g.co/androidbeta and opt-in your eligible Android phone or tablet -- you’ll soon receive this (and later) preview updates over-the-air. If you’ve already enrolled your device, you’ll receive the update shortly, no action is needed on your part. You can also download and flash this update manually. Developer Preview 2 is intended for developers and not as a daily driver; this build is not yet optimized for performance and battery life.

The N Developer Preview is currently available for Nexus 6, Nexus 5X, Nexus 6P, Nexus 9, and Pixel C devices, as well as General Mobile 4G [Android One] devices. For Nexus Player, the update to Developer Preview 2 will follow the other devices by several days.

To build and test apps with Developer Preview 2, you need to use Android Studio 2.1 -- the same version that was required for Developer Preview 1. You’ll need to check for SDK components updates (including build tools and emulator system images) for Developer Preview 2 -- see here for details.

Thanks so much for all of your feedback so far. Please continue to share feedback, either in the N Developer Preview issue tracker or in the N preview community. The sooner we’re able to get your feedback, the more of it we will be able to incorporate in the next release of Android.

Categories: Programming