Software Development Blogs: Programming, Software Testing, Agile, Project Management


Feed aggregator

30 Problems That Affect Software Projects

Herding Cats - Glen Alleman - Mon, 04/18/2016 - 15:10

From Estimating Software Costs: Bringing Realism To Estimating, 2nd Edition.

  1. Initial requirements are seldom more than 50 percent complete
  2. Requirements grow at about 2 percent per calendar month during development
  3. About 20 percent of initial requirements are delayed until a second release
  4. Finding and fixing bugs is the most expensive software activity
  5. Creating paper documents is the second most expensive software activity
  6. Coding is the third most expensive software activity
  7. Meetings and discussion are the fourth most expensive activity
  8. Most forms of testing are less than 30 percent efficient in finding bugs
  9. Most forms of testing touch less than 50 percent of the code being tested
  10. There are more defects in requirements and design than in source code
  11. There are more defects in test cases than in the software itself
  12. Defects in requirements, design and code average 5.0 per function point
  13. Total defect-removal efficiency before release averages only about 80 percent
  14. About 15 percent of software defects are delivered to customers
  15. Delivered defects are expensive and cause customer dissatisfaction
  16. About 5 percent of modules in applications will contain 50 percent of all defects
  17. About 7 percent of all defect repairs will accidentally inject new defects
  18. Software reuse is only effective for materials that approach zero defects
  19. About 5 percent of software outsource contracts end up in litigation
  20. About 35 percent of projects greater than 10,000 function points will be canceled
  21. About 50 percent of projects greater than 10,000 function points will be one year late
  22. The failure mode for most cost estimates is to be excessively optimistic.
  23. Productivity rates in the U.S. are about 10 function points per staff month
  24. Assignment scopes for development are about 150 function points
  25. Assignment scopes for maintenance are about 750 function points
  26. Development costs about $1,200 per function point in the U.S.
  27. Maintenance costs about $150 per function point per calendar year
  28. After delivery applications grow at about 7 percent per calendar year during use
  29. Average defect repair rates are about ten bugs or defects per month
  30. Programmers need about ten days of annual training to stay current
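
To make a couple of these concrete, here is a quick back-of-the-envelope sketch of items 1 and 2 (the project size and schedule are hypothetical, not from the book):

```python
# Hypothetical project: 1,000 function points of initial scope, 12 months of
# development. Items 1 and 2 above say requirements start ~50 percent complete
# and grow ~2 percent per calendar month.
initial_fp = 1000
known_at_start = 0.50
monthly_growth = 0.02
months = 12

# Scope actually understood on day one (item 1):
understood_at_start = initial_fp * known_at_start

# Scope at delivery after monthly compounding growth (item 2):
scope_at_delivery = initial_fp * (1 + monthly_growth) ** months

print(round(understood_at_start))  # 500
print(round(scope_at_delivery))    # 1268
```

Even at these modest rates, roughly a quarter of the delivered scope did not exist in the initial requirements, which is one reason estimates anchored on day-one scope tend to be optimistic (item 22).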

Agile addresses 1, 2, 3, 4, 5, and 6 well. 

So if these are the causes of project difficulties - and there may be others identified since this publication - what are the fixes?

 

Categories: Project Management

6 Red Flags That You Need To Start Cutting Your Losses

Making the Complex Simple - John Sonmez - Mon, 04/18/2016 - 15:00

There’s a stigma in our society about quitting that causes us to cling on to projects long after they should be let go. Quitters never win. Quitting lasts forever. Champions never quit. You’re never a loser till you quit trying. No one wants to lose. No one wants to be a loser. Well, at least […]

The post 6 Red Flags That You Need To Start Cutting Your Losses appeared first on Simple Programmer.

Categories: Programming

What Is Management in the Context of Agile

Herding Cats - Glen Alleman - Sun, 04/17/2016 - 23:07

There's a meme going around in some parts of agile that management is inhumane. This is an extreme view, of course, likely informed by anecdotal experience with Bad Management or, worse, a lack of actual management experience.

Managing in the presence of Agile is not the same as managing in traditional domains. The platitude is Stewardship, but that offers few of the actionable outcomes needed to move the work efforts along toward getting value delivered to customers in exchange for money and time.

One view of management in Agile can be informed by Governance of the work efforts. Here's a version of Governance, from "Agile Governance Theory: conceptual development," by Alexandre J. H. de O. Luna, Philippe Kruchten, and Hermano Moura.

Screen Shot 2016-04-17 at 4.03.36 PM

 

Related articles:

- Agile Software Development in the DOD
- Deadlines Always Matter
- Risk Management is How Adults Manage Projects
- Architecture-Centered ERP Systems in the Manufacturing Domain
- IT Risk Management
- Why Guessing is not Estimating and Estimating is not Guessing

Categories: Project Management

SPaMCAST 390 – Vinay Patankar, Agile Value and Lean Start-ups

Software Process and Measurement Cast - Sun, 04/17/2016 - 22:00

SPaMCAST Logo

http://www.spamcast.net

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 390 features our interview with Vinay Patankar. We discussed his start-up, Process Street, and the path Vinay and his partner took to embrace agile because it delivered value, not just because it was cool. We also discussed how Agile fits or helps in a lean start-up and the lessons Vinay wants to pass on to others.

Vinay’s Bio:

Vinay Patankar is the co-founder and CEO of Process Street, the simplest way to manage your team's recurring processes and workflows. Easily set up new clients, onboard employees and manage content publishing with Process Street.

Process Street is a venture-backed SaaS company and AngelPad alum with numerous Fortune 500 clients.

When not running Process Street, Vinay loves to travel and spent 4 years as a digital nomad roaming the globe running different internet businesses. He enjoys food, fitness and talking shop.

Twitter: @vinayp10

Re-Read Saturday News

We continue the re-read of Commitment – Novel About Managing Project Risk by Maassen, Matts, and Geary. Buy your copy today and read along (use the link to support the podcast). This week we tackle Chapter Three, which explores visualization, knowledge options and focusing on outcomes. Visit the Software Process and Measurement Blog to catch up on past installments of Re-Read Saturday.

Upcoming Events

I will be at the QAI Quest 2016 in Chicago beginning April 18th through April 22nd.  I will be teaching a full day class on Agile Estimation on April 18 and presenting Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing! on Wednesday, April 20th.  Register now!

I will be speaking at the CMMI Institute’s Capability Counts 2016 Conference in Annapolis, Maryland, May 10th and 11th. Register Now!

Next SPaMCAST

The next three weeks will feature mix tapes with the “if you could fix two things” questions from the top downloads of 2007/08, 2009 and 2010.  I will be doing a bit of vacationing and all the while researching, writing content and editing new interviews for the sprint to episode 400 and beyond.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Re-Read Saturday: Commitment – Novel about Managing Project Risk, Part 2

Picture of the book cover


This week we continue the re-read of Commitment – Novel about Managing Project Risk by Olav Maassen, Chris Matts and Chris Geary (2nd edition, 2016) with Chapter 3, which introduces a number of Agile and lean techniques.  Rose follows the path of the hero as she crosses the threshold into a new world, signifying her commitment to the journey, and she begins to identify allies.

As a reminder, Chapter 2 ended with the management team telling Rose that the plan she presented was no different from the one David had presented.  Management fired David after he presented his plan to complete the project, and the management committee is ready to shut down the project and terminate the lot.  Chapter 3 begins with Rose’s cry for help to Lily. Lily listens and convinces Rose to ask the mentors for help. She finally calls the number on the business card Lily gave her in Chapter 1 to set up a meeting with a mentor. As Rose leaves the office (with everyone laboring into the evening) to meet a mentor at the Cantina of Learning, no one even says goodbye. Rose is a pariah even to her own team.

There’s a call-out in one of Lily’s blogs that describes knowledge options and relationships during the transition from Rose leaving work to arriving at the Cantina of Learning. Knowledge options are where you know just enough about a subject to know how to apply it and how long it will take to get up to speed when needed. With this knowledge, a practitioner can assess a situation, assess his or her options and then commit based on context. There’s an old saying: “if the only tool you own is a hammer, then everything looks like a nail.” That is an example of a practitioner not having (or not understanding that there are) knowledge options. Having knowledge options allows individuals and teams to pivot at the last possible moment, when the option has the highest value. Knowledge options help you make decisions with the best possible knowledge, which improves the outcome of the decision. Knowledge options are an acknowledgment that being consciously incompetent has value if you know how long it takes to move from consciously incompetent to consciously competent. For example, if a new piece of functionality required learning Ruby, and you knew it would take two months of study to become proficient, you could decide when that piece of work could be done based on your current capability in Ruby.
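
The Ruby example reduces to simple scheduling arithmetic; a minimal sketch (the function and numbers here are illustrative, not from the book):

```python
def earliest_completion(ramp_up_months: float, build_months: float,
                        already_proficient: bool) -> float:
    """Months until a piece of work can be delivered, given how long it takes
    to move from consciously incompetent to consciously competent."""
    ramp_up = 0 if already_proficient else ramp_up_months
    return ramp_up + build_months

# Two months to become proficient in Ruby, one month to build the feature:
print(earliest_completion(2, 1, already_proficient=False))  # 3
print(earliest_completion(2, 1, already_proficient=True))   # 1
```

Knowing the ramp-up time is what lets you commit to a date even for work you cannot do today.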

The blog entry goes on to provide advice on finding the right mentor. The advice is to find an author who has published a good book on the subject and then identify the community that generally exists around the author’s work. The practitioners who are active in that community often have the time and experience to act as mentors. I use a variant of this process to find and engage people for Software Process and Measurement Cast interviews.

Once Rose is ensconced in the Cantina of Learning, the journey to redemption begins. Rose meets Jon, who tells Rose about work visualization and stand-ups.  Visualization helps the team know where they are and how they can coordinate their own activities. The process of visualizing the flow of work identifies all the work in process (WIP).  A side benefit of identifying WIP is that you can see when a person is working on more than one piece of work at a time.  Working on more than one thing at a time is multitasking. Multitasking actually hides work sitting in a queue (waiting), and work that is sitting slows the process of delivery. Rose’s role is to help unblock tasks and to find and break dependencies between tasks; dependencies are blockages waiting to happen. Visualization is perhaps one of the most basic techniques in Agile and lean.  When work is visualized, teams can self-organize and self-manage.  The value of visualizing work is huge; however, the urgency around the concept has waned as tools have replaced card walls and paper Kanban boards, which reduces the effectiveness of the technique.
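
The WIP point can be illustrated with a tiny sketch: simply counting in-progress cards per person on a visualized board exposes multitasking (the board and names below are hypothetical):

```python
from collections import Counter

# Each tuple is an in-progress card on the visualized board and its assignee.
in_progress = [
    ("Build login page", "dana"),
    ("Fix payment bug", "sam"),
    ("Write API docs", "sam"),
    ("Migrate database", "sam"),
]

# Count cards in process per person; anyone with more than one is multitasking.
wip = Counter(owner for _, owner in in_progress)
multitaskers = {person: count for person, count in wip.items() if count > 1}
print(multitaskers)  # {'sam': 3} -- at most one card is moving; the rest are waiting
```

The board makes visible what a status report hides: two of sam's three cards are really sitting in a queue.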

The chapter’s action culminates with Jon introducing Rose to Liz.  Liz builds on visualization by reinforcing the ideas behind real options.  Liz points out that each option has three possible solutions.  Using options as the decision process is a tool to manage uncertainty. The three possible solutions are:

  1. Postpone the decision and collect more information.
  2. Choose the option that’s easiest to change.
  3. Invest in a different approach that allows change to be easier.

There is no single best solution.  Projects use all three solutions to maximize value based on the context.  Visualization provides a platform for the team to know where work is in the development life cycle.  Using options to evaluate and generate alternate paths for a decision reduces uncertainty. The combination of these tools gives the project the power to change direction quickly to maximize the value it can deliver. It is important that decisions are made consciously, which means examining each decision with an eye to which option you intend to exercise.

In order to use the new tools, Rose realizes she needs a coach.  Liz points to a person she knows who happens to be walking by the table as the type of person Rose should hire. It turns out to be Gary, one of the leads currently on her team. Gary agrees to act as Rose’s real options coach.

Chapter 3 ends with Rose writing to Susan and a blog entry from Liz.  The letter introduces the concept of technical debt using the metaphor of a group of roommates who let dishes build up in a dirty kitchen.  After a time, it becomes so hard to clean the kitchen that no one even makes the effort.  Paying down technical debt (refactoring) is important because it increases efficiency and makes it easier to respond quickly to change.  Liz’s blog discusses the value of change and reprioritization of the backlog by the customer (or product owner).  The ability to periodically change and reprioritize the backlog is a form of an option. Agile and lean techniques provide a set of tools so the team works on the top priorities, delivers functionality that generates feedback and allows the customer to exercise his or her options by reprioritizing based on functional code.


Categories: Process Management


Learn about Android Development Patterns over Coffee with Joanna Smith

Google Code Blog - Fri, 04/15/2016 - 19:25

Posted by Laurence Moroney, developer advocate

One of the great benefits of Android development is in the flexibility provided by the sheer number of APIs available in the framework, support libraries, Google Play services and elsewhere. While variety is the spice of life, it can lead to some tough decisions when developing -- and good guidance about repeatable patterns for development tasks is always welcome!

With that in mind, Joanna Smith and Ian Lake started Android Development Patterns to help developers not just know how to use an API but also which APIs to choose to begin with.

You can learn more about Android Development Patterns by watching the videos on YouTube, reading this blog post, or checking out the Google Developers page on Medium.

Categories: Programming

LDAP server setup and client authentication

Agile Testing - Grig Gheorghiu - Fri, 04/15/2016 - 19:24
We recently bought a CloudBees Jenkins Enterprise license at work, and I wanted to tie the user accounts to a directory service. I first tried to set up Jenkins authentication via the AWS Directory Service, hoping it would be pretty much like talking to an Active Directory server. That proved to be impossible to set up, at least for me. I also tried to have an LDAP proxy server talking to the AWS Directory Service and have Jenkins authenticate against the LDAP proxy. No dice. I ended up setting up a good old-fashioned LDAP server and managed to get Jenkins working with it. Here are some of my notes.

OpenLDAP server setup
I followed this excellent guide from Digital Ocean. The server was an Ubuntu 14.04 EC2 instance in my case. What follows in terms of the server setup is taken almost verbatim from the DO guide.

Set the hostname

# hostnamectl set-hostname my-ldap-server

Edit /etc/hosts and make sure this entry exists:
LOCAL_IP_ADDRESS my-ldap-server.mycompany.com my-ldap-server
(it makes a difference that the FQDN is the first entry in the line above!)

Make sure the following types of names are returned when you run hostname with different options:


# hostname
my-ldap-server
# hostname -f
my-ldap-server.mycompany.com
# hostname -d
mycompany.com

Install slapd

# apt-get install slapd ldap-utils
# dpkg-reconfigure slapd
(here you specify the LDAP admin password)

Install the SSL Components

# apt-get install gnutls-bin ssl-cert

Create the CA Template
# mkdir /etc/ssl/templates
# vi /etc/ssl/templates/ca_server.conf
# cat /etc/ssl/templates/ca_server.conf
cn = LDAP Server CA
ca
cert_signing_key

Create the LDAP Service Template

# vi /etc/ssl/templates/ldap_server.conf
# cat /etc/ssl/templates/ldap_server.conf
organization = "My Company"
cn = my-ldap-server.mycompany.com
tls_www_server
encryption_key
signing_key
expiration_days = 3650

Create the CA Key and Certificate

# certtool -p --outfile /etc/ssl/private/ca_server.key
# certtool -s --load-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ca_server.conf --outfile /etc/ssl/certs/ca_server.pem
Create the LDAP Service Key and Certificate

# certtool -p --sec-param high --outfile /etc/ssl/private/ldap_server.key
# certtool -c --load-privkey /etc/ssl/private/ldap_server.key --load-ca-certificate /etc/ssl/certs/ca_server.pem --load-ca-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ldap_server.conf --outfile /etc/ssl/certs/ldap_server.pem

Give OpenLDAP Access to the LDAP Server Key

# usermod -aG ssl-cert openldap
# chown :ssl-cert /etc/ssl/private/ldap_server.key
# chmod 640 /etc/ssl/private/ldap_server.key

Configure OpenLDAP to Use the Certificate and Keys
IMPORTANT NOTE: in modern versions of slapd, configuring the server is not done via slapd.conf anymore. Instead, you put together ldif files and run LDAP client utilities such as ldapmodify against the local server. The Distinguished Name of the entity you want to modify in terms of configuration is generally dn: cn=config but it can also be the LDAP database dn: olcDatabase={1}hdb,cn=config.
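
As a sketch of the shape of these ldif documents, the `changetype: modify` records used against `cn=config` can be assembled programmatically (illustrative helper only, not part of the setup):

```python
def ldif_modify(dn, adds):
    """Build an LDIF changetype:modify document that adds attribute/value
    pairs, separating the add blocks with '-' lines as LDIF requires."""
    lines = [f"dn: {dn}", "changetype: modify"]
    blocks = [f"add: {attr}\n{attr}: {value}" for attr, value in adds]
    lines.append("\n-\n".join(blocks))
    return "\n".join(lines) + "\n"

doc = ldif_modify("cn=config", [
    ("olcTLSCACertificateFile", "/etc/ssl/certs/ca_server.pem"),
    ("olcTLSCertificateFile", "/etc/ssl/certs/ldap_server.pem"),
])
print(doc)
```

The output has the same dn / changetype / add / `-` structure as the addcerts.ldif file used in this section.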
# vi addcerts.ldif
# cat addcerts.ldif
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/ca_server.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap_server.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap_server.key

# ldapmodify -H ldapi:// -Y EXTERNAL -f addcerts.ldif
# service slapd force-reload
# cp /etc/ssl/certs/ca_server.pem /etc/ldap/ca_certs.pem
# vi /etc/ldap/ldap.conf
Set TLS_CACERT to the following:
TLS_CACERT /etc/ldap/ca_certs.pem
# ldapwhoami -H ldap:// -x -ZZ
anonymous

Force Connections to Use TLS
Change olcSecurity attribute to include 'tls=1':

# vi forcetls.ldif
# cat forcetls.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcSecurity
olcSecurity: tls=1

# ldapmodify -H ldapi:// -Y EXTERNAL -f forcetls.ldif
# service slapd force-reload
# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL dn
(shouldn’t work)
# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL -Z dn
(should work)

Disallow anonymous bind
Create user binduser to be used for LDAP searches:


# vi binduser.ldif
# cat binduser.ldif
dn: cn=binduser,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: binduser
uid: binduser
uidNumber: 2000
gidNumber: 200
homeDirectory: /home/binduser
loginShell: /bin/bash
gecos: suser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1

# ldapadd -x -W -D "cn=admin,dc=mycompany,dc=com" -Z -f binduser.ldif
Enter LDAP Password:
adding new entry "cn=binduser,dc=mycompany,dc=com"

Change the olcDisallows attribute to include bind_anon:


# vi disallow_anon_bind.ldif
# cat disallow_anon_bind.ldif
dn: cn=config
changetype: modify
add: olcDisallows
olcDisallows: bind_anon

# ldapmodify -H ldapi:// -Y EXTERNAL -ZZ -f disallow_anon_bind.ldif
# service slapd force-reload
Also disable anonymous access to frontend:

# vi disable_anon_frontend.ldif
# cat disable_anon_frontend.ldif
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
add: olcRequires
olcRequires: authc

# ldapmodify -H ldapi:// -Y EXTERNAL -f disable_anon_frontend.ldif
# service slapd force-reload

Create organizational units and users
Create helper scripts:

# cat add_ldap_ldif.sh
#!/bin/bash

LDIF=$1

ldapadd -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF

# cat modify_ldap_ldif.sh
#!/bin/bash

LDIF=$1

ldapmodify -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF

# cat set_ldap_pass.sh
#!/bin/bash

USER=$1
PASS=$2

ldappasswd -s $PASS -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -x "uid=$USER,ou=users,dc=mycompany,dc=com" -Z
Create ‘mypeople’ organizational unit:


# cat add_ou_mypeople.ldif
dn: ou=mypeople,dc=mycompany,dc=com
objectclass: organizationalunit
ou: users
description: all users
# ./add_ldap_ldif.sh add_ou_mypeople.ldif
Create 'groups' organizational unit:


# cat add_ou_groups.ldif
dn: ou=groups,dc=mycompany,dc=com
objectclass: organizationalunit
ou: groups
description: all groups

# ./add_ldap_ldif.sh add_ou_groups.ldif
Create users (note the shadow attributes set to -1, which means they will be ignored):


# cat add_user_myuser.ldif
dn: uid=myuser,ou=mypeople,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: myuser
uid: myuser
uidNumber: 2001
gidNumber: 201
homeDirectory: /home/myuser
loginShell: /bin/bash
gecos: myuser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1
# ./add_ldap_ldif.sh add_user_myuser.ldif
# ./set_ldap_pass.sh myuser MYPASS
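Typing out one LDIF file per user gets tedious quickly. Here is a sketch of a generator that emits the same kind of entry for any user (the `make_user_ldif` name is mine; the ou=mypeople base, shadow defaults, and gid number just follow the conventions used above):

```shell
#!/bin/sh
# Hypothetical helper: emit a posixAccount/shadowAccount LDIF entry
# shaped like add_user_myuser.ldif above.
# Usage: make_user_ldif USERNAME UIDNUMBER GIDNUMBER
make_user_ldif() {
  user=$1; uidn=$2; gidn=$3
  cat <<EOF
dn: uid=${user},ou=mypeople,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ${user}
uid: ${user}
uidNumber: ${uidn}
gidNumber: ${gidn}
homeDirectory: /home/${user}
loginShell: /bin/bash
gecos: ${user}
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1
EOF
}

make_user_ldif myuser 2001 201
```

The output can be piped straight to the helper above, e.g. `make_user_ldif jdoe 2002 201 > add_user_jdoe.ldif && ./add_ldap_ldif.sh add_user_jdoe.ldif`.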

Enable LDAPS
In /etc/default/slapd set:

SLAPD_SERVICES="ldap:/// ldaps:/// ldapi:///"

Enable debugging
This was a life saver when it came to troubleshooting connection issues from clients such as Jenkins or other Linux boxes. To enable full debug output, set olcLogLevel to -1:

# cat enable_debugging.ldif
dn: cn=config
changetype: modify
add: olcLogLevel
olcLogLevel: -1
# ldapadd -H ldapi:// -Y EXTERNAL -f enable_debugging.ldif
# service slapd force-reload
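Full debug output is verbose enough to fill a disk on a busy server, so once the problem is solved you will probably want to dial it back. A sketch of the reverse change (stats is a commonly used moderate level; adjust to taste):

```
# cat disable_debugging.ldif
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats

# ldapmodify -H ldapi:// -Y EXTERNAL -f disable_debugging.ldif
# service slapd force-reload
```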

Configuring Jenkins LDAP authentication
Verify LDAPS connectivity from Jenkins to LDAP server
In my case, the Jenkins server is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the Jenkins box pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com
I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the Jenkins server:
# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.
Set up LDAPS client on Jenkins server (StartTLS does not work with the Jenkins LDAP plugin!)
# apt-get install ldap-utils
IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from LDAP server as /etc/ldap/ca_certs.pem on Jenkins server and then:
# vi /etc/ldap/ldap.conf
set:
TLS_CACERT /etc/ldap/ca_certs.pem
Add LDAP certificates to Java keystore used by Jenkins
As user jenkins:
$ mkdir .keystore
$ cp /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/security/cacerts .keystore/
(you may need to customize the above line in terms of the path to the cacerts file -- it is the one under your JAVA_HOME)

$ keytool --keystore /var/lib/jenkins/.keystore/cacerts --import --alias my-ldap-server.mycompany.com:636 --file /etc/ldap/ca_certs.pem
Enter keystore password: changeit
Owner: CN=LDAP Server CA
Issuer: CN=LDAP Server CA
Serial number: 570bddb0
Valid from: Mon Apr 11 17:24:00 UTC 2016 until: Tue Apr 11 17:24:00 UTC 2017
Certificate fingerprints:
....
Extensions:
....
Trust this certificate? [no]:  yes
Certificate was added to keystore
In /etc/default/jenkins, set JAVA_ARGS to:
JAVA_ARGS="-Djava.awt.headless=true -Djavax.net.ssl.trustStore=/var/lib/jenkins/.keystore/cacerts -Djavax.net.ssl.trustStorePassword=changeit"  
As root, restart jenkins:

# service jenkins restart
Jenkins settings for LDAP plugin
This took me a while to get right. The trick was to set the rootDN to dc=mycompany, dc=com and the userSearchBase to ou=mypeople (or to whatever name you gave to your users' organizational unit). I also tried to get LDAP groups to work but wasn't very successful.
Here is the LDAP section in /var/lib/jenkins/config.xml:
 <securityRealm class="hudson.security.LDAPSecurityRealm" plugin="ldap@1.11">
   <server>ldaps://my-ldap-server.mycompany.com:636</server>
   <rootDN>dc=mycompany,dc=com</rootDN>
   <inhibitInferRootDN>true</inhibitInferRootDN>
   <userSearchBase>ou=mypeople</userSearchBase>
   <userSearch>uid={0}</userSearch>
   <groupSearchBase>ou=groups</groupSearchBase>
   <groupMembershipStrategy class="jenkins.security.plugins.ldap.FromGroupSearchLDAPGroupMembershipStrategy">
     <filter>member={0}</filter>
   </groupMembershipStrategy>
   <managerDN>cn=binduser,dc=mycompany,dc=com</managerDN>
   <managerPasswordSecret>JGeIGFZwjipl6hJNefTzCwClRcLqYWEUNmnXlC3AOXI=</managerPasswordSecret>
   <disableMailAddressResolver>false</disableMailAddressResolver>
   <displayNameAttributeName>displayname</displayNameAttributeName>
   <mailAddressAttributeName>mail</mailAddressAttributeName>
   <userIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>
   <groupIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>
 </securityRealm>

At this point, I was able to create users on the LDAP server and have them log in to Jenkins. With CloudBees Jenkins Enterprise, I was also able to use the Role-Based Access Control and Folder plugins in order to create project-specific folders and folder-specific groups specifying various roles. For example, a folder MyProjectNumber1 would have a Developers group defined inside it, as well as an Administrators group and a Readers group. These groups would be associated with fine-grained roles that only allow certain Jenkins operations for each group.
I tried to have these groups read by Jenkins from the LDAP server, but was unsuccessful. Instead, I had to populate the folder-specific groups in Jenkins with user names that were at least still defined in LDAP, so that was half a win. I am still hoping to define the groups themselves in LDAP, but for now this workaround works for me.
Allowing users to change their LDAP password
This was again a seemingly easy task but turned out to be pretty complicated. I set up another small EC2 instance to act as a jumpbox for users who want to change their LDAP password.
The jumpbox is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the jumpbox pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com
I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the jumpbox:
# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.
# apt-get install ldap-utils
IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from LDAP server as /etc/ldap/ca_certs.pem on the jumpbox and then:
# vi /etc/ldap/ldap.conf
set:
TLS_CACERT /etc/ldap/ca_certs.pem
Next, I followed this LDAP Client Authentication guide from the Ubuntu documentation.
# apt-get install ldap-auth-client nscd
Here I had to answer the setup questions on LDAP server FQDN, admin DN and password, and bind user DN and password. 
# auth-client-config -t nss -p lac_ldap
I edited /etc/auth-client-config/profile.d/ldap-auth-config and set:
[lac_ldap]
nss_passwd=passwd: ldap files
nss_group=group: ldap files
nss_shadow=shadow: ldap files
nss_netgroup=netgroup: nis
I edited /etc/ldap.conf and made sure the following entries were there:
base dc=mycompany,dc=com
uri ldaps://my-ldap-server.mycompany.com
binddn cn=binduser,dc=mycompany,dc=com
bindpw BINDUSERPASS
rootbinddn cn=admin,dc=mycompany,dc=com
port 636
ssl on
tls_cacertfile /etc/ldap/ca_certs.pem
tls_cacertdir /etc/ssl/certs
I allowed password-based ssh logins to the jumpbox by editing /etc/ssh/sshd_config and setting:
PasswordAuthentication yes
# service ssh restart

IMPORTANT: On the LDAP server, I had to allow users to change their own password by adding this ACL:
# cat set_userpassword_acl.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userpassword by dn="cn=admin,dc=mycompany,dc=com" write by self write by anonymous auth by users none
Then:
# ldapmodify -H ldapi:// -Y EXTERNAL -f set_userpassword_acl.ldif

At this point, users were able to log in via ssh to the jumpbox using a pre-set LDAP password, and change their LDAP password by using the regular Unix 'passwd' command.
I am still fine-tuning the LDAP setup on all fronts: the LDAP server, the LDAP client jumpbox and the Jenkins server. The setup I have so far gives users a single sign-on account for logging in to Jenkins. One of my next steps is to use the same LDAP accounts for authentication and access control in MySQL and other services.

Stuff The Internet Says On Scalability For April 15th, 2016

Hey, it's HighScalability time:


What happens when Beyoncé meets eCommerce? Ring the alarm.

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • $14 billion: one day of purchases on Alibaba; 47 megawatts: Microsoft's new data center space for its MegaCloud; 50%: do not speak English on Facebook; 70-80%: of all Intel servers shipped will be deployed in large scale datacenters by 2025; 1024 TB: of storage for 3D imagery currently in Google Earth; $7: WeChat average revenue per user; 1 trillion: new trees; 

  • Quotable Quotes:
    • @PicardTips: Picard management tip: Know your audience. Display strength to Klingons, logic to Vulcans, and opportunity to Ferengi.
    • Mark Burgess: Microservices cannot be a panacea. What we see clearly from cities is that they can be semantically valuable, but they can be economically expensive, scaling with superlinear cost. 
    • ethanpil: I'm crying. Remember when messaging was built on open platforms and standards like XMPP and IRC? The golden year(s?) when Google Talk worked with AIM and anyone could choose whatever client they preferred?
    • @acmurthy: @raghurwi from @Microsoft talking about scaling Hadoop YARN to 100K+ clusters. Yes, 100,000 
    • @ryanbigg: Took a Rails view rendering time from ~300ms to 50ms. Rewrote it in Elixir: it’s now 6-7ms. #MyElixirStatus
    • Dmitriy Samovskiy: In the past, our [Operations] primary purpose in life was to build and babysit production. Today operations teams focus on scale.
    • @Agandrau: Sir Tim Berners-Lee thinks that if we can predict what the internet will look like in 20 years, than we are not creative enough. #www2016
    • @EconBizFin: Apple and Tesla are today’s most talked-about companies, and the most vertically integrated
    • Kevin Fishner: Nomad was able to schedule one million containers across 5,000 hosts in Google Cloud in under five minutes.
    • David Rosenthal: The Web we have is a huge success disaster. Whatever replaces it will be at least as big a success disaster. Lets not have the causes of the disaster be things we knew about all along.
    • Kurt Marko: The days of homogeneous server farms with racks and racks of largely identical systems are over.
    • Jonathan Eisen: This is humbling, we know virtually nothing right now about the biology of most of the tree of life.
    • @adrianco: Google has a global network IP model (more convenient), AWS regional (more resilient). Choices...
    • @jason_kint: Stupid scary stats in this. Ad tech responsible for 70% of server calls and 50% of your mobile data plan.
    • apy: I found myself agreeing with many of Pike’s statements but then not understanding how he wound up at Go. 
    • @TomBirtwhistle: The head of Apple Music claims YouTube accounts for 40% of music consumption yet only 4% of online industry revenue 
    • @0x604: Immutable Laws of Software: Anyone slower than you is over-engineering, anyone faster is introducing technical debt
    • surrealvortex: I'm currently using flame graphs at work. If your application hasn't been profiled recently, you'll usually get lots of improvement for very little effort. Some 15 minutes of work improved CPU usage of my team's biggest fleet by ~40%. Considering we scaled up to 1500 c3.4xlarge hosts at peak in NA alone on that fleet, those 15 minutes kinda made my month :)
    • @cleverdevil: Did you know that Virtual Machines spin up in the opposite direction in the southern hemisphere? Little known fact.
    • ksec: Yes, and I think Intel is not certain to win, just much more likely. The Power9 is here is targeting 2H 2017 release. Which is actually up against Intel Skylake/Kabylake Xeon Purley Platform in similar timeframe.
    • @jon_moore: Platforms make promises; constraints are the contracts that allow platforms to do their jobs. #oreillysacon
    • @CBILIEN: Scaling data platforms:compute and storage have to be scaled independently #HS16Dublin

  • A morning reverie. Chopped for programmers. Call it Crashed. You have three rounds with four competitors. Each round is an hour. The competitors must create a certain kind of program, say a game, or a productivity tool, anything really, using a basket of three selected technologies, say Google Cloud, wit.ai, and Twilio. Plus the programmer can choose to use any other technologies from the pantry that is the Internet. The program can take any form the programmer chooses. It could be a web app, iOS or Android app, an Alexa skill, a Slack bot, anything, it's up to the creativity of the programmer. The program is judged by an esteemed panel based on creativity, quality, and how well the basket technologies are highlighted. When a programmer loses a round they have been Crashed. The winner becomes the Crashed Champion. Sound fun?

  • Jeff Dean when talking about deep learning at Google makes it clear a big part of their secret sauce is being able to train neural nets at scale using their bespoke distributed infrastructure. Now Google has released TensorFlow with distributed computing support. It's not clear if this is the same infrastructure Google uses internally, but it seems to work: using the distributed trainer, we trained the Inception network to 78% accuracy in less than 65 hours using 100 GPUs. Also, the tensorflow playground is a cool way to visualize what's going on inside.

  • Christopher Meiklejohn with an interesting history of the Remote Procedure Call. It started way back in 1974: RFC 674, “Procedure Call Protocol Documents, Version 2”. RFC 674 attempts to define a general way to share resources across all 70 nodes of the Internet

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Here's The Programming Game You Never Asked For

Coding Horror - Jeff Atwood - Fri, 04/15/2016 - 10:48

You know what's universally regarded as un-fun by most programmers? Writing assembly language code.

As Steve McConnell said back in 1994:

Programmers working with high-level languages achieve better productivity and quality than those working with lower-level languages. Languages such as C++, Java, Smalltalk, and Visual Basic have been credited with improving productivity, reliability, simplicity, and comprehensibility by factors of 5 to 15 over low-level languages such as assembly and C. You save time when you don't need to have an awards ceremony every time a C statement does what it's supposed to.

Assembly is a language where, for performance reasons, every individual command is communicated in excruciating low level detail directly to the CPU. As we've gone from fast CPUs, to faster CPUs, to multiple absurdly fast CPU cores on the same die, to "gee, we kinda stopped caring about CPU performance altogether five years ago", there hasn't been much need for the kind of hand-tuned performance you get from assembly. Sure, there are the occasional heroics, and they are amazing, but in terms of Getting Stuff Done, assembly has been well off the radar of mainstream programming for probably twenty years now, and for good reason.

So who in their right mind would take up tedious assembly programming today? Yeah, nobody. But wait! What if I told you your Uncle Randy had just died and left behind this mysterious old computer, the TIS-100?

And what if I also told you the only way to figure out what that TIS-100 computer was used for – and what good old Uncle Randy was up to – was to read a (blessedly short 14 page) photocopied reference manual and fix its corrupted boot sequence … using assembly language?

Well now, by God, it's time to learn us some assembly and get to the bottom of this mystery, isn't it? As its creator notes, this is the assembly language programming game you never asked for!

I was surprised to discover my co-founder Robin Ward liked TIS-100 so much that he not only played the game (presumably to completion) but wrote a TIS-100 emulator in C. This is apparently the kind of thing he does for fun, in his free time, when he's not already working full time with us programming Discourse. Programmers gotta … program.

Of course there's a long history of programming games. What makes TIS-100 unique is the way it fetishizes assembly programming, while most programming games take it a bit easier on you by easing you in with general concepts and simpler abstractions. But even "simple" programming games can be quite difficult. Consider one of my favorites on the Apple II, Rocky's Boots, and its sequel, Robot Odyssey. I loved this game, but in true programming fashion it was so difficult that finishing it in any meaningful sense was basically impossible:

Let me say: Any kid who completes this game while still a kid (I know only one, who also is one of the smartest programmers I’ve ever met) is guaranteed a career as a software engineer. Hell, any adult who can complete this game should go into engineering. Robot Odyssey is the hardest damn “educational” game ever made. It is also a stunning technical achievement, and one of the most innovative games of the Apple IIe era.

Visionary, absurdly difficult games such as this gain cult followings. It is the game I remember most from my childhood. It is the game I love (and despise) the most, because it was the hardest, the most complex, the most challenging. The world it presented was like being exposed to Plato’s forms, a secret, nonphysical realm of pure ideas and logic. The challenge of the game—and it was one serious challenge—was to understand that other world. Programmer Thomas Foote had just started college when he picked up the game: “I swore to myself,” he told me, “that as God is my witness, I would finish this game before I finished college. I managed to do it, but just barely.”

I was happy dinking around with a few robots that did a few things, got stuck, and moved on to other games. I got a little turned off by the way it treated programming as electrical engineering; messing around with a ton of AND OR and NOT gates was just not my jam. I was already cutting my teeth on BASIC by that point and I sensed a level of mastery was necessary here that I probably didn't have and I wasn't sure I even wanted.

I'll take a COBOL code listing over that monstrosity any day of the week. Perhaps Robot Odyssey was so hard because, in the end, it was a bare metal CPU programming simulation, like TIS-100.

A more gentle example of a modern programming game is Tomorrow Corporation's excellent Human Resource Machine.

It has exactly the irreverent sense of humor you'd expect from the studio that built World of Goo and Little Inferno, both excellent and highly recommendable games in their own right. If you've ever wanted to find out if someone is truly interested in programming, recommend this game to them and see. It starts with only 2 instructions and slowly widens to include 11. Corporate drudgery has never been so … er, fun?

I'm thinking about this because I believe there's a strong connection between programming games and being a talented software engineer. It's that essential sense of play, the idea that you're experimenting with this stuff because you enjoy it, and you bend it to your will out of the sheer joy of creation more than anything else. As I once said:

Joel implied that good programmers love programming so much they'd do it for no pay at all. I won't go quite that far, but I will note that the best programmers I've known have all had a lifelong passion for what they do. There's no way a minor economic blip would ever convince them they should do anything else. No way. No how.

I'd rather sit a potential hire in front of Human Resource Machine and time how long it takes them to work through a few levels than have them solve FizzBuzz for me on a whiteboard. Is this interview about demonstrating competency in a certain technical skill that's worth a certain amount of money, or showing me how you can improvise and have fun?

That's why I was so excited when Patrick, Thomas, and Erin founded Starfighter.

If you want to know how competent a programmer is, give them a real-ish simulation of a real-ish system to hack against and experiment with – and see how far they get. In security parlance, this is known as a CTF, as popularized by Defcon. But it's rarely extended to programming, until now. Their first simulation is StockFighter.

Participants are given:

  • An interactive trading blotter interface
  • A real, functioning set of limit-order-book venues
  • A carefully documented JSON HTTP API, with an API explorer
  • A series of programming missions.

Participants are asked to:

  • Implement programmatic trading against a real exchange in a thickly traded market.
  • Execute block-shopping trading strategies.
  • Implement electronic market makers.
  • Pull off an elaborate HFT trading heist.

This is a seriously next level hiring strategy, far beyond anything else I've seen out there. It's so next level that to be honest, I got really jealous reading about it, because I've felt for a long time that Stack Overflow should be doing yearly programming game events exactly like this, with special one-time badges obtainable only by completing certain levels on that particular year. Stack Overflow is already a sort of game, but people would go nuts for a yearly programming game event. Absolutely bonkers.

I know we've talked about giving lip service to the idea of hiring the best, but if that's really what you want to do, the best programmers I've ever known have excelled at exactly the situation that Starfighter simulates — live troubleshooting and reverse engineering of an existing system, even to the point of finding rare exploits.

Consider the dedication of this participant who built a complete wireless trading device for StockFighter. Was it necessary? Was it practical? No. It's the programming game we never asked for. But here we are, regardless.

An arbitrary programming game, particularly one that goes to great lengths to simulate a fictional system, is a wonderful expression of the inherent joy in playing and experimenting with code. If I could find them, I'd gladly hire a dozen people just like that any day, and set them loose on our very real programming project.

[advertisement] At Stack Overflow, we put developers first. We already help you find answers to your tough coding questions; now let us help you find your next job.
Categories: Programming

Optimize, Develop, and Debug with Vulkan Developer Tools

Android Developers Blog - Fri, 04/15/2016 - 10:30

Posted by Shannon Woods, Technical Program Manager

Today we're pleased to bring you a preview of Android development tools for Vulkan™. Vulkan is a new 3D rendering API which we’ve helped to develop as a member of Khronos, geared at providing explicit, low-overhead GPU (Graphics Processor Unit) control to developers. Vulkan’s reduction of CPU overhead allows some synthetic benchmarks to see as much as 10 times the draw call throughput on a single core as compared to OpenGL ES. Combined with a threading-friendly API design which allows multiple cores to be used in parallel with high efficiency, this offers a significant boost in performance for draw-call heavy applications.

Vulkan support is available now via the Android N Preview on devices which support it, including Nexus 5X and Nexus 6P. (Of course, you will still be able to use OpenGL ES as well!)

To help developers start coding quickly, we’ve put together a set of samples and guides that illustrate how to use Vulkan effectively.

You can see Vulkan in action running on an Android device with Robert Hodgin’s Fish Tornado demo, ported by Google’s Art, Copy, and Code team:


Optimization: The Vulkan API

There are many similarities between OpenGL ES and Vulkan, but Vulkan offers new features for developers who need to make every millisecond count.

  • Application control of memory allocation. Vulkan provides mechanisms for fine-grained control of how and when memory is allocated on the GPU. This allows developers to use their own allocation and recycling policies to fit their application, ultimately reducing execution and memory overhead and allowing applications to control when expensive allocations occur.
  • Asynchronous command generation. In OpenGL ES, draw calls are issued to the GPU as soon as the application calls them. In Vulkan, the application instead submits draw calls to command buffers, which allows the work of forming and recording the draw call to be separated from the act of issuing it to the GPU. By spreading command generation across several threads, applications can more effectively make use of multiple CPU cores. These command buffers can also be reused, reducing the overhead involved in command creation and issuance.
  • No hidden work. One OpenGL ES pitfall is that some commands may trigger work at points which are not explicitly spelled out in the API specification or made obvious to the developer. Vulkan makes performance more predictable and consistent by specifying which commands will explicitly trigger work and which will not.
  • Multithreaded design, from the ground up. All OpenGL ES applications must issue commands for a context only from a single thread in order to render predictably and correctly. By contrast, Vulkan doesn’t have this requirement, allowing applications to do work like command buffer generation in parallel— but at the same time, it doesn’t make implicit guarantees about the safety of modifying and reading data from multiple threads at the same time. The power and responsibility of managing thread synchronization is in the hands of the application.
  • Mobile-friendly features. Vulkan includes features particularly helpful for achieving high performance on tiling GPUs, used by many mobile devices. Applications can provide information about the interaction between separate rendering passes, allowing tiling GPUs to make effective use of limited memory bandwidth, and avoid performing off-chip reads.
  • Offline shader compilation. Vulkan mandates support for SPIR-V, an intermediate language for shaders. This allows developers to compile shaders ahead of time, and ship SPIR-V binaries with their applications. These binaries are simpler to parse than high-level languages like GLSL, which means less variance in how drivers perform this parsing. SPIR-V also opens the door for third parties to provide compilers for specialized or cross-platform shading languages.
  • Optional validation. OpenGL ES validates every command you call, checking that arguments are within expected ranges, and objects are in the correct state to be operated upon. Vulkan doesn’t perform any of this validation itself. Instead, developers can use optional debug tools to ensure their calls are correct, incurring no run-time overhead in the final product.

Debugging: Validation Layers

As noted above, Vulkan’s lack of implicit validation requires developers to make use of tools outside the API in order to validate their code. Vulkan’s layer mechanism allows validation code and other developer tools to inspect every API call during development, without incurring any overhead in the shipping version. Our guides show you how to build the validation layers for use with the Android NDK, giving you the tools necessary to build bug-free Vulkan code from start to finish.


Develop: Shader toolchain
The Shaderc collection of tools provides developers with build-time and run-time tools for compiling GLSL into SPIR-V. Shaders can be compiled at build time using glslc, a command-line compiler, for easy integration into existing build systems. Or, for shaders which are generated or edited during execution, developers can use the Shaderc library to compile GLSL shaders to SPIR-V via a C interface. Both tools are built on top of Khronos’s reference compiler.


Additional Resources

The Vulkan ecosystem is a broad one, and the resources to get you started don’t end here. There is a wealth of material to explore, including:

Categories: Programming

Android Studio 2.0

Android Developers Blog - Fri, 04/15/2016 - 04:54

Posted by Jamal Eason, Product Manager, Android

Android Studio 2.0 is the fastest way to build high quality, performant apps for the Android platform, including phones and tablets, Android Auto, Android Wear, and Android TV. As the official IDE from Google, Android Studio includes everything you need to build an app, including a code editor, code analysis tools, emulators and more. This new and stable version of Android Studio has fast build speeds and a fast emulator with support for the latest Android version and Google Play Services.

Android Studio is built in coordination with the Android platform and supports all of the latest and greatest APIs. If you are developing for Android, you should be using Android Studio 2.0. It is available today as an easy download or update on the stable release channel.

Android Studio 2.0 includes the following new features that Android developers can use in their workflow:

  • Instant Run - For every developer who loves faster build speeds. Make changes and see them appear live in your running app. With many build/run accelerations ranging from VM hot swapping to warm swapping app resources, Instant Run will save you time every day.
  • Android Emulator - The new emulator runs ~3x faster than Android’s previous emulator, and with ADB enhancements you can now push apps and data 10x faster to the emulator than to a physical device. Like a physical device, the official Android emulator also includes Google Play Services built-in, so you can test out more API functionality. Finally, the new emulator has rich new features to manage calls, battery, network, GPS, and more.
  • Cloud Test Lab Integration - Write once, run anywhere. Improve the quality of your apps by quickly and easily testing on a wide range of physical Android devices in the Cloud Test Lab right from within Android Studio.
  • App Indexing Code Generation & Test - Help promote the visibility of your app in Google Search for your users by adding auto-generated URLs with the App Indexing feature in Android Studio. With a few clicks you can add indexable URL links that you can test all within the IDE.
  • GPU Debugger Preview - For those of you developing OpenGL ES based games or apps, you can now see each frame and the GL state with the new GPU debugger. Uncover and diagnose GL rendering issues by capturing and analyzing the GPU stream from your Android device.
  • IntelliJ 15 Update - Android Studio is built on the world-class IntelliJ coding platform. Check out the latest IntelliJ features here.

Deeper Dive into the New Features

Instant Run

Today, mobile platforms are centered around speed and agility. And yet, building for mobile can sometimes feel clunky and slow. Instant Run in Android Studio is our solution to keep you in a fast and fluid development flow. The feature increases your developer productivity by accelerating your edit, build, run cycles. When you click on the Instant Run button, Instant Run will analyze the changes you have made and determine how it can deploy your new code in the fastest way.

New Instant Run Buttons

Whenever possible, it will inject your code changes into your running app process, avoiding re-deployment and re-installation of your APK. For some types of changes, an activity or app restart is required, but your edit, build and run cycles should still be generally much faster than before. Instant Run works with any Android device or emulator running API 14 (Ice Cream Sandwich) or higher.

Since previewing Instant Run at the end of last year, we’ve spent countless hours incorporating your feedback and refining for the stable release. Look for even more acceleration in future releases because build speeds can never be too fast. To learn how you can make the most out of Instant Run in your app development today, please check out our Instant Run documentation.

Android Emulator

The new Android Emulator is up to 3x faster in CPU, RAM, & I/O in comparison to the previous Android emulator. And when you're ready to build, ADB push speeds are a whopping 10x faster! In most situations, developing on the official Android Emulator is faster than a real device, and new features like Instant Run will work best with the new Android emulator.

In addition to speed and performance, the Android Emulator has a brand new user interface and sensor controls. Enhanced since the initial release, the emulator lets you drag and drop APKs for quick installation, resize and rescale the window, use multi-touch actions (pinch & zoom, pan, rotate, tilt) and much more.

Android Emulator User Interface: Toolbar & Extended Controls Panel

Trying out the new emulator is as easy as updating your SDK Tools to 25.1.1 or higher and creating a fresh Android Virtual Device using one of the recommended x86 system images; then you are ready to go. Learn more about the Android Emulator by checking out the documentation.

Cloud Test Lab

Cloud Test Lab is a new service that lets you test your app across a wide range of devices and device configurations at scale in the cloud. Once you complete your initial testing with the Android Emulator or an Android device, Cloud Test Lab is a great extension to your testing process that lets you run a collection of tests against a portfolio of physical devices hosted in Google’s data centers. Even if you have not explicitly written tests, Cloud Test Lab can perform a basic set of tests to ensure that your app does not crash.

The new interface in Android Studio allows you to configure the portfolio of tests you want to run on Cloud Test Lab, and allows you to also see the results of your tests. To learn more about the service go here.

Setup for Cloud Test Lab

App Indexing

It is now easier for your users to find your app in Google Search with the App Indexing API. Android Studio 2.0 helps you create the correct URL structure in your app code and add attributes to your AndroidManifest.xml file that work with the Google App Indexing service. After you add the URLs to your app, you can test and validate your app indexing code as shown here:

Google App Indexing Testing

Check out this link for more details about app indexing support in Android Studio.

GPU Debugger Preview

If you are developing OpenGL ES games or graphics-intensive apps, Android Studio 2.0 gives you a new GPU debugger. Although the GPU debugger is a preview, you can step through your app frame by frame to identify and debug graphics rendering issues with rich information about the GL state. For details on how to set up your Android device and app to work with the tool, check out the technical documentation here.

GPU Debugger Preview

What's Next

Update

If you are using a previous version of Android Studio, you can check for updates on the Beta channel from the navigation menu (Help → Check for Update [Windows/Linux], Android Studio → Check for Updates [OS X]). If you need a new copy of Android Studio, you can download it here. If you are developing for the N Developer Preview, check out these additional setup instructions.

Set Up Instant Run & Android Emulator

After you update to or download Android Studio 2.0, upgrade your projects to use Instant Run and create a fresh Android Virtual Device (AVD) for the new Android emulator; then you are on your way to a faster Android development experience.

Using Instant Run is easy. For each of your existing projects you will see a quick prompt to update your project to the new gradle plugin version (com.android.tools.build:gradle:2.0.0).

Prompt to update your gradle version in your project

For all new app projects in Android Studio 2.0, Instant Run is on by default. Check out the documentation for more details.
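If you prefer to make the change by hand rather than accepting the prompt, the plugin coordinate mentioned above goes in the project-level build.gradle. A minimal, hypothetical fragment; the rest of your build script will differ:

```groovy
// Project-level build.gradle: use version 2.0.0 of the Android Gradle plugin
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:2.0.0'
    }
}
```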

We are already hard at work developing the next release of Android Studio. We appreciate any feedback on things you like, issues or features you would like to see. Connect with us -- the Android Studio development team -- on our new Google+ page or on Twitter.

Categories: Programming

Android N Developer Preview 2, out today!

Android Developers Blog - Fri, 04/15/2016 - 04:49

Posted by Dave Burke, VP of Engineering

Last month, we released the first Developer Preview of Android N, to give you a sneak peek at our next platform. The feedback you’ve shared to-date has helped us catch bugs and improve features. Today, the second release in our series of Developer Previews is ready for you to continue testing against your apps.

This latest preview of Android N fixes a few bugs you helped us identify, such as not being able to connect to hidden Wi-Fi networks (AOSP 203116), Multiwindow pauses (AOSP 203424), and Direct Reply closing an open activity (AOSP 204411), to name just a few. We’re still on the hunt for more; please continue to share feedback, either in the N Developer Preview issue tracker or in the N preview community.


What’s new:

Last month’s Developer Preview introduced a host of new features, like Multi-window, bundled notifications and more. This preview builds on those and includes a few new ones:

  • Vulkan: Vulkan is a new 3D rendering API, which we’ve helped to develop as a member of Khronos, geared at providing explicit, low-overhead GPU (Graphics Processing Unit) control to developers. Vulkan’s reduction of CPU overhead allows some synthetic benchmarks to see as much as 10 times the draw-call throughput on a single core as compared to OpenGL ES. Combined with a threading-friendly API design that allows multiple cores to be used in parallel with high efficiency, this offers a significant boost in performance for draw-call heavy applications. With Android N, we’ve made Vulkan a part of the platform; you can try it out on supported devices running Developer Preview 2. Read more here, and find the Vulkan developer tools blog post here.
  • Launcher shortcuts: Now, apps can define shortcuts which users can expose in the launcher to help them perform actions quicker. These shortcuts contain an Intent into specific points within your app (like sending a message to your best friend, navigating home in a mapping app, or playing the next episode of a TV show in a media app).

    An application can publish shortcuts with ShortcutManager.setDynamicShortcuts(List) and ShortcutManager.addDynamicShortcut(ShortcutInfo), and launchers can be expected to show 3-5 shortcuts for a given app.

  • Emoji Unicode 9 support: We are introducing a new emoji design for people emoji that moves away from our generic look in favor of a more human-looking design. If you’re a keyboard or messaging app developer, you should start incorporating these emoji into your apps. The update also introduces support for skin tone variations and Unicode 9 glyphs, like the bacon, selfie and face palm. You can dynamically check for the new emoji characters using Paint.hasGlyph().

New human emoji

New activity emoji

  • API changes: This update includes API changes as we continue to refine features such as multi-window support (you can now specify a separate minimum height and minimum width for an activity), notifications, and others. For details, take a look at the diff reports available in the downloadable API reference package.

  • Bug fixes: We’ve resolved a number of issues throughout the system, including these fixes for issues that you’ve reported through the public issue tracker. Please continue to let us know what you find and follow along with the known issues here.

How to get the update:

The easiest way to get this and later preview updates is by enrolling your devices in the Android Beta Program. Just visit g.co/androidbeta and opt-in your eligible Android phone or tablet -- you’ll soon receive this (and later) preview updates over-the-air. If you’ve already enrolled your device, you’ll receive the update shortly, no action is needed on your part. You can also download and flash this update manually. Developer Preview 2 is intended for developers and not as a daily driver; this build is not yet optimized for performance and battery life.

The N Developer Preview is currently available for Nexus 6, Nexus 5X, Nexus 6P, Nexus 9, and Pixel C devices, as well as General Mobile 4G [Android One] devices. For Nexus Player, the update to Developer Preview 2 will follow the other devices by several days.

To build and test apps with Developer Preview 2, you need to use Android Studio 2.1 -- the same version that was required for Developer Preview 1. You’ll need to check for SDK components updates (including build tools and emulator system images) for Developer Preview 2 -- see here for details.

Thanks so much for all of your feedback so far. Please continue to share feedback, either in the N Developer Preview issue tracker or in the N preview community. The sooner we’re able to get your feedback, the more of it we will be able to incorporate in the next release of Android.

Categories: Programming

Metrics: Total Factor Productivity – The Really Hard Bit

Not quite a Google bus

Labor, raw material, and capital productivity are easy concepts to understand. For example, labor productivity is the ratio of product delivered per unit of effort. Increasing the efficiency of labor will either increase the amount of product delivered or reduce the amount of labor needed. Raw material and capital productivity follow the same pattern. The issue is that while labor, raw materials, and capital explain a lot of the variation in productivity, they do not explain it all, and in software product development other factors often contribute significantly to productivity improvement. Total factor productivity (TFP) is not a simple ratio of output to input, but rather a measure that captures everything not captured by labor, capital, or material productivity. TFP reflects attributes such as changes in general knowledge, the use of particular organizational structures, management techniques, and returns to scale. The components in TFP are often the sources of productivity changes in software development.

Total factor productivity seeks to account for all of the inputs used to create a product, leaving a residual impact on productivity that isn’t directly traceable to inputs. Conceptually, I find it easier to consider the residual as the effect of non-direct inputs. The formula typically used to calculate TFP, the Solow residual, is:

Y = A × K^α × L^β

Y = Total Output

A = Total Factor Productivity

K = Capital Input

L = Labor Input

(The superscripts are the percentage contributions of capital and labor to productivity.)
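As a worked sketch with purely hypothetical numbers (not data from any real organization), the residual A can be recovered by dividing output by the measured factor inputs:

```python
def total_factor_productivity(Y, K, L, alpha, beta):
    """Solve Y = A * K**alpha * L**beta for A, the Solow residual."""
    return Y / (K ** alpha * L ** beta)

# Hypothetical inputs: output Y, capital K, labor L, and the exponents
# (contributions) of capital and labor.
A = total_factor_productivity(Y=100.0, K=50.0, L=200.0, alpha=0.3, beta=0.7)

# If output grows while capital and labor stay flat, A rises: the
# "everything else" effect of management, knowledge, and tooling.
A_improved = total_factor_productivity(Y=120.0, K=50.0, L=200.0, alpha=0.3, beta=0.7)
```

The increase from A to A_improved is the part of the output gain that labor and capital inputs cannot explain, which is exactly the residual being described here.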

Calculating TFP is like removing parts of a picture to identify and appreciate what remains. Very, very, very few software development and maintenance organizations take the time to calculate TFP, but everyone involved in managing software development, or actively involved in making and assessing change, must understand TFP conceptually and the factors that it represents. The factors that comprise TFP are often very effective levers for changing software development and maintenance productivity. Several of the factors that contribute to TFP include technological change, general improvements in social well-being, changes in management approaches, and reallocating resources from low-productivity endeavors. Often we can measure the impacts of changes in these factors indirectly, through the other types of productivity.

Technological change. Most of us have lived through the period governed by Moore’s Law, and even if we did not personally get access to a new, faster processor every year, we were affected by the increase in processing power. Networks became faster, and we can find new ideas to solve problems on the internet, which leads to more and more innovation in how work is done and, therefore, increases productivity. Over the years, many in the software measurement field have noted a constant upward drift attributed to more efficient programming languages. For example, the venerable language COBOL, developed in 1959, has continued to evolve (IBM’s current version is Enterprise COBOL for z/OS). Each version made the language more powerful than the one before it. Technological change is a major contributor to productivity improvement in software development and maintenance.

General improvements in social well-being. These changes include factors such as political stability, transportation infrastructure, and general improvements in a country or region’s educational system. Several organizations I talk with regularly source work in Ukraine. During the height of the recent political problems, they saw a significant drop in both the amount and quality of functionality delivered. The same people were working on the code and everyone was at work; however, the reduction in social well-being disrupted productivity. At an organizational level, companies often work hard to improve the social well-being of their employees and their families. Google’s buses, which move employees in and out of the core of San Francisco, are a reflection of the drive to improve productivity through increased social well-being.

Changes in management approaches. Why does Scrum typically lead to higher productivity? The humans on the teams are not generally improved, nor are the capital or raw materials; however, the change in management approach embodied in Scrum provides a basis for higher performance and increased productivity. For example, embracing Scrum as a management structure allows team members to make decisions faster, which improves the efficiency of the team.

Reallocating resources from low-productivity endeavors. Business Process Management (BPM) and Business Process Reengineering (BPR) are fields that improve work processes. They represent a branch of operations research that optimizes processes by identifying what an organization does efficiently. Reducing low-productivity activities allows organizations to spend money, effort, and other resources on delivering the products and services they are efficient at delivering. This reallocation is one of the reasons continuous process improvement is powerful.

TFP measures the impact of all of these factors and others. Unfortunately, measuring by subtraction is far less straightforward than just measuring labor productivity and pretending we have covered all the bases. In software development, TFP represents many of the levers that organizations have to influence productivity. However, labor and capital productivity are often used as proxies to measure the impact of these changes. While the path from a change in management structure may not be direct, we can at least see the ripples caused by the change in other, more straightforward measures. Interpreting the impact of indirect changes requires more careful analysis and often more explanation.

Next: Productivity Gone Wild


Categories: Process Management

Travel through space with the Project Tango app, Solar Simulator

Google Code Blog - Thu, 04/14/2016 - 22:12

Posted by Jason Guo, Developer Programs Engineer, Project Tango

Since most of us haven’t been to space, it’s often hard to grasp concepts like the vastness of the Solar System or the size of the planets. To make these concepts more tangible, three graduate students at San Francisco State University (SFSU)--Jason Burmark, Moses Lee and Omar Shaikh--have created Solar Simulator, a new app for Project Tango. The app lets people take a virtual walk through space to understand the size and scale of our solar system.

Created with the Unity SDK, the application lays out our solar system’s planets in their relative distances from each other and draws 3D models of them in their relative sizes. The app leverages Project Tango’s motion-tracking API to track your movements as you walk, so you can better understand the planets and their distance in space.
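To get a feel for the scale the app conveys, here is a small illustrative sketch; the distances are approximate textbook values, not numbers taken from the app's source:

```python
# Approximate mean orbital distances from the Sun, in astronomical units (AU).
AU_DISTANCES = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2, "Neptune": 30.1,
}

def scaled_walk(meters_per_au):
    """Distance in meters from the Sun to each planet at the given scale."""
    return {name: au * meters_per_au for name, au in AU_DISTANCES.items()}

# At a scale where Earth sits 10 m from the Sun, Neptune is about 301 m away,
# which is why walking through the model makes the solar system's vastness tangible.
model = scaled_walk(10.0)
```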

If you like what you see, you can create your own solar system at home. Just follow the six steps below:

  1. Download the Tango Unity SDK.
  2. Create a new Unity project and import the Tango SDK package into the project. If you don’t already have the Tango SDK, you can download it here.
  3. Assuming that you are building a solar simulation, place a sphere at (0, 0, 2) to simulate a planet floating in space. The screen will look like this:
  4. Next, replace the Main Camera with the Tango AR Camera and connect the Tango Manager through the prefabs. To do this, first remove the Main Camera gameobject from the scene. Then drag in the Tango AR Camera and Tango Manager from the TangoPrefabs folder under Project. The scene hierarchy will look like this:
  5. On the Tango Manager gameobject there are several Tango startup configuration options that control how Tango will run in the application session, e.g., turning depth or motion tracking on or off. In this case, check the boxes to turn on Auto-connect to service, Enable motion tracking (with Auto Reset), and Enable video overlay (with TextureID method).
  6. To get your code ready for AR on a Tango-enabled device, build and run the project. To do this, follow the “Change the Build Settings” and “Build and run” sections in this tutorial.

Here is what the final scene should look like from the device:

If you want a guided tour of the planets with Solar Simulator, developers Jason, Moses, and Omar will be demoing their app at San Francisco’s California Academy of Sciences’ NightLife tonight at 6:30PM PT. You can also download Solar Simulator on your Project Tango Development Kit.

Categories: Programming

How To Create Your Blog In Less Than 10 Minutes Using WordPress

Making the Complex Simple - John Sonmez - Thu, 04/14/2016 - 16:58

Creating a blog is one of the most important parts of your career, as a programmer. If you’re a follower of Simple Programmer, you probably know how much I give emphasis about creating a blog and marketing yourself online. Creating a blog was one of the best things I did for my career. The benefits […]

The post How To Create Your Blog In Less Than 10 Minutes Using WordPress appeared first on Simple Programmer.

Categories: Programming

Growing Eddystone with Ephemeral Identifiers: A Privacy Aware & Secure Open Beacon Format

Google Code Blog - Thu, 04/14/2016 - 16:01

Posted by Nirdhar Khazanie, Product Manager and Yossi Matias, VP Engineering


Last July, we launched Eddystone, an open and extensible Bluetooth Low Energy (BLE) beacon format from Google, supported by Android, iOS, and Chrome. Beacons mark important places and objects in a way that your phone can understand. To do this, they typically broadcast public one-way signals ‒ such as an Eddystone-UID or -URL.

Today, we're introducing Ephemeral IDs (EID), a beacon frame in the Eddystone format that gives developers more power to control who can make use of the beacon signal. Eddystone-EID enables a new set of use cases where it is important for users to be able to exchange information securely and privately. Since the beacon frame changes periodically, the signal is only useful to clients with access to a resolution service that maps the beacon’s current identifier to stable data. In other words, the signal is only recognizable to a controlled set of users. In this post we’ll provide a bit more detail about this feature, as well as Google’s implementation of Eddystone-EID with Google Cloud Platform’s Proximity Beacon API and the Nearby API for Android and CocoaPod for iOS.

Technical Specifications

To an observer of an Eddystone-EID beacon, the AES-encrypted eight byte beacon identifier changes pseudo-randomly with an average period that is set by the developer ‒ over a range from 1 second to just over 9 hours. The identifier is generated using a key and timer running on the beacon. When the beacon is provisioned, or set up, the key is generated and exchanged with a resolution service such as Proximity Beacon API using an Elliptic Curve Diffie-Hellman key agreement protocol, and the timer is synchronized with the service. This way, only the beacon and the service that it is registered with have access to the key. You can read more about the technical details of Eddystone-EID from the specification ‒ including the provisioning process ‒ on GitHub, or from our recent preprint.
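The rotation idea can be illustrated with a toy model. To be clear, this is not the real Eddystone-EID construction, which is AES-based and fully specified on GitHub; it is a hypothetical sketch of how a shared key plus a quantized clock yields an identifier that rotates for outside observers yet remains resolvable by a service holding the key:

```python
import hashlib
import hmac
import struct

def ephemeral_id(shared_key, seconds, rotation_exp):
    """Toy rotating identifier: MAC the clock, quantized to windows of
    2**rotation_exp seconds, with the shared key, and keep eight bytes."""
    window = seconds >> rotation_exp
    mac = hmac.new(shared_key, struct.pack(">Q", window), hashlib.sha256)
    return mac.digest()[:8]

key = b"\x00" * 16  # exchanged at provisioning time, e.g. via ECDH

# Within one 16-second window (rotation_exp=4) the broadcast is stable,
# so a resolution service that knows the key can map it to the beacon...
assert ephemeral_id(key, 1000, 4) == ephemeral_id(key, 1007, 4)

# ...but in the next window it changes, so a passive observer without
# the key cannot track the beacon across windows.
assert ephemeral_id(key, 1000, 4) != ephemeral_id(key, 1008, 4)
```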

An Eddystone-EID beacon contains measures designed to prevent a variety of nuanced attacks. For example, the rotation period for a single beacon varies slightly from identifier to identifier, meaning that an attacker cannot use a consistent period to identify a particular beacon. Eddystone-EID also enables safety features such as proximity awareness, device authentication, and data encryption on packet transmission. The Eddystone-TLM frame has also been extended with a new version that broadcasts the battery level, also encrypted with the shared key, meaning that an attacker cannot use the battery level as an identifying feature either.

When correctly implemented and combined with a service that supports a range of access control checks, such as Proximity Beacon API, this pattern has several advantages:
  • The beacon’s location cannot be spoofed, except by a real-time relay of the beacon signal. This makes it ideal for use cases where a developer wishes to enable premium features for a user at a location.
  • Beacons provide a high-quality and precise location signal that is valuable to the deployer. Eddystone-EID enables deployers to decide which developers/businesses can make use of that signal.
  • Eddystone-EID beacons can be integrated into devices that users carry with them without leaving users vulnerable to tracking.
Integrating Seamlessly with the Google Beacon Platform

Launching today on Android and iOS is a new addition to the wider Google beacon platform: Beacon Tools. Beacon Tools allows you to provision and register an Eddystone-EID beacon, as well as associate content with your beacon through the Google Cloud Platform.

In addition to Eddystone-EID and the new encrypted version of the previously available Eddystone-TLM, we’re also adding a common configuration protocol to the Eddystone family. The Eddystone GATT service allows any Eddystone beacon to be provisioned by any tool that supports the protocol. This encourages the development of an open ecosystem of beacon products, both in hardware and software, removing restrictions for developers.

Eddystone-EID Support in the Beacon Industry

We’re excited to have worked with a variety of industry players as Eddystone-EID develops. Over the past year, Eddystone manufacturers in the beacon space have grown from 5 to over 25. The following 15 manufacturers will be supporting Eddystone-EID, with more to follow:

Accent Systems, Beacon Inside, Blesh, BlueBite, Bluecats, Bluvision, Estimote, Gimbal, Nordic, Radius Networks, Reco/Perples, Sensoro, Signal360, Swirl, and Zebra


In addition to beacon manufacturers, we’ve been working with a range of innovative companies to demonstrate Eddystone-EID in a variety of different scenarios.
  • Samsonite and Accent Systems have developed a suitcase with Eddystone-EID where users can securely keep track of their personal luggage.
  • K11 is a Hong Kong museum and retail experience using Sensoro Eddystone-EID beacons for visitor tours and customer promotions.
  • Monumental Sports in Washington, DC, uses Radius Networks Eddystone-EID beacons for delivering customer rewards during Washington Wizards and Capitals sporting events.
  • Sparta Digital has produced an app called Buzzin that uses Eddystone-EID beacons deployed in Manchester, UK to enable a more seamless transit experience.
You can get started with Eddystone-EID by creating a Google Cloud Platform project and purchasing compatible hardware from one of our manufacturers. Best of all, Eddystone-EID works transparently with beacon subscriptions created through the Google Play Services Nearby Messages API, allowing you to run combined networks of Eddystone-EID and Eddystone-UID beacons in your client code!
Categories: Programming

What Are Good Ways Of Investing Money?

Making the Complex Simple - John Sonmez - Thu, 04/14/2016 - 13:00

In today’s video, I’ve answered a question about investing money. What are good ways of investing money? Should you focus on real estate, building a business or getting a degree? Are these options even viable for a normal citizen? Besides that, I give you some tips on other strategies you could use to increase your […]

The post What Are Good Ways Of Investing Money? appeared first on Simple Programmer.

Categories: Programming

Before looking to any solution to a problem, determine the Root Cause

Herding Cats - Glen Alleman - Thu, 04/14/2016 - 00:30

It is common to use the phrase 

If we feel pain X. Let's explore solutions? & possibly "Solutions A & B might be a starting point?"

This approach seeks a solution to the symptom, not a solution to the problem.

This is the false notion used by #NoEstimates advocates, with the conjecture that estimates are the smell of dysfunction. Without stating what the dysfunction is, they then claim that Not Estimating will fix it. So in the end nothing is fixed: the dysfunction is not identified, the Root Cause is not identified, and therefore it is not possible to claim that Not Estimating will fix anything, for the simple reason that the problem has not been defined. It's a solution - a doubtful solution - looking for an undefined problem to solve. A lose-lose situation.

So don't fall for the fallacy of the smell and most of all don't fall for the fallacy, we're just exploring and I have no recommended solutions.

The only way to provide an effective solution to any problem is to determine the Root Cause of that problem and confirm that the proposed solution will in fact prevent the problem from recurring. If you want to learn how to do this - and not follow the naive 5 Whys - read this. Then, if you're working in a domain that does Root Cause analysis, buy the Reality Charting tool. It's saved us from catastrophe several times.

Apollo

 

Related articles:
  • The Dysfunctional Approach to Using "5 Whys"
  • Are Estimates Really The Smell of Dysfunction?
  • Root Cause of Project Failure
  • Turning Editorials into Root Cause Analysis
  • Estimating and Making Decisions in Presence of Uncertainty
  • The False Notion of "we haven't seen this before"
  • When the Solution to Bad Management is a Bad Solution
Categories: Project Management