Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

SQLite and Android N

Eric.Weblog() - Eric Sink - Wed, 06/15/2016 - 19:00
TLDR

The upcoming release of Android N is going to cause problems for many apps that use SQLite. In some cases, these problems include an increased risk of data corruption.

History

SQLite is an awesome and massively popular database library. It is used every day by billions of people. If you are keeping a list of the Top Ten Coolest Software Projects Ever, SQLite should be on the list.

Many mobile apps use SQLite in one fashion or another. Maybe the developers of the app used the SQLite library directly. Or maybe they used another component or library that builds on SQLite.

SQLite is a library, so the traditional way to use it is to just link it into your application. For example, on a platform like Windows Phone 8.1, the app developer simply bundles the SQLite library as part of their app.

But iOS and Android have a SQLite library built-in to the platform. This is convenient, because developers do not need to bundle a SQLite library with their software.

However

The SQLite library that comes with Android is actually not intended to be used except through the android.database.sqlite Java classes. If you are accessing this library directly, you are actually breaking the rules.

And the problem is

Beginning with Android N, these rules are going to be enforced.

If your app is using the system SQLite library without using the Java wrapper, it will not be compatible with Android N.

Does your app have this problem?

If your app is breaking the rules, you *probably* know it. But you might not.

I suppose most Android developers use Java. Any app which is only using android.database.sqlite should be fine.
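
For reference, here is a minimal sketch of the sanctioned path, using only the standard android.database.sqlite classes (the class and table names are invented for illustration):

    import android.content.Context;
    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteOpenHelper;

    // Staying on the supported path: all access goes through the Java
    // classes, which use the system SQLite on the app's behalf.
    public class NotesDbHelper extends SQLiteOpenHelper {
        public NotesDbHelper(Context context) {
            super(context, "notes.db", null, /* version */ 1);
        }

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY, body TEXT)");
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
            // No migrations yet; a real app would handle schema changes here.
        }
    }

An app whose database work all flows through a helper like this never touches the system libsqlite directly, so the Android N change should not affect it.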

But if you are using Xamarin, it is rather more likely that your app is breaking the rules. Many folks in the Xamarin community tend to assume that "SQLite is part of the platform, so you can just call it".

Xamarin.Android 6.1 includes a fix for this problem for Mono.Data.Sqlite (see their release notes).

However, that is not the only way of accessing SQLite in the .NET/Xamarin world. In fact, I daresay it is one of the less common ways.

Perhaps the most popular SQLite wrapper is sqlite-net (GitHub). If you are using this library on Android and not taking the extra steps to bundle a SQLite library, your app will break on Android N.

Are you using Akavache? Or Couchbase Lite? Both of these libraries use SQLite under the hood (by way of SQLitePCL.raw, which I maintain), so your app will need to be updated to work on Android N.

There are probably dozens of other examples. GitHub says the sqlite-net library has 857 forks. Are you using one of those? Do you use the MvvmCross SQLite plugin? Do any of the components or libraries in your app make use of SQLite without you being aware of it?

And the Xamarin community is obviously not the whole story. There are dozens of other ways to build mobile apps. I can think of PhoneGap/Cordova, Alpha Anywhere, Telerik NativeScript, and Corona, just off the top of my head. How many of these environments (or their surrounding ecosystems) provide (perhaps accidentally) a rule-breaking way to access the Android system SQLite? I don't know.

What I *do* know is that even Java developers might have a problem.

It's even worse than that

Above, I said: "Any app which is only using android.database.sqlite should be fine." The key word here is "only". If you are using the Java classes but also have other code (perhaps some other library) that accesses the system SQLite, then you have the problems described above. But you also have another problem.

To fix this, you are going to have to modify that "other code" to stop accessing the system SQLite library directly. One way to do this is to change the other code to call through android.database.sqlite. But that might be a lot of work. Or that other code might be a 3rd party library that you do not maintain. So you are probably interested in an easier solution.

Why not just bundle another instance of the SQLite library into your app? This is what people who use sqlite-net on Xamarin will need to do, so it should make sense in this case too, right? Unfortunately, no.

What will happen here is that your android.database.sqlite code will continue using the system SQLite library, and your "other code" will use the second instance of the SQLite library that you bundled with your app. So your app will have two instances of the SQLite library. And this is Very Bad.

The Multiple SQLite Problem

Basically, having multiple copies of SQLite linked into the same application can cause data corruption. For more info, see this page on sqlite.org. And also the related blog entry I wrote back in 2014.

You really, really do not want to have two instances of the SQLite library in your app.

Zumero

One example of a library which is going to have this problem is our own Zumero Client SDK. The early versions of our sync library bundled a copy of the SQLite library, to follow the rules. But later, to avoid possible data corruption from The Multiple SQLite Problem, we changed it to call the system SQLite directly. So, although I might like to claim we did it for a decent reason, our library breaks the rules, and we did it knowingly. All Android apps using Zumero will need to be updated for Android N. A new release of the Zumero Client SDK, containing a solution to this problem, is under development and will be released soon-ish.

Informed consent?

I really cannot recommend that you have two instances of the SQLite library in your app. The possibility of corruption is quite real. One of our developers created an example project to demonstrate this.

But for the sake of completeness, I will mention that it might be possible to prevent the corruption by ensuring that only one instance of the SQLite library is accessing a SQLite file at any given time. In other words, you could build your own layer of locking on top of any code that uses SQLite.
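
For illustration only, here is a minimal sketch of what such a layer might look like; the class is hypothetical, and it only serializes access within a single process - which is the relevant scope here, since both copies of SQLite are linked into the same app:

    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical gate that serializes all SQLite access in one process.
    // Every code path that touches the database file - via the system
    // SQLite or the bundled copy - must funnel through withDb().
    public final class SqliteGate {
        private static final ReentrantLock LOCK = new ReentrantLock();

        public interface DbWork<T> {
            T run();
        }

        public static <T> T withDb(DbWork<T> work) {
            LOCK.lock();
            try {
                return work.run();
            } finally {
                LOCK.unlock();
            }
        }

        private SqliteGate() {}
    }

The fragility is obvious: one call site that forgets the gate, and the protection is gone.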

Only you can decide if this risk is worth it. I cannot feel good about sending anyone down that path.

Stop using android.database.sqlite?

It also makes this blog entry somewhat more complete for me to mention that changing your "other code" to go through android.database.sqlite is not your only option. You might prefer to leave your "other code" unchanged and rewrite the stuff that uses android.database.sqlite, ending up with both sets of code using one single instance of SQLite that is bundled with your app.

A Lament

Life was better when there were two kinds of platforms, those that include SQLite, and those that do not. Instead, we now have this third category of platforms that "previously included SQLite, but now they don't, but they kinda still do, but not really".

An open letter to somebody at Google

It is so tempting to blame you for this, but that would be unfair. I fully admit that those of us who broke the rules have no moral high ground at all.

But it is also true that, because of the multiple SQLite problem and the sheer quantity of apps that use the Android system SQLite directly, enforcing the rules now maximizes the number of Android apps that will break or experience data corruption.

Would it really be so bad to include libsqlite in the NDK?

 

Android Developer Story: Sendy uses Google Play features to build for the next billion users

Android Developers Blog - Wed, 06/15/2016 - 18:25
Posted by Lily Sheringham, Google Play team

Sendy is a door to door on-demand couriering platform founded in Nairobi, Kenya. It connects customers and logistics providers, providing two unique apps, one for the driver and one for the customer. Watch CEO & Co-founder, Meshack Alloys, and Android Developer, Jason Rogena, explain how they use Developer Console features, such as alpha and beta testing, as well as other tips and best practices, to build for the next billion users.

Learn more about building for billions and get more tips to grow your games business by opting-in to the Playbook app beta and download the Playbook app in the Google Play Store.

Categories: Programming

The Image Optimization Technology that Serves Millions of Requests Per Day

This article will touch upon how Kraken.io built and scaled an image optimization platform which serves millions of requests per day, with the goal of maintaining high performance at all times while keeping costs as low as possible. We present our infrastructure as it is in its current state at the time of writing, and touch upon some of the interesting things we learned in order to get it here.

Let’s make an image optimizer

You want to start saving money on your CDN bills and generally speed up your websites by pushing fewer bytes over the wire to your users’ browsers. Chances are that over 60% of your traffic is images alone.

Using ImageMagick (you did read ImageTragick, right?) you can slash down the quality of a JPEG file with a simple command:

$ convert -quality 70 original.jpg optimized.jpg
$ ls -la
-rw-r--r--  1 matylla  staff  5897 May 16 14:24 original.jpg
-rw-r--r--  1 matylla  staff  2995 May 16 14:25 optimized.jpg

Congratulations. You’ve just brought down the size of that JPEG by ~50% by butchering its quality. The image now looks like Minecraft. It can’t look like that - it sells your products and services. Ideally, images on the Web should have outstanding quality and carry no unnecessary bloat in the form of excessively high quality or EXIF metadata.

You now open your favourite image-editing software and start playing with Q levels while saving a JPEG for the Web. It turns out that this particular image you test looks great at Q76. You start saving all your JPEGs with quality set to 76. But hold on a second… Some images look terrible even with Q80 while some would look just fine even at Q60.

Ok. You decide to automate it somehow - who wants to manually test the quality of the millions of images you have the “privilege” of maintaining? So you create a script that generates dozens of copies of an input image at different Q levels. Now you need a metric that will tell you which Q level is perfect for a particular image. MSE? SSIM? MS-SSIM? PSNR? You’re so desperate that you even start calculating and comparing perceptual hashes of different versions of your input image.

Some metrics perform better than others. Some work well for specific types of images. Some are blazingly fast while others take a long time to complete. You can get away with reducing the number of loops in which you process each image, but then chances are that you miss your perfect Q level and the image will either be heavier than it could be or the quality degradation will be too high.
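
To make the automation concrete, here is a minimal sketch of that loop in Java using javax.imageio, with plain MSE standing in for whichever metric you settle on; the threshold, step size, and file name are invented for illustration:

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import javax.imageio.IIOImage;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageWriteParam;
    import javax.imageio.ImageWriter;
    import javax.imageio.stream.MemoryCacheImageOutputStream;

    public class QualitySearch {
        // Re-encode the image as a JPEG at the given quality (0.0-1.0).
        static byte[] encode(BufferedImage src, float quality) throws Exception {
            ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
            ImageWriteParam param = writer.getDefaultWriteParam();
            param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
            param.setCompressionQuality(quality);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            writer.setOutput(new MemoryCacheImageOutputStream(out));
            writer.write(null, new IIOImage(src, null, null), param);
            writer.dispose();
            return out.toByteArray();
        }

        // Mean squared error over the RGB channels - a crude stand-in
        // for SSIM, MS-SSIM, PSNR, or a perceptual metric.
        static double mse(BufferedImage a, BufferedImage b) {
            double sum = 0;
            for (int y = 0; y < a.getHeight(); y++)
                for (int x = 0; x < a.getWidth(); x++) {
                    int pa = a.getRGB(x, y), pb = b.getRGB(x, y);
                    for (int shift = 0; shift <= 16; shift += 8) {
                        int d = ((pa >> shift) & 0xFF) - ((pb >> shift) & 0xFF);
                        sum += d * d;
                    }
                }
            return sum / (a.getWidth() * a.getHeight() * 3.0);
        }

        public static void main(String[] args) throws Exception {
            BufferedImage original = ImageIO.read(new File("original.jpg"));
            double threshold = 20.0; // made-up "acceptable degradation" level
            for (int q = 90; q >= 40; q -= 5) { // walk the Q levels downward
                byte[] candidate = encode(original, q / 100f);
                BufferedImage decoded = ImageIO.read(new ByteArrayInputStream(candidate));
                if (mse(original, decoded) > threshold) {
                    System.out.println("Best Q is about " + (q + 5));
                    break;
                }
            }
        }
    }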

And what about product images against white backgrounds? You really want to reduce ringing/haloing artifacts around the subject. What about custom chroma-subsampling settings on a per-image basis? That red dress against a white background looks all washed-out now. You’ve learned that stripping EXIF metadata will bring the file size down a bit, but you’ve also removed the Orientation tag and now your images are all rotated incorrectly.

And that’s only the JPEG format. For your PNGs, you’d probably want to re-compress your 7-Zip or Deflate compressed images with something more cutting-edge like Google’s Zopfli. You spin up your script and watch the fan on your CPU start to melt...

Categories: Architecture

Software Development Process Improvement Opportunities

Herding Cats - Glen Alleman - Wed, 06/15/2016 - 15:56

When we hear about all the suggested ways to improve the effectiveness of our development efforts, if we're going to work on improvements, let's go where the REAL money is.

Here's the IT budget for the Federal Government. This is larger than all the IT systems found everywhere else in the world, plus all their custom-built IT stuff. These are not embedded systems; these are business systems.

So if we're going to make improvements in the spend of IT for better value, let's start here.

[Screenshot: US Federal Government IT budget]

 

 

Categories: Project Management

Productive Agile Teams:  I, T, E and M Shaped People

[Image: Pi(e)-shaped person?]

Many Agile discussions talk about team members as generalizing specialists. Generalizing specialists are individuals that have a specialty; however, they also have broad levels of experience that can be applied. Tim Brown of IDEO coined the term ‘T-shaped people’ (or skills) to describe this combination of specialization and experience. There are a number of other letter- or symbol-based metaphors, sort of an alphabet soup of metaphors, that describe the type of person you might find in a team.

  • Dash-shaped people are generalists. They have a breadth of experience, but little depth. My mother used to describe this type of person as “a mile wide and an inch deep.” In baseball, a generalist would be a utility player: highly valued because they fill in at many positions, but not a starter.
  • I-shaped people are specialists. Specialists have a single specialty or focus. My mother often called this type of person “an inch wide and a mile deep.” Considering the concept of staff liquidity from Commitment, an I-shaped person is the most limited of the letters. For example, a person who is a DBA specializing only in NoSQL databases will be less valuable if the team needs help with business analysis.
  • T-shaped people represent the classic agile team member. T-shaped people have a specialty and, in addition, a wider breadth of experience with other skills. T-shaped people have a focus but can fill in when bottlenecks are recognized.
  • M-shaped people have multiple specialties. From the point of view of flexibility, a person with more than one specialty can be applied more flexibly than someone with a single specialty. Each additional specialty shifts our mental picture from a letter to a comb.
  • Pi-shaped people combine breadth with multiple specialties (combining T- and M-shaped people).
  • E-shaped people combine experience, expertise, exploration, and execution. However, a lot of emphasis is placed on the last E: execution. E-shaped people translate ideas into reality. I would suggest that Leonardo da Vinci was an E-shaped person.

Understanding where the individuals on a team fall in the alphabet soup is a powerful input for applying the concept of staff liquidity. A team applying staff liquidity will always allocate the people with the least options (the fewest things they can do, or the most specialized) to work first. These are generally the I-shaped people. Those with more options fill in the gaps in capability after the first wave of allocation and are available to react when things happen. The allocation process provides the team with the most flexibility. Even in high-performing teams, not everyone is an M-, T-, Pi-, or E-shaped person. In my experience, teams are a mix of generalists, specialists, and multi-talented superstars. Everyone on the team needs to recognize the mixture of talents and capabilities in order to manage the work and remove bottlenecks as the development process moves forward.
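
As a toy illustration of that allocation order (the names and skill sets are invented), sorting team members by how many options they have puts the I-shaped people first:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class StaffLiquidity {
        static class Member {
            final String name;
            final List<String> skills;
            Member(String name, String... skills) {
                this.name = name;
                this.skills = Arrays.asList(skills);
            }
        }

        public static void main(String[] args) {
            List<Member> team = new ArrayList<Member>(Arrays.asList(
                new Member("Ana", "NoSQL DBA"),                    // I-shaped
                new Member("Raj", "Java", "testing", "analysis"),  // T-shaped
                new Member("Lee", "Java", "UX", "DBA", "analysis") // M/Pi-shaped
            ));

            // Staff liquidity: allocate the people with the fewest options
            // first; broader people stay free to fill gaps and absorb surprises.
            Collections.sort(team, new Comparator<Member>() {
                public int compare(Member a, Member b) {
                    return Integer.compare(a.skills.size(), b.skills.size());
                }
            });
            for (Member m : team) {
                System.out.println(m.name + " -> " + m.skills);
            }
        }
    }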


Categories: Process Management


SE-Radio Episode 260: Haoyuan Li on Alluxio

Jeff Meyerson talks to Haoyuan Li about Alluxio, a memory-centric distributed storage system. The cost of memory and disk capacity are both decreasing every year, but only the throughput of memory is increasing exponentially. This trend is driving opportunity in the space of big data processing. Alluxio is an open source, memory-centric, distributed, and reliable storage […]
Categories: Programming


Sponsored Post: Gusto, Awake Networks, Spotify, Telenor Digital, Kinsta, Aerospike, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.

  • Awake Networks is an early stage network security and analytics startup that processes, analyzes, and stores billions of events at network speed. We help security teams respond to intrusions with super-human  efficiency and provide macroscopic and microscopic insight into the networks they defend. We're looking for folks that are excited about building systems that handle scale in a constrained environment. We have many open-ended problems to solve around stream-processing, distributed systems, machine learning, query processing, data modeling, and much more! Please check out our jobs page to learn more.

  • Site Reliability Engineer Manager. We at Spotify are looking for an engineering leader (Chapter Lead) to manage the NYC part of the Site Reliability Engineering team. This team works with the infrastructure that powers the music service used by millions of users, built by hundreds of engineers. We create tools, develop infrastructure, and teach good practices to help Spotify engineers move faster. As a Chapter Lead your primary responsibility is to the people on your team: ensuring that the members are growing as engineers, doing valuable work, performing well, and generally having a great time at Spotify. Read more and apply here

  • Site Reliability Engineer. Spotify SREs design, code, and operate tools and systems to reduce the amount of time and effort necessary for our engineers to scale the world’s best music streaming product to 40 million users. We are strong believers in engineering teams taking operational responsibility for their products and work hard to support them in this. We work closely with engineers to advocate sensible, scalable, systems design and share responsibility with them in diagnosing, resolving, and preventing production issues. Read more and apply here

  • Backend Engineer. We at Spotify are looking for senior backend engineers to join our team of talented engineers that share a common interest in distributed backend systems, their scalability and continued development.  You will build the backend systems that power our application, scale highly distributed systems, and continuously improve our engineering practices. Read more and apply here

  • Security Engineer. The security team at Spotify is a distributed team supporting autonomous development teams with a focus on raising security awareness, sharing responsibility, and building tools. We aim to constantly improve the security posture for our fast-paced, rapidly-changing environment in a manner that will keep up with our scale. We’re knowledgeable in many domains of security and are willing to teach (and learn) from anyone at the company. Read more and apply here

  • Data Architect. You will be a key figure in a rapidly growing team, where the role will highly depend on you. You must have extensive experience in Cloud Computing and AWS and deeply understand databases and/or Information Architecture (PostgreSQL, Cassandra, MongoDB, Redis, etc.). And if you also know your way in the Hadoop ecosystem (including Spark and HDFS), Kafka, Cassandra and other big data technologies, this will be more than enough. You have an understanding of how to structure the data sources and data feeds of the Data Insights big data solution, plan for integration and maintenance of the data as well as have an eye on the logical design and on how the data flows through the different stages. Please apply here at Telenor Digital.

  • Data Engineers. You know Java, and possibly Clojure or Scala, are effective in a Linux terminal (shell scripting, configuration files, etc.), have experience with some SQL database, preferably PostgreSQL, have experience with Apache Kafka, Apache Spark, Elasticsearch. You enjoy automating things and building systems. Machine learning experience is considered a plus, and Continuous Integration + delivery is important to you, and writing tests a given. You are humble and passionate; you like to listen and can understand the viewpoints of others and strive to be a good dialog partner, but you can focus on delivery once a direction is decided. Please apply here at Telenor Digital.

  • Software Engineers, Analytics. You've got strong front-end developer skills: HTML, CSS, and Javascript, with knowledge of D3.js or other charting libraries - Clojurescript is a plus; have worked with various programming languages, like Java, Clojure, or Python; have experience with SQL (PostgreSQL). You have experience with Cloud Computing, especially with AWS, a deep foundation in computer science; data structures, algorithms and programming languages, as well as networking and concurrency; exposure to architectural patterns of a large, high-scale web applications; experience with shell scripting, configuration files, etc. and enjoy automating things and building systems. Please apply here at Telenor Digital.

  • Software Engineer (DevOps). You are one of those rare engineers who loves to tinker with distributed systems at high scale. You know how to build these from scratch, and how to take a system that has reached a scalability limit and break through that barrier to new heights. You are a hands on doer, a code doctor, who loves to get something done the right way. You love designing clean APIs, data models, code structures and system architectures, but retain the humility to learn from others who see things differently. Apply to AppDynamics

  • Software Engineer (C++). You will be responsible for building everything from proof-of-concepts and usability prototypes to deployment- quality code. You should have at least 1+ years of experience developing C++ libraries and APIs, and be comfortable with daily code submissions, delivering projects in short time frames, multi-tasking, handling interrupts, and collaborating with team members. Apply to AppDynamics
Fun and Informative Events
  • NoSQL Databases & Docker Containers: From Development to Deployment. What is Docker and why is it important to Developers, Admins and DevOps when they are using a NoSQL database? Find out in this on-demand webinar by Alvin Richards, VP of Product at Aerospike, the enterprise-grade NoSQL database. The video includes a demo showcasing the core Docker components (Machine, Engine, Swarm and Compose) and integration with Aerospike. See how much simpler Docker can make building and deploying multi-node, Aerospike-based applications!  

  • Discover the secrets of scalability in IT. The cream of the Amsterdam and Berlin tech scene are coming together during TechSummit, hosted by LeaseWeb for a great day of tech talk. Find out how to build systems that will cope with constant change and create agile, successful businesses. Speakers from SoundCloud, Fugue, Google, Docker and other leading tech companies will share tips, techniques and the latest trends in a day of interactive presentations. But hurry. Tickets are limited and going fast! No wonder, since they are only €25 including lunch and beer.
Cool Products and Services
  • Kinsta provides high speed, automatically scalable managed WordPress hosting services for businesses large and small. All servers run on Google Cloud and all individual sites are completely compartmentalized using the latest LXD technology. All sites include powerful SSH access and tools like Git and WP-CLI are available out-of-the-box.

  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Deadlines Require Time and Care For Success

Herding Cats - Glen Alleman - Tue, 06/14/2016 - 16:10

The latest thread in agile is ...

the continued paradigm of deadline-driven development is killing the benefits that Agile Software Development can bring.

It is suggested by Neil Killick that ...

... using genuine time constraints as a factor in the prioritisation of work rather than as a focus for its execution, the odds of meeting those "deadlines" are actually improved.

Not sure what a genuine time constraint is versus any other time constraint. But the conjecture that "executives of software product and service companies always want stuff delivered faster, cheaper, and better, and Agile principles and methods are believed to be a way to achieve this" ignores several fundamental principles of all work, and especially of software development work.

All project work has uncertainty: reducible uncertainty and irreducible uncertainty.

Agile does not remove this uncertainty. Agile is NOT a risk management process. Genuine constraints don't remove this uncertainty. I've spoken many times in the past about how to Manage in the Presence of Uncertainty.

These principles are always in place. More so on Agile projects, where emergent requirements are encouraged, which in turn drives uncertainty further to the right in the Probability Distribution Function of the possible range of durations, costs, and technical performance shortfalls.

When those uncertainties are not considered and handled, any project is going to have an increased chance of being late, being over budget, and having technical issues.

Setting genuine constraints may make this issue visible, but does not remove the risk to the project's probability of success. Only active risk management and the necessary margin can increase this probability.

The only protection against irreducible uncertainty is margin, and the only protection against reducible uncertainty is active risk management. Both of these activities require careful planning and execution of the plan, along with an estimate of the probability of occurrence of each reducible event and the statistical distribution of the naturally occurring variances and their probabilistic impact on the success of the project.
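
As one deliberately simplified illustration of sizing schedule margin against those naturally occurring variances, here is a Monte Carlo sketch; the three-point task estimates and the triangular distribution are assumptions for the example, not a prescription:

    import java.util.Arrays;
    import java.util.Random;

    public class ScheduleMargin {
        // Sample from a triangular(min, mode, max) distribution - one common
        // model for the natural variance of a single task's duration.
        static double triangular(Random rnd, double min, double mode, double max) {
            double u = rnd.nextDouble();
            double cut = (mode - min) / (max - min);
            return u < cut
                ? min + Math.sqrt(u * (max - min) * (mode - min))
                : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
        }

        public static void main(String[] args) {
            // Invented three-point estimates (days) for a five-task chain.
            double[][] tasks = { {3, 5, 9}, {2, 4, 8}, {5, 8, 15}, {1, 2, 4}, {4, 6, 12} };
            int trials = 100000;
            double[] totals = new double[trials];
            Random rnd = new Random(42);

            for (int i = 0; i < trials; i++) {
                double total = 0;
                for (double[] t : tasks) total += triangular(rnd, t[0], t[1], t[2]);
                totals[i] = total;
            }
            Arrays.sort(totals);
            double p50 = totals[trials / 2];
            double p80 = totals[(int) (trials * 0.8)];
            // Committing at the 80th percentile rather than the median makes
            // the difference explicit: that difference is the schedule margin.
            System.out.printf("P50 = %.1f days, P80 = %.1f days, margin = %.1f days%n",
                    p50, p80, p80 - p50);
        }
    }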

This is the Reason for Planning

It's been suggested I work in a unique domain, where deadlines and need dates are themselves unique. This is False.

No credible business, no matter the size, doesn't have a need date for the Value produced by the software project. If there were no need date, the developers would show up whenever they wanted, after spending whatever they wanted, with whatever they thought the customer needed.

Ignoring even the simple time cost of money and the time-phased Return on Investment, (Value - Cost)/Cost, any business that intends to stay in business is spending money on software - either developed or purchased - to provide some value to the business. Not having a need date for the production of that Value means the business is ignoring the core principle of retained earnings. Even non-profits and not-for-profit businesses (and I've worked there as well) have a time-value-of-money economic model.
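
To make the arithmetic explicit, here is a small sketch of the two calculations just named, with invented figures:

    public class ValueTiming {
        // Simple Return on Investment: (Value - Cost) / Cost.
        static double roi(double value, double cost) {
            return (value - cost) / cost;
        }

        // Time cost of money: a cash flow received t periods from now,
        // discounted at rate r per period, is worth less than today.
        static double presentValue(double cashFlow, double r, int t) {
            return cashFlow / Math.pow(1.0 + r, t);
        }

        public static void main(String[] args) {
            double cost = 400000, value = 600000; // invented figures
            System.out.printf("ROI now: %.0f%%%n", 100 * roi(value, cost));
            // The same value delivered a year late (12 periods at 1%/month)
            // is worth less, so the ROI of the late delivery drops.
            double lateValue = presentValue(value, 0.01, 12);
            System.out.printf("ROI a year late: %.0f%%%n", 100 * roi(lateValue, cost));
        }
    }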

The End

  • No process can remove the irreducible uncertainties that create risk (naturally occurring variances) in project work - only margin can protect the project from these variances: schedule margin, cost margin, technical margin. If you're building software with Scrum, add more sprints to cover the margin. Or be prepared to deprecate the needed Capabilities and the Features and Stories that implement those Capabilities. You do know what needed business or mission capabilities will be produced for the money you're spending, right? NO? Then you're on a death march project from day one. Genuine constraints aren't going to do anything for you. Nothing's going to help that project.
  • For reducible uncertainties that create risk (probability-of-occurrence uncertainties), specific actions need to be taken to reduce the risk from these probabilistic events. This can be buying two, running extra tests on test beds, formal verification and validation, or external surveillance† of the product (DB engineering, security, performance assessments).

So if you're going to produce value for your customer, that value is almost always time sensitive; otherwise it's of de minimis value. If it's time sensitive, there is a deadline. If there's a deadline, reducible and irreducible uncertainty and the risk they produce must be handled.

Risk Management is How Adults Manage Projects - Tim Lister

† The naive notion that scrum teams are self-contained and need no external support is only the case when there is little at risk in the resulting code. Cybersecurity, database integrity, performance validation, and operational integrity are external surveillance roles on any Software Intensive System of Systems. This is called Governance and is guided by documents like ISO 12207, ITIL V3.1, COBIT, DoDAF, and TOGAF.

Categories: Project Management

Software Economics

Herding Cats - Glen Alleman - Mon, 06/13/2016 - 19:37

Here are some thoughts on the economics of software development using other people's money, after 3 weeks of working a proposal for a major software-intensive system of systems using Agile.

  • With the advent of Agile, the linear spend planning and delivery of capabilities was altered to iterative and incremental spend and delivery planning.
  • Time boxed, drip funding, and fixed budget are funding profiles that might be added to the economic model once they've been verified and validated in practice to provide better approaches to managing funding in the presence of uncertainty.
  • Core business processes are still in effect, no matter the funding profile:
    • Money consumed provides capabilities produced.
    • Capabilities enable value from the use of money.
    • Capability is defined by the user through some form of value assessment, not by the provider.
    • Emergent needs can be addressed.
  • When we merge business value with development cost without monetizing that business value, we've lost the ability to make economic decisions.
    • Both the business value and the development cost operate in the presence of uncertainty.
    • This uncertainty is always present.
    • To make those economic decisions, we need to estimate both business value and development cost.
    • There is no way out of this in any credible development environment.
  • Monetized value allows the decision process to use ROI, IRR, and Analysis of Alternatives.
  • Without monetized value, cost and value decisions are simply made up and arbitrary.

So to come full circle 

Why Do We Need Estimates?

  • It's not the developers that need the estimates - they take the money and turn that money into value.
    • They should estimate if the needed value can be produced by that money.
    • But if the developers decided they don't need to estimate, then they'll be subject to the whims of management, just like Dilbert.
  • The developers are certainly closest to the work and have the information needed to best contribute to the estimate.
  • Estimates are primarily used to support decisions.
    • Product margin.
    • Cost target for business management.
    • ROI, IRR, AOA.
    • Staffing, release date, launch date - literally and figuratively.
  • Knowing the cost of a product or service produced is a fundamental piece of information needed by the business if they intend to stay in business.

It can't be any clearer than this: if you don't know what something costs, or is going to cost in the future, you can't make a business decision about its value.

When you read that "estimates are a waste," "estimates are non-transferable," "estimates are wrong," or "estimates are temporary," think again. Go ask those paying if they need estimates to make decisions for the business. If they don't, then continue to spend your customer's money. If they do, they may consider looking for someone who knows the difference between willful ignorance of how to estimate software development and someone who can provide the information needed to stay in business. On our proposal team, that attitude would get you the 2nd place prize in the estimating capabilities contest - which is a one-way trip home with weekends off.

Categories: Project Management

The Case for and Against Estimates, Part 5

If you’ve been following the conversation, I discussed in Part 1 how I like agile roadmaps and gross estimation and/or targets for projects and programs. In Part 2, I discussed when estimates might not be useful. In Part 3, I discussed how estimates can be useful. In Part 4, I discussed #noestimates. Let me summarize my thinking and what I do here.

This series started because Marcus Blankenship and I wrote two articles: Stay Agile With Discovery, which is how to help your client see benefit and value early from a small project first, before you have to estimate the whole darn thing; and Use Demos to Build Trust, how to help your client see how they might benefit from agile in their projects, even if they want an upfront estimate of “all” the work.

Let me clarify my position: I am talking about agile projects. I am not talking about waterfall, iterative, or even incremental approaches. I have used all those approaches, and my position in these posts is about agile projects and programs.

In addition, I have a ton of experience in commercial, for-profit organizations. I have that experience in IT, Engineering, R&D. I have very little experience in non-profit or government work. Yes, I have some, but those clients are not the bulk of my consulting business. As with everything I write (or anyone else does), you need to take your context into account. I see wide project and program variation in each client, never mind among clients.

That said, in agile, we want to work according to the agile principles (look past the manifesto at the principles). How can we welcome change? How can we show value? How can we work with our customer?

Many people compare software to construction. I don’t buy it. Here’s a little story.

In my neighborhood, the gas utility is replacing gas mains. The project is supposed to take about three months or so. We received a letter in May saying which streets they expected to work on and when. The reality is quite different.

They work on one street, and have to go around the corner to another street? Why? Because the mains and lines (water, gas, electric) are not where the drawings said they would be. Instead of a nice grid, they go off at 45-degree angles, cross the street, come back, etc. Nothing is as the plans suggested it should be. During the day, I have no idea what streets will be open for me to drive on. The nice folks put everything back to some semblance of normal each night. They patch the roads they work on during each day.

And yet, they are on budget. Why? Because they accounted for the unknowns in the estimate. They padded the estimate enough so that the contractor would make money. They accounted for almost everything in their estimate. How could they do this? The company doing the work has seen these circumstances before. They knew the 50-year-old plans were wrong. They didn’t know how, but they’ve seen it all before, so they can manage their risks.

The learning potential in their work is small. They are not discovering new risks every day. Yes, they are working around local technical debt. They knew what to expect and they are managing their risks.

Here’s another story of a software project. This is also a three-to-four month project (order of magnitude estimate). The product hasn’t been touched in several years, much to the software team’s dismay. They have wanted to attack this project for a couple of years and they finally got the go-ahead.

Much has changed since they last touched this product. The build system, the automated testing system, the compiler—all of those tools have changed. The people doing the work have changed. The other products that interact with this product have changed.

The team is now working in an agile way. They deliver demonstrable product almost every day. They show the working product every week to senior management.

They are learning much more than they thought they would. When they created the estimate, they had assumptions about underlying services from other products. Well, some of those assumptions were not quite valid. They asked what was driving the project and were told the release date. Well, that changed to feature set. (See Estimating the Unknown, Part 1 for why that is a problem.)

They feel as if the project is a moving target. In some ways, it is. The changes arose partly because of what the team was able to demonstrate. The PO decided that because they could do those features over there and release those features earlier, they could reduce their Cost of Delay. Because they show value early and often, they are managing the moving target changes in an agile way. I think they will be able to settle down and work towards a target date once they get a few more features done and released.

Why is this team in such project turmoil? Here are some reasons:

  • Their assumptions about the product and its interactions were not correct. They had spent three days estimating “everything.” They knew enough to start. And, they uncovered more information as they started. I asked one of the team members if they could have estimated longer and learned more. He said, “Of course. It wasn’t worth more time to estimate. It was worth our time to deliver something useful and get some feedback. Sure, that feedback changed the order of the features, so we discovered some interesting things. But, getting the feedback was more useful than more estimation.” His words, not mine.
  • The tooling had changed, and the product had not changed to keep up with the tooling. The team had to reorganize how they built and tested just to get a working build before they built any features.
  • The technical debt accumulated in this product and across the products for the organization. Because the management had chosen projects by estimated duration in the past, they had not used CoD to understand the value of this project until now.

The team is taking one problem at a time, working that problem to resolution and going on to the next. They work in very small chunks. Will they make their estimate of 3-4 months? They are almost 3 months in. I don’t think so, and that’s okay. It’s okay because they are doing more work than they or their management envisioned when the project started. In this case, the feature set grew. It partly grew because the team discovered more work. It partly grew because the PO realized there was more value in other features not included in the original estimate.

In agile, the PO can take advantage of learning about features of more value. This PO works with the team every day. (The team works in kanban, demos in iterations.)

The more often we deliver value, the more often we can reflect on what the next chunk of value should be. You don’t have to work in kanban. This team likes to do so.

The kinds of learning this team gains from the software project are different from what the gas main people are learning in my neighborhood. Yes, the tools have changed since the gas mains were first installed. But the scope of those changes is much smaller than even the tool changes for the software project.

The gas main project does “finish” something small every day, in the sense that the roads are safe for us to drive on when they go home at night. However, the patches are just that—patches for the road, not real paving. The software team finishes demonstrable value every day. If they had to stop the project at any time, they could. The software team is finishing. (To be fair to the gas people, it probably doesn’t make monetary sense to pave a little every day to done. And, we can get to done, totally done, in software.)

The software team didn’t pad the estimate. They said, “It’s possible to be done in 3 months. It’s more likely to be done in 4 months. At the outside, we think it will take 5 months.” And, here’s what’s interesting. If they had completed just what was in their original estimate, they might well be done by now. And, because it’s software, and because they deliver something almost every day, everyone—the PO, management, the team—see where there is more value and less value.

The software team’s roadmap has changed. The product vision hasn’t changed. Their release criteria have changed a little, but not a lot. They have changed what features they finish and the order in which they finish them. That’s because people see the product every day.

Because the software team, the PO and the management are learning every day, they can make the software product more valuable every day. The gas main people don’t make the project more valuable every day.

Is estimation right for you? Some estimation is almost always a good decision. If nothing else, the act of saying, “What will it take us to do this thing?” helps you see the risks and if/how you want to decompose that thing into smaller chunks.

Should you use Cost of Delay in making decisions about what feature to do first and what project to do first? I like it because it’s a measure of value, not cost. When I started to think about value, I made different decisions. Did I still want a gross estimate? Sure. I managed projects and ran departments where we delivered by feature. I had a ton of flexibility about what to do next.
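
As a tiny illustration of that value-first ordering, here is a sketch using one common formulation, Cost of Delay divided by duration (sometimes called CD3); the features and numbers are invented, not taken from the projects described above:

    public class CostOfDelayOrdering {
        public static void main(String[] args) {
            // Invented features: value lost per week of delay, estimated weeks.
            String[] names = { "Feature A", "Feature B", "Feature C" };
            double[] codPerWeek = { 10000, 4000, 7000 };
            double[] weeks = { 8, 2, 3 };

            // CD3 = cost of delay / duration: schedule the highest ratio first.
            // A cheap gross duration estimate still matters, but value drives
            // the order, not cost.
            for (int i = 0; i < names.length; i++) {
                System.out.printf("%s: CD3 = %.0f%n", names[i], codPerWeek[i] / weeks[i]);
            }
            // C (~2333) and B (2000) outrank A (1250) despite A's higher raw value.
        }
    }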

Are #noestimates right for you? It depends on what your organization needs. I don’t need estimates in my daily work. If you work small and deliver value every day and have transparency for what you’re doing, maybe you don’t need them either.

Estimates are difficult. I find estimation useful, the estimates not so much. I find that looking at the cost and risks is one way to look at a project. Looking at value is another way.

I like asking if what my management wants is commitment or resilience. See When You Need to Commit. Many organizations want to use agile for resilience, and then they ask for long commitments. It’s worthwhile asking questions to see what your organization wants.

Here are my books that deal with managing projects, estimation, Cost of Delay and program management:

For me, estimates are not inherently good or bad. They are more or less useful. For agile projects, I don’t see the point of doing a lot of estimation. Why? Because we can change the backlog and finish different work. Do I like doing some estimation to understand the risks? Yes.

I don’t use cost as a way to evaluate projects in the project portfolio. I prefer to look at some form of value rather than only use an estimate. For agile projects, this works, because we can see demonstrable product soon. We can change the project portfolio once we have seen delivered value.

Remember, my context is not yours. These ideas have worked for me on at least three projects. They might not work for you. On the other hand, maybe there is something you can use from here in your next agile project or program.

Please do ask more questions in the comments. I might not do a post in a while. I have other writing to finish and these posts are way too long!

Categories: Project Management

The Failure Shirt: Agile Diplomats at the EU

NOOP.NL - Jurgen Appelo - Mon, 06/13/2016 - 15:23
The Failure Shirt

“We made a mistake. They are going to hate me tomorrow,” he said.

Imagine a room with 28 EU diplomats, twice that many specialists, dozens of translators, and one chairman who needs to tell everyone that his team has made a planning mistake and that everyone will suffer for it.

Needless to say, the chairman wasn’t looking forward to the next day. “I need them in a cooperative mode,” he said. “It is hard enough already to get agreements out of 28 countries. They will be very annoyed with us screwing up the planning. We’re not going to make much progress tomorrow.”

The Failure Hat

This problem made me think of people management in agile environments. Agile teams often have creative solutions to social problems, and one of those solutions immediately came to mind.

I told the chairman that, on some agile teams, if anyone has made a mistake for which the entire team has to suffer, that person wears the failure hat for a whole day. If someone accidentally destabilizes the product or “breaks the build” he or she is visually identified as the scapegoat, in a playful manner, so that everyone knows who did it. With a failure hat, people change from pointing fingers to poking fun.

Resentment and Vengeance

It is a human tendency to be resentful when other people make mistakes for which we have to suffer. In fact, vengeance is one of the sixteen basic desires of human beings, says behavioral psychologist Professor Steven Reiss. Even if there’s no immediate urge to hit back and retaliate for any (accidental) wrongdoings, we certainly feel it’s in our right to be pissed off and remain uncooperative, until the feelings of irritation have worn off. And this can take a while.

That’s why it’s rarely enough just to say, “I’m sorry”, however sincerely these words are spoken. The apology takes just one second of communication. But it can take hours for an annoyed person to say sincerely, “OK, I forgive you.” And on an agile team, or with a group of 28 diplomats, these can be costly uncooperative hours.

The Failure Shirt

I advised the chairman, “After you told them that you’re sorry, wear a silly hat, or a stupid shirt, for the rest of the day. Explain to them the meaning of the failure hat or failure shirt: You openly admit the mistake, and you allow everyone to point at you and laugh at you for a whole day, on the condition that you can immediately switch back into a collaborative mode.”

The next day, the chairman, who is well-known for his expensive suits and crisply tailored shirts, did exactly what I said. For the sake of the meeting, he sacrificed his dignity, admitted the mistake of his team, took off his jacket and stark white shirt in front of 80+ diplomats, specialists, and translators, and revealed the most ridiculous colored T-shirt that anyone had ever worn during an EU negotiation.

It was a huge success.

He received great applause, and laughs and cheers from everyone in the room. During the coffee breaks, half of the attendees took pictures and selfies with him and some congratulated him on his smart management move.

Most importantly, the rest of the day, the group enjoyed a cooperative and maybe even somewhat festive mood. People’s feelings of resentment and vengeance were satisfied: They could all see the chairman sitting there, suffering, in his silly shirt. Who wouldn’t smile at that? Let’s take another picture! The rest of the meeting was conducted in the failure shirt.

Afterward, the chairman said to me, “We made significant progress today, and I am now ridiculously popular. You’re my secret weapon!”

I felt extremely pleased that better management practices can work in any context. And also happy that I had an opportunity to assist in the career of my husband.

p.s. Make sure that wearing the shirt or hat feels somewhat embarrassing to the guilty person. I wouldn’t feel guilty in a colorful T-shirt. In fact, it was my shirt.

My new book Managing for Happiness is available from June 2016. PRE-ORDER NOW!

[Image: Managing for Happiness cover (front)]

The post The Failure Shirt: Agile Diplomats at the EU appeared first on NOOP.NL.

Categories: Project Management

Software Development Linkopedia June 2016

From the Editor of Methods & Tools - Mon, 06/13/2016 - 14:35
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about software bugs, rewriting code, project routines, improving retrospectives, acceptance testing problems, #noprojects, DevOps and Agile (or not) software architecture. Blog: Software has bugs. This is normal. Blog: When to Rewrite […]

SPaMCAST 398 – Kevin Kruse, 15 Secrets Successful People Know About Time Management

[Image: SPaMCAST logo]

http://www.spamcast.net

Listen Now

Subscribe on iTunes. Check out the podcast on Google Play Music.

The Software Process and Measurement Cast 398 features our interview with bestselling author Kevin Kruse.  We discussed his new book, 15 Secrets Successful People Know About Time Management.  The ideas Kevin presents on managing time and more accurately managing focus are extremely useful and in some cases just a bit controversial. Surprising findings include:

  • Most high achievers do NOT use to-do lists.
  • The Harvard experiment that showed how 3 questions saved 8 hours a week.
  • Procrastination is cured by “time traveling” to defeat your future self.
  • Most high achievers practice a consistent morning ritual.
  • How high achievers manage their email.

If you haven’t bought a copy of 15 Secrets Successful People Know About Time Management, I would recommend that you start your personal program to improve your productivity by using the link in the show notes and buying a copy!

Kevin Kruse is an Inc 500 serial entrepreneur, New York Times bestselling author, and Forbes columnist. Kruse has been named a Top 100 Business Thought Leader by Trust Across America. Over the last 20 years, Kevin has started or co-founded several multi-million-dollar companies which have won awards both for fast growth (Inc 500) and for employee engagement (#4 Best Place to Work in PA). As a keynote speaker and performance coach, Kevin has worked with Fortune 500 CEOs, startup founders, US Marine Corps officers, and non-profit leaders.

Contact Information:

twitter.com/Kruse

facebook.com/KruseAuthor

instagram.com/kevin__kruse

www.15TimeSecrets.com

www.KevinKruse.com

info@kevinkruse.com

Re-Read Saturday News

We concluded the read of Commitment – Novel About Managing Project Risk by Maassen, Matts, and Geary. This week’s installment addresses the epilogue (everybody lives happily ever after) and summarizes some of the key concepts that I have already found useful. Next week we will begin re-reading Kent Beck’s XP Explained, Second Edition. I originally read the first edition several years ago on flights traveling between clients. The book provides an important explanation of XP, and even today it confronts us with the realization that Agile is more than just Scrum. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on using storytelling to jumpstart Agile efforts. Storytelling is a natural human activity, practiced from time immemorial, that can be used to create a succinct and informative picture of a business need or the future of an organization. The essay provides an approach for using storytelling and suggests that sometimes the journey an organization must take to achieve a goal needs facilitation.

We will also have columns from the Software Sensei, Kim Pries, and an entry from Gene Hughson’s Form Follows Function Blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


Re-Read Saturday: Commitment – Novel about Managing Project Risk, Part 7


Today we conclude our read of Commitment – Novel about Managing Project Risk with a few highlights.

The novel Commitment presents Rose’s evolution from an adjunct of a traditional project manager into an Agile leader. Rose is ripped out of her safe place and presented with an adventure she is reluctant to take. The project she is thrust into leading is failing, and Rose can either take the fall or change. Real options and Agile techniques are introduced as a path forward for both Rose and the team. In the novel, Agile concepts such as self-organization are at odds with how things are done. When a change is introduced that clashes with how we do things, it generates cognitive dissonance. Coaching and mentoring are methods for sorting out the problems caused when dissonance disrupts an organization.

One of the hardest changes Rose has to address during the novel is that the job is not really done until the work is delivered to the customer. And as we find out later in the story, the job is not done until the customer uses what has been delivered. Many software projects fall prey to this problem because developers and testers are incentivized to complete their portion of the work so they can begin the next project. In many cases, as soon as work is thrown over the wall it disappears from short-term memory. Throwing work over the wall breaks or delays the feedback cycle, which makes rework more costly. In the novel we see this problem occur twice: once between development and testing, and later between the whole team and the customer, who was afraid to implement the code every two weeks. Completing work and generating feedback are critical to making decisions.

The novel’s explanation of staff liquidity was excellent. The process of staff liquidity begins by allocating the people with the fewest options (the fewest things they can do, i.e., the most specialized) to work. In self-managing teams, this requires that team members have a good deal of team and self-knowledge (see the Johari Window). Those with more options fill in the gaps in capability after the first wave of allocation and are available to react when things happen. Allocating the personnel with the most options last provides the team with the most flexibility. It should be noted that a large amount of experience does not necessarily translate into options: expertise and experience in only one capability (for example, a senior person who can only test) provide very few options, so such a person has to be allocated early. Steven Adams relates staff liquidity to T-shaped people. T-shaped people have deep expertise in one or a few areas but shallower expertise outside their specialty. A T-shaped person enjoys learning and will have a good handle on their learning lead time. A team of T-shaped people, combined with staff liquidity, increases the number of options a team has for dealing with problems and changes as they are recognized. A sketch of the allocation rule follows below.
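
To make the allocation rule concrete, here is a minimal sketch in Java. It is my own illustration, not something from the novel: the Member type, the skill names, and the greedy task assignment are all assumptions, but the ordering follows the idea above, allocating the people with the fewest options first.

   import java.util.*;

   class StaffLiquidity {
       // Hypothetical team member: a name plus the set of tasks they can do.
       record Member(String name, Set<String> skills) {}

       // Allocate the most specialized people (fewest options) first, so
       // that generalists remain free to react when things happen.
       static Map<String, String> allocate(List<Member> team, List<String> tasks) {
           List<Member> byOptions = new ArrayList<>(team);
           byOptions.sort(Comparator.comparingInt(m -> m.skills().size()));
           Map<String, String> assignment = new LinkedHashMap<>();
           for (Member member : byOptions) {
               for (String task : tasks) {
                   if (!assignment.containsKey(task) && member.skills().contains(task)) {
                       assignment.put(task, member.name());
                       break;
                   }
               }
           }
           return assignment;
       }

       public static void main(String[] args) {
           var team = List.of(
                   new Member("Tester", Set.of("test")),
                   new Member("Generalist", Set.of("test", "code", "deploy")));
           // Prints {test=Tester, code=Generalist}: the specialist is used
           // where only they fit, and the generalist absorbs the rest.
           System.out.println(allocate(team, List.of("test", "code")));
       }
   }

If the generalist were allocated first, they would take "test" and the tester would be left idle while "code" went unstaffed, which is exactly the failure mode the staff-liquidity ordering avoids.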

In the epilogue of Commitment – Novel about Managing Project Risk, everyone lives happily ever after. At the end of the novel, I am left with a better handle on a number of Agile and lean techniques and, perhaps more importantly, with the need to see the options available so that we can discern the difference between making a commitment and actually having choices. In the end, options allow us to maximize the value we deliver as we navigate a world full of changing context.

Thanks to Steven Adams, who recommended Commitment. Steven re-read the book and provided great comments week in and week out (see Steven’s blog). His comments filled in gaps and drew my eye to ideas that I had not put together.

Next week we begin the re-read of Kent Beck’s XP Explained, Second Edition.

Previous Installments:

Part 1 (Chapters 1 and 2)

Part 2 (Chapter 3)

Part 3 (Chapter 4)

Part 4 (Chapter 5)

Part 5 (Chapter 6)

Part 6 (Chapter 7)

Categories: Process Management

Security "Crypto" provider deprecated in Android N

Android Developers Blog - Fri, 06/10/2016 - 20:10

Posted by Sergio Giro, software engineer


If your Android app derives keys using the SHA1PRNG algorithm from the Crypto provider, you must start using a real key derivation function and possibly re-encrypt your data.

The Java Cryptography Architecture allows developers to create an instance of a class like a cipher, or a pseudo-random number generator, using calls like:

SomeClass.getInstance("SomeAlgorithm", "SomeProvider");

Or simply:

SomeClass.getInstance("SomeAlgorithm");

For instance,

Cipher.getInstance("AES/CBC/PKCS5PADDING");
SecureRandom.getInstance("SHA1PRNG");

On Android, we don’t recommend specifying the provider. In general, any call to the Java Cryptography Extension (JCE) APIs specifying a provider should only be done if the provider is included in the application or if the application is able to deal with a possible ProviderNotFoundException.
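
As a hedged illustration of that advice (my sketch, not code from this post), a fallback might look like the following; “SomeProvider” is a placeholder for a provider your app actually bundles, and the catch clause lists the checked exceptions the JCE getInstance call declares.

   import java.security.NoSuchAlgorithmException;
   import java.security.NoSuchProviderException;
   import java.security.SecureRandom;

   // Minimal sketch: try a provider the app bundles itself; otherwise
   // fall back to the platform default instead of failing.
   static SecureRandom secureRandomOrDefault() {
       try {
           // "SomeProvider" is a hypothetical bundled provider.
           return SecureRandom.getInstance("SHA1PRNG", "SomeProvider");
       } catch (NoSuchAlgorithmException | NoSuchProviderException e) {
           return new SecureRandom(); // platform default
       }
   }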

Unfortunately, many apps depend on the now-removed “Crypto” provider for an anti-pattern of key derivation.

This provider only supplied an implementation of the “SHA1PRNG” algorithm for instances of SecureRandom. The problem is that the SHA1PRNG algorithm is not cryptographically strong. For readers interested in the details, On statistical distance based testing of pseudo random sequences and experiments with PHP and Debian OpenSSL, Section 8.1, by Yongge Wang and Tony Nicol, states that the “random” sequence, considered in binary form, is biased towards returning 0s, and that the bias worsens depending on the seed.

As a result, in Android N we are deprecating the implementation of the SHA1PRNG algorithm and the Crypto provider altogether. We’d previously covered the issues with using SecureRandom for key derivation a few years ago in Using Cryptography to Store Credentials Safely. However, given its continued use, we will revisit it here.

A common but incorrect usage of this provider was to derive keys for encryption by using a password as a seed. The implementation of SHA1PRNG had a bug that made it deterministic if setSeed() was called before obtaining output. This bug was used to derive a key by supplying a password as a seed, and then using the “random” output bytes for the key (where “random” in this sentence means “predictable and cryptographically weak”). Such a key could then be used to encrypt and decrypt data.
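
For recognition purposes only, that anti-pattern looked roughly like the sketch below. This is my reconstruction from the description above, and it is insecure by design; do not copy it into new code.

   // INSECURE: do NOT use. Seeding SHA1PRNG with a password and reading
   // the "random" bytes produced a predictable, cryptographically weak key.
   static byte[] deriveKeyBadly(String password, int keySizeInBytes)
           throws Exception {
       SecureRandom rng = SecureRandom.getInstance("SHA1PRNG");
       rng.setSeed(password.getBytes(StandardCharsets.US_ASCII));
       byte[] keyBytes = new byte[keySizeInBytes];
       rng.nextBytes(keyBytes); // deterministic on the old Crypto provider
       return keyBytes;
   }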

In the following, we explain how to derive keys correctly, and how to decrypt data that has been encrypted using an insecure key. There’s also a full example, including a helper class to use the deprecated SHA1PRNG functionality, with the sole purpose of decrypting data that would be otherwise unavailable.

Keys can be derived in the following way:

  • If you're reading an AES key from disk, just store the actual key and don't go through this weird dance. You can get a SecretKey for AES usage from the bytes by doing:

    SecretKey key = new SecretKeySpec(keyBytes, "AES");

  • If you're using a password to derive a key, follow Nikolay Elenkov's excellent tutorial with the caveat that a good rule of thumb is the salt size should be the same size as the key output. It looks like this:
   /* User types in their password: */
   String password = "password";

   /* Store these things on disk, and use them to derive the key later: */
   int iterationCount = 1000;
   int saltLength = 32; // bytes; should be the same size as the key output (256 / 8 = 32)
   int keyLength = 256; // 256 bits for AES-256, 128 bits for AES-128, etc.

   /* When first creating the key, obtain a salt with this: */
   SecureRandom random = new SecureRandom();
   byte[] salt = new byte[saltLength];
   random.nextBytes(salt);

   /* Use this to derive the key from the password: */
   KeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt,
           iterationCount, keyLength);
   SecretKeyFactory keyFactory = SecretKeyFactory
           .getInstance("PBKDF2WithHmacSHA1");
   byte[] keyBytes = keyFactory.generateSecret(keySpec).getEncoded();
   SecretKey key = new SecretKeySpec(keyBytes, "AES");

That's it. You should not need anything else.

To make transitioning data easier, we covered the case of developers that have data encrypted with an insecure key, which is derived from a password every time. You can use the helper class InsecureSHA1PRNGKeyDerivator in the example app to derive the key.

 private static SecretKey deriveKeyInsecurely(String password, int keySizeInBytes) {
     byte[] passwordBytes = password.getBytes(StandardCharsets.US_ASCII);
     return new SecretKeySpec(
             InsecureSHA1PRNGKeyDerivator.deriveInsecureKey(
                     passwordBytes, keySizeInBytes),
             "AES");
 }

You can then re-encrypt your data with a securely derived key as explained above, and live a happy life ever after.
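
Putting the pieces together, a one-time migration might look like the sketch below. This is my illustration rather than code from the example app: the AES/CBC/PKCS5Padding transformation and the IV handling are assumptions, so match them to however your data was originally encrypted.

   import javax.crypto.Cipher;
   import javax.crypto.SecretKey;
   import javax.crypto.spec.IvParameterSpec;

   // Sketch: decrypt with the insecurely derived key, then re-encrypt
   // with a securely derived key (e.g. the PBKDF2 key from above).
   static byte[] reencrypt(String password, byte[] oldIv, byte[] ciphertext,
                           SecretKey newKey, byte[] newIv) throws Exception {
       SecretKey oldKey = deriveKeyInsecurely(password, 32); // 32 bytes = AES-256

       Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
       cipher.init(Cipher.DECRYPT_MODE, oldKey, new IvParameterSpec(oldIv));
       byte[] plaintext = cipher.doFinal(ciphertext);

       cipher.init(Cipher.ENCRYPT_MODE, newKey, new IvParameterSpec(newIv));
       return cipher.doFinal(plaintext);
   }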

Note 1: As a temporary measure to keep apps working, we decided to still create the instance for apps targeting SDK version 23 (the SDK version for Marshmallow) or lower. Please don't rely on the presence of the Crypto provider in the Android SDK; our plan is to delete it completely in the future.

Note 2: Because many parts of the system assume the existence of a SHA1PRNG algorithm, when an instance of SHA1PRNG is requested and the provider is not specified we return an instance of OpenSSLRandom, which is a strong source of random numbers derived from OpenSSL.

Categories: Programming

One Week Remaining for Super-Early-Bird Registration for Product Owner and Writing Workshops

I have two online workshops starting in late August:

If you have been reading the estimation posts and are wondering, “How do I help my project deliver something every day or more often?” you should register for the workshop. We’ll discuss what your role is, how to plan for the future and the present, how to decide which features to do when, how to know when a feature is done, and tips and traps. See Practical Product Owner Workshop: Deliver What Your Customers Need for more details.

If you like the way I write (regardless of whether you agree with me), and you need to write more for your job or to help your business, take my Non-Fiction Writing Workshop: Write Non-Fiction to Enhance Your Business and Reputation. That workshop is about building a writing habit and learning the separate parts of pre-writing, writing, editing, and publishing. We’ll address your specific writing challenges, concerns, and fears. I’m the only one who will read your work, so no worries about other people seeing your writing.

Super-early-bird registration for both workshops ends June 17, 2016. I hope you decide to join us.

Categories: Project Management

How Smart is Your City Transportation?

How easy is it to get around in your city from point A to point B?

Here’s an interesting article that rounds up some of the latest ideas:

Getting Around in European capitals: How smart is your city?

I really like this one. Talk about impact:

Autolib’ has taken thousands of cars off the roads, brought down driving costs by 90% and is reducing pollution by millions of metric tons per year.

Dense city + mass transit creates opportunities.

According to the article, here are what some cities are doing:

  1. In London, Transport for London implemented a contactless payment system, so users can just “touch in and out” to pay. When you’re dealing with a billion commuters a year, that’s a big deal. Using the Internet of Things, developers can draw on the sensors across London’s transport system, along with meaningful data in the Cloud, to build better transport apps that address technical incidents and protect passengers in new ways.
  2. In Paris, the Internet of Things made it possible to create Autolib’, an electric car-sharing solution. The fleet of electric cars is managed centrally in the Cloud, allowing users to rent cars from kiosks and easily find charging stations. And users can easily find parking, too, with GPS-enabled parking.
  3. In Barcelona, they are using the Internet of Things to improve Bicing, their bicycle-sharing program. They can use sensors to monitor bicycle usage and detect mismatches between supply and demand. They can use that insight to distribute bikes better so that the bikes can be used in a more sustainable way. It’s smart logistics for bicycles in action.
  4. In Helsinki, they are using the Internet of Things to get more value out of their 400 buses. By measuring acceleration, speed, engine temperature, fuel consumption, brake performance, and GPS location, they reduce fuel consumption, improve driver performance, and provide safer bus rides.

I also like how the article framed the challenge right up front by painting the scene of a common scenario where you have to stitch together various modes of transport to reach your destination:

“You just need to take Bus 2 for three stops,
then change to Bus 8 towards the City station,
walk for 10 minutes towards the docks,
then take Line 5 on the metro for 5 stops.
Then call a taxi."

You can imagine all the opportunities to reimagine how people get around, and how inclusive the design can be (whether that means helping blind people safely find their next stop, or helping somebody from out of town navigate their way around).

Depending on how big the city is and how far out the city is spread, there is still room for Uber and Lyft to help stitch the end-to-end mass transit journey together.

And I wonder how long before Amazon’s Now drivers go from local residents who fulfill orders to another ride-share option (do Uber drivers become Amazon Now drivers, or do Amazon Now drivers become Uber drivers?).

Categories: Architecture, Programming