Software Development Blogs: Programming, Software Testing, Agile, Project Management



Feed aggregator

Stuff The Internet Says On Scalability For November 11th, 2016

Hey, it's HighScalability time:


Hacking recognition systems with fashion.


If you like this sort of Stuff then please support me on Patreon.
  • 9 teraflops: PC GPU performance for VR rendering; 1.75 million requests per second: DDoS attack from cameras; 5GB/mo: average data consumption in the US; ~59.2GB: size of Wikipedia corpus; 50%: slower LTE within the last year; 5.4 million: entries in Microsoft Concept Graph; 20 microseconds: average round-trip latencies between 250,000 machines using direct FPGA-to-FPGA messages (Microsoft); 1.09 billion: Facebook daily active mobile users; 300 minutes: soaring time for an AI controlled glider; 82ms: latency streaming game play on Azure; 

  • Quotable Quotes:
    • AORTA: Apple’s service revenue is now consistently greater than iPad and Mac revenue streams making it the number two revenue stream behind the gargantuan iPhone bucket.
    • @GeertHub: Apple R&D budget: $10 billion NASA science budget: $5 billion One explored Pluto, the other made a new keyboard.
    • Steve Jobs: tie all of our products together, so we further lock customers into our ecosystem
    • @moxie: I think these types of posts are also the inevitable result of people overestimating our organizational capacity based on whatever limited success Signal and Signal Protocol have had. It could be that the author imagines me sitting in a glass skyscraper all day, drinking out of champagne flutes, watching over an enormous engineering team as they add support for animated GIF search as an explicit fuck you to people with serious needs.
    • @jdegoes: Devs don't REALLY hate abstraction—they hate obfuscation. Abstraction discards irrelevant details, retaining an essence governed by laws.
    • @ewolff: There are no stateless applications. It just means state is on the client or in the database.
    • @mjpt777: Pushing simple logic down into the memory controllers is the only way to overcome the bandwidth bottleneck. I'm glad to see it begin.
    • @gigastacey: Moral of @0xcharlie car hacking talk appears to be don't put actuators on the internet w/out thinking about security. #ARMTechCon
    • @markcallaghan: When does MySQL become too slow for analytics? Great topic, maybe hard to define but IO-bound index nested loops join isn't fast.
    • @iAnimeshS: A year's computing on the old Macintosh portable can now be processed in just 5 seconds on the #NewMacBookPro. #AppleEvent
    • @neil_conway: OH: "My philosophy for writing C++ is the same as for using Git: 'I stay in my damn lane.'"
    • qnovo: Yet as big as this figure sounds, and it is big, only 3 gallons of gasoline (11 liters) pack the same amount of energy. Whereas the Tesla battery weighs about 1300 lbs (590 kg), 3 gallons of gasoline weigh a mere 18 lbs (8 kg). This illustrates the concept of energy density: a lithium-ion battery is 74X less dense than gasoline.
    • @kelseyhightower: I'm willing to bet developers spend more time reverse engineering inadequate API documentation than implementing business logic.
    • @sgmansfield: OH: our ci server continues to run out of inodes because each web site uses ~140,000 files in node_modules
    • @relix42: “We use maven to download half the internet and npm to get the other half…”
    • NEIL IRWIN: economic expansions do not die of old age—an old expansion like our current one is not likelier to enter a recession in the next year than a young expansion.
    • @popey: I am in 6 slack channels. 1.5GB RAM consumed by the desktop app. In 100+ IRC channels. 25MB consumed by irssi. The future is rubbish.
    • @SwiftOnSecurity: The only way to improve the security of these IoT devices is market forces. They must not be allowed to profit without fear of repercussions
    • The Ancient One: you think you know how the world works. What if I told you, through the mystic arts, we harness energy and shape reality?
    • @natpryce: "If you have four groups working on a compiler*, you'll get a four-pass compiler" *and you describe the problem in terms of passes
    • @PatrickMcFadin: Free cloud APIs are closing up as investors start looking for a return. Codebender is closing down 
    • We have quotes the likes of which even God has never seen. Read the full article to see them all.

  • The true program is the programmer. Ralph Waldo Emerson: “The true poem is the poet's mind; the true ship is the ship-builder. In the man, could we lay him open, we should see the reason for the last flourish and tendril of his work; as every spine and tint in the sea-shell preexist in the secreting organs of the fish.”

  • Who would have thought something like this was possible? A Regex that only matches itself. As regexes go it's not even all that weird looking. One of the comments asks for a proof of why it works. That would be interesting.

  • Docker in Production: A History of Failure. Generated a lot of heat and some light. Good comments on HN and on reddit. A lot of the comments say yes, there are problems with Docker, but end up saying something like...tzaman: That's odd, we've been using Docker for about a year in development and half a year in production (on Google Container Engine / Kubernetes) and haven't experienced any of the panics or crashes yet (at least not any we could not attribute to a failure on our end).

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Veterans Day

Herding Cats - Glen Alleman - Fri, 11/11/2016 - 15:41


For all of us who have served, are serving, and those who gave their lives in service of our country - honor them today

Categories: Project Management

My Core Values

NOOP.NL - Jurgen Appelo - Fri, 11/11/2016 - 14:00
My Core Values

I’m writing this the day after Donald Trump won the election to become the 45th President of the United States. It is a day many of us are sure to remember.

Others have written enough about Trump’s behaviors, ethics, and his style of communication. There is nothing for me to add. Maybe the best way of dealing with the current situation (and the next four years) is to reflect on our own personal values. Criticizing others is easy. But what about being more critical of ourselves?

So… I had a good look at myself, the things that motivate me, and the aspects of my work ethic that I value above everything else. After extensive deliberation, I came up with five core values. I call them my IICCC (double I, triple C):

Independence: I do everything to stay free and autonomous
Integrity: I treat everyone fairly and equally, without discrimination
Curiosity: Exploring and understanding the world has my highest priority
Creativity: Making new things and being innovative is equally important
Competence: When I do work I enjoy, I aim to become very good at it

That’s it! Those are my new core values.

Of course, defining your values is the easy part. The hard part is living them. That will require regular reflection and maybe some rewards for my good behaviors. Some chocolate cookies perhaps.

Let’s hope that Donald Trump is doing a similar exercise.

(This post is part of my soon-to-be-announced Agility Scales project.)

(c)2015 Nichole Burrows, Creative Commons 2.0

The post My Core Values appeared first on NOOP.NL.

Categories: Project Management

Scaling Agile: Scrum of Scrums – Anti-patterns Part 2


Different sort of pyramid syndrome

A Scrum of Scrums (SoS) is a mechanism to coordinate a group of teams so that they act as a team of teams. Powerful tools often have side effects that, if not countered, can do more harm than good. There are several “anti-patterns” that organizations adopt that negatively impact the value a SoS can deliver. In Scaling Agile: Scrum of Scrums: Anti-patterns Part 1 we explored three typical anti-patterns: The Soapbox, The Blame Game and the Surrogate Project Manager. Two other common anti-patterns are the Three Bears and Pyramid syndromes.

The Three Bears Syndrome. In the children’s version of the classic fairy tale, Goldilocks broke into the three bears’ house and sampled three bowls of porridge. One bowl was too hot, one bowl was too cold and one was just right. Many organizations determine a single “right” cadence for SoS meetings across the organization, regardless of context. The problem is that context is really important. Much like the story of The Three Bears, having too many meetings steals time from getting work done, while too few SoS meetings can keep work from being accomplished by delaying issue resolution and decisions. When the number of meetings is just right, work flows through the process with minimal delay caused by the need to wait for coordination or decisions. The most common version of this anti-pattern is the required single daily SoS. Organizations often rigidly require SoS meetings on a daily basis because they believe that what is good for the daily stand-up/Scrum meeting is good for the SoS. Daily sounds like a good cadence, but some projects, for example projects with a handful of teams whose work is only loosely coupled, might not need a daily SoS. Alternately, projects with a large number of very tightly coupled teams late in the development cycle might need multiple SoS meetings on a daily basis. Another variation of this anti-pattern is seen in organizations that reduce the SoS cadence (usually coupled with lengthening the duration of the meetings when they do occur) for projects with distributed teams. In real life, this is the opposite of what is needed: the complexity of projects with distributed teams typically demands more coordination, not less.

One possible solution is to empower the SoS to regulate itself.

  • Allow the SoS to self-regulate its own cadence based on the need of the project. A coach should facilitate the SoS in negotiating a minimum floor for when they will meet. The minimum should then be reviewed periodically so that people attending the SoS can change the meeting cadence as the need for decisions and project risk wax and wane.

Pyramid Syndrome. One of the more exciting features of the Scrum of Scrums technique is the ability to scale the meetings up like a pyramid or a hierarchy. For example, a REALLY big project might have 10 to 20 teams. Twenty teams of 7 members each (Scrum teams are typically 5 – 9 people) would equate to approximately 140 team members, which is still below the common interpretation of Dunbar’s Number. SoS meetings should be limited to the same size as a typical Scrum team (5 – 9 people), with smaller groups typically being better to ensure quick coordination. A project with 20 teams using SoS meetings of 7 people would require 3 SoS meetings for the teams and a 4th with a representative from each of the team-level SoS meetings. In some cases the ability to create layers of SoS meetings leads organizations to believe they can create megaprojects with hundreds of team members. Megaprojects and programs leveraging normal SoS techniques would need many layers of SoS meetings. Each meeting takes time and requires shuttling information between teams (with potential fidelity loss). A few years ago I observed an organization in which some SoS attendees lost several hours a day to SoS meetings. Projects requiring SoS meetings of three or more levels are too large. One possible solution is simply to split the project or program into smaller chunks.

    • Split projects or programs up so that the number of team members involved stays below the approximate limit suggested by Dunbar’s number (150 people).
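The pyramid arithmetic above is easy to sanity-check with a few lines of code. This sketch (the function name and logic are mine, not part of any Scrum standard) counts how many SoS meetings each coordination level needs until a single top-level meeting remains:

```python
import math

def sos_layers(num_teams, meeting_size=7):
    """Number of SoS meetings needed at each level until one
    top-level meeting coordinates the whole project."""
    layers = []
    groups = num_teams
    while groups > 1:
        groups = math.ceil(groups / meeting_size)
        layers.append(groups)
    return layers

# The 20-team example from the text: 3 team-level SoS meetings,
# plus a 4th meeting of their representatives.
print(sos_layers(20))       # → [3, 1]
print(sum(sos_layers(20)))  # → 4 meetings in total
```

By the rule of thumb above, any project where this returns three or more levels is a candidate for splitting.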

The Three Bears and Pyramid Syndromes are two additional anti-patterns that can plague Scrum of Scrums. None of the five anti-patterns we have explored are insurmountable. Solving problems with how SoS meetings work generally requires first diagnosing the problem and then coaching to help replace bad behaviors with good ones.

Categories: Process Management

Android Studio 2.2

Android Developers Blog - Thu, 11/10/2016 - 22:49

By Jamal Eason, Product Manager, Android

Android Studio 2.2 is available to download today. Previewed at Google I/O 2016, Android Studio 2.2 is the latest release of our IDE used by millions of Android developers around the world.

Packed with enhancements, this release has three major themes: speed, smarts, and Android platform support. Develop faster with features such as the new Layout Editor, which makes creating an app user interface quick and intuitive. Develop smarter with our new APK analyzer, enhanced Layout Inspector, expanded code analysis, IntelliJ’s 2016.1.3 features and much more. Lastly, as the official IDE for Android app development, Android Studio 2.2 includes support for all the latest developer features in Android 7.0 Nougat, like code completion to help you add Android platform features like Multi-Window support, Quick Settings API, or the redesigned Notifications, and of course, the built-in Android Emulator to test them all out.

In this release, we evolved the Android Frameworks and the IDE together to create the Constraint Layout. This powerful new layout manager helps you design large and complex layouts in a flat and streamlined hierarchy. The ConstraintLayout integrates into your app like a standard Android support library, and was built in parallel with the new Layout Editor.

Android Studio 2.2 includes 20+ new features across every major phase of the development process: design, develop, build, & test. From designing UIs with the new ConstraintLayout, to developing C++ code with the Android NDK, to building with the latest Jack compilers, to creating Espresso test cases for your app, Android Studio 2.2 is the update you do not want to miss. Here’s more detail on some of the top highlights:


  • Layout Editor: Creating Android app user interfaces is now easier with the new user interface designer. Quickly construct the structure of your app UI with the new blueprint mode and adjust the visual attributes of each widget with new properties panel. Learn more.

Layout Editor

  • Constraint Layout: This new layout is a flexible layout manager for your app that allows you to create dynamic user interfaces without nesting multiple layouts. It is backwards compatible all the way back to Android API level 9 (Gingerbread). ConstraintLayout works best with the new Layout Editor in Android Studio 2.2. Learn more.



  • Improved C++ Support: You can now use CMake or ndk-build to compile your C++ projects from Gradle. Migrating projects from CMake build systems to Android Studio is now seamless. You will also find C++ support in the new project wizard in Android Studio, plus a number of bug fixes to the C++ edit and debug experience. Learn more.

C++ Code Editing & CMake Support

  • Samples Browser: Referencing Android sample code is now even easier with Android Studio 2.2. Within the code editor window, find occurrences of your app code in Google Android sample code to help jump start your app development. Learn more.

Sample Code Menu


  • Instant Run Improvements: Introduced in Android Studio 2.0, Instant Run is our major, long-term investment to make Android development as fast and lightweight as possible. Since launch, it has significantly improved the edit, build, and run iteration cycles for many developers. In this release, we have made many stability and reliability improvements to Instant Run. If you have previously disabled Instant Run, we encourage you to re-enable it and let us know if you come across further issues. (Settings → Build, Execution, Deployment → Instant Run [Windows/Linux], Preferences → Build, Execution, Deployment → Instant Run [OS X]). For details on the fixes that we have made, see the Android Studio 2.2 release notes.

Enable Instant Run

  • APK Analyzer: Easily inspect the contents of your APKs to understand the size contribution of each component. This feature can be helpful when debugging multi-dex issues. Plus, with the APK Analyzer you can compare two versions of an APK. Learn more.

APK Analyzer

  • Build cache (Experimental): We are continuing our investments to improve build speeds with the introduction of a new experimental build cache that will help reduce both full and incremental build times. Just add android.enableBuildCache=true to your gradle.properties file. Learn more.

Build Cache Setting


  • Virtual Sensors in the Android Emulator: The Android Emulator now includes a new set of virtual sensors controls. With the new UI controls, you can now test Android Sensors such as Accelerometer, Ambient Temperature, Magnetometer and more. Learn more.

Android Emulator Virtual Sensors

  • Espresso Test Recorder (Beta): The Espresso Test Recorder lets you easily create UI tests by recording interactions with your app; it then outputs the UI test code for you. You record your interactions with a device and add assertions to verify UI elements in particular snapshots of your app. Espresso Test Recorder then takes the saved recording and automatically generates a corresponding UI test. You can run the test locally, on your continuous integration server, or using Firebase Test Lab for Android. Learn more.
Espresso Test Recorder
  • GPU Debugger (Beta): The GPU Debugger is now in Beta. You can now capture a stream of OpenGL ES commands on your Android device and then replay it from inside Android Studio for analysis. You can also fully inspect the GPU state of any given OpenGL ES command to better understand and debug your graphical output. Learn more.
GPU Debugger

To recap, Android Studio 2.2 includes these major features and more:

Learn more about Android Studio 2.2 by reviewing the release notes and the preview blog post.

Getting Started


If you are using a previous version of Android Studio, you can check for updates on the Stable channel from the navigation menu (Help → Check for Update [Windows/Linux], Android Studio → Check for Updates [OS X]). You can also download Android Studio 2.2 from the official download page. To take advantage of all the new features and improvements in Android Studio, you should also update the Android Gradle plugin to version 2.2.0 in your current app project.

Next Release

We would like to thank all of you in the Android Developer community for your work on this release. We are grateful for your contributions, your ongoing feedback which inspired the new features in this release, and your active use of the canary and beta builds to file bugs. We all wanted to make Android Studio 2.2 our best release yet, with many stability and performance fixes in addition to the many new features. For our next release, look for even more; we want to keep working hard to address feedback and drive up the quality and stability of existing features to keep you productive.

We appreciate any feedback on things you like, issues or features you would like to see. Connect with us -- the Android Studio development team -- on our Google+ page or on Twitter.

What's New in Android Studio 2.2
Categories: Programming

Hackable Projects - Pillar 2: Debuggability

Google Testing Blog - Thu, 11/10/2016 - 20:34
By: Patrik Höglund

This is the second article in our series on Hackability; also see the first article.

“Deep into that darkness peering, long I stood there, wondering, fearing, doubting, dreaming dreams no mortal ever dared to dream before.” -- Edgar Allan Poe

Debuggability can mean being able to use a debugger, but here we’re interested in a broader meaning. Debuggability means being able to easily find what’s wrong with a piece of software, whether it’s through logs, statistics or debugger tools. Debuggability doesn’t happen by accident: you need to design it into your product. The amount of work it takes will vary depending on your product, programming language(s) and development environment.

In this article, I am going to walk through a few examples of how we have aided debuggability for our developers. If you do the same analysis and implementation for your project, perhaps you can help your developers illuminate the dark corners of the codebase and learn what truly goes on there.
Figure 1: computer log entry from the Mark II, with a moth taped to the page.
Running on Localhost
Read more on the Testing Blog: Hermetic Servers by Chaitali Narla and Diego Salas

Suppose you’re developing a service with a mobile app that connects to that service. You’re working on a new feature in the app that requires changes in the backend. Do you develop in production? That’s a really bad idea, as you must push unfinished code to production to work on your change. Don’t do that: it could break your service for your existing users. Instead, you need some kind of script that brings up your server stack on localhost.

You can probably run your servers by hand, but that quickly gets tedious. In Google, we usually use fancy python scripts that invoke the server binaries with flags. Why do we need those flags? Suppose, for instance, that you have a server A that depends on a server B and C. The default behavior when the server boots should be to connect to B and C in production. When booting on localhost, we want to connect to our local B and C though. For instance:

b_serv --port=1234 --db=/tmp/fakedb
c_serv --port=1235
a_serv --b_spec=localhost:1234 --c_spec=localhost:1235

That makes it a whole lot easier to develop and debug your server. Make sure the logs and stdout/stderr end up in some well-defined directory on localhost so you don’t waste time looking for them. You may want to write a basic debug client that sends HTTP requests or RPCs or whatever your server handles. It’s painful to have to boot the real app on a mobile phone just to test something.
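As a concrete sketch, here is roughly what such a wrapper script might look like in Python. The binary names and flags (b_serv, c_serv, a_serv) are the hypothetical ones from the example above, not real tools:

```python
import subprocess
from pathlib import Path

# One well-defined log directory, so nobody wastes time hunting for output.
LOG_DIR = Path("/tmp/devstack-logs")

def build_commands(b_port=1234, c_port=1235, db="/tmp/fakedb"):
    """Command lines for a local A/B/C server stack."""
    return [
        ["b_serv", f"--port={b_port}", f"--db={db}"],
        ["c_serv", f"--port={c_port}"],
        ["a_serv", f"--b_spec=localhost:{b_port}",
                   f"--c_spec=localhost:{c_port}"],
    ]

def bring_up_stack():
    """Launch each server, capturing stdout/stderr per binary."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    procs = []
    for cmd in build_commands():
        log = (LOG_DIR / f"{cmd[0]}.log").open("w")
        procs.append(subprocess.Popen(cmd, stdout=log,
                                      stderr=subprocess.STDOUT))
    return procs
```

A hermetic test can then call bring_up_stack() in its setup and terminate the returned processes in its teardown.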

A localhost setup is also a prerequisite for making hermetic tests, where the test invokes the above script to bring up the server stack. The test can then run, say, integration tests among the servers or even client-server integration tests. Such integration tests can catch protocol drift bugs between client and server, while being super stable by not talking to external or shared services.
Debugging Mobile Apps
First, mobile is hard. The tooling is generally less mature than for desktop, although things are steadily improving. Again, unit tests are great for hackability here. It’s really painful to always load your app on a phone connected to your workstation to see if a change worked. Robolectric unit tests and Espresso functional tests, for instance, run on your workstation and do not require a real phone. XCTest and EarlGrey give you the same on iOS.

Debuggers ship with Xcode and Android Studio. If your Android app ships JNI code, it’s a bit trickier, but you can attach GDB to running processes on your phone. It’s worth spending the time figuring this out early in the project, so you don’t have to guess what your code is doing. Debugging unit tests is even better and can be done straightforwardly on your workstation.
When Debugging gets Tricky
Some products are harder to debug than others. One example is hard real-time systems, since their behavior is so dependent on timing (and you better not be hooked up to a real industrial controller or rocket engine when you hit a breakpoint!). One possible solution is to run the software on a fake clock instead of a hardware clock, so the clock stops when the program stops.
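A minimal sketch of the fake-clock idea in Python; the Watchdog class is an invented example, not taken from any real controller:

```python
class FakeClock:
    """A controllable clock: time advances only when the test says so,
    so stopping at a breakpoint can't desynchronize the system."""
    def __init__(self, start=0.0):
        self._now = start

    def now(self):
        return self._now

    def advance(self, seconds):
        self._now += seconds

class Watchdog:
    """Fires if too much time passes between pings (clock is injected)."""
    def __init__(self, clock, timeout):
        self.clock, self.timeout = clock, timeout
        self.last_ping = clock.now()

    def ping(self):
        self.last_ping = self.clock.now()

    def expired(self):
        return self.clock.now() - self.last_ping > self.timeout

clock = FakeClock()
dog = Watchdog(clock, timeout=5.0)
clock.advance(4.0); print(dog.expired())  # → False
clock.advance(2.0); print(dog.expired())  # → True
```

In production you would inject a wrapper around the hardware clock instead; the rest of the code never knows the difference.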

Another example is multi-process sandboxed programs such as Chromium. Since the browser spawns one renderer process per tab, how do you even attach a debugger to it? The developers have made it quite a lot easier with debugging flags and instructions. For instance, this wraps gdb around each renderer process as it starts up:

chrome --renderer-cmd-prefix='xterm -title renderer -e gdb --args'

The point is, you need to build these kinds of things into your product; this greatly aids hackability.
Proper Logging
Read more on the Testing Blog: Optimal Logging by Anthony Vallone

It’s hackability to get the right logs when you need them. It’s easy to fix a crash if you get a stack trace from the error location. It’s far from guaranteed you’ll get such a stack trace, for instance in C++ programs, but this is something you should not stand for. For instance, Chromium had a problem where renderer process crashes didn’t print in test logs, because the test was running in a separate process. This was later fixed, and this kind of investment is worthwhile to make. A clean stack trace is worth a lot more than a “renderer crashed” message.

Logs are also useful for development. It’s an art to determine how much logging is appropriate for a given piece of code, but it is a good idea to keep the default level of logging conservative and give developers the option to turn on more logging for the parts they’re working on (example: Chromium). Too much logging isn’t hackability. This article elaborates further on this topic.
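With Python’s standard logging module, for example, the conservative-default-plus-opt-in pattern can look like this (the myapp.* logger names are made up for illustration):

```python
import logging

# Conservative default: only warnings and above reach the output.
logging.basicConfig(level=logging.WARNING,
                    format="%(name)s %(levelname)s %(message)s")

# A developer working on the sync code opts in to verbose output for
# just that module, without drowning in logs from everything else.
logging.getLogger("myapp.sync").setLevel(logging.DEBUG)

logging.getLogger("myapp.ui").debug("suppressed: default level is WARNING")
logging.getLogger("myapp.sync").debug("emitted: this module was opted in")
```

The same idea applies in any language: a quiet default level, plus a per-module switch the developer can flip.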

Logs should also be properly symbolized for C/C++ projects; a naked list of addresses in a stack trace isn’t very helpful. This is easy if you build for development (e.g. with -g), but if the crash happens in a release build it’s a bit trickier. You then need to build the same binary with the same flags and use addr2line / ndk-stack / etc to symbolize the stack trace. It’s a good idea to build tools and scripts for this so it’s as easy as possible.
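A small sketch of such a script: it pulls the raw addresses out of a crash trace and builds the addr2line invocation against the matching binary. The trace format here is simplified for illustration; real traces vary by platform:

```python
import re

# Hex addresses as they typically appear in a raw stack trace.
ADDR_RE = re.compile(r"0x[0-9a-fA-F]+")

def extract_addresses(trace):
    """Return the raw addresses found in a crash trace."""
    return ADDR_RE.findall(trace)

def addr2line_command(binary, trace):
    """Build an addr2line call: -e picks the binary, -f prints
    function names, -C demangles C++ symbols."""
    return ["addr2line", "-e", binary, "-f", "-C"] + extract_addresses(trace)

trace = "#0 0x00400d2a\n#1 0x00400f51\n"
print(addr2line_command("./myapp", trace))
```

Wrapping this in a script that also fetches the right release binary is what turns symbolization from a chore into a one-liner.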
Monitoring and Statistics
It aids hackability if developers can quickly understand what effect their changes have in the real world. For this, monitoring tools such as Stackdriver for Google Cloud are excellent. If you’re running a service, such tools can help you keep track of request volumes and error rates. This way you can quickly detect that 30% increase in request errors, and roll back that bad code change, before it does too much damage. It also makes it possible to debug your service in production without disrupting it.
System Under Test (SUT) Size
Tests and debugging go hand in hand: it’s a lot easier to target a piece of code in a test than in the whole application. Small and focused tests aid debuggability, because when a test breaks there isn’t an enormous SUT to look for errors in. These tests will also be less flaky. This article discusses this fact at length.

Figure 2. The smaller the SUT, the more valuable the test.
You should try to keep the above in mind, particularly when writing integration tests. If you’re testing a mobile app with a server, what bugs are you actually trying to catch? If you’re trying to ensure the app can still talk to the server (i.e. catching protocol drift bugs), you should not involve the UI of the app. That’s not what you’re testing here. Instead, break out the signaling part of the app into a library, test that directly against your local server stack, and write separate tests for the UI that only test the UI.

Smaller SUTs also greatly aid test speed, since there’s less to build, less to bring up and less to keep running. In general, strive to keep the SUT as small as possible through whatever means necessary. It will keep the tests smaller, faster and more focused.
Sources
Figure 1: Courtesy of the Naval Surface Warfare Center, Dahlgren, VA., 1988. U.S. Naval Historical Center Online Library Photograph NH 96566-KN, Public Domain, https://commons.wikimedia.org/w/index.php?curid=165211

(Continue to Pillar 3: Infrastructure)
Categories: Testing & QA

Hackable Projects - Pillar 1: Code Health

Google Testing Blog - Thu, 11/10/2016 - 20:33
By: Patrik Höglund
Introduction
Software development is difficult. Projects often evolve over several years, under changing requirements and shifting market conditions, impacting developer tools and infrastructure. Technical debt, slow build systems, poor debuggability, and increasing numbers of dependencies can weigh down a project. The developers get weary, and cobwebs accumulate in dusty corners of the code base.

Fighting these issues can be taxing and feel like a quixotic undertaking, but don’t worry — the Google Testing Blog is riding to the rescue! This is the first article of a series on “hackability” that identifies some of the issues that hinder software projects and outlines what Google SETIs usually do about them.

According to Wiktionary, hackable is defined as:
hackable ‎(comparative more hackable, superlative most hackable)
  1. (computing) That can be hacked or broken into; insecure, vulnerable. 
  2. That lends itself to hacking (technical tinkering and modification); moddable.

Obviously, we’re not going to talk about making your product more vulnerable (by, say, rolling your own crypto or something equally unwise); instead, we will focus on the second definition, which essentially means “something that is easy to work on.” This has become the main focus for SETIs at Google as the role has evolved over the years.
In Practice
In a hackable project, it’s easy to try things and hard to break things. Hackability means fast feedback cycles that offer useful information to the developer.

This is hackability:
  • Developing is easy
  • Fast build
  • Good, fast tests
  • Clean code
  • Easy running + debugging
  • One-click rollbacks
In contrast, what is not hackability?
  • Broken HEAD (tip-of-tree)
  • Slow presubmit (i.e. checks running before submit)
  • Builds take hours
  • Incremental build/link > 30s
  • Flaky tests
  • Can’t attach debugger
  • Logs full of uninteresting information
The Three Pillars of Hackability
There are a number of tools and practices that foster hackability. When everything is in place, it feels great to work on the product. Basically no time is spent on figuring out why things are broken, and all time is spent on what matters, which is understanding and working with the code. I believe there are three main pillars that support hackability. If one of them is absent, hackability will suffer. They are:

Pillar 1: Code Health
“I found Rome a city of bricks, and left it a city of marble.”
   -- Augustus
Keeping the code in good shape is critical for hackability. It’s a lot harder to tinker and modify something if you don’t understand what it does (or if it’s full of hidden traps, for that matter).
Tests
Unit and small integration tests are probably the best things you can do for hackability. They’re a support you can lean on while making your changes, and they contain lots of good information on what the code does. It isn’t hackability to boot a slow UI and click buttons on every iteration to verify your change worked - it is hackability to run a sub-second set of unit tests! In contrast, end-to-end (E2E) tests generally help hackability much less (and can even be a hindrance if they, or the product, are in sufficiently bad shape).

Figure 1: the Testing Pyramid.
I’ve always been interested in how you actually make unit tests happen in a team. It’s about education. Writing a product such that it has good unit tests is actually a hard problem. It requires knowledge of dependency injection, testing/mocking frameworks, language idioms and refactoring. The difficulty varies by language as well. Writing unit tests in Go or Java is quite easy and natural, whereas in C++ it can be very difficult (and it isn’t exactly ingrained in C++ culture to write unit tests).

It’s important to educate your developers about unit tests. Sometimes, it is appropriate to lead by example and help review unit tests as well. You can have a large impact on a project by establishing a pattern of unit testing early. If tons of code gets written without unit tests, it will be much harder to add unit tests later.

What if you already have tons of poorly tested legacy code? The answer is refactoring and adding tests as you go. It’s hard work, but each line you add a test for is one more line that is easier to hack on.
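To make this concrete, here is a minimal Python sketch of the kind of sub-second unit test described above (the clamp function and its tests are invented for illustration):

```python
import unittest

def clamp(value, low, high):
    """Clamp value to the inclusive range [low, high]."""
    return max(low, min(value, high))

class ClampTest(unittest.TestCase):
    def test_value_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(42, 0, 10), 10)
```

A suite like this runs in milliseconds, so it can be part of every edit-build-test iteration rather than a chore saved for the end.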
Readable Code and Code Review
At Google, “readability” is a special committer status that is granted per language (C++, Go, Java and so on). It means that a person not only knows the language and its culture and idioms well, but also can write clean, well tested and well structured code. Readability literally means that you’re a guardian of Google’s code base and should push back on hacky and ugly code. The use of a style guide enforces consistency, and code review (where at least one person with readability must approve) ensures the code upholds high quality. Engineers must take care to not depend too much on “review buddies” here but really make sure to pull in the person that can give the best feedback.

Requiring code reviews naturally results in small changes, as reviewers often get grumpy if you dump huge changelists in their lap (at least if reviewers are somewhat fast to respond, which they should be). This is a good thing, since small changes are less risky and are easy to roll back. Furthermore, code review is good for knowledge sharing. You can also do pair programming if your team prefers that (a pair-programmed change is considered reviewed and can be submitted when both engineers are happy). There are multiple open-source review tools out there, such as Gerrit.

Nice, clean code is great for hackability, since you don’t need to spend time to unwind that nasty pointer hack in your head before making your changes. How do you make all this happen in practice? Put together workshops on, say, the SOLID principles, unit testing, or concurrency to encourage developers to learn. Spread knowledge through code review, pair programming and mentoring (such as with the Readability concept). You can’t just mandate higher code quality; it takes a lot of work, effort and consistency.
Presubmit Testing and Lint
Consistently formatted source code aids hackability. You can scan code faster if its formatting is consistent. Automated tooling also aids hackability. It really doesn’t make sense to waste any time on formatting source code by hand. You should be using tools like gofmt, clang-format, etc. If the patch isn’t formatted properly, you should see something like this (example from Chrome):

$ git cl upload
Error: the media/audio directory requires formatting. Please run
git cl format media/audio.

Source formatting isn’t the only thing to check. In fact, you should check pretty much anything you have as a rule in your project. Should other modules not depend on the internals of your modules? Enforce it with a check. Are there already inappropriate dependencies in your project? Whitelist the existing ones for now, but at least block new bad dependencies from forming. Should our app work on Android 16 phones and newer? Add linting, so we don’t use level 17+ APIs without gating at runtime. Should your project’s VHDL code always place-and-route cleanly on a particular brand of FPGA? Invoke the layout tool in your presubmit and stop the submit if the layout process fails.
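As a sketch of what such a dependency check might look like in presubmit (the directory name and rule are hypothetical, in the spirit of the examples above):

```python
import re

# Hypothetical rule: code outside media/audio must not include its internals.
FORBIDDEN_INCLUDE = re.compile(r'^\s*#include\s+"media/audio/internal/')

def check_no_internal_includes(changed_files):
    """Return presubmit error messages, one per forbidden include.

    changed_files maps file paths to their new contents.
    """
    errors = []
    for path, contents in changed_files.items():
        if path.startswith("media/audio/"):
            continue  # The module is allowed to use its own internals.
        for lineno, line in enumerate(contents.splitlines(), start=1):
            if FORBIDDEN_INCLUDE.match(line):
                errors.append(
                    f"{path}:{lineno}: must not depend on media/audio internals")
    return errors
```

A check like this runs in milliseconds per file, which is exactly the kind of cheap, high-value gate that presubmit real estate is for.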

Presubmit is the most valuable real estate for aiding hackability. You have limited space in your presubmit, but you can get tremendous value out of it if you put the right things there. You should stop all obvious errors here.

It aids hackability to have all this tooling so you don’t have to waste time going back and breaking things for other developers. Remember you need to maintain the presubmit well; it’s not hackability to have a slow, overbearing or buggy presubmit. Having a good presubmit can make it tremendously more pleasant to work on a project. We’re going to talk more in later articles on how to build infrastructure for submit queues and presubmit.
Single Branch And Reducing Risk
Having a single branch for everything, and putting risky new changes behind feature flags, aids hackability since branches and forks often amass tremendous risk when it’s time to merge them. Single branches smooth out the risk. Furthermore, running all your tests on many branches is expensive. However, a single branch can have negative effects on hackability if Team A depends on a library from Team B and gets broken by Team B a lot. Having some kind of stabilization on Team B’s software might be a good idea there. This article covers such situations, and how to integrate often with your dependencies to reduce the risk that one of them will break you.
Loose Coupling and Testability
Tightly coupled code is terrible for hackability. To take the most ridiculous example I know: I once heard of a computer game where a developer changed a ballistics algorithm and broke the game’s chat. That’s hilarious, but hardly intuitive for the poor developer that made the change. A hallmark of loosely coupled code is that it’s upfront about its dependencies and behavior and is easy to modify and move around.

Loose coupling, coherence and so on is really about design and architecture and is notoriously hard to measure. It really takes experience. One of the best ways to convey such experience is through code review, which we’ve already mentioned. Education on the SOLID principles, rules of thumb such as tell-don’t-ask, discussions about anti-patterns and code smells are all good here. Again, it’s hard to build tooling for this. You could write a presubmit check that forbids methods longer than 20 lines or cyclomatic complexity over 30, but that’s probably shooting yourself in the foot. Developers would consider that overbearing rather than a helpful assist.

SETIs at Google are expected to give input on a product’s testability. A few well-placed test hooks in your product can enable tremendously powerful testing, such as serving mock content for apps (this enables you to meaningfully test app UI without contacting your real servers, for instance). Testability can also have an influence on architecture. For instance, it’s a testability problem if your servers are built like a huge monolith that is slow to build and start, or if it can’t boot on localhost without calling external services. We’ll cover this in the next article.
Aggressively Reduce Technical Debt
It’s quite easy to add a lot of code and dependencies and call it a day when the software works. New projects can do this without many problems, but as the project becomes older it becomes a “legacy” project, weighed down by dependencies and excess code. Don’t end up there. It’s bad for hackability to have a slew of bug fixes stacked on top of unwise and obsolete decisions, and understanding and untangling the software becomes more difficult.

What constitutes technical debt varies by project and is something you need to learn from experience. It simply means the software isn’t in optimal form. Some types of technical debt are easy to classify, such as dead code and barely-used dependencies. Some types are harder to identify, such as when the architecture of the project has grown unfit to the task from changing requirements. We can’t use tooling to help with the latter, but we can with the former.

I already mentioned that dependency enforcement can go a long way toward keeping people honest. It helps make sure people are making the appropriate trade-offs instead of just slapping on a new dependency, and it requires them to explain to a fellow engineer when they want to override a dependency rule. This can prevent unhealthy dependencies like circular dependencies, abstract modules depending on concrete modules, or modules depending on the internals of other modules.

There are various tools available for visualizing dependency graphs as well. You can use these to get a grip on your current situation and start cleaning up dependencies. If you have a huge dependency you only use a small part of, maybe you can replace it with something simpler. If an old part of your app has inappropriate dependencies and other problems, maybe it’s time to rewrite that part.
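For example, a small script can walk a dependency graph and flag cycles before they take root. A sketch (the graph representation is assumed for illustration, not taken from any particular tool):

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of modules, or None.

    deps maps each module name to a list of modules it depends on.
    """
    visiting, done = set(), set()
    stack = []

    def visit(module):
        visiting.add(module)
        stack.append(module)
        for dep in deps.get(module, ()):
            if dep in visiting:
                # Back edge: slice the cycle out of the current path.
                return stack[stack.index(dep):] + [dep]
            if dep not in done:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        visiting.discard(module)
        done.add(module)
        return None

    for module in list(deps):
        if module not in done:
            cycle = visit(module)
            if cycle:
                return cycle
    return None
```

Wired into presubmit, a check like this turns “please don’t add circular dependencies” from a convention into an enforced rule.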

(Continue to Pillar 2: Debuggability)
Categories: Testing & QA

Hackable Projects - Pillar 3: Infrastructure

Google Testing Blog - Thu, 11/10/2016 - 20:12
By: Patrik Höglund

This is the third article in our series on Hackability; also see the first and second article.

We have seen in our previous articles how Code Health and Debuggability can make a project much easier to work on. The third pillar is a solid infrastructure that gets accurate feedback to your developers as fast as possible. Speed is going to be a major theme in this article, and we’ll look at a number of things you can do to make your project easier to hack on.

Build Systems Speed
Question: What’s a change you’d really like to see in our development tools?

“I feel like this question gets asked almost every time, and I always give the same answer:
 I would like them to be faster.”
        -- Ian Lance Taylor

Replace make with ninja. Use the gold linker instead of ld. Detect and delete dead code in your project (perhaps using coverage tools). Reduce the number of dependencies, and enforce dependency rules so new ones are not added lightly. Give the developers faster machines. Use distributed build, which is available with many open-source continuous integration systems (or use Google’s system, Bazel!). You should do everything you can to make the build faster.

Figure 1: “Cheetah chasing its prey” by Malene Thyssen.

Why is that? There’s a tremendous difference in hackability if it takes 5 seconds to build and test versus one minute, or even 20 minutes, to build and test. Slow feedback cycles kill hackability, for many reasons:
  • Build and test times longer than a handful of seconds cause many developers’ minds to wander, taking them out of the zone.
  • Excessive build or release times* make tinkering and refactoring much harder. All developers have a threshold at which they start hedging (e.g. “I’d like to remove this branch, but I don’t know if I’ll break the iOS build”), which causes refactoring to not happen.

* The worst I ever heard of was an OS that took 24 hours to build!

How do you actually make fast build systems? There are some suggestions in the first paragraph above, but the best general suggestion I can make is to have a few engineers on the project who deeply understand the build systems and have the time to continuously improve them. The main axes of improvement are:
  1. Reduce the amount of code being compiled.
  2. Replace tools with faster counterparts.
  3. Increase processing power, maybe through parallelization or distributed systems.
Note that there is a big difference between full builds and incremental builds. Both should be as fast as possible, but incremental builds are by far the most important to optimize. The way you tackle the two is different. For instance, reducing the total number of source files will make a full build faster, but it may not make an incremental build faster. 
To get faster incremental builds, in general, you need to make each source file as decoupled as possible from the rest of the code base. The less a change ripples through the codebase, the less work there is to do, right? See “Loose Coupling and Testability” in Pillar 1 for more on this subject. The exact mechanics of dependencies and interfaces depend on the programming language - one of the hardest to get right is, unsurprisingly, C++, where you need to be disciplined with includes and forward declarations to get any kind of incremental build performance.
Build scripts and makefiles should be held to standards as high as the code itself. Technical debt and unnecessary dependencies have a tendency to accumulate in build scripts, because no one has the time to understand and fix them. Avoid this by addressing the technical debt as you go.

Continuous Integration and Presubmit Queues
You should build and run tests on all platforms you release on. For instance, if you release on all the major desktop platforms, but all your developers are on Linux, this becomes extra important. It’s bad for hackability to update the repo, build on Windows, and find that lots of stuff is broken. It’s even worse if broken changes start to stack on top of each other. I think we all know that terrible feeling: when you’re not sure your change is the one that broke things.
At a minimum, you should build and test on all platforms, but it’s even better if you do it in presubmit. The Chromium submit queue does this. It has developed over the years so that a normal patch builds and tests on about 30 different build configurations before commit. This is necessary for the 400-patches-per-day velocity of the Chrome project. Most projects obviously don’t have to go that far. Chromium’s infrastructure is based on BuildBot, but there are many other job scheduling systems depending on your needs.
Figure 2: How a Chromium patch is tested.
As we discussed in Build Systems, speed and correctness are critical here. It takes a lot of ongoing work to keep build, tests, and presubmits fast and free of flakes. You should never accept flakes, since developers very quickly lose trust in flaky tests and systems. Tooling can help a bit with this; for instance, see the Chromium flakiness dashboard.

Test Speed
Speed is a feature, and this is particularly true for developer infrastructure. In general, the longer a test takes to execute, the less valuable it is. My rule of thumb is: if it takes more than a minute to execute, its value is greatly diminished. There are of course some exceptions, such as soak tests or certain performance tests. 
Figure 3. Test value.
If you have tests that are slower than 60 seconds, they better be incredibly reliable and easily debuggable. A flaky test that takes several minutes to execute often has negative value because it slows down all work in the code it covers. You probably want to build better integration tests on lower levels instead, so you can make them faster and more reliable.
If you have many engineers on a project, reducing the time to run the tests can have a big impact. This is one reason why it’s great to have SETIs or the equivalent. There are many things you can do to improve test speed:
  • Sharding and parallelization. Add more machines to your continuous build as your test set or number of developers grows.
  • Continuously measure how long it takes to run one build+test cycle in your continuous build, and have someone take action when it gets slower.
  • Remove tests that don’t pull their weight. If a test is really slow, it’s often because of poorly written wait conditions or because the test bites off more than it can chew (maybe that unit test doesn’t have to process 15000 audio frames, maybe 50 is enough).
  • If you have tests that bring up a local server stack, for instance inter-server integration tests, making your servers boot faster is going to make the tests faster as well. Faster production code is faster to test! See Running on Localhost, in Pillar 2 for more on local server stacks.
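Sharding, the first item above, can be as simple as hashing test names across workers. A toy Python sketch (the test names are invented):

```python
import hashlib

def shard_for(test_name, num_shards):
    """Deterministically assign a test to one of num_shards workers."""
    digest = hashlib.sha256(test_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

tests = [
    "AudioMixerTest.MixesTwoStreams",
    "AudioMixerTest.HandlesEmptyInput",
    "VideoDecoderTest.DecodesKeyframe",
]
# Group the test set by shard index; each worker runs only its own group.
by_shard = {}
for name in tests:
    by_shard.setdefault(shard_for(name, 4), []).append(name)
```

Each worker then runs only its own shard, so wall-clock time drops roughly in proportion to the number of machines, as long as the shards stay balanced.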

Workflow Speed
We’ve talked about fast builds and tests, but the core developer workflows are also important, of course. Chromium undertook a multi-year project to switch from Subversion to Git, partly because Subversion was becoming too slow. You need to keep track of your core workflows as your project grows. Your version control system may work fine for years, but become too slow once the project becomes big enough. Bug search and management must also be robust and fast, since these are systems developers generally touch every day.

Release Often
It aids hackability to deploy to real users as fast as possible. No matter how good your product's tests are, there's always a risk that there's something you haven't thought of. If you’re building a service or web site, you should aim to deploy multiple times per week. For client projects, Chrome’s six-week cycle is a good goal to aim for.
You should invest in infrastructure and tests that give you the confidence to do this - you don’t want to push something that’s broken. Your developers will thank you for it, since it makes their jobs so much easier. By releasing often, you mitigate risk, and developers will rush less to hack late changes in the release (since they know the next release isn’t far off).

Easy Reverts
If you look in the commit log for the Chromium project, you will see that a significant percentage of the commits are reverts of previous commits. In Chromium, bad commits quickly become costly because they impede other engineers, and the high velocity can cause good changes to stack onto bad changes.
Figure 4: Chromium’s revert button.

This is why the policy is “revert first, ask questions later”. I believe a revert-first policy is good for small projects as well, since it creates a clear expectation that the product/tools/dev environment should be working at all times (and if it doesn’t, a recent change should probably be reverted).
It has a wonderful effect when a revert is simple to make. You can suddenly make speculative reverts if a test went flaky or a performance test regressed. It follows that if a patch is easy to revert, so is the inverse (i.e. reverting the revert or relanding the patch). So if you were wrong and that patch wasn’t guilty after all, it’s simple to re-land it again and try reverting another patch. Developers might initially balk at this (because it can’t possibly be their patch, right?), but they usually come around when they realize the benefits.
For many projects, a revert can simply be

git revert 9fbadbeef
git push origin master

If your project (wisely) involves code review, it will behoove you to add something like Chromium’s revert button that I mentioned above. The revert button will create a special patch that bypasses review and tests (since we can assume a clean revert takes us back to a more stable state rather than the opposite). See Pillar 1 for more on code review and its benefits.
For some projects, reverts are going to be harder, especially if you have a slow or laborious release process. Even if you release often, you could still have problems if a revert involves state migrations in your live services (for instance when rolling back a database schema change). You need to have a strategy to deal with such state changes. 

Reverts must always put you back to safer ground, and everyone must be confident they can safely revert. If not, you run the risk of massive fire drills and lost user trust if a bad patch makes it through the tests and you can’t revert it.

Performance Tests: Measure Everything
Is it critical that your app starts up within a second? Should your app always render at 60 fps when it’s scrolled up or down? Should your web server always serve a response within 100 ms? Should your mobile app be smaller than 8 MB? If so, you should make a performance test for that. Performance tests aid hackability since developers can quickly see how their change affects performance and thus prevent performance regressions from making it into the wild.
You should run your automated performance tests on the same device; all devices are different, and this will be reflected in the numbers. This is fairly straightforward if you have a decent continuous integration system that runs tests sequentially on a known set of worker machines. It’s harder if you need to run on physical phones or tablets, but it can be done.
A test can be as simple as invoking a particular algorithm and measuring the time it takes to execute it (median and 90th percentile, say, over N runs).
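A sketch of such a measurement, assuming the algorithm under test can be called as a plain Python function:

```python
import statistics
import time

def measure(algorithm, runs=50):
    """Run algorithm repeatedly; report median and 90th-percentile latency (ms)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        algorithm()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    median = statistics.median(samples)
    p90 = samples[int(0.9 * (len(samples) - 1))]
    return median, p90

# Example: time a stand-in workload.
median_ms, p90_ms = measure(lambda: sorted(range(10000), reverse=True))
print(f"median={median_ms:.2f}ms p90={p90_ms:.2f}ms")
```

The median tells you the typical case, while the 90th percentile catches the occasional slow run that a single average would hide.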
Figure 5: A VP8-in-WebRTC regression (bug) and its fix displayed in a Catapult dashboard.

Write your test so it outputs performance numbers you care about. But what to do with those numbers? Fortunately, Chrome’s performance test framework has been open-sourced, which means you can set up a dashboard, with automatic regression monitoring, with minimal effort. The test framework also includes the powerful Telemetry framework which can run actions on web pages and Android apps and report performance results. Telemetry and Catapult are battle-tested by use in the Chromium project and are capable of running on a wide set of devices, while getting the maximum amount of usable performance data out of the devices.

Figure 1: By Malene Thyssen (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

Categories: Testing & QA

Book of the Month

Herding Cats - Glen Alleman - Thu, 11/10/2016 - 19:04

It's been a busy month for reading. I've been on the road, so I try to focus on reading rather than working while on the plane. Here are three books underway that are related to the programs we work on.

Practical Guide to Distributed Scrum

This book contains processes for improving the performance of Scrum teams when they are distributed.

Two of my clients are in this situation. Mainly because the cost of living near the office is prohibitive and travel distances are the worst in Metro DC.

The book shows how to develop User Stories using a distributed team, engaging in effective release planning, managing cultural and language differences, resolving dependencies, and using remote software processes.

Logically Fallacious

It seems many of the idea debates we get into are based on logical fallacies. 

Here's a nice book on how this happens and how to address the issues when it comes up.


I've saved the best for last.

This is a MUST READ book for anyone working with agile or thinking about it.

With the Logically Fallacious book in hand, Agile! can be read in parallel.

There is so much crap out there around Agile, this book is mandatory reading. 

From the nonsense of #NoEstimates to simply bad advice, Bertrand calls it out, along with all the good things about agile.


Related articles
  • Five Estimating Pathologies and Their Corrective Actions
  • Taxonomy of Logical Fallacies
  • Essential Reading List for Managing Other People's Money
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
  • Mike Cohn's Agile Quotes
Categories: Project Management

Chrome Dev Summit. Now Live Streaming

Google Code Blog - Thu, 11/10/2016 - 18:59
Posted by Paul Kinlan, Chrome Developer Relations

Good morning! Only one minute to go until Darin Fisher, VP of Chrome, kicks off this year's keynote at Chrome Dev Summit 2016. Join us as we take a look at the latest web advancements with over 20 sessions presented by Chrome engineers. We're live streaming all sessions and posting videos throughout the next two days.

Categories: Programming

Look out for our bi-annual Google Play Developer Sentiment Survey, coming soon

Android Developers Blog - Thu, 11/10/2016 - 18:54

Posted by Dorothy Kelly, Head of Developer Insights, Google Play Developer Marketing

Core to our mission, we're always focused on the user and delivering the best experience possible. This same principle underlies how Google Play works with developers, as we aim to provide you with the best experience working with us and our products. We can only do this through understanding what you need and how we can improve. We ran our first Developer Sentiment Survey in July this year, and heard feedback from over 4,000 developers across 15 countries. This bi-annual survey gathers feedback at scale from the thousands of developers around the world who publish their apps and games on Google Play. While it was great to hear how Google Play is working for you, we also learned how we should improve to enable you to build even more successful businesses.

This month, you may receive an email from Google Play inviting you to participate in the next Google Play Developer Sentiment Survey. This invitation is sent to a selection of developers who have opted in to receive Research contacts in the Developer Console, or to those who are directly managed by Google. You can review and update your preferences in the Developer Console to ensure you get the opportunity to be invited to participate in future surveys.

In this survey we ask you to give us feedback across a number of areas:

  • Develop: Testing, publishing and launching your app or game.
  • Grow: Discovery and marketing of your app or game.
  • Engage: Distributing to and engaging with your target market.
  • Earn: Pricing and Payment methods.
  • Getting Support: Accessing the information and support you need when you have a question.

We use your feedback to decide what we need to focus on next to help you grow your app or game business. Initiatives announced at I/O 2016, such as improved betas, prelaunch reporting, the Developer Console app, and pricing templates, were all developed in response to feedback from developers like you.

If you do receive an invitation to participate in this survey, we really appreciate you taking the time to complete it. We value your feedback and want to act on it to help you create apps and games that delight your users, and help you build a successful business anywhere in the world.

Categories: Programming

Introducing the Google Slides API

Google Code Blog - Wed, 11/09/2016 - 19:41
Originally posted on G Suite Developers Blog

Posted by Wesley Chun, Developer Advocate, G Suite

At Google I/O 2016, we gave developers a preview of the Google Slides API. Since then, the gears have been cranking at full speed, and we've been working with various early-access partners and developers to showcase what you can do with it. Today, we're happy to announce that the Slides API v1 is now generally available and represents the first time that developers have ever been able to programmatically access Slides!

The Slides API breaks new ground, changing the way that presentations are created. No longer do they require manual creation by users on their desktops or mobile devices. Business data on inventory items like retail merchandise, homes/property, hotels/lodging, restaurants/menus, venues/events, and other "cataloged" assets can be instantly turned into presentations based on pre-existing slide templates. Traditionally, the sheer amount of data (and of course time[!]) that went into creating these slide decks made it unwieldy if done by hand. Applications leveraging the API can easily generate presentations like these, customized as desired, and in short order.

Developers use the API by crafting a JSON payload for each request. (We recommend you batch multiple commands together to send to the API.) You can think of these as actions one can perform from the Slides user interface but available programmatically. To give you an idea of how the new API works, here are what some requests look like for several common operations:

// create new slide (title & body layout)
"createSlide": {
  "slideLayoutReference": {
    "predefinedLayout": "TITLE_AND_BODY"
  }
}

// insert text into textbox
"insertText": {
  "objectId": titleID,
  "text": "Hello World!"
}

// add bullets to text paragraphs
"createParagraphBullets": {
  "objectId": shapeID,
  "textRange": {
    "type": "ALL"
  }
}

// replace text "variables" with image
"replaceAllShapesWithImage": {
  "imageUrl": imageURL,
  "replaceMethod": "CENTER_INSIDE",
  "containsText": {
    "text": "{{COMPANY_LOGO}}"
  }
}

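Assembling requests like these into the body of a single batchUpdate call might look like the following Python sketch (the object ID and image URL are placeholders):

```python
def build_batch_update(requests):
    """Wrap a list of individual requests into a batchUpdate body."""
    return {"requests": requests}

requests = [
    {"createSlide": {
        "slideLayoutReference": {"predefinedLayout": "TITLE_AND_BODY"}}},
    {"insertText": {"objectId": "title_0", "text": "Hello World!"}},
    {"replaceAllShapesWithImage": {
        "imageUrl": "https://example.com/logo.png",
        "replaceMethod": "CENTER_INSIDE",
        "containsText": {"text": "{{COMPANY_LOGO}}"}}},
]
body = build_batch_update(requests)
# With one of the Google APIs client libraries, this body would then be sent
# via the presentations.batchUpdate method of the Slides service.
```

Batching like this means one round trip to the API instead of one per operation, which is why the post recommends it.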
If you're interested in seeing what developers have already built using the API, take a look at our initial set of partner integrations by Conga, Trello, Lucidchart, Zapier and more, as described in detail in our G Suite blog post.

To help you get started, check out the DevByte above from our new series dedicated to G Suite developers. In the video, we demonstrate how to take "variables" or placeholders in a template deck and use the API to generate new decks replacing those proxies with the desired text or image. Want to dive deeper into its code sample? Check out this blogpost. If you're not a Python developer, it'll be your pseudocode as you can use any language supported by the Google APIs Client Libraries. Regardless of your development environment, you can use similar "scaffolding" to generate many presentations with varying content for your users. Stay tuned for more videos that highlight other Slides API features.

The Slides API is available to projects in your Google Developers console today. Developers can find out more in the official documentation which features an API overview plus Quickstarts, sample code in multiple languages and environments, to bootstrap your next project. We look forward to seeing all the amazing slide deck generating applications you build with our first ever API!

Categories: Programming

Celebrating TensorFlow’s First Year

Google Code Blog - Wed, 11/09/2016 - 18:18
Originally posted on Google Research Blog
Posted by Zak Stone, Product Manager for TensorFlow, on behalf of the TensorFlow team
It has been an eventful year since the Google Brain Team open-sourced TensorFlow to accelerate machine learning research and make technology work better for everyone. There has been an amazing amount of activity around the project: more than 480 people have contributed directly to TensorFlow, including Googlers, external researchers, independent programmers, students, and senior developers at other large companies. TensorFlow is now the most popular machine learning project on GitHub. With more than 10,000 commits in just twelve months, we’ve made numerous performance improvements, added support for distributed training, brought TensorFlow to iOS and Raspberry Pi, and integrated TensorFlow with widely-used big data infrastructure. We’ve also made TensorFlow accessible from Go, Rust and Haskell, released state-of-the-art image classification models, and answered thousands of questions on GitHub, StackOverflow and the TensorFlow mailing list along the way.

At Google, TensorFlow supports everything from large-scale product features to exploratory research. We recently launched major improvements to Google Translate using TensorFlow (and Tensor Processing Units, which are special hardware accelerators for TensorFlow). Project Magenta is working on new reinforcement learning-based models that can produce melodies, and a visiting PhD student recently worked with the Google Brain team to build a TensorFlow model that can automatically interpolate between artistic styles. DeepMind has also decided to use TensorFlow to power all of their research – for example, they recently produced fascinating generative models of speech and music based on raw audio.

We’re especially excited to see how people all over the world are using TensorFlow. For example:
  • Australian marine biologists are using TensorFlow to find sea cows in tens of thousands of hi-res photos to better understand their populations, which are under threat of extinction.
  • An enterprising Japanese cucumber farmer trained a model with TensorFlow to sort cucumbers by size, shape, and other characteristics.
  • Radiologists have adapted TensorFlow to identify signs of Parkinson’s disease in medical scans.
  • Data scientists in the Bay Area have rigged up TensorFlow and the Raspberry Pi to keep track of the Caltrain.

We’re committed to making sure TensorFlow scales all the way from research to production and from the tiniest Raspberry Pi all the way up to server farms filled with GPUs or TPUs. But TensorFlow is more than a single open-source project – we’re doing our best to foster an open-source ecosystem of related software and machine learning models around it:
  • The TensorFlow Serving project simplifies the process of serving TensorFlow models in production.
  • TensorFlow “Wide and Deep” models combine the strengths of traditional linear models and modern deep neural networks.
  • For those who are interested in working with TensorFlow in the cloud, Google Cloud Platform recently launched Cloud Machine Learning, which offers TensorFlow as a managed service.

Furthermore, TensorFlow’s repository of models continues to grow with contributions from the community, with more than 3000 TensorFlow-related repositories listed on GitHub alone! To participate in the TensorFlow community, you can follow our new Twitter account (@tensorflow), find us on GitHub, ask and answer questions on StackOverflow, and join the community discussion list.

Thanks very much to all of you who have already adopted TensorFlow in your cutting-edge products, your ambitious research, your fast-growing startups, and your school projects; special thanks to everyone who has contributed directly to the codebase. In collaboration with the global machine learning community, we look forward to making TensorFlow even better in the years to come!
Categories: Programming

Adding TV Channels to Your App with the TIF Companion Library

Android Developers Blog - Wed, 11/09/2016 - 17:45

Posted by Nick Felker and Sachit Mishra, Developer Programs Engineers

The TV Input Framework (TIF) on Android TV makes it easy for third-party app developers to create their own TV channels with any type of linear media. It introduces a new way for apps to engage with users with a high-quality channel surfing experience, and it gives users a single interface to browse and watch all of their channels.

To help developers get started with building TV channels, we have created the TV Input Framework Companion Library, which includes a number of helper methods and classes to make the development process as easy as possible.

This library provides standard classes for setting up a background task that updates the program guide and an interface that helps integrate your media player with the playback controller, and it supports the new TV Recording APIs available in Android Nougat. It includes everything you need to start showing your content in your Android TV's live TV app.

(Screenshot from the android-tv-sample-inputs sample.)

To get started, take a look at the sample app and documentation. The sample demonstrates how to extend this library to create custom channels and manage video playback. Developers can immediately get started with the sample app by updating the XMLTV file with their own content or dynamically creating channels in the SampleJobService.

You can include this library in your app by copying the library directory from the sample into your project root directory. Then, add the following to your project's settings.gradle file:

include ':library'

In your app's build.gradle file, add the following to your dependencies:

compile project(':library')

Android TV continues to grow, and whether your app has on-demand or live media, TIF is a great way to keep users engaged with your content. One partner, for example, Haystack TV, recently integrated TIF into their app, and it now accounts for 16% of watch time for new users on Android TV.

Check out our TV developer site to learn more about Android TV, and join our developer community on Google+ at g.co/androidtvdev to discuss this library and other topics with TV developers.

Categories: Programming

The Software Development Knights Who Say “No!”

From the Editor of Methods & Tools - Wed, 11/09/2016 - 17:13
In the movie “Monty Python and the Holy Grail“, King Arthur and the Knights of the Round Table had to face the obstacle of the knights who say “ni!” to travel further in their quest for the Grail. In the modern software development world, it seems that software developers can instead follow the knights that […]

Estimating is a Learned Skill

Herding Cats - Glen Alleman - Wed, 11/09/2016 - 05:25

Estimating is a learned skill, used for any purpose from everyday life to the management of projects. When I left for the airport this morning to catch my flight to a customer site, I estimated, given the conditions, how much time I needed to get to my favorite parking spot at DIA. When I landed in Boston, I asked the taxi driver how long it would take to get back to the airport on Wednesday at 3:00 PM. He knew the answer: from my location at the office in the North End to the SWA terminal, between 7 and 12 minutes.

The same estimating process applies to the multi-billion dollar projects we work on, and to the Scrum development processes on those projects.
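That everyday process has a teachable mechanical core. One common starting point is the three-point (PERT) estimate, which combines optimistic, most likely, and pessimistic values into an expected value and a spread. A minimal sketch in Python, using the taxi ride's 7-to-12-minute interval as illustrative data (the 9-minute most likely value is an assumption for illustration):

```python
# Three-point (PERT / beta) estimate: blends optimistic (O), most likely (M),
# and pessimistic (P) values into an expected value and a rule-of-thumb spread.

def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # common approximation of spread
    return expected, std_dev

# Illustrative data: the taxi ride quoted as between 7 and 12 minutes,
# with 9 minutes assumed (hypothetically) as the most likely duration.
expected, sd = pert_estimate(7, 9, 12)
print(f"expected {expected:.1f} min, std dev {sd:.2f} min")
```

The point is not the arithmetic; it is that the estimate carries an explicit statement of its own uncertainty, which is what makes it usable for decision making.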

Here are some materials that provide the tools and processes needed to learn how to estimate. Google will find the ones for which no URL is provided.

So when you hear we can't estimate, you'll know better. And when you hear estimates are a waste, you'll realize that person must work on a de minimis project, where those paying have no need to know how much it will cost, when the project will be done, and what Capabilities they'll get for that time and money before the time and money run out.

The primary purpose of software estimation is not to predict a project’s outcome; it is to determine whether a project’s targets are realistic enough to allow the project to be controlled to meet them ‒ Steve McConnell
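One way to act on that purpose is to carry ranges, not points, through the plan. A simple Monte Carlo simulation over per-task duration ranges yields a distribution of total duration, from which confidence levels can be read off. A minimal sketch (all task names and ranges are hypothetical):

```python
import random

# Monte Carlo schedule estimate: each task has a (low, high) duration range
# in days, sampled here from a triangular distribution (mode at the midpoint).
tasks = {"design": (5, 15), "build": (20, 40), "test": (10, 25)}

def simulate(tasks, trials=10_000, seed=42):
    rng = random.Random(seed)  # fixed seed for repeatability
    totals = []
    for _ in range(trials):
        total = sum(rng.triangular(lo, hi) for lo, hi in tasks.values())
        totals.append(total)
    totals.sort()
    return totals

totals = simulate(tasks)
p50 = totals[len(totals) // 2]          # median outcome
p80 = totals[int(len(totals) * 0.8)]    # 80th-percentile outcome
print(f"50% confidence: {p50:.0f} days; 80% confidence: {p80:.0f} days")
```

A plan committed at the 80th percentile rather than the median is the kind of controllable target the quote above describes.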

  • “Believing is Seeing: Confirmation Bias Studies in Software Engineering,” Magne Jørgensen and Efi Papatheocharous, 41st Euromicro Conference on Software Engineering and Advanced Applications (SEAA)
  • “Numerical anchors and their strong effects on software development effort estimates,” Erik Løhrea, and Magne Jørgensen, Simula Research Laboratory, Oslo.
  • “Review on Traditional and Agile Cost Estimation Success Factor in Software Development Project,” Zulkefli Mansor, Saadiah Yahya, Noor Habibah Hj Arshad, International Journal on New Computer Architectures and Their Applications (IJNCAA) 1(3): 942–952.
  • “Release Planning & Buffered MoSCoW Rules,” Dr. Eduardo Miranda, Institute for Software Research, ASSE 2013 ‐ 14th Argentine Symposium on Software Engineering / 42 JAIIO (Argentine Conference on Informatics), September 16th, 2013, Cordoba, Argentina
  • “Fixed price without fixed specification,” Magne Jørgensen, Simula Research Laboratory, 15 March 2016.
  • “The Use of Precision of Software Development Effort Estimates to Communicate Uncertainty,” Magne Jørgensen, Software Quality Days. The Future of Systems-and Software Development. Springer International Publishing, 2016.
  • “The Influence of Selection Bias on Effort Overruns in Software Development Projects,” Magne Jørgensen, Simula Research Laboratory & Institute of Informatics, University of Oslo.
  • “Software effort estimation terminology: The tower of Babel,” Stein Grimstad, Magne Jørgensen, Kjetil Moløkken-Østvold, Information and Software Technology 48 (2006) 302–310.
  • “Planning and Executing Time-Bound Projects,” Eduardo Miranda, IEEE Computer, March 2002, pp. 73 ‒ 78.
  • “When 90% Confidence Intervals are 50% Certain: On the Credibility of Credible Intervals,” Karl Halvor Teigen and Magne Jørgensen, Applied Cognitive Psychology, 19: 455–475 (2005).
  • “Software quality measurement,” Magne Jørgensen, Advances in Engineering Software 30 (1999) 907–912.
  • “Group Processes in Software Effort Estimation,” Kjetil Moløkken-østvold and Magne Jørgensen, Empirical Software Engineering, 9, 315–334, 2004.
  • “Story Point Estimating,” Richard Carlson, ALEA, Agile and Lean Associates, 2013.
  • “Project Estimation in the Norwegian Software Industry – A Summary,” Kjetil Moløkken, Magne Jørgensen, Sinan S. Tanilkan, Hans Gallis, Anette C. Lien, and Siw E. Hove, Simula Research Laboratory.
  • “Software Estimation using a Combination of Techniques,” Klaus Nielsen, PM Virtual Library, 2013
  • “An Effort Prediction Interval Approach Based on the Empirical Distribution of Previous Estimation Accuracy,” Magne Jørgensen and D. I. K. Sjøberg, Simula Research Laboratory, Norway.
  • “Better Sure than Safe? Overconfidence in Judgment Based Software Development Effort Prediction Intervals,” Magne Jørgensen, Karl Halvor Teigen, and Kjetil Moløkken-Østvold, Journal of Systems and Software, February 2004
  • “The Impact of Irrelevant and Misleading Information on Software Development Effort Estimates: A Randomized Controlled Field Experiment,” Magne Jørgensen and Stein Grimstad, IEEE Transactions of Software Engineering, Volume 37, Issue 5, September ‒ October 2011.
  • “The Heisenberg Uncertainty Principle and Its Application to Software,” P. A. Laplante, ACM SIGSOFT Software Engineering Notes, Vol. 15 No. 5 Oct 1990, Page 21.
  • “Experience With the Accuracy of Software Maintenance Task Effort Prediction Models,” Magne Jørgensen, IEEE Transactions On Software Engineering, Vol. 21, No. 8, August 1995.
  • “Conducting Realistic Experiments in Software Engineering,” Dag I. K. Sjøberg, Bente Anda, Erik Arisholm, Tore Dybå, Magne Jørgensen, Amela Karahasanovic, Espen F. Koren and Marek Vokác, International Symposium on Empirical Software Engineering, 2002.
  • “Forecasting of software development work effort: Evidence on expert judgement and formal models,” Magne Jørgensen, International Journal of Forecasting 23(3) pp. 449-462, July 2004.
  • “A Systematic Review of Software Development Cost Estimation Studies,” Magne Jørgensen and Martin Shepperd, IEEE Transactions On Software Engineering, Vol. 33, No. 1, January 2007.
  • “Towards a Fuzzy based Framework for Effort Estimation in Agile Software Development,” Atef Tayh Raslan, Nagy Ramadan Darwish, and Hesham Ahmed Hefny, (IJCSIS) International Journal of Computer Science and Information Security, Vol. 13, No. 1, 2015.
  • “Evaluation of Model Evaluation Criterion for Software Development Effort Estimation,” S. K. Pillai, and M. K. Jeyakumar, International Journal of Electrical, Computer, Energetic, Electronic and Communication Engineering, Vol: 9, No: 1, 2015.
  • “Modern Tools to Support DoD Software Intensive System of Systems Cost Estimation: A DACS State-of-the-Art Report,” August 2007.
  • “Software Effort Estimation with Ridge Regression and Evolutionary Attribute Selection,” Efi Papatheocharous , Harris Papadopoulos and Andreas S. Andreou, 3rd Artificial Intelligence Techniques in Software Engineering Workshop, 7 October, 2010, Larnaca, Cyprus
  • “The Business of Software Estimation Is Not Evil: Reconciling agile approaches and project estimates,” Phillip G. Armour, Communications of the ACM, January 2014, Vol. 57, No. 1.
  • “Analysis of Empirical Software Effort Estimation Models,” Saleem Basha and Dhavachelvan P, (IJCSIS) International Journal of Computer Science and Information Security, Vol. 7, No. 3, 2010.
  • “Empirical Estimation of Hybrid Model: A Controlled Case Study,” Sadaf Un Nisa and M. Rizwan Jameel Qureshi, I.J., Information Technology and Computer Science, 2012, 8, 43–50.
  • “Identification of inaccurate effort estimates in agile software development,” Florian Raith, Ingo Richter, Robert Lindermeier, and Gudrun Klinker, 2013 20th Asia-Pacific Software Engineering Conference (APSEC)
  • “Efficient Indicators to Evaluate the Status of Software Development Effort Estimation inside the Organizations,” Elham Khatibi and Roliana Ibrahim, International Journal of Managing Information Technology (IJMIT) Vol. 4, No. 3, August 2012
  • “Modern Project Management: A New Forecasting Model to Ensure Project Success,” Iman Attarzadeh and Ow Siew Hock, International Conference on Future Computer and Communication, 2009.
  • “Using public domain metrics to estimate software development effort,” Ross Jeffery, Melanie Ruhe, and Isabella Wieczorek, Proceedings. Seventh International Software Metrics Symposium, 2001.
  • “What We Do and Don’t Know about Software Development Effort Estimation,” Magne Jørgensen, IEEE Software, March / April 2014.
  • “A review of studies on expert estimation of software development effort,” Magne Jørgensen, The Journal of Systems and Software 70 (2004) 37–60.
  • “How to Avoid Impact from Irrelevant and Misleading Information when Estimating Software Development Effort,” Magne Jørgensen and Stein Grimstad, Simula Research Laboratory.
  • “Avoiding Irrelevant and Misleading Information When Estimating Development Effort,” Bente Anda, Hege Dreiem, Dag I. K. Sjøberg, and Magne Jørgensen, IEEE Software, Volume 25, Issue 3, May-June, 2008.
  • “Prediction of project outcome: The Application of Statistical Methods to Earned Value Management and Earned Schedule Performance Indexes,” Walt Lipke, Ofer Zwikael, Kym Henderson, and Frank Anbari, International Journal of Project Management, 27, pp. 400-407, 2009
  • “The ROI of Agile VS. Traditional Methods? An Analysis of XP, TDD, Pair Programming, and Scrum (Using Real Options),” Dr. David Rico, http://davidfrico.com/rico08b.pdf
  • “Exploring the ‘Planning Fallacy’: Why People Underestimate Their Task Completion Times.” Roger Buehler, Dale Griffin, and Michael Ross, Journal of Personality and Social Psychology, Vol 67(3), Sep 1994, 366-38.
  • “Estimates, Uncertainty, and Risk,” Barbara Kitchenham and Stephen Linkman, University of Keele, IEEE Software, May / June, 1997,
  • “Software Project Scheduling under Uncertainties,” Intaver Institute Inc.
  • “A Comparison of Software Project Overruns—Flexible versus Sequential Development Models,” Kjetil Moløkken-Østvold and Magne Jørgensen, IEEE Transactions on Software Engineering, Volume 31, Issue 9, September 2005.
  • “Cost Estimating Issues for MAIS Programs Using an Agile Approach for SW Development,” Richard Mabe, 22 September 2015, DoD Agile Meeting: Enhancing Adoption of Agile Software Development in DoD, September 2015, PARCA OSD.
  • “An Empirical Investigation on Effort Estimation in Agile Global Software Development,” Ricardo Britto, Emilia Mendes, and Jurgen Borstler, 2015 IEEE 10th International Conference on Global Software Engineering
  • “Planning, Estimating, and Monitoring Progress in Agile Systems Development Environments,” Suzette S. Johnson, STC 2010.
  • “Improving Subjective Estimates Using Paired Comparisons,” Eduardo Miranda, IEEE Software, January/February, 2001.
  • “Using Performance Indices to Evaluate the Estimate at Completion,” David Christensen, Journal of Cost Analysis and Management, Spring, pp. 17–24.
  • “Reliability Improvement of Major Defense Acquisition Program Cost Estimates—Mapping DoDAF to COSMO,” Ricardo Valerdi, Matthew Dabkowski, and Indrajeet Dixit, Systems Engineering, Volume 18, Issue 4, 2015
  • “Fallacies and biases when adding effort estimates.” Magne Jørgensen, https://www.simula.no/file/simulasimula2762pdf/download
  • “Communication of software cost estimates,” Magne Jørgensen, https://simula.no/file/simulasimula2498pdf/download
  • “Relative Estimation of Software Development Effort: It Matters With What and How You Compare,” Magne Jørgensen, IEEE Software(2013): 74-79.
  • “Reasons for Software Effort Estimation Error: Impact of Respondent Role, Information Collection Approach, and Data Analysis Method,” Magne Jørgensen and Kjetil Moløkken-Østvold, IEEE Transactions On Software Engineering, Vol. 30, No. 12, December 2004.
  • “Use Case Points: An estimation approach,” Gautam Banerjee, https://gl/QcPmYd
  • “Software Cost Estimation Methods: A Review,” Vahid Khatibi and Dayang N. A. Jawawi, Journal of Emerging Trends in Computing and Information Sciences, Volume 2 No. 1.
  • “Software cost estimation,” Chapter 26, Software Engineering, 9th Edition, Ian Sommerville, 2010.
  • “Estimating Development Time and Effort of Software Projects by using a Neuro-Fuzzy Approach,” Venus Marza and Mohammad Teshnehlab, in Advanced Technologies, INTECH Open, October 2009.
  • “Function Points, Use Case Points, Story Points: Observations From a Case Study,” Joe Schofield, Alan W. Armentrout, and Regina M. Trujillo, CrossTalk: The Journal of Defense Software Engineering, May–June 2013.
  • “Advanced Topics in Agile Estimating,” Mike Cohn, Mountain Goat Software
  • “Schedule Assessment Guide: Best Practices for Schedule Assessment,” GAO-16-89G.
  • Agile Estimating and Planning, Mike Cohn, Prentice Hall, 2006
  • “A Bayesian Software Estimating Model Using a Generalized g-Prior Approach,” Sunita Chulani and Bert Steece, Technical Report, USC-CSE-98515
  • “A Model for Software Development Effort and Cost Estimation,” Krishnakumar Pillai and V.S. Sukumaran Nair, IEEE Transactions on Software Engineering, Vol. 23, No. 8, August 1997.
  • “An Alternative to the Rayleigh Curve Model for Software Development Effort,” F. N. Parr, IEEE Transactions On Software Engineering, Vol. SE–6, NO. 3, May1980.
  • Fifty Quick Ideas to Improve Your User Stories, Gojko Adzic and David Evans, http://leanpub.com/50quickideas
  • “The Use of Agile Surveillance Points: An Alternative to Milestone Reviews,” Richard “Dick” Carlson, http://a2zalea.com/wp-content/uploads/2014/02/Agile-Surveillance-Points_20140113.pdf
  • “A Planning Poker Tool for Supporting Collaborative Estimation in Distributed Agile Development,” Fabio Calefato and Filippo Lanubile, ICSEA 2011, The Sixth International Conference on Software Engineering Advances.
  • “An Effort Estimation Model for Agile Software Development,” Shahid Ziauddin, Kamal Tipu, Shahrukh Zia, Advances in Computer Science and its Applications (ACSA) 314 Vol. 2, No. 1, 2012.
  • “Successful Solutions Through Agile Project Management,” ESI International White Paper, 2010.
  • “Sprint Planning Optimization in Agile Data Warehouse Design,” Matteo Golfarelli, Stefano Rizzi, and Elisa Turricchia, LNCS 7448, pp. 30–41, 2012.
  • “Effort Estimation in Global Software Development: A Systematic Literature Review,” Ricardo Britto, Vitor Freitas, Emilia Mendes, and Muhammad Usman, IEEE 9th International Conference on Global Software Engineering, 2014.
  • “An evaluation of the paired comparisons method for software sizing,” Eduardo Miranda, Proceedings of the 2000 International Conference on Software Engineering.
  • “Protecting Software Development Projects Against Underestimation,” Eduardo Miranda and Alain Abran, Project Management Journal, Volume 39, Issue 3, Pages 75-85, September, 2008.
  • “Sizing User Stories Using Paired Comparisons,” Eduardo Miranda, Pierre Bourque, and Alain Abran, Information and Software Technology, Volume 51, Issue 9, September 2009, Pages 1327–1337.
  • “Effort Estimation in Agile Software Development using Story Points,” Evita Coelho and Anirban Basu, International Journal of Applied Information Systems (IJAIS), Volume 3, Number 7, August 2012.
  • “A Model for Estimating Agile Project Schedule Acceleration,” Dan Ingold, Barry Boehm, Supannika Koolmanojwong, and Jo Ann Lane, Center for Systems and Software Engineering, University of Southern California, Los Angeles, 2013.
  • “Cost Estimation in Agile Development Projects,” Siobhan Keaveney and Kieran Conboy, International Conference on Information Systems Development (ISD2011) Prato, Italy.
  • IT Project Estimation: A Practical Guide to Costing Software, Paul Coombs, Cambridge University Press, 2003
  • “Replanning, Reprogramming, and Single Point Adjustments,” July 2013, NAVY CEVM (Center for Earned Value Management).
  • “Software Cost Estimating for Iterative / Incremental Development Programs – Agile Cost Estimating,” NASA CAS, July 2014.
  • “Distinguishing Two Dimensions of Uncertainty,” Craig Fox and Gülden Ülkümen, in Perspectives on Thinking, Judging, and Decision Making, W. Brun, G. Keren, G. Kirkeboen, and H. Montgomery (Eds.), Oslo, Norway: Universitetsforlaget, 2011.
  • “Could Social Factors Influence the Effort Software Estimation?” Valentina Lenarduzzi, 7th International Workshop on Social Software Engineering (SSE), At Bergamo (Italy), September 2015.
  • “Object-Oriented Software Cost Estimation Methodologies Compared,” D. Gregory Foley & Brenda K. Wetzel, Society of Cost Estimating and Analysis – International Society of Parametric Analysts, 22 December 2011, pp 41-63.
  • “Fix Your Estimating Bad Habits,” Ted M. Young, http://slideshare.net/tedyoung/fix-you-some-bad-estimation-habits
  • “How to Estimate an Agile Project,” Saunders Learning Center, http://www.slideshare.net/FloydSaunders/how-to-estimate-an-agile-project
  • Software Cost Estimation Metrics Manual for Defense Systems, Bradford Clark and Richard Madachy (editors), 2015.
  • “Metrics for Agile Projects: Finding the Right Tools for the Job,” ESI International, https://www.projectsmart.co.uk/white-papers/metrics-for-agile-projects.pdf
  • “The Sprint Planning Meeting,” Richard “Dick” Carlson, http://www.a2zalea.com/wp-content/uploads/2014/02/SprintPlanningMeeting_20140118.pdf
  • “Software Development Estimation Biases: The Role of Interdependence,” Magne Jørgensen and Stein Grimstad, IEEE Transactions on Software Engineering, Vol. 38, No. 3, May/June 2012.
  • “Managing Projects of Chaotic and Unpredictable Behavior,” Richard “Dick” Carlson, http://www.a2zalea.com/wp-content/uploads/2014/02/Managing-Projects-of-Chaotic-and-Unpredictable-Behavior_20140219.pdf
  • “Practical Guidelines for Expert-Judgement-Based Software Effort Estimation,” Magne Jørgensen, IEEE Software, May/June 2005.
  • “How do you estimate on an Agile project?,” eBook, ThoughtWorks, https://gl/ES5M3c
  • “Using the COSMIC Method to Estimate Agile User Stories,” Jean-Marc Desharnais, Luigi Buglione, Bugra Kocatürk, Proceedings of the 12th International Conference on Product Focused Software Development and Process Improvement
  • “On the problem of the software cost function,” J. J. Dolado, Information and Software Technology 43 (2001) 61–72.
  • Software Project Effort Estimation: Foundations and Best Practice Guidelines for Success, 2014th Edition, Adam Trendowicz and Ross Jeffery, Springer.
  • “Unit effects in project estimation: It matters whether you estimate in work-hours or workdays,” Magne Jørgensen, Journal of Systems and Software (2015).
  • “Estimating Software Development Effort based on Use Cases – Experiences from Industry,” Bente Anda, Hege Dreiem, Dag I. K. Sjøberg, and Magne Jørgensen, Proceedings of the 4th International Conference on The Unified Modeling Language, Modeling Languages, Concepts, and Tools, pp. 487-502
  • “A Neuro-Fuzzy Model with SEER-SEM for Software Effort Estimation,” Wei Lin Du, Danny Ho, Luiz Fernando Capretz, 25th International Forum on COCOMO and Systems/Software Cost Modeling, Los Angeles, CA, 2010.
  • “A Program Manager's Guide For Software Cost Estimating,” Andrew L. Dobbs, Naval Postgraduate School, December 2002.
  • “An Engineering Context For Software Engineering,” Richard D. Riehle, September 2008, Naval Postgraduate School.
  • “Application of Real Options theory to software engineering for strategic decision making in software related capital investments,” Albert O. Olagbemiro, Monterey, California. Naval Postgraduate School, 2008.
  • “Next Generation Software Estimating Framework: 25 Years and Thousands of Projects Later,” Michael A. Ross, Journal of Cost Analysis and Parametrics, Volume 1, 2008 - Issue 2.
  • “A Probabilistic Method for Predicting Software Code Growth,” Michael A. Ross, Journal of Cost Analysis and Parametrics, Volume 4, 2011 - Issue 2
  • “Application of selected software cost estimating models to a tactical communications switching system: tentative analysis of model applicability to an ongoing development program,” William B. Collins, Naval Postgraduate School
  • “An examination of project management and control requirements and alternatives at FNOC,” Charlotte Ruth Gross, Naval Postgraduate School.
  • “Software cost estimation through Bayesian inference of software size,” In Kyoung Park, Naval Postgraduate School.
  • “Using the agile development methodology and applying best practice project management processes,” Gary R. King, Naval Postgraduate School.
  • “Calibrating Function Points Using Neuro-Fuzzy Technique,” Vivian Xia, Danny Ho, and Luiz F. Capretz, 21st International Forum on Systems, Software and COCOMO Cost Modeling, Washington, 2006.
  • Practical Software Project Estimation: A Toolkit for Estimating Software Development Effort & Duration, Peter Hill, International Software Benchmarking Standards Group.
  • Software Estimation Best Practices, Tools & Techniques: A Complete Guide for Software Project Estimators, Murali K. Chemuturi, J. Ross Publishing, August 2009.
  • Software Project Cost and Schedule Estimating: Best Practices, William H. Roetzheim, Prentice Hall.
  • Estimating the Scope of Software Projects Using Statistics, Louis Newstrom, Louis Newstrom Publisher, December 4, 2015.
  • “Organizational Structure Impacts Flight Software Cost Risk,” Jairus M. Hihn, Karen Lum, and Erik Monson, Journal of Cost Analysis and Parametrics, Volume 2, 2009 - Issue 1.
  • “Estimate of the appropriate Sprint length in agile development by conducting simulation,” Ryushi Shiohama, Hironori Washizaki, Shin Kuboaki, Kazunori Sakamoto, and Yoshiaki Fukazawa, 2012 Agile Conference, 13-17 August 2012, Dallas Texas
  • “Advancement of decision making in Agile Projects by applying Logistic Regression on Estimates,” Lakshminarayana Kompella, 2013 IEEE 8th International Conference on Global Software Engineering Workshops.
  • “Estimating in Actual Time,” Moses M. Hohman, IEEE Proceedings of the Agile Development Conference (ADC’05), Denver, Colorado, 24-29 July, 2005
  • “Cost Estimation In Agile Development Projects,” Siobhan Keaveney and Kieran Conboy, ECIS 2006 Proceedings
  • “Coping with the Cone of Uncertainty: An Empirical Study of the SAIV Process Model,” Da Yang, Barry Boehm , Ye Yang, Qing Wang, and Mingshu Li, ICSP 2007, LNCS 4470, pp. 37–48, 2007
  • “Combining Estimates with Planning Poker – An Empirical Study,” Kjetil Moløkken-Østvold and Nils Christian Haugen, Proceedings of the 2007 Australian Software Engineering Conference (ASWEC'07).
  • “A Case Study Research on Software Cost Estimation Using Experts’ Estimates, Wideband Delphi, and Planning Poker Technique,” Taghi Javdani Gandomani , Koh Tieng Wei, and Abdulelah Khaled Binhamid, International Journal of Software Engineering and Its Applications, 8, No. 11 (2014), pp. 173-182.
  • “Algorithmic Based and Non-Algorithmic Based Approaches to Estimate the Software Effort,” WanJiang Han , TianBo Lu , XiaoYan Zhang , LiXin Jiang and Weijian Li, International Journal of Multimedia and Ubiquitous Engineering, 10, No. 4 (2015), pp. 141-154.
  • “Reducing Estimation Uncertainty with Continuous Assessment: Tracking the 'Cone of Uncertainty’” Pongtip Aroonvatanaporn, Chatchai Sinthop and Barry Boehm, Center for Systems and Software Engineering University of Southern California Los Angeles, CA 90089, ASE’10, September 20–24, 2010, Antwerp, Belgium, 2010.
  • “Integrated Approach of Software Project Size Estimation,” Brajesh Kumar Singh, Akash Punhani, and A. K. Misra, International Journal of Software Engineering and Its Applications 10, No. 2 (2016), pp. 45-64.
  • “Investigating the Effect of Using Methodology on Development Effort in Software Projects,” Vahid B. Khatibi, Dayang N. A. Jawawi, and Elham Khatibi, International Journal of Software Engineering and Its Applications 6, No. 2, April, 2012.
  • “Data-Driven Decision Making as a Tool to Improve Software Development Productivity,” Mary Erin Brown, Walden University, 2013
  • “Applying Agile Practices to Space-Based Software Systems,” Arlene Minkiewicz, Software Technology Conference, Long Beach, CA 31 March – 3 April, 2014
  • “Estimating the Effort Overhead in Global Software Development,” Ansgar Lamersdorf, Jurgen Munch, Alicia Fernandez-del Viso Torre, Carlos Rebate Sanchez, and Dieter Rombach, 2010 5th IEEE International Conference on Global Software Engineering
  • “A Proposed Framework for Software Effort Estimation Using the Combinational Approach of Fuzzy Logic and Neural Networks,” Pawandeep Kaur and Rupinder Singh, International Journal of Hybrid Information Technology 8, No. 10 (2015), pp. 73-80.
  • “Software Estimating Rules of Thumb,” Capers Jones, http://compaid.com/caiinternet/ezine/capers-rules.pdf
  • “Why Are Estimates Always Wrong: Estimation Bias and Strategic Misestimation,” Daniel D. Galorath, http://iceaaonline.com/ready/wp-content/uploads/2015/06/RI03-Paper-Galorath-Estimates-Always-Wrong.pdf
  • “Using planning poker for combining expert estimates in software projects,” K. Moløkken-Østvold, N. C. Haugen, and H. C. Benestad, Journal of Systems and Software, vol. 81, issue 12 (2008) pp. 2106–2117.
  • “Effort Distribution to Estimate Cost in Small to Medium Software Development Project with Use Case Points,” Putu Linda Primandari and Sholiq, The Third Information Systems International Conference, 2015
  • “Estimation of IT-Projects Highlights of a Workshop,” Manfred Bundschuh, Metrics News, Vol. 4, Nr. 2, December 1999, pp. 29 – 37, https://itmpi.org/Portals/10/PDF/bundschuh-est.pdf
  • “Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice,” Bent Flyvbjerg, European Planning Studies 16, No. 1, January 2008.
  • “Numerical anchors and their strong effects on software development effort estimates,” Erik Løhrea, and Magne Jørgensen, Journal of Systems and Software (2015).
  • “A Neuro-Fuzzy Model for Function Point Calibration,” Wei Xia, Danny Ho, and Luiz Fernando Capretz, WSEAS, Transactions On Information Science & Applications, Issue 1, Volume 5, January 2008.
  • “Unit effects in project estimation: It matters whether you estimate in work-hours or workdays,” Magne Jørgensen, Journal of Systems and Software (2015), https://simula.no/file/time-unit-effect-woauthorinfpdf/download
  • “Fallacies and biases when adding effort estimates,” Magne Jørgensen, Proceedings at Euromicro/SEEA. : IEEE, 2014.
  • “How Does Project Size Affect Cost Estimation Error? Statistical Artifacts and Methodological Challenges,” International Journal of Project Management 30 (2012): 751-862, https://simula.no/file/simulasimula742pdf/download
  • “Does the Use of Fibonacci Numbers in Planning Poker Affect Effort Estimates?” Ritesh Tamrakar and Magne Jørgensen, 16th International Conference on Evaluation & Assessment in Software Engineering, 2012.
  • “Using inferred probabilities to measure the accuracy of imprecise forecasts,” Paul Lehner, Avra Michelson, Leonard Adelman, and Anna Goodman, Judgment and Decision Making, Vol. 7, No. 6, November 2012, pp. 728–740.
  • “Software Development Effort Estimation: Why it fails and how to improve it,” Magne Jørgensen, Simula Research Laboratory & University of Oslo, https://simula.no/file/simulasimula1688pdf/download
  • “Contrasting Ideal and Realistic Conditions As a Means to Improve Judgment-Based Software Development Effort Estimation,” Magne Jørgensen, Information and Software Technology 53 (2011): 1382-1390.
  • “Software Effort Estimation as Collaborative Planning Activity,” Kristin Børte, https://simula.no/file/simulasimula1226pdf/download
  • “Human judgment in planning and estimation of software projects,” https://simula.no/file/simulasimula886pdf/download
  • “Guideline for Sizing Agile Projects with COSMIC,” Sylvie Trudel and Luigi Buglione, http://cosmic-sizing.org/publications/guideline-for-sizing-agile-projects-with-cosmic
  • “The COSMIC Functional Size Measurement Method, Version 3.0.1, Guideline for the use of COSMIC FSM to manage Agile projects, VERSION 1.0,” September 2011, http://cosmic-sizing.org/cosmic-method-v3-0-1-agile-projects-guideline-v1-0/
  • “Using the COSMIC Method to Evaluate the Quality of the Documentation of Agile User Stories,” Jean-Marc Desharnais, Buğra Kocatürk, and Alain Abran, Proceedings of the 12th International Conference on Product Focused Software Development and Process Improvement, Pages 68-73
  • “An Empirical Study of Using Planning Poker for User Story Estimation,” Nils C. Haugen, Proceedings of AGILE 2006 Conference (AGILE’06).
  • “A Framework for the Analysis of Software Cost Estimation Accuracy,” Stein Grimstad and Magne Jørgensen, ISESE'06, September 21–22, 2006.
  • “An Empirical Investigation on Effort Estimation in Agile Global Software Development,” Ricardo Britto, Emilia Mendes, and Jurgen Borstler, 2015 IEEE 10th International Conference on Global Software Engineering
  • “Software Effort Estimation: Unstructured Group Discussion as a Method to Reduce Individual Biases,” Kjetil Moløkken and Magne Jørgensen, Incremental and Component-Based Software Development October 2003, University of Oslo.
  • “A Case Study on Agile Estimating and Planning using Scrum,” V. Mahnic, Electronics And Electrical Engineering, No. 5(111).
  • “Review on Traditional and Agile Cost Estimation Success Factor in Software Development Project,” Zulkefli Mansor, Saadiah Yahya, Noor Habibah Hj Arshad, International Journal on New Computer Architectures and Their Applications (IJNCAA) 1(3): 942-952.
  • “A Collective Study of PCA and Neural Network based on COCOMO for Software Cost Estimation,” Rina M. Waghmode, L.V. Patil, and S.D Joshi, International Journal of Computer Applications (0975 – 8887) Volume 74– No. 16, July 2013.
  • “iUCP: Estimating Interactive-Software Project Size with Enhanced Use-Case Points,” Nuno Jardim Nunes, Larry Constantine, and Rick Kazman, IEEE Software, Issue No. 04 - July/August (2011 vol. 28)
  • “Estimating Software Project Effort Using Analogies,” Martin Shepperd and Chris Schofield, IEEE Transactions on Software Engineering, Vol. 23, No. 12, November 1997.
  • “Software Engineering Economics,” Barry W. Boehm, Software Information Systems Division, TRW Defense Systems Group, Redondo Beach, CA 90278
  • “Adapting, Correcting, and Perfecting Software Estimates: A Maintenance Metaphor,” Tarek K. Abdel-Hamid, IEEE Computer, March 1993.
  • “Estimating software projects,” R. Agarwal, Manish Kumar t, Yogesh, S. Mallick, RM. Bharadwaj, D. Anantwar, ACM Software Engineering Notes Vol 26 No. 4 July 2001, Pg. 6.
  • “Cost estimation in agile development projects,” Siobhan Keaveney and Kieran Conboy, Proceedings of the Fourteenth European Conference on Information Systems, ECIS 2006, Göteborg, Sweden, 2006.
  • “Managing Uncertainty in Agile Release Planning,” K. McDaid, D. Greer, F. Keenan, P. Prior, P. Taylor, G. Coleman, Proceedings of the Eighteenth International Conference on Software Engineering & Knowledge Engineering (SEKE'2006).
  • “Research Challenges of Agile Estimation,” Rashmi Popli and Dr. Naresh Chauhan, International Journal of IT & Knowledge Management, Volume 7, Number 1, December 2013, pp. 108-111 (ISSN 0973-4414).
  • “Allowing for Task Uncertainties and Dependencies in Agile Release Planning,” Kevin Logue, Kevin McDaid, and Des Greer.
  • “Agile Software Development in Large Organizations,” Mikael Lindvall, Dirk Muthig, Aldo Dagnino, Christina Wallin, Michael Stupperich, David Kiefer, John May, and Tuomo Kähkönen, IEEE Computer, December 2004.
  • “Adoption of Team Estimation in a Specialist Organizational Environment,” Tor Erlend Fægri, Lecture Notes in Business Information Processing, June 2010.
  • “The Relationship between Customer Collaboration and Software Project Overruns,” Kjetil Moløkken-Østvold and Kristian Marius Furulund, IEEE Agile Conference, 13-17 August, 2007.
  • “Improving Estimations in Agile Projects: Issues and Avenues,” Luigi Buglione and Alain Abran, Software Measurement European Forum – SMEF2007, Rome (Italy), May 8-11, 2007.
  • “Fundamental uncertainties in projects and the scope of project management,” Roger Atkinson , Lynn Crawford, and Stephen Ward, International Journal of Project Management 24 (2006) 687–698.
  • “Improving estimation accuracy by using Case Based Reasoning and a combined estimation approach,” Srinivasa Gopal and Meenakshi D’Souza, Proceedings of ISEC '12, Feb. 22-25, 2012.
  • “Effort Estimation in Agile Software Development: A Systematic Literature Review,” Muhammad Usman, Emilia Mendes, Francila Weidt, and Ricardo Britto, Proceedings of the 10th International Conference on Predictive Models in Software Engineering, 2014, Pages 82-91.
  • “Incremental effort prediction models in Agile Development using Radial Basis Functions,” Raimund Moser, Witold Pedrycz, and Giancarlo Succi, SEKE 2007.
  • “Applying Combined Efforts of Resource Capability of Project Teams for Planning and Managing Contingency Reserves for Software and Information Engineering Projects,” Peter H. Chang, GSTF Journal on Computing (JoC) Vol. 2 No. 3, October 2012.
  • Evidence-Based Software Engineering and Systematic Reviews, Barbara Ann Kitchenham, David Budgen and Pearl Brereton, November 5, 2015
  • “The Signal and the Noise in Cost Estimating,” Christian B. Smart, Ph.D., 2016 International Training Symposium, Bristol England, 2016.
  • Estimating Software Intensive Systems: Project, Products, and Processes, Richard Stutzke, Addison Wesley.
  • Estimating Software Costs: Bringing Realism to Estimating, 2nd Edition, Capers Jones, McGraw Hill
  • Software Estimation: Demystifying the Black Art, Steve McConnell, Microsoft Press.
  • Software Metrics: A Rigorous and Practical Approach, 3rd Edition, Norman Fenton and James Bieman, CRC Press.
  • Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, Paul R. Garvey, CRC Press.
  • Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte-Carlo Simulation, Troy Magennis, CreateSpace Independent Publishing Platform, October 25, 2011.
  • “Effort estimation for Agile Software Development Projects,” Andreas Schmietendorf, Martin Kunz, Reiner Dumke, Proceedings 5th Software Measurement European Forum, Milan 2008.
  • “The QUELCE Method: Using Change Drivers To Estimate Program Costs,” Sarah Sheard, April 2016, Software Engineering Institute.
  • “Software Cost and Schedule Estimating: A Process Improvement Initiative,” Robert Park, Wolfhart Goethert, J. Todd Webb, Special Report CMU/SEI-94-SR-3 May 1994.
  • “Organizational Considerations for the Estimating Process,” Bob Ferguson, Software Engineering Institute, November, 2004.
  • “A Parametric Analysis of Project Management Performance to Enhance Software Development Process,” N. R. Shashikumar, T. R. Gopalakrishnan Nair, Suma V, IEEE International Conference on Advanced Research in Engineering and Technology (ICARET - 2013)
  • “Checklists and Criteria for Evaluating the Cost and Schedule Estimating Capabilities of Software Organizations,” Robert E. Park, CMU/SEI-95-SR-005
  • “A Manager's Checklist for Validating Software Cost and Schedule Estimates,” Robert E. Park, Special Report CMU/SEI-95-SR-004 January 1995.
  • “ACE: Accurate Confident Estimating,” Team Software Process (TSP) Symposium, Pittsburgh, PA, 4 November 2014, SEI Carnegie Mellon University.
  • How to Lie with Statistics, Darrell Huff, W. W. Norton, 1954
  • “A Simulation and Evaluation of Earned Value Metrics to Forecast the Project Duration,” Mario Vanhoucke and Stephan Vandevoorde, The Journal of the Operational Research Society, 58, No. 10 (Oct., 2007), pp. 1361-1374
  • “Avoid Software Project Horror Stories: Check the Reality Value of the Estimate First!”, Harold van Heeringen, ICEAA 2014.
  • COSMIC: Guideline for Sizing Business Software, Version 3, http://www.etsmtl.ca/Unites-de-recherche/GELOG/accueil
  • “Factors Affecting Duration And Effort Estimation Errors In Software Development Projects,” Ofer Morgenshtern, Tzvi Raz, and Dov Dvir, Working Paper No 8/2005, Henry Crown Institute of Business Research, Israel. http://recanati-bs.tau.ac.il/Eng/?CategoryID=444&ArticleID=747
  • “An Empirical Validation of Software Cost Estimation Models,” Chris F. Kemerer, Research Contributions, Management of Computing, Communications of the ACM , May 1987, Volume 30, Number 5.
  • “A Decision Support System To Choose Optimal Release Cycle Length In Incremental Software Development Environments,” Avnish Chandra Suman, Saraswati Mishra, and Abhinav Anand, International Journal of Software Engineering & Applications (IJSEA), 7, No.5, September 2016.
  • “Protecting Software Development Projects Against Underestimation,” Eduardo Miranda, Alain Abran, École de technologie supérieure - Université du Québec, http://mse.isri.cmu.edu/software-engineering/documents/faculty-publications/miranda/mirandaprotectingprojectsagainstunderestimations.pdf
  • “Improving Subjective Estimates Using Paired Comparisons,” Eduardo Miranda, IEEE Software, January/February 2001.
  • “Improving Estimations in Agile Projects: Issues and Avenues,” Luigi Buglione, Alain Abran, Software Measurement European Forum – SMEF2007, Rome (Italy), May 8-11, 2007.
  • “Estimation of Software Development Effort from Requirements Based Complexity,” Ashish Sharma and Dharmender Singh Kushwaha, 2nd International Conference on Computer, Communication, Control and Information Technology (C3IT 2012), February 25-26, 2012.
  • “Estimating the Test Volume and Effort for Testing and Verification & Validation,” Alain Abran, Juan Garbajosa, and Laila Cheikhi, First Annual ESA Workshop on Spacecraft Data Systems and Software - SDSS 2005, ESTEC, Noordwijk, European Space Agency, Netherlands, 17-20 October 2005.
  • “A General Empirical Solution to the Macro Software Sizing and Estimating Problem,” Lawrence H. Putnam, IEEE Transactions on Software Engineering, Vol. SE-4, No. 4, July 1978.
  • “A Comparison of Software Cost, Duration, and Quality for Waterfall vs. Iterative and Incremental Development: A Systematic Review,” Susan M. Mitchell and Carolyn B. Seaman, Third International Symposium on Empirical Software Engineering and Measurement, 2009.
  • Software Sizing and Estimating: MK II FPA, Charles R. Symons, John Wiley and Sons, 1991
  • “A Review of Surveys on Software Effort Estimation,” Kjetil Moløkken and Magne Jørgensen, Proceedings of the International Symposium on Empirical Software Engineering, ISESE '03.
  • “Accurate Estimates Without Local Data?” Tim Menzies, Steve Williams, Oussama Elrawas, Daniel Baker, Barry Boehm, Jairus Hihn, Karen Lum, and Ray Madachy, Software Process Improvement And Practice, (2009).
  • “An Assessment and Comparison of Common Software Cost Estimation Modeling Techniques,” Lionel C. Briand, Khaled El Emam, Dagmar Surmann, Isabella Wieczorek, and Katrina D. Maxwell, Proceedings of the 21st international conference on Software engineering, Pg. 313-322
  • “The Probable Lowest-Cost Alternative According to Borda,” Neal D. Hulkower, Journal of Cost Analysis and Parametrics, 3:2, 29-36
  • “An Efficient Approach for Agile Web Based Project Estimation: AgileMOW,” Ratnesh Litoriya and Abhay Kothari, Journal of Software Engineering and Applications, 2013, 6, 297-303.
  • “Corad Agile Method for Agile Software Cost Estimation,” Govind Singh Rajput and Ratnesh Litoriya, http://dx.doi.org/10.4236/oalib.1100579
  • “A Baseline Model for Software Effort Estimation,” Peter A. Whigham, Caitlin A. Owen, and Stephen G. MacDonell, ACM Transactions on Software Engineering and Methodology 24, 3, Article 20 (May 2015).
  • Agile Product Management: Agile Estimating & Planning Your Sprint with Scrum and Release Planning 21 Steps, Paul Vii, CreateSpace, 2016.
  • “Core Estimating Concepts,” William Roetzheim, CrossTalk: The Journal of Defense Software Engineering—January/February 2013.
  • “A Practical Approach to Size Estimation of Embedded Software Components,” Kenneth Lind and Rogardt Heldal, IEEE Transactions On Software Engineering, Vol. 38, No. 5, September/October 2012.
  • “A Probabilistic Model for Predicting Software Development Effort,” Parag C. Pendharkar, Girish H. Subramanian, and James A. Rodger, IEEE Transactions On Software Engineering, Vol. 31, No. 7, July 2005.
  • “A Pattern Language for Estimating,” Dmitry Nikelshpur, PLoP '11 Proceedings of the 18th Conference on Pattern Languages of Programs, Article No. 17.
  • “Do Estimators Learn? On the Effect of a Positively Skewed Distribution of Effort Data on Software Portfolio Productivity,” Hennie Huijgens and Frank Vogelezang, 7th International Workshop on Emerging Trends in Software Metrics, 2016.
  • “The Inaccurate Conception: Some thoughts on the accuracy of estimates,” Phillip G. Armour, Communications Of The ACM, March 2008/Vol. 51, No. 3
  • “Understanding Software Project Estimates,” Katherine Baxter, CrossTalk: The Journal of Defense Software Engineering, March/April 2009.
  • “Validation Methods for Calibrating Software Effort Models,” Tim Menzies, Dan Port, Zhihao Chen, and Jairus Hihn, May 15–21, 2005, http://menzies.us/pdf/04coconut.pdf
  • “Requirements Instability in Cost Estimation,” Abiha Batool, Sabika Batool, and Mohammad Ayub Latif, https://www.academia.edu/4493828/Requirements_Instability_in_Cost_Estimation
  • “Negative Results for Software Effort Estimation,” Tim Menzies, Ye Yang, George Mathew, Barry Boehm, and Jairus Hihn, EMSE 2016.
  • “Creating Requirements-Based Estimates Before Requirements Are Complete,” Carol A. Dekkers, CrossTalk: The Journal of Defense Software Engineering, April 2005.
  • “Rational Cost Estimation of Dedicated Software Systems,” Beata Czarnacka-Chrobot, Journal of Software Engineering and Applications, 2012, 5, 262-269.
  • “Summarization of Software Cost Estimation,” Xiaotie Qin and Miao Fang, Advances in Control Engineering and Information Science, 2011.
  • “Software Project Development Cost Estimation,” Barbara Kitchenham, The Journal of Systems and Software 3, 267-278 (1985).
  • “Cost Estimation in Agile Software Development Projects,” Michael Lang, Kieran Conboy and Siobhán Keaveney, International Conference on Information Systems Development (ISD2011) Prato, Italy.
  • “Project Estimating and Scheduling,” Terry Boult, University of Colorado, Colorado Springs, CS 330 Software Engineering.
  • “Practical Guidelines for Expert-Judgment-Based Software Effort Estimation,” Magne Jørgensen, IEEE Software, May/June 2005.
  • “Predicting Software Projects Cost Estimation Based on Mining Historical Data,” Hassan Najadat, Izzat Alsmadi, and Yazan Shboul, ISRN Software Engineering, Volume 2012, Article ID 823437.
  • “Models for Improving Software System Size Estimates during Development,” William W. Agresti, William M. Evanco, William M. Thomas, Journal of Software Engineering & Applications, 2010, 3: 1-10.
  • “Requirements Engineering for Agile Methods,” Alberto Sillitti and Giancarlo Succi, in Engineering and Managing Software Requirements, pp. 306-326, Springer, 2005.
  • “A Method for Improving Developers’ Software Size Estimates,” Lawrence H. Putnam, Douglas T. Putnam, and Donald M. Beckett, CrossTalk: The Journal of Defense Software Engineering, April 2005.
  • “PERT, CPM, and Agile Project Management,” Robert C. Martin 5 October 2003, http://www.codejournal.com/public/cj/previews/PERTCPMAGILE.pdf
  • “Reliability and accuracy of the estimation process Wideband Delphi vs. Wisdom of Crowds,” Marek Grzegorz Stochel, 35th IEEE Annual Computer Software and Applications Conference, 2011
  • “Predicting development effort from user stories,” P. Abrahamsson, I. Fronza, R. Moser, J. Vlasenko, and W. Pedrycz, International Symposium on Empirical Software Engineering and Measurement, 2011
  • “Effort prediction in iterative software development processes - incremental versus global prediction models,” Pekka Abrahamsson, Raimund Moser, Witold Pedrycz, Alberto Sillitti, Giancarlo Succi, First International Symposium on Empirical Software Engineering and Measurement, 2007.
  • “Planning Poker or How to avoid analysis paralysis while release planning,” James Grenning, https://wingman-sw.com/papers/PlanningPoker-v1.1.pdf
  • “Agile Estimation using CAEA: A Comparative Study of Agile Projects,” Shilpa Bhalerao and Maya Ingle, 2009 International Conference on Computer Engineering and Applications, IPCSIT Vol. 2 (2011).
  • “A Bayesian approach to improve estimate at completion in earned value management,” Franco Caron, Fabrizio Ruggeri, and Alessandro Merli, Project Management Institute Journal, Vol. 44, No. 1, pp. 3-16. 2013.
  • “An Empirical Approach for Estimation of the Software Development Effort,” Amit Kumar Jakhar and Kumar Rajnish, International Journal of Multimedia and Ubiquitous Engineering, 10, No. 2 (2015), pp. 97-110.
  • “Forecasting of Software Development Work Effort: Evidence on Expert Judgment and Formal Models,” Magne Jørgensen, International Journal of Forecasting, 2007
  • Software Project Effort Estimation Foundations and Best Practice Guidelines for Success, Adam Trendowicz and Ross Jeffery, Springer 2014.
  • “Agile Release Planning: Dealing with Uncertainty in Development Time and Business Value,” Kevin Logue and Kevin McDaid, 15th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems, March 31 April 4, 2008.
  • “Why Are Estimates Always Wrong: Estimation Bias and Strategic Misestimation,” Daniel D. Galorath, AIAA SPACE 2015 Conference and Exposition Pasadena, California, 2015.
  • “Estimation of Project Size Using User Stories,” Murad Ali, Zubair A. Shaikh, and Eaman Ali, International Conference on Recent Advances in Computer Systems (RACS 2015).
  • “A Survey of Agile Software Estimation Methods,” Hala Hamad Osman and Mohamed Elhafiz Musa, International Journal of Computer Science and Telecommunications, Volume 7, Issue 3, March 2016.
  • “Why Can’t People Estimate: Estimation Bias and Mitigation,” Dan Galorath, IEEE Software Technology Conference, October 12-15, 2015, Hilton Hotel, Long Beach, California.
  • “Cost-Effective Supervised Learning Models for Software Effort Estimation in Agile Environments,” Kayhan Moharreri, Alhad Vinayak Sapre, Jayashree Ramanathan, and Rajiv Ramnath, 40th Annual IEEE Computer Software and Applications Conference (COMPSAC), 2016
  • “A Review of Surveys on Software Effort Estimation,” Kjetil Moløkken and Magne Jørgensen, http://uio.no/isu/INCO/Papers/Review_final8.pdf
  • “Project Duration Forecasting Using Earned Value Method and Time Series,” Khandare Manish A., Vyas Gayatri S., International Journal of Engineering and Innovative Technology (IJEIT) Volume 1, Issue 4, April 2012.
  • “Integrating Risk Assessment and Actual Performance for Probabilistic Project Cost Forecasting: A Second Moment Bayesian Model,” Byung-Cheol Kim, IEEE Transactions On Engineering Management, Vol. 62, No. 2, May 2015.
  • “A study of project selection and feature weighting for analogy based software cost estimation,” Y.F. Li, M. Xie, and T.N. Goh, The Journal of Systems and Software 82 (2009) 241–252.
  • “Complementing Measurements and Real Options Concepts to Support InterSprint Decision-Making in Agile Projects,” Zornitza Racheva, Maya Daneva, and Luigi Buglione, 34th Euromicro Conference on Software Engineering and Advanced Applications.
  • “Software Cost Estimation and Sizing Methods: Issues and Guidelines,” Shari Lawrence Pfleeger, Felicia Wu, and Rosalind Lewis, RAND Corporation, Project Air Force, 2005.
  • “Anchoring and Adjustment in Software Estimation,” Jorge Aranda and Steve Easterbrook, ESEC-FSE’05, September 5–9, 2005, Lisbon, Portugal.
  • “Cycle Time Analytics: Making decisions using lead time and cycle time to avoid needing estimates for every story,” Troy Magennis, LKCE 2013 ‒ Modern Management Methods.
  • “Probabilistic Forecasting Decision Making: When Do You Want it?” Larry Maccherone, http://www.hanssamios.com/dokuwiki/_media/larry-maccherone-probabilistic-decision-making.pdf
  • “Software Project Planning for Robustness and Completion Time in the Presence of Uncertainty using Multi Objective Search Based Software Engineering,” Stefan Gueorguiev, Mark Harman, and Giuliano Antoniol, GECCO’09, July 8–12, 2009, Montréal Québec, Canada.
  • “Empirical Validation of Neural Network Models for Agile Software Effort Estimation based on Story Points,” Aditi Panda, Shashank Mouli Satapathy, and Santanu Kumar Rath, 3rd International Conference on Recent Trends in Computing 2015 (ICRTC-2015).
  • “When 90% Confidence Intervals are 50% Certain: On the Credibility of Credible Intervals,” Karl Halvor Teigen and Magne Jørgensen, Applied Cognitive Psychology, 19: 455–475 (2005)
  • “Scaling Agile Estimation Methods with a Parametric Cost Model,” Carl Friedrich Kreß, Oliver Hummel, Mahmudul Huq, ICSEA 2014 : The Ninth International Conference on Software Engineering Advances, 2014
  • “Expert Estimation and Historical Data: An Empirical Study,” Gabriela Robiolo, Silvana Santos, and Bibiana Rossi, ICSEA 2013: The Eighth International Conference on Software Engineering Advances.
  • “Agile Monitoring Using the Line of Balance,” Eduardo Miranda, Institute for Software Research, Carnegie Mellon University, and Pierre Bourque, École de technologie supérieure – Université du Québec.
  • “Managerial Decision Making Under Risk and Uncertainty,” Ari Riabacke, IAENG International Journal of Computer Science, 32:4, IJCS_32_4_12.
  • “Simple Method Proposal for Cost Estimation from Work Breakdown Structure,” Sérgio Sequeira and Eurico Lopes, Conference on ENTERprise Information Systems / International Conference on Project Management/ Conference on Health and Social Care Information Systems and Technologies, CENTERIS / ProjMAN / HCist 2015 October 7-9, 2015.
  • “From Nobel Prize to Project Management: Getting Risks Right,” Bent Flyvbjerg, Aalborg University, Denmark, Project Management Journal, vol. 37, no. 3, August 2006, pp. 5-15.
  • “The Uncertainty Principle in Software Engineering,” Hadar Ziv and Debra Richardson, ICSE 97, 9th International Conference on Software Engineering, Boston MA, 17 ‒ 23 May, 1997
  • “Analyzing Software Effort Estimation using k means Clustered Regression Approach,” Geeta Nagpal, Moin Uddin, and Arvinder Kaur, ACM SIGSOFT Software Engineering Notes, January 2013 Volume 38 Number 1.
  • “Assuring Software Cost Estimates: Is it an Oxymoron?,” Jairus Hihn and Grant Tregre, Goddard Space Flight Center, 2013 46th Hawaii International Conference on System Sciences.
  • “How Does NASA Estimate Software Cost? Summary Findings and Recommendations,” Jairus Hihn, Lisa VanderAar, Manuel Maldonado, Pedro Martinez, Grant Tregre, NASA Cost Symposium, OCE Software Working Group, August 27-29, 2013.
  • “Calibrating Software Cost Models Using Bayesian Analysis,” Sunita Chulani, Barry Boehm, Bert Steece, USC-CSE 1998
  • “Software Project and Quality Modelling Using Bayesian Networks,” Norman Fenton, Peter Hearty, Martin Neil, and Łukasz Radliński, Elsevier, November 2013.
  • “Software Project Level Estimation Model Framework based on Bayesian Belief Networks,” Hao Wang, Fei Peng, and Chao Zhang, 2006 Sixth International Conference on Quality Software (QSIC'06), 27-28 October, 2006.
  • “Using Bayesian Belief Networks to Model Software Project Management Antipatterns,” Dimitrios Settas, Stamatia Bibi, Panagiotis Sfetsos, Ioannis Stamelos, and Vassilis Gerogiannis, Software Engineering Research, Management and Applications, ACIS International Conference on (2006), Seattle, Washington, Aug. 9, 2006 to Aug. 11, 2006.
  • “A Survey Of Bayesian Net Models For Software Development Effort Prediction,” Lukasz Radlinski, International Journal Of Software Engineering And Computing, Vol. 2, No. 2, July-December 2010
  • “Ten Unmyths of Project Estimation Reconsidering some commonly accepted project management practices,” Phillip Armour, Communications of the ACM, November 2002, Vol. 45, No. 11
  • “Using Earned Value Data to Forecast the Duration and Cost of DoD Space Programs,” Capt. Shedrick Bridgeforth, Air Force Cost Analysis Agency (AFCAA).
  • “Enterprise Agility—What Is It and What Fuels It?,” Rick Dove, in Utility Agility - What Is It and What Fuels It? - Part 2, 10/24/2009.
  • “Scrum Metrics for Hyperproductive Teams: How They Fly like Fighter Aircraft,” Scott Downey and Jeff Sutherland, HICSS '13 Proceedings of the 2013 46th Hawaii International Conference on System Sciences, Pages 4870-4878, IEEE Computer Society
  • Software Project Estimation: The Fundamentals for Providing High Quality Information to Decision Makers, Alain Abran, Wiley-IEEE, 6 April 2016
  • Practical Software Project Estimation 3rd Edition, Peter Hill, McGraw-Hill Education, 2010
  • “Engaging Software Estimation Education using LEGOs: A Case Study,” Linda Laird and Ye Yang, IEEE/ACM 38th IEEE International Conference on Software Engineering Companion, 2016
  • “Software Estimation – A Fuzzy Approach,” Nonika Bajaj, Alok Tyagi and Rakesh Agarwal, ACM SIGSOFT Software Engineering Notes, Page 1 May 2006 Volume 31 Number 3.
  • “Limits to Software Estimation,” J. P. Lewis, ACM SIGSOFT Software Engineering Notes Vol. 26 No. 4, July 2001, Page 54.
  • “Recent Advances in Software Estimation Techniques,” Richard E. Fairley, ICSE '92: Proceedings of the 14th International Conference on Software Engineering, May 1992.
  • “Software Estimation Using the SLIM Tool,” Nikki Panlilio-Yap, Proceedings of the 1992 Conference of the Centre for Advanced Studies on Collaborative Research, Volume 1, CASCON '92.
  • “An Approach for Software Cost Estimation,” Violeta Bozhikova and Mariana Stoeva, International Conference on Computer Systems and Technologies - CompSysTech’10.
  • “Estimating Software-Intensive Projects in the Absence of Historical Data,” Aldo Dagnino, ICSE '13: Proceedings of the 2013 International Conference on Software Engineering, May 2013.
  • “A Framework for Software Project Estimation Based on COSMIC, DSM and Rework Characterization,” Sharareh Afsharian, Marco Giacomobono, and Paola Inverardi, BIPI’08, May 13, 2008.
  • “Software Intensive Systems Cost and Schedule Estimation Technical Report SERC-2013-TR-032-2,” Stevens Institute of Technology, Systems Engineering Research Center, June 2013.
  • “Improved Size And Effort Estimation Models For Software Maintenance,” Vu Nguyen, University of Southern California, December 2010.
  • “Probabilistic Estimation of Software Project Duration,” Andy M. Connor, New Zealand Journal of Applied Computing & Information Technology, 11(1), 11-22, 2007.
  • “Application of Sizing Estimation Techniques for Business Critical Software Project Management,” Parvez Mahmood Khan and M.M. Sufyan Beg, International Journal of Soft Computing And Software Engineering (JSCSE), 3, No. 6, 2013
  • “Double Whammy – How ICT Projects are Fooled by Randomness and Screwed by Political Intent,” Alexander Budzier and Bent Flyvbjerg, Saïd Business School working papers, August 2011.
  • “Software Cost Estimation Framework for Service-Oriented Architecture Systems using Divide-and-Conquer Approach,” Zheng Li and Jacky Keung, Proceedings of the 5th International Symposium on Service-Oriented System Engineering (SOSE 2010), pp. 47-54, Nanjing, China, June 4-5, 2010.
  • “Investigating Effort Prediction Of Software Projects On The ISBSG Dataset,” Sanaa Elyassami and Ali Idri, International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 3, No. 2, March 2012.
  • “Resource Estimation in Software Engineering,” Lionel C. Briand, Encyclopedia of Software Engineering, John Wiley and Sons, 2001.
  • Software Measurement and Estimation: A Practical Approach, Linda M. Laird and M. Carol Brennan, Wiley-IEEE Computer Society Press, June 2006
  • CECS 543/643 Advanced Software Engineering, Dar-Biau Liu, California State University, Long Beach, Spring 2012
  • “Software Cost Estimation Review,” Alphonce Omondo Ongere, Helsinki Metropolia University of Applied Sciences, 30 May 2013.
  • “Software estimation process: a comparison of the estimation practice between Norway and Spain,” Paul Salaberria, 1 December 2014, Universitetet i Bergen.
  • “Comparison of available Methods to Estimate Effort, Performance and Cost with the Proposed Method,” M. Pauline, Dr. P. Aruna, Dr. B. Shadaksharappa, International Journal of Engineering Inventions, Volume 2, Issue 9 (May 2013), pp. 55-68
  • “Applying Fuzzy ID3 Decision Tree for Software Effort Estimation,” Ali Idri and Sanaa Elyassami, IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 4, No 1, July 2011.
  • “Software Effort Estimation with Ridge Regression and Evolutionary Attribute Selection,” Efi Papatheocharous, Harris Papadopoulos, and Andreas S. Andreou, 3rd Artificial Intelligence Techniques in Software Engineering Workshop, 7 October 2010.
  • “A Comparison of Different Project Duration Forecasting Methods Using Earned Value Metrics,” Stephan Vandevoorde and Mario Vanhoucke, International Journal of Project Management, 24 (2006), 289-302
  • “Quantifying IT Forecast Quality,” J. L. Eveleens and C. Verhoef, Science of Computer Programming, Volume 74, Issues 11–12, November 2009, Pages 934-988.
  • “Predictive Modeling: Principles and Practices,” Rick Hefner, Dean Caccavo, Philip Paul, and Rasheed Baqai, NDIA Systems Engineering Conference, pp. 20-23 October 2008.
  • “Modeling, Simulation & Data Mining: Answering Tough Cost, Date & Staff Forecasts Questions,” Troy Magennis and Larry Maccherone, Agile 2012, 13-17 August 2012, Dallas, Texas.
  • How to Measure Anything, Douglas Hubbard, John Wiley and Sons, 2014.
  • All About Agile: Agile Management Made Easy!, Kelly Waters, CreateSpace Independent Publishing Platform, March 19, 2012.
  • Enhance Accuracy In Software Cost And Schedule Estimation By Using 'Uncertainty Analysis And Assessment’ In The System Modeling Process,” Kardile Vilas Vasantrao, International Journal of Research & Innovation in Computer Engineering, Vol 1, Issue 1, (6-18), August 2011.
  • “Estimating Perspectives, Richard D. Stutzke, 20th International COCOMO and Software Cost Modeling Forum, Los Angeles 25-28 October 2005
  • “Introduction to Systems Cost Uncertainty Analysis: An Engineering Systems Perspective,” Paul R. Garvey, National Institute of Aerospace (NIA) Systems Analysis & Concepts Directorate NASA Langley Research Center, 2 May 2006.
  • “Cost and Schedule Uncertainty Analysis of Growth in Support of JCL,” Darren Elliot and Charles Hunt, 2014 NASA Cost Symposium, 13 August 2014.
  • “Measurement of Software Size: Contributions of COSMIC to Estimation Improvements,” Alain Abran, Charles Symons, Christof Ebert, Frank Vogelezang, and Hassan Soubra, ICEAA International Training Symposium, Bristol England, 2016.
  • “What Does a Mature Cost Engineering Organization Look Like?” Dale Shermon, International Cost Estimating and Analysis Association (ICEAA) 2016 18th to 20th October 2016.
  • “A Hybrid Model for Estimating Software Project Effort from Use Case Points,” Mohammad Azzeh and Ali Bou Nassif, Applied Soft Computing journal, Elsevier
  • “A Deep Learning Model for Estimating Story Points,” Morakot Choetkiertikul, Hoa Khanh Dam, Truyen Tran, Trang Pham, Aditya Ghose, and Tim Menzies,
  • “A Hybrid Intelligent Model for Software Cost Estimation,” Wei Lin Du, Luiz Fernando Capretz, Ali Bou Nassif, Danny Ho, Journal of Computer Science, 9(11):1506-1513, 2013
  • “An Empirical Analysis of Task Allocation in Scrum-based Agile Programming,” Jun Lin, Han Yu, Zhiqi Shen
  • “Agile Planning & Metrics That Matter,” Sally Elatta, Agile Transformation for Government.
  • Introduction to Uncertainty Quantification, J. Sullivan, Springer, 2016
  • Generalized Estimating Equations, 2nd Edition, James W. Hardin, Chapman and Hall, 2012.
  • Software Estimation Best Practices, Tools & Techniques: A Complete Guide for Software Project Estimators, Murali K. Chemuturi
  • “#NoEstimates, But #YesMeasurements: Why Shouldn’t agile teams waste their time and effort in estimating,” Pekka Forselius, ISBSG IT Confidence Conference, 2016
  • “Agile Benchmarks: What Can You Concluded?” Don Reifer, ISBSG IT Confidence Conference, 2016
  • “Improve Estimation Maturity using Functional Size Measurement and Industry Data,” Drs. Harold van Heeringen, ISBSG IT Confidence Conference, 2016
  • “Why Can’t People Estimate: Estimation Bias and Mitigation,” Dan Galorath, ISBSG IT Confidence Conference, 2015
  • “Why Can’t People Estimate: Estimation Bias and Strategic Mis-Estimation,” Daniel D. Galorath, ISBSG IT Confidence Conference, 2014
  • “Estimation ‒ Next Level,” Ton Dekkers, ISBSG IT Confidence Conference, 2013.
  • “Are We Really That Bad? A Look At Software Estimation Accuracy,” Peter R. Hill, ISBSG IT Confidence Conference, 2013.
  • Hybrid Approach For Estimating Software Project Development Effort: Designing Software Cost Estimation Model, Ketema Kifle, LAP LAMBERT Academic Publishing, June 2, 2016.
  • A Hedonic Approach to Estimating Software Cost Using Ordinary Least Squares Regression and Nominal Attribute Variables, Marc Ellis, BiblioScholar, 2012.
  • The Evaluation of Well-known Effort Estimation Models based on Predictive Accuracy Indicators,” Khalid Khan, School of Computing Blekinge Institute of Technology, Sweden
  • Uncertainty Quantification: Theory, Implementation, and Applications, Ralph Smith, SIAM-Society for Industrial and Applied Mathematics, December 2, 2013.
  • “A Knowledge and Analytics-Based Framework and Model for Forecasting Program Schedule Performance, Kevin T. Knudsen and Mark Blackburn, Complex Adaptive Systems, Los Angeles, CA , 2016
  • “Managing Project Uncertainty: From Variation to Chaos,” Arnound de Meyer, Christoph Loch, and Michael Pich, IEEE Engineering Management Review 43(3):91 - 91 · December 2002
  • “Project Uncertainty and Management Style,” C. H. Loch, M. T. Pich, and A. De Meyer, 2000/31/TM/CIMSO 10
  • Effects of Feature Complexity of Software Estimates ‒ An Exploratory Study,” Ana Magazinius and Richard Berntsson Svensson, 40th Euromicro Conference on Software Engineering and Advanced Applications, 2014
  • Project Management Under Risk: Using the Real Options Approach to Evaluate Flexibility in R & D," Arnd Huchzermeier and Christoph Loch, INSEAD
Related articles The Fallacy of the Planning Fallacy Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Why Guessing is not Estimating and Estimating is not Guessing Making Conjectures Without Testable Outcomes Agile Software Development in the DOD Essential Reading List for Managing Other People's Money Five Estimating Pathologies and Their Corrective Actions Humpty Dumpty and #NoEstimates
Categories: Project Management

Scaling Agile: Scrum of Scrums: Anti-patterns Part 1

Heads up!

A Scrum of Scrums (SoS) is a mechanism to coordinate a group of teams so that they act as a team of teams. The SoS is a powerful tool, and as with any powerful tool, using it incorrectly causes problems. Six problematic implementations, called anti-patterns, are fairly common. We’ll discuss three in part 1 and the rest in part 2.

  1. The Soapbox. A SoS can easily become a platform for individuals to pontificate on topics or to broker debates on issues. When a SoS has become a place where a speaker is just trying to win an argument rather than to facilitate coordination or problem solving, there is a real problem. The concept of the stand-up, with its three or four question format, is shoved aside in favor of the common corporate meeting format that is more about office politics.
    One of myriad possible solutions is brutal time-boxing.

    • The 15-minute time box is a great way to ensure that run-on meetings don’t occur. In this solution, the meeting runs for 15 minutes, then BOOM, the meeting is over. The 15-minute rule needs to be coupled with a standard framework of questions to ensure that the meeting stays on track.
  2. The Blame Game. The blame game is a variant of the soapbox; however the impact is even more virulent. It therefore needs to be addressed separately. When a SoS is used to apportion blame for problems, the goal of the meeting shifts away from coordination and problem solving to rationalization or worse, infighting.  Either outcome destroys effectiveness and team building.
    One possible solution is Cohn’s “no name rule”.

    • Cohn’s “no name rule” is a nice mechanism to dampen the possibility of rants or apportioning blame during the SoS. The rule prohibits the use of names during an attendee’s report, thereby lessening the chance that remarks will become personal and require a response from someone else. Avoiding names is useful for defusing the emotions of interactions during the session. The rule, while useful, does not necessarily address the root cause of the issue: the blame game is typically a reflection of teaming problems. A shared goal helps create an environment conducive to team formation. As a coach, I always ensure that all SoS participants understand that they share a common overall goal and that they have to work together to deliver it.
  3. The Surrogate Project Manager. In Agile, project management at the team level is distributed across the team. Guidance and planning at higher levels is generally facilitated between product owners and product management groups. When the SoS becomes a surrogate project manager, it stops being a mechanism for coordinating between groups, spreading information, and facilitating technical decisions, and begins to direct activity. When the SoS directs activity, it disrupts the teams’ ability to self-organize and self-manage, core Agile principles. Remember: the SoS is not a replacement project manager that directs personnel or teams, collects status, or delivers administrative services.
    One possible solution is to change the composition.

    • Banning Scrum Masters and team leads from attending the SoS, and substituting more technical individuals, is one technique to defuse a SoS that has become a surrogate project manager. This typically pushes the SoS to focus on coordinating technical work, because that is what the participants are most comfortable with. In the long run, continuously rotating the membership ensures that a single-group mindset, one that could ignore the needs of other stakeholder groups, does not develop.

The Soapbox, Blame Game, and Surrogate Project Manager are three anti-patterns that often plague Scrum of Scrums. We will discuss the Three Bears and Pyramid anti-patterns in the next blog entry. None of these problems is insurmountable, but teams must first recognize that they have a problem and then be willing to take action that might feel uncomfortable. Coaching is one of the best tools to generate change.

Categories: Process Management

SE-Radio Episode 274: Sam Aaron on Sonic Pi

Felienne talks with Sam Aaron on Sonic Pi. Topics include how to design a programming language with a broad audience, what features enable a language to be powerful and fun for children to play with, what the role of programming and programming education is in the world in general and the world of music in […]
Categories: Programming


Sponsored Post: Loupe, New York Times, ScaleArc, Aerospike, Scalyr, Gusto, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • The New York Times is looking for a Software Engineer for its Delivery/Site Reliability Engineering team. You will also be a part of a team responsible for building the tools that ensure that the various systems at The New York Times continue to operate in a reliable and efficient manner. Some of the tech we use: Go, Ruby, Bash, AWS, GCP, Terraform, Packer, Docker, Kubernetes, Vault, Consul, Jenkins, Drone. Please send resumes to: technicaljobs@nytimes.com

  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.
Fun and Informative Events
  • Your event here!
Cool Products and Services
  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • ScaleArc's database load balancing software empowers you to “upgrade your apps” to consumer grade – the never down, always fast experience you get on Google or Amazon. Plus you need the ability to scale easily and anywhere. Find out how ScaleArc has helped companies like yours save thousands, even millions of dollars and valuable resources by eliminating downtime and avoiding app changes to scale. 

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics, and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM, and ODBC APIs for integration. It also has an easy-to-use language for importing data and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture