
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Cast Away with Android TV and Google Cast

Google Code Blog - Wed, 06/25/2014 - 19:55

By Dave Burke and Majd Bakar, Engineering Directors and TV Junkies

Last summer, we launched Chromecast, a small, affordable device that lets you cast online video, music and anything from the web to your TV. Today at Google I/O, we announced Android TV, the newest form factor to the Android platform, and a way to extend the reach of Google Cast to more devices, like televisions, set-top boxes and consoles.

Check out Coming to a Screen Near You for some details on everything we’re doing to make your TV the place to be.

For developers though--sorry, you don’t get to unwind in front of the TV. We need you to get to work and help us create the best possible TV experience, with all of the new features announced at I/O today.

Get started with Android TV

In addition to Google Cast apps that send content to the TV, you can now build immersive native apps and console-style games on Android TV devices. These native apps work with TV remotes and gamepads, even if you don’t have your phone handy. The Android L Developer Preview SDK includes the new Leanback support library that allows you to design smoother, simpler, living room apps.

And this is just the beginning. In the fall, new APIs will allow you to cast directly to these apps, so users can control the app with the phone, the remote, or even their Android Wear watch. You’ll also start seeing Android TV set-top boxes, consoles and televisions from Sony, TP Vision, Sharp, Asus, Razer and more.

Help more users find your Google Cast app

We want to help users more easily find your content, so we’ve improved the Google Cast SDK developer console to let you upload your app icon, app name, and app category for Android, iOS and Chrome. These changes will help your app get discovered on chromecast.com/apps and on Google Play.

Additional capabilities have also been added to the Google Cast SDK, including Media Player Library enhancements that bring easier integration with MPEG-DASH, Smooth Streaming, and HLS. We’ve also added WebAudio and WebGL support, made the Cast Companion Library available, and added enhanced Closed Caption support. And coming soon, we will add support for queuing and ID delegation.

Ready to get started? Visit developer.android.com/tv and developers.google.com/cast for the SDKs, style guides, tutorials, sample code, and the API references. You can also request an ADT-1 devkit to bootstrap your Android TV development.

Posted by Louis Gray, Googler


Categories: Programming

Cloud Platform at Google I/O - new Big Data, Mobile and Monitoring products

Google Code Blog - Wed, 06/25/2014 - 19:30
By Greg DeMichillie, Google Cloud Platform team

Today at Google I/O, we are introducing new services that help developers build and optimize data pipelines, create mobile applications, and debug, trace, and monitor their cloud applications in production.

Introducing Google Cloud Dataflow
A decade ago, Google invented MapReduce to process massive datasets using distributed computing. Since then, more devices and information require more capable analytics pipelines — though they are difficult to create and maintain.

Today at Google I/O, we are demonstrating Google Cloud Dataflow for the first time. Cloud Dataflow is a fully managed service for creating data pipelines that ingest, transform and analyze data in both batch and streaming modes. Cloud Dataflow is a successor to MapReduce, and is based on our internal technologies like Flume and MillWheel.

Cloud Dataflow makes it easy for you to get actionable insights from your data while lowering operational costs without the hassles of deploying, maintaining or scaling infrastructure. You can use Cloud Dataflow for use cases like ETL, batch data processing and streaming analytics, and it will automatically optimize, deploy and manage the code and resources required.

Debug, trace and monitor your application in production
We are also introducing several new Cloud Platform tools that let developers understand, diagnose and improve systems in production.

Google Cloud Monitoring is designed to help you find and fix unusual behavior across your application stack. Based on technology from our recent acquisition of Stackdriver, Cloud Monitoring provides rich metrics, dashboards and alerting for Cloud Platform, as well as more than a dozen popular open source apps, including Apache, Nginx, MongoDB, MySQL, Tomcat, IIS, Redis, Elasticsearch and more. For example, you can use Cloud Monitoring to identify and troubleshoot cases where users are experiencing increased error rates connecting from an App Engine module or slow query times from a Cassandra database with minimal configuration.

We know that it can be difficult to isolate the root cause of performance bottlenecks. Cloud Trace helps you visualize and understand time spent by your application for request processing. In addition, you can compare performance between various releases of your application using latency distributions.

Finally, we’re introducing Cloud Debugger, a new tool to help you debug your applications in production with effectively no performance overhead. Cloud Debugger gives you a full stack trace and snapshots of all local variables for any watchpoint that you set in your code while your application continues to run undisturbed in production. This brings modern debugging to cloud-based applications.

New features for mobile development
With rapid autoscaling, caching and other mobile-friendly capabilities, many apps like Snapchat and Rising Star have been built and run on Cloud Platform. We’re adding new features that make building a mobile app using Cloud Platform even better.

Today, we’re demonstrating a new version of Google Cloud Save, which gives you a simple API for saving, retrieving, and synchronizing user data to the cloud and across devices without needing to code up the backend. Data is stored in Google Cloud Datastore, making the data accessible from Google App Engine or Google Compute Engine using the existing Datastore API. Google Cloud Save is currently in private beta and will be available for general use soon.

We’ve also added tooling to Android Studio, which simplifies the process of adding an App Engine backend to your mobile app. In particular, Android Studio now has three built-in App Engine backend module templates, including Java Servlet, Java Endpoints and an App Engine backend with Google Cloud Messaging. Since this functionality is powered by the open-source App Engine plug-in for Gradle, you can use the same build configuration for both your app and your backend across IDE, CLI and Continuous Integration environments.

We’ll be doing more detailed follow-up posts about these announcements in the coming days, so stay tuned.

Greg DeMichillie has spent his entire career working on developer platforms for web, mobile, and the cloud. He started as a software engineer before making the jump to Product Management. When not coding, he's an avid photographer and gadget geek.

Posted by Louis Gray, Googler

Apache, Nginx, MongoDB, MySQL, Tomcat, IIS, Redis, Elasticsearch and Cassandra are trademarks of their respective owners.


Categories: Programming

This is material design

Google Code Blog - Wed, 06/25/2014 - 19:20
By Nicholas Jitkoff, Designer

When we started building for the first mobile devices, mobile meant less: less screen space, slower connection, fewer features. A mobile experience was often a lesser experience. But mobile devices have evolved: they have become more powerful, faster, and more intuitive, and so must our approach to design.

And as Google, including the Android platform, expands into new form factors, we’re introducing one consistent design that spans devices across mobile, desktop, and beyond. Today at Google I/O, we introduced material design, which uses tactile surfaces, bold graphic design, and fluid motion to create beautiful, intuitive experiences.

In material design, surface and shadow establish a physical structure to explain what can be touched and what can move. Content is front and center, using principles of modern print design. Motion is meaningful, clarifying relationships and teaching with delightful details.

We needed something that felt at home on the smallest watch, the largest TV, and every screen in between. We used it for Android Wear, our project to extend Android to wearables, as well as Android TV and Android Auto. As you create applications and services for this expansive new range of devices, we’ve created one unified set of style guidelines that works across any platform. We’re releasing the first version of these guidelines as part of our preview today. You can find them on google.com/design.

Material design, in L

Bringing material design to Android is a big part of the L-Release of Android, the version we previewed today. We’ve added the new Material theme, which you can apply to your apps for a new style: it lets you easily infuse your own color palette into your app, and offers new system widgets, screen transitions and animated touch feedback. We’ve also added the ability to specify a view’s elevation, allowing you to raise UI elements and cast dynamic, real-time shadows in your apps.
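Here is a minimal sketch of the elevation idea, assuming the L Developer Preview SDK; the layout and view ids (R.layout.main, R.id.card) are hypothetical placeholders, and the exact API surface may change before release:

import android.app.Activity;
import android.os.Bundle;
import android.view.View;

// Minimal sketch (L Developer Preview): raise a card-like view so the framework
// casts a dynamic, real-time shadow for it. Resource ids are hypothetical.
public class ElevationDemoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);            // hypothetical layout using the Material theme

        View card = findViewById(R.id.card);      // hypothetical view id
        card.setElevation(8f);                    // lift the view above its parent surface
        card.animate().translationZ(4f).start();  // animate extra lift, e.g. for touch feedback
    }
}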

Bringing material design to the web, with Polymer

Last year at I/O we announced Polymer, an ambitious UI toolkit for the web. As a developer, you’ll now have access to all the capabilities of material design via Polymer, bringing tangibility, bold graphics, and smooth animations to your applications on the web.

If you’d like to learn more about material design, please take a look at our guidelines. Join us as we continue to design and iterate at +Google Design.

Categories: Programming

The Secret of Scaling: You Can't Linearly Scale Effort with Capacity

The title is a paraphrase of something Raymond Blum, who leads a team of Site Reliability Engineers at Google, said in his talk How Google Backs Up the Internet. I thought it a powerful enough idea that it should be pulled out on its own:

Mr. Blum explained common backup strategies don’t work for Google for a very googly sounding reason: typically they scale effort with capacity.

If backing up twice as much data requires twice as much stuff to do it, where stuff is time, energy, space, etc., it won’t work; it doesn’t scale.

You have to find efficiencies so that capacity can scale faster than the effort needed to support that capacity.

A different plan is needed when making the jump from backing up one exabyte to backing up two exabytes.

When you hear the idea of not scaling effort with capacity it sounds so obvious that it doesn't warrant much further thought. But it's actually a profound notion. Worthy of better treatment than I'm giving it here:

Categories: Architecture

All Project Numbers are Random Numbers — Act Accordingly

Herding Cats - Glen Alleman - Wed, 06/25/2014 - 15:11

The numbers that appear in projects — cost, schedule, performance — are all random variables drawn from an underlying statistical process. This process is officially called a non-stationary stochastic process. It has several important behaviours that create problems for those trying to make decisions in the absence of understanding how these processes work in practice.

The first issue is that all point estimates for projects are wrong, in the absence of a confidence interval and an error band on that confidence.

"How long will this project take?" is a common question asked by those paying for the project. The technically correct answer is that there is an 80% confidence of completing on or before some date, with a 10% error on that confidence. This is a cumulative probability: it collects all the possible completion dates and describes the probability - the 80% - of finishing on or before that date, since the project can also complete before that final probabilistic date.

The same conversation applies to cost: the cost of the project will be at or below some amount with an 80% confidence.
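To make the "on or before" idea concrete, here is a small, hedged simulation sketch (not from the original post): it draws many possible durations from an invented triangular distribution and reads off the 80th percentile of the resulting cumulative distribution.

import java.util.Arrays;
import java.util.Random;

// Illustration only: invented optimistic / most likely / pessimistic durations,
// sampled from a triangular distribution; the 80th percentile is reported as the
// "80% confidence of completing on or before" duration.
public class ScheduleConfidence {
    public static void main(String[] args) {
        Random random = new Random();
        int trials = 10_000;
        double[] durations = new double[trials];

        double optimistic = 20, mostLikely = 26, pessimistic = 40; // weeks, invented
        double modeFraction = (mostLikely - optimistic) / (pessimistic - optimistic);

        for (int i = 0; i < trials; i++) {
            double u = random.nextDouble();
            durations[i] = (u < modeFraction)
                ? optimistic + Math.sqrt(u * (pessimistic - optimistic) * (mostLikely - optimistic))
                : pessimistic - Math.sqrt((1 - u) * (pessimistic - optimistic) * (pessimistic - mostLikely));
        }

        Arrays.sort(durations);
        double p80 = durations[(int) (0.80 * trials)];
        System.out.printf("80%% confidence of completing on or before %.1f weeks%n", p80);
    }
}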

The performance of the products or services is the third random variable. Technical performance means anything and everything that is not cost or schedule. It is the wrapper term for the old concept of scope. In modern terms there are two general-purpose categories of performance measures, along with one set of parameters:

  • Measures of Effectiveness - the operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions. The Measures of Effectiveness:
    • Are stated in units meaningful to the buyer,
    • Focus on capabilities independent of any technical implementation,
    • Are connected to mission success.
  • Measures of Performance - the measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions. The Measures of Performance are:
    • Attributes that assure the system has the capability and capacity to perform,
    • Assessments of the system to assure it meets design requirements to satisfy the MoE.
  • Key Performance Parameters - represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program. Key Performance Parameters:
    • Have a threshold or objective value,
    • Characterize the major drivers of performance,
    • Are considered Critical to Customer (CTC).

These measures are all random numbers with confidence intervals and error bands.

So What's The Point?

When we hear "you can't forecast the future," that's not true. The person saying that didn't pay attention in high school statistics class. You can forecast the future. You can make estimates of anything. The answers you get may not be useful, but they are estimates all the same. If it is unclear how to do this, here's a reading assignment of the books we use nearly every month to make our estimates at completion and estimates to complete for software-intensive projects, starting with the simplest:

While on the topic of books, here are some books that should be on your shelf that put those probability and statistics to work.

  • Facts and Fallacies of Software Engineering, Robert Glass - speaks to the common fallacies in software development. The most common is "we can't possibly estimate when we'll be done or how much it will cost." Read the book and start calling BS on anyone using that excuse to not do their homework. There is also a nice update by Jeff Atwood, co-founder of Stack Overflow.
  • Estimating Software-Intensive Systems, Richard Stutzke - this is the book that started the revolution of statistical modeling of software projects. When you hear "oh, this is so olde school," that person didn't take the high school stats class either.
  • Software Engineering Economics, Barry Boehm - this is how to pull it all together. And when you hear that this concept is olde school, you'll know better as well.

There are several tools that make use of these principles and practices:

Here's the End

  • Learn to estimate.
  • Teach others to estimate.
  • When the Dilbert boss comes around, you'll have the tools to have a credible discussion about why the Estimate to Complete number he's looking for is bogus. He may not listen or even understand, but you will.

And that's a start in fixing the dysfunction of bad estimating when writing software for money. Start with the person who can actually make a change - you.

Related articles
  • Averages Without Variances are Meaningless - Or Worse Misleading
  • Elements of Project Success
  • Can There Be Successful Projects With Fixed Dates, Fixed Budgets, and Promised Minimal Features?
  • Four Critical Elements of Project Success
  • To Stay In Business You Need to Know Both Value and Cost
  • How to Forecast the Future
  • Making Estimates For Your Project Require Discipline, Skill, and Experience
  • How To Assure Your Project Will Fail
  • Random Sample Calculations And My Prediction That 300,000 Lawyers Will Be Using Random Sampling By 2022
  • The Uncertainty of Predictions

 

Categories: Project Management

Do You Need to Create Virtual Teams with Freelancers?

Have you seen Esther Schindler’s great article yet? Creating High-Performance Virtual Teams of Freelancers and Contractors.

Here’s the blurb:

Plenty has been written about telecommuting for employees: how to encourage productivity, build a sense of “we’re all in this together,” and the logistics (such as tools and business processes) that streamline a telework lifestyle. But what about when your team is neither employees nor on-site? That gives any project manager extra challenges.

Lots of good tips.

 

Categories: Project Management

How Agile accelerates your business

Xebia Blog - Wed, 06/25/2014 - 10:11

This drawing explains how agility accelerates your business. It is free to use and distribute. Should you have any questions regarding the subjects mentioned, feel free to get in touch.

Requirements: The Chronic Problem With Project Estimation

Just being ready to run does not mean you know all of the requirements.

When I recently asked a group of people the question “What are the two largest issues in project estimation?” I received one response more than any other: requirements, prefixed by words like unclear and changing. In the eight years I have been hosting the Software Process and Measurement Cast, I end each interview by asking what two changes could be made to make it easier to deliver better functionality (the actual words vary), and requirements are one of the culprits that appear over and over. The requirements/estimation conundrum has not changed much over the multiple decades I have been in the software development world. We have described the budget, estimate and plan continuum that most large projects follow; the estimation “problem” follows the same continuum.

Most large organizations follow a cycle of budgeting for projects. Plans are immediately put into motion based on the budgets, including “guidance” provided to the markets in public companies. The budgets become part of how executives and managers are paid or bonused. In IT, projects can make up a significant portion of C-level and other managers’ budgets. Projects at this level generally represent concepts and, at best, are estimated based on interpolations from analogies. However, estimates of cost, duration and revenue are generated with a lot of thought and hard work. Anyone that has been in the business knows that the scope of projects at this point is dynamic, but even this early on, the die has begun to be set.

As programs and projects begin, a better understanding of the central concept is developed. Based on that better understanding, the budget is refined; however, the refinement generally occurs within the boundaries developed and placed in the original budget. I have known project and program managers that tried all sorts of techniques as they fought to respect the stakeholder’s central concept and the need to meet the numbers. Techniques include sourcing decisions (offshoring), buying packages or even shedding known scope. All of this activity occurs as the concept and the underlying requirements evolve. This issue occurs both in industry and government.

As work begins, budgeting and estimating shift to planning. In waterfall projects, estimators and schedulers build elaborate work breakdown structures (WBS) that help guide team members through the process of delivering value. Each requirement and task is estimated to support the WBS. This type of behavior also occurs in some pseudo-Agile teams. For work that is highly deterministic this approach may work well; however, if the business environment is dynamic or requirements evolve to more fully meet the product owner’s or other stakeholders’ needs, it won’t work.

The natural tendency is to eschew budgeting and estimating, and to change how public companies report and how executives are paid. This will happen, but not overnight. In the interim, the best option in most cases is to manage the boundary between estimating and planning using tools like release plans and minimum marketable/acceptable products. The release plan needs to identify what has to be delivered (the minimum marketable/acceptable product), with the nice-to-haves acting as the buffer that is managed to meet the corporate promises. This approach requires all parties to change some behaviors, such as over-promising by both IT and stakeholders and treating IT less like a factory and more like a collaborative venture. Both are difficult changes, but just holding out for better requirements has not worked for decades and probably won’t start working soon.
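As a rough illustration of managing that boundary (not from the original article, and with invented numbers), the sketch below checks the must-have scope against a promised date and treats the nice-to-haves as the buffer that can be dropped:

// Hedged sketch: release planning with the minimum marketable/acceptable product
// as the commitment and the nice-to-haves as the managed buffer. Numbers are invented.
public class ReleasePlanBuffer {
    public static void main(String[] args) {
        double capabilityPerSprint = 20;   // points per sprint, e.g. from historical velocity
        double mustHavePoints = 180;       // minimum marketable/acceptable product
        double niceToHavePoints = 60;      // the buffer
        int sprintsPromised = 12;

        double sprintsForMustHaves = Math.ceil(mustHavePoints / capabilityPerSprint);
        double spareSprints = Math.max(0, sprintsPromised - sprintsForMustHaves);
        double niceToHavesThatFit = Math.min(niceToHavePoints, spareSprints * capabilityPerSprint);

        System.out.printf("Must-haves need %.0f of the %d promised sprints.%n",
                sprintsForMustHaves, sprintsPromised);
        System.out.printf("About %.0f of %.0f nice-to-have points fit; the remainder is the buffer that can be dropped.%n",
                niceToHavesThatFit, niceToHavePoints);
    }
}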


Categories: Process Management

What to expect at I/O’14 - Distribute

Google Code Blog - Tue, 06/24/2014 - 22:45
By Ellie Powers, Google Play team

Less than one day to go until Google I/O 2014! In our third post on what to expect at Google I/O 2014, we'd like to share what’s coming up if you’re looking to grow your app distribution, engage users, and make money on Google Play.

Now that you've designed your app and developed it with the power of Google technologies, you need to attract, retain and grow your audience. Here are a few of the sessions we'll be hosting at I/O to help you build a successful app business with Google:

  • Making Money on Google Play || Wednesday, 1-1:45PM (room 4) Developers are finding great success on Google Play. In this session, we’ll review developer success stories and the drivers of revenue, and we’ll share tips for developers to get the most out of Google Play.
  • Google Play Power Session || Wednesday, 2-2:45PM (room 4) The multi-device mobile app world presents huge opportunities for developers to connect with users, and Google Play is one of the richest and most competitive business ecosystems on the planet. Understand how to leverage the complete suite of Google products to make a great app, build your userbase, expand your app onto new device types, and reach people globally.
  • Going Global with Google Play || Thursday, 9-9:45AM (room 7 - live streamed) Think your app or game has what it takes to become a global hit? Get key insights into major international markets and trends of successful apps and games in those regions. Leverage these pro tips and best practices to expand your game to a global audience.
  • Maximize app engagement, monetization and distribution || Thursday, 2-2:45PM (room 7 - live streamed) You built an app. Awesome. Do you have a monetization plan? An easy way for your users to pay? Does anyone know about it? How are you tracking success? In addition to providing the platforms and tools to design and develop your apps, we also have solutions to help make it discoverable and more profitable. Learn how you can turn your app into a business by understanding your most valuable users, finding more of them, and tailoring the monetization experience for each different group. We can also help enable fast and seamless transactions so users can purchase products or services in your app.

We’ll also be hosting ‘Box talks with Google product teams who will share best practices for distributing mobile, web, Glass and Google Cast apps, as well as broadening reach with Google+. In addition, we’ve invited entrepreneurs and VCs to discuss their experiences (and maybe even inspire the creation of a few great new businesses).

Have specific questions and want to talk with experts? We’ll have designated Office Hours for Google Play, Google Wallet, AdMob, Google Analytics, and Knowledge Graph.

For more info on how to grow your app business with Google, visit developers.android.com/distribute and developers.google.com/mobile.

See you at I/O!

Ellie Powers is a product manager at Google Play, focused on developing the apps ecosystem. She joined Google to re-launch the Google Play Developer Console, and is now working to make Play the best place for developers and users alike.

Posted by Louis Gray, Googler
Categories: Programming

Software architecture as code

Coding the Architecture - Simon Brown - Tue, 06/24/2014 - 21:22

If you've been following the blog, you will have seen a couple of posts recently about the alignment of software architecture and code. Software architecture vs code talks about the typical gap between how we think about the software architecture vs the code that we write, while An architecturally-evident coding style shows an example of how to ensure that the code does reflect those architectural concepts. The basic summary of the story so far is that things get much easier to understand if your architectural ideas map simply and explicitly into the code.

Regular readers will also know that I'm a big fan of using diagrams to visualise and communicate the architecture of a software system, and this "big picture" view of the world is often hard to see from the thousands of lines of code that make up our software systems. One of the things that I teach people during my sketching workshops is how to sketch out a software system using a small number of simple diagrams, each at very separate levels of abstraction. This is based upon my C4 model, which you can find an introduction to at Simple sketches for diagramming your software architecture. The feedback from people using this model has been great, and many have a follow-up question of "what tooling would you recommend?". My answer has typically been "Visio or OmniGraffle", but it's obvious that there's an opportunity here.

Representing the software architecture model in code

I've had a lot of different ideas over the past few months for how to create, what is essentially, a lightweight modelling tool and for some reason, all of these ideas came together last week while I was at the GOTO Amsterdam conference. I'm not sure why, but I had a number of conversations that inspired me in different ways, so I skipped one of the talks to throw some code together and test out some ideas. This is basically what I came up with...

It's a description of the context and container levels of my C4 model for the techtribes.je system. Hopefully it doesn't need too much explanation if you're familiar with the model, although there are some ways in which the code can be made simpler and more fluent. Since this is code though, we can easily constrain the model and version it. This approach works well for the high-level architectural concepts because there are very few of them, plus it's hard to extract this information from the code. But I don't want to start crafting up a large amount of code to describe the components that reside in each container, particularly as there are potentially lots of them and I'm unsure of the exact relationships between them.

Scanning the codebase for components

If your code does reflect your architecture (i.e. you're using an architecturally-evident coding style), the obvious solution is to just scan the codebase for those components, and use those to automatically populate the model. How do we signify what a "component" is? In Java, we can use annotations...

Identifying those components is then a matter of scanning the source or the compiled bytecode. I've played around with this idea on and off for a few months, using a combination of Java annotations along with annotation processors and libraries including Scannotation, Javassist and JDepend. The Reflections library on Google Code makes this easy to do, and now I have a simple Java program that looks for my component annotation on classes in the classpath and automatically adds those to the model. As for the dependencies between components, again this is fairly straightforward to do with Reflections. I have a bunch of other annotations too, for example to represent dependencies between a component and a container or software system, but the principle is still the same - the architecturally significant elements and their dependencies can mostly be embedded in the code.
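The code that accompanied the original post isn't included in this feed, but the general shape of the approach can be sketched as follows; the @Component annotation and the package name are illustrative stand-ins, not the actual types from the C4 model code on GitHub:

import org.reflections.Reflections;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Set;

// Illustrative sketch: mark classes as architectural components and discover them
// on the classpath with the Reflections library.
public class ComponentScanner {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface Component {
        String description() default "";
    }

    public static void main(String[] args) {
        // Scan a hypothetical package for anything annotated as a component.
        Reflections reflections = new Reflections("je.techtribes");
        Set<Class<?>> components = reflections.getTypesAnnotatedWith(Component.class);

        for (Class<?> type : components) {
            Component metadata = type.getAnnotation(Component.class);
            System.out.println(type.getSimpleName() + " - " + metadata.description());
            // A real tool would add each component (and its dependencies, also
            // discoverable via reflection) to the architecture model here.
        }
    }
}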

Creating some views

The model itself is useful, but ideally I want to look at that model from different angles, much like the diagrams that I teach people to draw when they attend my sketching workshop. After a little thought about what this means and what each view is constrained to show, I created a simple domain model to represent the context, container and component views...

Again, this is all in code so it's quick to create, versionable and very customisable.

Exporting the model

Now that I have a model of my software system and a number of views that I'd like to see, I could do with drawing some pictures. I could create a diagramming tool in Java that reads the model directly, but perhaps a better approach is to serialize the object model out to an external format so that other tools can use it. And that's what I did, courtesy of the Jackson library. The resulting JSON file is over 600 lines long (you can see it here), but don't forget most of this has been generated automatically by Java code scanning for components and their dependencies.
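For readers who haven't used Jackson, the export step looks roughly like this; the Model class here is a hypothetical stand-in for the real object model of software systems, containers, components and views:

import com.fasterxml.jackson.databind.ObjectMapper;

// Minimal sketch of serializing an architecture model to JSON with Jackson so
// that other tools (such as the HTML5 viewer mentioned below) can consume it.
public class ModelExporter {

    public static class Model {
        public String name = "techtribes.je";
        public String description = "Context, container and component views";
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String json = mapper.writerWithDefaultPrettyPrinter()
                            .writeValueAsString(new Model());
        System.out.println(json); // the real tool writes this to a .json file
    }
}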

Visualising the views

The last question is how to visualise the information contained in the model and there are a number of ways to do this. I'd really like somebody to build a Google Maps or Prezi-style diagramming tool where you can pinch-zoom in and out to see different views of the model, but my UI skills leave something to be desired in that area. For the meantime, I've thrown together a simple diagramming tool using HTML 5, CSS and JavaScript that takes a JSON string and visualises the views contained within it. My vision here is to create a lightweight model visualisation tool rather than a Visio clone where you have to draw everything yourself. I've deployed this app on Pivotal Web Services and you can try it for yourself. You'll have to drag the boxes around to lay out the elements and it's not very pretty, but the concept works. The screenshot that follows shows the techtribes.je context diagram.

A screenshot of a simple context diagram

Thoughts?

All of the C4 model Java code is open source and sitting on GitHub. This is only a few hours of work so far and there are no tests, so think of this as a prototype more than anything else at the moment. I really like the simplicity of capturing a software architecture model in code, and using an architecturally-evident coding style allows you to create large chunks of that model automatically. This also opens up the door to some other opportunities such as automated build plugins, lightweight documentation tooling, etc. Caveats apply with the applicability of this to all software systems, but I'm excited at the possibilities. Thoughts?

Categories: Architecture

Join I/O 2014 from anywhere!

Google Code Blog - Tue, 06/24/2014 - 19:30
By Billy Rutledge, Director of Developer Relations

We’re loading the big green robot into Moscone and watching the event come to life in front of our eyes. We're really looking forward to seeing you here, but even if you aren't able to attend in person, you can still follow everything that's happening in real time. Here is how:


  • The Keynote and selected sessions will be live streamed throughout the 2 days, on 4 different channels. Tune into google.com/io on June 25 starting at 9AM PDT to see everything from the comfort of your couch.
  • Download the mobile app which allows you to access the live stream on the go, discover I/O-related conversations on Google+ and be reminded when sessions in your personalized schedule are about to start.
  • Join an Extended event happening near you and watch I/O with friends.

  • Put in your I/O requests and questions about what's happening on the ground. All you need is to post publicly on Google+ using the "#io14request" hashtag and it will be picked up and answered by our team of onsite community managers. Read more about the program here.
  • Follow our +Google Developers feed so you learn about the conference announcements and highlights as they happen.
  • If you’re interested in bringing the I/O live stream and Google+ social feed directly to your audience, customize and embed our Live Blogging Gadget on your website and/or blog. Get started here.

Can’t wait for I/O to begin? Over the past few days, we have been rolling out I/O Bytes -- short videos to help developers like you dive in and experience I/O anywhere, anytime. Over 100 videos will be available on the Developers YouTube channel, throughout the I/O live stream, and on the I/O website once the event starts.

See you on Wednesday!

Billy Rutledge, Director of Developer Relations

Posted by Louis Gray, Googler
Categories: Programming

Sponsored Post: Apple, Chartbeat, Monitis, Netflix, Salesforce, Blizzard Entertainment, Cloudant, CopperEgg, Logentries, Wargaming.net, PagerDuty, Gengo, ScaleOut Software, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, Ma

Who's Hiring?

  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here.
    • Mobile Services Software Engineer. The Emerging Technologies/Mobile Services team is looking for a proactive and hardworking software engineer to join our team. The team is responsible for a variety of high quality and high performing mobile services and applications for internal use. Please apply here
    • Senior Software Engineer. Join Apple's Internet Applications Team, within the Information Systems and Technology group, as a Senior Software Engineer. Be involved in challenging and fast paced projects supporting Apple's business by delivering Java based IS Systems. Please apply here.
    • Sr Software Engineer. Join Apple's Internet Applications Team, within the Information Systems and Technology group, as a Senior Software Engineer. Be involved in challenging and fast paced projects supporting Apple's business by delivering Java based IS Systems. Please apply here.
    • Senior Security Engineer. You will be the ‘tip of the spear’ and will have direct impact on the Point-of-Sale system that powers Apple Retail globally. You will contribute to implementing standards and processes across multiple groups within the organization. You will also help lead the organization through a continuous process of learning and improving secure practices. Please apply here.
    • Quality Assurance Engineer - Mobile Platforms. Apple’s Mobile Services/Emerging Technology group is looking for a highly motivated, result-oriented Quality Assurance Engineer. You will be responsible for overseeing quality engineering of mobile server and client platforms and applications in a fast-paced dynamic environment. Your job is to exceed our business customer's aggressive quality expectations and take the QA team forward on a path of continuous improvement. Please apply here.

  • Chartbeat measures and monetizes attention on the web. Our traffic numbers are growing, and so is our list of product and feature ideas. That means we need you, and all your unparalleled backend engineer knowledge, to help us scale, extend, and evolve our infrastructure to handle it all. If you have these chops: www.chartbeat.com/jobs/be, come join the team!

  • The Salesforce.com Core Application Performance team is seeking talented and experienced software engineers to focus on system reliability and performance, developing solutions for our multi-tenant, on-demand cloud computing system. Ideal candidate is an experienced Java developer, likes solving real-world performance and scalability challenges and building new monitoring and analysis solutions to make our site more reliable, scalable and responsive. Please apply here.

  • Sr. Software Engineer - Distributed Systems. Membership platform is at the heart of Netflix product, supporting functions like customer identity, personalized profiles, experimentation, and more. Are you someone who loves to dig into data structure optimization, parallel execution, smart throttling and graceful degradation, SYN and accept queue configuration, and the like? Is the availability vs consistency tradeoff in a distributed system too obvious to you? Do you have an opinion about asynchronous execution and distributed co-ordination? Come join us

  • Java Software Engineers of all levels, your time is now. Blizzard Entertainment is leveling up its Battle.net team, and we want to hear from experienced and enthusiastic engineers who want to join them on their quest to produce the most epic customer-facing site experiences possible. As a Battle.net engineer, you'll be responsible for creating new (and improving existing) applications in a high-load, high-availability environment. Please apply here.

  • Engine Programmer - C/C++. Wargaming|BigWorld is seeking Engine Programmers to join our team in Sydney, Australia. We offer a relocation package, Australian working visa & great salary + bonus. Your primary responsibility will be to work on our PC engine. Please apply here

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Your event here.
Cool Products and Services
  • Now track your log activities with Log Monitor and be on the safe side! Monitor any type of log file and proactively define potential issues that could hurt your business' performance. Detect your log changes for: Error messages, Server connection failures, DNS errors, Potential malicious activity, and much more. Improve your systems and behaviour with Log Monitor.

  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • PagerDuty helps operations and DevOps engineers resolve problems as quickly as possible. By aggregating errors from all your IT monitoring tools, and allowing easy on-call scheduling that ensures the right alerts reach the right people, PagerDuty increases uptime and reduces on-call burnout—so that you only wake up when you have to. Thousands of companies rely on PagerDuty, including Netflix, Etsy, Heroku, and Github.

  • Aerospike in-Memory NoSQL database is now Open Source. Read the news and see who scales with Aerospike. Check out the code on github!

  • consistent: to be, or not to be. That’s the question. Is data in MongoDB consistent? It depends. It’s a trade-off between consistency and performance. However, does performance have to be sacrificed to maintain consistency? more.

  • Do Continuous MapReduce on Live Data? ScaleOut Software's hServer was built to let you hold your daily business data in-memory, update it as it changes, and concurrently run continuous MapReduce tasks on it to analyze it in real-time. We call this "stateful" analysis. To learn more check out hServer.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Teams Should Go So Fast They Almost Spin Out of Control

Mike Cohn's Blog - Tue, 06/24/2014 - 15:00

Yes, I really did refer to guitarist Alvin Lee in a Certified Scrum Product Owner class last week. Here's why.

I was making a point that Scrum teams should strive to go as fast as they can without going so fast they spin out of control. Alvin Lee of the band Ten Years After was a talented guitarist known for his very fast solos. Lee's ultimate performance was of the song "I'm Going Home" at Woodstock. During the performance, Lee was frequently on the edge of flying out of control, yet he kept it all together for some of the best 11 minutes in rock history.

I want the same of a Scrum team--I want them going so fast they are just on the verge of spinning out of control yet are able to keep it together and deliver something classic and powerful.

Re-watching Ten Years After's Woodstock performance I'm struck by a couple of other lessons, which I didn't mention in class last week:

One: Scrum teams should be characterized by frequent, small hand-offs. A programmer gets eight lines of code working and yells, "Hey, Tester, check it out." The tester has been writing automated tests while waiting for those eight lines and runs the tests. Thirty minutes later the programmer has the next micro-feature coded and ready for testing. Although a good portion of the song is made up of guitar solos, they aren't typically long solos. Lee plays a solo and soon hands the song back to his bandmates, repeating for four separate solos through the song.

Two: Scrum teams should minimize work in progress. While "I'm Going Home" is a long song (clocking in at over eleven minutes), there are frequent "deliveries" of interpolated songs throughout the performance. Listen for "Blue Suede Shoes," "Whole Lotta Shaking" and others, some played for just a few seconds.

OK, I'm probably nuts, and I certainly didn't make all these points in class. But Alvin Lee would have made one great Scrum teammate. Let me know what you think in the comments below.

We're All Looking for the Simple Fix - There Isn't One

Herding Cats - Glen Alleman - Tue, 06/24/2014 - 14:39


Every project domain is looking for a simple answer to complex problems. There isn't a simple answer to complex problems. There are answers, but they require hard work, understanding, skill, experience, and tenacity to address the hard problems of showing up on time, at or near the planned cost, and with some acceptable probability that the products or services produced by the project will work and will actually provide the needed capabilities to fulfill the business case or mission of the project.

So It Comes Down To This

  • If we don't know what done looks like in some unit of measure meaningful to the decision makers, we'll never recognize it before we run out of time and money.
  • If we don't know what it will cost to reach done, we're over budget before we start.
  • If we don't have some probabilistic notion of when the project will be complete, we're late before we start.
  • If we don't measure progress to plan in some units of physical percent complete we have no idea if we are actually making progress. These measures include two classes:
    • Effectiveness - is the thing we're building actually effective at solving the problem.
    • Performance - is the solution performing in a way that allows it to be effective.
  • If we don't know what impediments we'll encounter along the way to done, those impediments will encounter us. They don't go away just because we don't know about them.
  • If we don't have any idea about what resources we'll be needing on the project, we will soon enough when we start to fall behind schedule or our products or services suffer from lack of skills, experience, or capacity for work.

Doing project work is about many things. But it's not just about writing code or bending metal. It's about the synergistic collaboration between all the participants. The notion that we don't need project management is one of those nonsense notions that is stated in the absence of a domain and context. The Product Owner in agile is the glue that pulls the development team together. But someone somewhere needs to fund that development, assure the logistics of deploying the resulting capabilities are in place, see that users are trained, that the help desk is staffed and trained, and that regulations are complied with. The Program Manager on a mega-project in construction or defense does many of the same things.

Core information is needed as well: cost, planned deliverables, risk management, resource management and other housekeeping functions.

Delivering on or near the planned time, at or near the planned budget, and more or less with the needed capabilities is hard work.

Related articles
  • Lean Startup, Innovation, #NoEstimates, and Minimal Viable Features
  • It Can't Be Any Clearer Than This
  • Top impediments to Agile adoptions that I've encountered
  • Managing In The Presence Uncertainty - Redux
  • Risk Management for Dummies
  • How to Deal With Complexity In Software Projects?
Categories: Project Management

Book Tour Schedule 2014

NOOP.NL - Jurgen Appelo - Tue, 06/24/2014 - 10:21

Last week was Sweden-week in the Management 3.0 Book Tour, with workshops in Stockholm and Gothenburg. (Check out the videos!)

This week is Germany-Week, where I am visiting Munich, Frankfurt, and Berlin.

We have a lot of other countries on the list as well. Check out the complete schedule until December. Registration will open soon! (Sorry, no other countries will be added at this time.)

The post Book Tour Schedule 2014 appeared first on NOOP.NL.

Categories: Project Management

Humans suck at statistics - how agile velocity leads managers astray

Software Development Today - Vasco Duarte - Tue, 06/24/2014 - 04:00

Humans are highly optimized for quick decision making: the so-called System 1 that Kahneman refers to in his book "Thinking, Fast and Slow". One specific area of weakness for the average human is understanding statistics. A very simple exercise to review this is the coin-toss simulation.

Humans are highly optimized for quick decision making.

Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each "heads" the person adds one to the total; for each "tails" the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin-toss by writing down "heads" or "tails" and adding/subtracting to the totals. Leave the room while the two players run their exercise and then come back after they have completed 100 throws.

Look at the graphs that each person produced: can you detect which one was created by the real coin and which was "imagined"? Test your knowledge by looking at the graph below (don't peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (computer simulation)?

One common characteristic in this exercise is that the real random walk, which was produced by actually throwing a coin in the air, is often more repetitive than the one simulated by the player. For example, the coin may generate a sequence of several consecutive heads or tails throws. No human (except you, after reading this) would do that because it would not "feel" random. We, humans, are bad at creating randomness and understanding the consequences of randomness. This is because we are trained to see meaning and a theory behind everything.
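If you want to run the "real" half of the exercise without a coin, here is a small sketch of the pseudo-random walk; the 100-throw length matches the exercise, everything else is just a convenience:

import java.util.Random;

// Simulate 100 coin tosses: +1 for heads, -1 for tails. Run it a few times and
// notice how often genuinely random sequences contain long runs that "feel" non-random.
public class CoinTossWalk {
    public static void main(String[] args) {
        Random random = new Random();
        int total = 0;
        StringBuilder sequence = new StringBuilder();

        for (int i = 0; i < 100; i++) {
            boolean heads = random.nextBoolean();
            total += heads ? 1 : -1;
            sequence.append(heads ? 'H' : 'T');
        }

        System.out.println("Sequence: " + sequence);
        System.out.println("Final total after 100 throws: " + total);
    }
}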

Take the velocity of the team. Did it go up in the latest sprint? Surely they are getting better! Or, it's the new person that joined the team, they are already having an effect! In the worst case, if the velocity goes down in one sprint, we are running around like crazy trying to solve a "problem" that prevented the team from delivering more.

The fact is that a team's velocity is affected by many variables, and its variation is not predictable. However, and this is the most important, velocity will reliably vary over time. Or, in other words, it is predictable that the velocity will vary up and down with time.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a "dip" or a "peak" in the project's delivery rate.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project.

When you look at a graph of a team's velocity don't ask "what made the velocity dip/peak?", ask rather: "based on this data, what is the capability of the team?". This second question will help you understand what your team is capable of delivering over a long period of time and will help you manage the scope and release date for your project.
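One simple, hedged way to answer "what is the capability of the team?" is to summarize past velocities as a band rather than reacting to single data points; the numbers below are invented:

import java.util.Arrays;

// Sketch: describe a team's throughput capability as a band (mean +/- one standard
// deviation) computed from historical sprint velocities, instead of chasing one dip or peak.
public class VelocityCapability {
    public static void main(String[] args) {
        double[] velocities = {21, 18, 25, 19, 23, 17, 22, 20}; // invented sample data

        double mean = Arrays.stream(velocities).average().orElse(0);
        double variance = Arrays.stream(velocities)
                                .map(v -> (v - mean) * (v - mean))
                                .average().orElse(0);
        double stdDev = Math.sqrt(variance);

        System.out.printf("Capability: roughly %.1f to %.1f points per sprint (mean %.1f)%n",
                mean - stdDev, mean + stdDev, mean);
    }
}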

The important question for your project is not, "how can we improve velocity?" The important question is: "is the velocity of the team reliable?"

Picture credit: John Hammink, follow him on twitter

Solution to the question above: The black line is the one generated by a pseudo-random simulation in a computer. The human-generated line is more "regular" because humans expect that random processes "average out". Indeed, that's the theory. But not the reality. Humans are notoriously bad at distinguishing real randomness from what we believe is random, but isn't.

As you know I've been writing about #NoEstimates regularly on this blog. But I also send more information about #NoEstimates and how I use it in practice to my list. If you want to know more about how I use #NoEstimates, sign up to my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Portfolio-Level Estimation

Portfolio Estimation? A life coach might help!

I recently asked a group of people the question “What are the two largest issues in project estimation?” The group was all involved in delivering value to clients, either as developers, testers, methodologists or consultants. The respondents’ experience ran the gamut from Scrum and eXtreme Programming through the Scaled Agile Framework (SAFe) and Disciplined Agile Development (DaD) to waterfall. While not a scientific survey, the responses were illuminating. While I am still in the process of compiling the results and extracting themes, I thought I would share one of the first responses: all resources are not created equal. The respondent made the point that most estimating exercises, which begin at the portfolio level, don’t take into account the nuances of individual experience and capacity when projects are “plucked” from a prioritized portfolio to begin work. This problem, at the portfolio level, is based on two issues. The first is making assumptions based on assumptions, and the second is making decisions based on averages. At the portfolio level, both are very hard to avoid.

Nearly all organizations practice some form of portfolio management. Portfolio management techniques can range from naïve (e.g. the squeaky wheel method) to sophisticated (e.g. portfolio-level Kanban). In most cases, the decision process as to when to release a piece of work from the portfolio requires making assumptions about the perceived project size and the organizational capabilities required to deliver the project. In order to make those judgments, a number of assumptions must be made (a bit of foreshadowing: assumptions based on assumptions are a potential problem). The most important assumptions made when a project is released are that the requirements and solution are known. These assumptions will affect how large the project needs to be and the capabilities required to deliver it. Many organizations go to great lengths to solve this problem. Tactics used to address this issue include trying to gather and validate all of the requirements before starting any technical work (waterfall), running a small proof-of-concept project (prototypes), or generating rapid feedback (Agile). Other techniques include creating repositories that link skills to people or teams. And while these tools are useful for assembling teams in matrix organizations, they are rarely useful at the portfolio level because they are not forecasting tools. In all cases, the path that provides the most benefit revolves around generating information as early as possible and then reacting to the information.

The second major issue is that estimates and budgets divined at the portfolio level are a reflection of averages. In many cases, organizations use analogies to generate estimates and initial budget numbers for portfolio-level initiatives. When using analogies, an estimator (or group) will compare the project he or she is trying to estimate to completed projects to determine how alike they are to one another. For example, if you think that a project is about 70% the size of a known project, simple arithmetic can be used to estimate the new project. Other assumptions and perceptions would be used to temper the precision. Real project performance will reflect all of the nuances that the technology, the solution and individual capabilities generate. These nuances will generate variances from the estimate. As with the knowledge issue, organizations use many techniques to manage the impact of the variances that will occur. Two popular methods include contingencies in the work breakdown structure and schedule (waterfall) and backlog re-planning (Agile). In all cases, the best outcomes are reflective of feedback based on the performance of real teams delivering value.
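As a tiny worked illustration of estimation by analogy (all numbers invented), scaling a reference project and attaching a coarse uncertainty band looks like this:

// Sketch: analogy-based estimate with a crude band to temper false precision.
public class AnalogyEstimate {
    public static void main(String[] args) {
        double referenceEffortHours = 12_000; // actuals from a completed, similar project
        double similarityFactor = 0.70;       // judgement: "about 70% the size"

        double pointEstimate = referenceEffortHours * similarityFactor;
        double low = pointEstimate * 0.8;     // arbitrary -20% / +30% band
        double high = pointEstimate * 1.3;

        System.out.printf("Estimate: %.0f hours (likely range %.0f to %.0f)%n",
                pointEstimate, low, high);
    }
}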

Estimates, by definition, are never exactly right (hopefully they are close).  Estimates (as distinct from plans) are based on what the estimator knows very early in the process.  What really needs to be built becomes known later in the process, after estimates and budgets have been set at the portfolio level.  Mature organizations recognize that as projects progress, new information is gathered that should be used quickly to refine estimates and budgets.
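
To make that refinement concrete, here is a minimal sketch (not from the survey responses) that replaces an early analogy-based figure with a forecast driven by observed delivery rates.  The backlog size and sprint numbers are invented for illustration.

```python
# Illustrative re-estimation: use observed performance to refine the forecast.
# All numbers are assumptions made up for the example.

initial_estimate_points = 300          # portfolio-level estimate of total backlog size
completed_points = [28, 24, 31]        # points actually delivered in the first sprints
remaining_points = 210                 # current view of the remaining backlog

observed_velocity = sum(completed_points) / len(completed_points)
sprints_remaining = remaining_points / observed_velocity

print(f"Observed velocity: {observed_velocity:.1f} points per sprint")
print(f"Forecast to complete: {sprints_remaining:.1f} more sprints")
```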


Categories: Process Management

What to expect at I/O’14 - Develop

Google Code Blog - Mon, 06/23/2014 - 19:20
By Reto Meier, Google Developer Advocate

Google I/O 2014 will be live in less than 48 hours. Last Friday we shared a sneak peek of content and activities around design principles and techniques. This morning we’re excited to give a glimpse into what we have in store for develop experiences.

Google I/O at its core has always been about providing you with the inspiration and resources to develop remarkable applications using Google’s platforms, tools and technologies.

This year we’ll have a full lineup of sessions from the Android, Chrome and Cloud Platform teams, highlighting what’s new as well as showcasing cross-product integrations. Here’s a sample of some of the sessions we’ll be live streaming:
  • What’s new in Android || Wednesday 1-1:45PM (Room 8): Join us for a thrilling, guided tour of all the latest developments in Android technologies and APIs. We’ll cover everything that’s new and improved in the Android platform since…well, since the last time.
  • Making the mobile web fast, feature rich and beautiful || Thursday 10-10:45AM (Room 6): Reintroducing the mobile web! What is the mobile web good at? Why should developers build for it? And how do mobile web and native complement each other? The mobile web is often the first experience new users have with your brand and you're on the hook for delivering success to them. There's been massive investment in mobile browsers; so now we have the speed, the features, and the tools to help you make great mobile web apps.
  • Predicting the future with the Google Cloud Platform || Thursday 4-4:45PM (Room 7): Can you predict the future using Big Data? Can you divine if your users will come back to your site or where the next social conflict will arise? And most importantly, can Brazil be defeated at soccer on their own turf? In this talk, we'll go through the process of data extraction, modelling and prediction as well as generating a live dashboard to visualize the results. We’ll demonstrate how you can use Google Cloud and Open Source technologies to make predictions about the biggest soccer matches in the world. You’ll see how to use Google BigQuery for data analytics and Monte Carlo simulations, as well as how to create machine learning models in R and pandas. We predict that after this talk you’ll have the necessary tools to cast your own eye on the future.
In addition, we’ve invited notable speakers such as Ray Kurzweil, Regina Dugan, Peter Norvig, and a panel of robotics experts, hosted by Women Techmakers, and will be hosting two Solve for X workshops. These speakers are defining the future with their groundbreaking research and technology, and want to bring you along for the ride.

Finally, we want to give you ample face-to-face time with the teams behind the products, so we are hosting informal ‘Box talks for Accessibility, Android, Android NDK / Gaming Performance, Cloud, Chrome, Dart, and Go. Swing by the Develop Sandbox to connect, discuss, learn and maybe even have an app performance review.

See you at I/O!

Reto Meier manages the Scalable Developer Advocacy team as part of Google's Developer Relations organization, and wrote Professional Android 4 Application Development.

Posted by Louis Gray, Googler
Categories: Programming

Performance at Scale: SSDs, Silver Bullets, and Serialization

This is a guest post by Aaron Sullivan, Director & Principal Engineer at Rackspace.

We all love a silver bullet. Over the last few years, if I were to split the outcomes that I see with Rackspace customers who start using SSDs, the majority of the outcomes fall under two scenarios. The first scenario is a silver bullet—adding SSDs creates near-miraculous performance improvements. The second scenario (the most common) is typically a case of the bullet being fired at the wrong target—the results fall well short of expectations.

With the second scenario, the file system, data stores, and processes frequently become destabilized. These demoralizing results, however, usually occur when customers are trying to speed up the wrong thing.

A common phenomenon at the heart of the disappointing SSD outcomes is serialization. Although most servers have parallel processors (e.g. multicore, multi-socket), parallel memory systems (e.g. NUMA, multi-channel memory controllers), parallel storage systems (e.g. disk striping, NAND), and multithreaded software, transactions must still happen in a certain order. For some parts of your software and system design, processing goes step by step. Step 1. Then step 2. Then step 3. That’s serialization.

And just because some parts of your software or systems are inherently parallel doesn’t mean that those parts aren’t serialized behind other parts. Some systems may be capable of receiving and processing thousands of discrete requests simultaneously in one part, only to wait behind some other, serialized part. Software developers and systems architects have dealt with this in a variety of ways. Multi-tier web architecture was conceived, in part, to deal with this problem. More recently, database sharding also helps to address this problem. But making some parts of a system parallel doesn’t mean all parts are parallel. And some things, even after being explicitly enhanced (and marketed) for parallelism, still contain some elements of serialization.
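
As an illustration (a hypothetical sketch, not code from the post), the snippet below models many parallel workers that all funnel through one serialized step. Adding workers stops helping once the serialized step saturates, which is exactly the behavior described above.

```python
# Hypothetical illustration: parallel workers behind a serialized stage.
# The lock stands in for any step that must happen one request at a time
# (an ordered commit, a single writer, a global index update, and so on).

import threading
import time

commit_lock = threading.Lock()
SERIALIZED_COST = 0.01   # seconds spent in the step that cannot be parallelized

def handle_request():
    time.sleep(0.001)          # parallel portion: many threads at once
    with commit_lock:          # serialized portion: one thread at a time
        time.sleep(SERIALIZED_COST)

def run(num_threads, requests_per_thread=20):
    start = time.time()
    threads = [
        threading.Thread(
            target=lambda: [handle_request() for _ in range(requests_per_thread)]
        )
        for _ in range(num_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total_requests = num_threads * requests_per_thread
    return total_requests / (time.time() - start)

for n in (1, 4, 16):
    print(f"{n:>2} threads: ~{run(n):.0f} requests/sec")
```

Throughput plateaus near 1 / SERIALIZED_COST regardless of how many threads are added.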

How far back does this problem go? It has been with us in computing since the inception of parallel computing, going back at least as far as the 1960s(1). Over the last ten years, exceptional improvements have been made in parallel memory systems, distributed database and storage systems, multicore CPUs, GPUs, and so on. The improvements often follow after the introduction of a new innovation in hardware. So, with SSDs, we’re peering at the same basic problem through a new lens. And improvements haven’t just focused on improving the SSD, itself. Our whole conception of storage software stacks is changing, along with it. But, as you’ll see later, even if we made the whole storage stack thousands of times faster than it is today, serialization will still be a problem. We’re always finding ways to deal with the issue, but rarely can we make it go away.
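
That intuition is Amdahl’s law: the overall speedup is bounded by the fraction of work that stays serialized. The snippet below works through the arithmetic with an assumed 90% accelerable fraction; the fraction and the speedup figures are illustrative, not measurements from Rackspace systems.

```python
# Amdahl's law: overall speedup when only part of the work gets faster.
# The 90% figure is an assumption chosen to show diminishing returns.

def amdahl_speedup(accelerated_fraction, acceleration):
    serial_fraction = 1 - accelerated_fraction
    return 1 / (serial_fraction + accelerated_fraction / acceleration)

accelerated_fraction = 0.90   # share of time spent in the part we can speed up
for acceleration in (10, 100, 1000):
    overall = amdahl_speedup(accelerated_fraction, acceleration)
    print(f"{acceleration:>5}x faster storage -> {overall:.1f}x overall")
```

Even a 1000x faster storage path yields only about a 10x overall improvement when 10% of the work remains serialized.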

Parallelization and Serialization
Categories: Architecture

Quantifying the Value of Information

Herding Cats - Glen Alleman - Mon, 06/23/2014 - 15:00

From the book How To Measure Anything, there is a notion starting from the McNamara Fallacy.

The first step is to measure whatever can be easily measured. This is okay as far as it goes. The second step is to disregard that which can't easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.
Charles Handy, The Empty Raincoat (1995), describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara

There are three reasons to seek information in the process of making business decisions:

  1. Information reduces uncertainty about decisions that have economic consequences.
  2. Information affects the behaviour of others, which has economic consequences.
  3. Information sometimes has its own market value.

When we read ...

No Estimates

... and there are no alternatives described, then it's time to realize this is an empty statement. To be successful in the software development business we need information about the cost of developing value, the duration of the work effort that produces this value for those paying for the outcomes of our efforts, and the confidence that we can produce the needed capabilities on or near the planned delivery date, at or below the planned budget. (And fixing the budget just leaves the two other variables open, so that is an empty approach as well.)

The solution to the first has been around since the 1950s - decision theory. The answer to the second is provided by measuring productivity in the presence of uncertainty about investments - an options or analysis of alternatives (AoA) process. The notion of information having its own market value is based on Return on Investment, where the value produced in exchange for the cost to produce it is a fundamental principle of all successful businesses.
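
As a hedged sketch of what decision theory provides here, the example below computes the expected value of perfect information for a simple invest/don't-invest choice. The probabilities and payoffs are made-up numbers for illustration, not figures from How To Measure Anything.

```python
# Illustrative expected-value-of-information calculation for a go/no-go decision.
# Probabilities and payoffs are invented for the example.

p_success = 0.6
payoff_if_success = 500_000      # net value if the project succeeds
loss_if_failure = -300_000       # net value if it fails
value_of_not_investing = 0

ev_invest = p_success * payoff_if_success + (1 - p_success) * loss_if_failure
best_without_info = max(ev_invest, value_of_not_investing)

# With perfect information we would invest only when the project will succeed.
ev_with_perfect_info = (p_success * payoff_if_success
                        + (1 - p_success) * value_of_not_investing)

evpi = ev_with_perfect_info - best_without_info
print(f"EV of investing now: {ev_invest:,.0f}")
print(f"EV with perfect information: {ev_with_perfect_info:,.0f}")
print(f"Expected value of perfect information: {evpi:,.0f}")
```

The gap between the two expected values is the most a rational decision maker should pay to reduce the uncertainty, which is what "information reduces uncertainty about decisions that have economic consequences" means in practice.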

If we can somehow separate the writing of software from the discussion of determining the cost of that effort, it may become clearer that the software development community needs to consider the needs of those funding their work over their own self-interest of not wanting to estimate the cost of that work. In the end, those with the money need to know. If the development community isn't interested in providing viable, credible business processes to answer how much, when, and what, then it'll be done without them, because to stay in business, the business must know the cost of its products or services.

Related articles:
  • Do It Right or Do It Twice
  • We Can Know the Business Value of What We Build
  • "Statistical Science and Philosophy of Science: where should they meet?"
Categories: Project Management