
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Five Years Ago

NOOP.NL - Jurgen Appelo - Wed, 09/30/2015 - 16:57

On October 1, 2010, I started my life as a self-employed entrepreneur. My full-color high-quality #Workout book costs ONLY EUR 15.00 excluding VAT (normally EUR 27.50) for 24 hours on October 1.

The post Five Years Ago appeared first on NOOP.NL.

Categories: Project Management

Strategy: Taming Linux Scheduler Jitter Using CPU Isolation and Thread Affinity

When nanoseconds matter you have to pay attention to OS scheduling details. Mark Price, who works in the rarefied high-performance environment of high finance, shows how in his excellent article on Reducing system jitter.

For a tuning example he uses the famous Disruptor inter-thread messaging library. The goal is to keep the OS continuously feeding CPUs work from high-priority threads. His baseline test shows the fastest message was sent in 76 nanoseconds, 1 in 100 messages took longer than 2 milliseconds, and the longest delay was 11 milliseconds.

The next section of the article shows in loving detail how to bring those latencies lower and make them more consistent, a job many people will need to do in practice. You'll want to read the article for a full explanation, including how to use perf_events and HdrHistogram; an illustrative thread-affinity sketch follows the summary below. It's really great at showing the process, but in short:

  • Turning off power save mode on the CPU brought the max latency down from 11 msec to 8 msec.
  • Guaranteeing threads will always have CPU resources using CPU isolation and thread affinity brought the maximum latency down to 14 microseconds.
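For the thread-affinity half, here is a minimal sketch, not taken from Price's article: it assumes the open-source OpenHFT Java-Thread-Affinity library (net.openhft:affinity), often paired with the Disruptor, and a core already reserved with the isolcpus kernel boot parameter. The class name and spin loop are illustrative only.

import net.openhft.affinity.AffinityLock;

public class PinnedConsumer {
    public static void main(String[] args) {
        Thread consumer = new Thread(() -> {
            // Acquire an exclusive core so the scheduler cannot migrate
            // this busy-spin consumer mid-burst.
            try (AffinityLock lock = AffinityLock.acquireLock()) {
                while (!Thread.currentThread().isInterrupted()) {
                    // poll the Disruptor ring buffer here
                }
            }
        }, "pinned-consumer");
        consumer.start();
    }
}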
Categories: Architecture

You Don’t Have To Do What You Are Good At

Making the Complex Simple - John Sonmez - Wed, 09/30/2015 - 13:00

We all have strengths and weaknesses. This should come as no surprise to anyone. It is common advice to work on your weaknesses, so you can turn them into strengths. This can be fine and, when applied judiciously, can turn you into a more well-rounded person. It can also result in spending your precious time […]

The post You Don’t Have To Do What You Are Good At appeared first on Simple Programmer.

Categories: Programming

IntelliJ 14.1.5: Unable to import maven project

Mark Needham - Wed, 09/30/2015 - 06:54

After a recent IntelliJ upgrade I’ve been running into the following error when trying to attach the sources of any library being pulled in via Maven:

Unable to import maven project

It seems like this is a recent issue in the 14.x series and luckily is reasonably easy to fix by adding the following flag to the VM options passed to the Maven importer:


And this is where you need to add it:

[Screenshot: the Maven importing settings in IntelliJ's preferences]

Cmd + , -> Build, Execution, Deployment -> Build Tools -> Maven -> Importing

Categories: Programming

Metrics Minute: Burn-Up Chart

Audio Version:  SPaMCAST 133


A burn-up chart is a graph that tracks progress by accumulating a measure of functionality as it is completed over time. The measure of functionality can be compared to a goal, such as a budget or release plan, to provide the team and others with feedback on progress. Graphically the X axis is time and the Y axis is accumulated functionality completed over that period of time. The burn-up chart, like its cousin the burn-down chart, provides a simple yet powerful tool to provide visibility into the sprint or program.

The burn-up chart can be thought of as the mirror image of the burn-down chart, but it is generally extended over multiple sprints to show progress as the project builds toward release and product delivery.


As with its close cousin, the burn-down chart, there is not really a formula for a burn-up chart as much as there are instructions on how to graphically represent progress against a goal. In its simplest form, the X axis represents time (days, sprints, or releases) and the Y axis represents the accumulated functionality completed over that period of time (stories, value or cost).

Using the basic form of the graph as a base, other data can be integrated. The release plan, which typically includes an outline of the functionality (in story points or function points) for a number of sprints or releases, can be overlaid on the chart to give visibility into the anticipated flow of the project. The project budget can be added to the chart to provide feedback on budget utilization; the budget line can be raised or lowered to show how much work remains as changes to scope are incorporated. Value can be tracked on a second Y axis to show the team the relationship between work and value. The burn-up chart is one chart covering the big three topics of on-time, on-budget and on-scope.
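As a worked illustration of the arithmetic behind the chart, here is a minimal sketch; the sprint totals and the 80-point scope are hypothetical. It simply accumulates completed story points per sprint and compares them to the release-plan goal, which is exactly what the Y axis plots.

import java.util.List;

public class BurnUp {
    public static void main(String[] args) {
        // Hypothetical completed story points, one entry per sprint.
        List<Integer> completedPerSprint = List.of(8, 13, 10, 12, 9);
        int plannedScope = 80; // release-plan goal in story points

        int cumulative = 0;
        for (int sprint = 0; sprint < completedPerSprint.size(); sprint++) {
            cumulative += completedPerSprint.get(sprint);
            System.out.printf("Sprint %d: %d of %d points (%.0f%%)%n",
                    sprint + 1, cumulative, plannedScope,
                    100.0 * cumulative / plannedScope);
        }
    }
}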


The burn-up and burn-down charts have many similar uses, such as planning and monitoring. Rather than repeat the discussion already published in the Metrics Minute: Burn-Down Charts, I will focus on the major distinctions between the charts. The unique power of the burn-up chart can be found in its ability to provide a holistic view of the project. The view is holistic because the chart can be used to show progress over sprints and releases. To use a photographic analogy, the burn-up chart provides a panoramic view rather than a normal picture, which is a narrow slice of real life.

James Shore suggests developing a risk-adjusted burn-up chart to make and meet commitments. It tracks commitments alongside progress. Adding a continually refined risk adjustment to the burn-up chart accounts for Steve McConnell's cone of uncertainty (the cone of uncertainty theorizes that plans are less accurate the farther into the future they are projected). Actual performance (delivered functionality) can be tracked against commitments; significant gaps would signal a need to adjust the release plan if performance strays too far from what is possible as established by the cone of uncertainty.


One issue is that a burn-up chart provides visibility at a high level, rather than progress at a story level. This causes some people concern. Burn-up charts, like burn-down charts, are not designed as a precise report on where each story is on a day-to-day basis, but rather as a view of how functionality or value has been delivered. The burn-down chart and the detail found in a card wall (or proxy) are better tools to drive a project on a day-to-day basis.

The second issue occurs because value is recognized only when a piece of work is complete. In software projects, this equates to functional code that has been integrated and tested. Value is not recognized until work in process is complete. Alistair Cockburn suggests that recognizing functionality when 'done' yields more reliable information about the project's actual state than the classic plan-based model. The problem is that if the increment of work is too large, the graph will resemble a J, with the majority of value accumulating late in the sprint, leaving room for negative surprises. The way to avoid this issue is to increase the granularity of the units of work (break the work down into smaller pieces). This will allow the team to recognize value more quickly, and therefore smooth out the curve.

The third issue is also related to the choice of unit of measure for the vertical axis. The unit needs to be predictable early in the process. Most physical measures, such as lines of code, can't be known until you're very near the end of the sprint or project. Leveraging a metric that can expand even though scope has not changed muddles the information provided by the chart, thus reducing its usefulness. This issue can be avoided by choosing a unit of measure that can't expand unless additional work is accepted into the sprint or project. Functional measures, such as Quick and Early Function Points, that translate stories or requirements into numbers are a disciplined, formal mechanism for addressing the unit-of-measure issue without adding overhead.

Related or Variant Measures

  • Functional Size
  • Velocity
  • Burn-down Charts
  • Earned Value
  • Productivity
  • Cumulative Flow Diagrams


The most significant criticism of the burn-up chart is that, in its basic form, it does not reflect the flow of work. Work in some sprints does not lead to completed functionality, therefore value is not accrued. This leads to a feeling that partial credit should be given, to splitting stories along waterfall stages, or to other bad practices, all in a rush to show progress. One solution is to use flow metrics, via techniques like Kanban and cumulative flow diagrams, to accrue value on a more granular basis when the organization can't find a way to split user stories so they can be completed properly in a single sprint.

The final criticism is that burn-up charts do not predict surprises. This is a false criticism, as a surprise by definition is not predictable. Use the value-at-risk metric as a risk management tool to help avoid surprises, but vigilance should never be supplanted by measurement.


Categories: Process Management

Telltale Games share their tips for success on Android TV

Android Developers Blog - Wed, 09/30/2015 - 00:10

Lily Sheringham, Developer Marketing at Google Play

Editor’s note: This is another post in our series featuring tips from developers finding success on Google Play. This week, we’re sharing advice from Telltale Games on how to create a successful game on Android TV. -Ed.

With new Android hardware being released from the likes of Sony, Sharp, and Philips amongst others, Android TV and Google Play can help you bring your game to users right in their living rooms through a big screen experience.

The recent Marshmallow update for Android TV makes it easier than ever to extend your new or existing games and apps for TV. It's important to understand how your game is presented in the user interface and how it can help users get to the content they want quickly.

Telltale Games is a US-founded game developer and publisher, based in San Francisco, California. They’re well known for the popular series ‘The Walking Dead’ and ‘Game of Thrones’, which was created in partnership with HBO.

Zac Litton, VP of Technology at Telltale Games, shares his tips for creating and launching your games with Android TV.

Tips for launching successful games on Android TV
  1. Determine the Device for Android TV: Determine what device your game is running on by using the UiModeManager.getCurrentModeType() method. If the device is running in television mode (Configuration.UI_MODE_TYPE_TELEVISION), you can declare what to display as the launch point of the game on the Android TV itself (see the sketch after this list). Add the LEANBACK_LAUNCHER filter category to one of your intent-filters to identify your game as being enabled for TV. This is required for your game to be considered a TV app in Google Play.
  2. Touchscreen vs TV: TVs don’t have touch screens, so make sure you set the touchscreen required flag to false in the manifest, as touch is implicitly true by default on Android. This will help avoid your game getting filtered from the TV Play store right out of the gate. Also, check your permissions, as some imply hardware requirements which you may need to override explicitly.
  3. Use Hardware APIs: Use the package manager's system feature API to let your game reason about what capabilities it can and should expose; for example, whether to show the user touch screen controls or game controller controls. You can also make your app location aware using the location APIs available in Google Play services, with automated location tracking, geofencing, and activity recognition.
  4. Use appropriate controllers: To reach the most users, your app should support a simplified input scheme that requires nothing more than a directional pad (D-pad controller). The player needs to be able to use a D-pad in all aspects of the game: not just controlling core gameplay, but also navigating menus and ads. Therefore your Android TV game shouldn’t refer to a touch interface specifically; for example, an Android TV game should not tell a player to "Tap here to continue."
  5. Appear in the right place: Make sure you add an android:isGame attribute to the application element of the manifest and set it to true in order to enable the installed game to show up on the correct launcher row, games.
  6. Provide home screen banners: Provide a home screen banner for each localization supported, especially if you are an international developer. The banner (320 x 180) is the game launch point that appears on the TV home screen on the games row.
  7. Use a TV image for your Store Listing: Be sure you provide at least one TV screenshot on your Store Listing page, and include a high-res icon, feature graphic, promo graphic and TV banner.
  8. Improve visibility through ‘search’ and ‘recommendations’: Android TV uses the Android search interface to retrieve content data from installed apps and games, and deliver search results to the user. Implement a ContentProvider to show instant suggestions to the user, and a SearchManager to deep link your game’s content.
  9. Set appropriate pricing and distribution: Check “Distribute to Android TV” in the relevant section in the Developer Console. This will trigger a review by Google to ensure your game meets the minimum requirements for TV.
  10. Guide the user: Use a tutorial to guide the player into the game mechanics and provide an input reference to the user based on the input control they are using.

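As referenced in tip 1, here is a minimal sketch of the television-mode check, assuming the standard framework APIs; the class name and branch comments are illustrative.

import android.app.Activity;
import android.app.UiModeManager;
import android.content.Context;
import android.content.res.Configuration;
import android.os.Bundle;

public class GameLauncherActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        UiModeManager uiModeManager =
                (UiModeManager) getSystemService(Context.UI_MODE_SERVICE);
        if (uiModeManager.getCurrentModeType()
                == Configuration.UI_MODE_TYPE_TELEVISION) {
            // Android TV: use a controller-driven UI and avoid any
            // "tap here" touch affordances.
        } else {
            // Phone or tablet: touch controls are fine.
        }
    }
}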
With the recently released Android TV codelab and online class from Udacity, you can learn how to convert your existing mobile game to Android TV in just four hours. Find out more about how to build games for Android TV and how to publish them using familiar tools and processes in Google Play.

Categories: Programming

New Android Marshmallow sample apps

Android Developers Blog - Wed, 09/30/2015 - 00:04

Posted by Rich Hyndman, Developer Advocate

Three new Android Marshmallow sample applications have gone live this week. As usual they are available directly from the Google Samples repository on GitHub or through the Android Studio samples browser.

Android Direct Share Sample


Direct Share is a new feature in Android Marshmallow that provides APIs to make sharing more intuitive and quick for users. Direct Share allows users to share content to targets, such as contacts, within other apps. For example, the direct share target might launch an activity in a social network app, which lets the user share content directly to a specific friend in that app.

This sample is a dummy messaging app, and just like any other messaging app, it receives intents for sharing plain text. It demonstrates how to show some options directly in the list of share intent candidates. When a user shares some text from another app, this sample app will be listed as an option. Using the Direct Share feature, this app also shows some contacts directly in the chooser dialog.

To enable Direct Share, apps need to implement a Service extending ChooserTargetService. Override the method onGetChooserTargets() and return a list of Direct Share options.

In your AndroidManifest.xml, add a meta-data tag in your Activity that receives the Intent. Specify android:name as android.service.chooser.chooser_target_service, and point the android:value to the Service.
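A rough sketch of such a service follows; this is not the sample's actual code, and the contact name, the extras key, and the R.mipmap.ic_launcher icon are placeholder assumptions.

import android.content.ComponentName;
import android.content.IntentFilter;
import android.graphics.drawable.Icon;
import android.os.Bundle;
import android.service.chooser.ChooserTarget;
import android.service.chooser.ChooserTargetService;

import java.util.ArrayList;
import java.util.List;

public class SampleChooserTargetService extends ChooserTargetService {
    @Override
    public List<ChooserTarget> onGetChooserTargets(
            ComponentName targetActivityName, IntentFilter matchedFilter) {
        List<ChooserTarget> targets = new ArrayList<>();
        Bundle extras = new Bundle();
        extras.putString("contact_id", "42"); // hypothetical extra
        targets.add(new ChooserTarget(
                "Alice",                                      // target label
                Icon.createWithResource(this, R.mipmap.ic_launcher),
                1.0f,                                         // ranking score
                targetActivityName,                           // activity to launch
                extras));
        return targets;
    }
}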

Android MidiSynth Sample

Android 6.0 introduces new support for MIDI. This sample demonstrates how to use the MIDI API to receive and play MIDI messages coming from an attached input device (MIDI keyboard).

The Android MIDI API (android.media.midi) allows developers to connect a MIDI device to an Android device and process MIDI messages coming from it.

This sample demonstrates some basic features of the MIDI API, such as:

  • Enumeration of currently available devices (including name, vendor, capabilities, etc.)
  • Notification when MIDI devices are plugged in or unplugged
  • Receiving and processing MIDI messages

It also contains a simple implementation of an oscillator and note playback.
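For a flavor of the enumeration and hot-plug notification pieces, here is a minimal sketch against the android.media.midi APIs; the class name and logging are illustrative, and error handling is omitted.

import android.content.Context;
import android.media.midi.MidiDeviceInfo;
import android.media.midi.MidiManager;
import android.os.Handler;

class MidiDeviceLister {
    // Enumerate attached MIDI devices and watch for hot-plug events.
    static void listDevices(Context context) {
        MidiManager midiManager =
                (MidiManager) context.getSystemService(Context.MIDI_SERVICE);
        for (MidiDeviceInfo info : midiManager.getDevices()) {
            String name = info.getProperties()
                    .getString(MidiDeviceInfo.PROPERTY_NAME);
            System.out.println("MIDI device: " + name
                    + " (inputs=" + info.getInputPortCount()
                    + ", outputs=" + info.getOutputPortCount() + ")");
        }
        midiManager.registerDeviceCallback(new MidiManager.DeviceCallback() {
            @Override public void onDeviceAdded(MidiDeviceInfo device) {
                System.out.println("Plugged in: " + device);
            }
            @Override public void onDeviceRemoved(MidiDeviceInfo device) {
                System.out.println("Unplugged: " + device);
            }
        }, new Handler());
    }
}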

Android MidiScope Sample

A sample demonstrating how to use the MIDI API to receive and process MIDI signals coming from an attached device.

The Android MIDI API (android.media.midi) allows developers to connect a MIDI device to Android and process MIDI signals coming from it. This sample demonstrates some basic features of the MIDI API, such as enumeration of currently available devices (including name, vendor, capabilities, etc.), notification when MIDI devices are plugged in or unplugged, and receiving MIDI signals. This sample simply shows all received MIDI signals in an on-screen log and does not play any sound for them.

Check out a sample today and jumpstart your Android Marshmallow development.

Categories: Programming

New permissions requirements for Android TV

Android Developers Blog - Tue, 09/29/2015 - 23:59

Posted by Anirudh Dewani, Developer Advocate

Android 6.0 introduces a new runtime permission model that gives users more granular control over granting permissions requested by their apps and leads to faster app installs. Users can also revoke these permissions from Settings at any point in time. If an app running on the M Preview supports the new permissions model, the user does not have to grant any permissions when they install or upgrade the app. Developers should check for permissions that require a runtime grant from users, and request them if the app doesn’t already have them.
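A minimal sketch of that check-then-request flow, using the framework APIs available from API level 23; the helper class and request code are illustrative.

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;

class PermissionHelper {
    static final int REQUEST_RECORD_AUDIO = 1; // arbitrary request code

    // Request RECORD_AUDIO at runtime if it has not been granted yet.
    static void ensureRecordAudio(Activity activity) {
        if (activity.checkSelfPermission(Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            activity.requestPermissions(
                    new String[]{Manifest.permission.RECORD_AUDIO},
                    REQUEST_RECORD_AUDIO);
        }
    }
}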

To list all permissions that require a runtime grant from users on Android 6.0:

adb shell pm list permissions -g -d 

Apps should generally request as few permissions as possible. Voice search is an integral part of the Android TV content discovery experience. When using the internal SpeechRecognizer to enable voice search, apps must declare the RECORD_AUDIO permission in the manifest. RECORD_AUDIO requires an explicit user grant at runtime in Android 6.0. When using the Android TV Leanback support library, apps can eliminate the need for requesting RECORD_AUDIO at runtime by using SpeechRecognitionCallback instead of SpeechRecognizer.

The following snippet is adapted from a commit in the Android TV Leanback Sample repository:

mFragment = (SearchFragment) getFragmentManager()
        .findFragmentById(R.id.search_fragment); // fragment id from the sample layout

mSpeechRecognitionCallback = new SpeechRecognitionCallback() {
    @Override
    public void recognizeSpeech() {
        if (DEBUG) Log.v(TAG, "recognizeSpeech");
        startActivityForResult(mFragment.getRecognizerIntent(), REQUEST_SPEECH);
    }
};
mFragment.setSpeechRecognitionCallback(mSpeechRecognitionCallback);

When a SpeechRecognitionCallback is set, the Android Leanback support library will let your activity process the voice search action instead of using the internal SpeechRecognizer. The app can then use RecognizerIntent to support speech recognition.

If you have an Android TV app targeting API Level 23, please update the app to use SpeechRecognitionCallback and remove RECORD_AUDIO permission from your manifest.

Categories: Programming

Chrome custom tabs smooth the transition between apps and the web

Android Developers Blog - Tue, 09/29/2015 - 23:53

Originally posted on the Chromium blog

Posted by Yusuf Ozuysal, Chief Tab Customizer

Android app developers face a difficult tradeoff when it comes to showing web content in their Android app. Opening links in the browser is familiar for users and easy to implement, but results in a heavy-weight transition between the app and the web. You can get more granular control by building a custom browsing experience on top of Android’s WebView, but at the cost of more technical complexity and an unfamiliar browsing experience for users. A new feature in the most recent version of Chrome called custom tabs addresses this tradeoff by allowing an app to customize how Chrome looks and feels, making the transition from app to web content fast and seamless.

Chrome custom tabs with pre-loading vs. Chrome and WebView

Chrome custom tabs allow an app to provide a fast, integrated, and familiar web experience for users. Custom tabs are optimized to load faster than WebViews and traditional methods of launching Chrome. As shown above, apps can pre-load pages in the background so they appear to load nearly instantly when the user navigates to them. Apps can also customize the look and feel of Chrome to match their app by changing the toolbar color, adjusting the transition animations, and even adding custom actions to the toolbar so users can perform app-specific actions directly from the custom tab.

Custom tabs benefit from Chrome’s advanced security features, including its multi-process architecture and robust permissions model. They use the same cookie jar as Chrome, allowing a familiar browsing experience while keeping users’ information safe. For example, if a user has signed in to a website in Chrome, they will also be signed in if they visit the same site in a custom tab. Other features that help users browse the web, like saved passwords, autofill, Tap to Search, and Sync, are also available in custom tabs.

Custom tabs are easy for developers to integrate into their app by tweaking a few parameters of their existing VIEW intents. Basic integrations require only a few extra lines of code, and a support library makes more complex integrations easy to accomplish, too. Since custom tabs are a feature of Chrome, they’re available on any version of Android where recent versions of Chrome are available.
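A minimal sketch of such a basic integration, assuming the custom tabs support library of the time (android.support.customtabs); the helper class and toolbar color are illustrative.

import android.app.Activity;
import android.graphics.Color;
import android.net.Uri;
import android.support.customtabs.CustomTabsIntent;

class CustomTabLauncher {
    // Open a URL in a Chrome custom tab with an app-matched toolbar color.
    static void open(Activity activity, String url) {
        CustomTabsIntent customTabsIntent = new CustomTabsIntent.Builder()
                .setToolbarColor(Color.parseColor("#2196F3")) // match your app theme
                .setShowTitle(true)
                .build();
        customTabsIntent.launchUrl(activity, Uri.parse(url));
    }
}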

Users will begin to experience custom tabs in the coming weeks in Feedly, The Guardian, Medium, Skyscanner, Stack Overflow, Tumblr, and Twitter, with more coming soon. To get started integrating custom tabs into your own application, check out the developer guide.

Categories: Programming

Support for 100MB APKs on Google Play

Android Developers Blog - Tue, 09/29/2015 - 23:49

Posted by Kobi Glick, Google Play team

Smartphones are powerful devices that can support diverse tasks from graphically intensive games to helping people get work done from anywhere. We understand that developers are challenged with delivering a delightful user experience that maximizes the hardware of the device, while also ensuring that their users can download, install, and open the app as quickly as possible. It’s a tough balance to strike, especially when you’re targeting diverse global audiences.

To support the growing number of developers who are building richer apps and games on Google Play, we are increasing the APK file size limit from 50MB to 100MB. This means developers can publish APKs up to 100MB in size, and users will see a warning only when the app exceeds the 100MB quota and makes use of Expansion Files. The default update setting for users will continue to be auto-updating apps over Wi-Fi only, enabling users to access higher quality apps and games while conserving their data usage.

Even though you can make your app bigger, it doesn’t always mean you should. Remember to keep in mind the following factors:

  • Mobile data connectivity: Users around the world have varying mobile data connectivity speeds. Particularly in developing countries, many people are coming online with connections slower than those of users in countries like the U.S. and Japan. Users on a slow connection are less likely to install an app or game that is going to take a long time to download.
  • Mobile data caps: Many mobile networks around the world give users a limited number of MB that they can download each month without incurring additional charges. Users are often wary of downloading large files for fear of exceeding their limits.
  • App performance: Mobile devices have limited RAM and storage space. The larger your app or game, the slower it may run, particularly on older devices.
  • Install time: People want to start using your app or game as quickly as possible after tapping the install button. Longer wait times increase the risk they’ll give up.

We hope that, in certain circumstances, this file size increase is useful and enables you to build higher quality apps and games that users love.

Categories: Programming

Sponsored Post: iStreamPlanet, Instrumental, Location Labs, Enova, Surge, Redis Labs, VoltDB, Datadog, SignalFx, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • As a Networking & Systems Software Engineer at iStreamPlanet you’ll be driving the design and implementation of a high-throughput video distribution system. Our cloud-based approach to video streaming requires terabytes of high-definition video routed throughout the world. You will work in a highly-collaborative, agile environment that thrives on success and eats big challenges for lunch. Please apply here.

  • As a Scalable Storage Software Engineer at iStreamPlanet you’ll be driving the design and implementation of numerous storage systems including software services, analytics and video archival. Our cloud-based approach to world-wide video streaming requires performant, scalable, and reliable storage and processing of data. You will work on small, collaborative teams to solve big problems, where you can see the impact of your work on the business. Please apply here.

  • is a *profitable* fast-growing SaaS startup looking for a Lead DevOps/Infrastructure engineer to join our ~10 person team in Palo Alto or *remotely*. Come help us improve API performance, tune our databases, tighten up security, setup autoscaling, make deployments faster and safer, scale our MongoDB/Elasticsearch/MySQL/Redis data stores, setup centralized logging, instrument our app with metric collection, set up better monitoring, etc. Learn more and apply here.

  • Location Labs is the global pioneer in mobile security for humans. Our services are used by millions of monthly paying subscribers worldwide. We were named one of Entrepreneur magazine’s “most brilliant” companies and TechCrunch said we’ve “cracked the code” for mobile monetization. If you are someone who enjoys the scrappy, get your hands dirty atmosphere of a startup, but has the measured patience and practices to keep things robust, well documented, and repeatable, Location Labs is the place for you. Please apply here.

  • As a Lead Software Engineer at Enova you’ll be one of Enova’s heavy hitters, overseeing technical components of major projects. We’re going to ask you to build a bridge, and you’ll get it built, no matter what. You’ll balance technical requirements with business needs, while advocating for a high quality codebase when working with full business teams. You’re fluent in ‘technical’ language and ‘business’ language, because you’re the engineer everyone counts on to understand how it works now, how it should work, and how it will work. Please apply here.

  • As a UI Architect at Enova, you will be the elite representative of our UI culture. You will be responsible for setting a vision, guiding direction and upholding high standards within our culture. You will collaborate closely with a group of talented UI Engineers, UX Designers, Visual Designers, Marketing Associates and other key business stakeholders to establish and maintain frontend development standards across the company. Please apply here.

  • VoltDB's in-memory SQL database combines streaming analytics with transaction processing in a single, horizontal scale-out platform. Customers use VoltDB to build applications that process streaming data the instant it arrives to make immediate, per-event, context-aware decisions. If you want to join our ground-breaking engineering team and make a real impact, apply here.  

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Surge 2015. Want to mingle with some of the leading practitioners in the scalability, performance, and web operations space? Looking for a conference that isn't just about pitching you highly polished success stories, but that actually puts an emphasis on learning from real world experiences, including failures? Surge is the conference for you.

  • Your event could be here. How cool is that?
Cool Products and Services
  • Instrumental is a hosted real-time application monitoring platform. In the words of one of our customers: "Instrumental is the first place we look when an issue occurs. Graphite was always the last place we looked."

  • Real-time correlation across your logs, metrics and events. just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • Datadog is a monitoring service for scaling cloud infrastructures that bridges together data from servers, databases, apps and other tools. Datadog provides Dev and Ops teams with insights from their cloud environments that keep applications running smoothly. Datadog is available for a 14 day free trial at

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Sumo Logic alternative.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a .Net native in-memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here:

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required.

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • Site24x7: Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

All Conjectures Need a Hypothesis

Herding Cats - Glen Alleman - Tue, 09/29/2015 - 16:53

As far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. Andreas Osiander (editor), preface to De Revolutionibus, Copernicus, quoted in To Explain the World: The Discovery of Modern Science, Steven Weinberg

In the realm of project, product, and business management we come across nearly endless ideas conjectured to solve some problem or another.

Replace the word astronomy with whatever idea those conjecturing a solution claim will fix some unnamed problem.

From removing the smell of dysfunction, to increasing productivity by 10 times, to removing the need to have any governance frameworks, to making decisions in the presence of uncertainty without the need to know the impacts of those decisions.

In the absence of any hypothesis by which to test those conjectures, leaving a greater fool than when entering is the likely result. In the absence of a testable hypothesis, any conjecture is an unsubstantiated anecdotal opinion.

An anecdote is a sample of one from an unknown population

And that makes those conjectures doubly useless, because not only can they not be tested, they are likely applicable only to those making the conjectures.

If we are ever to discover new and innovative ways to increase the probability of success for our project work, we need to move far away from conjecture, anecdote, and untestable ideas and toward evidence-based assessment of the problem, the proposed solutions, and the evidence that the proposed correction will in fact result in improvement.

One Final Note

As a first-year grad student in Physics I learned a critical concept that is missing from much of the conversation around process improvement. When an idea is put forward in the science and engineering world, the very first thing to do is a literature search.

  • Is this idea recognized by others as being credible? Are there supporting studies that confirm the effectiveness and applicability of the idea outside the author's own experience?
  • Are those supporting the idea themselves credible, or just following the herd?
  • Are there references to the idea that have been tested outside the author's own experience?
  • Are there criticisms of the idea in the literature? Seeking critics is itself a critical success factor in testing any idea. There would be knock-down drag-out shouting matches in the halls of the physics building about an idea. Nobel Laureates would be waving arms and speaking in loud voices. In the end it was a test of new and emergent ideas. And anyone who takes offense at being criticized has no basis to stand on for defending his idea.
  • Is the idea the basis of a business, e.g. is the author selling something: a book, a seminar, consulting services?
  • Has this idea been tested by someone else? We'd tear down our experiment, have someone across the country rebuild it, run the data and see if they got the same results.

Without some way to assess the credibility of an idea, either through replication or through assessment against a baseline (governance framework, accounting rules, regulations), the idea is just an opinion. And as Daniel Patrick Moynihan said:

Everyone is entitled to his own opinion, but not his own facts. 

and of course my favorite

Again and again and again — what are the facts? Shun wishful thinking, ignore divine revelation, forget what "the stars foretell," avoid opinion, care not what the neighbors think, never mind the unguessable "verdict of history" — what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts! - Robert Heinlein (1978)

Related articles: Architecture-Centered ERP Systems in the Manufacturing Domain, IT Risk Management, Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Shifting from Scarcity to Abundance Thinking

Mike Cohn's Blog - Tue, 09/29/2015 - 15:00

Over the past few months, I've read a few books on marketing. But I've also taken a handful of video training courses on marketing and have been listening to some enjoyable podcasts on the subject. Back when I was in college, we really did look down on the people majoring in marketing. This was the 1980s. And it seemed like they often chose that major because it didn't require any math at all. And to an outsider, the main skills they appeared to need were a firm handshake and a good golf game.

A lot has changed since then and I've become fascinated with marketing because of all the data that's available to marketers today. It appeals to my analytical side. So while consuming all this content on marketing, I noticed something. Individuals whom I thought would be competitors in that space were often endorsing one another.

Scarcity and Abundance Mindsets

And I realized this no longer happens as often as it should within the community of Scrum trainers and coaches. Too many professional Scrum trainers and coaches have shifted into a scarcity mindset. Each trainer then acts as though any win by another trainer is a loss for his or her own business. Their thinking implies there are only so many clients in the world and clients are to be divvied up like slices of a pie.

But what if that client someone else trains is then wildly successful and tells 10 or 100 or 1,000 other people? The pie grows and it can very likely grow large enough that everyone wins. This is an abundance mindset rather than a scarcity mindset.

The Founding of the Scrum Alliance

When Ken Schwaber, Esther Derby and I co-founded the Scrum Alliance, part of the plan was a credibility transfer from ourselves and the organization to other trainers. We knew that for Scrum to succeed in the world, it needed a lot more training and coaching than the three of us could or wanted to provide. And, approaching it with an abundance mindset, we knew the need for Scrum training and coaching would grow well beyond what the three of us could provide.

And that brings me to today: Many of us who make our livings training and coaching Scrum (or agile in general) seem to have reverted to a scarcity mindset. I'm generalizing, of course. There are some wonderful examples of people within the Scrum community who live an abundance mindset daily. I wonder though if it gets tough for them to continue doing so when many around them are doing the opposite.

We Are Partners and Future Partners, Not Competitors

One of the marketing books I read made an interesting point. The author said he never refers to others in his space as "competitors." Instead, they are "partners" and "future partners." And this is indeed what I noticed while participating in video training and podcasts from these marketing experts. Even when some of them could have viewed themselves as competitors since they sold similar products and services, they knew that if I bought a product from one of them, it did not preclude me from buying a product from the others.

So, if you are a Scrum Alliance Certified Scrum Trainer or Coach, or a Professional Scrum Trainer, please know that I consider you a partner or future partner. You aren't my competition, and I'm not yours. Our competition is crappy products and all the forces of nature that conspire to make teams build crappy software.

And, if you're reading my blog looking for a training course, I hope you'll consider mine. But, it's a big world and I may not be in your area at the time you want training. In that case, I sincerely hope you'll consider training, coaching or at least reading the blogs of my "partners and future partners."

Finally, CSTs, CSCs and PSTs (my partners and future partners!): I encourage you to leave links to training pages, bios, blogs, etc. below.

As always, I look forward to everyone's thoughts in the comments below.

Software Development Conferences Forecast September 2015

From the Editor of Methods & Tools - Tue, 09/29/2015 - 07:52
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & […]

Announcing General Availability of HDInsight on Linux + new Data Lake Services and Language

ScottGu's Blog - Scott Guthrie - Mon, 09/28/2015 - 21:54

Today, I’m happy to announce several key additions to our big data services in Azure, including the General Availability of HDInsight on Linux, as well as the introduction of our new Azure Data Lake and Language services.

General Availability of HDInsight on Linux

Today we are announcing general availability of our HDInsight service on Ubuntu Linux.  HDInsight enables you to easily run managed Hadoop clusters in the cloud.  With today’s release we now allow you to configure these clusters to run using either a Windows Server operating system or an Ubuntu-based Linux operating system.

HDInsight on Linux enables even broader support for Hadoop ecosystem partners to run in HDInsight, providing you an even greater choice of preferred tools and applications for running Hadoop workloads. Both Linux and Windows clusters in HDInsight are built on the same standard Hadoop distribution and offer the same set of rich capabilities.

Today’s new release also enables additional capabilities such as cluster scaling, virtual network integration and script action support. Furthermore, in addition to the Hadoop cluster type, you can now create HBase and Storm clusters on Linux for your NoSQL and real-time processing needs, such as building an IoT application.

Create a cluster

HDInsight clusters running using Linux can now be easily created from the Azure Management portal under the Data + Analytics section.  Simply select Ubuntu from the cluster operating system drop-down, as well as optionally choose the cluster type you wish to create (we support base Hadoop as well as clusters pre-configured for workloads like Storm, Spark, HBase, etc).


All HDInsight Linux clusters can be managed by Apache Ambari. Ambari provides the ability to customize configuration settings of your Hadoop cluster while giving you a unified view of the performance and state of your cluster and providing monitoring and alerting within the HDInsight cluster.


Installing additional applications and Hadoop components

Similar to HDInsight Windows clusters, you can now customize your Linux cluster by installing additional applications or Hadoop components that are not part of default HDInsight deployment. This can be accomplished using Bash scripts with script action capability.  As an example, you can now install Hue on an HDInsight Linux cluster and easily use it with your workloads:


Develop using Familiar Tools

All HDInsight Linux clusters come with SSH connectivity enabled by default. You can connect to the cluster via an SSH client of your choice. Moreover, SSH tunneling can be leveraged to remotely access all of the Hadoop web applications from the browser.

New Azure Data Lake Services and Language

We continue to see customers enabling amazing scenarios with big data in Azure including analyzing social graphs to increase charitable giving, analyzing radiation exposure and using the signals from thousands of devices to simulate ways for utility customers to optimize their monthly bills. These and other use cases are resulting in even more data being collected in Azure. In order to be able to dive deep into all of this data, and process it in different ways, you can now use our Azure Data Lake capabilities – which are 3 services that make big data easy.

The first service in the family is available today: Azure HDInsight, our managed Hadoop service that lets you focus on finding insights, and not spend your time having to manage clusters. HDInsight lets you deploy Hadoop, Spark, Storm and HBase clusters, running on Linux or Windows, managed, monitored and supported by Microsoft with a 99.9% SLA.

The other two services, Azure Data Lake Store and Azure Data Lake Analytics introduced below, are available in private preview today and will be available broadly for public usage shortly.

Azure Data Lake Store

Azure Data Lake Store is a hyper-scale HDFS repository designed specifically for big data analytics workloads in the cloud. Azure Data Lake Store solves the big data challenges of volume, variety, and velocity by enabling you to store data of any type, at any size, and process it at any scale. Azure Data Lake Store can support near real-time scenarios such as the Internet of Things (IoT) as well as throughput-intensive analytics on huge data volumes. The Azure Data Lake Store also supports a variety of computation workloads by removing many of the restrictions constraining traditional analytics infrastructure like the pre-definition of schema and the creation of multiple data silos. Once located in the Azure Data Lake Store, Hadoop-based engines such as Azure HDInsight can easily mine the data to discover new insights.

Some of the key capabilities of Azure Data Lake Store include:

  • Any Data: A distributed file store that allows you to store data in its native format, Azure Data Lake Store eliminates the need to transform or pre-define schema in order to store data.
  • Any Size: With no fixed limits to file or account sizes, Azure Data Lake Store enables you to store kilobytes to exabytes with immediate read/write access.
  • At Any Scale: You can scale throughput to meet the demands of your analytic systems including the high throughput needed to analyze exabytes of data. In addition, it is built to handle high volumes of small writes at low latency making it optimal for near real-time scenarios like website analytics, and Internet of Things (IoT).
  • HDFS Compatible: It works out-of-the-box with the Hadoop ecosystem including other Azure Data Lake services such as HDInsight.
  • Fully Integrated with Azure Active Directory: Azure Data Lake Store is integrated with Azure Active Directory for identity and access management over all of your data.
Azure Data Lake Analytics with U-SQL

The new Azure Data Lake Analytics service makes it much easier to create and manage big data jobs. Built on YARN and years of experience running analytics pipelines for Office 365, XBox Live, Windows and Bing, the Azure Data Lake Analytics service is the most productive way to get insights from big data. You can get started in the Azure management portal, querying across data in blobs, Azure Data Lake Store, and Azure SQL DB. By simply moving a slider, you can scale up as much computing power as you’d like to run your data transformation jobs.


Today we are introducing a new U-SQL offering in the analytics service, an evolution of the familiar syntax of SQL.  U-SQL allows you to write declarative big data jobs, as well as easily include your own user code as part of those jobs. Inside Microsoft, developers have been using this combination in order to be productive operating on massive data sets of many exabytes of scale, processing mission critical data pipelines. In addition to providing an easy to use experience in the Azure management portal, we are delivering a rich set of tools in Visual Studio for debugging and optimizing your U-SQL jobs. This lets you play back and analyze your big data jobs, understanding bottlenecks and opportunities to improve both performance and efficiency, so that you can pay only for the resources you need and continually tune your operations.

Learn More

For more information and to get started, check out the following links:

Hope this helps,


Categories: Architecture, Programming

GTD (Getting Things Done)

Xebia Blog - Mon, 09/28/2015 - 17:58

As a consultant I am doing a lot of things, so to keep up I have always used some form of a TODO list. I did this because it helped me break down my tasks into smaller ones and keep focused, but also because I kept remembering a quote I once heard: “smart people write things down, dumb people try to remember it”.

Years ago I read the books “Seven Habits of Highly Effective People” and “Switch”. In my research into how to become more effective I came into contact with GTD and decided to try it out. In this post I want to show people who have heard about GTD how I use it and how it helps me.

For those who don’t know GTD or haven’t heard about the two books I mentioned, please follow the links: Getting Things Done, Seven Habits of Highly Effective People, Switch, and have fun.

How do I use GTD?
Because of my experience before GTD I knew I needed a digital list that I could access from my phone, laptop or tablet. It was also crucial for me to be able to have multiple lists and reminders. Because of these requirements I settled on using Todoist after having evaluated others as well.

I have a couple of main groups like Personal, Work, Read-me, Someday & Maybe, Home, etc. In those lists I have topics like Xebia, CustomerX, CustomerY, etc. And in those topics I have the actual tasks or lists with tasks that I need to do.

I write everything into my Inbox list the moment a task or idea pops up in my mind. I do this because most of the time I don’t have the time to figure out directly what I want to do with it, so I just put it into my Inbox and deal with it later.

As a consultant I have to deal with external lists like team scrum boards. I don’t want to depend on them, so I write down my own tasks with references to them. In those tasks I write the things that I need to do today, tomorrow, or longer if it is relevant to making decisions in the long run.

Every day when I finish my work, I quickly look at my task list and decide on priorities for tomorrow; then every morning I re-evaluate those decisions based on the input of that day.

As a programmer I like to break down a programming task into smaller solutions, so sometimes I use my list as a reference for micromanaging tasks that take a couple of hours.

To have a better overview of things in my list I also use tags like tablet, PC, home and tel, which help me pick up tasks based on the context I am in at that moment.

Besides deciding on priorities every day, I also do a weekly review where I decide on weekly priorities and add or remove tasks. I also set reminders on tasks; I make these day-based, but if it is really necessary I use time-based reminders.

Because I want to have one Inbox, I stick to zero mail in my mail inboxes. This means every time I check my mail I read it and then either delete it or archive it, and if needed make a task to do something.

What does GTD do for me?
The biggest win for me is that it helps me clear my mind: because I put everything in there that I need to do, my mind does not have to remember all those things. It is also a relaxing feeling knowing that everything you need to do is registered and you will not forget to do it.

It gives me structure and it allows me to make better priorities and decisions about saying yes or no to something because everything is in one place.

Having a task registered so that you can cross it off when it’s done gives a nice fulfilling feeling.

Having a big list of things that you think you need to do can be quite overwhelming, and in some cases it can also feel like pressure when tasks have been on the list for a long time. That’s why I keep re-evaluating tasks and removing things after some time.

Task lists and calendars can sometimes collide, so it is important to keep your agenda as a part of your list, although it’s not a list.

What am I still missing?
Right now, from the GTD point of view, I miss better time management. Besides that I would like to start a journal of relevant things that happened or people I spoke to, and also incorporate that with my GTD lists (for example combining Evernote/OneNote with my Todoist lists).

Everything else I miss is technical:

I miss a proper integration of all my mail where all the calendar invites are working without having to connect all accounts on all different devices. I also miss a proper integration between my mail and Todoist for creating and referencing mail.

Because I write down every task or idea that pops up in my mind, it would be great if voice recognition worked a little better for ‘me’ when I am in a car.

GTD has helped me structure my tasks and gave me more control in making decisions around them. By writing this post I am hoping to trigger somebody else to look into GTD and maybe have an impact on his or her effectiveness.

I am also curious what your opinion is on GTD, and whether you have any tips for me regarding GTD.

How Facebook Tells Your Friends You're Safe in a Disaster in Under Five Minutes

In a disaster there’s a raw and immediate need to know your loved ones are safe. I felt this way during 9/11. I know I’ll feel this way during the next wildfire in our area. And I vividly remember feeling this way during the 1989 Loma Prieta earthquake.

Most earthquakes pass beneath notice. Not this one and everyone knew it. After ceiling tiles stopped falling like snowflakes in the computer lab, we convinced ourselves the building would not collapse, and all thoughts turned to the safety of loved ones. As it must have for everyone else. Making an outgoing call was nearly impossible, all the phone lines were busy as calls poured into the Bay Area from all over the nation. Information was stuck. Many tense hours were spent in ignorance as the TV showed a constant stream of death and destruction.

It’s over a quarter of a century later, can we do any better?

Facebook can, through a product called Safety Check, which connects friends and loved ones during a disaster. When a disaster hits, Safety Check prompts people in the area to indicate if they are OK or not. Then Facebook closes the worry loop by telling their friends how they are doing.

Brian Sa, Engineering Manager at Facebook, created Safety Check out of his experience of the devastating 2011 earthquake in Fukushima, Japan. He told his very moving story in a talk he gave at @Scale.

During the earthquake Brian put a banner on Facebook with helpful information sources, but he was moved to find a better way to help people in need. That impulse became Safety Check.

My first reaction to Safety Check was damn, why didn’t anyone think of this before? It’s such a powerful idea.

The answer became clear as I listened to a talk in the same video given by Peter Cottle, Software Engineer at Facebook, who also talked about building Safety Check.

It’s likely only Facebook could have created Safety Check. This observation dovetails nicely with Brian’s main lesson in his talk:

  • Solve real-world problem in a way that only YOU can. Instead of taking the conventional route, think about the unique role you and your company can play.

Only Facebook could create Safety Check, not because of resources as you might expect, but because Facebook lets employees build crazy things like Safety Check, because only Facebook has 1.5 billion geographically distributed users with a degree of separation between them of only 4.74 edges, and because only Facebook has users who are fanatical about reading their news feeds. More about this later.

In fact, Peter talked about how resources were a problem in a sort of product development Catch-22 at Facebook. The team for Safety Check was small and didn’t have a lot of resources attached to it. They had to build the product and prove its success without resources before they could get the resources to build the product. The problem had to be efficiently solved at scale without the application of lots of money and lots of resources.

As is often the case constraints led to a clever solution. A small team couldn’t build a big pipeline and index, so they wrote some hacky PHP and effectively got the job done at scale.

So how did Facebook build Safety Check? Here’s my gloss on both Brian’s and Peter’s talks:

Categories: Architecture

Projects versus Products

Herding Cats - Glen Alleman - Mon, 09/28/2015 - 16:00

There's a common notion in some agile circles that projects aren't the right vehicle for developing products. This is usually expressed by Agile Coaches. As a business manager applying Agile to develop products as well as delivering Operational Services based on those products, projects are how we account for the expenditures on those outcomes and coordinate the resources needed to produce products as planned.

In our software product business, we use both a Product Manager and a Project Manager. These roles are separate and at the same time overlapping.

  • Products are customer facing. Market needs, business models for revenue, cost, and earnings, interfaces with Marketing and Sales, and other business management processes (contracts, accounting) are Product focused.

Product Managers focus on Markets. What features are the market segments demanding? What features Must Ship and what features can we drop? What is the Sales impact of any slipped dates?

  • Projects are internally facing - internal resources need looking after. The notion of self-organizing is fine, but self-directed only works when the work efforts have direct contact with the customers. And even then, without some oversight - governance - a self-directed team has limitations in the larger context of the business. If the self-directed team IS the business, then the need for external governance is removed. This would be rare in any non-trivial business.

Project Managers are inward focused to the resource allocation and management of the development teams. How can we get the work done to meet the market demand? When can we ship the product to maintain the sales forecast?

In very small companies and startups these roles are usually performed by the same person.

Once we move beyond the sole proprietor and his friends, separation of concerns takes over. These roles become distinct.

  • The Product Manager is a member of the Business Development Team, tasked with the business side of the product delivery process. 
  • The Project Management Team (PMs and Coordinators, along with development leads and staff) is a member of the delivery team, tasked with producing the capabilities needed to capture and maintain the market.

Products are about What and Why. Projects are about Who, How, When, and Where. (From Rudyard Kipling's Six Trusted Friends.)

Product Management focuses on the overall product vision - usually documented in a Product Roadmap, showing the release cycles of capabilities and features as a function of time. Project Management is about logistics, schedule, planning, staffing, and work management to produce products in accordance with the Road Map.

When agile says it's customer focused, this is true only when there is One customer for the Product, rather than a Market for the Product, and that customer is on site. It would not be a very robust product company if it had only one customer.

When we hear that Products are not Projects, ask: in what domain, business size, and value at risk is it possible not to separate these concerns between Products and Projects?

Related articles: Architecture-Centered ERP Systems in the Manufacturing Domain, IT Risk Management, Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Trust the Process

Making the Complex Simple - John Sonmez - Mon, 09/28/2015 - 13:00

For the past few days, I’ve been learning to type. I’ve been getting by with “touch typing” by using approximately four fingers. So far, it’s served me well. I can type about 60 words per minute without looking at the keyboard, but I’ve always known I’d be more efficient if I could type the correct […]

The post Trust the Process appeared first on Simple Programmer.

Categories: Programming

Online AzureCon Conference this Tuesday

ScottGu's Blog - Scott Guthrie - Mon, 09/28/2015 - 04:35

This Tuesday, Sept 29th, we are hosting our online AzureCon event – a free event with 60 technical sessions on Azure presented by both the Azure engineering team and by MVPs and customers who use Azure today and will share their best practices.

I’ll be kicking off the event with a keynote at 9am PDT.  Watch it to learn the latest on Azure, and hear about a lot of exciting new announcements.  We’ll then have some fantastic sessions that you can watch throughout the day to learn even more.


Hope to see you there!


Categories: Architecture, Programming