
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Google Play services 7.0 - Places Everyone!

Android Developers Blog - Thu, 03/19/2015 - 23:39

Posted by Ian Lake, Developer Advocate

Today, we’re bringing you new tools to build better apps with the completion of the rollout of Google Play services 7.0. With this release, we’re delivering improvements to location settings experiences, a brand new API for place information, new fitness data, Google Play Games, and more.

Location Settings Dialog

While the FusedLocationProviderApi combines multiple sensors to give you the optimal location, the accuracy of the location your app receives still depends greatly on what settings are enabled on the device (e.g. GPS, wifi, airplane mode, etc). In Google Play services 7.0, we’re introducing a standard mechanism to check that the necessary location settings are enabled for a given LocationRequest to succeed. If there are possible improvements, you can display a one touch control for the user to change their settings without leaving your app.
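For example, a minimal sketch of how this check might look with the new SettingsApi (the googleApiClient, MyActivity and REQUEST_CHECK_SETTINGS names are placeholder assumptions, and error handling is trimmed):

LocationRequest locationRequest = LocationRequest.create()
        .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY)
        .setInterval(10000);

LocationSettingsRequest settingsRequest = new LocationSettingsRequest.Builder()
        .addLocationRequest(locationRequest)
        .build();

PendingResult<LocationSettingsResult> result =
        LocationServices.SettingsApi.checkLocationSettings(googleApiClient, settingsRequest);

result.setResultCallback(new ResultCallback<LocationSettingsResult>() {
    @Override
    public void onResult(LocationSettingsResult settingsResult) {
        Status status = settingsResult.getStatus();
        if (status.getStatusCode() == LocationSettingsStatusCodes.RESOLUTION_REQUIRED) {
            try {
                // Show the one touch dialog that lets the user fix their settings in place.
                status.startResolutionForResult(MyActivity.this, REQUEST_CHECK_SETTINGS);
            } catch (IntentSender.SendIntentException e) {
                // Fall back to a degraded experience.
            }
        }
    }
});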

This API provides a great opportunity to create a much better user experience, particularly if location information is critical to your app. That was the case for Google Maps: when they integrated the Location Settings dialog, they saw a dramatic increase in the number of users in a good location state.

Places API

Location can be so much more than a latitude and longitude: the new Places API makes it easy to get details from Google’s database of places and businesses. The built-in place picker makes it easy for the user to pick their current place and provides all the relevant place details including name, address, phone number, website, and more.
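As a rough sketch, launching the built-in picker and reading its result might look like this inside an Activity (REQUEST_PLACE_PICKER is a placeholder request code; the classes come from the com.google.android.gms.location.places packages):

private static final int REQUEST_PLACE_PICKER = 1;

private void pickPlace() throws GooglePlayServicesRepairableException,
        GooglePlayServicesNotAvailableException {
    PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
    startActivityForResult(builder.build(this), REQUEST_PLACE_PICKER);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_PLACE_PICKER && resultCode == RESULT_OK) {
        Place place = PlacePicker.getPlace(data, this);
        // Name, address, phone number, website and more are available on the Place.
        Log.d("PlacePicker", place.getName() + " - " + place.getAddress());
    }
}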

If you prefer to provide your own UI, the getCurrentPlace() API returns places directly around the user’s current location. Autocomplete predictions are also provided to allow a low latency search experience directly within your app.
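A minimal sketch of getCurrentPlace(), assuming an already connected GoogleApiClient built with Places.PLACE_DETECTION_API, might look like this:

PendingResult<PlaceLikelihoodBuffer> result =
        Places.PlaceDetectionApi.getCurrentPlace(googleApiClient, null);

result.setResultCallback(new ResultCallback<PlaceLikelihoodBuffer>() {
    @Override
    public void onResult(PlaceLikelihoodBuffer likelyPlaces) {
        for (PlaceLikelihood placeLikelihood : likelyPlaces) {
            Log.d("Places", placeLikelihood.getPlace().getName()
                    + " - likelihood " + placeLikelihood.getLikelihood());
        }
        likelyPlaces.release(); // buffers must be released when you are done with them
    }
});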

You can also manually add places with the addPlace() API and report that the user is at a particular place, ensuring that even the most explorative users can input and share their favorite new places.

The Places API will also be available cross-platform: in a few days, you’ll be able to apply for the Places API for iOS beta program to ensure a great and consistent user experience across mobile platforms.

Google Fit

Google Fit makes building fitness apps easier with fitness specific APIs on retrieving sensor data like current location and speed, collecting and storing activity data in Google Fit’s open platform, and automatically aggregating that data into a single view of the user’s fitness data.

In Google Play services 7.0, the previous Fitness.API that you passed into your GoogleApiClient has now been replaced with a number of APIs, matching the high level set of Google Fit Android APIs:

  • SENSORS_API to access raw sensor data via SensorsApi
  • RECORDING_API to record data via RecordingApi
  • HISTORY_API for inserting, deleting, or reading data via HistoryApi
  • SESSIONS_API for managing sessions via SessionsApi
  • BLE_API to interact with Bluetooth Low Energy devices via BleApi
  • CONFIG_API to access custom data types and settings for Google Fit via ConfigApi
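For example, a client that only records and reads activity data could now request just those two APIs. A minimal sketch (the connection callbacks and the scope shown are assumptions for illustration):

GoogleApiClient client = new GoogleApiClient.Builder(this)
        .addApi(Fitness.HISTORY_API)     // read, insert and delete fitness data
        .addApi(Fitness.RECORDING_API)   // subscribe to background data collection
        .addScope(new Scope(Scopes.FITNESS_ACTIVITY_READ_WRITE))
        .addConnectionCallbacks(connectionCallbacks)
        .addOnConnectionFailedListener(connectionFailedListener)
        .build();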

This change significantly reduces the memory requirement for Google Fit enabled apps running in the background. As always, apps built on previous versions of Google Play services will continue to work, but we strongly suggest you rebuild your Google Fit enabled apps to take advantage of this change.

Having all the data can be an empowering part of making meaningful changes, and Google Fit is augmenting its existing data types with the addition of body fat percentage and sleep data.

Google Mobile Ads

Since launching Google Analytics in AdMob last year, we’ve found the integration of AdMob and Google Analytics a powerful combination for analyzing how your users really use your app. This new release enables any Google Mobile Ads SDK implementation to automatically get Google Analytics integration, giving you the number of users and sessions, session duration, operating systems, device models, geography, and automatic screen reporting without any additional development work.

In addition, we’ve made numerous improvements across the SDK including ad request prefetching (saving battery usage and improving apparent latency) and making the SDK MRAIDv2 compliant.

Google Play Games

Announced at Game Developers Conference (GDC), we’re offering new tools to supercharge your games on Google Play. Included in Google Play services 7.0 is the Nearby Connections API, allowing games to seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV.

App Indexing

App Indexing lets Google index apps just like websites, enabling Google search results to deep-link directly into your native app. We've simplified the App Indexing API to make this integration even easier for you by combining the existing view()/viewEnd() and action()/end() flows into a single start() and end() API.
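As a rough sketch, a content screen might report a view with the combined flow like this (the title and URIs are hypothetical, and googleApiClient is assumed to have been built with AppIndex.API):

Action viewAction = Action.newAction(
        Action.TYPE_VIEW,
        "Recipe: Apple Pie",                                   // title of the content
        Uri.parse("http://example.com/recipes/apple-pie"),     // web URL (assumption)
        Uri.parse("android-app://com.example.app/http/example.com/recipes/apple-pie"));

// When the content becomes visible to the user:
AppIndex.AppIndexApi.start(googleApiClient, viewAction);

// ...and when the user leaves the screen:
AppIndex.AppIndexApi.end(googleApiClient, viewAction);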

Changes to GoogleApiClient

GoogleApiClient serves as the common entry point for accessing Google APIs. For this release, we’ve made retrieval of Google OAuth 2.0 tokens part of GoogleApiClient, making it much easier to request server auth codes to access Google APIs.

SDK Now Available!

You can get started developing today by downloading the Google Play services SDK from the Android SDK Manager.

To learn more about Google Play services and the APIs available to you through it, visit the Google Services section on the Android Developer site.

Join the discussion on

+Android Developers
Categories: Programming

Take your apps on the road with Android Auto

Android Developers Blog - Thu, 03/19/2015 - 19:17

Posted by Wayne Piekarski, Developer Advocate

Starting today, anyone can take their apps for a drive with Android Auto using Android 5.0+ devices, connected to compatible cars and aftermarket head units. Android Auto lets you easily extend your apps to the car in an efficient way for drivers, allowing them to stay connected while still keeping their hands on the wheel and their eyes on the road. When users connect their phone to a compatible vehicle, they will see an Android experience optimized for the head unit display that seamlessly integrates voice input, touch screen controls, and steering wheel buttons. Moreover, Android Auto provides consistent UX guidelines to ensure that developers are able to create great experiences across many diverse manufacturers and vehicle models, with a single application available on Google Play.

With the availability of the Pioneer AVIC-8100NEX, AVIC-7100NEX, and AVH-4100NEX aftermarket systems in the US, the AVIC-F77DAB, AVIC-F70DAB, AVH-X8700BT in the UK, and in Australia the AVIC-F70DAB, AVH-X8750BT, it is now possible to add Android Auto to many cars already on the road. As a developer, you now have a way to test your apps in a realistic environment. These are just the first Android Auto devices to launch, and vehicles from major auto manufacturers with integrated Android Auto support are coming soon.

With the increasing adoption of Android Auto by manufacturers, your users are going to be expecting more support of their apps in the car, so now is a good time to get started with development. If you are new to Android Auto, check out our DevByte video, which explains more about how this works, along with some live demos.

The SDK for Android Auto was made available to developers a few months ago, and now Google Play is ready to accept your application updates. Your existing apps can take advantage of all these cool new Android Auto features with just a few small changes. You’ll need to add Android Auto support to your application, and then agree to the Android Auto terms in the Pricing & Distribution category in the Google Play Developer Console. Once the application is approved, it will be made available as an update to your users, and shown in the cars’ display.

Adding support for Android Auto is easy. We have created an extensive set of documentation to help you add support for messaging (sample) and audio playback (sample). There are also short introductory DevByte videos for messaging and audio. Stay tuned for a series of posts coming up soon discussing more details of these APIs and how to work with them. We also have simulators to help you test your applications right at your desk during development.
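To give a flavour of the messaging side, a notification can be extended for the car with the support library's CarExtender. A rough sketch, in which the conversation name, pending intents, RemoteInput and conversationId are all placeholders:

NotificationCompat.CarExtender.UnreadConversation.Builder conversation =
        new NotificationCompat.CarExtender.UnreadConversation.Builder("Alice")
                .setLatestTimestamp(System.currentTimeMillis())
                .addMessage("Are we still on for lunch?")
                .setReadPendingIntent(readPendingIntent)
                .setReplyAction(replyPendingIntent, remoteInput);

Notification notification = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_message)
        .setContentTitle("Alice")
        .setContentText("Are we still on for lunch?")
        .extend(new NotificationCompat.CarExtender()
                .setUnreadConversation(conversation.build()))
        .build();

NotificationManagerCompat.from(context).notify(conversationId, notification);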

With the launch of Android Auto, a new set of possibilities is available for you to make even more amazing experiences for your users, providing them the right information for the road ahead. Come join the discussion about Android Auto on Google+ at http://g.co/androidautodev where you can share ideas and ask questions with other developers.

Join the discussion on

+Android Developers
Categories: Programming

Quitting A Job You Just Started

Making the Complex Simple - John Sonmez - Thu, 03/19/2015 - 15:00

In this video I respond to an email asking about how to handle quitting a job that you just started. Keep in mind that you should never quit without having something else lined up.

The post Quitting A Job You Just Started appeared first on Simple Programmer.

Categories: Programming

Release Early and Release Often

Herding Cats - Glen Alleman - Thu, 03/19/2015 - 14:23

After a conversation of sorts about release early and release often, it's clear that without a domain this notion is a very nice platitude, with no way to test its applicability in an actual business and technical environment.

Like many conversations based on platitudes, it ends with do what works for you, with no actionable outcomes.


Let's Look from a Domain Point of View

The notion of releasing early assumes the customer can take the produced value early, and that there are no externalities in the system. Since most systems are actually systems of systems, the ability to accept early and often releases means there are no externalities. No externalities means - in our systems engineering dominated world of enterprise and software intensive systems - that the system is simple.

So for simple systems, sure, early and often might be useful. It needs to be tested, though, against the business and its rhythm. Let's look at several domains I work in:

  • Enterprise IT - an ERP system integrated with legacy systems, and producers and consumers of information. Early releases into production impact other systems. This drives churn for those systems to also take early and often releases, causing further churn. This wastes resources. As well, ERP systems have externalities from legacy or other business systems and processes. 
  • Real Time Systems - have interactions with external systems - the system under control. Early may mean that the system under control is not ready to use the release and has to be modified to accept the release when it is ready. 

For Non-Trivial Systems - There's A Better Philosophy

Turn Early and Often into - Plan and Release - with margin for irreducible uncertainty and buy down for reducible uncertainty - for each of the needed capabilities, at the planned time, for the planned cost. 

These releases and their dates are, of course, increments of value produced by the project against the planned value for the project as a whole. But the delivery of this value must coincide with the business's ability not only to accept the value, but to put this value to work.

The chart below shows an enterprise project with externalities, in a health insurance domain. You can see that early and often has dependencies; it always does on any non-trivial system. All enterprise systems have interdependencies and externalities. 

[Chart: release plan for an enterprise health insurance project, showing dependencies and externalities]

The business must be able to accept the produced value into production. The last paragraph of the RERO (release early, release often) philosophy is untested and likely unsubstantiated in practice, like many agile philosophies stated devoid of a domain and a context.

I'm a direct user of agile, in enterprise and DOD environments. But when I hear a phrase in the absence of a domain, context in that domain, and a specific assessment of the system architecture and related business process architecture, it tells me it's just a platitude. 

And like all platitudes, you can't object to the words unless you have a basis of reality to stand on, which is why it's easy to produce platitudes but hard to apply them.

Related articles: Open Loop Thinking v. Close Loop Thinking; When Is Delivering Early Not That Much Value?; Empirical Data Used to Estimate Future Performance; Self-Organization
Categories: Project Management

Managing in Presence of Uncertainty

Herding Cats - Glen Alleman - Thu, 03/19/2015 - 13:19

On Twitter there are several threads which speak to an underlying issue of managing in the presence of uncertainty. They range from "we can make decisions without estimating," to "you can't possibly estimate the outcomes that occur in the future," to the notion that "my idea doesn't need to be tested outside my personal anecdotal experience," all the way to "my idea violates the core principles of business decision making, but please apply it anyway."

The principles of successful project management are applicable across a broad spectrum of domains, when there is a value at risk sufficient to be concerned about on the part of those funding the project. Insufficient concern about loss of the investment? No one cares what you do or how you do it.

But if we have a non-trivial project, here are five immutable principles for managing in the presence of uncertainty when spending other people's money:

[Chart: five immutable principles of managing in the presence of uncertainty]

But these principles need processes and practices. There's a book on that, Performance-Based Project Management®, but there are tons of other resources as well. Here's one we're currently applying in our domain:

[Image: a resource currently being applied in the author's domain]

So when you hear we can't or don't need to estimate what will happen in the future, what you are really hearing is we have no intention of managing in the presence of uncertainty.

Related articles: I Think You'll Find It's a Bit More Complicated Than That; What is a Team?; Why We Need Governance; Empirical Data Used to Estimate Future Performance; Failure is not an Option; Managing in the Presence of Uncertainty; Estimating Probabilistic Outcomes? Of Course We Can!; Criteria for a "Good" Estimate
Categories: Project Management

Triggering Jenkins jobs remotely via git post-commit hooks

Agile Testing - Grig Gheorghiu - Wed, 03/18/2015 - 23:49
Assume you want a Jenkins job (for example a job that deploys code or a job that runs integration tests) to run automatically every time you commit code via git. One way to do this would be to configure GitHub to call a webhook exposed by Jenkins, but this is tricky to do when your Jenkins instance is not exposed to the world.

One way I found to achieve this is to trigger the Jenkins job remotely via a local git post-commit hook. There were several steps I had to take:

1) Create a Jenkins user to be used by remote curl commands -- let's call it user1 with password password1.

2) Configure a given Jenkins job -- let's call it JOB-NUMBER1 -- to allow remote builds to be triggered. If you go to the Jenkins configuration page for that job, you'll see a checkbox under the Build Triggers section called "Trigger builds remotely (e.g. from scripts)". Check that checkbox and also specify a random string as the Authentication Token -- let's say it is mytoken1.

3) Try to trigger a remote build for JOB-NUMBER1 by using a curl command similar to this one:

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec

If the Jenkins build is parameterized, you need to specify each parameter in the curl command, even if those parameters have default values specified in the Jenkins job definition. Let's say you have 2 parameters, TARGET_HOST and TARGET_USER. Then the curl command looks something like this:

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec --data-urlencode json='{"parameter": [{"name":"TARGET_HOST", "value":"myhost1.mycompany.com"}, {"name":"TARGET_USER", "value":"mytargetuser1"}]}'

When you run these curl commands, you should see JOB-NUMBER1 being triggered instantly in the Jenkins dashboard.

Note: if you get an error similar to "HTTP ERROR 403 No valid crumb was included in the request" it means that you have "Prevent Cross Site Request Forgery exploits" checked on the Jenkins "Configure Global Security" page. You need to uncheck that option. Since you're most probably not exposing your Jenkins instance to the world, that should be fine.

4) Create a git post-commit hook. To do this, you need to create a file called post-commit in your local .git/hooks directory under the repository from which you want to trigger the Jenkins job. The post-commit file is a regular bash script:

#!/bin/bash

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec

Don't forget to make the post-commit file executable: chmod 755 post-commit

At this point, whenever you commit code in this repository, you should see the Jenkins job being triggered instantly.

Dependency injection with PostSharp

Actively Lazy - Wed, 03/18/2015 - 22:21

I don’t really like IoC containers. Or rather, I don’t like the crappy code people write when they’re given an IoC container. Before you know it you have NounVerbers everywhere, a million dependencies and no decent domain model. Dependencies should really be external to your application; everything outside of the core domain model that your application represents.

  • A web service? That’s a dependency
  • A database? Yup
  • A message queue? Definitely a dependency
  • A scheduler or thread pool? Yup
  • Any NounVerber (PriceCalculator, StockFetcher, BasketFactory, VatCalculator)? No! Not a dependency. Stop it. They’re all part of your core business domain and are actually methods on a class. If you can’t write Price.Calculate() or Stock.Fetch() or new Basket() or Vat.Calculate() then fix your domain model first before you go hurting yourself with an IoC container

A while back I described a very simple, hand-rolled approach to dependency injection. But if we wave a little PostSharp magic we can improve on that basic idea. All the source code for this is available on github.

It works like this: if we have a dependency, say an AuthService, we declare an interface that business objects can implement to request that they have the dependency injected into them. In this case, IRequireAuthService.

class User : IRequireAuthService
{
  public IAuthService AuthService { set; private get; }
  // ...
}

We create a DependencyInjector that can set these properties:

public void InjectDependencies(object instance)
{
  if (instance is IRequireAuthService)
    ((IRequireAuthService)instance).AuthService = AuthService;
  // ... one "if (instance is IRequire...)" block per injectable dependency
}

This might not be the prettiest method – you’ll end up with an if…is IRequire… line for each dependency you can inject. But this provides a certain amount of friction. While it is easy to add new dependencies, developers are discouraged from doing it. This small amount of friction massively limits the unchecked growth of dependencies so prevalent with IoC containers. This friction is why I prefer the hand-rolled approach to off-the-shelf IoC containers.

So how do we trigger the dependency injector to do what it has to do? This is where some PostSharp magic comes in. We declare an attribute to use on the constructor:

  [InjectDependencies]
  public User(string id)

Via the magic of PostSharp aspect weaving this attribute causes some code to be executed before the constructor. This attribute is simply defined as:

public sealed override void OnEntry(MethodExecutionArgs args)
{
  DependencyInjector.CurrentInjector.InjectDependencies(args.Instance);
  base.OnEntry(args);
}

And that’s it – PostSharp weaves this method before each constructor with the [InjectDependencies] attribute. We get the current dependency injector and pass in the object instance (i.e. the newly created User instance) to have dependencies injected into it. Just like that we have a very simple dependency injector. Even better, all this aspect weaving magic is available with the Express (free!) edition of PostSharp.

Taking it Further

There are a couple of obvious extensions to this. You can create a TestDependencyInjector so that your unit tests can provide their own (mock) implementations of dependencies. This can also include standard (stub) implementations of some dependencies. E.g. a dependency that manages cross-thread scheduling can be replaced by an immediate (synchronous) implementation for unit tests to ensure that unit tests are single-threaded and repeatable.

Secondly, the DependencyInjector uses a ThreadLocal to store the current dependency injector. If you use background threads and want dependency injection to work there, you need a way of pushing the dependency injector onto the background thread. This generally means wrapping thread spawning code (which will itself be a dependency). You’ll want to wrap any threading code anyway to make it unit-testable.

Compile Time Checks

Finally, the most common failure mode we encountered with this was people forgetting to put [InjectDependencies] on the constructor. This means you get nulls at runtime, instead of dependencies. With a bit more PostSharp magic (this brand of magic requires the paid-for version, though) we can stop that, too. First, we change each IRequire to use a new attribute that indicates it manages injection of a dependency:

[Dependency]
public interface IRequireAuthService
{
  IAuthService AuthService { set; }
}

We configure this attribute to be inherited to all implementation classes – so all business objects that require auth service get the behaviour – then we define a compile time check to verify that the constructors have [InjectDependencies] defined:

public override bool CompileTimeValidate(System.Reflection.MethodBase method)
{
  if (!method.CustomAttributes.Any(a => a.AttributeType == typeof(InjectDependenciesAttribute)))
  {
    Message.Write(SeverityType.Error, "InjectDependences", "No [InjectDependencies] declared on " + method.DeclaringType.FullName + "." + method.Name, method);
    return false;
  }
  return base.CompileTimeValidate(method);
}

This compile time check now makes the build fail if I ever declare a class implementing IRequireAuthService without adding [InjectDependencies] onto the class’s constructor.

Simple, hand-rolled dependency injection with compile time validation thanks to PostSharp!

Categories: Programming, Testing & QA

Episode 223: Joram Barrez on the Activiti Business Process Management Platform

Josh Long talks to Activiti cofounder Joram Barrez about the wide world of (open source) workflow engines, the Activiti BPMN2 engine, and what workflow implies when you’re building process-driven applications and services. Joram was originally a contributor to the jBPM project with jBPM founder Tom Baeyens at Red Hat. He cofounded Activiti in 2010 at […]
Categories: Programming

Android Developer Story: Outfit7 — Building an entertainment company with Google

Android Developers Blog - Wed, 03/18/2015 - 20:15

Posted by Leticia Lago, Google Play team

Outfit7, creators of My Talking Tom and My Talking Angela, recently announced they’ve achieved 2.5 billion app downloads across their portfolio. The company now offers a complete entertainment experience to users spanning mobile apps, user generated and original YouTube content, and a range of toys, clothing, and accessories. They even have a silver screen project underway.

We caught up with Iza Login, Rok Zorko and Marko Štamcar - some of the co-founders - in Ljubljana, Slovenia, to learn best practices that helped them in reaching this milestone.

To learn about some of the Google and Google Play features used by Outfit7 to create their successful business, check out these resources:

  • Monetization — explore the options available for generating revenue from your apps and games.
  • Monetization with AdMob — learn how you can maximize your ad revenue.
  • YouTube for Developers — Whether you’re building a business on YouTube or want to enhance your app with video, a rich set of YouTube APIs can bring your products to life.
Join the discussion on

+Android Developers
Categories: Programming

When Is Delivering Early Not That Much Value?

Herding Cats - Glen Alleman - Wed, 03/18/2015 - 16:49

There is a phrase in agile: deliver early and deliver often. Let's test this potential platitude in the enterprise and software intensive systems business.

Let's look at Deliver Often. Often needs to match the business rhythm of the project or the business. This requires answers to several questions:

  • How often can the business accept new features into the workflow processes?
  • Do users need training for these new features? If so, is there an undue burden on the training organization for new training sessions?
  • Are there external processes that need to stay in sync with these new features?
  • Are there changes to data or reports as a result of these new features?

Deliver Early is more problematic:

  • Can the organization put the features to work?
  • Are there dependencies on external connections?

So let's look at the enterprise or software intensive systems domain. How about showing up as planned? This of course means having a Plan. The picture below is an enterprise system that has Planned capabilities in a Planned order, with Planned features. 

[Chart: an enterprise system with planned capabilities, in a planned order, with planned features]

So before succumbing to the platitudes of agile, determine the needs for successful project completion in your domain. Then ask whether that platitude is actually applicable to your domain.

Categories: Project Management

Microservices: coupling vs. autonomy

Xebia Blog - Wed, 03/18/2015 - 14:35

Microservices are the latest architectural style, promising to resolve all the issues we had with previous architectural styles. And just like other styles, it has its own challenges. The challenge discussed in this blog post is how to realise coupling between microservices while keeping the services as autonomous as possible. Four options will be described, and a clear winner will be selected in the conclusion.

To me microservices are autonomous services that take full responsibility for one business capability. Full responsibility includes presentation, API, data storage and business logic. Autonomous is the keyword for me: by making the services autonomous, they can be changed with no or minimal impact on the others. If services are autonomous, then operational issues in one service should have no impact on the functionality of other services. That all sounds like a good idea, but services will never be fully isolated islands. A service is virtually always dependent on data provided by another service. For example, imagine a shopping cart microservice as part of a web shop; some other service must put items in the shopping cart, and the shopping cart contents must be provided to yet other services to complete the order and get it shipped. The goal of this blog post is to explain which pattern should be followed to realise these couplings while retaining maximum autonomy.


I'm going to structure the patterns by two dimensions: the interaction pattern and the information exchanged using this pattern.

Interaction pattern: Request-Reply vs. Publish-Subscribe.

  • Request-Reply means that one service does a specific request for information (or to take some action). It then expects a response. The requesting service therefore needs to know what to ask and where to ask it. This could still be implemented asynchronously, and of course you could put some abstraction in place such that the requesting service does not have to know the physical address of the other service; the point still remains that one service is explicitly asking for specific information (or an action to be taken) and functionally waiting for a response.
  • Publish-Subscribe: with this pattern a service registers itself as being interested in certain information, or being able to handle certain requests. The relevant information or requests will then be delivered to it and it can decide what to do with it. In this post we'll assume that there is some kind of middleware in place to take care of delivery of the published messages to the subscribed services.

Information exchanged: Events vs. Queries/Commands

  • Events are facts that cannot be argued about. For example, an order with number 123 is created. Events only state what has happened. They do not describe what should happen as a consequence of such an event.
  • Queries/Commands: Both convey what should happen. Queries are a specific request for information, commands are a specific request to the receiving service to take some action.

Putting these two dimensions in a matrix results in four options to realise couplings between microservices. So what are the advantages and disadvantages of each option? And which one is the best for reaching maximum autonomy?

In the description below we'll use 2 services to illustrate each pattern. The Order service which is responsible for managing orders and the Shipping service which is responsible for shipping stuff, for example the items included in an order. Services like these could be part of a webshop, which could then also contain services like a shopping cart, a product (search) service, etc.

1. Request-Reply with Events

In this pattern one service asks a specific other service for events that took place (since the last time it asked). This implies a strong dependency between these two services: the Shipping service must know which service to connect to for events related to orders. There is also a runtime dependency, since the Shipping service will only be able to ship new orders if the Order service is available.

Since the Shipping service only receives events it has to decide by itself when an order may be shipped based on information in these events. The Order service does not have to know anything about shipping, it simply provides events stating what happened to orders and leaves the responsibility to act on these events fully to the services requesting the events.

2. Request-Reply with Commands/Queries

In this pattern the Order service is going to request the Shipping service to ship an order. This implies strong coupling, since the Order service is explicitly requesting a specific service to take care of the shipping, and now the Order service must determine when an order is ready to be shipped. It is aware of the existence of a Shipping service and it even knows how to interact with it. If other factors not related to the order itself should be taken into account before shipping the order (e.g. credit status of the customer), then the Order service should take this into account before requesting the Shipping service to ship the order. Now the business process is baked into the architecture and therefore the architecture cannot be changed easily.

Again there is a runtime dependency since the Order service must ensure that the shipping request is successfully delivered to the Shipping service.

3. Publish-Subscribe with Events

In Publish-Subscribe with Events the Shipping service registers itself as being interested in events related to Orders. After registering itself it will receive all events related to Orders without being aware what the source of the order events is. It is loosely coupled to the source of the Order events. The Shipping service will need to retain a copy of the data received in the events such that it can conclude when an order is ready to be shipped. The Order service needs to have no knowledge about shipping. If multiple services provide order related events containing relevant data for the Shipping service then this is not recognisable by the Shipping service. If (one of) the service(s) providing order events is down, the Shipping service will not be aware; it just receives fewer events. The Shipping service will not be blocked by this.
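To make the pattern concrete, here is a small, purely illustrative Java sketch. The EventBus interface stands in for whatever pub/sub middleware is used and all names are hypothetical: the Order service only publishes facts, and the Shipping service subscribes and decides for itself when to ship.

public final class OrderCreated {                 // an event: a fact, not an instruction
    public final String orderId;
    public final java.util.List<String> items;
    public OrderCreated(String orderId, java.util.List<String> items) {
        this.orderId = orderId;
        this.items = items;
    }
}

public interface EventBus {                       // stand-in for real pub/sub middleware
    void publish(Object event);
    <T> void subscribe(Class<T> eventType, java.util.function.Consumer<T> handler);
}

class OrderService {                              // knows nothing about shipping
    private final EventBus bus;
    OrderService(EventBus bus) { this.bus = bus; }

    void createOrder(String orderId, java.util.List<String> items) {
        // ...persist the order...
        bus.publish(new OrderCreated(orderId, items));
    }
}

class ShippingService {                           // keeps its own copy of the order data
    ShippingService(EventBus bus) {
        bus.subscribe(OrderCreated.class, this::onOrderCreated);
    }

    private void onOrderCreated(OrderCreated event) {
        // store the data locally and ship once this service's own preconditions are met
    }
}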

4. Publish-Subscribe with Commands/Queries

In Publish-Subscribe with Commands/Queries the Shipping service registers itself as a service being able to ship stuff. It then receives all commands that want to get something shipped. The Shipping service does not have to be aware of the source of the shipping commands, and on the flip side the Order service is not aware of which service will take care of shipping. In that sense they are loosely coupled. However, the Order service is aware of the fact that orders must get shipped, since it is sending out a ship command; this does make the coupling stronger.

Conclusion

Now that we have described the four options, we go back to the original question: which of the four patterns above provides maximum autonomy?

Both Request-Reply patterns imply a runtime coupling between two services, and that implies strong coupling. Both Commands/Queries patterns imply that one service is aware of what another service should do (in the examples above the Order service is aware that another service takes care of shipping), and that also implies strong coupling, but this time on a functional level. That leaves one option: 3. Publish-Subscribe with Events. In this case neither service is aware of the other's existence, from both a runtime and a functional perspective. To me this is the clear winner for achieving maximum autonomy between services.

The next question pops up immediately: should you always couple services using Publish-Subscribe with Events? If your only concern is maximum autonomy of services, the answer would be yes, but there are more factors that should be taken into account. Always coupling using this pattern comes at a price: data is replicated, measures must be taken to deal with lost events, event-driven architectures add extra requirements on infrastructure, there might be extra latency, and more. In a next post I'll dive into these trade-offs and put things into perspective. For now, remember that Publish-Subscribe with Events is a good basis for achieving autonomy of services.

More Ways to Visualize Your Project Portfolio

Every time I work with a client or teach a workshop, people want more ways to visualize their project portfolios. Here are some ideas:

Here is a kanban view of the project portfolio with a backlog:

Kanban view of the project portfolio


And a kanban view of the project portfolio with an “Unstaffed Work” line, so it’s clear:

Project Portfolio Kanban with Unstaffed Work Line


If you haven’t read Visualizing All the Work in Your Project Portfolio, you should. It has some other options, too.

I have yet more options in Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects.

Categories: Project Management

Reducing the size of Docker Images

Xebia Blog - Wed, 03/18/2015 - 01:00

Using the basic Dockerfile syntax it is quite easy to create a fully functional Docker image. But if you just start adding commands to the Dockerfile, the resulting image can become unnecessarily big. This makes it harder to move the image around.

A few basic actions can reduce this significantly.

Three Cadence Complaints

Longer isn’t necessarily better.

In Agile, cadence is the number of days or weeks in a sprint or release. Stated another way, it is the length of the team’s development cycle. The cadence that a project or organization selects is based on a number of factors that include criticality, risk and the type of project. As a general rule, once a team or a team of teams has settled on a specific cadence they tend not to vary it significantly. While in today’s business environment a plurality of teams and organizations use a two-week sprint cadence, there is often a lot of angst over finding the exact number of weeks in a sprint for any specific team. Many organizations adopt a standard and take the discussion off the table. With any specified cadence duration, there are typically three complaints: too much overhead, not getting stories done and not being able to commit resources on a full-time basis to the work. In almost every case the complaint is coupled with a request to lengthen the duration of the sprint and therefore to slow the cadence.

  1. The sprint structure requires too much overhead: Sprints begin with a planning event and typically complete with both a demonstration and a retrospective. Short stand-up meetings occur daily. Some team participants view these events as overhead. Scrum practice and observation of teams strongly suggest that the effort for the Scrum events increases or decreases with the length of the sprint. Longer sprints tend to require more planning and lengthier demos and retrospectives. As a rule I suggest two hours of planning per week and 30 minutes each for the demonstration and retrospective per week in a sprint. Stand-ups should be approximately 15 minutes per day. For example, I would expect to spend 3 hours 45 minutes (2 hours for planning, 30 minutes for the demo, 30 minutes for the retrospective and 45 minutes for 3 stand-up meetings) on events in a one-week sprint, and 8 hours (4 hours for planning, 1 hour for the demo, 1 hour for the retrospective and 2 hours for 8 stand-up meetings) in a two-week sprint (a small sketch of this arithmetic follows the list). I have noticed that the effort for sprint events in sprints longer than three weeks tends to take more time than one would expect (four weeks sometimes being more than twice what is expected). Realistically, sprint size should not significantly affect overhead until you get to sprint durations of four weeks or more; therefore overhead is not an obstacle to shorter sprints. Remember that in earlier posts we have shown that shorter sprints deliver feedback and value sooner and are therefore preferred, all things being equal.
  2. Our stories can’t be completed during the sprint. This is typically not a problem with the duration of the sprint, but either an issue of how the stories are split or a process problem. One typical corollary to this complaint is that the team can’t break stories into thin enough slices to complete. Most of the time this is a training or coaching problem rather than a technical problem; however, in highly regulated environments or in systems that affect human life I have seen scenarios where stories tend to require longer sprints due to very specific verification and validation. One common cause of this problem is assigning and hard-wiring roles (testers can only test, for example), which can cause bottlenecks and constraints if there is any imbalance in capacity. This is illustrated by the Theory of Constraints (take a look at our Re-read Saturday entries about The Goal for more on the Theory of Constraints). Typically, longer sprints will not solve the problem. Unless the underlying capacity issue is addressed, longer sprints typically equate to worse performance, because more stories are started and are subject to bottlenecks and constraints.
  3. Our organization can’t commit full-time resources to a sprint. Part-time team members typically have to time-slice, switching between projects and pieces of work, leading to some loss of efficiency. This is a reflection of too much work and not enough capacity, causing delays, constraints and bottlenecks. Similar to issue 2 above, longer sprints will typically not solve the problem. Unless the underlying capacity issue is addressed, longer sprints typically equate to worse performance for the same reasons as noted above.
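A small sketch of the ceremony arithmetic from the first complaint, using the rules of thumb above (2 hours of planning plus 30 minutes each of demo and retrospective per week, and 15-minute stand-ups on the remaining working days), shows the overhead growing roughly linearly with sprint length rather than shrinking:

// Rough ceremony overhead per sprint, in hours, for the rules of thumb above.
static double ceremonyHours(int sprintWeeks) {
    double planning = 2.0 * sprintWeeks;
    double demo = 0.5 * sprintWeeks;
    double retro = 0.5 * sprintWeeks;
    int standUps = 5 * sprintWeeks - 2;  // e.g. 3 in a one-week sprint, 8 in a two-week sprint
    return planning + demo + retro + 0.25 * standUps;
}
// ceremonyHours(1) == 3.75 hours; ceremonyHours(2) == 8.0 hours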

Many problems with cadence are either a reflection of process problems that generate overhead or of poorly split user stories. For example, teams that do not have a groomed backlog will need to spend more time planning. Some team members see planning as avoidable overhead (the "just wing it" mentality). Overly large teams tend to have long daily stand-up meetings, again typically seen as overhead. Stories that are not thinly sliced will take longer to complete and have a higher propensity to get stuck, giving the team a feeling that a longer sprint is better. In almost every case, thinly sliced stories and committing to less work tend to improve the flow of work so that the team can actually deliver more. The duration of the sprint and the cadence are usually not the root cause of the team’s problems.


Categories: Process Management

Neo4j: Detecting potential typos using EXPLAIN

Mark Needham - Tue, 03/17/2015 - 23:46

I’ve been running a few intro to Neo4j training sessions recently using Neo4j 2.2.0 RC1, and at some stage in every session somebody will make a typo when writing out one of the example queries.

For example, one of the queries that we do about half way through finds the actors and directors who have worked together and aggregates the movies they were in.

This is the correct query:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5

which should yield the following results:

==> +-----------------------------------------------------------------------------------------------------------------------+
==> | actor.name           | director.name    | movies                                                                      |
==> +-----------------------------------------------------------------------------------------------------------------------+
==> | "Hugo Weaving"       | "Andy Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Hugo Weaving"       | "Lana Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Laurence Fishburne" | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Keanu Reeves"       | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Carrie-Anne Moss"   | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> +-----------------------------------------------------------------------------------------------------------------------+

However, a common typo is to write ‘DIRECTED_IN’ instead of ‘DIRECTED’ in which case we’ll see no results:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5
 
==> +-------------------------------------+
==> | actor.name | director.name | movies |
==> +-------------------------------------+
==> +-------------------------------------+
==> 0 row

It’s not immediately obvious why we aren’t seeing any results which can be quite frustrating.

However, in Neo4j 2.2 the ‘EXPLAIN’ keyword has been introduced and we can use this to see what the query planner thinks of the query we want to execute without actually executing it.

Instead the planner makes use of knowledge that it has about our schema to come up with a plan that it would run and how much of the graph it thinks that plan would touch:

EXPLAIN MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5

[Screenshot: query plan for the DIRECTED_IN query]

The first row of the query plan describes an all nodes scan which tells us that the query will start from the ‘director’ but it’s the second row that’s interesting.

The estimated rows when expanding the ‘DIRECTED_IN’ relationship is 0 when we’d expect it to at least be a positive value if there were some instances of that relationship in the database.

If we compare this to the plan generated when using the proper ‘DIRECTED’ relationship we can see the difference:

[Screenshot: query plan for the DIRECTED query]

Here we see an estimated 44 rows from expanding the ‘DIRECTED’ relationship so we know there are at least some nodes connected by that relationship type.

In summary if you find your query not returning anything when you expect it to, prefix an ‘EXPLAIN’ and make sure you’re not seeing the dreaded ‘0 expected rows’.

Categories: Programming

Sponsored Post: Signalfuse, InMemory.Net, Sentient, Couchbase, VividCortex, Internap, Transversal, MemSQL, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Sentient Technologies is hiring several Senior Distributed Systems Engineers and a Senior Distributed Systems QA Engineer. Sentient Technologies, is a privately held company seeking to solve the world’s most complex problems through massively scaled artificial intelligence running on one of the largest distributed compute resources in the world. Help us expand our existing million+ distributed cores to many, many more. Please apply here.

  • Linux Web Server Systems Engineer, Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role is to design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Rise of the Multi-Model Database. FoundationDB Webinar: March 10th at 1pm EST. Do you want a SQL, JSON, Graph, Time Series, or Key Value database? Or maybe it’s all of them? Not all NoSQL databases are created equal. The latest development in this space is the Multi-Model Database. Please join FoundationDB for an interactive webinar as we discuss the Rise of the Multi-Model Database and what to consider when choosing the right tool for the job.
Cool Products and Services
  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Top Enterprise Use Cases for NoSQL. Discover how the largest enterprises in the world are leveraging NoSQL in mission-critical applications with real-world success stories. Get the Guide.
    http://info.couchbase.com/HS_SO_Top_10_Enterprise_NoSQL_Use_Cases.html

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • In-Memory Computing at Aerospike Scale. How the Aerospike team optimized memory management by switching from PTMalloc2 to JEMalloc.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free! (See how Scalyr is different if you're looking for a Splunk alternative.)

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

In-Memory Computing at Aerospike Scale: When to Choose and How to Effectively Use JEMalloc

This is a guest post by Psi Mankoski (email), Principal Engineer, Aerospike.

When a customer’s business really starts gaining traction and their web traffic ramps up in production, they know to expect increased server resource load. But what do you do when memory usage still keeps on growing beyond all expectations? Have you found a memory leak in the server? Or else is memory perhaps being lost due to fragmentation? While you may be able to throw hardware at the problem for a while, DRAM is expensive, and real machines do have finite address space. At Aerospike, we have encountered these scenarios along with our customers as they continue to press through the frontiers of high scalability.

In the summer of 2013 we faced exactly this problem: big-memory (192 GB RAM) server nodes were running out of memory and crashing again within days of being restarted. We wrote an innovative memory accounting instrumentation package, ASMalloc [13], which revealed there was no discernable memory leak. We were being bitten by fragmentation.

This article focuses specifically on the techniques we developed for combating memory fragmentation, first by understanding the problem, then by choosing the best dynamic memory allocator for the problem, and finally by strategically integrating the allocator into our database server codebase to take best advantage of the disparate life-cycles of transient and persistent data objects in a heavily multi-threaded environment. For the benefit of the community, we are sharing our findings in this article, and the relevant source code is available in the Aerospike server open source GitHub repo. [12]

Executive Summary
  • Memory fragmentation can severely limit scalability and stability by wasting precious RAM and causing server node failures.

  • Aerospike evaluated memory allocators for its in-memory database use-case and chose the open source JEMalloc dynamic memory allocator.

  • Effective allocator integration must consider memory object life-cycle and purpose.

  • Aerospike optimized memory utilization by using JEMalloc extensions to create and manage per-thread (private) and per-namespace (shared) memory arenas.

  • Using these techniques, Aerospike saw substantial reduction in fragmentation, and the production systems have been running non-stop for over 1.5 years.

Introduction
Categories: Architecture

Mile High Agile Keynote

Mike Cohn's Blog - Tue, 03/17/2015 - 15:00

I’m going to be giving the conference keynote this spring at Mile High Agile  in Denver on April 3, 2015.

I’ll be talking about the importance of being able to let go of some of our firmly held beliefs in order to move on. Here’s the official session title and description. I hope you can join me at this event!

Let Go of Knowing: How Holding onto Views May Be Holding You Back

You undoubtedly have a firmly held set of convictions about what is necessary to do agile well. These convictions have served you well—your teams have delivered better products more quickly and more economically than before they were agile. But could some of your firmly held convictions be holding you back? And have you ever wondered why some of your most agile friends are similarly firm in their own opinions—even ones that are the exact opposite of your own?

In this session, you’ll see ways that biases may be preventing you from questioning your assumptions, why being open to new views is hard but vital, and why beginners so often think they know it all.

After this session, you will know how to discern the inviolate rules of Scrum from its merely good practices. You’ll know why you feel certain of some aspects of agile, less so about others. You’ll leave with the confidence to let go of knowing. And when we let go of knowing, we open ourselves to learning, which is the heart of agile.

Creating Better User Experiences on Google Play

Android Developers Blog - Tue, 03/17/2015 - 14:03

Posted by Eunice Kim, Product Manager for Google Play

Whether it's a way to track workouts, chart the nighttime stars, or build a new reality and battle for world domination, Google Play gives developers a platform to create engaging apps and games and build successful businesses. Key to that mission is offering users a positive experience while searching for apps and games on Google Play. Today we have two updates to improve the experience for both developers and users.

A global content rating system based on industry standards

Today we’re introducing a new age-based rating system for apps and games on Google Play. We know that people in different countries have different ideas about what content is appropriate for kids, teens and adults, so today’s announcement will help developers better label their apps for the right audience. Consistent with industry best practices, this change will give developers an easy way to communicate familiar and locally relevant content ratings to their users and help improve app discovery and engagement by letting people choose content that is right for them.

Starting now, developers can complete a content rating questionnaire for each of their apps and games to receive objective content ratings. Google Play’s new rating system includes official ratings from the International Age Rating Coalition (IARC) and its participating bodies, including the Entertainment Software Rating Board (ESRB), Pan-European Game Information (PEGI), Australian Classification Board, Unterhaltungssoftware Selbstkontrolle (USK) and Classificação Indicativa (ClassInd). Territories not covered by a specific ratings authority will display an age-based, generic rating. The process is quick, automated and free to developers. In the coming weeks, consumers worldwide will begin to see these new ratings in their local markets.

To help maintain your apps’ availability on Google Play, sign in to the Developer Console and complete the new rating questionnaire for each of your apps. Apps without a completed rating questionnaire will be marked as “Unrated” and may be blocked in certain territories or for specific users. Starting in May, all new apps and updates to existing apps will require a completed questionnaire before they can be published on Google Play.

An app review process that better protects users

Several months ago, we began reviewing apps before they are published on Google Play to better protect the community and improve the app catalog. This new process involves a team of experts who are responsible for identifying violations of our developer policies earlier in the app lifecycle. We value the rapid innovation and iteration that is unique to Google Play, and will continue to help developers get their products to market within a matter of hours after submission, rather than days or weeks. In fact, there has been no noticeable change for developers during the rollout.

To assist in this effort and provide more transparency to developers, we’ve also rolled out improvements to the way we handle publishing status. Developers now have more insight into why apps are rejected or suspended, and they can easily fix and resubmit their apps for minor policy violations.

Over the past year, we’ve paid more than $7 billion to developers and are excited to see the ecosystem grow and innovate. We’ll continue to build tools and services that foster this growth and help the developer community build successful businesses.

Join the discussion on

+Android Developers
Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Tue, 03/17/2015 - 13:47

It's not so much about the creativity of the work, it's about emoting on an economically based schedule.
- Sean Penn talking with John Krakauer

At the end of the day all the work we do is about converting money into something of value. For that value to have value to those paying, we need to provide beneficial outcomes at a time when they can be put to use and for a cost that is less than the value they produce. Since all our project work is based on managing in the presence of uncertainty, we need the ability to make decisions based on estimates, because the future is emerging in front of us and the past is not likely to be representative of that future.

Failure to realize this means a disconnection between cost and value.

 

Categories: Project Management