Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
We’re pleased to announce Pie Noon, a simple game created to demonstrate multi-player support on the Nexus Player, an Android TV device. Pie Noon is an open source, cross-platform game written in C++ which supports:
You can download the game in the Play Store and the latest open source release from our GitHub page. We invite you to learn from the code to see how you can implement these libraries and utilities in your own Android games. Take advantage of our discussion list if you have any questions, and don’t forget to throw a few pies while you’re at it!
* Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.
When I was in Israel a couple of weeks ago teaching workshops, one of the big problems people had was large stories. Why was this a problem? If your stories are large, you can’t show progress, and more importantly, you can’t change.
For me, the point of agile is the transparency—hey, look at what we’ve done!—and the ability to change. You can change the items in the backlog for the next iteration if you are working in iterations. You can change the project portfolio. You can change the features. But, you can’t change anything if you continue to drag on and on and on for a given feature. You’re not transparent if you keep developing a feature. You become a black hole.
Managers start to ask, “What are you guys doing? When will you be done? How much will this feature cost?” Do you see where you need to estimate more if the feature is large? Of course, the larger the feature, the more you need to estimate and the more difficult it is to estimate well.
The reason Pawel and I and many other people like very small stories—size of 1—is that they mean you deliver something every day or more often. You have transparency. You don’t invest a ton of work without getting feedback on the work.
The people I met a couple of weeks ago felt (and were) stuck. One guy was doing intricate SQL queries. He thought that there was no value until the entire query was done. Nope, that’s where he is incorrect. There is value in interim results. Why? How else would you debug the query? How else would you discover if you had the database set up correctly for product performance?
I suggested that every single atomic transaction was a valuable piece, and that the way to build small stories was to separate this hairy SQL statement at the atomic transactions. I bet there are other ways, but that was a good start. He got that aha look, so I am sure he will think of other ways.
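The idea can be sketched in a few lines of Python and SQLite. This is a hedged illustration, not the workshop participant's actual query: the schema (orders, customer_totals) and the numbers are invented. The point is that each atomic step materializes an interim result you can inspect and test before building the next step on top of it.

```python
import sqlite3

# Invented schema and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 10, 50.0), (2, 10, 75.0), (3, 20, 20.0);
""")

# Step 1 (one atomic piece, one small story): per-customer totals.
conn.execute("""
    CREATE TABLE customer_totals AS
    SELECT customer_id, SUM(total) AS spend
    FROM orders GROUP BY customer_id
""")

# The interim result has value on its own: we can debug and verify it now,
# long before the "entire query" exists.
rows = dict(conn.execute("SELECT customer_id, spend FROM customer_totals"))
assert rows == {10: 125.0, 20: 20.0}

# Step 2 (the next story): build on the already-verified intermediate table.
big_spenders = [r[0] for r in conn.execute(
    "SELECT customer_id FROM customer_totals WHERE spend > 100")]
print(big_spenders)  # [10]
```

Each step ships and is verifiable on its own, which is exactly what makes the story small.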
Another guy was doing algorithm development. Now, I know one issue with algorithm development is you have to keep testing performance or reliability or something else when you do the development. Otherwise, you fall off the deep end. You have an algorithm tuned for one aspect of the system, but not another one. The way I’ve done this in the past is to support algorithm development with a variety of tests.
This is the testing continuum from Manage It! Your Guide to Modern, Pragmatic Project Management. See the unit and component testing parts? If you do algorithm development, you need to test each piece of the algorithm—the inner loop, the next outer loop, repeat for each loop—with some sort of unit test, then component test, then as a feature. And, you can do system level testing for the algorithm itself.
Back when I tested machine vision systems, I was the system tester for an algorithm we wanted to go “faster.” I created the golden master tests and measured the performance. I gave my tests to the developers. Then, as they changed the inner loops, they created their own unit tests. (No, we were not smart enough to do test-driven development. You can be.) I helped create the component-level tests for the next-level-up tests. We could run each new potential algorithm against the golden master and see if the new algorithm was better or not.
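A golden-master setup like the one described above can be sketched in a few lines. This is a hedged Python illustration, not the actual machine-vision code: the function names and the toy "edge counting" algorithm are invented. The trusted algorithm's output is recorded once, and every candidate optimization is run against the same inputs and compared to it, alongside a timing measurement.

```python
import time

def reference_edge_count(image):
    # Trusted but slow version: count horizontal intensity jumps > 10.
    return sum(abs(a - b) > 10 for row in image for a, b in zip(row, row[1:]))

def candidate_edge_count(image):
    # The "faster" rewrite under test -- it must match the golden master.
    return sum(abs(row[i] - row[i + 1]) > 10
               for row in image for i in range(len(row) - 1))

# Fixed test inputs; the golden outputs are recorded once from the
# trusted implementation and kept as the yardstick.
inputs = [[[0, 0, 50, 50], [50, 0, 0, 0]], [[5, 6, 7, 8]]]
golden = [reference_edge_count(img) for img in inputs]

start = time.perf_counter()
results = [candidate_edge_count(img) for img in inputs]
elapsed = time.perf_counter() - start

# Correctness gate first, speed comparison second.
assert results == golden, "candidate diverges from golden master"
print(golden)  # [2, 0]
```

Any new potential algorithm gets the same treatment: same inputs, same golden outputs, and only then a verdict on whether "faster" is also still correct.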
I realize that you don’t have a product until everything works. This is like saying in math that you don’t have an answer until you have finished the entire calculation. And, you are allowed—in fact, I encourage you—to show your interim work. How else can you know if you are making progress?
Another participant said that he was special. (Each and every one of you is special. Don’t you know that by now??) He was doing firmware development. I asked if he simulated the firmware before he downloaded to the device. “Of course!” he said. “So, simulate in smaller batches,” I suggested. He got that far-off look. You know that look, the one that says, “Why didn’t I think of that?”
He didn’t think of it because it requires changes to their simulator. He’s not an idiot. Their simulator is built for an entire system, not small batches. The simulator assumes waterfall, not agile. They have some technical debt there.
Here are the three ways, in case you weren’t clear:
You want to deliver value in your projects. Short stories allow you to do this. Long stories stop your momentum. The longer your project, and the more teams (if you work on a program), the more you need to keep your stories short. Try these alternatives.
Do you have other scenarios I haven’t discussed? Ask away in the comments.
Performance guru Martin Thompson gave a great talk at Strangeloop: Aeron: Open-source high-performance messaging, and one of the many interesting points he made was how much performance is being lost because we aren't configuring machines properly.
This point comes on the observation that "Loss, throughput, and buffer size are all strongly related."
Here's a gloss of Martin's reasoning. It's a problem that keeps happening and people aren't aware that it's happening because most people are not aware of how to tune network parameters in the OS.
The separation of programmers and system admins has become an anti-pattern. Developers don’t talk to the people who have root access on machines who don’t talk to the people that have network access. Which means machines are never configured right, which leads to a lot of loss. We are leaving 3x-4x performance on the table just because of configuration.
We need to work out how to bridge that gap, know what the parameters are, and how to fix them.
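The relationship between loss, throughput, and buffer size comes down to the bandwidth-delay product: a sender can have at most one buffer's worth of data in flight per round trip, so too-small buffers cap throughput (or, under loss, force retransmits). A rough back-of-the-envelope sketch (the 212992-byte figure is a typical Linux default receive buffer; your system may differ):

```python
def max_throughput_bps(buffer_bytes, rtt_seconds):
    # At most one buffer's worth of data in flight per round trip.
    return buffer_bytes * 8 / rtt_seconds

def required_buffer_bytes(bandwidth_bps, rtt_seconds):
    # Bandwidth-delay product: bytes that must be in flight to fill the pipe.
    return bandwidth_bps / 8 * rtt_seconds

# A 10 Gbit/s link with a 1 ms round trip needs ~1.25 MB of buffer...
bdp = required_buffer_bytes(10e9, 0.001)
print(bdp)  # 1250000.0

# ...but a typical default buffer of ~208 KB caps throughput far below that:
capped = max_throughput_bps(212992, 0.001)
print(capped / 1e9)  # ~1.7 (Gbit/s, on a 10 Gbit/s link)
```

That gap between ~1.7 and 10 Gbit/s is the kind of 3x-4x performance being left on the table purely by configuration.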
So know your OS network parameters and how to tune them.
There is currently a strong trend for microservice based architectures and frequent discussions comparing them to monoliths. There is much advice about breaking-up monoliths into microservices and also some amusing fights between proponents of the two paradigms - see the great Microservices vs Monolithic Melee. The term 'Monolith' is increasingly being used as a generic insult in the same way that 'Legacy' is!
However, I believe that there is a great deal of misunderstanding about exactly what a 'Monolith' is and those discussing it are often talking about completely different things.
A monolith can be considered an architectural style or a software development pattern (or anti-pattern if you view it negatively). Styles and patterns usually fit into different Viewtypes (a viewtype is a set, or category, of views that can be easily reconciled with each other [Clements et al., 2010]) and some basic viewtypes we can discuss are:
A monolith could refer to any of the basic viewtypes above.
If you have a module monolith then all of the code for a system is in a single codebase that is compiled together and produces a single artifact. The code may still be well structured (classes and packages that are coherent and decoupled at a source level rather than a big-ball-of-mud) but it is not split into separate modules for compilation. Conversely a non-monolithic module design may have code split into multiple modules or libraries that can be compiled separately, stored in repositories and referenced when required. There are advantages and disadvantages to both but this tells you very little about how the code is used - it is primarily done for development management.
For an allocation monolith, all of the code is shipped/deployed at the same time. In other words once the compiled code is 'ready for release' then a single version is shipped to all nodes. All running components have the same version of the software running at any point in time. This is independent of whether the module structure is a monolith. You may have compiled the entire codebase at once before deployment OR you may have created a set of deployment artifacts from multiple sources and versions. Either way this version for the system is deployed everywhere at once (often by stopping the entire system, rolling out the software and then restarting).
A non-monolithic allocation would involve deploying different versions to individual nodes at different times. This is again independent of the module structure as different versions of a module monolith could be deployed individually.
A runtime monolith will have a single application or process performing the work for the system (although the system may have multiple, external dependencies). Many systems have traditionally been written like this (especially line-of-business systems such as Payroll, Accounts Payable, CMS etc).
Whether the runtime is a monolith is independent of whether the system code is a module monolith or not. A runtime monolith often implies an allocation monolith if there is only one main node/component to be deployed (although this is not the case if a new version of software is rolled out across regions, with separate users, over a period of time).
Note that my examples above are slightly forced for the viewtypes and it won't be as hard-and-fast in the real world.
Be very careful when arguing about 'Microservices vs Monoliths'. A direct comparison is only possible when discussing the Runtime viewtype and properties. You should also not assume that moving away from a Module or Allocation monolith will magically enable a Microservice architecture (although it will probably help). If you are moving to a Microservice architecture then I'd advise you to consider all these viewtypes and align your boundaries across them i.e. don't just code, build and distribute a monolith that exposes subsets of itself on different nodes.
SAFe’s release planning event has been considered both the antithesis of Agile and the secret sauce for scaling Agile to programs and the enterprise. The process is fairly simple, albeit formal. Everyone involved with a program increment (an 8–12 week segment of an Agile release train) gets together and synchronizes on the program increment objectives and plan, and then commits to the plan. Who is the “everybody” in a release planning event? The primary participants include:
Others that are involved include:
It has been said that it takes a village to raise a child. It takes an equally complex number of roles and participants to plan and then generate commitment to that plan.
Do you know about the Conscious Software Development Telesummit? Michael Smith is interviewing more than 20 experts about all aspects of software development, project management, and project portfolio management. He’s releasing the interviews in chunks, so you can listen and not lose work time. Isn’t that smart of him?
If you haven’t signed up yet, do it now. You get access to all of the interviews, recordings, and transcripts for all the speakers. That’s the Conscious Software Development Telesummit. Because you should make conscious decisions about what to do for your software projects.
Posted by Daniel Holle, Product Manager
At Google I/O back in June, we provided a preview of Android Auto. Today, we’re excited to announce the availability of our first APIs for building Auto-enabled apps for audio and messaging. Android apps can now be extended to the car in a way that is optimized for the driving experience.
For users, this means they simply connect their Android handheld to a compatible vehicle and begin utilizing a car-optimized Android experience that works with the car’s head unit display, steering wheel buttons, and more. For developers, the APIs and UX guidelines make it easy to provide a simple way for users to get the information they need while on the road. As an added bonus, the Android Auto APIs let developers easily extend their existing apps targeting Android 5.0 (API level 21) or higher to work in the car without having to worry about vehicle-specific hardware differences. This gives developers wide reach across manufacturers, models, and regions, by just developing with one set of APIs and UX standards.
There are two use cases that Android Auto supports today:
To help you get started with Android Auto, check out our Getting Started guide. It’s important to note that while the APIs are available today, apps extended with Android Auto cannot be published quite yet. More app categories will be supported in the future, providing more opportunities for developers and drivers of Android Auto. We encourage you to join the Android Auto Developers Google+ community to stay up-to-date on the latest news and timelines.
We’ve already started working with partners to develop experiences for Android Auto: iHeartRadio, Joyride, Kik, MLB.com, NPR, Pandora, PocketCasts, Songza, SoundCloud, Spotify, Stitcher, TextMe, textPlus, TuneIn, Umano, and WhatsApp. If you happen to be in the Los Angeles area, stop by the LA Auto Show through November 30 and visit us in the Hyundai booth to take Android Auto and an app or two for a test drive.
The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.
I have a bit of a problem with all the hatred shown to so-called vanity metrics.
Eric Ries first defined vanity metrics in his landmark book, The Lean Startup. Ries says vanity metrics are the ones that most startups are judged by—things like page views, number of registered users, account activations, and things like that.
Ries says that vanity metrics are in contrast to actionable metrics. He defines an actionable metric as one that demonstrates clear cause and effect. If what causes a metric to go up or down is clear, the metric is actionable. All other metrics are vanity metrics.
I’m pretty much OK with all this so far. I’m big on action. I’ve written in my books and in posts here that if a metric will not lead to a different action, that metric is not worth gathering. I’ve said the same of estimates. If you won’t behave differently by knowing a number, don’t waste time getting that number.
So I’m fine with the definitions of “actionable” and “vanity” metrics. My problem is with some of the metrics that are thrown away as being merely vanity. For example, the number one hit on Google today when I searched for “vanity metrics” was an article on TechCrunch.
They admit to being guilty of using them and cite such metrics as 1 million downloads and 10 million registered users.
But are such numbers truly vanity metrics?
One chapter in Succeeding with Agile, is about metrics. In it, I wrote about creating a balanced scorecard and using both leading and lagging indicators. A lagging indicator is something you can measure after you have done something, and can be used to determine if you achieved a goal.
If your goal is improving quality, a lagging indicator could be the number of defects reported in the first 30 days after the release. That would tell you if you achieved your goal—but it comes with the drawback of not being at all available until 30 days after the release.
A leading indicator, on the other hand, is available in advance, and can tell you if you are on your way to achieving a goal.
The number of nightly tests that pass would be a leading indicator that a team is on its way to improving quality. The number of nightly tests passing, though, is a vanity metric in Ries’ terms. It can be easily manipulated; the team could run the same or similar tests many times to deliberately inflate the number of tests. Therefore, the linkage between cause and effect is weak. More passing tests do not guarantee improved quality.
But is the number of passing tests really a vanity metric? Is it really useless?
To show that it’s not, consider a few other metrics you’re probably familiar with: your cholesterol value, your blood pressure, your resting pulse, even your weight. A doctor can use these values and learn something about your health. If your total cholesterol value is 160, a heart attack is probably not imminent. A value of 300, though, and it’s a good thing you’re visiting your doctor.
These are leading indicators. They don’t guarantee anything. I could have a cholesterol value of 160 and have a heart attack as soon as I walk out of the doctor’s office. The only true lagging indicator would be the number of heart attacks I’ve had in the last year. Yes, absolutely a much better metric, but not available until the end of the year.
So should we avoid all vanity metrics? No. Vanity metrics can possess meaningful information. They are often leading indicators. If a website’s goal is to sell memberships then number of memberships sold is that company’s key, actionable metric.
But number of unique new visitors—a vanity metric—can be a great leading indicator. More new visitors should lead to more memberships sold. Just like more passing tests should lead to higher quality. It’s not guaranteed, but it is indicative.
The TechCrunch article I mentioned has the right attitude. It says, “Vanity metrics aren’t completely useless, just don’t be fooled by them.” The real danger of vanity metrics is that they can be gamed. We can run tests that can’t fail. We can buy traffic to our site that we know will never translate into paid memberships, but make the traffic metrics look good.
As long as no one is doing things like that, vanity metrics can serve as good leading indicators. Just keep in mind that they don’t measure what you really care about. They merely indicate whether you’re on the right path.
I regularly get the question, “How Do You Do It?”
“How are you able to travel so much and not get sick of it?”
“How can you read 50+ books per year and also write your own?”
Gosh, I don’t know.
The full potential of many an agile organization is hardly ever reached. Many teams find themselves redefining user stories although they have been committed to as part of the sprint. The ‘ready phase’, meant to get user stories clear and sufficiently detailed so they can be implemented, is missed. How will each user story result in high quality features that deliver business value? The ‘Definition of Ready’ is lacking one important entry: “Automated tests are available.” Ensuring you have testable, and hence automated, acceptance criteria before committing to user stories in a sprint allows you to retain focus during the sprint. We define this as: Ready, Test, Go!
Behaviour-Driven Development has proven to be a fine technique to write automated acceptance criteria. Using the Gherkin format (given, when, then), examples can be specified that need to be supported by the system once the user story is completed. When a sufficiently detailed list of examples is available, all Scrum stakeholders agree with the specification. Common understanding is achieved that when the story is implemented, we are one step closer to building the right thing.
The specification itself becomes executable: at any moment in time, the gap between the desired and implemented functionality becomes visible. In other words, this automated acceptance test should be run continuously. First time, it happily fails. Next, implementation can start. This, following Test-Driven Development principles, starts with writing (also failing) unit tests. Then, development of the production code starts. When the unit tests are passing and acceptance tests for a story are passing, other user stories can be picked up; stories of which the tests happily fail. Tests thus act as a safeguard to continuously verify that the team is building the thing right. Later, the automated tests (acceptance tests and unit tests) serve as a safety net for regression testing during subsequent sprints.
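The flow above can be sketched in plain Python. This is a hedged illustration, not code from the article: the Cart class and its methods are invented. The given/when/then example is written first (and fails, since no production code exists), then unit-level TDD drives the implementation until the acceptance check passes.

```python
# Production code, driven out by the failing acceptance test below.
class Cart:
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_acceptance_checkout_total():
    # Given an empty cart
    cart = Cart()
    # When two items are added
    cart.add_item("book", 10.0)
    cart.add_item("pen", 2.5)
    # Then the total reflects both items
    assert cart.total() == 12.5

# Before Cart existed this test "happily failed"; now it passes and
# stays in the suite as a regression safety net for later sprints.
test_acceptance_checkout_total()
print("acceptance test passed")
```

In a real project the given/when/then steps would live in a Gherkin feature file bound to step definitions, but the shape of the loop is the same: failing acceptance test, failing unit tests, production code, green, next story.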
That's simple: release your software to production. Ensure that other testing activities (performance tests, chain tests, etc) are as much as possible automated and performed as part of the sprint.
The (Agile) Test Automation Engineer
In order to facilitate or boost this way of working, the role of the test automation engineer is key. The test automation engineer is defining the test architecture and facilitating the necessary infrastructure needed to run tests often and fast. He is interacting with developers to co-develop fixtures, to understand how the production code is built, and to decide upon the structure and granularity of the test code.
Apart from their valuable and unique analytical skills, relevant testers grow their technical skills. If it cannot be automated, it cannot be checked, so one might question whether the user story is ready to be committed to in a sprint. The test automation engineer helps the Scrum teams to identify when they are ‘ready to test’ and urges the product owner and business to specify requirements – at least for the next sprint ahead.
So: ready, test, go!!
Last week Gian Ntzik gave a great talk at the F#unctional Londoners meetup on the Nessos Streams library. It’s a lightweight F#/C# library for efficient functional-style pipelines on streams of data.
The main difference between LINQ/Seq and Streams is that LINQ is about composing external iterators (Enumerable/Enumerator) and Streams is based on the continuation-passing-style composition of internal iterators, which makes optimisations such as loop fusion easier.
Gian started the session by live coding a simple implementation of streams in about 20 minutes:
type Stream<'T> = ('T -> unit) -> unit

let inline ofArray (source: 'T[]) : Stream<'T> =
    fun k ->
        let mutable i = 0
        while i < source.Length do
            k source.[i]
            i <- i + 1

let inline filter (predicate: 'T -> bool) (stream: Stream<'T>) : Stream<'T> =
    fun k -> stream (fun value -> if predicate value then k value)

let inline map (mapF: 'T -> 'U) (stream: Stream<'T>) : Stream<'U> =
    fun k -> stream (fun v -> k (mapF v))

let inline iter (iterF: 'T -> unit) (stream: Stream<'T>) : unit =
    stream (fun v -> iterF v)

let inline toArray (stream: Stream<'T>) : 'T[] =
    let acc = new List<'T>()
    stream |> iter (fun v -> acc.Add(v))
    acc.ToArray()

let inline fold (foldF: 'State -> 'T -> 'State) (state: 'State) (stream: Stream<'T>) =
    let acc = ref state
    stream (fun v -> acc := foldF !acc v)
    !acc

let inline reduce (reducer: ^T -> ^T -> ^T) (stream: Stream< ^T >) : ^T
        when ^T : (static member Zero : ^T) =
    fold (fun s v -> reducer s v) LanguagePrimitives.GenericZero stream

let inline sum (stream: Stream< ^T >) : ^T
        when ^T : (static member Zero : ^T)
        and  ^T : (static member (+) : ^T * ^T -> ^T) =
    fold (+) LanguagePrimitives.GenericZero stream
and as you can see it’s only about 40 lines of code.
Just with this simple implementation, Gian was able to demonstrate a significant performance improvement over F#’s built-in Seq module for a simple pipeline:
#time // Turns on timing in F# Interactive

let data = [|1L..1000000L|]

let seqValue =
    data
    |> Seq.filter (fun x -> x % 2L = 0L)
    |> Seq.map (fun x -> x * x)
    |> Seq.sum
// Real: 00:00:00.252, CPU: 00:00:00.234, GC gen0: 0, gen1: 0, gen2: 0

let streamValue =
    data
    |> Stream.ofArray
    |> Stream.filter (fun x -> x % 2L = 0L)
    |> Stream.map (fun x -> x * x)
    |> Stream.sum
// Real: 00:00:00.119, CPU: 00:00:00.125, GC gen0: 0, gen1: 0, gen2: 0
Note that for operations over arrays, the F# Array module would be a more appropriate choice and is slightly faster:
let arrayValue =
    data
    |> Array.filter (fun x -> x % 2L = 0L)
    |> Array.map (fun x -> x * x)
    |> Array.sum
// Real: 00:00:00.094, CPU: 00:00:00.093, GC gen0: 0, gen1: 0, gen2: 0
Also, LINQ does quite well here as it has specialized overloads, including one for summing over int64 values:
open System.Linq

let linqValue =
    data
        .Where(fun x -> x % 2L = 0L)
        .Select(fun x -> x * x)
        .Sum()
// Real: 00:00:00.058, CPU: 00:00:00.062, GC gen0: 0, gen1: 0, gen2: 0
However with F# Interactive running in 64-bit mode Streams take back the advantage (thanks to Nick Palladinos for the tip):
let streamValue =
    data
    |> Stream.ofArray
    |> Stream.filter (fun x -> x % 2L = 0L)
    |> Stream.map (fun x -> x * x)
    |> Stream.sum
// Real: 00:00:00.033, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
Looks like the 64-bit JIT is doing some black magic there.
Switching to the full Nessos Streams library, there’s support for parallel streams via the ParStream module:
let parsStreamValue =
    data
    |> ParStream.ofArray
    |> ParStream.filter (fun x -> x % 2L = 0L)
    |> ParStream.map (fun x -> x + 1L)
    |> ParStream.sum
// Real: 00:00:00.069, CPU: 00:00:00.187, GC gen0: 0, gen1: 0, gen2: 0
which demonstrates a good performance increase with little effort.
For larger computations, Nessos Streams supports cloud-based parallel operations against Azure.
Overall Nessos Streams looks like a good alternative to the Seq module for functional pipelines.
For further optimization Gian recommended the Nessos LinqOptimizer:
An automatic query optimizer-compiler for Sequential and Parallel LINQ. LinqOptimizer compiles declarative LINQ queries into fast loop-based imperative code. The compiled code has fewer virtual calls and heap allocations, better data locality and speedups of up to 15x
The benchmarks are impressive:
Reactive Extensions (Rx)
One of the questions in the talk and on twitter later was, given Rx is also a push model, how does the performance compare?
Clearly the Nessos Streams library and Rx have different goals (data processing vs event processing), but I thought it would be interesting to compare them all the same:
open System.Reactive.Linq

let rxValue =
    data
        .ToObservable()
        .Where(fun x -> x % 2L = 0L)
        .Select(fun x -> x * x)
        .Sum()
        .ToEnumerable()
    |> Seq.head
// Real: 00:00:02.895, CPU: 00:00:02.843, GC gen0: 120, gen1: 0, gen2: 0

let streamValue =
    data
    |> Stream.ofArray
    |> Stream.filter (fun x -> x % 2L = 0L)
    |> Stream.map (fun x -> x * x)
    |> Stream.sum
// Real: 00:00:00.130, CPU: 00:00:00.109, GC gen0: 0, gen1: 0, gen2: 0
In this naive comparison you can see Nessos Streams is roughly 20 times faster than Rx.
F# also has a built-in Observable module for operations over IObservable<T> (support for operations over events was added to F# back in 2006). Based on the claims on Rx performance made by Matt Podwysocki I was curious to see how it stacked up:
let obsValue =
    data
    |> Observable.ofSeq
    |> Observable.filter (fun x -> x % 2L = 0L)
    |> Observable.map (fun x -> x * x)
    |> Observable.sum
    |> Observable.first
// Real: 00:00:00.479, CPU: 00:00:00.468, GC gen0: 18, gen1: 0, gen2: 0
As you can see, the Observable module comes off roughly 5 times faster.
Note: I had to add some simple combinators to make this work; you can see the full snippet here: http://fssnip.net/ow
To offer more seamless integration of Google products within your app, we’re excited to start the rollout of the latest version of Google Play services.
Google Play services 6.5 includes new features in Google Maps, Google Drive and Google Wallet as well as the recently launched Google Fit API. We are also providing developers with more granular control over which Google Play services APIs your app depends on to help you maintain a lean app.

Google Maps
We’re making it easier to get directions to places from your app! The Google Maps Android API now offers a map toolbar to let users open Google Maps and immediately get directions and turn by turn navigation to the selected marker. This map toolbar will show by default when you compile against Google Play services 6.5.
In addition, there is also a new ‘lite mode’ map option, ideal for situations where you want to provide a number of smaller maps, or a map that is so small that meaningful interaction is impractical, such as a thumbnail in a list. A lite mode map is a bitmap image of a map at a specified location and zoom level.
In lite mode, markers and shapes are drawn client-side on top of the static image, so you still have full control over them. Lite mode supports all of the map types, the My Location layer, and a subset of the functionality of a fully-interactive map. Users can tap on the map to launch Google Maps when they need more details.
The Google Maps Android API also exposes a new getMapAsync(OnMapReadyCallback) method to MapFragment and MapView which will notify you exactly when the map is ready. This serves as a replacement for the now deprecated getMap() method.
We’re also exposing the Google Maps for Android app intents available to your apps including displaying the map, searching, starting turn by turn navigation, and opening Street View so you can build upon the familiar and powerful maps already available on the device.

Drive
You can now add both public and application private custom file properties to a Drive file which can be used to build very efficient search queries and allow apps to save information which is guaranteed to persist across editing by other apps.
We’ve also made it even easier to make syncing your files to Drive both user and battery friendly with the ability to control when files are uploaded by network type or charging status and cancel pending uploads.

Google Wallet
In addition to the existing ‘Buy with Google’ button available to quickly purchase goods & services using Google Wallet, this release adds a ‘Donate with Google’ button for providing the same ease of use in collecting donations.

Google Fit
The Google Fit SDK was recently officially released as part of Google Play services and can be used to super-charge your fitness apps with a simple API for working with sensors, recording activity data, and reading the user’s aggregated fitness data.
In this release, we’ve made it easier for developers to add activity segments (predefined time periods of running, walking, cycling, etc) when inserting sessions, making it easy to support pauses or multiple activity type workouts. We’ll also be adding additional samples to help kick-start your Google Fit integration.

Granular Dependency Management
As we’ve continued to add more APIs across the wide range of Google services, it can be hard to maintain a lean app, particularly if you're only using a portion of the available APIs. Now with Google Play services 6.5, you’ll be able to depend only on a minimal common library and the exact APIs your app needs. This makes it very lightweight to get started with Google Play services.

SDK Coming Soon!
We’ll be rolling out Google Play services 6.5 over the next few days, and we’ll update this blog post, publish the documentation, and make the SDK available once the rollout completes.
To learn more about Google Play services and the APIs available to you through it, visit the Google Play Services section on the Android Developer site.
The release planning event in the Scaled Agile Framework Enterprise (SAFe) is a two-day program-level planning event that focuses the efforts of a group of teams to pursue a common vision and mission. As we have noted, the event includes participation by everyone involved in the Agile release train (ART); participation is in-person (if humanly possible); it occurs every 8–12 weeks; and it has a formal, structured agenda. The agenda has five major components:
Each of these subcomponents provides the team with an understanding of what they are being asked to do and the environment they are being asked to operate within. The flow begins by grounding the team in the business context for the program increment (the 8–12 weeks). Each step after the business context increases in technical detail.
The release planning meeting operationalizes a program increment. A program increment represents an 8-12 week planning horizon within a larger Agile Release Train. The large-scale planning event helps keep all of the teams involved in the ART synchronized. The release planning meeting might be SAFe’s special sauce.
Do we really need another messaging system? We might if it promises to move millions of messages a second, at small microsecond latencies between machines, with consistent response times, to large numbers of clients, using an innovative design.
And that’s the promise of Aeron (the Celtic god of battle, not the chair, though tell that to the search engines), a new high-performance open source message transport library from the team of Todd Montgomery, a multicast and reliable protocol expert; Richard Warburton, an expert on compiler optimizations; and Martin Thompson, the pasty-faced performance gangster.
The claim is that Aeron already beats the best products out there on throughput, and that its latency matches the best commercial products up to the 90th percentile. Aeron can push small 40-byte messages at 6 million messages a second, which is a very difficult case.
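To see why small messages are the hard case, here is a quick back-of-envelope calculation (my own, not from the talk): at 40-byte payloads the raw bandwidth is modest, so the difficulty is the fixed per-message cost, not the bytes.

```java
public class AeronBackOfEnvelope {
    public static void main(String[] args) {
        long msgsPerSec = 6_000_000L;   // claimed message rate
        long bytesPerMsg = 40L;         // the small-message case

        long bytesPerSec = msgsPerSec * bytesPerMsg;   // payload bytes per second
        double gbitPerSec = bytesPerSec * 8 / 1e9;     // payload bandwidth in Gbit/s
        double nsPerMsg = 1e9 / msgsPerSec;            // time budget per message

        System.out.println(bytesPerSec);  // 240000000 (only ~240 MB/s of payload)
        System.out.println(gbitPerSec);   // 1.92 (well under a 10GbE link)
        System.out.println(nsPerMsg);     // about 167 ns to frame, enqueue, and dispatch each message
    }
}
```

The bandwidth is under 2 Gbit/s, but a budget of roughly 167 nanoseconds per message leaves no room for locks, system calls, or allocation on the hot path.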
Here’s a talk Martin gave on Aeron at Strangeloop: Aeron: Open-source high-performance messaging. I’ll give a gloss of his talk and integrate the sources of information listed at the end of this article.
Martin and his team were in the enviable position of having a client that required a product like Aeron and was willing both to finance its development and to make it open source. So go git Aeron on GitHub. Note: it’s early days for Aeron and the team is still in the heavy optimization phase.
The world has changed, so endpoints need to scale as never before. This is why Martin says we need a new messaging system. It’s now a multi-everything world. We have multi-core, multi-socket, multi-cloud, multi-billion-user computing, where communication is happening all the time. Huge numbers of consumers regularly pound a channel to read from the same publisher, which causes lock contention and queueing effects, which in turn cause throughput to drop and latency to spike.
What’s needed is a new messaging library to make the most of this new world. The move to microservices only heightens the need:
As we move to a world of micro services then we need very low and predictable latency from our communications otherwise the coherence component of USL will come to rain fire and brimstone on our designs.
With Aeron the goal is to keep things pure and focused. The benchmarking we have done so far suggests a step forward in throughput and latency. What is quite unique is that you do not have to choose between throughput and latency. With other high-end messaging transports this is a distinct choice. The algorithms employed by Aeron give maximum throughput while minimising latency up until saturation.
“Many messaging products are a Swiss Army knife; Aeron is a scalpel,” says Martin, which is a good way to understand Aeron. It’s not a full featured messaging product in the way you may be used to, like Kafka. Aeron does not persist messages, it doesn’t support guaranteed delivery, nor clustering, nor does it support topics. Aeron won’t know if a client has crashed and be able to sync it back up from history or initialize a new client from history.
The best way to place Aeron in your mental matrix might be as a message oriented replacement for TCP, with higher level services written on top. Todd Montgomery expands on this idea:
Aeron being an ISO layer 4 protocol provides a number of things that messaging systems can't and also doesn't provide several things that some messaging systems do.... if that makes any sense. Let me explain slightly more wrt all typical messaging systems (not just Kafka and 0MQ).
One way to think more about where Aeron fits is TCP, but with the option of reliable multicast delivery. However, that is a little limited in that Aeron also, by design, has a number of possible uses that go well beyond what TCP can do. Here are a few things to consider:
Todd continues on with more detail, so please keep reading the article to see more on the subject.
At its core, Aeron is a replicated persistent log of messages. Through a very conscious design process, messages travel wait-free and zero-copy along the entire path from publication to reception. This means latency is very good and very predictable.
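To make the wait-free append concrete, here is a deliberately simplified, single-process sketch of my own (ToyLogBuffer is hypothetical, not Aeron’s API; real Aeron appends to memory-mapped term buffers, aligns frames, and commits with an ordered/release store so concurrent readers never observe a half-written frame). Each writer claims space in the log with one atomic add, so there is no CAS retry loop and no lock to queue behind:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch of wait-free, claim-then-commit log appending.
class ToyLogBuffer {
    private static final int HEADER = 4;              // 4-byte length prefix per frame
    private final ByteBuffer buffer;
    private final AtomicLong tail = new AtomicLong(); // next free offset in the log

    ToyLogBuffer(int capacity) {
        buffer = ByteBuffer.allocate(capacity);
    }

    /** Appends a message; returns its frame offset, or -1 if the log is full. */
    int append(byte[] msg) {
        int frame = HEADER + msg.length;
        long offset = tail.getAndAdd(frame);          // one atomic add claims the space: wait-free
        if (offset + frame > buffer.capacity()) {
            return -1;                                // toy version: no rotation to a fresh term
        }
        for (int i = 0; i < msg.length; i++) {        // write the payload into the claimed region
            buffer.put((int) offset + HEADER + i, msg[i]);
        }
        buffer.putInt((int) offset, msg.length);      // commit: the length header is written last
        return (int) offset;
    }

    /** Reads back the committed message at a frame offset. */
    byte[] read(int offset) {
        int len = buffer.getInt(offset);
        byte[] out = new byte[len];
        for (int i = 0; i < len; i++) {
            out[i] = buffer.get(offset + HEADER + i);
        }
        return out;
    }
}
```

The single atomic add per append is what lets many publishers write concurrently without queueing behind a lock, and backing the real log with a memory-mapped file is what lets the same structure serve as the persistence and replication unit.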
That sums up Aeron in a nutshell. It was created by an experienced team, using solid design principles sharpened on many previous projects and backed by techniques not everyone has in their tool chest. Every aspect has been well thought out to be clean, simple, highly performant, and highly concurrent.
If simplicity is indistinguishable from cleverness, then there’s a lot of cleverness going on in Aeron. Let’s see how they did it...
Let’s say you want to take your business to the Cloud -- how do you do it?
If you’re a small shop or a startup, it might be easy to just swipe your credit card and get going.
If, on the other hand, you’re a larger business that wants to start your journey to the Cloud, with a lot of investments and people that you need to bring along, you need a roadmap.
The roadmap will help you deal with setbacks, create confidence in the path, and help ensure that you can get from point A to point B (and that you know what point B actually is). By building an implementable roadmap for your business transformation, you can also build a coalition of the willing to help you get there faster. And you can design your roadmap so that your journey delivers business value continuously along the way.
In the book Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee share how top leaders build better roadmaps for their digital business transformation.

Why You Need to Build a Roadmap for Your Digital Transformation
If you had infinite time and resources, maybe you could just wing it, and hope for the best. A better approach is to have a roadmap as a baseline. Even if your roadmap changes, at least you can share the path with others in your organization and get them on board to help make it happen.
Via Leading Digital:
“In a perfect world, your digital transformation would deliver an unmatched customer experience, enjoy the industry’s most effective operations, and spawn innovative, new business models. There are a myriad of opportunities for digital technology to improve your business, and no company can entertain them all at once. The reality of limited resources, limited attention spans, and limited capacity for change will force focused choices. This is the aim of your roadmap.”

Find Your Entry Point
Your best starting point is a business capability that you want to exploit.
Via Leading Digital:
“Many companies have come to realize that before they can create a wholesale change within their organization, they have to find an entry point that will begin shifting the needle. How? They start by building a roadmap that leverages existing assets and capabilities. Burberry, for example, enjoyed a globally recognized brand and a fleet of flagship retail locations around the world. The company started by revitalizing its brand and customer experience in stores and online. Others, like Codelco, began with the core operational processes of their business. Caesars Entertainment combined strong capabilities in analytics with a culture of customer service to deliver a highly personalized guest experience. There is no single right way to start your digital transformation. What matters is that you find the existing capability--your sweet spot--that will get your company off the starting blocks.
Once your initial focus is clear, you can start designing your transformation roadmap. Which investments and activities are necessary to close the gap to your vision? What is predictable, and what isn’t? What is the timing and scheduling of each initiative? What are the dependencies between them? What organizational resources, such as analytics skills, are required?”

Engage Practitioners Early in the Design
If you involve others in your roadmap, you get their buy-in, and they will help you with your business transformation.
Via Leading Digital:
“Designing your roadmap will require input from a broad set of stakeholders. Rather than limit the discussion to the top team, engage the operational specialists who bring an on-the-ground perspective. This will minimize the traditional vision-to-execution gap. You can crowd-source the design. Or, you can use facilitated workshops, such as ‘digital days,’ as an effective way to capture and distill the priorities and information you will need to consider. We’ve seen several Digital Masters do both.
Make no mistake; designing your roadmap will take time, effort, and multiple iterations. But you will find it a valuable exercise. It forces agreement on priorities and helps align senior management and the people tasked to execute the program. Your roadmap will become more than just a document. If executed well, it can be the canvas of the transformation itself. Because your roadmap is a living document, it will evolve as your implementation progresses.”

Design for Business Outcome, Not Technology
When you create your roadmap, focus on the business outcomes. Think in terms of adding incremental business capabilities. Don’t make it a big-bang thing. Instead, start small, and iterate on building business capabilities that take advantage of Cloud, Mobile, Social, and Big Data technologies.
Via Leading Digital:
“Technology for its own sake is a common trap. Don’t build your roadmap as a series of technology projects. Technology is only part of the story in digital transformation and often the least challenging one. For example, the major hurdles for Enterprise 2.0 platforms are not technical. Deploying the platform is relatively straightforward, and today’s solutions are mature. The challenge lies in changing user behavior--encouraging adoption and sustaining engagement in the activities the platform is meant to enable.
Express your transformation roadmap in terms of business outcomes. For example, ‘Establish a 360-degree understanding of our customers.’ Build into your roadmap the many facets of organizational change that your transformation will require: customer experiences, operational processes, employee ways of working, organization, culture, communication--the list goes on. This is why contributions from a wide variety of people are so critical.”
There are lots of ways to build a roadmap, but the best thing you can do is put something down on paper so that you can share the path with other people and start getting feedback and buy-in.
You’ll be surprised: when you show business and IT leaders a roadmap, it helps turn strategy into execution and makes things real in people’s minds.