
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Auto Backup for Apps made simple

Android Developers Blog - 9 hours 1 min ago

Posted by Wojtek Kaliciński, Developer Advocate, Android

Auto Backup for Apps makes seamless app data backup and restore possible with zero lines of application code. This feature will be available on Android devices running the upcoming M release. All you need to do to enable it for your app is update the targetSdkVersion to 23. You can test it now on the M Developer Preview, where we’ve enabled Auto Backup for all apps regardless of targetSdkVersion.

Auto Backup for Apps is provided by Google to both users and developers at no charge. Even better, the backup data stored in Google Drive does not count against the user's quota. Please note that data transferred may still incur charges from the user's cellular / internet provider.


What is Auto-Backup for Apps?

By default, for users that have opted in to backup, all of the data files of an app are automatically copied out to a user’s Drive. That includes databases, shared preferences and other content in the application’s private directory, up to a limit of 25 megabytes per app. Any data residing in the locations denoted by Context.getCacheDir(), Context.getCodeCacheDir() and Context.getNoBackupFilesDir() is excluded from backup. As for files on external storage, only those in Context.getExternalFilesDir() are backed up.
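For example, a device-specific token that should never leave the device can simply be written below getNoBackupFilesDir(). A minimal sketch (the helper class and file name are made up for illustration, not part of the post):

import android.content.Context;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public final class DeviceTokenStore {

    // Hypothetical helper: files written to getNoBackupFilesDir() live in the
    // app's private storage but are excluded from Auto Backup.
    public static void saveDeviceToken(Context context, String token) throws IOException {
        File tokenFile = new File(context.getNoBackupFilesDir(), "device_token");
        try (FileOutputStream out = new FileOutputStream(tokenFile)) {
            out.write(token.getBytes(StandardCharsets.UTF_8));
        }
    }
}

Because everything else in the private directory is backed up by default, writing such a file under the no-backup directory is the cheapest way to opt a single piece of data out.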

How to control what is backed up

You can customize what app data is available for backup by creating a backup configuration file in the res/xml folder and referencing it in your app’s manifest:


<application
        android:fullBackupContent="@xml/mybackupscheme">

In the configuration file, specify the <include/> and <exclude/> rules you need to fine-tune the behavior of the default backup agent. A detailed explanation of the rules syntax is available in the documentation.

What to exclude from backup

You may not want to have certain app data eligible for backup. For such data, please use one of the mechanisms above. For example:

  • You must exclude any device specific identifiers, either issued by a server or generated on the device. This includes the Google Cloud Messaging (GCM) registration token which, when restored to another device, can render your app on that device unable to receive GCM messages.
  • Consider excluding account credentials or other sensitive information, e.g., by asking the user to reauthenticate the first time they launch a restored app rather than storing such information in the backup.

With such a diverse landscape of apps, it’s important that developers consider how to maximise the benefits to the user of automatic backups. The goal is to reduce the friction of setting up a new device, which in most cases means transferring over user preferences and locally saved content.

For example, if you have the user’s account stored in shared preferences so that it can be restored on install, they won’t even have to think about which account they previously signed in with - they can submit their password and get going!

If you support a variety of log-ins (Google Sign-In and other providers, username/password), it’s simple to keep track of which log-in method was used previously so the user doesn’t have to.
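As a rough illustration of that idea (the preference file and key names below are made up, not taken from the post), the last-used sign-in method can live in SharedPreferences, which sit in the app's private directory and are therefore backed up by default:

import android.content.Context;
import android.content.SharedPreferences;

public final class SignInPrefs {

    // Hypothetical names; anything stored here is included in Auto Backup.
    private static final String PREFS_NAME = "sign_in";
    private static final String KEY_LAST_METHOD = "last_sign_in_method";

    // Remember which provider was used, e.g. "google" or "password".
    public static void rememberMethod(Context context, String method) {
        SharedPreferences prefs =
                context.getSharedPreferences(PREFS_NAME, Context.MODE_PRIVATE);
        prefs.edit().putString(KEY_LAST_METHOD, method).apply();
    }

    // On a freshly restored install this already returns the previous choice.
    public static String lastMethod(Context context) {
        return context.getSharedPreferences(PREFS_NAME, Context.MODE_PRIVATE)
                .getString(KEY_LAST_METHOD, null);
    }
}

After a restore, lastMethod() can be used to pre-select the right sign-in option before the user has typed anything.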

Transitioning from key/value backups

If you have previously implemented the legacy, key/value backup by subclassing BackupAgent and setting it in your Manifest (android:backupAgent), you’re just one step away from transitioning to full-data backups. Simply add the android:fullBackupOnly="true" attribute on <application/>. This is ignored on pre-M versions of Android, meaning onBackup/onRestore will still be called, while on M+ devices it lets the system know you wish to use full-data backups while still providing your own BackupAgent.

You can use the same approach even if you’re not using key/value backups, but want to do any custom processing in onCreate(), onFullBackup() or be notified when a restore operation happens in onRestoreFinished(). Just remember to call super.onFullBackup() if you want to retain the system implementation of XML include/exclude rules handling.
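As a rough sketch (the class name is hypothetical; the agent would still be registered in the manifest via android:backupAgent, with android:fullBackupOnly="true" as described above), such an agent could look like this:

import android.app.backup.BackupAgent;
import android.app.backup.BackupDataInput;
import android.app.backup.BackupDataOutput;
import android.app.backup.FullBackupDataOutput;
import android.os.ParcelFileDescriptor;

import java.io.IOException;

public class MyBackupAgent extends BackupAgent {

    @Override
    public void onBackup(ParcelFileDescriptor oldState, BackupDataOutput data,
            ParcelFileDescriptor newState) throws IOException {
        // Legacy key/value backup; still called on pre-M devices.
    }

    @Override
    public void onRestore(BackupDataInput data, int appVersionCode,
            ParcelFileDescriptor newState) throws IOException {
        // Legacy key/value restore; still called on pre-M devices.
    }

    @Override
    public void onFullBackup(FullBackupDataOutput data) throws IOException {
        // Keep the system's XML include/exclude handling, then add any
        // custom processing you need.
        super.onFullBackup(data);
    }

    @Override
    public void onRestoreFinished() {
        // Called on M+ devices after a full-data restore has completed.
    }
}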

What is the backup/restore lifecycle?

The data restore happens as part of the package installation, before the user has a chance to launch your app. Backup runs at most once a day, when the device is charging and connected to Wi-Fi. If your app exceeds the data limit (currently 25 MB), no more backups will take place and the last saved snapshot will be used for subsequent restores. Your app’s process is killed after a full backup happens, and before a restore when you invoke one manually through the bmgr command (more about that below).

Test your apps now

Before you begin testing Auto Backup, make sure you have the latest M Developer Preview on your device or emulator. After you’ve installed your APK, use the adb shell command to access the bmgr tool.

Bmgr is a tool you can use to interact with the Backup Manager:

  • bmgr run schedules an immediate backup pass; you need to run this command once after installing your app on the device so that the Backup Manager has a chance to initialize properly.
  • bmgr fullbackup <packagename> starts a full-data backup operation.
  • bmgr restore <packagename> restores previously backed up data.

If you forget to invoke bmgr run, you might see errors in Logcat when trying the fullbackup and restore commands. If you are still having problems, make sure you have Backup enabled and a Google account set up in system Settings -> Backup & reset.

Learn more

You can find a sample application that shows how to use Auto Backup on our GitHub. The full documentation is available on developer.android.com.

Join the Android M Developer Preview Community on Google+ for more information on Android M features and remember to report any bugs you find with Auto Backup in the bug tracker.

Categories: Programming

What is Insight?

"A moment's insight is sometimes worth a life's experience." -- Oliver Wendell Holmes, Sr.

Some say we’re in the Age of Insight.  Others say insight is the new currency in the Digital Economy.

And still others say that insight is the backbone of innovation.

Either way, we use “insight” an awful lot without talking about what insight actually is.

So, what is insight?

I thought it was time to finally do a deeper dive on what insight actually is.  Here is my elaboration of “insight” on Sources of Insight:

Insight

You can think of it as “insight explained.”

The simple way that I think of insight, or those “ah ha” moments, is by remembering a question Ward Cunningham uses a lot:

“What did you learn that you didn’t expect?” or “What surprised you?”

Ward uses these questions to reveal insights, rather than have somebody tell him a bunch of obvious or uneventful things he already knows.  For example, if you ask somebody what they learned at their presentation training, they’ll tell you that they learned how to present more effectively, speak more confidently, and communicate their ideas better.

No kidding.

But if you instead ask them, “What did you learn that you didn’t expect?” they might actually reveal some insight and say something more like this:

“Even though we say don’t shoot the messenger all the time, you ARE the message.”

Or

“If you win the heart, the mind follows.”

It’s the non-obvious stuff that surprises you (at least at first).  Or sometimes, insight strikes us as something that should have been obvious all along and becomes the new obvious, or the new normal.

Ward used this insight-gathering technique to share software patterns more effectively.  He wanted stories and insights from people, rather than descriptions of the obvious.

I’ve used it myself over the years and it really helps get to deeper truths.  If you are a truth seeker or a lover of insights, you’ll enjoy how you can tease out more insights, just by changing your questions.   For example, if you have kids, don’t ask, “How was your day?”   Ask them, “What was your favorite part of your day?” or “What did you learn that surprised you?”

Wow, I know this is a short post, but I almost left without defining insight.

According to the dictionary, insight is “The capacity to gain an accurate and deep intuitive understanding of a person or thing.”   Or you may see insight explained as inner sight, mental vision, or wisdom.

I like Edward de Bono’s simple description of insight as “Eureka moments.”

Some people count steps in their day.  I count my “ah-ha” moments.  After all, the most important ingredient of effective ideation and innovation is …yep, you guessed it – insight!

For a deeper dive on the power of insight, read my page on Insight explained, on Sources Of Insight.com

Categories: Architecture, Programming

A Well Known But Forgotten Trick: Object Pooling

This is a guest repost by Alex Petrov. Find the original article here.

Most problems are quite straightforward to solve: when something is slow, you can either optimize it or parallelize it. When you hit a throughput barrier, you partition the workload across more workers. But when you face problems that involve Garbage Collection pauses, or simply hit the limits of the virtual machine you're working with, it gets much harder to fix them.

When you're working on top of a VM, you may face things that are simply out of your control, namely time drifts and latency. Thankfully, there are enough battle-tested solutions that require only a bit of understanding of how the JVM works.

If you can serve 10K requests per second within certain performance bounds (memory and CPU), it doesn't automatically mean that you'll be able to scale linearly up to 20K. If you're allocating too many objects on the heap, or wasting CPU cycles on work that can be avoided, you'll eventually hit the wall.

The simplest (yet underrated) way of saving on memory allocations is object pooling. Even though the concept sounds similar to pooling connections and socket descriptors, there's a slight difference.

When we're talking about socket descriptors, we have a limited, rather small (tens, hundreds, or at most thousands) number of descriptors to go through. These resources are pooled because of the high initialization cost (establishing a connection, performing a handshake over the network, memory-mapping a file, or whatever else). In this article we'll talk about pooling larger numbers of short-lived objects which are not so expensive to initialize, to save allocation and deallocation costs and avoid memory fragmentation.
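To make the idea concrete, here is a minimal, single-threaded pool sketch (my own illustration, not code from the article); a production version would add a size cap, thread safety and a reset hook for recycled instances:

import java.util.ArrayDeque;
import java.util.function.Supplier;

public final class ObjectPool<T> {

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public ObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Hand out a recycled instance if one is available, otherwise allocate.
    public T acquire() {
        T instance = free.poll();
        return instance != null ? instance : factory.get();
    }

    // Return an instance to the pool so the next acquire() can reuse it.
    public void release(T instance) {
        free.push(instance);
    }
}

Used as, for example, new ObjectPool<>(StringBuilder::new), the pool trades a little bookkeeping for fewer short-lived allocations and therefore less pressure on the garbage collector.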

Object Pooling
Categories: Architecture

Architecture-Centered ERP Systems in the Manufacturing Domain

Herding Cats - Glen Alleman - 14 hours 9 min ago

I found another paper, presented in a newspaper systems journal, on architecture in manufacturing and ERP.

One of the 12 Principles of Agile says: "The best architectures, requirements, and designs emerge from self-organizing teams." This is a developer's point of view of architecture. The architect's point of view looks like this:

Architectured Centered Design from Glen Alleman
Categories: Project Management

Great Review of Predicting the Unpredictable

Ryan Ripley “highly recommends” Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule. See his post: Pragmatic Agile Estimation: Predicting the Unpredictable.

He says this:

This is a practical book about the work of creating software and providing estimates when needed. Her estimation troubleshooting guide highlights many of the hidden issues with estimating such as: multitasking, student syndrome, using the wrong units to estimate, and trying to estimate things that are too big. — Ryan Ripley

Thank you, Ryan!

See Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule for more information.

Categories: Project Management

IT Risk Management

Herding Cats - Glen Alleman - Wed, 07/29/2015 - 02:46

I was sorting through a desk drawer and came across a collection of papers from book chapters and journals done in the early 2000's, when I was the architect of an early newspaper editorial system.

Here's one on Risk Management

Information Technology Risk Management from Glen Alleman. This work was done early in the risk management development process. Tim Lister's quote came later: "Risk management is how adults manage projects."
Categories: Project Management

The Definition of Done at Scale


While there is agreement that you should use DoD at scale, how to apply it is less clear.

The Definition of Done (DoD) is an important technique for increasing the operational effectiveness of team-level Agile. The DoD provides a team with a set of criteria that they can use to plan and bound their work. As Agile is scaled up to deliver larger, more integrated solutions the question that is often asked is whether the concept of the DoD can be applied. And if it is applied, does the application require another layer of done (more complexity)?

The answer to the first question is simple and straightforward. If the question is whether the Definition of Done technique can be used as Agile projects are scaled, then the answer is an unequivocal ‘yes’. In preparation for this essay I surveyed a few dozen practitioners and coaches on the topic to ensure that my use of the technique at scale wasn’t extraordinary. To a person, they all used the technique in some form. Mario Lucero, an Agile Coach in Chile (interviewed on SPaMCAST 334), said it succinctly, “No, the use of Definition of Done doesn’t depend on how large is the project.”

While everyone agreed that the DoD makes sense in a scaled Agile environment, there is far less consensus on how to apply the technique. The divergence of opinion and practice centered on whether or not the teams working together continually integrated their code as part of their build management process. There are two different camps. The first camp typically finds itself in organizations that integrate functions as a final step in a sprint, perform integration as a separate function outside of development, or use a separate hardening sprint. This camp generally feels that applying the Definition of Done requires a separate DoD specifically for integration. This DoD would include requirements for integrating functions, testing the integration, and architectural requirements that span teams. The second camp of respondents finds itself in environments where continuous integration is performed. In this scenario each respondent either added integration criteria to the team DoD or did nothing at all. The primary difference boiled down to whether the team members were responsible for making sure their code integrated with the overall system or whether someone else (real or perceived) was responsible.

In practice the way that DoD is applied includes a bit of the infamous “it depends” magic. During our discussion on the topic, Luc Bourgault from Wolters Kluwer stated, “in a perfect world the definition should be the same, but I think we should accept differences when it makes sense.” Pradeep Chennavajhula, Senior Global VP at QAI, made three points:

  1. Principles/characteristics of Definition of done do not change by size of the project.
  2. However, the considerations and detail will be certainly impacted.
  3. This may, however, create a perception that the Definition of Done varies by size of project.

The Definition of Done is useful for all Agile work, whether a single team or a large scaled effort. However, how you have organized your Agile effort will have more of an impact on your approach.


Categories: Process Management

SE-Radio Episode 233: Fangjin Yang on OLAP and the Druid Real-Time Analytical Data Store

Fangjin Yang, creator of the Druid real-time analytical database, talks with Robert Blumen. They discuss the OLAP (online analytical processing) domain, OLAP concepts (hypercube, dimension, metric, and pivot), types of OLAP queries (roll-up, drill-down, and slicing and dicing), use cases for OLAP by organizations, the OLAP store’s position in the enterprise workflow, what “real time” […]
Categories: Programming

Neo4j: MERGE’ing on super nodes

Mark Needham - Tue, 07/28/2015 - 22:04

In my continued playing with the Chicago crime data set I wanted to connect the crimes committed to their position in the FBI crime type hierarchy.

These are the sub graphs that I want to connect:


We have a ‘fbiCode’ on each ‘Crime’ node which indicates which ‘Crime Sub Category’ the crime belongs to.

I started with the following query to connect the nodes together:

MATCH (crime:Crime)
WITH crime SKIP {skip} LIMIT 10000
 
MATCH (subCat:SubCategory {code: crime.fbiCode})
MERGE (crime)-[:CATEGORY]->(subCat)
RETURN COUNT(*) AS crimesProcessed

I had this running inside a Python script which incremented ‘skip’ by 10,000 on each iteration as long as ‘crimesProcessed’ came back with a value > 0.

To start with the ‘CATEGORY’ relationships were being created very quickly but it slowed down quite noticeably about 1 million nodes in.

I profiled the queries but the query plans didn’t show anything obviously wrong. My suspicion was that I had a super node problem where the Cypher runtime was iterating through all of the sub category’s relationships to check whether one of them pointed to the crime on the other side of the ‘MERGE’ statement.

I cancelled the import job and wrote a query to check how many relationships each sub category had. It varied from 1,000 to 93,000, somewhat confirming my suspicion.

Michael suggested tweaking the query to use the shortestpath function to check for the existence of the relationship and then use the ‘CREATE’ clause to create it if it didn’t exist.

The neat thing about the shortestpath function is that it will start from the side with the lowest cardinality and as soon as it finds a relationship it will stop searching. Let’s have a look at that version of the query:

MATCH (crime:Crime)
WITH crime SKIP {skip} LIMIT 10000
MATCH (subCat:SubCategory {code: crime.fbiCode})
WITH crime, subCat, shortestPath((crime)-[:CATEGORY]->(subCat)) AS path
FOREACH(ignoreMe IN CASE WHEN path is NULL THEN [1] ELSE [] END |
  CREATE (crime)-[:CATEGORY]->(subCat))
RETURN COUNT(*)

This worked much better – 10,000 nodes processed in ~2.5 seconds – and the time remained constant as more relationships were added. This allowed me to create all the category relationships, but we can actually do even better if we use CREATE UNIQUE instead of MERGE:

MATCH (crime:Crime)
WITH crime SKIP {skip} LIMIT 10000
 
MATCH (subCat:SubCategory {code: crime.fbiCode})
CREATE UNIQUE (crime)-[:CATEGORY]->(subCat)
RETURN COUNT(*) AS crimesProcessed

Using this query, 10,000 nodes took ~250ms-900ms to process, which means we can process all the nodes in 5-6 minutes – good times!

I’m not super familiar with the ‘CREATE UNIQUE’ code so I’m not sure that it’s always a good substitute for ‘MERGE’ but on this occasion it does the job.

The lesson for me here is that if a query is taking longer than you think it should, try other constructs or a combination of other constructs and see whether things improve – they just might!

Categories: Programming

[New eBook] Download The No-nonsense Guide to App Growth

Android Developers Blog - Tue, 07/28/2015 - 21:26

Originally posted on the AdMob Blog.

What’s the secret to rapid growth for your app?

Play Store or App Store optimization? A sophisticated paid advertising strategy? A viral social media campaign?

While all of these strategies could help you grow your user base, the foundation for rapid growth is much more basic and fundamental—you need an engaging app.

This handbook will walk you through practical ways to increase your app’s user engagement to help you eventually transition to growth. You’ll learn how to:

  • Pick the right metric to represent user engagement
  • Look at data to audit your app and find areas to fix
  • Promote your app after you’ve reached a healthy level of user engagement

Download a free copy here.

For more tips on app monetization, be sure to stay connected on all things AdMob by following our Twitter and Google+ pages.

Posted by Raj Ajrawat, Product Specialist, AdMob

Categories: Programming

Python: Difference between two datetimes in milliseconds

Mark Needham - Tue, 07/28/2015 - 21:05

I’ve been doing a bit of ad hoc measurement of some Cypher queries executed via py2neo and wanted to work out how many milliseconds each query was taking end to end.

I thought there’d be an obvious way of doing this but if there is it’s evaded me so far, and I ended up calculating the difference between two datetime objects, which gave me the following timedelta object:

>>> import datetime
>>> start = datetime.datetime.now()
>>> end = datetime.datetime.now()
 
>>> end - start
datetime.timedelta(0, 3, 519319)

The 3 parts of this object are ‘days’, ‘seconds’ and ‘microseconds’ which I found quite strange!

These are the methods/attributes we have available to us:

>>> dir(end - start)
['__abs__', '__add__', '__class__', '__delattr__', '__div__', '__doc__', '__eq__', '__floordiv__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__mul__', '__ne__', '__neg__', '__new__', '__nonzero__', '__pos__', '__radd__', '__rdiv__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rmul__', '__rsub__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', 'days', 'max', 'microseconds', 'min', 'resolution', 'seconds', 'total_seconds']

There’s no ‘milliseconds’ on there so we’ll have to calculate it from what we do have:

>>> diff = end - start
>>> elapsed_ms = (diff.days * 86400000) + (diff.seconds * 1000) + (diff.microseconds / 1000)
 
>>> elapsed_ms
3519

Or we could do the following slightly simpler calculation:

>>> diff.total_seconds() * 1000
3519.319

And now back to the query profiling!

Categories: Programming

Estimates on Split Stories Do Not Need to Equal the Original

Mike Cohn's Blog - Tue, 07/28/2015 - 15:00

It is good practice to first write large user stories (commonly known as epics) and then to split them into smaller pieces, a process known as product backlog refinement or grooming. When product backlog items are split, they are often re-estimated.

I’m often asked if the sum of the estimates on the smaller stories must equal the estimate on the original, larger story.

No.

Part of the reason for splitting the stories is to understand them better. Team members discuss the story with the product owner. As a product owner clarifies a user story, the team will know more about the work they are to do.

That improved knowledge should be reflected in any estimates they provide. If those estimates don’t sum to the same value as the original story, so be it.

But What About the Burndown?

But, I hear you asking, what about the release burndown chart? A boss, client or customer was told that a story was equal to 20 points. Now that the team split it apart, it’s become bigger.

Well, first, and I always feel compelled to say this: We should always stress to our bosses, clients and customers that estimates are estimates and not commitments.

When we told them the story would be 20 points, that meant perhaps 20, perhaps 15, perhaps 25. Perhaps even 10 or 40 if things went particularly well or poorly.

OK, you’ve probably delivered that message, and it may have gone in one ear and out the other of your boss, client or customer. So here’s something else you should be doing that can protect you against a story becoming larger when split and its parts are re-estimated.

I’ve always written and trained that the numbers in Planning Poker are best thought of as buckets of water.

You have, for example, an 8 and a 13 but not a 10 card. If you have a story that you think is a 10, you need to estimate it as a 13. This slight rounding up (which only occurs on medium to large numbers) will mitigate the effect of stories becoming larger when split.

Consider the example of a story a team thinks is a 15. If they play Planning Poker the way I recommend, they will call that large story a 20.

Later, they split it into multiple smaller stories. Let’s say they split it into stories they estimate as 8, 8 and 5. That’s 21. That’s significantly larger than the 15 they really thought it was, but not much larger at all than the 20 they put on the story.

In practice, I’ve found this slight pessimistic bias works well to counter the natural tendency I believe many developers have to underestimate, and to provide a balance against those who will be overly shocked when an actual overruns its estimate.

Why Guessing is not Estimating and Estimating is not Guessing

Herding Cats - Glen Alleman - Mon, 07/27/2015 - 19:00

I hear all the time that estimating is the same as guessing. This is not true mathematically, nor is it true business-process wise. Guessing is the approach used by many who do not understand that making decisions in the presence of uncertainty requires that we understand the impact of that decision. When that future is uncertain, we need to know that impact in probabilistic terms. And with this comes confidence, precision, and accuracy of the estimate.

What’s the difference between estimate and guess? The distinction between the two words is one of the degree of care taken in arriving at a conclusion.

The word estimate is derived from the Latin word aestimare, meaning to value. The term is the origin of estimable, which means capable of being estimated or worthy of esteem, and of course esteem, which means regard, as in high regard.

To estimate means to judge the extent, nature, or value of something - connected to regard, as in "he is held in high regard" - with the implication that the result is based on expertise or familiarity. An estimate is the resulting calculation or judgment. A related term is approximation, meaning close or near.

In between a guess and an estimate is an educated guess, a more casual estimate. An idiomatic term for this type of middle-ground conclusion is ballpark figure. The origin of this American English idiom, which alludes to a baseball stadium, is not certain, but one conclusion is that it is related to in the ballpark, meaning close in the sense that one at such a location may not be in a precise location but is in the stadium.

To guess is to believe or suppose, to form an opinion based on little or no evidence, or to be correct by chance or conjecture. A guess is a thought or idea arrived at by one of these methods. Synonyms for guess include conjecture and surmise, which like guess can be employed both as verbs and as nouns.

We could have a hunch or an intuition, or we can engage in guesswork or speculation. Dead reckoning is the same thing as guesswork, although dead reckoning originally referred to a navigation process based on reliable information. Near synonyms describing thoughts or ideas developed with more rigor include hypothesis and supposition, as well as theory and thesis.

A guess is a casual, perhaps spontaneous conclusion. An estimate is based on intentional  thought processes supported by data.

What Does This Mean For Projects?

If we're guessing, we're drawing uninformed conclusions, usually in the absence of data, experience, or any evidence of credibility. If we're estimating, we are drawing informed conclusions based on data, past performance, and models - including Monte Carlo models and parametric models.

When we hear that decisions can be made without estimates, or that all estimating is guessing, we now know - mathematically and as a business process - that neither of these is true.

This post is derived from Daily Writing Tips 

Related articles:
  • Making Conjectures Without Testable Outcomes
  • Strategy is Not the Same as Operational Effectiveness
  • Are Estimates Really The Smell of Dysfunction?
  • Information Technology Estimating Quality
Categories: Project Management

Algolia's Fury Road to a Worldwide API Part 3

The most frequent questions we answer for developers and devops are about our architecture and how we achieve such high availability. Some of them are very skeptical about high availability with bare metal servers, while others are skeptical about how we distribute data worldwide. However, the question I prefer is “How is it possible for a startup to build an infrastructure like this?” It is true that our current architecture is impressive for a young company:

  • Our high-end dedicated machines are hosted in 13 worldwide regions with 25 data-centers

  • Our master-master setup replicates our search engine on at least 3 different machines

  • We process over 6 billion queries per month

  • We receive and handle over 20 billion write operations per month

Just like Rome, our infrastructure wasn't built in a day. This series of posts will explore the 15 instrumental steps we took when building our infrastructure. I will even discuss our outages and bugs, in order for you to understand how we used them to improve our architecture.

The first blog post of this series focused on our early days in beta and the second post on the first 18 months of the service, including our first outages. In this last post, I will describe how we transformed our "startup" architecture into something new that was able to meet the expectations of big public companies.

Step 11: February 2015 Launch of our Synchronized Worldwide infrastructure
Categories: Architecture

The monolithic frontend in the microservices architecture

Xebia Blog - Mon, 07/27/2015 - 16:39

When you are implementing a microservices architecture you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can’t be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each functionality. Your former monolith has been carved down to the bare essentials for providing the user interface, which is your public-facing web frontend. This microservice only has one functionality, which is providing the user interface. It can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However, you’re not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

Single frontend

Backend teams can't deliver business value without the frontend being updated, since an API without a user interface doesn't do much. More backend teams means more new features, and therefore more pressure on the frontend team(s) to integrate new features. To compensate for this it is possible to make the frontend team bigger or have multiple teams working on the same project. But because the frontend still has to be deployed in one go, teams cannot work independently: changes have to be integrated in the same project and the whole project needs to be tested, since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend and submit a pull request. This helps in dividing the work, but to do this effectively a lot of knowledge has to be shared across the teams to keep the code consistent and at the same quality level. This would basically mean that the teams are not working independently. With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Besides not being able to scale, there is also the classical overhead of separate backend and frontend teams. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated to ensure your customers can even use the feature. If you have a frontend small enough, it can be maintained by a team which is also responsible for one or more of the services coupled to the frontend. In that case there is no overhead in cross-team communication. But because the frontend and the backend cannot be worked on independently, you are not really doing microservices. For an application which is small enough to be maintained by a single team, it is probably a good idea not to do microservices.

If you do have multiple teams working on your platform, but you also have multiple smaller frontend applications, there is no problem. Each frontend acts as the interface to one or more services. Each of these services has its own persistence layer. This is known as vertical decomposition. See the image below.

frontend-per-service

When splitting up your application you have to make sure you are making the right split, which is the same as for the backend services. First you have to recognize bounded contexts in which your domain can be split. A bounded context is a partition of the domain model with a clear boundary. Within the bounded context there is high coupling and between different bounded contexts there is low coupling. These bounded contexts will be mapped to micro services within your application. This way the communication between services is also limited. In other words you limit your API surface. This in turn will limit the need to make changes in the API and ensure truly separately operating teams.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained, and the application should behave as a single application. However, the application and the development team are big enough to justify a microservices architecture. Examples of such big client-facing applications can be found in online retail, news, social networks or other online platforms.

Although a total split of your application might not be possible, it might be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely, you split it up into components which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However, you then risk coupling services via the common code. This could even result in not being able to deploy and release separately. It will also require some coordination regarding the shared code.

Therefore, when you are going to share code it is generally a good idea to think about the API that it’s going to provide. Calling your shared library “common”, for example, is generally a bad idea. The name suggests developers should put any code which can be shared by some other service in the library. Common is not a functional term, but a technical term. This means that the library doesn’t focus on providing a specific functionality. This will result in an API without a specific goal, which will be subject to frequent change. This is especially bad for microservices, where multiple teams have to migrate to the new version when the API has been broken.

Although sharing code between microservices has disadvantages, generally all microservices will share code by using open source libraries. Because this code is used by a lot of projects, special care is taken not to break compatibility. When you’re going to share code, it is a good idea to hold your own shared code to the same standards. When your library is not specific to your business, you might as well release it publicly to encourage yourself to think twice about breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independent of each other. Again it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

composite-design

Admittedly this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be deployable fully independently, and you want to make a clean separation which ensures there is no, or only limited, two-way communication needed between the components.

It is possible to integrate during development, deployment or at runtime. At each of these integration stages there are different tradeoffs between flexibility and consistency. If you want to have separate deployment pipelines for your components, you want to have a more flexible approach like runtime integration. If it is more likely different versions of components might break functionality, you need more consistency. You would get this at development time integration. Integration at deployment time could give you the same flexibility as runtime integration, if you are able to integrate different versions of components on different environments of your build pipeline. However this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development-time integration. However, it doesn't give you much flexibility with regard to separate deployment. It is still a classical integration technique. But since software architecture should never be a goal, but a means to an end, it can be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load the html and other dependencies of a component. Then the main application only needs to know where to retrieve the component from. This is a good example of a small API surface. Of course, doing a request after page load means that users might see components loading. It also means that clients that don’t execute javascript will not see the content at all. Examples are bots/spiders that don’t execute javascript, real users who are blocking javascript, or users of a screen reader that doesn’t execute javascript.

When runtime integration via javascript is not an option it is also possible to integrate components using a middleware layer. This layer fetches the html of the different components and composes them into a full page before returning the page to the client. This means that clients will always retrieve all of the html at once. An example of such middleware are the Edge Side Includes of Varnish. To get more flexibility it is also possible to implement a server which does this yourself. An open source example of such a server is Compoxure.

Once you have your composite frontend up and running, you can start to think about the next step: optimization. Having separate components from different sources means that many resources have to be retrieved by the client. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine resources. Again this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: Split into bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on single big frontend it might be difficult to make this decomposition, but when you want to deliver faster by using multiple teams working on a microservices architecture, you cannot exclude the frontend from decomposition.

Resources

Sam Newman - From Macro to Micro: How Big Should Your Services Be?
Dan North - Microservices: software that fits in your head

You Don’t Have to Ask Permission

Making the Complex Simple - John Sonmez - Mon, 07/27/2015 - 16:00

For a long time, one of the major things that held me back in life was thinking I needed to ask permission to do something or be someone. I lived with a mentality that allowed others to limit and define my potential. I allowed other people to tell me who I was, what I was […]

The post You Don’t Have to Ask Permission appeared first on Simple Programmer.

Categories: Programming

Software Development Linkopedia July 2015

From the Editor of Methods & Tools - Mon, 07/27/2015 - 14:50
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about Agile retrospectives, remote teams, Agile testing, Cloud architecture, exploratory testing, software entropy, introverted software developers and Scrum myths. Blog: 7 Best Practices for Facilitating Agile Retrospectives Blog: How Pairing Powers […]

More Material Design with Topeka for Android

Android Developers Blog - Mon, 07/27/2015 - 12:28
Posted by Ben Weiss, Developer Programs Engineer

Update 27th July 2015:
The Design Support Library is now available, simplifying the implementation of elements like the Floating Action Button; check out the post for details.

Original Post:
Material design is a new system for visual, interaction and motion design. We originally launched the Topeka web app as an Open Source example of material design on the web.
Today, we’re publishing a new material design example: the Android version of Topeka. It demonstrates that the same branding and material design principles can be used to create a consistent experience across platforms. Grab the code today on GitHub.
The juicy bits

While the project demonstrates a lot of different aspects of material design, let’s take a quick look at some of the most interesting bits.
Transitions

Topeka for Android features several possibilities for transition implementation. For starters the Transitions API within ActivityOptions provides an easy, yet effective way to make great transitions between Activities.
To achieve this, we register the shared string in a resources file like this:
<resources>
    <string name="transition_avatar">AvatarTransition</string>
</resources>
Then we use it within the source’s and target’s view as transitionName
<ImageView
    android:id="@+id/avatar"
    android:layout_width="@dimen/avatar_size"
    android:layout_height="@dimen/avatar_size"
    android:layout_marginEnd="@dimen/keyline_16"
    android:transitionName="@string/transition_avatar"/>
And then make the actual transition happen within SignInFragment.
private void performSignInWithTransition(View v) {
    Activity activity = getActivity();
    ActivityOptions activityOptions = ActivityOptions
            .makeSceneTransitionAnimation(activity, v,
                    activity.getString(R.string.transition_avatar));
    CategorySelectionActivity.start(activity, mPlayer, activityOptions);
    activity.finishAfterTransition();
}
For multiple transition participants with ActivityOptions you can take a look at the CategorySelectionFragment.
Animations

When it comes to more complex animations you can orchestrate your own animations as we did for scoring.
To get this right it is important to make sure all elements are carefully choreographed. The AbsQuizView class performs a handful of carefully crafted animations when a question has been answered:
The animation starts with a color change for the floating action button, depending on the provided answer. After this has finished, the button shrinks out of view with a scale animation. The view holding the question itself also moves offscreen. We scale this view to a small green square before sliding it up behind the app bar. During the scaling the foreground of the view changes color to match the color of the fab that just disappeared. This establishes continuity across the various quiz question states.
All this takes place in less than a second’s time. We introduced a number of minor pauses (start delays) to keep the animation from being too overwhelming, while ensuring it’s still fast.
The code responsible for this exists within AbsQuizView’s performScoreAnimation method.
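The real choreography lives in performScoreAnimation, but the chaining pattern described above looks roughly like this (the view names, durations and delays are made up for illustration, not Topeka's actual values):

// 'fab' and 'questionContainer' are hypothetical views used for illustration.
fab.animate()
        .scaleX(0f).scaleY(0f)
        .setStartDelay(100)
        .setDuration(200)
        .withEndAction(new Runnable() {
            @Override
            public void run() {
                // Only starts once the FAB has finished shrinking.
                questionContainer.animate()
                        .scaleY(0.1f)
                        .translationY(-questionContainer.getHeight())
                        .setDuration(300)
                        .start();
            }
        })
        .start();

Chaining through withEndAction (or an AnimatorSet) makes each step wait for the previous one, while the start delays create the small pauses mentioned above.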
FAB placement

The recently announced Floating Action Buttons are great for executing promoted actions. In the case of Topeka, we use one to submit an answer. The FAB also straddles two surfaces with variable heights, like this:
To achieve this we query the height of the top view (R.id.question_view) and then set padding on the FloatingActionButton once the view hierarchy has been laid out:
private void addFloatingActionButton() {
    final int fabSize = getResources().getDimensionPixelSize(R.dimen.fab_size);
    int bottomOfQuestionView = findViewById(R.id.question_view).getBottom();
    final LayoutParams fabLayoutParams = new LayoutParams(fabSize, fabSize,
            Gravity.END | Gravity.TOP);
    final int fabPadding = getResources().getDimensionPixelSize(R.dimen.padding_fab);
    final int halfAFab = fabSize / 2;
    fabLayoutParams.setMargins(0, // left
        bottomOfQuestionView - halfAFab, //top
        0, // right
        fabPadding); // bottom
    addView(mSubmitAnswer, fabLayoutParams);
}
To make sure that this only happens after the initial layout, we use an OnLayoutChangeListener in the AbsQuizView’s constructor:
addOnLayoutChangeListener(new OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int l, int t, int r, int b,
            int oldLeft, int oldTop, int oldRight, int oldBottom) {
        removeOnLayoutChangeListener(this);
        addFloatingActionButton();
    }
});
Round OutlineProvider

Creating circular masks on API 21 onward is now really simple. Just extend the ViewOutlineProvider class and override the getOutline() method like this:
@Override
public final void getOutline(View view, Outline outline) {
    final int size = view.getResources()
            .getDimensionPixelSize(R.dimen.view_size);
    outline.setOval(0, 0, size, size);
}
and setClipToOutline(true) on the target view in order to get the right shadow shape.
Check out more details within the outlineprovider package within Topeka for Android.
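For completeness, the provider still has to be attached to the view that should be clipped; a tiny sketch, assuming the override above lives in a ViewOutlineProvider subclass called RoundOutlineProvider and the target view is avatarView (both names made up):

// Clip the view to the oval outline so its shape and shadow follow the mask.
avatarView.setOutlineProvider(new RoundOutlineProvider());
avatarView.setClipToOutline(true);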
Vector Drawables

We use vector drawables to display icons in several places throughout the app. You might be aware of our collection of Material Design Icons on GitHub, which contains about 750 icons for you to use. The best thing for Android developers: as of Lollipop you can use these VectorDrawables within your apps, so they will look crisp no matter the density of the device’s screen. For example, the back arrow ic_arrow_back from the icons repository has been adapted to Android’s vector drawable format.
<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="48"
    android:viewportHeight="48">
    <path
        android:pathData="M40 22H15.66l11.17-11.17L24 8 8 24l16 16 2.83-2.83L15.66 26H40v-4z"
        android:fillColor="?android:attr/textColorPrimary" />
</vector>
The vector drawable only has to be stored once within the res/drawable folder. This means less disk space is being used for drawable assets.
Property Animations

Did you know that you can easily animate any property of a View beyond the standard transformations offered by the ViewPropertyAnimator class (and its handy View#animate syntax)? For example, in AbsQuizView we define a property for animating the view’s foreground color.
// Property for animating the foreground
public static final Property<FrameLayout, Integer> FOREGROUND_COLOR =
        new IntProperty<FrameLayout>("foregroundColor") {

            @Override
            public void setValue(FrameLayout layout, int value) {
                if (layout.getForeground() instanceof ColorDrawable) {
                    ((ColorDrawable) layout.getForeground()).setColor(value);
                } else {
                    layout.setForeground(new ColorDrawable(value));
                }
            }

            @Override
            public Integer get(FrameLayout layout) {
                return ((ColorDrawable) layout.getForeground()).getColor();
            }
        };
This can later be used to animate changes to said foreground color from one value to another like this:
final ObjectAnimator foregroundAnimator = ObjectAnimator
        .ofArgb(this, FOREGROUND_COLOR, Color.WHITE, backgroundColor);
This is not particularly new, as it was added with API 12, but it can still come in quite handy when you want to animate color changes in an easy fashion.
Tests

In addition to exemplifying material design components, Topeka for Android also features a set of unit and instrumentation tests that utilize the new testing APIs, namely “Gradle Unit Test Support” and the “Android Testing Support Library.” The implemented tests make the app resilient against changes to the data model. This catches breakages early, gives you more confidence in your code and allows for easy refactoring. Take a look at the androidTest and test folders for more details on how these tests are implemented within Topeka. For a deeper dive into Testing on Android, start reading about the Testing Tools.
What’s next?

With Topeka for Android, you can see how material design lets you create a more consistent experience across Android and the web. The project also highlights some of the best material design features of the Android 5.0 SDK and the new Android Design Library.
While the project currently only supports API 21+, there’s already a feature request open to support earlier versions, using tools like AppCompat and the new Android Design Support Library.
Have a look at the project and let us know in the project issue tracker if you’d like to contribute, or on Google+ or Twitter if you have questions.
Join the discussion on +Android Developers.
Categories: Programming

Super fast unit test execution with WallabyJS

Xebia Blog - Mon, 07/27/2015 - 11:24

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to have a coverage percentage near 100%, which has led to 4000+ unit tests. These include service specs and view specs. You may know that AngularJS - when abused a bit - is not suited for super large applications, but since we tamed the beast and have an application with more than 16,000 lines of high-performing AngularJS code, we want to stay in charge of the total development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8Ghz MacBook Pro.

We are testing our code continuously, so we came up with a solution to split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization later on this blog. Sharding helped us a lot when we need to run the full unit test suite, i.e. when using it in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).

With such a long unit test cycle, even when running in parallel, many of our developers are fdescribe-ing the specs on which they are working, so that the feedback is instant. However, this is quite labor intensive and sometimes an fdescribe is pushed accidentally.

And then.... we discovered WallabyJS. It is just an ordinary test runner like Karma. Even the configuration file is almost a copy of our karma.conf.js.
The difference is in the details. Out of the box it runs the unit test suite in 50 secs, thanks to the extensive use of Web Workers. Then the fun starts.

Screenshot of Wallaby In action (IntelliJ). Shamelessly grabbed from wallaby.com

I use Wallaby as an IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange gives me partly covered code and grey means "please write a test for this functionality or I will introduce hard-to-find bugs". Colorblind people see just kale green squares on every line, since the default colors are not chosen very well, but these colors are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that induce the coverage. When a test fails, it also tells me why.


A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage goes a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby is running your tests continuously and re-runs only the relevant tests for the parts that you are working on. In real time.

5 Reasons why you should test your code

Xebia Blog - Mon, 07/27/2015 - 09:37

It is just like in mathematics class: when I had to write a proof for Thales’ theorem I wrote “Can’t you see that B has a right angle?! Q.E.D.”, but the teacher still gave me an F grade.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. Ok, let’s fix it, keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks when you are looking the other way.
If you had had tests with 100% code coverage, a red error would have appeared in the console or – even better – a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.


100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs do a fair amount of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified, mainly conditions flipped, for loops made shorter/longer, etc. The test suite is run a second time. If tests fail after these modifications have been made, an assertion covers that case, which is good.
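To make that concrete, here is a small illustration, written in Java with JUnit only because the idea is language-agnostic (the rule and values are made up): flipping >= to > is a typical mutation, and only a test that checks the boundary will kill that mutant.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DiscountTest {

    // Production rule under test, inlined here for the example.
    static boolean isEligible(int age) {
        return age >= 65; // a mutation tool might turn this into 'age > 65'
    }

    @Test
    public void killsTheBoundaryMutant() {
        // Both assertions are needed: without the first one the mutated
        // version (age > 65) would still pass, despite 100% line coverage.
        assertTrue(isEligible(65));
        assertFalse(isEligible(64));
    }
}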
However, 100% coverage does feel really good if you are an OCD-person.

The better your test coverage and assertion density are, the higher the probability of catching regression bugs. Especially when an application grows, you may catch a lot of regression bugs during development, which is good.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006 and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may make changes to this line. Not because he is not funny, but because he just does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal 2 years later.

Still, every application contains bugs which you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, i.e. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces the bug, then fix the bug. This will never happen again. You win.

2. Improve the implementation via new insights

“Premature optimization is the root of all evil” is something you hear a lot. This does not mean that you have to implement your solution pragmatically, without code reuse.

Good software craftsmanship is not only about solving a problem effectively, but also about maintainability, durability, performance and architecture. Tests can help you with this. They force you to slow down and think.

If you start writing your tests and you have trouble with it, this may be an indication that your implementation can be improved. Furthermore, your tests make you think about input and output, corner cases and dependencies. So do you think that you understand all aspects of the super method you wrote that can handle everything? Write tests for this method and better code is guaranteed.

Test Driven Development even helps you optimize your code before you write it, but that is another discussion.

3. It saves time, really

The number one excuse not to write tests is that you do not have time for it, or that your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate-code-elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you probably would say yes. A total codebase refactoring is a ‘no go’ because you cannot foresee which parts of your application will fail. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time, which you could have used for writing the tests. But you had no time for writing tests, right? So your crappy implementation stays.

Dilbert bugfix

A bug can always be introduced, even with well-refactored code. How many times did you say to yourself, after a day of hard work, that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs introduced are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests. It saves you time and developing becomes more fun, as you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, and some read the tests. What I like about the tests, for example when you are using a framework like Jasmine, is that they give a structured overview of all of a method's features. When you have a separate documentation file, it can be as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation and forget to update it when a method signature changes, and eventually they stop writing docs.

Developers also do not like to write tests, but tests at least serve more purposes than docs. If you are using the test suite as documentation, your documentation is always up to date with no extra effort!

5. It is fun

Nowadays there are no testers and developers. The developers are the testers. People that write good tests are also the best programmers. Actually, your test is also a program. So if you like programming, you should like writing tests.
The reason why writing tests may feel non-productive is that it gives you the idea that you are not producing something new.


Is the build red? Fix it immediately!

However, with the modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp. They may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production is automatically done when the tests pass and everything else is OK. With tests you have more confidence that your code is production ready.

A lot of measurements can be generated as well, like coverage and mutation testing scores, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, it is first priority to fix it, to keep the codebase in good shape. It takes some discipline, but when you get used to it, you have more fun developing new features and making cool stuff.