Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Discover and celebrate the best local games at Indonesia Games Contest

Android Developers Blog - Wed, 01/25/2017 - 05:00

Posted by David Yin, Business Development Manager, Indonesia, Google Play.

It is a great time to be a mobile game developer on Android, with the opportunity to reach more than a billion global users on Google Play. At the same time, developers in fast-growing mobile markets like Indonesia have an additional opportunity in the form of a huge local audience that is hungry for local content. We have already seen thousands of Indonesian developers launch high-quality, locally relevant games for this new audience, such as "Tahu Bulat" and "Tebak Gambar".

In our continuous quest to discover, nurture, and showcase the best games from Indonesia, we are really happy to announce the Indonesia Games Contest. This contest celebrates the passion and great potential of local game developers, and provides an opportunity to raise awareness of your game with global and local industry experts, as well as gamers from across Indonesia. It's also a chance to showcase your creativity and win cool prizes.
Entering the contest

The contest is only open to developers based in Indonesia who have published a new game on Google Play after 1 January 2016. Make sure to visit our contest website for the full list of eligibility criteria and terms. A quick summary of the process is below:
  1. If you are eligible, submit your game by 19 March 2017.
  2. Entries will be reviewed by the Google Play team and industry experts, and up to 15 finalists will be announced in early April 2017.
  3. The finalists will get to showcase their games at the final event in Jakarta on 26 April 2017.
  4. The winner and runners-up will be announced at the final event.
To get started

Visit our contest website to find out more about the contest and submit your game.
Terima Kasih!


Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Wed, 01/25/2017 - 03:06

On a long enough timeline the survival rate of everyone drops to zero

Categories: Project Management

Product Owner: Information Conduit or Networker

Product owners play a pivotal role in every Agile project, whether or not someone formally fills the role. Since the role is so critical, it should be easy to define; however, most definitions quickly devolve into a list of responsibilities that depend on implementation and scale. One of the most critical, and often most variable, parts of the role is acting as the interface between the team and the outside world. The outside world includes users, customers, stakeholders, sponsors, and sometimes other product owners and teams. We can visualize the interface role on a continuum: at one end the product owner acts as a highly structured conduit for information from outside the team; at the other end, as a facilitator of conversations.

The conduit role guides and shapes the flow of information to the team. A product owner who interprets the role this way gathers and parses information for the team. Where the product owner has great personal charisma and/or power, they often have enormous reach, which opens up all sorts of information sources that would not otherwise be available.

The conduit interpretation of the product owner role is risky for at least two reasons. First, because the product owner gathers and parses all of the information the team pulls in, his or her natural biases will affect what the team hears. If the product owner's interpretation is wrong, so is the rest of the team's. Second, any process flow that must go through a single point is apt to become a bottleneck. Information bottlenecks reduce the efficiency of the team, and if the team gets too frustrated with the flow of information they will potentially go around the bottleneck or just make up answers, which reduces team effectiveness.

The facilitator interpretation of the role is the best scenario. In this interpretation of the interface role, the product owner helps anyone on the team network their way to the source of the information they need. In theory this is wonderful, because it reduces the chance of a bottleneck and maximizes the flow of information. Making it work effectively requires a product owner with a strong network and good organizational knowledge. The most significant potential downside of the facilitator interpretation is that, without a strong central vision of what is being delivered, the lack of a unifying interpretation of incoming information can cause team members to strike off in wrong directions. A strong project or product vision is crucial to keeping everyone on the same page without the product owner having to act as a hall monitor.

One of the important aspects of the product owner role is to guide the vision of what is being delivered. Information flow is integral to that vision. Having a well-crafted and shared vision allows the product owner to decide whether they have to collect and parse each bit of information the team needs, or whether they can introduce others and let information flow more freely. Both ends of the continuum can be effective depending on the team and organizational culture. (We will discuss the product owner's role in shaping culture next.)


Categories: Process Management

Never trust a passing test

Actively Lazy - Tue, 01/24/2017 - 23:23

One of the lessons when practising TDD is to never trust a passing test. If you haven’t seen the test fail, are you sure it can fail?

Red Green Refactor

Getting used to the red-green-refactor cycle can be difficult. It’s very natural for a developer new to TDD to immediately jump into writing the production code. Even if you’ve written the test first, the natural instinct is to keep typing until the production code is finished, too. But running the test is vital: if you don’t see the test fail, how do you know the test is valid? If you only see it pass, is it passing because of your changes or for some other reason?

For example, maybe the test itself is not correct. A mistake in the test setup could mean we’re actually exercising a different branch, one that has already been implemented. In this case, the test would already pass without writing new code. Only by running the test and seeing it unexpectedly pass, can we know the test itself is wrong.

Or alternatively there could be an error in the assertions. Ever written assertTrue() instead of assertFalse() by mistake? These kinds of logic errors in tests are very easy to make, and the easiest way to defend against them is to ensure the test fails before you try to make it pass.
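To make that failure mode concrete, here is a minimal sketch in Go (the package, function, and test names are invented for illustration): the intent for this cycle is that Validate should reject an empty name, but the assertion is accidentally inverted, so the test already passes against the unmodified production code.

package validation

import "testing"

// Existing production code, assumed for this sketch: it does not yet
// reject empty names. That is the behaviour this TDD cycle should add.
func Validate(name string) error {
	return nil
}

// The intent is "Validate returns an error for an empty name", but the
// condition is inverted; it should read `if err == nil`. Because Validate
// still returns nil, this test goes green without any production change,
// which is exactly why you must watch a new test fail first.
func TestValidateRejectsEmptyName(t *testing.T) {
	if err := Validate(""); err != nil {
		t.Errorf("Validate(\"\") returned %v", err)
	}
}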

Failing for the Right Reason

It’s not enough to see a test fail. This is another common beginner mistake with TDD: run the test, see a red bar, jump into writing production code. But is the test failing for the right reason? Or is the test failing because there’s an error in the test setup? For example, a NullReferenceException may not be a valid failure – it may suggest that you need to enhance the test setup, maybe there’s a missing collaborator. However, if you currently have a function returning null and your intention with this increment is to not return null, then maybe a NullReferenceException is a perfectly valid failure.

This is why determining whether a test is failing for the right reason can be hard: it depends on the production code change you're intending to make. That judgement requires not only knowledge of the code but also enough experience of doing TDD to have an instinct for the kind of change you intend to make in each cycle.
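To illustrate the difference, here is another Go sketch with invented names, in which a nil-pointer panic plays the role of the NullReferenceException mentioned above: the first test fails because its setup is incomplete, while the second fails on the behaviour we actually intend to add (a 10% bulk discount that has not been written yet).

package cart

import "testing"

// Assumed production code for this sketch. Total sums item prices but has
// no bulk-discount logic yet; adding that discount is the intended increment.
type PriceList struct{ prices map[string]int }

func (p *PriceList) PriceOf(item string) int { return p.prices[item] }

type Cart struct{ prices *PriceList }

func (c *Cart) Total(items []string) int {
	total := 0
	for _, item := range items {
		total += c.prices.PriceOf(item)
	}
	return total
}

// Fails for the wrong reason: no PriceList was supplied, so the test dies
// with a nil pointer dereference before the discount behaviour is exercised.
// The fix is to enhance the test setup, not the production code.
func TestDiscountWithMissingSetup(t *testing.T) {
	c := &Cart{}
	if got := c.Total([]string{"apple", "apple", "apple"}); got != 27 {
		t.Errorf("got %d, want 27", got)
	}
}

// Fails for the right reason: the setup is complete and the red bar comes
// from the assertion, because the bulk discount has not been implemented yet.
func TestDiscountWithCompleteSetup(t *testing.T) {
	c := &Cart{prices: &PriceList{prices: map[string]int{"apple": 10}}}
	if got := c.Total([]string{"apple", "apple", "apple"}); got != 27 {
		t.Errorf("got %d, want 27", got)
	}
}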

When Good Tests Go Bad

A tragically common occurrence is that we see the test fail, we write the production code, the test still fails. We’re pretty sure the production code is right. But we were pretty sure the test was right, too. Eventually we realise the test was wrong. What to do now? The obvious thing is to go fix the test. Woohoo! A green bar. Commit. Push.

But wait, did we just trust a passing test? After changing the test, we never actually saw the test fail. At this point, it’s vital to undo your production code changes and re-run the test. Either git stash them or comment them out. Make sure you run the modified test against the unmodified production code: that way you know the test can fail. If the test still passes, your test is still wrong.

TDD done well is a highly disciplined process. This can be hard for developers just learning it to appreciate. You’ll only internalise these steps once you’ve seen why they are vital (and not just read about it on the internets). And only by regularly practising TDD will this discipline become second nature.


Categories: Programming, Testing & QA

SE-Radio Episode 280: Gerald Weinberg on Bugs Errors and Software Quality

Marcus Blankenship talks with Gerald Weinberg about software errors, the fallacy of perfection, how languages and process can reduce errors, and the attitude great programmers have about their work.  Gerald’s new book, Errors: Bugs, Boo-boos, and Blunders, focuses on why programmers make errors, how teams can improve their software, and how management should think of […]
Categories: Programming

Don't use ReactiveUI

Eric.Weblog() - Eric Sink - Tue, 01/24/2017 - 19:00
TL;DR

This blog post says the opposite of its lazy and deliberately provocative title. I have become a huge fan of ReactiveUI. I just want to ramble about the path I took to get here.

Listening to Paul Betts

I first heard about ReactiveUI at a conference presentation by Paul Betts. I think it was at Xamarin Evolve. Mostly I remember feeling dumb. He said a lot of things that I didn't understand.

I went to that session without much real experience in Model-View-ViewModel (MVVM) development. Conceptually, I understood the idea of a ViewModel. But Paul mostly talked about how ReactiveUI avoids certain problems. And since I had not experienced those problems, his words didn't sink in.

Talking to teenagers about risk

Each time one of my kids was approaching adolescence, I sat down and explained the risks associated with certain choices. Laws and moral judgements aside, the simple fact is that many choices involve risks, and I thought it would be helpful to pass along that bit of information.

And in each case, my child said, "Thanks Dad", and proceeded to always make wise and low-risk choices from that point on.

Well, actually, no.

Teenagers simply do not learn that way. They process risk very differently from people who are more mature. Tell a 16-year-old that "if you drive too fast you might get a ticket". The adolescent will immediately begin driving too fast and, in all likelihood, will not get a ticket. This is how teenagers realize they are smarter than their parents.

Tangent #1: It is almost certainly a good thing that young people are more brave. It would be Very Bad if everybody started out with the same level of risk aversion as the average 65-year-old. Go watch the "Tapestry" episode of Star Trek TNG.

Tangent #2: I really should claim no expertise in parenting, but if somebody forced me to write a book on parenting a teenager, I would say this: Let your kid suffer from their own choices. That said, it is worth the effort to try and help them avoid the really bad mistakes, the ones with consequences that last for decades. But they do have to learn to make their own choices. Realize this as early as you can. The path to frustration starts with making everything all about you.

How we learn new technologies

My metaphor has many problems. For starters, Paul Betts is not my Dad.

Also, the element of adolescent rebellion was not present. I didn't hear Paul's wisdom and run in the opposite direction because of my deep need to separate my identity from his. In fact, I started devouring everything I could find on MVVM and IObservable. I really wanted to understand what he was saying.

But the metaphor works in one significant way: Like a teenager, I had to learn by doing. Nobody's words made much of a difference. None of that reading helped me become a user of ReactiveUI. I went down another path.

Actually, I went down several other paths.

Maybe it's just me

I observe that most developers want content that explains how to get something done. "If your objective is to do X, then do the following steps." The most popular books and articles tend to follow this pattern. Questions of this form are the ones that do well on StackOverflow.

But this is almost never what I want.

I much prefer content that explains how things work. Once I understand that, I can figure out the steps myself.

When I am developing software, I always, ALWAYS do better when I understand what is going on "under the hood", when I can see through the leaky abstractions.

And as I mentioned, I am apparently in the very small minority on this. If 90% of the world disagrees with me, does that put me in the top 10%? Or does it mean my approach is somehow defective? Modesty aside, my history contains enough successes to allow me some confidence in believing that my approach is better.

I also observe that my approach is just a different spelling for the old adage, "Give a man a fish and you feed him for a day. Teach a man to fish and he eats for a lifetime."

If you tell a software developer what to type and where to click, you can help them complete today's task. But if you instead teach them how things work, they will be able to apply that understanding on other days too.

Hmmm. I'm talking myself into this. I don't know why most people prefer shallow recipes, but I really do think deep understanding is better.

Still, I like to stay open-minded about things. I've got a lot of failures too.

The truth is that my approach has tradeoffs. The need to understand everything tends to slow me down during the early stages. I usually gain some of that back in the fourth quarter of the game, where deeper understanding is often helpful in diagnosing tricky problems.

But again, in the decision making around software development, absolutes are rare. I'll admit that sometimes a simple set of steps without depth is exactly what is needed.

Maybe the ReactiveUI docs are just bad?

I don't know. Maybe. I've read the docs plenty. They don't seem bad to me. I also see nothing there that makes me want to defend them as the best docs ever.

Suppose that I regret not choosing ReactiveUI sooner. Further suppose that I wanted to blame somebody else for my choices. I guess I could find something to complain about. But I also don't tend to find that criticizing somebody else's work is helpful.

And remember, I started this journey sitting in front of an audience, listening to Paul Betts, and feeling dumb. To be clear, in that kind of context, I *like* feeling dumb. It's an opportunity to learn.

So why did I not choose ReactiveUI sooner?

I guess I don't really know. But I'm pretty sure that nothing has made me appreciate ReactiveUI more than the suffering that comes from not using it.

And that remark isn't very helpful, is it? I'd like to try and do better. Let's see...

"Son, it's just basic statistics. If you're going to always drive 15 MPH over the speed limit, you will eventually get caught. Suppose you roll the dice 20 times in a row without getting a 12. You still might get a 12 on the next roll, right?"

Oh, wait, wrong topic. Let me try again.

Why is ReactiveUI awesome?

In some software development situations, like mobile apps, if you take a step back and look at the forest instead of the trees, you will see that most of your code is reacting to something that changed.

There are lots of tools you can use to approach this kind of app. You can use C# events and callbacks and switch statements and delegates and lambdas and observables and notifications and bindings and more.

For simple apps, none of these approaches are much better than any other. But as your app gets more complicated, some approaches cope more gracefully than others.

Most cars drive pretty smooth at 30 MPH. But at 75 MPH, some vehicles are still giving a smooth ride, while others are shaking.

Let's try a conceptual example or two. Suppose you have a button, and you want something to happen when the user presses that button. This is pretty simple. All reasonable solutions to this problem are about the same.

On the other hand, let's say you have a list of items. The items in that list come from a SQL query. That query has 4 inputs, each of which comes from a UI control. Every time one of those controls changes its value, the query needs to be re-run and the list needs to be updated. A couple of those controls need to be disabled under certain circumstances.

These UI elements have a complicated relationship. We still have plenty of choices in how to express that relationship in code, but this situation is complicated enough that we start to see differences between those approaches. Some of the ones that worked out really well in the simple case seem kinda tedious for this case.

If my driveway has half an inch of snow, all methods of clearing it are about the same. But if my driveway has 15 inches of snow, a shovel is decidedly inferior to a tractor.

Why do I like ReactiveUI? Because I have found that it copes gracefully as the situation gets more complicated.

Why is this? Much of the credit goes to the "reactive" foundation on which ReactiveUI is built. Reactive Extensions. Rx. IObservable. These building blocks are particularly adept at expressing the relationship between a group of things that are changing. ReactiveUI adds another layer (or two) on top of these things to make that expressiveness more convenient when implementing user interfaces.

To be honest, I fudged a little bit when I said that all solutions are roughly equivalent when the problem is simple. That's not quite true. For simple situations, I'd have to admit that ReactiveUI might be a little worse. There is a learning curve.

If I am writing a grocery list, I could use a word processor, but a pencil and paper is actually simpler. But if I am writing a novel, the word processor is the clear winner.

I'm claiming that the effort to learn Rx and ReactiveUI is worth the trouble. My claim is based on this notion that ReactiveUI shines as complexity increases, but also on my belief that most people underestimate the complexity of their app.

If you disagreed with me above when I said that "most of your code is reacting to something that changed", you might be underestimating the complexity of your app. It is in fact very common to start implementing under the assumption that something will not change and then later realize that you need notifications or events. Or an observable.

Hmmmm.

Would the paragraphs above have changed my course earlier?

I don't know. Probably not.

I didn't start this believing that I could write the best ReactiveUI advocacy ever. Looking at it now, I can't believe I wrote it with no code in it. The canonical ReactiveUI evangelism pamphlet has gotta have WhenAnyValue() in it somewhere.

I just think it's interesting that despite my best efforts, I was unable to really understand the benefits of ReactiveUI until I tried using its alternatives. My current project is late. If I had chosen ReactiveUI earlier, maybe it would be, er, less late? There are questions here worth asking.

But am I 100% certain that it is always better to spare yourself the learning experience of using less effective approaches? No.

Can I credibly claim that everyone should choose ReactiveUI in every situation? Certainly not.

Maybe all I can say is that I am currently having a great experience with ReactiveUI.

Maybe that means the rest of this blog post is useless.

But you should have known that when you saw the cheesy title.

 

Software Development Linkopedia January 2017

From the Editor of Methods & Tools - Tue, 01/24/2017 - 07:54
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about Test-Driven Development (TDD), big meetings, product ownership, UX stories, cognitive bias, ScrumMaster job description, test management in JIRA, #NoProjects, team performance and software testing culture. Blog: TDD and the “6 […]

Mindset: The New Psychology of Success, Reviews, Carol S. Dweck, Ph.D.: Re-Read Week 2, Chapter 1 Mindsets

Mindset Book Cover

This week we begin to get into the nitty-gritty of the re-read of Carol Dweck's Mindset: The New Psychology of Success. Today we are reflecting on Chapter 1, Mindsets, from the 2008 Ballantine Books trade paperback edition of the book. First, we will summarize the chapter and then examine the concepts from the point of view of an Agile coach.

Chapter 1 - Mindsets

Summary:

Dweck’s research has identified two different mindsets. The two mindsets are called the fixed mindset and the growth mindset. Dweck defined the two mindsets through her research into why people succeed and fail.

People with a fixed mindset believe that their qualities and attributes are carved in stone. For example, someone with a fixed mindset believes that they are as smart or as innovative as they will ever be. The belief that basic human attributes are fixed pushes people with a fixed mindset into an urgent need to prove those capabilities to themselves and others, over and over. For people with a fixed mindset, anything that could cause them to fail, such as a challenge, threatens their perception of their capabilities and value. For people with a fixed mindset, proving their capability is a top priority.

People with a growth mindset believe that their capabilities and attributes are what they are right now, and that they can change and grow through application and experience. Dweck suggests that capabilities and attributes are not boundless, but rather that a person with a growth mindset does not know where the upper boundary is. People with a growth mindset recognize that their boundaries are unknown, which generates a passion for learning. Seeking and accepting challenges are a reflection of that passion for learning and for pushing the boundaries of their capabilities. Development is a priority for those with a growth mindset.

Dweck uses the attribute of self-insight to delineate the difference between those with fixed and growth mindsets. People with a fixed mindset are apt to misestimate and misreport both their performance and their ability. Those with a growth mindset are more accurate in perceiving their strengths and weaknesses. When they fail, those with a growth mindset use the failure as a learning experience. The propensity of those with a fixed mindset to explain away negative outcomes and to magnify their successes reduces any possibility of change generated from self-insight.

The last paragraph of the chapter establishes the central premise of the book: How can a mindset establish boundaries that cause a person to either love or avoid a challenge, or to believe that effort and resilience generate growth rather than reduce capabilities?

The chapter concludes with an exercise to help readers understand whether they have a fixed or a growth mindset. The two sets of questions in the exercise focus first on intelligence and then on personal qualities. Take the exercise and share the results in the comments for Chapter 1. Ask a friend or colleague to answer the questions, then use the results to validate the attributes of the fixed and growth mindsets identified in Chapter 1. Note: remember that in the introduction we stated that mindsets are not fixed.

Chapter 1 from a coach’s perspective.

Coaches are often involved in transforming organizations and/or facilitating teams and individuals. Using the construct of mindsets during a transformation is useful. Coaches often classify stakeholders by mindset as an exercise when defining a change management plan. The approach to change will be different for each type of mindset: stakeholders with a growth mindset will accept challenges, while those with a fixed mindset will respond better to appeals that they perceive as safe and that make them look successful. These are two very different messages.

Using mindsets when coaching a team or an individual gives a coach a powerful starting point for predicting how individuals will act. For example, as a manager and leader over the years I have been amazed by employees who would make no effort on their own to learn something new. They focused only on what they were good at today and would fight tooth and nail against any change that impacted their specialty area. They had a fixed mindset. These people were great at what they did, but they could not be counted on to stretch. Almost all teams are some mixture of fixed- and growth-minded members; knowing where team members fall in the mindset dichotomy can be useful when a coach provides advice on who should explore new ideas and who should focus on more repetitive tasks. In a similar vein, a leader who knows that a team is populated with one or the other mindset can steer the right kind of work to the team based on its mindset.

Previous Entries of Mindset:

 


Categories: Process Management

Android Instant Apps starts initial live testing

Android Developers Blog - Mon, 01/23/2017 - 19:48
Posted by Aurash Mahbod, Software Engineer, Google Play

Android Instant Apps was previewed at Google I/O last year as a new way to run Android apps without requiring installation. Instant Apps is an important part of our effort to help users discover and run apps with minimal friction.

We’ve been working with a small number of developers to refine the user and developer experiences. Today, a few of these Instant Apps will be available to Android users for the first time in a limited test, including apps from BuzzFeed, Wish, Periscope, and Viki. By collecting user feedback and iterating on the product, we’ll be able to expand the experience to more apps and more users.

To develop an instant app, you'll need to update your existing Android app to take advantage of Instant Apps functionality and then modularize your app so part of it can be downloaded and run on-the-fly. You'll use the same Android APIs and Android Studio project. Today, you can also take some important steps to be ready for Instant Apps development. The full SDK will be available in the coming months.

There has already been a tremendous amount of interest in Instant Apps from thousands of developers. We can’t wait to hear your feedback and share more awesome experiences later this year. Stay tuned!

Categories: Programming

Seeking a New Host for SE Radio

We have an opening for a volunteer host to produce five episodes per year. Please contact the show editor Robert Blumen for details.
Categories: Programming

What’s Minimum: Thinking About Minimum Viable Experiments

When I talk about Minimum Viable Products or Minimum Viable Experiments, people often tell me that their minimum is several weeks (or months) long. They can’t possibly release anything without doing a ton of work.

I ask them questions, to see if they are talking about a Minimum Indispensable Feature Set or a Minimum Adoptable Feature Set instead of an MVE or an MVP. Often, they are.

Yes, it’s possible you need a number of stories to create an entire feature set before you release it to your entire customer base. And, that’s not an MVP or an MVE.

Do you know about Eric Ries’ Build-Measure-Learn loop? Or the Cynefin idea of small, safe-to-fail experiments? Here’s the thinking behind both of those ideas:

  • You have ideas you could implement in your product. If you are like my clients, you have more ideas than you could ever implement. This is a good thing!
  • You Build an idea for one product.
  • You Measure the result with data.
  • You Learn from that data to generate/reduce/change your ideas.
  • Do it again until you’ve learned enough.

When I think about the Build-Measure-Learn loop and apply it to the idea of a Minimum Viable Experiment, I often discover these possibilities:

  1. We have an MVE now. We need to define how to measure it and use the data.
  2. We don’t have to do much to collect some data.
  3. We can ask this question: What do we want to know and why? What is the benefit of gathering that data?

Here’s an example of how this affected a recent client. They have an embedded system. They thought that if the embedded part booted faster, they could find more applications for the system. In embedded software, speed is often a factor.

They chose one client, who had systems now. The Product Manager visited the client and asked about other vertical applications within the organization. Did they have a need for something like this system?

Yes, they did. They were concerned about speed, not just boot speed, but application processing speed.

The Product Manager asked if they were willing to be part of an MVE. He explained that the team would watch how they implemented and used the embedded system. Yes, they would all sign non-disclosures. The client also had to know that the team might never actually implement the experiment for real.

The customer agreed. The team implemented four very small performance enhancements—only through the happy paths, no alternative/error paths—and visited the customer to see what would happen. It took the team three days to do this.

The team visited for one day to watch how the client’s engineers used the product.

They were astounded. Boot speed was irrelevant. One specific path through the processing was highly relevant. The other three were irrelevant for this specific customer.

This particular MVE was a little on the expensive side. It took a team-week to develop, measure, and learn. There was some paperwork that both sides had to manage. If you have a different kind of product, it might take you less time.

And, look how inexpensive that week was. That week taught the team what one vertical product line needed and didn’t need. They managed to avoid all those “necessary” features. This client didn’t need them. It turned out a different kind of vertical needed two of them, and no one appears to need the remaining one.

The Product Manager was able to prune many of the ideas in the backlog for this vertical market. The Product Owner knew which features were more important (and knew why), and how to write stories and rank them.

That’s one example of an MVE. Your experiments will probably look different. What’s key here is this question: What is the smallest thing you can measure that will provide you value so you can make a decision for the product?

Categories: Project Management

Master-Master Replication and Scaling of an Application between Each of the IoT Devices and the Cloud

In this article, I want to share with you how I solved a very interesting problem of synchronizing data between IoT devices and a cloud application.

I’ll start by outlining the general idea and the goals of my project. Then I’ll describe my implementation in greater detail. This is going to be a more technically advanced part, where I’ll be talking about the Contiki OS, databases, protocols and the like. In the end, I’ll summarize the technologies I used to implement the whole system.

Project overview

So, let’s talk about the general idea first.

Here’s a scheme illustrating the final state of the whole system:

I have a user who can connect to IoT devices via a cloud service or directly (that is over Wi-Fi).

Also, I have an application server somewhere in the cloud and the cloud itself somewhere on the Internet. This cloud can be anything — for example, an AWS or Azure instance or it could be a dedicated server, it could be anything :)

The application server is connected to IoT devices over some protocol. I need this connection to exchange data between the application server and the IoT devices.

The IoT devices are connected to each other in some way (say, over Ethernet or Wi-Fi).

Also, I have more IoT devices generating some telemetry data, like light or temperature readings. There can be more than 100 and even over 1,000 devices.

Basically, my goal was to make it possible to exchange data between the cloud and these IoT devices.

Before I proceed, let me outline some requirements for my system:

Categories: Architecture

SPaMCAST 427 – Onward to Post-Agile Age, Product Owner in Testing, Requirements and Configuration Management

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 427 begins with an essay on the Post-Agile Age, titled Onward to Post-Agile Age.  The Post-Agile Age is coming and it is a bed that human nature and commercial pressures have created.

Next, Jeremy Berriault brings his QA Corner to the Cast to discuss how he views the role of the product owner in Agile testing. Visit Jeremy's new blog at https://jberria.wordpress.com/

The Software Sensei, Kim Pries, discusses requirements and weird tools like the Z notation.  Reach out to Kim on LinkedIn.

Jon M Quigley brings his column, the Alpha and Omega of Product Development, to the cast. In this installment, Jon concludes a three-part series on configuration management. This week Jon puts all of the pieces together. One of the places you can find Jon is at Value Transformation LLC.

Re-Read Saturday News

This week we start to get into the nitty-gritty of our re-read of Carol Dweck's Mindset: The New Psychology of Success. This week we discuss Chapter 1 and then explore some of the applications of the mindset concepts to coaching.

Remember to buy a copy of  Carol Dweck’s Mindset and read along!

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 428 features an interview with Dr Mark Bojeun.  We discussed the concept of project visions, their use and why they make sense in the Agile or Post-Agile age!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sun, 01/22/2017 - 06:31

It is the first responsibility of every citizen to question authority ― Benjamin Franklin

Think of this when you hear someone say protesting is a waste of time, and tell you to just be quiet and accept the situation you are in.

Categories: Project Management

Happy 10th Birthday Google Testing Blog!

Google Testing Blog - Sat, 01/21/2017 - 22:57
by Anthony Vallone

Ten years ago today, the first Google Testing Blog article was posted (official announcement 2 days later). Over the years, Google engineers have used this blog to help advance the test engineering discipline. We have shared information about our testing technologies, strategies, and theories; discussed what code quality really means; described how our teams are organized for optimal productivity; announced new tooling; and invited readers to speak at and attend the annual Google Test Automation Conference.

Google Testing Blog banner in 2007

The blog has enjoyed excellent readership. There have been over 10 million page views of the blog since it was created, and there are currently about 100 to 200 thousand views per month.

This blog is made possible by many Google engineers who have volunteered time to author and review content on a regular basis in the interest of sharing. Thank you to all the contributors and our readers!

Please leave a comment if you have a story to share about how this blog has helped you.

Categories: Testing & QA

Go vs Python: Parsing a JSON response from a HTTP API

Mark Needham - Sat, 01/21/2017 - 11:49

As part of a recommendations with Neo4j talk that I’ve presented a few times over the last year I have a set of scripts that download some data from the meetup.com API.

They’re all written in Python but I thought it’d be a fun exercise to see what they’d look like in Go. My eventual goal is to try and parallelise the API calls.

This is the Python version of the script:

import requests
import os
import json

key =  os.environ['MEETUP_API_KEY']
lat = "51.5072"
lon = "0.1275"

seed_topic = "nosql"
uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(seed_topic, lat, lon, key)

r = requests.get(uri)
all_topics = [topic["urlkey"]  for result in r.json()["results"] for topic in result["topics"]]

for topic in all_topics:
    print topic

We’re using the requests library to send a request to the meetup API to get the groups which have the topic ‘nosql’ in the London area. We then parse the response and print out the topics.

Now to do the same thing in Go! The first bit of the script is almost identical:

import (
	"fmt"
	"os"
	"net/http"
	"log"
	"time"
)

func handleError(err error) {
	if err != nil {
		fmt.Println(err)
		log.Fatal(err)
	}
}

func main() {
	var httpClient = &http.Client{Timeout: 10 * time.Second}

	seedTopic := "nosql"
	lat := "51.5072"
	lon := "0.1275"
	key := os.Getenv("MEETUP_API_KEY")

	uri := fmt.Sprintf("https://api.meetup.com/2/groups?&topic=%s&lat=%s&lon=%s&key=%s", seedTopic, lat, lon, key)

	response, err := httpClient.Get(uri)
	handleError(err)
	defer response.Body.Close()
	fmt.Println(response)
}

If we run that this is the output we see:

$ go run cmd/blog/main.go

&{200 OK 200 HTTP/2.0 2 0 map[X-Meetup-Request-Id:[2d3be3c7-a393-4127-b7aa-076f150499e6] X-Ratelimit-Reset:[10] Cf-Ray:[324093a73f1135d2-LHR] X-Oauth-Scopes:[basic] Etag:["35a941c5ea3df9df4204d8a4a2d60150"] Server:[cloudflare-nginx] Set-Cookie:[__cfduid=d54db475299a62af4bb963039787e2e3d1484894864; expires=Sat, 20-Jan-18 06:47:44 GMT; path=/; domain=.meetup.com; HttpOnly] X-Meetup-Server:[api7] X-Ratelimit-Limit:[30] X-Ratelimit-Remaining:[29] X-Accepted-Oauth-Scopes:[basic] Vary:[Accept-Encoding,User-Agent,Accept-Language] Date:[Fri, 20 Jan 2017 06:47:45 GMT] Content-Type:[application/json;charset=utf-8]] 0xc420442260 -1 [] false true map[] 0xc4200d01e0 0xc4202b2420}

So far so good. Now we need to parse the response that comes back.

Most of the examples that I came across suggest creating a struct with all the fields that you want to extract from the JSON document, but that feels a bit like overkill for such a simple script.
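For comparison, here is a rough sketch of what that struct-based approach might look like, declaring only the fields this script actually touches (the JSON keys results, topics, and urlkey are the ones used below; any other fields in the response would simply be ignored):

type topic struct {
	URLKey string `json:"urlkey"`
}

type group struct {
	Topics []topic `json:"topics"`
}

type groupsResponse struct {
	Results []group `json:"results"`
}

var target groupsResponse
decoder := json.NewDecoder(response.Body)
decoder.Decode(&target)

for _, g := range target.Results {
	for _, t := range g.Topics {
		fmt.Println(t.URLKey)
	}
}

The payoff is compile-time field names instead of type assertions; the cost is declaring three types for a script that only prints one field.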

Instead we can just create maps of (string -> interface{}) and then apply type conversions where appropriate. I ended up with the following code to extract the topics:

import "encoding/json"

var target map[string]interface{}
decoder := json.NewDecoder(response.Body)
decoder.Decode(&target)

for _, rawGroup := range target["results"].([]interface{}) {
    group := rawGroup.(map[string]interface{})
    for _, rawTopic := range group["topics"].([]interface{}) {
        topic := rawTopic.(map[string]interface{})
        fmt.Println(topic["urlkey"])
    }
}

It’s more verbose than the Python version because we have to explicitly type each thing we take out of the map at every stage, but it’s not too bad. This is the full script:

package main

import (
	"fmt"
	"os"
	"net/http"
	"log"
	"time"
	"encoding/json"
)

func handleError(err error) {
	if err != nil {
		fmt.Println(err)
		log.Fatal(err)
	}
}

func main() {
	var httpClient = &http.Client{Timeout: 10 * time.Second}

	seedTopic := "nosql"
	lat := "51.5072"
	lon := "0.1275"
	key := os.Getenv("MEETUP_API_KEY")

	uri := fmt.Sprintf("https://api.meetup.com/2/groups?&topic=%s&lat=%s&lon=%s&key=%s", seedTopic, lat, lon, key)

	response, err := httpClient.Get(uri)
	handleError(err)
	defer response.Body.Close()

	var target map[string]interface{}
	decoder := json.NewDecoder(response.Body)
	decoder.Decode(&target)

	for _, rawGroup := range target["results"].([]interface{}) {
		group := rawGroup.(map[string]interface{})
		for _, rawTopic := range group["topics"].([]interface{}) {
			topic := rawTopic.(map[string]interface{})
			fmt.Println(topic["urlkey"])
		}
	}
}

Once I’ve got these topics the next step is to make more API calls to get the groups for those topics.

I want to make those API calls in parallel while making sure I don’t exceed the rate limit restrictions on the API and I think I can make use of go routines, channels, and timers to do that. But that’s for another post!
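To make that plan a little more concrete, here is a rough sketch of my own (not code from the promised follow-up post; fetchGroupsForTopic is a placeholder for the real per-topic request): a ticker paces the requests to stay inside the rate limit, goroutines make the calls, and a buffered channel plus a WaitGroup collect the results.

import (
	"sync"
	"time"
)

// Placeholder for the real HTTP call that fetches groups for one topic.
func fetchGroupsForTopic(topic string) string {
	return "groups for " + topic
}

func fetchAllGroups(topics []string) []string {
	limiter := time.Tick(500 * time.Millisecond) // pace requests: roughly 2 per second
	results := make(chan string, len(topics))
	var wg sync.WaitGroup

	for _, topic := range topics {
		<-limiter // wait for the next tick before launching each request
		wg.Add(1)
		go func(t string) {
			defer wg.Done()
			results <- fetchGroupsForTopic(t)
		}(topic)
	}

	wg.Wait()
	close(results)

	var all []string
	for r := range results {
		all = append(all, r)
	}
	return all
}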

The post Go vs Python: Parsing a JSON response from a HTTP API appeared first on Mark Needham.

Categories: Programming

Southeast Asian indie game developers find success on Google Play

Android Developers Blog - Fri, 01/20/2017 - 18:46
Posted by Vineet Tanwar, Business Development Manager, Google Play

Indie game developers bring high quality, artistic, and innovative content to Google Play and raise the bar for all developers in the process. In fact, they also make up a large portion of our 'Editor's Choice' recommended titles.
Southeast Asia, in particular, has a vibrant indie game developer ecosystem, and we've been working closely with them to provide tools that help them build successful businesses on Google Play. Today, we're sharing stories from three Indie developers based in Singapore, Vietnam, and Indonesia, who joined us at our 'Indie Game Developers Day' workshops in May 2016 and all of whom have experienced significant growth since.

Inzen Studio from Singapore learned how to use store listing experiments and has improved the conversion rate of their newly launched game Dark Dot by 25%. Indonesia-based studio Niji Games, creator of Cute Munchies, implemented 'Saved Games' and 'Events and Quests' from Google Play games services to significantly improve user retention, and also earned an 'Editor's Choice' badge in the process. Ho Chi Minh City-based developer VGames optimized monetization and introduced new paid products for their game Gungun online, and grew revenue by over 100%.


Indie game developers who are interested in meeting members of Google Play and who would like to work more closely with us are invited to join our next round of SEA workshops in March 2017. To apply for these events, just fill in this form and we will reach out to you.


Categories: Programming