
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Data Science is the Art of Asking Better Questions

I heard a colleague make a great comment today …

“Data science is the art of asking better questions.

It’s not the art of finding a solution … the data keeps evolving.”

Categories: Architecture, Programming

Billions without Buzz

Eric.Weblog() - Eric Sink - Fri, 06/27/2014 - 19:00

I've been thinking a lot lately about the distorted perspective I get when I extrapolate from my daily sources of content.

Using Twitter buzz to sip from a firehose

I currently follow 304 people on Twitter. These people are a primary means for me to hear about stuff that is (1) happening, and (2) important to me.

I rely more on echoes than voices. For example, I don't follow Satya Nadella (@satyanadella). But whenever he tweets something I would find important, I hear about it anyway, because a dozen people I do follow are talking about it.

Twitter for me is all about conversations. It's about buzz. I'm interested in what people are talking about. And I like having the option of participating in the chatter.

And the people I choose to follow pretty much cover everything important. I don't miss anything that I need to know.

Or do I?

What am I missing?

Obviously, my method is designed to exclude information. Sometimes my buzz filter works exactly as I want. I rarely see any tweets about Miley Cyrus. That is "by design".

I also like the fact that buzz is naturally slanted toward things that are new even though lots of very important things are old. (As long as I don't get confused. If I extrapolate from buzz with a black-and-white mentality, I might believe that COBOL, SQL, and Subversion are dead because everybody has switched to Java, Mongo, and Git. Older technologies have little or no buzz, but tons of people are still using them every day.)

The problem is the stuff that is new and important (to me) but has very little buzz.

The boring wave of B2B apps

I am absolutely convinced that the B2B wave of mobile apps has barely started.

As Benedict Evans (@BenedictEvans) says, mobile is eating the world. But so far, most of the action is with consumers, not businesses.

And this whole wave has been very high-buzz, mostly because everything had to get bigger, and the stuff that could not was, er, "disrupted".

Add two billion mobile devices to the world and lots of people are forced to think about scale in a whole new way:

  • Manufacturing. Foxconn is enormous. What was the biggest manufacturing facility on earth before the iPhone?

  • Venture Capital. I remember when $100M was an exit. Now it's a series A.

  • Servers. Mobile is accelerating the growth of the cloud, served by data centers that make Soldier Field look small.

  • Databases. When you have 500 million users, you can't afford unique constraints and foreign keys in the db layer anymore.

All this big-ness is causing a lot of disruption. And buzz.

Meanwhile, the majority of the corporate world is still trying to figure out what to do about mobile.

Consider this in terms of the classic marketing bell curve:

  • In the consumer world, mobile is in the conservatives, and starting to sell to the laggards.

  • In the corporate world, mobile is in the early adopters. It's pre-chasm.

Companies who still need high growth to justify their stock price (read: Apple) are distressed about the notion that everybody on the planet who can afford a smart phone already has one. Marketing research firms are still doing surveys asking corporate IT about when they are going to dive into mobile even as my Mom has two tablets.

Has there ever been a technology wave where the consumers got so far ahead of business?

I was one of those people in the early 80s who had a computer before they were prevalent in the business world. But I was a hobbyist and a nerd. In terms of volume and sheer revenue, adoption of PCs was driven by companies, not consumers. They wanted to run Lotus 1-2-3, so they led the way. It was a long time before computers solved real problems for consumers in the way that they solved real problems for business. In fact, I'd argue this didn't happen until around 1995 when the Web came along.

In mobile devices, consumers have gotten so far ahead that they have provided the client side of the infrastructure that Corporate IT will use. Going forward, it's going to be BYOD (Bring Your Own Device). Big companies are not going to buy 10,000 BlackBerries for their workforce when everybody already has a smart phone. Instead, they have to figure out how they're going to securely integrate all these different devices into their corporate systems (which is why cross-platform is getting even more important in the mobile space, which is contributing to Xamarin's pursuit of world domination). This is not the kind of policy to which Corporate IT is accustomed, and that is slowing them down even more.

But this is going to happen. A lot of enterprise apps are going to get written.

And this wave will be very low-buzz compared to the "Angry Birds and Candy Crush" wave. People won't be talking about it (and even when they do, the sound will get drowned out by the buzz over the Internet of Things wave).

Quite simply, the B2B apps wave is not interesting enough to get serious buzz:

  • It won't be as disruptive.

  • It won't re-challenge our notions of scale. (Walmart's 2.2 million employees would be a small user base by today's standards.)

  • It's going to be built on, and integrated with, technologies that are old. (Corporate IT will prefer to add mobile incrementally, with as little change to existing systems as possible.)

Compared to my first computer(s), an iPhone is a supercomputer. We are moving into a wave where companies like Procter & Gamble are going to gain operational efficiency because all their employees carry supercomputers in their pocket. And this will be considered boring.

But despite all the yawns, this wave is going to involve a lot of money. Many billions of dollars are going to move around as all these enterprise apps get written.

In other words, this is going to be boring in the same way that IBM has been boring for the last decade. This week, my Twitter feed is dominated by people talking about the Google I/O conference. Nobody is talking about IBM, even though they have almost twice as much revenue as Google. That's how buzz works.

If I want to closely follow something that doesn't have much buzz, I'm gonna have to work harder.

Let's talk about Alex Bratton

I am currently reading a book called Billion Dollar Apps, by Alex Bratton (@alexbratton), CEO of Lextech, a custom mobile app dev shop in Chicago.

My buzz filter certainly didn't find this book for me:

  • @alexbratton has even fewer Twitter followers than I do.

  • @LextechApps has fewer followers than some high school students I know.

  • Almost all of Bratton's followers are people I don't know.

  • Google searches reveal no apparent connection between Lextech and Xamarin.

  • Twitter searches suggest that Bratton and Lextech have seldom or never been mentioned by Scott Hanselman (@shanselman).

There just aren't many connections between Lextech's world and mine. In fact, it looks like maybe I am the only occupant of the overlapping portion of the Venn diagram.

Maybe Lextech just doesn't have much buzz. Or maybe Lextech has the kind of buzz that happens mostly off Twitter. Maybe the CIOs of Fortune 500 companies share rides on their private jets where they sip Macallan 25 and talk about how great Lextech is.

Whatever. I've been following Alex Bratton on Twitter largely because he and I were students at UIUC around the same time, and I sort-of vaguely remember him. Were it not for this very thin college connection, I might never have heard of him or his company or his book.

And that would be sad, because Lextech's activities land squarely in an area of interest for me. Bratton's company may be low-buzz, but they're building B2B mobile apps for banner-name companies including GE, Fidelity, John Deere, Blue Cross, and H&R Block. They're riding the front of the wave. I think that's pretty darn cool.

So I am halfway through the book, and two things are already quite clear:

  • The book is really good.

  • I find it boring.

Basically, I'm not enjoying this book because it was clearly not written for people like me. I enjoy fiction books about crime investigation and books with lots of curly braces and semicolons. Billion Dollar Apps is neither of these. It's written for CxOs at big companies who have to make big decisions about mobile technology.

But I'm forcing myself to read this book for the same reason I force myself to eat certain unappealing vegetables. I don't like the taste, but I think it's good for me.

Thinking about Lextech and its customers is forcing me to widen my perspective. It is reminding me that the boredom goes both ways:

  • Most developers don't care about the details of how John Deere is using mobile to make its business run better. They tend to focus more on the technology than on solutions to problems.

  • Probably nobody at John Deere is spending their time critiquing the design of Xamarin.Forms or figuring out best practices for PCLs. They see mobile technology as a solution to their problems.

This book is good because it speaks to its intended audience, and that group of people doesn't care about the details of why certain .NET things are incompatible with the AOT compiler in Xamarin.iOS. Billion Dollar Apps is a book that talks about sensibly applying mobile technology to make regular non-geek businesses work better.

Like most other geeks, I am prone to getting lost in technology for its own sake. But I'm not just a developer. I'm an entrepreneur. I need to keep a sense of balance.

So, I don't particularly want to read this book, but I need to read it.

Bratton's book is about the beginning of a big boring bazaar where beaucoup billions bounce bereft of buzz.

(Sorry, the forced alliteration was shameful, but I couldn't resist.)

Bottom line

I've wandered around a bit, so let me close with a summary of key points you might take away from this blog entry:

  • Consider reading Alex Bratton's book. If you are a CxO of a Fortune 500 company (and you probably are not, because you're reading my blog), you'll probably like it. If you are a developer (and you probably are, because you're reading my blog), you probably won't like it any more than I do. Eat your Brussels sprouts.

  • Widening your perspective is always good advice. If you're like me, there's a really good chance that your daily information flow has boundaries. Those boundaries make you efficient, but they also constantly protect you from seeing the perspective of people who are not like you. Look outside your boundaries.

  • If you are interested in the boring B2B wave of mobile apps, so am I. Maybe we should talk. Maybe I should be following you on Twitter. Maybe we should see if we can create a little buzz.

 

Don't be an Accidental Project Manager

Herding Cats - Glen Alleman - Fri, 06/27/2014 - 18:44

A common problem in our development of the Program Management Office is getting caught up in putting out fires, what Covey calls the “addiction of the urgent.” In the process we lose the big-picture perspective. This note is about the big-picture view of the project management process as it pertains to our collection of projects. These are very rudimentary principles, but they are important to keep in mind.

5 Basic Principles

1. Be conscious of what you're doing; don't be an accidental manager. Learn PM theory and practice. Realize you don't often have direct control. Focus on being a professional, and on the PM's mantra:

"I am a project professional. I work on projects. Projects are undertakings that are goal-oriented, complex, finite, and unique. They pass through a life cycle, which begins with project selection and ends with project termination."

2. Invest in front-end work; get it right the first time. We often leap before we look, over-focusing on results-oriented processes and on simple (often simple-minded) platitudes about project management and the technical processes, while ignoring basic steps. Trailblazers often achieve breakthroughs, but projects need forethought. Projects are complex, and the planning, structure, and time spent with stakeholders are required for success. Doing things right takes time and effort, but this time and effort is much cheaper than rework.

3. Anticipate the problems that will inevitably arise. Most problems are predictable. Well-known examples are:

  • Little direct control over staff, little staff commitment to the project.
  • Staff workers are not precisely what we want or need.
  • Functional managers have different goals, and these will suboptimize the project.
  • Variances to schedule and budget will occur, and customer needs will shift.
  • Project requirements will be misinterpreted.
  • Overplanning and overcontrol are as bad as underplanning and weak control.
  • There are hidden agendas, and these are probably more important than the stated one.

4. Go beneath surface illusions; dig deep to find the real situation. Don't accept things at face value. Don't treat the symptom; treat the root cause, and the symptoms will be corrected. Our customers usually understand their own needs, but further probing will bring out new needs. Robert Block suggests a series of steps [1]:

  • Identify all the players, in particular those who can impact project outcome.
  • Determine the goals of each player and organization, focusing on hidden goals.
  • Assess your own situation and start to define the problems.

5. Be as flexible as possible; don't get sucked into unnecessary rigidity and formality. Project management is the reverse of the second law of thermodynamics: we're trying to create order out of chaos. But in this effort:

  • More formal structure & bureaucracy doesn't necessarily reduce chaos.
  • We need flexibility to bend but not break to deal with surprises, especially with the intangibles of our information-technology projects.
  • The goal is to have both order and flexibility at the same time.
  • Heavy formality is appropriate on large budget or low-risk projects with lots of communication expense and few surprises. Information-age projects have a low need for this because they deal more with information and intangibles, and have a high degree of uncertainty.

[1] The Politics of Projects, Robert Block, Yourdon Press, 1983.

Categories: Project Management

Operationalism

Herding Cats - Glen Alleman - Fri, 06/27/2014 - 16:11

When we hear about a process, a technique, or a tool, we should ask in what unit of measure the beneficial outcome of applying it will be assessed.

This idea started with P. W. Bridgman's principle that the meaning of any concept is in its measurement or other test. It was put forth in the 1930s, when Bridgman made a famous, useful, and very operational statement, usually remembered as:

The scientific method is doing your damnedest, no holds barred. †

Developing software is not a scientific process, even though computer science is a university-level discipline in which probability and statistics are taught (see the IEEE/ACM Computer Science Education Curricula).

When we want to make choices about a future outcome, we can apply statistical thinking, using the mathematics found in scientific discussions, to cost, schedule, and performance (C, S, and P, all of which are random variables).

These decisions are based on the probabilistic and statistical behavior of the underlying processes that create the alternatives for our decisions. Should we spend $X on a system that will return $Y value? Since both X and Y are random variables - they are in the future - our decision-making process needs to estimate the behaviour of these random variables and determine the impact on our outcomes.
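To make that concrete, here is a minimal simulation sketch (mine, not from the post) of estimating the behaviour of those two random variables; the distributions and dollar figures are illustrative assumptions only.

```java
import java.util.Random;

public class SpendDecision {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int trials = 100_000;

        // Illustrative assumptions only: cost X ~ Normal($1.0M, $0.2M),
        // value Y ~ Normal($1.3M, $0.5M). A real analysis would use calibrated
        // distributions derived from past performance data.
        double costMean = 1_000_000, costSd = 200_000;
        double valueMean = 1_300_000, valueSd = 500_000;

        int valueExceedsCost = 0;
        double totalNet = 0;
        for (int i = 0; i < trials; i++) {
            double x = costMean + costSd * rng.nextGaussian();   // sampled cost
            double y = valueMean + valueSd * rng.nextGaussian(); // sampled value
            if (y > x) valueExceedsCost++;
            totalNet += y - x;
        }

        System.out.printf("P(value > cost) ~ %.2f%n", (double) valueExceedsCost / trials);
        System.out.printf("Expected net value ~ $%.0f%n", totalNet / trials);
    }
}
```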

Probability and Statistics

When we hear there are alternatives to making decisions about the future impacted by cost, schedule and technical performance without estimating the impact of that decision, we need to ask what those alternatives are, what their units of measure are, and where we can find them described.

For those interested, there is further reading on the topic of Decision Making in the Presence of Uncertainty.

† Reflections of a Physicist, P. W. Bridgman, pp. 535. The passage reads, "The scientific method, as far as it is a method, is nothing more than doing one's damnedest with one's mind, no holds barred."

Categories: Project Management

Neo4j: Cypher – Separation of concerns

Mark Needham - Fri, 06/27/2014 - 11:51

While preparing my talk on building Neo4j backed applications with Clojure, I realised that some of the queries I’d written were incredibly complicated and went against everything I’d learnt about separating different concerns.

One example of this was the query I used to generate the data for the following page of the meetup application I’ve been working on:

[Screenshots: the attendees and topics tabs of the event page]

Depending on the selected tab you can choose to see the people signed up for the meetup and the date that they signed up or the topics that those people are interested in.

For reference, this is an outline of the schema of the graph behind the application:

[Diagram: outline of the graph schema behind the application]

This was my initial query to get the data:

MATCH (event:Event {id: {eventId}})-[:HELD_AT]->(venue)
OPTIONAL MATCH (event)<-[:TO]-(rsvp)<-[:RSVPD]-(person)
OPTIONAL MATCH (person)-[:INTERESTED_IN]->(topic) WHERE ()-[:HAS_TOPIC]->(topic)
WITH event, venue, rsvp, person, COLLECT(topic) as topics ORDER BY rsvp.time
OPTIONAL MATCH (rsvp)<-[:NEXT]-(initial)
WITH event, venue, COLLECT({rsvp: rsvp, initial: initial, person: person, topics: topics}) AS responses
WITH event, venue,
    [response in responses WHERE response.initial is null AND response.rsvp.response = "yes"] as attendees,
    [response in responses WHERE NOT response.initial is null] as dropouts, responses
UNWIND([response in attendees | response.topics]) AS topics
UNWIND(topics) AS topic
WITH event, venue, attendees, dropouts, {id: topic.id, name:topic.name, freq:COUNT(*)} AS t
RETURN event, venue, attendees, dropouts, COLLECT(t) AS topics

The first two lines of the query work out which people have RSVP’d to a particular event, and the 3rd line captures the topics they’re interested in, as long as the topic is linked to at least one of the NoSQL London groups.

We then optionally capture their initial RSVP in case they’ve changed it before doing a bit of data manipulation to group everything together.

If we run a slight variation of that query, which only shows a few of the topics, attendees and dropouts, this is the type of result we get:

+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| event.name                               | venue.name      | [a IN attendees[0..5] | a.person.name]                                 | [d in dropouts[0..5] | d.person.name]                              | topics[0..5]                                                                                                                                                                                                                                                    |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "Building Neo4j backed web applications" | "Skills Matter" | ["Mark Needham","Alistair Jones","Jim Webber","Axel Morgner","Ramesh"] | ["Frank Gibson","Keith Hinde","Richard Mason","Ollie Glass","Tom"] | [{id -> 10538, name -> "Business Intelligence", freq -> 3},{id -> 61680, name -> "HBase", freq -> 3},{id -> 61679, name -> "Hive", freq -> 2},{id -> 193021, name -> "Graph Databases", freq -> 12},{id -> 85951, name -> "JavaScript Frameworks", freq -> 10}] |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

The problem is we’ve mixed together two different concerns – the attendees to a meetup and the topics they’re interested in – which made the query quite hard to understand when I came back to it a couple of months later.

Instead what we can do is split the query in two and make two different calls to the server. We then end up with the following:

// Get the event + attendees + dropouts
MATCH (event:Event {id: {eventId}})-[:HELD_AT]->(venue)
OPTIONAL MATCH (event)<-[:TO]-(rsvp)<-[:RSVPD]-(person)
WITH event, venue, rsvp, person ORDER BY rsvp.time
OPTIONAL MATCH (rsvp)<-[:NEXT]-(initial)
WITH event, venue, COLLECT({rsvp: rsvp, initial: initial, person: person}) AS responses
WITH event, venue,
    [response in responses WHERE response.initial is null 
                           AND response.rsvp.response = "yes"] as attendees,
    [response in responses WHERE NOT response.initial is null] as dropouts
RETURN event, venue, attendees, dropouts
// Get the topics the attendees are interested in
MATCH (event:Event {id: {eventId}})
MATCH (event)<-[:TO]-(rsvp {response: "yes"})<-[:RSVPD]-(person)-[:INTERESTED_IN]->(topic)
WHERE ()-[:HAS_TOPIC]->(topic)
RETURN topic.id AS id, topic.name AS name, COUNT(*) AS freq

The first query is still a bit complex but that’s because there’s a bit of tricky logic to distinguish people who signed up and dropped out. However, the second query is now quite easy to read and expresses its intent very clearly.
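For completeness, here is a rough sketch of issuing the two queries as two separate requests against Neo4j's transactional HTTP endpoint (the application in the post is written in Clojure; Java is used here purely for illustration). The endpoint URL, the event id value, and the shortened statements are assumptions; substitute the two full Cypher queries above.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class EventPageQueries {
    // Assumed default location of a local Neo4j 2.x transactional endpoint
    private static final String TX_URI = "http://localhost:7474/db/data/transaction/commit";

    // Sends one Cypher statement (already wrapped in the transactional JSON format)
    // in its own request and returns the raw JSON response.
    static String run(String statementJson) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(TX_URI).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(statementJson.getBytes(StandardCharsets.UTF_8));
        }
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            for (String line; (line = in.readLine()) != null; ) {
                body.append(line);
            }
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // First concern: the event, its attendees and dropouts
        // (the first, longer Cypher query above goes in "statement").
        String attendees = run("{\"statements\": [{"
                + "\"statement\": \"MATCH (event:Event {id: {eventId}})-[:HELD_AT]->(venue) RETURN event, venue\", "
                + "\"parameters\": {\"eventId\": \"1234\"}}]}");

        // Second concern: the topics those attendees are interested in
        // (the second, simpler Cypher query above goes in "statement").
        String topics = run("{\"statements\": [{"
                + "\"statement\": \"MATCH (event:Event {id: {eventId}}) RETURN event\", "
                + "\"parameters\": {\"eventId\": \"1234\"}}]}");

        System.out.println(attendees);
        System.out.println(topics);
    }
}
```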

Categories: Programming

How do I split these 40 stories?

Good Requirements - Jeffrey Davidson - Fri, 06/27/2014 - 04:15

A student from last week’s Agile Bootcamp Class taught at Harvard University (yes, that Harvard) asks,

 

Question: I have a project that involves 3 different users who want approximately 25 new fields on an application. Most of these fields are view only with the exception of 4-5 fields which allow input changes. However, user #1 wants to see all 25 fields, user #2 wants to see only 10 of the same fields, and user #3 wants to see 5 of the same fields. How should I approach writing these stories? Would I write a story for each user and each field – which could potentially be 40 stories? Or should I just combine the fields that all 3 users would want to view? For example, one field might be “name”. My user story might say, “As user #1, user #2, user #3, I want to see all names of employees eligible for an annual salary increase so that I can view all eligible employees.”

 

Answer: This sounds like an interesting set-up. My answers, of course, will be a bit general because I don’t know all the specifics. Also, I will make more stories if the developers are unfamiliar with the systems / tables / data, and probably fewer stories if the team knows quite a bit about the different systems. With caveats out of the way, let’s dig in! First, 40 stories? Yuck. Too many. Second, because you have 3 different users, I would start with stories for satisfying all of them (unless there is a reason to focus on just one). My presumption is doing this adds value the quickest, which is my guiding answer for how to break up stories.

  1. As user #3, I want to view < information from 1 field> so I can do my job.
  2. As user #3, I want to see < more information > so I can . . . .
  3. As user #2, I want to view < even more information > so I can . . . .
  4. As user #1, I want to see < all the information > so I can . . . .

The point of #1 is to prove we can display information, while the other stories add more details for the same and additional users. None of these is about editing the data. Depending on what keeps the users and developers happiest, there are a couple of options. I can insert story 1A: edit the first field. After this I would insert more edit stories, probably grouped like the last 3 stories above, as appropriate. A different approach might be to insert the edit stories after the last story. Again, where is the value?

A couple side notes:
  • It doesn’t make much difference if you use “see” or “view”; the goal is understanding, not the world's best grammar.
  • I would be remiss if I didn’t mention that the value statement in your user story is a bit weak. What is the specific reason to view eligible employees: the actual awarding of salary increases, a validation check, a fascination with other people’s salary, something else entirely?
Categories: Requirements

Coming out of the closet - the life and adventure of a traditional project manager turned Agilist

Software Development Today - Vasco Duarte - Fri, 06/27/2014 - 04:00

I’m coming out of the closet today. No, not that closet. Another closet, the taboo closet in the Agile community. Yes, I was (and to a point still am) a control freak, traditional, command and control project manager. Yes, that’s right, you read it correctly. Here’s why this is important: in 2003 when I first started to consider Agile in any shape or form I was a strong believer of the Church of Order. I did all the rites of passage, I did my Gantt charts, my PERT charts, my EVM charts and, of course, my certification.

I was certified Project Manager by IPMA, the European cousin of PMI.

I too was a control freak, order junkie, command and control project manager. And I've been clean for 9 years and 154 days.

Why did I turn to Agile? No, it wasn’t because I was a failed project manager, just ask anyone who worked with me then. It was the opposite reason. I was a very successful project manager, and that success made me believe I was right. That I had the recipe. After all, I had been successful for many years already at that point.

I was so convinced I was right, that I decided to run our first Agile project. A pilot project that was designed to test Agile - to show how Agile fails miserably (I thought, at that time). So I decided to do the project by the book. I read the book and went to work.

I was so convinced I was right that I wanted to prove Agile was wrong. Turned out, I was wrong.

The project was a success... I swear, I did not see that coming! After that project I could never look back. I found - NO! - I experienced a better way to develop software that spoiled me forever. I could no longer look back to my past as a traditional project manager and continue to believe the things I believed then. I saw a new land, and I knew I was meant to continue my journey in that land. Agile was my new land.

Many of you have probably experienced a similar journey. Maybe it was with Test-Driven Development, or maybe it was with Acceptance Testing, or even Lean Startup. All these methods have one thing in common: they represent a change in context for software development. This means: they fundamentally change the assumptions on which the previous methods were based. They were, in our little software development world, a paradigm shift.

Test-driven development, acceptance testing, lean startup are methods that fundamentally change the assumptions on which the previous software development methods were based.

NoEstimates is just another approach that challenges basic assumptions of how we work in software development. It wasn’t the first, it will not be the last, but it is a paradigm shift. I know this because I’ve used traditional, Agile with estimation, and Agile with #NoEstimates approaches to project management and software delivery.

A world premiere?

That’s why Woody Zuill and I will be hosting the first ever (unless someone jumps the gun ;) #NoEstimates public workshop in the world. It will happen in Finland, of course, because that’s the country most likely to change the world of software development. A country of only five million people yet with a huge track record of innovation: The first ever mobile phone throwing world championship was created in Finland. The first ever wife-carrying world championship was created in Finland. The first ever swamp football championship was created in Finland. And my favourite: the Air Guitar World Championship is hosted in Finland.

#NoEstimates being such an exotic approach to software development, it must, of course, have its first world-premiere workshop in Finland as well! Woody Zuill (his blog) and I will host a workshop on #NoEstimates in the week of October 20th in Helsinki. So whether you love it or hate it, you can meet us both in Helsinki!

In this workshop we will cover topics such as:

  • Decision making frameworks for projects that do not require estimates.
  • Investment models for software projects that do not require estimates.
  • Project management (risk management, scope management, progress reporting, etc.) approaches that do not require estimates.
  • We will give you the tools and arguments you need to prove the value of #NoEstimates to your boss, and how to get started applying it right away.
  • We will discuss where we see #NoEstimates going and what are the likely changes to software development that will come next. This is the future delivered to you!

Which of these topics interest you the most? What topics would you like us to cover in the workshop? Tell us now and you have a chance to affect the topics we will cover.

Contact us at vasco.duarte@oikosofy.com and tell us. We will reply to all emails, even flame bombs! :)

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates, just subscribe to the #NoEstimates mailing list below. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Picture credit: John Hammink, follow him on twitter

The Top Five Issues In Project Estimation

 

Sometimes estimation leaves you in a fog!

When I recently asked a group of people the question “What are the two largest issues in project estimation?”, I received a wide range of answers. The range of answers is probably a reflection of the range of individuals answering.  Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry, Requirements: The Chronic Problem with Project Estimation.  Bottom line, change is required to embrace dynamic development methods and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, “most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear.”
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future.  Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over.  According to Joe Schofield, “few groups retain enough relevant data from their experiences to avoid relearning the same lesson.”
  4. Labor Hours Are Not The Same As Size.  Many estimators estimate the effort needed to perform the project or individual tasks directly.  By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value.  According to Ian Brown, “then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take.”
  5. No One Dedicated to Estimation.  Estimating is a skill built on a wide range of techniques that need to be learned and practiced.  When no one is dedicated to developing and maintaining estimates it is rare that anyone can learn to estimate consistently, which affects reliability.  To quote one of the respondents, “consistency of estimation from team to team, and within a team over time, is non-existent.”

 

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio and product levels.  Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals from the estimation process.


Categories: Process Management

Cast Away with Android TV and Google Cast

Android Developers Blog - Thu, 06/26/2014 - 22:02

By Dave Burke and Majd Bakar, Engineering Directors and TV Junkies

Last summer, we launched Chromecast, a small, affordable device that lets you cast online video, music and anything from the web to your TV. Today at Google I/O, we announced Android TV, the newest form factor to the Android platform, and a way to extend the reach of Google Cast to more devices, like televisions, set-top boxes and consoles.

Check out Coming to a Screen Near You for some details on everything we’re doing to make your TV the place to be.

For developers though--sorry, you don’t get to unwind in front of the TV. We need you to get to work and help us create the best possible TV experience, with all of the new features announced at I/O today.

Get started with Android TV

In addition to Google Cast apps that send content to the TV, you can now build immersive native apps and console-style games on Android TV devices. These native apps work with TV remotes and gamepads, even if you don’t have your phone handy. The Android L Developer Preview SDK includes the new Leanback support library that allows you to design smoother, simpler, living room apps.

And this is just the beginning. In the fall, new APIs will allow you to cast directly to these apps, so users can control the app with the phone, the remote, or even their Android Wear watch. You’ll also start seeing Android TV set-top boxes, consoles and televisions from Sony, TP Vision, Sharp, Asus, Razer and more.

Help more users find your Google Cast app

We want to help users more easily find your content, so we’ve improved the Google Cast SDK developer console to let you upload your app icon, app name, and app category for Android, iOS and Chrome. These changes will help your app get discovered on chromecast.com/apps and on Google Play.

Additional capabilities have also been added to the Google Cast SDK. These include Media Player Library enhancements, bringing easier integration with MPEG-DASH, Smooth Streaming, and HLS. We’ve also added WebAudio & WebGL support, made the Cast Companion Library available, and added enhanced Closed Caption support. And coming soon, we will add support for queuing and ID delegation.

Ready to get started? Visit developer.android.com/tv and developers.google.com/cast for the SDKs, style guides, tutorials, sample code, and the API references. You can also request an ADT-1 devkit to bootstrap your Android TV development.

Categories: Programming

Games at Google I/O '14: Everyone's Playing Games

Android Developers Blog - Thu, 06/26/2014 - 22:01

By Greg Hartrell, Product Manager, Google Play games

With Google I/O ‘14 here, we see Android and Google Play as a huge opportunity for game developers: 3 in 4 Android users are playing games, and with over one billion active Android users around the world, games are reaching and delighting almost everyone.

At Google, we see a great future where mobile and cloud services bring games to all the screens in your life and connect you with others. Today we announced a number of games related launches and upcoming technologies across Google Play Games, the Android platform and its new form factors.

Google Play Games

At last year’s Google I/O, we announced Google Play Games -- Google’s online game platform, with services and user experiences designed to bring players together and take Android and mobile games to the next level.

Google Play Games has grown at tremendous speed, activating 100 million users in the past 6 months. It’s the fastest growing mobile game network, and with such an incredible response, we announced more awesome enhancements to Google Play Games today.

Game Profile

The Play Games app now gives players a Game Profile, where they earn points and vanity titles from unlocking achievements. Players can also compare their profile with friends. Developers can benefit from this meta-game by continuing to design great achievements that reward players for exploring all the content and depth of their game.

Quests and Saved Games

Two new game services will launch with the next update for Google Play Services on Android, and through the Play Games iOS SDK:

  • Quests is a service that enables developers to create online, time-based goals in their games without having to launch an update each time. Now developers can easily run weekend or daily challenges for their player community, and reward them in unique ways.
  • Saved Games is a service that stores a player’s game progress across many screens, along with a cover image, description and total time played. Players never have to play level 1 again by having their progress stored with Google, and cover images and descriptions are used in Play Games experiences to indicate where they left off and attract them to launch their favorite game again.

We have many great partners who have started integrating Quests and Saved Games; here are just a few current or upcoming games.

More tools for game developers

Other developer tools are now available for Play Games, including:

  • Play Games Statistics — Play Games adopters get easy, low-effort game analytics through the Google Play Developer console today, including visualization of Player & Engagement statistics. What’s new is aggregated player demographic information for signed-in users, so you can understand the distribution of your players’ ages, genders and countries.
  • Play Games C++ SDK is updated with more cross-platform support for the new services and experiences we announced. Cocos2D-x, a popular game engine, is an early adopter of the Play Games C++ SDK bringing the power of Play Games to their developers.
Game enhancements for the Android Platform

With the announcement of the developer preview of the Android L-release, there are some new platform capabilities that will make Android an even more compelling platform for game development.

  • Support for OpenGL ES 3.1 in the L Developer Preview — Introducing powerful features like compute shaders, stencil textures, and texture gather enables more interesting physics or pixel effects on mobile devices. Additional API and shading language improvements improve usability and reduce overhead.
  • Android Extension Pack (AEP) in the L Developer Preview — a new set of extensions to OpenGL ES that bring desktop class graphics to Android. Games will be able to take advantage of tessellation and geometry shaders, and use ASTC texture compression.

    We're pleased to be working with different GPU vendors to adopt AEP including Nvidia, ARM, Qualcomm, and Imagination Technologies.

  • Google Gamepad standards — We recently published a standard for gamepad input for OEMs and partners who create and enable these awesome input devices on Android. The standard makes this input mechanism compatible across Google platforms on Android, Chrome and Chromebooks. You can learn more here: Supporting Game Controllers. A minimal input-handling sketch follows this list.
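As a rough illustration of what that standard means for game code (my sketch, not from the post): controller buttons arrive as ordinary key events and analog sticks as generic motion events, so a game activity can handle them roughly like this. The button and axis choices, and the game actions in comments, are placeholders.

```java
import android.app.Activity;
import android.view.InputDevice;
import android.view.KeyEvent;
import android.view.MotionEvent;

public class GameActivity extends Activity {

    // Digital buttons (A/B/X/Y, D-pad, shoulder buttons) arrive as key events.
    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        boolean fromGamepad =
                (event.getSource() & InputDevice.SOURCE_GAMEPAD) == InputDevice.SOURCE_GAMEPAD;
        if (fromGamepad && keyCode == KeyEvent.KEYCODE_BUTTON_A) {
            // jump();  // placeholder game action
            return true;
        }
        return super.onKeyDown(keyCode, event);
    }

    // Analog sticks and triggers arrive as generic motion events.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        boolean fromJoystick =
                (event.getSource() & InputDevice.SOURCE_JOYSTICK) == InputDevice.SOURCE_JOYSTICK;
        if (fromJoystick && event.getAction() == MotionEvent.ACTION_MOVE) {
            float x = event.getAxisValue(MotionEvent.AXIS_X); // left stick, horizontal
            float y = event.getAxisValue(MotionEvent.AXIS_Y); // left stick, vertical
            // move(x, y);  // placeholder game action
            return true;
        }
        return super.onGenericMotionEvent(event);
    }
}
```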
Play Games on Android TV

And Google's game network is a part of the Android TV announcement — so think of Android on a TV, with a rich interface on a large screen, and fun games in your living room! Players will be able to earn achievements, climb leaderboards and play online with friends from an Android TV. This is only available through the developer preview, so game developers seeking a hardware development kit (the ADT-1) can make a request at http://developer.android.com/tv.

Updates rolling out soon

That’s a lot of games announcements! Our Play Games changes will roll out over the next few weeks with the update of Google Play Services and the Play Games App, and Android L-release changes are part of the announced developer preview. This gets us a big step closer to a world where Android and our cloud services enable games to reach all the screens in your life and connect you with others.

Greg Hartrell is the lead product manager for Google Play Games: Google's game platform that helps developers reach and unite millions of players. Before joining Google, he was VP of Product Development at Capcom/Beeline, and prior to that, led product development for 8 years at Microsoft for Xbox Live/360 and other consumer and enterprise product lines. In his spare time, he enjoys flying birds through plumbing structures, boss battles and pulling rare objects out of mystery boxes.

Categories: Programming

New in Android: L Developer Preview and Google Play Services 5.0

Android Developers Blog - Thu, 06/26/2014 - 22:00

By Jamal Eason, Product Manager, Android

Earlier today, at Google I/O, we showed a number of projects we’ve been working on to the thousands of developers in the audience and the millions more tuning in on the livestream. These projects extend Android to the TV (Android TV), to the car (Android Auto) and to wearables (Android Wear), among others.

At Google, our focus is providing a seamless experience for users across all of the screens in their lives. An important component of that is making sure that you as developers have all of the tools necessary to easily deploy your apps across all of those screens. Increasingly, Android is becoming the fabric that weaves these experiences together, which is why you’ll be excited about a number of things we unveiled today.

Android L Developer Preview

For the first time since we launched Android, we’re giving you early access to a development version of an upcoming release. The L Developer Preview, available starting tomorrow, lets you explore many of the new features and capabilities of the next version of Android, and offers everything you need to get started developing and testing on the new platform. This is important because the platform is evolving in a significant way — not only for mobile but also moving beyond phones and tablets. Here are a few of the highlights for developers:

  • Material design for the multiscreen world — We’ve been working on a new design language at Google that takes a comprehensive approach to visual, motion, and interaction design across a number of platforms and form factors. Material design is a new aesthetic for designing apps in today’s multi-device world. The L Developer Preview brings material design to Android, with a full set of tools for your apps. The system is incredibly flexible, allowing your app to express its individual character and brand with bold colors and a variety of responsive UI patterns and themeable elements.
  • Enhanced notifications — New lockscreen notifications let you surface content, updates, and actions to users at a glance, without unlocking. Visibility controls let you manage the types of information shown on the lockscreen. Heads-up notifications display content and actions in a small floating window that’s managed by the system, no matter which app is in the foreground. Notifications are material themed and you can express your brand through accent colors and more.
  • Document-centric Recents — Now you can organize your app by tasks and present these concurrently as individual “documents” in the Recents screen. Users can flip through Recents to find the specific task they want and then jump deep into your app with a single tap.
  • Project Volta — New tools and APIs help your app run efficiently and conserve power. Battery Historian is a new tool that lets you visualize power events over time and understand how your app is using battery. A job scheduler API lets you set the conditions under which your background tasks and other jobs should run, such as when the device is idle or connected to an unmetered network or to a charger, to minimize battery impact. (A minimal scheduling sketch follows this list.)
  • BLE Peripheral Mode — Android devices can now function in Bluetooth Low Energy (BLE) peripheral mode. Apps can use this capability to broadcast their presence to nearby devices — for example, you can now build apps that let a device function as a pedometer or health monitor and transmit data to another BLE device.
  • Multi-networking — Apps can work with the system to dynamically scan for available networks with specific capabilities and then automatically connect. This is useful when you want to manage handoffs or connect to a specialized network, such as a carrier-billing network.
  • Advanced camera capabilities — A new camera API gives you new capabilities for image capture and processing. On supported devices, your app can capture uncompressed YUV images at full 8 megapixel resolution at 30 FPS. The API also lets you capture raw sensor data and control parameters such as exposure time, ISO sensitivity, and frame duration, on a per-frame basis.
  • New features for game developers — Support for OpenGL ES 3.1 gives you capabilities such as compute shaders, stencil textures, and texture gather for your games. Android Extension Pack (AEP) is a new set of extensions to OpenGL ES that bring desktop-class graphics to Android. Games will be able to take advantage of tessellation and geometry shaders, and use ASTC texture compression across multiple GPU technologies.
  • Android Runtime (ART) — The L Developer Preview introduces the Android Runtime (ART) as the system default. ART offers ahead-of-time (AOT) compilation, more efficient garbage collection, and improved development and debugging features. In many cases it improves performance of the device with no action required by the developer.
  • 64-bit support — The L Developer Preview adds support for 64-bit ABIs, for additional address space and improved performance with certain compute workloads. Apps written in the Java language can run immediately on 64-bit architectures with no modifications required. To support apps using native code, we’re also releasing an updated NDK that includes 64-bit support.
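To make the Project Volta job scheduler bullet above concrete, here is a minimal scheduling sketch against the L Developer Preview APIs. The job id, class name, and conditions are arbitrary choices for illustration, and the service also needs to be declared in the manifest with the BIND_JOB_SERVICE permission.

```java
import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;

// Declared in AndroidManifest.xml as:
// <service android:name=".SyncJobService"
//          android:permission="android.permission.BIND_JOB_SERVICE" />
public class SyncJobService extends JobService {

    // Schedules a job that only runs while the device is idle and charging.
    static void scheduleSync(Context context) {
        JobInfo job = new JobInfo.Builder(1, new ComponentName(context, SyncJobService.class))
                .setRequiresDeviceIdle(true)
                .setRequiresCharging(true)
                .build();
        JobScheduler scheduler =
                (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
        scheduler.schedule(job);
    }

    @Override
    public boolean onStartJob(JobParameters params) {
        // Kick off the real work on a background thread, then call
        // jobFinished(params, false) when it completes.
        return true; // true = work continues asynchronously
    }

    @Override
    public boolean onStopJob(JobParameters params) {
        return false; // false = do not reschedule if the job is interrupted
    }
}
```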

Watch for more details coming out tomorrow (26 June) on what’s in the L Developer Preview and how to get it.

Google Play Services 5.0

Along with the L Developer Preview, we also announced a new version of Google Play services that brings new capabilities and the latest optimizations to devices across the Android ecosystem. Google Play services ensures that you can build on the latest features from Google for your users, with the confidence that those services will work properly everywhere. The latest version has begun rolling out and here are some of the highlights:

  • Services for Android wearables — Your apps can more easily communicate and sync with code running on Android wearables through an automatically synchronized, persistent data store and a reliable messaging interface. (See the sketch after this list.)
  • Play Games services — Build a great gaming experience with Quests, which allow event-based challenges for players to complete for rewards, Saved Games (a snapshot API allowing synchronization of game data along with a cover image and description), and Game Profile (providing experience points for players).
  • App Indexing API — Surface deep content in your native mobile applications on Google search and drive additional user engagement.
  • Google Cast — Use media tracks to enable closed-caption support for Chromecast.
  • Drive — Sort query results, create offline folders, and select any mime type in the file picker by default.
  • Wallet — Build a "Save to Wallet" button for offers directly into your app; use geo-fenced in-store notifications to prompt the user to show and scan digital cards. Split tender allows payment to be split between Wallet Balance and a credit/debit card in Google Wallet.
  • Analytics — Get insights into the full user journey and understand how different user acquisition campaigns are performing with Enhanced Ecommerce, letting you measure product impressions, product clicks, and more.
  • Mobile Ads — Use improved in-app purchase ads and integrations for the Play store in-app purchase API client.
  • Dynamic Security Provider — Offers an alternative to the platform's secure networking APIs that can be updated more frequently, for faster delivery of security patches.
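As a sketch of the wearables service in the first bullet above, here is one way a handheld app might publish a value into the shared data store so Play services can sync it to a watch. The path and key names are hypothetical, connection handling is reduced to the bare minimum, and this must not run on the UI thread.

```java
import android.content.Context;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.PutDataRequest;
import com.google.android.gms.wearable.Wearable;

public class WearSync {

    // Publishes a step count into the shared data store; Google Play services
    // synchronizes the data item to connected wearables automatically.
    static void publishStepCount(Context context, int steps) {
        GoogleApiClient client = new GoogleApiClient.Builder(context)
                .addApi(Wearable.API)
                .build();
        client.blockingConnect(); // sketch only; real apps connect asynchronously

        PutDataMapRequest dataMap = PutDataMapRequest.create("/steps"); // hypothetical path
        dataMap.getDataMap().putInt("count", steps);                    // hypothetical key
        PutDataRequest request = dataMap.asPutDataRequest();

        Wearable.DataApi.putDataItem(client, request);
        client.disconnect();
    }
}
```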

We expect the rollout of Google Play services 5.0 to take several days, after which time you’ll be able to get started developing with these new APIs.

Join us at the Google I/O sessions

If you’d like to learn more, join us for sessions on Android development, material design, game development, and more. You’ll find the full session list on the Google I/O 2014 site, and you can filter the schedule to find livestreamed sessions of interest.

Categories: Programming

Google I/O: Design, Develop, Distribute

Android Developers Blog - Thu, 06/26/2014 - 21:59

By Monica Tran, Head of Developer Marketing

Today at Moscone, we kicked off our 7th annual Google I/O. This year, we’re focusing on three key themes: design, develop, distribute, helping you build your app from start to finish.

It’s been amazing to see how far you’ve come: in fact, since the last Google I/O, we’ve paid developers more than $5 billion, a testament to the experiences you’re creating. In the keynote, we had a number of announcements geared towards meeting the user wherever they go: on the TV, in the car and on your wrist. Below is a taste of some of the goodies we unveiled to help you along the way.

DESIGN
  • Material design — we introduced material design, which uses tactile surfaces, bold graphic design, and fluid motion to create beautiful, intuitive experiences.
  • L-Release of Android, with material design — Bringing material design to Android is a big part of the L-Release of Android: we’ve added the new Material theme (which you can apply to your apps for a new style) and the ability to specify a view’s elevation, allowing you to cast dynamic, real-time shadows in your apps.
  • Bringing material design to Polymer — As a developer, you’ll now have access to all the capabilities of material design via Polymer, bringing tangibility, bold graphics, and animations to your applications on the web, all at 60fps.
DEVELOP
  • Android L Developer Preview — Get extra lead time to make great apps for the next version of Android, with lots of new APIs to make Android simpler and more consistent on screens everywhere
  • Google Play services 5.0 is rolling out worldwide with great new features for developers.
  • Android TV SDK — Explore, learn and build apps and games for the biggest screen in the home. Your hard work will pay off in the fall when Asus, Razer and other partners launch their first Android TV devices.
  • Google Cast SDK — Help users find your content more easily with the improved Google Cast SDK developer console, which lets your app get discovered on chromecast.com/apps and on Google Play.
  • Android Auto SDK coming — Bring your app experience to the car by extending your existing app with Android Auto APIs. Be in millions of cars — with just one app.
  • Google Fit — An open fitness platform giving users control of their fitness data so that developers can focus on building smarter apps and manufacturers can focus on creating amazing devices.
  • Gaming — Learn what's new about Google Play Games and the Android platform to take games to the next level.
  • Google Cloud Platform — Get help with debugging, tracing, and monitoring applications with new developer productivity tooling. Also, try Cloud Dataflow, a new fully managed service that simplifies the process of creating data pipelines.
  • The new Gmail API — Add Gmail features to your app with RESTful access to threads, messages, labels, drafts and history.
  • Android features for Enterprise — Secure apps and data without complicating the user experience. Build for the enterprise with no changes to the apps you're already developing. Learn more here.
DISTRIBUTE

Categories: Programming

Android L Developer Preview and Android Studio Beta

Android Developers Blog - Thu, 06/26/2014 - 21:57

By Jamal Eason, Product Manager, Android

At the Google I/O keynote yesterday we announced the L Developer Preview — a development version of an upcoming Android release. The Developer Preview lets you explore features and capabilities of the L release and get started developing and testing on the new platform. You can take a look at the developer features and APIs in the API Overview page.

Starting today, the L Developer Preview is available for download from the L Developer Preview site. We're also announcing that Android Studio is now in beta, and making great progress toward a full release.

Let’s take a deeper dive into what’s included in the preview and what it means for you as a developer as you prepare your apps for the next Android release.

What’s in the L Developer Preview

The L Developer Preview includes updated SDK tools, system images for testing on an emulator, and system images for testing on a Nexus 5 or Nexus 7 device.

You can download these components through the Android SDK Manager:

  • L Developer Preview SDK Tools
  • L Developer Preview Emulator System Image - 32-bit (64-bit experimental emulator image coming soon)
  • L Developer Preview Emulator System Image for Android TV (32-bit)

(Note: the full release of Android Wear is a part of Android KitKat, API Level 20. Read more about Android Wear development here.)

Today, we are also providing system image downloads for these Nexus devices to help with your testing as well:

  • Nexus 5 (GSM/LTE) “hammerhead” Device System Image
  • Nexus 7 [2013] - (Wifi) “razor” Device System Image

You can download both of these system images from the L Developer Preview site.

With the SDK tools and Nexus device images, you can get a head start on testing your app on the latest Android platform months before the official launch, and use the extra lead time to take advantage of the new features and APIs in your apps. The Nexus device images can help you with testing, but keep in mind that they are meant for development purposes only and should not be used on a production device.

Notes on APIs and publishing

The L Developer Preview is a development release and does not have a standard API level. The APIs are not final, and you can expect minor API changes over time.

To ensure a great user experience and broad compatibility, you cannot publish versions of your app to Google Play that are compiled against the L Developer Preview. Apps built against the L Developer Preview will have to wait until the full official launch before they can be published on Google Play.

Android Studio Beta

To help you develop your apps for the upcoming Android version and for new Android device types, we’re also happy to announce Android Studio Beta. Android Studio Beta helps you develop apps by enabling you to:

  • Incorporate the new material design and interaction elements of the L Developer Preview SDK
  • Quickly create and build apps with a new app wizard and layout editor support for Android Wear and Android TV

Building on top of the build variants and flavors features we introduced last year, the Android Studio build system now supports creating multiple APKs, for example for devices like Android Wear. You can try out all the new features with the L Developer Preview by downloading the Android Studio Beta today.

How to get started

To get started with the L Developer Preview and prepare your apps for the full release, just follow these steps:

  1. Try out Android Studio Beta
  2. Visit the L Developer Preview site
  3. Explore the new APIs
  4. Enable the material theme and try out material design on your apps
  5. Get the emulator system images through the SDK Manager or download the Nexus device system images.
  6. Test your app on the new Android Runtime (ART) with your device or emulator
  7. Give us feedback

As you use the new developer features and APIs in the L Developer Preview, we encourage you to give us your feedback using the L Developer Preview Issue Tracker. During the developer preview period, we aim to incorporate your feedback into our new APIs and adjust features as best as we can.

You can get all the latest downloads, documentation, and tools information from the L Developer Preview site on developer.android.com. You can also check our Android Developer Preview Google+ page for updates and information.

We hope you try the L Developer Preview as you start building the next generation of amazing Android user experiences.

Categories: Programming

Cloud Enabling your Mobile App

Google Code Blog - Thu, 06/26/2014 - 20:00
By Jason Polites, Cloud Platform Team

Many mobile apps today suffer from “app-nesia” — the affliction that causes an app to forget who you are. Have you ever re-installed an app only to discover you have to re-create all your carefully crafted preferences? This is typically because the user’s app data lives only on the device.

By connecting your apps to a backend platform, you can solve this issue, but it can be challenging. Whether it’s building basic plumbing or just trying to load and save data in a network- and battery-efficient way, dealing with the backend can take precious time away from building an awesome app. So, we’re introducing two new features to help make your life easier.

Google Cloud Save
Google Cloud Save allows you to easily load and save user data to the cloud without needing to code up the backend. This is handy for situations where you want to save user state and have that state synchronized to multiple devices, or survive an app reinstall.

We handle all the backend logic as well as the synchronization services on the client. The synchronization services work in the background, providing offline support for the data, and minimizing impact on the battery. All you need to do is tell us when and what to save, and you do this with just 4 simple methods:
  • .save(client, List<Entity>)
  • .delete(client, Query)
  • .query(client, Query)
  • .requestSync(client)
All data is written locally first, then automatically synchronized in the background. The save, delete and query methods provide your basic CRUD operations, while the requestSync method allows you to force a synchronization at any time.

On the backend, the data is stored in Google Cloud Datastore, which means you can access the raw data directly from a Google App Engine or Google Compute Engine instance using the existing Datastore API. Changes on the server will even be automatically synced back to client devices. Importantly, this per-user data belongs to you, the developer, and is stored in your own Google Cloud Datastore database.

Google Cloud Save is currently in private beta and will be available for general use soon. If you’re interested in participating in the private beta, you can sign up here!
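To make the shape of the API concrete, here is a minimal sketch of how those four calls might be used together. This is purely illustrative: the Entity, Query, and CloudSave types below are hypothetical stand-ins (the service was still in private beta at the time), and only the four method names come from the list above.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;

// Hypothetical stand-ins for the private-beta Cloud Save types; only the four
// method names on the CloudSave interface come from the announcement above.
class Entity extends HashMap<String, Object> {
  final String kind;
  Entity(String kind) { this.kind = kind; }
}

class Query {
  final String kind;
  private Query(String kind) { this.kind = kind; }
  static Query forKind(String kind) { return new Query(kind); }
}

interface CloudSave {
  void save(Object client, List<Entity> entities);   // write locally; sync in background
  void delete(Object client, Query query);           // remove matching entities
  List<Entity> query(Object client, Query query);    // read from the local, synced store
  void requestSync(Object client);                   // force a synchronization now
}

public class CloudSaveSketch {
  // Save a user preference, read it back, and force an immediate sync.
  static void example(CloudSave cloudSave, Object client) {
    Entity prefs = new Entity("user_prefs");
    prefs.put("theme", "dark");
    cloudSave.save(client, Arrays.asList(prefs));

    List<Entity> saved = cloudSave.query(client, Query.forKind("user_prefs"));
    System.out.println("entities in the synced store: " + saved.size());

    cloudSave.requestSync(client);  // push pending local writes right away
  }
}

Because writes land in the local store first and synchronization happens in the background, the calling code never has to handle offline/online transitions itself.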

Cloud Tools for Android Studio
To simplify the process of adding an App Engine backend to your app, Android Studio now provides three App Engine backend module templates:
  • App Engine Java Servlet Module - Minimal Backend
  • App Engine Java Endpoints Module - Basic Endpoint scaffolding 
  • App Engine with Google Cloud Messaging - Push notification wireup 
When you choose one of these template types, your project is updated with a new Gradle module containing your new App Engine backend. All of the required dependencies/permissions will be automatically set up for you.

Built-in rich editing support for Google Cloud Endpoints
Once you have added the backend module to your Android application, you can use Google Cloud Endpoints to streamline the communication between your backend and your Android app. Cloud Endpoints automatically generates strongly-typed, mobile-optimized client libraries from simple Java server-side API annotations, automates Java object marshalling to and from JSON, and provides built-in OAuth 2.0 support.

On deployment, this annotated Endpoints API definition class generates a RESTful API. You can explore this generated API (and even make calls to it) by navigating to the Endpoints API Explorer.
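
As a rough sketch of that annotation-driven flow, a server-side Endpoints class could look like the following. The @Api and @ApiMethod annotations are the Cloud Endpoints ones; the userApi name, the UserProfile bean, and the getProfile method are made-up examples.

import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;

// A plain Java bean; Endpoints marshals it to and from JSON automatically.
class UserProfile {
  private Long id;
  private String displayName;
  public Long getId() { return id; }
  public void setId(Long id) { this.id = id; }
  public String getDisplayName() { return displayName; }
  public void setDisplayName(String displayName) { this.displayName = displayName; }
}

// Annotating the class is what turns it into a RESTful API on deployment;
// Android Studio generates a strongly-typed client library from this definition.
// The api, method, and entity names here are illustrative only.
@Api(name = "userApi", version = "v1")
public class UserEndpoint {

  @ApiMethod(name = "getProfile")
  public UserProfile getProfile() {
    UserProfile profile = new UserProfile();
    profile.setId(1L);
    profile.setDisplayName("Alex");
    return profile;
  }
}

On the Android side, the generated client library exposes the same getProfile call as a typed method, so the app never touches raw JSON or hand-written HTTP code.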

To simplify calling this generated API from your Android app, Android Studio will automatically set up your project to include all compile dependencies and permissions required to consume Cloud Endpoints, and will re-generate strongly-typed client libraries if your backend changes. This means that you can start calling the client libraries from your Android app immediately after defining the server-side Endpoints API.

The underlying work-horses: Gradle, and the Gradle plug-in for App Engine
Under the hood, Gradle is used to build both your app and your App Engine backend. In fact, when you add an App Engine backend to your Android app, the open-source App Engine plug-in for Gradle is automatically downloaded by Android Studio, and common App Engine tasks become available as Gradle targets. This allows you to use the same build system across your IDE, command-line, or continuous integration environments.

Check out more details on the new Cloud Endpoints features in Android Studio on the Android Developer Blog.

Posted by Louis Gray, Googler
Categories: Programming

Mocking a REST backend for your AngularJS / Grunt web application

Xebia Blog - Thu, 06/26/2014 - 17:15

Anyone who has ever developed a web application knows that a lot of time is spent in a browser checking that everything works and looks good. And you want to make sure it looks good in all possible situations. For a single-page application built with a framework such as AngularJS, which gets all its data from a REST backend, this means verifying your front-end against different responses from that backend. For a small application with primarily GET requests to display data, you might get away with testing against your real (development) backend. But for large and complex applications, you need to mock your backend.

In this post I'll go into detail about how you can solve this by mocking GET requests for an AngularJS web application that's built using Grunt.

In our current project, we're building a new mobile front-end for an existing web application. Very convenient, since the backend already exists with all the REST services that we need. An even bigger convenience is that the team that built the existing web application also built an entire mock implementation of the backend. This mock implementation gives standard responses for every possible request. Great for our Protractor end-to-end tests! (Perhaps another post about that another day.) But this mock implementation is not so great for the non-standard scenarios. Think of error messages, incomplete data, large numbers or an unusual currency. How can we make sure our UI displays these kinds of cases correctly? We usually cover all these cases in our unit tests, but sometimes you just want to see it right in front of you as well. So we started building a simple solution right inside our Grunt configuration.

To make this solution work, we need to make sure that all our REST requests go through the Grunt web server layer. Our web application is served by Grunt on localhost port 9000. This is the standard configuration that Yeoman generates (you really should use Yeoman to scaffold your project). Our development backend is also running on localhost, but on port 5000. In our web application we want to make all REST calls using the `/api` path so we need to rewrite all requests to http://localhost:9000/api to our backend: http://localhost:5000/api. We can do this by adding middleware in the connect:livereload configuration of our Gruntfile.

livereload: {
  options: {
    open: true,
    middleware: function (connect, options) {
      return [
        require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

        /* The lines below are generated by Yeoman */
        connect.static('.tmp'),
        connect().use(
          '/bower_components',
          connect.static('./bower_components')
        ),
        connect.static(appConfig.app)
      ];
    }
  }
},

Do the same for the connect:test section as well.

Since we're using 'connect-modrewrite' here, we'll have to add this to our project:

npm install connect-modrewrite --save-dev

With this configuration, every request starting with http://localhost:9000/api will be passed on to http://localhost:5000/api, so we can just use /api in our AngularJS application. Now that we have this working, we can write some custom middleware to mock some of our requests.

Let's say we have a GET request /api/user returning some JSON data:

{"id": 1, "name":"Bob"}

Now we'd like to see what happens with our application in case the name is missing:

{"id": 1}

It would be nice if we could send a simple POST request to change the response of all subsequent calls. Something like this:

curl -X POST -d '{"id": 1}' http://localhost:9000/mock/api/user

We prefixed the path that we want to mock with /mock in order to know when we should start mocking something. Let's see how we can implement this. In the same Gruntfile that contains our middleware configuration we add a new function that will help us mock our requests.

var mocks = {};
function captureMock() {
  return function (req, res, next) {

    // match on POST requests starting with /mock
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {

      // everything after /mock is the path that we need to mock
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        mocks[path] = body;

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

And we need to add the above function to our middleware configuration:

middleware: function (connect, options) {
  return [
    captureMock(),
    require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}

Our function will be called for each incoming request. It treats each request starting with /mock as a request to define a mock, and stores the body in the mocks object with the path as the key. So if we execute our curl POST request, we end up with something like this in mocks:

mocks['/api/user'] = '{"id": 1}';

Next we need to actually return this data for requests to http://localhost:9000/api/user. Let's make a new function for that.

function mock() {
  return function (req, res, next) {
    var mockedResponse = mocks[req.url];
    if (mockedResponse) {
      res.writeHead(200);
      res.write(mockedResponse);
      res.end();
    } else {
      next();
    }
  };
}

And also add it to our middleware.

  ...
  captureMock(),
  mock(),
  require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),
  ...

Great, we now have a simple mocking solution in just a few lines of code that allows us to send simple POST requests to our server with the responses we want to mock. However, it can only return a 200 status code and it cannot differentiate between HTTP methods like GET, PUT, POST and DELETE. Let's change our functions a bit to support that functionality as well.

 var mocks = {
  GET: {},
  PUT: {},
  POST: {},
  PATCH: {},
  DELETE: {}
};

function captureMock() {
  return function (req, res, next) {

    // match on POST requests starting with /mock
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {

      // everything after /mock is the path that we need to mock
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        // any mock-header-* header becomes a response header on the mock
        var headers = {
          'Content-Type': req.headers['content-type']
        };
        for (var key in req.headers) {
          if (req.headers.hasOwnProperty(key)) {
            if (key.indexOf('mock-header-') === 0) {
              headers[key.substring(12)] = req.headers[key];
            }
          }
        }

        // store the mock per HTTP method (GET by default) and path
        mocks[req.headers['mock-method'] || 'GET'][path] = {
          body: body,
          responseCode: req.headers['mock-response'] || 200,
          headers: headers
        };

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

function mock() {
  return function (req, res, next) {
    var mockedResponse = mocks[req.method][req.url];
    if (mockedResponse) {
      res.writeHead(mockedResponse.responseCode, mockedResponse.headers);
      res.write(mockedResponse.body);
      res.end();
    } else {
      next();
    }
  };
}

We can now create more advanced mocks:

curl -X POST \
    -H "mock-method: DELETE" \
    -H "mock-response: 403" \
    -H "Content-type: application/json" \
    -H "mock-header-Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT" \
    -d '{"error": "Not authorized"}' http://localhost:9000/mock/api/user

curl -D - -X DELETE http://localhost:9000/api/user
HTTP/1.1 403 Forbidden
Content-Type: application/json
last-modified: Tue, 15 Nov 1994 12:45:26 GMT
Date: Wed, 18 Jun 2014 13:39:30 GMT
Connection: keep-alive
Transfer-Encoding: chunked

{"error": "Not authorized"}

Since we thought this would be useful for other developers, we decided to make all of this available as an open-source library on GitHub and npm.

To add this to your project, just install with npm:

npm install mock-rest-request --save-dev

And of course add it to your middleware configuration:

middleware: function (connect, options) {
  var mockRequests = require('mock-rest-request');
  return [
    mockRequests(),
    
    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}

Are You Going to ALE 2014?

NOOP.NL - Jurgen Appelo - Thu, 06/26/2014 - 15:57
ALE 2014

Did you know ALE is the only event where I pay so that I can attend?
I hope to see you in August in Kraków.

The post Are You Going to ALE 2014? appeared first on NOOP.NL.

Categories: Project Management

How I Structure My Business

Making the Complex Simple - John Sonmez - Thu, 06/26/2014 - 15:00

In this video I share how I structure Simple Programmer LLC.

The post How I Structure My Business appeared first on Simple Programmer.

Categories: Programming

Sneak peek: Google Cloud Dataflow, a Cloud-native data processing service

Google Code Blog - Thu, 06/26/2014 - 14:00
By Frances Perry, Google Cloud Platform Team

In today's world, information is being generated at an incredible rate. However, unlocking insights from large datasets can be cumbersome and costly, even for experts.

It doesn’t have to be that way. Yesterday, at Google I/O, you got a sneak peek of Google Cloud Dataflow, the latest step in our effort to make data and analytics accessible to everyone. You can use Cloud Dataflow:
  • for data integration and preparation (e.g. in preparation for interactive SQL in BigQuery)
  • to examine a real-time stream of events for significant patterns and activities
  • to implement advanced, multi-step processing pipelines to extract deep insight from datasets of any size
In these cases and many others, you use Cloud Dataflow’s data-centric model to easily express your data processing pipeline, monitor its execution, and get actionable insights from your data, free from the burden of deploying clusters, tuning configuration parameters, and optimizing resource usage. Just focus on your application, and leave the management, tuning, sweat and tears to Cloud Dataflow.

Cloud Dataflow is based on a highly efficient and popular model used internally at Google, which evolved from MapReduce and successor technologies like Flume and MillWheel. The underlying service is language-agnostic. Our first SDK is for Java, and allows you to write your entire pipeline in a single program using intuitive Cloud Dataflow constructs to express application semantics.

Cloud Dataflow represents all datasets, irrespective of size, uniformly via PCollections (“parallel collections”). A PCollection might be an in-memory collection, read from files on Cloud Storage, queried from a BigQuery table, read as a stream from a Pub/Sub topic, or calculated on demand by your custom code.

Because PCollections can be arbitrarily large, Cloud Dataflow includes a rich library of PTransforms (“parallel transforms”), which you can customize with your own application logic. For example, ParDo (“parallel do”) runs your code over each element in a PCollection independently (like both the Map and Reduce functions in MapReduce or WHERE in SQL), and GroupByKey takes a PCollection of key-value pairs and groups together all pairs with the same key (like the Shuffle step of MapReduce or GROUP BY and JOIN in SQL). In addition, anyone can define new custom transformations by composing other transformations -- this extensibility lets you write reusable building blocks which can be shared across programs. Cloud Dataflow provides a starter set of these composed transforms out of the box, including Count, Top, and Mean.
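
To make those building blocks concrete, here is a minimal word-count sketch in Java, the language of the first SDK. Since the SDK was only previewed at I/O and not yet public, the package names and exact signatures below are assumptions; only the concepts (PCollections, ParDo, the composed Count transform) come from the description above.

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.KV;

// Hypothetical word-count sketch; the package and class names are assumptions,
// since the Cloud Dataflow SDK had not yet been publicly released.
public class WordCountSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(TextIO.Read.from("gs://my-bucket/input-*.txt"))       // PCollection<String> of lines
     .apply(ParDo.of(new DoFn<String, String>() {                 // ParDo: run code per element
       @Override
       public void processElement(ProcessContext c) {
         for (String word : c.element().split("[^a-zA-Z']+")) {
           if (!word.isEmpty()) {
             c.output(word);
           }
         }
       }
     }))
     .apply(Count.<String>perElement())                           // composed transform: word -> count
     .apply(ParDo.of(new DoFn<KV<String, Long>, String>() {       // format results as text
       @Override
       public void processElement(ProcessContext c) {
         c.output(c.element().getKey() + ": " + c.element().getValue());
       }
     }))
     .apply(TextIO.Write.to("gs://my-bucket/word-counts"));

    p.run();
  }
}

The sketch deliberately makes several logical passes (extract words, count, format, write), which is exactly the style the next paragraph describes.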

Writing in this modular, high-level style naturally leads to pipelines that make multiple logical passes over the same data. Cloud Dataflow automatically optimizes your data-centric pipeline code by collapsing multiple logical passes into a single execution pass. However, this doesn't turn the system into a black box: as you can see below, Cloud Dataflow’s monitoring UI uses the building block concept to show you the pipeline as you wrote it, not as the system chooses to execute it.

Code snippet and monitoring UI from the Cloud Dataflow demo in the I/O keynote.
The same Cloud Dataflow pipeline may run in different ways, depending on the data sources. As you start designing or debugging, you can run against data local to your development environment. When you’re ready to scale up to real data, that same pipeline can run in parallel batch mode against data in Cloud Storage or in distributed real-time processing mode against data coming in via a Pub/Sub topic. This flexibility makes it trivial to transition between different stages in the application development lifecycle: to develop and test applications, to adapt an existing batch pipeline to track time-sensitive trends, or to fix a bug in a real-time pipeline and backfill the historical results.

When you use Cloud Dataflow, you can focus solely on your application logic and let us handle everything else. You should not have to choose between scalability, ease of management and a simple coding model. With Cloud Dataflow, you can have it all.

If you’d like to be notified of future updates about Cloud Dataflow, please join our Google Group.

Posted by Louis Gray, Googler
Categories: Programming

Software Development Conferences Forecast June 2014

From the Editor of Methods & Tools - Thu, 06/26/2014 - 07:22
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.
  • AGILE2014, July 28 – August 1, Orlando, USA
  • Agile on the Beach, September 4-5 2014, Falmouth in Cornwall, UK
  • SPTechCon, September 16-19 2014, Boston, USA
  • STARWEST, October 12-17 2014, Anaheim, USA
  • JAX London, October 13-15 2014, London, UK
  • Pacific Northwest ...

The New Competitive Landscape

"All men can see these tactics whereby I conquer, but what none can see is the strategy out of which victory is evolved." -- Sun Tzu

If it feels like strategy cycles are shrinking, they are.

If it feels like competition is even more intense, it is.

If it feels like you are balancing between competing in the world and collaborating with the world, you are.

In the book, The Future of Management, Gary Hamel and Bill Breen share a great depiction of this new world of competition and the emerging business landscape.

Strategy Cycles are Shrinking

Strategy cycles are shrinking and innovation is the only effective response.

Via The Future of Management:

“In a world where strategy life cycles are shrinking, innovation is the only way a company can renew its lease on success.  It's also the only way it can survive in a world of bare-knuckle competition.”

Fortifications are Collapsing

What previously kept people out of the game, no longer works.

Via The Future of Management:

“In decades past, many companies were insulated from the fierce winds of Schumpeterian competition.  Regulatory barriers, patent protection, distribution monopolies, disempowered customers, proprietary standards, scale advantages, import protection, and capital hurdles were bulwarks that protected industry incumbents from the margin-crushing impact of Darwinian competition.  Today, many of the fortifications are collapsing.”

Upstarts No Longer Have to Build a Global Infrastructure to Reach a Worldwide Market

Any startup can reach the world, without having to build their own massive data center to do so.

Via The Future of Management:

“Deregulation and trade liberalization are reducing the barriers to entry in industries as diverse as banking, air transport, and telecommunications.  The power of the Web means upstarts no longer have to build a global infrastructure to reach a worldwide market.  This has allowed companies like Google, eBay, and MySpace to scale their businesses freakishly fast.”

The Disintegration of Large Companies and New Entrants Start Strong

There are global resource pools of top talent available to startups.

Via The Future of Management:

“The disintegration of large companies, via deverticalization and outsourcing has also helped new entrants.  In turning out more and more of their activities to third-party contractors, incumbents have created thousands of 'arms suppliers' that are willing to sell their services to anyone.  By tapping into this global supplier base of designers, brand consultants, and contract manufacturers, new entrants can emerge from the womb nearly full-grown.” 

Ultra-Low-Cost Competition and Less Ignorant Consumers

With smarter consumers and ultra-low-cost competition, it’s tough to compete.

Via The Future of Management:

“Incumbents must also contend with a growing horde of ultra-low-cost competitors - companies like Huawei, the Chinese telecom equipment maker that pays its engineers a starting salary of just $8,500 per year.  Not all cut-price competition comes from China and India.  Ikea, Zara, Ryanair, and AirAsia are just a few of the companies that have radically reinvented industry cost structures.  Web-empowered customers are also hammering down margins.  Before the Internet, most consumers couldn't be sure whether they were getting the best deal on their home mortgage, credit card debt, or auto loan.  This lack of enlightenment buttressed margins.  But consumers are becoming less ignorant by the day.  One U.K. Web site encourages customers to enter the details of their most-used credit cards, including current balances, and then shows them exactly how much they will save by switching to a card with better payment terms.  In addition, the Internet is zeroing-out transaction costs.  The commissions earned by market makers of all kinds -- dealers, brokers, and agents -- are falling off a cliff, or soon will be.”

Distribution Monopolies are Under Attack

You can build your own fan base and reach the world.

Via The Future of Management:

“Distribution monopolies -- another source of friction -- are under attack.  Unlike the publishers of newspapers and magazines, bloggers don't need a physical distribution network to reach their readers.  Similarly, new bands don't have to kiss up to record company reps when they can build a fan base via social networking sites like MySpace.”

Collapsing Entry Barriers and Customer Power Squeeze Margins

Customers have a lot more choice and power now.

Via The Future of Management:

“Collapsing entry barriers, hyper efficient competitors, customer power -- these forces will be squeezing margins for years to come.  In this harsh new world, every company will be faced with a stark choice: either set the fires of innovation ablaze, or be ready to scrape out a mean existence in a world where seabed labor costs are the only difference between making money and going bust.”

What’s the solution?

Innovation.

Innovation is the way to play, and it’s the way to stay in the game.

Innovation is how you reinvent your success, reimagine a new future, and change what you’re capable of, so you can compete more effectively in today’s ever-changing world.

You Might Also Like

4 Stages of Market Maturity

Brand is the Ultimate Differentiator

High-Leverage Strategies for Innovation

If You Can Differentiate, You Have a Competitive Monopoly

Short-Burst Work

Categories: Architecture, Programming